[Passband] a useful self-contained model?

Martin Hill mchill at dial.pipex.com
Thu Jun 10 01:13:45 PDT 2004


Jonathan McDowell wrote:

> Hi Data Modelers...
>   I'm baaack! 

Hello!  Welcome your frooont! An email storm begins... But it's 
important, so at least try and skim it, folks!

>>Will time have the equivalent of passrate?
> 
> You betcha. In fact, what you are calling passrate is what I
> would call the spectral transmission P_E(E). This is just a projection
> of the general transmission (or, to use your term, passrate):
> 
>     P(E,t,alpha,delta)
> 

Hmm, yes, I see (a good example of contamination, and presumably the 
same for atmosphere).  How often is this used in practice for, say, an 
observation, given that most are taken at a given time?  It sounds like 
we need the more general case to be attached as part of a dataset's 
metadata.  Or is this an implementation issue, where the passrate at the 
observation time is attached to the observation when the observation is 
about to be exchanged?
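
To pin down that implementation-issue version, something like this (a 
sketch only; the names and the functional representation are mine, not 
anything agreed):

from typing import Callable

# General transmission P(E, t, alpha, delta): energy, time, sky position.
Transmission = Callable[[float, float, float, float], float]

def spectral_passrate(p: Transmission, t_obs: float,
                      alpha: float, delta: float) -> Callable[[float], float]:
    """Project the general transmission down to P_E(E) at the time
    (and pointing) of the observation, ready to be attached to it."""
    return lambda energy: p(energy, t_obs, alpha, delta)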

Similarly, while I realise most filters are not uniform across their 
physical dimensions, I felt that could be modelled as a separate 
aggregation of Passbands; maybe this is a Bad Idea.  However, looking at 
existing data, I see passbands tend to be written as transmission 
functions only, with weighting maps then applied to the images to 
compensate for instrument characteristics.  Would people prefer to 
combine all these things together?

What are alpha & delta?  Spatial coverage?

> So I don't see that fravergy has particularly different characteristics from
> time and space. Your Passband is what I would call Coordinate Coverage,
> with passrate = Sensitivity and min/max = Support and central = Location,
> and Name as an extra property that extends my concept of coverage.

OK... Although as I note elsewhere, I would like to see the *data model* 
explicitly state what the data is and what it represents.  Generic terms 
should be left to implementors, even if they are presented as proposed 
*implementation models*....
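
To show what I mean by explicit versus generic (invented names, 
illustration only):

# Explicit *data model*: states what the data is and what it represents.
class Passband:
    name: str                # e.g. "Johnson V"
    min_fravergy: float      # lower edge of the band
    max_fravergy: float      # upper edge of the band
    central_fravergy: float
    # plus passrate(fravergy) -> dimensionless transmission

# Generic *implementation model*: the same structure in Jonathan's terms.
class CoordinateCoverage:
    location: float          # = central
    support: tuple           # = (min, max)
    sensitivity: object      # = passrate
    name: str                # extra property extending the coverage concept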

> In the Observation DM document we went to some trouble to address
> exactly this question by identifying the analogous concepts 
> for spatial, spectral and temporal coordinates in a big table.

Yes... I'm not really happy with a table like that in the main *data 
model*, for the same reason as above.  Tables are not naturally OO, for 
a start...

>>What I really don't want is all the other baggage..
> 
> So, I think we agree that the /min/max/ and /central/ fravergies are
> well modelled by the other baggage, they certainly have UCD and units,
> and /central/ at least may have an error. Is the passrate well modelled
> by Q? It's dimensionless, but it can have errors (see below)

Well... that's not quite true...  min/max don't (in practice) have 
errors.  None of them have units (*Fravergies* don't have units, though 
you can get values from them that do) or UCDs that describe their 
values; *passband* has a *list* of UCDs describing what the passband 
coverage represents.  The error that you associate with central is 
really the Passrate(fravergy, [etc]), and I believe it would be better 
to generate a suitable Accuracy depending on how that Passrate() is 
defined.  This *could* be generated as part of the Central property, but 
I think these are different things.
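
To illustrate the units point (again, invented names): a Fravergy 
carries no unit itself, but the values you extract from it do.

from dataclasses import dataclass

@dataclass
class ValueWithUnit:
    value: float
    unit: str  # e.g. "Hz" or "m"

class Fravergy:
    """No unit of its own; units appear only on extracted values."""
    def __init__(self, freq_hz: float):
        self._hz = freq_hz

    def as_frequency(self) -> ValueWithUnit:
        return ValueWithUnit(self._hz, "Hz")

    def as_wavelength(self) -> ValueWithUnit:
        c = 2.99792458e8  # speed of light in m/s
        return ValueWithUnit(c / self._hz, "m")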

>>there are no UCDs ... that describe probability
> 
> 
> The UCD1+ for spectral passrate is instr.filter.transm according
> to the current UCD1 list, although there are currently no UCDs
> for spatial QE or secular sensitivity variations (because these
> concepts tend only to occur in calibration data files and not in VIZIER catalogs).
> 
> For my money, everything should have a UCD, you always want
> to be able to ask 'what is this thing?'.

There seem to be two kinds of UCDs: one describing something about 
'type', and one describing what a set of 'instance' values means. 
instr.filter.transm is a fixed UCD describing what a class/property is 
(I think).  There are other UCDs describing what a particular value 
represents - e.g. PHOT_IR_K-10 (I thought there were some PHOT_etc UCDs 
for spectral coverage that were not flux related, but now that I look I 
can't find any!).

I quite agree that we should be able to 'find stuff' using UCDs, but I 
wonder how far down that is useful.  Once you've 'found' an SED, you've 
found the model.  Are the UCDs then useful for navigating the model, 
i.e. using UCDs to describe type?  I would have thought not - we know 
the shape of the model.  At this stage we need UCDs to describe instance 
information.
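
Roughly (a made-up structure, just to separate the two roles):

# 'type' UCD: fixed per class/property, describing what the thing *is*
class Passrate:
    UCD = "instr.filter.transm"

# 'instance' UCD: describing what this particular set of values represents
k_band = {"name": "K", "ucd": "PHOT_IR_K"}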

> However, not everything is a Q. I believe the components of Passband
> (min/max, central, passrate) are each arguably Q, but Passband itself
> is not a Q.

I'm happier using a cut-down Quantity to represent 'leaves' of the model 
(I thought you were advocating that everything was a Quantity), but 
there are aspects of Quantity that I'd like to see used at higher levels 
too...
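
By a cut-down Quantity for the leaves I mean something like this (a 
sketch, not a proposal for the final attribute list):

from dataclasses import dataclass
from typing import Optional

@dataclass
class LeafQuantity:
    """Cut-down Quantity: a value plus just enough metadata."""
    value: float
    unit: Optional[str] = None     # absent where units make no sense
    ucd: Optional[str] = None
    error: Optional[float] = None  # absent where errors make no sense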

> And it's an overall normalization on my P(E,t,RA,Dec) rather than
> being a separate normalization for each of P(E), P(t), P(RA,Dec).
> So I wouldn't include it in Passband but we need to remember to put
> it somewhere else. 
> (but again in reality, and particularly for ground-based optical, the
> p(fravergy) and area are often not separately calibrated, so some will
> argue it would be convenient to allow it to be lumped in.)

Hmmm, there seems to be a case for two Passband interfaces: a 
SimplePassband for most use, and an extension that deals with this extra 
detail.
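
Something like this, perhaps (names invented):

class SimplePassband:
    """Covers most use: fravergy coverage and a per-band passrate."""
    min_fravergy: float
    max_fravergy: float
    central_fravergy: float

    def passrate(self, fravergy: float) -> float: ...

class NormalisedPassband(SimplePassband):
    """Extension for the lumped-in overall normalisation that Jonathan
    describes for ground-based optical."""
    def normalisation(self) -> float: ...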

> Martin:
> "Passrate shape and min/max should include .. *actual* not *intended*
> throughput". 
> 
> I think that passrate should be actual, but min/max should be intended
> (and considered as a rectangular approximation to the passrate function).
> We think of red leaks and sidelobes as erroneous data to be calibrated
> out rather than as part of the measurement. It makes no sense
> for min/max to be "actual", since then the values are always
> -Inf:+Inf (well certainly +Inf, a sufficiently energetic gamma ray
> will always melt the filter and get through). 

Hmmm, OK, I'll need to think about this a bit more then: if the min/max 
don't match what has actually come through, then presumably fluxes 
associated with the passband won't make sense?  Do we need more than one 
kind of min/max?
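
Perhaps we do; a sketch ('intended' and 'measured' are my labels, 
nothing official):

from dataclasses import dataclass
from typing import Optional

@dataclass
class Support:
    min: float
    max: float

@dataclass
class PassbandSupport:
    # Rectangular approximation to the passrate, as Jonathan suggests.
    intended: Support
    # Where the calibrated passrate actually came through (e.g. above
    # some threshold); None where it was never separately calibrated.
    measured: Optional[Support] = None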

>>fravergy is just a trivial conversion ... and doesn't need a coordinate system
> 
> oooh, no. We need all kinds of coord system info - rest frame, and so on.

I would expect that all this extra information would be held at some 
higher level.  In other words, you would say 'I measured this photon at 
this fravergy in this rest frame'.  This is what I mean by modelling 
small things and building up; it keeps things simple at, say, the 
passband stage, as we're always talking about photon transmission in the 
rest frame of the passband (or?).

>>Passband *is* the error on fravergy.
> 
> True but only up to a point and very dependent on context. In the
> context of an SED or a spectrum I mostly agree, as long as Passband.CentralFravergy
> has Q-type accuracy info. (the width of the bandpass is distinct from the
> uncertainty in our knowledge of the bandpass, and for some kinds of spectra
> that does become relevant). In other contexts, I think the opposite of
> your statement is true: we associate errors with fravergies at a higher
> level for *general* purposes, using passbands for *specific* purposes.

Yes, I think you're right; it makes more sense to be able to create an 
Accuracy from a Passband rather than assume a Passband is an Accuracy. 
Because Passbands are usually more complex than a simple central 
frequency + width limits, I'd like to see this marked as a 
function/property to be modelled later...
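
i.e. something like this placeholder (how the width reduces to an error 
is exactly the bit to be modelled later):

def accuracy_from_passband(pb: "SimplePassband") -> float:
    """Crude symmetric error on the central fravergy: half the support.
    A real version would depend on how Passrate() is defined."""
    return (pb.max_fravergy - pb.min_fravergy) / 2.0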

> 
> Martin:
> 
>>I want to be sure that the model describes .. how it interacts
>>.. if someone wants to implement it [some other way], .. fine too.
> 
> Ah. But the first requirement for interoperability is that we define
> standard serializations. Alternate software implementations are fine,
> but our priority should be to define one reference implementation whose
> structure maps directly to a serialization which we will also specify
> (in collab with DAL), because it's the serializations which will make
> the VO work.

Hmmm, the first requirement for interoperability is actually that we 
agree what our data looks like, what the properties are called and typed 
as, how we are going to structure it in a way that is suitable for 
all/most of our disciplines, and how it maps to existing data.  This is 
the modelling bit.  Serialisation is then (relatively) trivial, and is 
only (initially) used as an exchange format to transform between 
existing data structures.  We might use it for examples of our data 
model, but it's not the first step...

(I haven't read through the Resolution stuff yet, off to a meeting, will do)

Cheers!

MC

-- 
Martin Hill
www.mchill.net
(07901 55 24 66) - from 17th June
0131 659 6426 (home)
0131 668 8100 (ROE)


