[Passband] a useful self contained model?

Jonathan McDowell jcm at head.cfa.harvard.edu
Wed Jun 9 23:15:02 PDT 2004


Hi Data Modelers...
  I'm baaack! 

Today I'll try to catch up on the Passbands discussion.

Passbands, from Martin:

> Will time have the equivalent of passrate?

You betcha. In fact, what you are calling passrate is what I
would call the spectral transmission P_E(E). This is just a projection
of the general transmission (or, to use your term, passrate):

    P(E,t,alpha,delta)

We often assume that this is separable:

   P(E,t,alpha,delta) = P_E(E) P_t(t) P_pos(alpha,delta)

where P_t is the variation of on-axis sensitivity with
time (e.g. due to buildup of contaminants) and P_pos incorporates
things like vignetting and QE/flat field variations (but not PSF).

In practice, P is often not truly separable,
but we usually fudge it.
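
As a toy illustration of that separability assumption (everything here
is invented for the sketch, in Python; it is nobody's real calibration):

    import numpy as np

    def p_energy(E):
        # Hypothetical spectral transmission P_E(E): a Gaussian passband.
        return np.exp(-0.5 * ((E - 1.5) / 0.5) ** 2)

    def p_time(t):
        # Hypothetical secular sensitivity decay P_t(t), e.g. from
        # contaminant buildup: 2% loss per year.
        return 0.98 ** t

    def p_pos(alpha, delta):
        # Hypothetical vignetting/flat-field term P_pos(alpha,delta),
        # falling off with off-axis angle (degrees, small-angle approx).
        theta = np.hypot(alpha, delta)
        return 1.0 / (1.0 + (theta / 0.2) ** 2)

    def transmission(E, t, alpha, delta):
        # The separability assumption: the three factors multiply.
        return p_energy(E) * p_time(t) * p_pos(alpha, delta)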

So I don't see that fravergy has particularly different characteristics from
time and space. Your Passband is what I would call Coordinate Coverage,
with passrate = Sensitivity, min/max = Support, and central = Location,
and with Name as an extra property that extends my concept of coverage.
In the Observation DM document we went to some trouble to address
exactly this question by identifying the analogous concepts 
for spatial, spectral and temporal coordinates in a big table.
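
To make that mapping concrete, here is a minimal sketch of a per-axis
coverage object (hypothetical Python names, not from any document):

    from dataclasses import dataclass
    from typing import Callable, Optional

    @dataclass
    class CoordinateCoverage:
        # One instance per axis: spectral, temporal, or spatial.
        support_min: float        # Martin's "min"
        support_max: float        # Martin's "max"
        location: float           # Martin's "central"
        sensitivity: Callable[[float], float]  # Martin's "passrate"
        name: Optional[str] = None  # the extra Passband property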


> What I really don't want is all the other baggage..

So I think we agree that the /min/max/ and /central/ fravergies are
well modelled by the other baggage: they certainly have a UCD and units,
and /central/ at least may have an error. Is the passrate well modelled
by Q? It's dimensionless, but it can have errors (see below).

> there are no UCDs ... that describe probability

The UCD1+ for spectral passrate is instr.filter.transm according
to the current UCD1 list, although there are currently no UCDs
for spatial QE or secular sensitivity variations (because these
concepts tend to occur only in calibration data files and not in VizieR catalogs).

For my money, everything should have a UCD: you always want
to be able to ask 'what is this thing?'.

However, not everything is a Q. I believe the components of Passband
(min/max, central, passrate) are each arguably Q, but Passband itself
is not a Q.

> Who needs errors on passrate()?

Certainly for the Chandra effective area curve, which is just a funny
way of encoding the passrate, we quote an overall systematic error.
That's a critical input to our forward-folding spectral fitting
programs; without it we don't get a usable goodness-of-fit measure.


From Anita:

> To me, an SED means ....

I don't quite agree with your definition... I agree with your
definition of spectrum (except for the focus on transitions), but
for me an SED doesn't have to be the low-res thing you describe;
it can include hi-res spectra as components. So the approach
we took in the SSAP data model proposal is for an SED to be
an overarching thing which contains photometry points and spectra;
a spectrum is then a special case of an SED that has only
one segment, a spectrum segment.

> A spectral region ... can be divided into channels

I don't want to mandate the 'regular spacing'; the whole game
with optical spectra is that you need to at least polynomial-fit the
channels.

Further, when you say the individual channels are described 'all by the
same gaussian', I take you to mean that the line resolution function
(the spectral line spread function, or LSF) R(nu1,nu2) is gaussian in
nu1-nu2, where nu1 is the actual frequency and nu2 is the measured
frequency. I am aware of cases, particularly in high-energy grating
data, where the LSF cannot be approximated as the same gaussian for
each channel: our resolution changes by an order of magnitude from one
end of the passband to the other.

Is this LSF the same as Martin's passrate? You can choose to model
it that way by giving each spectral channel its own independent passrate
function, so that the number of counts in channel i, given a photon
number flux n(nu) (not very radio, I know!), is
     N(i) = Integral dnu1  P(nu1,i) n(nu1)
This corresponds to considering a spectrum with N channels as N independent
photometry filter measurements.
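
As a minimal numerical sketch of this 'N independent filters' picture
(all functions and numbers here are invented for illustration, in Python):

    import numpy as np

    nu = np.linspace(1.0, 10.0, 2000)   # fravergy grid, arbitrary units
    nchan = 16

    def passrate(nu, i):
        # Hypothetical independent passrate P(nu1,i) for channel i.
        center = 1.0 + 9.0 * (i + 0.5) / nchan
        return 0.5 * np.exp(-0.5 * ((nu - center) / 0.3) ** 2)

    def n_flux(nu):
        # Hypothetical photon number flux n(nu): a power law.
        return nu ** -1.5

    # N(i) = Integral dnu1 P(nu1,i) n(nu1), by the trapezoidal rule.
    N = np.array([np.trapz(passrate(nu, i) * n_flux(nu), nu)
                  for i in range(nchan)])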

In contrast, in optical and X-ray astronomy we normally do something
closer to what you imply by referring to a single passrate function for
the spectrum: we split P(nu1,i) into P'(nu1) * R(nu1,i), where the
'resolution' R is normalized so that Sum(i) R(nu1,i) = 1. This separates
the concept of the overall fravergy dependence of the *sensitivity* for
the spectrum from the concept of *resolution* - how the photons are
distributed into different channels - so we consider the spectrum as
one photometry passband divided into N related 'channels'.
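
Continuing the sketch above, the separated form builds one overall
sensitivity P'(nu1) and a normalized resolution matrix R(nu1,i); the
shapes and numbers remain invented:

    # R[j, i]: probability that a photon at nu[j] lands in channel i.
    centers = 1.0 + 9.0 * (np.arange(nchan) + 0.5) / nchan
    R = np.exp(-0.5 * ((nu[:, None] - centers[None, :]) / 0.3) ** 2)
    R /= R.sum(axis=1, keepdims=True)   # enforce Sum(i) R(nu1,i) = 1

    def p_overall(nu):
        # Hypothetical overall sensitivity P'(nu1) for the spectrum.
        return np.full_like(nu, 0.5)

    # N(i) = Integral dnu1 P'(nu1) R(nu1,i) n(nu1)
    N_sep = np.trapz(p_overall(nu)[:, None] * R * n_flux(nu)[:, None],
                     nu, axis=0)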

So, to rephrase the question Martin then asked ('would this work if
each channel was a separate Passband?'), we can ask: 'should the
Passband object include the concept of Resolution as well as
Sensitivity/Passrate, or should a contiguous, multichannel spectrum be
modelled by a compound Passband?' Either will work in an abstract
sense, but I believe the Resolution concept maps better to current
practical implementations of astronomical data; with the compound
passband approach it will be harder to keep the normalizations
straight, and it will be messy even to define Resolution.

And since, as Martin points out elsewhere, I always think in this
'dimensions' mindset, let me do so here: when we consider the analog of
passband in spatial coordinates (min/max -> region, passrate -> QE),
the equivalent of the compound passband approach would be to consider
each image pixel as a separate image, with a passrate consisting of the
product of the QE (etc.) and PSF functions. Again, that takes away any
easy way to separate QE and PSF, and I think that's a failing.

Martin again:
> Will people want any other characteristic than prob. of transmission 
> (i.e. 0.0 <= p(fravergy) <= 1.0 )

Yes - we also need a normalization from flux to counts, which at a
minimum involves something with the dimensions of area (effective
aperture area). This probably should not be in the Passband object,
though, as I see Martin later argued. And it's an overall normalization
on my P(E,t,RA,Dec) rather than a separate normalization for each of
P(E), P(t), and P(RA,Dec). So I wouldn't include it in Passband, but we
need to remember to put it somewhere else.
(But again, in reality, and particularly for ground-based optical, the
p(fravergy) and the area are often not separately calibrated, so some
will argue it would be convenient to allow them to be lumped together.)
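
Continuing the toy numbers from the spectrum sketch above, the area
enters as one overall factor (the value here is invented):

    A_geom = 1000.0   # hypothetical aperture area, cm^2

    # Counts/s = A * Integral P(nu) n(nu) dnu: one normalization on
    # the whole P(E,t,RA,Dec), not one per separated factor.
    count_rate = A_geom * np.trapz(p_overall(nu) * n_flux(nu), nu)

    # The "lumped" alternative: an effective-area curve that folds
    # the area and the dimensionless passrate together.
    A_eff = A_geom * p_overall(nu)
    count_rate_lumped = np.trapz(A_eff * n_flux(nu), nu)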


And as I argued above, I think we need Resolution.

Alberto:
"different observatories use different characteristic wavelengths".
Very true. I propose that we not adopt a precise definition of the
"central" fravergy, and that we make clear its purpose is to be a
representative value rather than something for precise calculations.

Martin:
"Passrate shape and min/max should include .. *actual* not *intended*
throughput". 

I think that passrate should be actual, but min/max should be intended
(and considered as a rectangular approximation to the passrate
function). We think of red leaks and sidelobes as erroneous data to be
calibrated out rather than as part of the measurement. It makes no
sense for min/max to be "actual", since then the values are always
-Inf:+Inf (well, certainly +Inf: a sufficiently energetic gamma ray
will always melt the filter and get through).
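
One hedged recipe for such a rectangular approximation, reusing the
grid from the sketches above (the half-maximum cut is an arbitrary
choice for illustration, not a standard):

    # Toy passrate curve with a faint red leak at high fravergy.
    p = np.exp(-0.5 * ((nu - 5.0) / 1.0) ** 2) + 1e-3 * (nu > 9.0)
    cut = 0.5 * p.max()                 # arbitrary half-maximum cut
    inside = nu[p >= cut]               # the red leak falls below cut
    nu_min, nu_max = inside.min(), inside.max()   # "intended" support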


> fravergy is just a trivial conversion ... and doesn't need a coordinate system

oooh, no. We need all kinds of coord system info - rest frame, and so on.

> Passband *is* the error on fravergy.

True, but only up to a point, and it is very dependent on context. In
the context of an SED or a spectrum I mostly agree, as long as
Passband.CentralFravergy has Q-type accuracy info. (The width of the
bandpass is distinct from the uncertainty in our knowledge of the
bandpass, and for some kinds of spectra that distinction does become
relevant.) In other contexts, I think the opposite of your statement is
true: we associate errors with fravergies at a higher level for
*general* purposes, using passbands for *specific* purposes.

Martin:
> I want to be sure that the model describes .. how it interacts
> .. if someone wants to implement it [some other way], .. fine too.

Ah. But the first requirement for interoperability is that we define
standard serializations. Alternate software implementations are fine,
but our priority should be to define one reference implementation whose
structure maps directly to a serialization which we will also specify
(in collaboration with DAL), because it's the serializations that will
make the VO work.

I haven't absorbed David's Mapping/Frame approach yet.

 - Jonathan


