Observation data model comments

Anita Richards amsr at jb.man.ac.uk
Tue May 11 02:14:30 PDT 2004


> : > > e.g. spatial coverage in objects/deg^2
> : > I would say that objects/deg^2 does not belong to "spatial coverage", it's
> : > instead a way to characterise the observation.
> : I don't quite understand what you mean - what class should objects/deg^2
> : be under? (in either case it is under the Characterisation superclass!)
>
> I agree with Alberto. To me, the source density is a measure
> not so much of spatial coverage, but of flux (observable) coverage.
> It pretty much converts to a limiting flux (you need the luminosity
> function, but if that is position-independent you don't need any
> spatial information). But in fact it's not even that, it's
> its own UCD: source density. So I would just add another axis
> to the Characterization of this particular observation,
> call it SourceDensity, and tag it with an appropriate UCD.
>
> Now if you do a query to the VO on SourceDensity (give me all catalogs
> covering SourceDensity more than this..) you are all set when the
> VO gets to this Observation. If the VO gets to an observation
> which has no SourceDensity entered in its characterization, but
> does have a Flux characterization, then you may still be in luck
> *if* the query specifies an assumed luminosity function and also
> refers to, or is able to know via some default set of VO
> resources, how to link the UCDs for SourceDensity and Flux.
> (I realize I'm skipping crucial details like the fact that it
> depends what band the flux is in!)

I don't mind adding an extra class for catalogues, but in the Registry
model it is all in one object; since that has been agreed and seems to
work where we have used it to describe real data, we should make sure we
are consistent.
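
To make the SourceDensity idea concrete, here is a rough Python sketch
(the class and function names are my own invention, not part of any
agreed schema) of a non-standard Characterization axis, and of a query
that falls back to a Flux limit plus an assumed luminosity function when
no SourceDensity axis has been published:

from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class CharacterizationAxis:
    name: str        # e.g. "SourceDensity"
    ucd: str         # hypothetical UCD string
    unit: str        # e.g. "1/deg2"
    coverage: float  # single summary value, for illustration only

def meets_source_density(axes: dict, minimum: float,
                         lum_func: Optional[Callable[[float], float]] = None) -> bool:
    """True if the dataset advertises at least `minimum` sources per deg^2.

    With no SourceDensity axis but a Flux axis present, fall back to a
    caller-supplied luminosity function mapping limiting flux to an
    expected source density (band dependence is ignored in this sketch).
    """
    if "SourceDensity" in axes:
        return axes["SourceDensity"].coverage >= minimum
    if "Flux" in axes and lum_func is not None:
        return lum_func(axes["Flux"].coverage) >= minimum
    return False  # not enough information to decide

# Toy catalogue that only characterises its limiting flux (0.1 mJy)
axes = {"Flux": CharacterizationAxis("Flux", "phot.flux", "mJy", 0.1)}
print(meets_source_density(axes, minimum=500.0,
                           lum_func=lambda s_lim: 50.0 / s_lim))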

> To me this is a nice example to show why specifying that
> Characterization should be a fixed set of 5 axes would not work.
> The axes given in our example table are just the common examples,
> to provide a familiar context for the generalized characterization
> variables. They are not meant to represent all the possible axes.
> Instead, we provide a framework for data providers and users to describe
> whatever constraints make sense to them. The standard axes of position,
> time, spectral coordinate and flux will often be the ones wanted, but
> not always. Forcing something subtly different like `source density'
> into these standard axes will cause trouble and lack of
> interoperability.

Yes, I agree with Jonathan.  I also think that we need an additional
velocity axis to cover situations like the example on
http://wiki.astrogrid.org/bin/view/Astrogrid/ObservationDataModelRevision
where you have two different spectral lines in one band, both at the same
corrected velocity but at different frequencies (and for all the other
reasons described in STC).
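
As a purely illustrative sketch (the rest frequencies are just an example;
this uses the standard radio velocity convention nu_obs = nu_rest * (1 - v/c)),
two lines in one band at the same corrected velocity land at different
sky frequencies:

C_KMS = 299792.458  # speed of light in km/s

def observed_freq(rest_freq_mhz, v_kms):
    # radio convention: nu_obs = nu_rest * (1 - v/c)
    return rest_freq_mhz * (1.0 - v_kms / C_KMS)

v_source = 200.0  # one corrected velocity for the whole source, km/s
for rest in (1665.4018, 1667.3590):  # e.g. the two main OH lines, MHz
    print(rest, "->", observed_freq(rest, v_source), "MHz")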

> The source density is not the same as a filling factor -
> the filling factor is what fraction of the region we looked at,
> not what fraction of the region is filled with sources.
> It does potentially imply a confusion/saturation factor, I guess.

No, but as you explained above, in practice one uses the quantities in a
similar way when searching.  But as long as both situations are covered
(and the analogous cases of spectral coverage vs. lines detected, etc.)
it doesn't matter how.

_However_, if we do adopt a separate Characterisation for catalogues of
MeasuredQuantities, that strengthens my suggestion that MeasuredQuantities
and AnalysisMethod do need to be dealt with separately from ObsData and
Processing!

> : Maybe that is before I put up my latest plot.  I have now put Processing
> : back as a part of Provenance but I am still a bit concerned as different
> : processing methods will give data with different characterisation e.g.
> : different synthesised beam size - for interferometry data you can
>
> So if I understand your concern about versioning correctly, it's for the
> data discovery part of the problem - if an archive has different
> versions of the observation, they can all have different
> Characterization and Provenance - that seems fine - but if the archive
> can run the mapping software on the fly, there may be a range of
> possible characterizations that could be generated - we'll have
> to be careful how to represent that in a query response, but I think
> it can be done. For the analysis problem, once you have selected
> a certain realization of the data to download, everything is well
> defined and not problematic.

Yes, that sounds OK.

>
> : > In the last table:
(I agree with most of Jonathan's comments)

>
> : > Sensitivity: for Temporal I would say "exposure map"
> : This is hard to generalise for radio interferometry.  To make a good image
> : this is the synthesis time, that is, the time it takes for the earth to
>
> I think there's confusion here. For me, the temporal sensitivity is the
> change in sensitivity with time. Not the integration time. It would
> be the correction factor you have to apply to the visibilities
> because some idiot has installed a flaky oscillator in one of the
> antenna feeds and the signal from your calibrators is going up and
> down by a factor of three every five minutes.

That's why I suggest calling it SensitivityFunction.

We are all using the same words to mean different things here... Actually,
it does not matter, because the VO generally will want numbers or functions
in the matrix.  We should try to come up with an example which is as
unambiguous as possible - I have usually found the IR (e.g. ISO) to be
good for that - physical units but complicated data.  Anyway, for radio:

Resolution.Temporal = integration time (the smallest time unit in the
data, usually a few to a few hundred seconds, but much less for pulsars...)

SensitivityFunction.Temporal - Jonathan's example is possible, but I think
there is a more common and useful definition, which is how the quality
of your data product degrades (e.g. the noise or artefact level increases) as
you chop the data up into smaller chunks.  For example, observing an X-ray
binary where the core is varying but the jet isn't, if you take each hour
of visibility data you can maybe just make a usable map - or you can
subtract out the core and make a full 12-hr map of the jet.

Support.Temporal could be the scan length, e.g. between source changes;
this is used as an important class for ATCA and probably the VLA, but less
so for MERLIN and VLBI.  There is also the duration of the observation of
each source (not necessarily continuous).
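
As a rough illustration only (the field names and numbers below are
invented, not part of the agreed model), the temporal row of the matrix
for a hypothetical radio observation might be filled in like this in
Python:

temporal_characterisation = {
    # smallest time unit in the data (visibility integration time)
    "Resolution.Temporal": {"value": 8.0, "unit": "s"},
    # how the product degrades when chopped into shorter chunks:
    # a toy table of (chunk length in s, relative noise increase)
    "SensitivityFunction.Temporal": [(3600.0, 3.5), (43200.0, 1.0)],
    # scan length between source changes, plus total time on source
    "Support.Temporal": {"scan_length_s": 600.0, "time_on_source_s": 43200.0},
}

for key, value in temporal_characterisation.items():
    print(key, "->", value)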

Anyway, the point is that some of these details are _not_ relevant outside
of the observatory at this stage; data providers do have to understand the
matrix so that any values supplied are meaningful, but we should make it
clear that if something is irrelevant it is better to leave it out than
to supply confusing information or, worse still, not publish data at all
because it looks too complicated.


> : > Sample precision: for Spatial -> pixel scale,
>
> : As I've explained, if interferometry image data have been conventionally
> : reduced, the pixel is a rough indicator of sample precision, but it isn't
> : an immutable property of the detector.  For extracted source catalogues
>
> Doesn't matter. Characterization is about the dataset in your hand, not
> about the detector. So sample precision is exactly that: the
> pixelization you have chosen for your map, if a map is what you have.
> It doesn't say anything about your point source position precision,
> that's covered by the Resolution which is a different element of the
> model.  If you're dealing with visibility data and haven't made a map
> yet, then there's no spatial sample precision to define, but there is
> sample precision in the U V coordinates (at least in the pixelized U V
> data I have seen coming out of AIPS) and in an ideal world that should
> be recorded in U and V characterization axes. Of course it will be a
> while before much software will do useful stuff with such axes.

Yes, my worry is that most astronomers attach far more importance to
pixel size than is merited outside of data from pixellated detectors.  The
only place the VO needs to know the pixel size of radio maps is for a
display tool or to convert Jy/beam to Jy/pixel.  As for uv data cell size,
that is an even more esoteric concept, and I think it will always - or for a
long time - only be relevant behind a protective screen, i.e. VO
interfaces to specialist interferometry data centres.  The minimum
baseline length is a useful quantity as it tells you the largest spatial
scale in the data.
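
For what it is worth, here is a hedged sketch of those two radio-specific
conversions (the function names and all numbers are made up for
illustration, and the beam is assumed to be Gaussian):

import math

def jy_per_beam_to_jy_per_pixel(s_jy_beam, bmaj_arcsec, bmin_arcsec, pixel_arcsec):
    # divide by the number of pixels per Gaussian beam area
    beam_area = math.pi * bmaj_arcsec * bmin_arcsec / (4.0 * math.log(2.0))
    return s_jy_beam / (beam_area / pixel_arcsec ** 2)

def largest_angular_scale_arcsec(min_baseline_m, freq_ghz):
    # roughly lambda / B_min, converted from radians to arcseconds
    wavelength_m = 0.299792458 / freq_ghz
    return math.degrees(wavelength_m / min_baseline_m) * 3600.0

# e.g. a 5 GHz map with a 0.15" beam on 0.05" pixels, shortest baseline ~10 km
print(jy_per_beam_to_jy_per_pixel(1.0, 0.15, 0.15, 0.05))
print(largest_angular_scale_arcsec(10000.0, 5.0))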

> I would say it's the rows (e.g. SensitivityFunction, that's the rows right?)
> that must be usable generally. The columns (spatial, etc.) can vary,
> although the standard ones will almost always be usable.

Yes (and to the rest of Jonathan's comments)

Thanks again

Anita

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Dr. Anita M. S. Richards, AVO Astronomer
MERLIN/VLBI National Facility, University of Manchester,
Jodrell Bank Observatory, Macclesfield, Cheshire SK11 9DL, U.K.
tel +44 (0)1477 572683 (direct); 571321 (switchboard); 571618 (fax).


