[ImageDM] Mapping
Douglas Tody
dtody at nrao.edu
Thu Nov 21 09:36:26 PST 2013
On Wed, 20 Nov 2013, CresitelloDittmar, Mark wrote:
> Doug,
>
> I was looking at the model from the opposite side today. Asking "what is
> a ND-Cube or ND-Image", ignoring all the Obs/Char metadata stuff, and
> then re-visiting ImageDM..
>
> A couple quick questions.
>
> 1) pg 29: "If multiple WCS instances are required for an image, this can
> be expressed by merely having multiple instances of
> Mapping,"
> The diagram (Figure 2 on pg 12) shows a 0..1 relation between Data and
> Mapping. was this supposed to be 0..*?
Yes, although there may be better alternatives to having an entire new
Mapping instance. Often, when multiple WCS mappings are supported, they
differ only in the Spatial or Spectral function, not the linear term.
> 2) Is it possible to scale the data values?
Not with a WCS, so far as I know. This would require some other sort of
transformation (FluxFrame for example addresses this for magnitude to
flux conversion). Unit conversions are far more common.
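To illustrate the sort of non-WCS transformation meant here, a minimal
Python sketch of magnitude-to-flux conversion via the standard Pogson
relation (the zero-point flux value is the AB-magnitude convention,
used here purely for illustration; the function names are not from any
model):

```python
import math

def mag_to_flux(mag, zero_point_flux=3631.0):
    """Convert a magnitude to a flux via the Pogson relation.

    zero_point_flux is the flux corresponding to magnitude zero
    (3631 Jy is the AB-magnitude convention; illustrative only).
    """
    return zero_point_flux * 10.0 ** (-0.4 * mag)

def flux_to_mag(flux, zero_point_flux=3631.0):
    """Inverse conversion: flux back to magnitude."""
    return -2.5 * math.log10(flux / zero_point_flux)
```

Because the relation is logarithmic, it cannot be expressed as a linear
WCS term; some separate model element (e.g. FluxFrame) has to carry it.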
> 3) Can you give an example of the content for "Mapping.AxisMap"?
> Say I have an image with WCS ("RA", "DEC", "Time"), pixel axes (#1,
> #2, #3) plus value
> The 'Linear' portion is applied (CD matrix) to (#1, #2, #3) producing
> intermediate axes.
> a) do these have names?
>
> AxisMap (pg 29) "is used to map the axes of the intermediate world
> coordinates (i.e., after the CD or PC transform, which may transpose or
> rotate the image axes) to the axes of
> the final world coordinate system."
>
> This has to show that the result from Linear transform on #1 and #2
> (assuming no transposition), go to the 2D SpatialAxis projection, while
> the result of the #3 goes to TimeAxis.
The point of the AxisMap and the Spatial/Spectral/Time etc. mappings is
to separate out the image-axis-to-world-function mappings, which FITS
WCS combines into a single CTYPE term, so that we can define and work
with a world function such as the Spatial or Spectral transform without
having to know about the image geometry.
The details of how the axis map is used have not yet been fully
specified and are still being prototyped. My SIAV2 prototype implements
an axis map in the cutout code (C task). In that case I just used
integer codes AX_SP1, AX_SP2, AX_EM, AX_TIME, AX_POL (spatial 1 and 2,
spectral/em, etc.). In the Mapping model we specify the axis map as a
string type so we would need to define constant string equivalents to
identify the WCS axis types.
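Since the details are still being prototyped, here is only a rough
Python sketch of how an axis map might route intermediate (post-CD
matrix) axes to world axis types, following the integer codes from the
SIAV2 cutout prototype; the helper name and the example values are
hypothetical:

```python
# Integer axis-type codes as used in the SIAV2 cutout prototype:
# spatial 1 and 2, spectral/em, time, polarization.
AX_SP1, AX_SP2, AX_EM, AX_TIME, AX_POL = range(5)

# For an image with pixel axes (#1, #2, #3) carrying (RA, DEC, Time):
# intermediate axes 1 and 2 feed the 2-D spatial projection, while
# intermediate axis 3 feeds the time axis.
axis_map = [AX_SP1, AX_SP2, AX_TIME]

def route_axes(intermediate, axis_map):
    """Group intermediate world coordinate values by world axis type.

    'intermediate' are the values after the linear (CD/PC) transform;
    'axis_map' gives the world axis type for each intermediate axis.
    """
    routed = {}
    for value, axis_type in zip(intermediate, axis_map):
        routed.setdefault(axis_type, []).append(value)
    return routed

# Example: intermediate coordinates after the CD matrix is applied.
coords = route_axes([0.01, -0.02, 1234.5], axis_map)
# coords[AX_SP1] and coords[AX_SP2] would go to the spatial projection;
# coords[AX_TIME] would go to the TimeAxis transform.
```

The string equivalents of these codes (as the Mapping model specifies
the axis map as a string type) would still need to be defined.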
> And a couple comments below..
>
>
> On Tue, Nov 19, 2013 at 4:00 PM, Douglas Tody <dtody at nrao.edu> wrote:
> In the current ImageDM, CoordSys is mainly used to define the
> coordinate systems used globally for metadata describing the
> overall image dataset (Image, Observation, Char, etc.).
>
>
> So, this defines a singular coordinate system which is to be used
> throughout the metadata portions of the model?
> pg 12: "Image also adds a CoordSys element defining a uniform set of
> (default) coordinate frames and units for all Image metadata including
> Observation and Characterisation."
>
> If my Data has WCS for the Spectral axis in both Wavelength and Energy,
> (presumably via different Mapping-s), I CANNOT characterize my image in
> both?
Correct; we want uniform coordinate frames to characterize datasets.
The ultimate example of this is ObsTAP, where not only are all datasets
characterized using the same frames and units, the spectral axis units
are fixed as meters. Characterization is mainly used in VO for
discovery, for which uniform units are essential. Mapping/WCS is not
used for discovery; rather, it is an astrometric calibration used for
analysis, i.e., actual interaction with a specific dataset.
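For example, normalizing any spectral-axis value to meters (as ObsTAP
fixes the spectral axis units for discovery) is a simple conversion; a
Python sketch using the standard physical constants, with an
illustrative function name and unit labels:

```python
H = 6.62607015e-34    # Planck constant, J s
C = 299792458.0       # speed of light, m/s
EV = 1.602176634e-19  # one electron volt, J

def spectral_to_meters(value, unit):
    """Convert a spectral coordinate to wavelength in meters."""
    if unit == "m":
        return value
    if unit == "Hz":
        return C / value          # frequency -> wavelength
    if unit == "eV":
        return H * C / (value * EV)  # photon energy -> wavelength
    raise ValueError("unknown spectral unit: " + unit)
```

With every dataset characterized this way, a discovery query can
compare spectral coverage across archives without per-row unit columns.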
That said, it is possible to have more advanced usage of
Characterization, where additional metadata is added; this could be used
to override the global coordinate system with something defined more
locally. However this is not typical usage.
> It is highly desirable (from the client application perspective at
> least) to have a single set of coordinate frames used uniformly for
> all this higher level metadata. It is pretty much required to do
> this in a use case such as a data discovery query response, where
> there are many table rows each describing a different dataset.
> Otherwise we would have to add table columns to describe the frames
> used separately for each dataset/row, and the client would have a
> much harder time sorting out the information that comes back.
>
>
> I can see that particular applications (like data discovery) may wish to
> require a homogenized metadata set to simplify their use-case, but I'm
> not sure that is a limitation on the generic model, but rather a
> requirement of the application (SIAP).
Certainly SIAP or ObsTAP require homogenized metadata for discovery.
For Image data we have two primary use cases or applications: discovery
(SIAP or ObsTAP), and representing an actual Image dataset. Probably
one can allow more flexibility in terms of coordinate frames and units
in a dataset instance: we already do so for Mapping, for example. In
practice though, I suspect data providers will only compute
characterization metadata once for a dataset, so the primary use
case (data discovery in the VO) will govern what is done for high
level metadata.
The main point of Characterization in the VO is to uniformly
characterize datasets. Here "uniform" means characterize in the same
way for each measurement axis: location, bounds, support, etc. -
obviously a simplified/approximate approach that cannot be precise, at
least at the higher levels. When we start to get into precise
calibrations the models get more complex. The Photometry model is a
good example of that. Mapping/WCS is still a mostly uniform model, but
it does not describe the character of a dataset; it precisely maps
sample coordinates to world coordinates and vice versa.
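The invertibility of that sample-to-world mapping can be sketched for a
single linear axis (CRVAL/CRPIX/CDELT-style terms); the function names
and values here are illustrative, not from the ImageDM spec:

```python
def pix_to_world(pix, crpix, crval, cdelt):
    """Forward mapping: pixel coordinate to world coordinate."""
    return crval + (pix - crpix) * cdelt

def world_to_pix(world, crpix, crval, cdelt):
    """Inverse mapping: world coordinate back to pixel coordinate."""
    return crpix + (world - crval) / cdelt

# Round trip: the forward and inverse transforms compose to identity,
# which is what lets Mapping support both analysis directions.
w = pix_to_world(100.0, crpix=50.0, crval=150.0, cdelt=0.001)
p = world_to_pix(w, crpix=50.0, crval=150.0, cdelt=0.001)
```

Characterization, by contrast, only needs one-way approximate summaries
(location, bounds, support), which is why the two models stay separate.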
- Doug