[TRANSFORM] What is a "frame"?
Ed Shaya
Edward.J.Shaya.1 at gsfc.nasa.gov
Thu Nov 6 10:35:30 PST 2003
David Berry wrote:
>
>
>1) I am not sure it is a good idea to have the algorithm defined
>externally (i.e. via a URL). If your implication is that client software
>could actually download some code to implement the transformation,
>then what happens if no network connection is available? On the other
>hand, if your implication is that the URL is just a label for a
>pre-defined algorithm which the client has built-in code to execute, then
>how can the system be extended (i.e. what does a data producer do if he
>wants to use a Mapping which the client does not know anything about)?
>This is exactly where FITS-WCS fails - it defines a list of recipes
>with associated labels, which all conforming clients know how to
>implement. But if you want to do something for which there is no recipe,
>you are snookered (unless you are willing to wait 10 years for the FITS
>committees to agree a new recipe and for all clients to implement the
>changes).
>
>I think a better system is to define the algorithm internally within
>the Mapping itself, using a system such as that I outlined in
>
>http://www.ivoa.net/internal/IVOA/IVOADMTransformsWP/VOMapping.sxw
>
>In this system, you *still* have a list of pre-defined algorithms, but the
>list includes many very simple algorithms (maybe down to things as
>simple as 4-function arithmetic), and, crucially, allows new
>self-describing Mappings to be formed by combining these pre-defined
>Mappings together in various ways. This is much more flexible.
>
>
What you describe here sounds like what is called pseudocode, and it
would be a good idea for us to work out such a system. We could use your
VOMapping. Perhaps, though, we should look into WWW standards for this.
One could use MathML, or, if UML ever gets to the point that one can
autogenerate all the code that we need, we could use that; something
called xUML claims to do this. But, back to your question on URLs:
I do not think it is dangerous to use a URL to get code. Our systems can
be designed to download the software the first time a URL is pointed to,
and then to know that it is cached locally every time the same URL is
seen. In the metadata working group we have discussed XML Catalogs, which
would do this and also perhaps keep track of mirror sites for any IP
address.
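The download-once-then-cache behavior could be sketched as follows. This is a minimal illustration only: the `CodeCache` class and the `fetch` callable are hypothetical stand-ins, not part of any VO standard or of XML Catalogs.

```python
class CodeCache:
    """Cache downloaded transformation code, keyed by its URL.

    The first time a URL is seen, the code is fetched over the
    network; every later request for the same URL is served from
    the local cache, so no connection is needed after that.
    """

    def __init__(self, fetch):
        self._fetch = fetch          # callable: url -> source text
        self._store = {}             # url -> cached source

    def get(self, url):
        if url not in self._store:   # first sighting: download once
            self._store[url] = self._fetch(url)
        return self._store[url]


# Usage: the fetcher is invoked only on the first request per URL.
calls = []

def fake_fetch(url):
    calls.append(url)
    return "def transform(x): return 2 * x + 1"

cache = CodeCache(fake_fetch)
src1 = cache.get("http://example.org/mapping.py")
src2 = cache.get("http://example.org/mapping.py")
# calls now holds the URL exactly once, despite two get() calls
```

A real system would also have to handle cache invalidation and mirror sites, as discussed above, but the key point is that the URL serves as both the download location and the cache key.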
>
>2) The first Mapping describes how to convert pixel indices to (RA,Dec).
>So why do you also need a second Mapping to describe the Mapping from
>pixel indices to galactic coords? Once you know (RA,Dec) you can calculate
>the corresponding galactic coords. This is because the Frame describing
>(RA,Dec) includes all the necessary information (reference frame, equinox,
>etc). We do not need to have millions of data files floating around the
>world all of which have an (implicit) description of how to convert
>equatorial coords to galactic coords. Here, we *do* want fixed, built-in
>algorithms, because the conversion from (RA,Dec) to (L,B) is not like
>converting from pixel to (RA,Dec). There are potentially an infinity of
>different ways in which (RA,Dec) could be related to pixel coords and so we
>need a flexible way to describe them. But there is (in principle) only
>*one* correct way of converting from (RA,Dec) to (L,B) so we can just
>build that algorithm into our client software. So just include one Mapping
>in the Quantity (whether it be pixel->(RA,Dec) or pixel->(L,B) does not
>really matter), and then clients can use their built-in knowledge of how
>to convert to other celestial coordinate systems. [Of course, the Quantity
>could include other Mappings which convert pixel coords to other
>non-celestial systems such as focal plane coords.]
>
>
I was just giving an example of an alternate axis because you had asked
about that in an earlier e-mail.
Typically one would not need the galactic mapping if one had the
equatorial one.
>
>3) You have components marked "Input". Some of these seem to be values
>which parameterise the algorithm (e.g. centerRADE, pixelsPerArcsec, etc),
>but others (Frame1) seem to define the actual input coordinate system used
>by the Mapping [By the way, OutputUnits should be contained within "frame1"
>since it forms part of the description of the output coordinate system].
>These are two very different sorts of "input". I'm happy to see the first
>type (algorithmic parameter values) inside the Mapping, but not the
>second (descriptions of coordinate systems).
>
Frame1 is the output coordinate system information that the code needs
in order to choose which branch of code to go down.
>
>The reason for this is that you need to keep the definition of a Mapping
>totally separate from the nature of its inputs or outputs if you want to
>be able to combine Mappings together into arbitrary compound Mappings, as
>described in the document referenced above. You should think of a Mapping as
>being exactly the same as a mathematical "function" (we all remember
>learning what a function is in maths lessons at school). It is simply a
>recipe for transforming numbers, without any restrictions on what those
>numbers may represent. If (as a mathematician) I define a function f(x) as
>
> f(x) = 2*x + 1
>
>the function carries no information around with it about what "x"
>represents - it could be the mass of a star, the distance to the nearest
>chemist, number of papers published per year. The function itself does not
>know or care what "x" is. Neither does it know or care what the numbers
>produced by the function represent. This independence of the algorithm
>from the coordinate systems is essential if you want to be able to join
>functions (or Mappings) together. For instance, if you have a second
>function
>
> g(x) = sin( x ) + x
>
>then you want the function g(f(x)) to be well defined as
>
> g(f(x)) = sin( 2*x + 1 ) + ( 2*x + 1 )
>
>If these functions carried knowledge of coordinate systems around with
>them, you would need to check that the output coordinate system produced
>by f(x) matched the input coordinate system required by g(x). So you would
>then need to define a whole collection of variants on "f" which all
>used different coordinate systems in order to ensure that one was
>available which matched the coordinate system required by g.
>
>So I really think we need to move the definition of the input and output
>coordinate systems out of the Mapping class if we want to be able to
>create customised Mappings easily by combining other Mappings together.
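[The coordinate-free composition described here can be sketched in a few lines of Python. The `Mapping` class below is purely illustrative, not the AST or VOMapping API; it only shows that recipes which carry no knowledge of coordinate systems compose freely.]

```python
import math


class Mapping:
    """A pure numeric recipe: it neither knows nor cares what x represents."""

    def __init__(self, fn):
        self._fn = fn

    def __call__(self, x):
        return self._fn(x)

    def then(self, other):
        """Compose: (self.then(other))(x) == other(self(x))."""
        return Mapping(lambda x: other(self(x)))


f = Mapping(lambda x: 2 * x + 1)         # f(x) = 2*x + 1
g = Mapping(lambda x: math.sin(x) + x)   # g(x) = sin(x) + x

gf = f.then(g)                           # g(f(x)) = sin(2x+1) + (2x+1)
# Because neither f nor g carries coordinate-system knowledge,
# no "matching" check is needed before composing them.
```

Composition works for any pair of Mappings precisely because the class stores nothing but the numeric recipe.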
>
>But obviously you do need to know at some point how a Mapping is being
>used (i.e. what its inputs and outputs represent). So for this reason
>we define a higher class of object, a "FrameSet" which combines Frames and
>Mappings. So a simple FrameSet used to describe (say) WCS may look like:
>
> WCS { (an instance of a FrameSet)
> Frame0 (a description of the input coordinate system used by
> Mapping1)
> Frame1 (a description of the output coordinate system used by
> Mapping1)
> Mapping1 (the Mapping from frame0 to frame1 - this may be a
> complex Mapping formed by combining several other
> simpler Mappings)
> }
>
Mapping between frames is not the most general type of mapping, and I am
trying to fit it into the framework of any Mapping. Does it map from
three Johnson UBV images to a Cousins I image? Does it map between
astronomical terms and space physics terms? WCS is one type of mapping:

WCSMapping
    Algorithm
        "Mapping1"
    Input
        InfoOnInputQuantity
            Frame0
        InfoOnOutputQuantity
            Frame1
    Output
        QuantityType

This motivates me to add an Output QuantityType because, as you say, the
Algorithm need not care about the output QuantityType. So I think we are
now just arguing about the order of containment. I see a need for a few
more tags to be thrown in to say explicitly what is input to what, and
you would like to use a standard-practice way of indicating this.
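To make the containment order concrete, the outline above could be serialized as a nested structure like the following. This is purely illustrative; the tag names are from my outline, and the nesting shown is one assumption about how they would be arranged.

```python
# Illustrative only: the WCSMapping containment order as a nested dict.
# "Frame0"/"Frame1" stand in for full Frame descriptions.
wcs_mapping = {
    "WCSMapping": {
        "Algorithm": "Mapping1",
        "Input": {
            "InfoOnInputQuantity": "Frame0",
            "InfoOnOutputQuantity": "Frame1",
        },
        "Output": {
            # The Algorithm itself need not care what this is.
            "QuantityType": None,
        },
    },
}
```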
>
>
>You can add as many Frames as you like to a FrameSet, by specifying a
>new Frame together with a Mapping which connects it to one of the Frames
>already in the FrameSet. So you end up with a tree structure in which the
>nodes and leaves are Frames, and the connections between them are
>Mappings. One of the Frames in a WCS FrameSet will represent pixel
>coordinates. This Frame is special because not only does it describe the
>input coordinate system for one or more Mappings, it also describes the
>coordinate system in which the data array is accessed. For this reason, we
>flag it and call it the "base" Frame. So to use a FrameSet to transform a
>pixel position, you find the base Frame in the FrameSet, you then select
>one of the other Frames (the one which matches the coordinate system you
>want to transform into), and get the total Mapping from the base Frame to
>your selected Frame, which you then use to transform your pixel position.
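[The lookup-and-transform procedure described here can be sketched as follows. The classes are hypothetical stand-ins, not the real AST API (for that, see www.starlink.ac.uk/ast); in particular this sketch only handles frames connected directly to the base, where the real system searches the tree and composes the Mappings along the path.]

```python
class FrameSet:
    """Frames connected by Mappings; one frame is flagged as 'base'."""

    def __init__(self, base_name):
        self._base = base_name
        self._frames = {base_name}
        self._mappings = {}   # (from_frame, to_frame) -> callable

    def add_frame(self, name, connect_to, mapping):
        """Add a Frame plus the Mapping from an existing Frame to it."""
        assert connect_to in self._frames
        self._frames.add(name)
        self._mappings[(connect_to, name)] = mapping

    def transform(self, target, position):
        """Map a position from the base Frame into the target Frame."""
        # Simplification: assume the target connects directly to base.
        return self._mappings[(self._base, target)](position)


# Usage: pixel -> (RA, Dec) via a toy linear plate solution
# (1 arcsec per pixel about a reference point; illustrative numbers).
wcs = FrameSet("pixel")
wcs.add_frame(
    "sky", "pixel",
    lambda xy: (10.0 + xy[0] / 3600.0, 40.0 + xy[1] / 3600.0),
)
ra, dec = wcs.transform("sky", (3600, 7200))
```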
>
>So, in conclusion ("at last!") I would say that you should replace the
>collection of Mappings in your structure with a FrameSet.
>
>We have a practical, working implementation of these ideas which we have
>been using to describe WCS within data sets for approaching a decade now.
>It has proved to be extremely versatile, and includes code for doing all
>the above steps, including a system for searching a FrameSet for a Frame
>with any given characteristics, and finding the total Mapping from that
>Frame to any other Frame in the FrameSet (as usual, the URL is
>www.starlink.ac.uk/ast/).
>
>Sorry for the long message, but I feel that mis-understanding (mine, at
>least) is rife in these threads, so I've attempted to spell out these
>ideas simply if somewhat long-windedly!
>
>David
>
>
>----------------------------------------------------------------------
>Dr David S. Berry (dsb at ast.man.ac.uk)
>
>STARLINK project (http://www.starlink.ac.uk/)
>Rutherford Appleton Laboratory, DIDCOT, United Kingdom, OX11 0QX
>
>Centre for Astrophysics, University of Central Lancashire,
>PRESTON, United Kingdom, PR1 2HE
>
>
>