Time Series Cube DM - IVOA Note
msdemlei at ari.uni-heidelberg.de
Mon Feb 27 10:44:06 CET 2017
On Fri, Feb 24, 2017 at 08:50:24PM -0500, CresitelloDittmar, Mark wrote:
> Can you make a statement about how you think this proposed arrangement
> would effect
> 1) Validation
You separately validate against each version of each DM that is in
the annotation. This is actually, I claim, what matters to clients.
Say I'm a simple plot program -- I'm not at all concerned about
PubDIDs or about objects observed; all I need is to figure out which
axes make up the cube in what way. So, I need valid NDCube-1.x, but
I don't break if the data provider chose to annotate with Dataset-1.0.
Say I'm a component that can transform coordinates between different
frames. I'm concerned about correct STC-2.x annotation, but I
couldn't care less if the coordinates I'm converting are axes of an
NDCube or are source positions in a source DM, or yet something else,
and again I don't care at all about any other DM that might be
instantiated in that particular dataset.
Say I'm a cube analysis program. In that case I'll use NDCube to
understand the structure, STC to do reprojections and regridding,
perhaps photometry to convert the units of the dependent axis, or
some other DM if the cube values require it. And for all of these, I
can simultaneously support multiple versions (independently from each
other and thus relatively cheaply), so I can maintain backwards
compatibility for a long time.
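To make that concrete, here is a minimal sketch of such a client. The annotation layout (a list of dicts with "model" and "version" keys) and the handler names are invented for illustration; they are not any IVOA serialization. The point is that support for each model and major version is an independent entry, so adding NDCube-2 support touches nothing else, and unknown models are simply skipped:

```python
# Hypothetical sketch: a client keeps per-model, per-major-version
# handlers, so NDCube-1.x and STC-2.x support evolve independently.
# The annotation structure here is invented for illustration only.

HANDLERS = {
    ("ndcube", "1"): lambda ann: f"cube axes: {ann['axes']}",
    ("stc", "2"): lambda ann: f"frame: {ann['frame']}",
}

def process(annotations):
    """Apply every handler we know; silently skip models we don't."""
    results = []
    for ann in annotations:
        model, major = ann["model"], ann["version"].split(".")[0]
        handler = HANDLERS.get((model, major))
        if handler:
            results.append(handler(ann))
    return results

annotations = [
    {"model": "ndcube", "version": "1.1", "axes": ["ra", "dec", "freq"]},
    {"model": "stc", "version": "2.0", "frame": "ICRS"},
    {"model": "dataset", "version": "1.0"},  # unknown to this client: ignored
]
print(process(annotations))
```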
> 2) Interoperability
Interoperability is actually what this is about. If we build
Megamodels doing everything, we either can't evolve the model or will
break all kinds of clients needlessly all the time -- typically,
whatever annotation they expect *would* be there, but because its
position in the embedding DM changed, they can't find it any more.
Client authors will, by the way, quickly figure this out and start
hacking around it in weird ways, further harming interoperability;
we've seen it with VOTable, which is what led us to the
recommendations in the XML versioning note.
If we keep individual DMs small and as independent as humanly
possible, then even if one has to be incompatibly changed, most other
functionality will just keep working and code won't have to be
touched (phewy!).
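The breakage mode I mean can be sketched in a few lines. The nested dicts below stand in for an embedding megamodel (the layouts are invented): a client that navigates a fixed path into the structure breaks as soon as the embedding changes, while a client that searches for the sub-model's own block keeps working:

```python
# Hypothetical sketch: why embedding one DM inside another is fragile.
# In "v2" the cube moved one level down; the STC annotation itself is
# unchanged, yet the path-based client breaks. Layouts are invented.

mega_v1 = {"obsdataset": {"cube": {"stc": {"frame": "ICRS"}}}}
mega_v2 = {"obsdataset": {"data": {"cube": {"stc": {"frame": "ICRS"}}}}}

def frame_by_path(doc):
    # Relies on the embedding structure: raises KeyError on mega_v2.
    return doc["obsdataset"]["cube"]["stc"]["frame"]

def frame_by_model(doc):
    # Looks for the "stc" block wherever it sits; survives restructuring.
    if "stc" in doc:
        return doc["stc"]["frame"]
    for value in doc.values():
        if isinstance(value, dict):
            found = frame_by_model(value)
            if found is not None:
                return found
    return None

print(frame_by_model(mega_v1))  # ICRS
print(frame_by_model(mega_v2))  # ICRS -- still found after the move
```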
I'd argue by pulling all the various aspects into one structure,
we're following the God object anti-pattern
(https://en.wikipedia.org/wiki/God_object). Ok, since we're using
composition rather than a flat aggregation, it's not a clear case,
but I maintain we're buying into many of the issues with God objects
without having to.
> The Cube model has the following links to other models
> a) Dataset - defines itself as an extension of ObsDataset
> b) Coordsys - where coordinate system objects and the pattern for Frames
> are defined
> c) Coords - where the pattern for Coordinates is defined (and implemented
> for several domains, but that is not important here)
> d) Trans - where Transform mappings are defined.
For all of these I'd ask: How does it help clients to have these
pulled together in one place? What can it do that it couldn't do if
these were separate annotations?
That's actually my personal guideline for DM design: "But does it
help the client?"
> You say that cube should not import Coords to identify what a Coordinate
> is.. that it simply indicates that 'it has Coordinates'.
> It currently says that an Observable is a coords:DerivedCoordinate .. which
> is an abstract object restricted to
> follow the pattern defined in that model. Any model can implement the
> pattern and declare itself as that type of Coordinate,
> and be instantly usable in a cube instance.
> Without this explicit link, then one cannot validate across these
> An instance would have
> Element with role cube:DataAxis.observable
> Instance with type <whatever implemented "Coordinate" type> ie:
> But a validator cannot check if FluxCoord is actually a Coordinate... (I
> could put a ds:Curation instance
> there.. the role and the instance would be individually valid, but the
> combination is nonsense).
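[The check being discussed here can be illustrated with a small sketch; the class names and the role table below are invented stand-ins for the coords pattern, not actual model artefacts. A validator that knows the expected base type for a role can flag a ds:Curation-like instance in the observable slot, while any type implementing the Coordinate pattern passes:

```python
# Hypothetical illustration of the role/type combination check.
# All names here are invented stand-ins, not real DM types.

class Coordinate:            # the coords "Coordinate" pattern
    pass

class FluxCoord(Coordinate): # implements the pattern: valid observable
    pass

class Curation:              # a ds:Curation stand-in: not a Coordinate
    pass

# Role -> expected base type, as a hard link between the models would fix it.
EXPECTED_TYPE = {"cube:DataAxis.observable": Coordinate}

def validate(role, instance):
    expected = EXPECTED_TYPE.get(role)
    return expected is None or isinstance(instance, expected)

print(validate("cube:DataAxis.observable", FluxCoord()))  # True
print(validate("cube:DataAxis.observable", Curation()))   # False: nonsense combo
```
]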
I'd maintain there's far too much that might work as a coordinate
(Filter name, anyone?) or, even worse, as an observable to even hope
that a static validator will provide more benefit than harm (in terms
of making things hard that clients would otherwise have no issue with).
In the end, people will annotate something as a Coordinate just to
shut up the validator and confuse actual users of a Coordinate DM,
making things worse. And have mis-pointed axes really been a
problem in any published dataset?
So, I'm afraid I find the use case "make sure the thing referenced
from an NDCube axis is derived from Coordinate" too unconvincing to
warrant the complication of linking otherwise independent DMs.
> And.. without the link, there is no binding the various implementations of
> Coordinate to the pattern.
I have to admit that I find the current artefacts for STC on
volute somewhat hard to figure out. But from what I can see I'd be
unsure how that binding would help me as a client; that may, of
course, be because I've not quite understood the pattern.
If this turned out to be true, I'd take that as an indication that
Coordinate should move into ivoa or, perhaps, into a DM of its own,
being so generic a concept that it actually needs sharing across
models.
> Interoperability would suffer because there would be no guarantee of
> compatibility of different Coordinate types.
> My code that understands the Coords pattern would have no hope of
> understanding any portion of
> independently defined Coordinates.
What does "understand" mean here? This is not a rhetorical question
-- I'm actually trying to understand where the complication comes
from. What information, in addition to what you get from STC or
comparable annotation, does your code require, and is there really no
other way to communicate it without having to have a hard link
between NDCube and STC (or any other "physical" DM, really)?