<html>
<head>
<meta content="text/html; charset=utf-8" http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<p>Dear all,<br>
</p>
<p><br>
</p>
<p>Mireille Louys, Laurent Michel and I discussed the TimeSeries
Cube data model here in Strasbourg.
<br>
<br>
Before moving on to serialization, we tried to go back to the basic
concepts needed to represent TimeSeries and to match them to the
Cube data model, as Jiri did (although our conclusions
eventually differ).
<br>
<br>
<br>
In our approach, we focus on the time axis, considering it as
generally irregularly sampled, in other words "sparse".
<br>
<br>
<br>
For each time sample we have a (set of) measurements, which may
be a single flux (in the case of light curves) or any other
scalar value, but may also be an observational dataset spanning
other data axes (spectrum, image, radio cube, velocity map, ...).
In fact, for each time sample we have an N-D cube (of any
dimension, excluding time). And if a single data point, or single
value (flux), can be seen as a degenerate case of an N-D cube, then
everything is a set of NDCubes for different time samples!<br>
<br>
<br>
This concept allows us to describe light curves and time sequences
of spectra, of 2D images, and of (hyper)cubes.
<br>
<br>
<br>
By doing this we are not fully consistent with the NDCube data model:
we have something like a mixture between SparseCube and NDImage:
the time axis is sparse, and each sample on the time axis
indexes an N-D cube. Could this be a third specialisation of a
generic NDCube?</p>
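<p>The structure sketched above can be illustrated as follows. This is
purely a sketch of the concept; the class and attribute names are ours
and do not come from any IVOA model:</p>

```python
import numpy as np

# Hypothetical sketch: a time series as a sparse time axis, where each
# sample indexes an N-D cube. A scalar flux (a light-curve point) is
# just the degenerate 0-dimensional case, so light curves, spectrum
# sequences, and image sequences all share one structure.
class TimeSeries:
    def __init__(self):
        self.samples = []  # list of (time, ndcube) pairs

    def add(self, time, data):
        # np.asarray turns a scalar flux into a 0-d array, so every
        # measurement is stored uniformly as an N-D cube
        self.samples.append((time, np.asarray(data)))

    def dimensions(self):
        # per-sample dimensionality, excluding the time axis itself
        return [cube.ndim for _, cube in self.samples]

ts = TimeSeries()
ts.add(57000.0, 1.23)                # light-curve point: 0-d "cube"
ts.add(57000.7, np.zeros(128))       # a spectrum: 1-d
ts.add(57002.1, np.zeros((64, 64)))  # an image: 2-d
print(ts.dimensions())               # [0, 1, 2]
```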
<p><br>
</p>
<p>Cheers</p>
<p>François in collaboration with Laurent and Mireille.<br>
<br>
<br>
</p>
<br>
<div class="moz-cite-prefix">On 27/02/2017 at 16:56,
CresitelloDittmar, Mark wrote:<br>
</div>
<blockquote
cite="mid:CAH4enyNJP9E+xe5arNgbn=7nYxymOao+1qP66+tS-dvHkupvMw@mail.gmail.com"
type="cite">Markus,
<div><br>
</div>
<div>I'm on leave this week, so just a quick reply.</div>
<div>This approach is basically butting an instance against a role
across model boundaries, instead of defining a relationship.
This is a very different approach from the one the VO-DML and
compliant model work has been taking over the past few years. I
don't even know how to express that in the model. It seems a bit
late in the game to be expressing this concern so urgently.</div>
<div><br>
</div>
<div>(Harrumph), off to enjoy my vacation,</div>
<div>Mark</div>
<div><br>
</div>
<div><br>
On Monday, February 27, 2017, Markus Demleitner &lt;<a
moz-do-not-send="true"
href="mailto:msdemlei@ari.uni-heidelberg.de">msdemlei@ari.uni-heidelberg.de</a>&gt;
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0
.8ex;border-left:1px #ccc solid;padding-left:1ex">Hi Mark,<br>
<br>
On Fri, Feb 24, 2017 at 08:50:24PM -0500, CresitelloDittmar,
Mark wrote:<br>
> Can you make a statement about how you think this
proposed arrangement<br>
> would affect<br>
> 1) Validation<br>
<br>
You separately validate against each version of each DM that
is in<br>
the annotation. This is actually, I claim, what matters to
clients.<br>
Say I'm a simple plot program -- I'm not at all concerned about<br>
PubDIDs or about objects observed; all I need is to figure out which<br>
axes make up the cube in what way. So, I need valid
NDCube-1.x, but<br>
I don't break if the data provider chose to annotate with
Dataset-1.0<br>
or Dataset-2.0.<br>
<br>
Say I'm a component that can transform coordinates between
different<br>
frames. I'm concerned about correct STC-2.x annotation, but I<br>
couldn't care less if the coordinates I'm converting are axes
of an<br>
NDCube or are source positions in a source DM, or yet
something else,<br>
and again I don't care at all about any other DM that might be<br>
instantiated in that particular dataset.<br>
<br>
Say I'm a cube analysis program. In that case I'll use NDCube
to<br>
understand the structure, STC to do reprojections and
regridding,<br>
perhaps photometry to convert the units of the dependent axis,
or<br>
some other DM if the cube values require it. And for all of
these, I<br>
can simultaneously support multiple versions (independently
from each<br>
other and thus relatively cheaply), so I can maintain
backwards<br>
compatibility for a long time.<br>
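The per-model validation idea above can be sketched as follows; the
model names, version numbers, and the tiny validator registry are
purely illustrative, not actual VO-DML artefacts:<br>

```python
# Hypothetical sketch: annotations are keyed by (model, version); a
# client validates only the models it implements and simply ignores
# the rest, so an unknown Dataset version cannot break it.

# Validators this client implements -- several versions of one model
# can be supported side by side, independently of each other.
SUPPORTED = {
    ("NDCube", "1.0"): lambda inst: "axes" in inst,
    ("NDCube", "1.1"): lambda inst: "axes" in inst,
    ("STC", "2.0"): lambda inst: "frame" in inst,
}

def validate(annotations):
    """Validate each annotated model independently; annotations for
    models the client does not know are skipped, not rejected."""
    results = {}
    for (model, version), instance in annotations.items():
        check = SUPPORTED.get((model, version))
        if check is not None:
            results[(model, version)] = check(instance)
    return results

anns = {
    ("NDCube", "1.1"): {"axes": ["time", "flux"]},
    ("STC", "2.0"): {"frame": "ICRS"},
    ("Dataset", "2.0"): {"pubdid": "ivo://example/ds1"},  # ignored
}
print(validate(anns))
```
<br>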
<br>
> 2) Interoperability<br>
<br>
Interoperability is actually what this is about. If we build<br>
Megamodels doing everything, we either can't evolve the model
or will<br>
break all kinds of clients needlessly all the time --
typically,<br>
whatever annotation they expect *would* be there, but because
their<br>
position in the embedding DM changed, they can't find it any
more.<br>
Client authors will, by the way, quickly figure this out and
start<br>
hacking around it in weird ways, further harming
interoperability;<br>
we've seen it with VOTable, which is what led us to the<br>
recommendations in the XML versioning note.<br>
<br>
If we keep individual DMs small and as independent as humanly possible,<br>
then even if one has to be incompatibly changed, most other functionality<br>
will just keep working and code won't have to be touched (phewy!).<br>
<br>
I'd argue by pulling all the various aspects into one
structure,<br>
we're following the God object anti-pattern<br>
(<a moz-do-not-send="true"
href="https://en.wikipedia.org/wiki/God_object"
target="_blank">https://en.wikipedia.org/<wbr>wiki/God_object</a>).
Ok, since we're using<br>
composition rather than a flat aggregation, it's not a clear
case,<br>
but I maintain we're buying into many of the issues with God
objects<br>
without having to.<br>
<br>
> ???<br>
> The Cube model has the following links to other models<br>
> a) Dataset - defines itself as an extension of
ObsDataset<br>
> b) Coordsys - where coordinate system objects and the
pattern for Frames<br>
> are defined<br>
> c) Coords - where the pattern for Coordinates is
defined (and implemented<br>
> for several domains, but that is not important here)<br>
> d) Trans - where Transform mappings are defined.<br>
<br>
For all of these I'd ask: How does it help clients to have
these<br>
pulled together in one place? What can it do that it couldn't
do if<br>
these were separate annotations?<br>
<br>
That's actually my personal guideline for DM design: "But does
it<br>
help clients?"<br>
<br>
> You say that cube should not import Coords to identify
what a Coordinate<br>
> is.. that it simply indicates that 'it has Coordinates'.<br>
> It currently says that an Observable is a
coords:DerivedCoordinate .. which<br>
> is an abstract object restricted to<br>
> follow the pattern defined in that model. Any model can
implement the<br>
> pattern and declare itself as that type of Coordinate,<br>
> and be instantly usable in a cube instance.<br>
><br>
> Without this explicit link, then one cannot validate
across these<br>
> boundaries.<br>
><br>
> An instance would have<br>
> Element with role cube:DataAxis.observable<br>
> Instance with type &lt;whatever implemented
"Coordinate" type&gt;, i.e.:<br>
> spec:FluxCoord<br>
><br>
> But a validator cannot check if FluxCoord is actually a
Coordinate... (I<br>
> could put a ds:Curation instance<br>
> there.. the role and the instance would be individually
valid, but the<br>
> combination is nonsense).<br>
<br>
I'd maintain there's far too much that might work as
coordinate<br>
(Filter name, anyone?) or, even worse, observable to even hope
that a<br>
static validator will provide more benefit than harm (in terms
of<br>
making things hard that clients would otherwise have no issue
with).<br>
In the end, people will annotate something as a Coordinate
just to<br>
shut up the validator and confuse actual users of a Coordinate
DM,<br>
making things worse. And have mis-pointed axes really been a<br>
problem in any published dataset?<br>
<br>
So, I'm afraid I find the use case "make sure the thing
referenced<br>
from an NDCube axis is derived from Coordinate" too
unconvincing to<br>
warrant the complication of linking otherwise independent DMs.<br>
<br>
> And.. without the link, there is no binding the various
implementations of<br>
> Coordinate to the pattern.<br>
<br>
I have to admit that I find the current artefacts for current
STC on<br>
volute somewhat hard to figure out. But from what I can see
I'd be<br>
unsure how that binding would help me as a client; that may,
of<br>
course, be because I've not quite understood the pattern.<br>
<br>
If this turned out to be true, I'd take that as an indication
that<br>
Coordinate should move into ivoa or, perhaps, a DM of its own,
being<br>
so generic a concept that it actually needs sharing across<br>
"sub-domains".<br>
<br>
> Interoperability would suffer because there would be no
guarantee of<br>
> compatibility of different Coordinate types.<br>
> My code that understands the Coords pattern would have no
hope of<br>
> understanding any portion of<br>
> independently defined Coordinates.<br>
<br>
What does "understand" mean here? This is not a rhetorical
question<br>
-- I'm actually trying to understand where the complication
comes<br>
from. What information, in addition to what you get from STC
or<br>
comparable annotation, does your code require, and is there
really no<br>
other way to communicate it without having to have a hard link<br>
between NDCube and STC (or any other "physical" DM, really)?<br>
<br>
Thanks,<br>
<br>
Markus<br>
</blockquote>
</div>
</blockquote>
<br>
</body>
</html>