<div dir="ltr"><div><div><div><div><div><div>To respond to Tom: I think everybody in this conversation is sane enough to agree with you on the general goal of the reference implementations.<br><br>Technically speaking, though, a data model alone does not achieve any purpose other than describing entities and their relationships in a particular domain. It doesn't serve them in protocols, it doesn't serialize them in VOTables, but it is used by other standards to achieve these other goals.<br><br>We have a process for defining standards that ensures that use cases are established first, and the standards need to be reviewed to make sure that they fulfill them.<br><br></div>What does this mean for a data model? Since we are introducing standard representations for data models, they have to pass the "human" review and reference implementation process to make sure they capture the universe of discourse they were supposed to capture, but now they can also be validated against an XML schema and a schematron.<br><br></div>As for what qualifies as reference serializations (which I thought was in the scope of the Mapping document, that's why we split them years ago), you probably need at least three things:<br></div> * A serialization (fake if necessary) that touches all of the entities and all of the relationships in the model. You can't possibly cover all possible combinations/permutations. 
That would be insane and I don't know of anybody who validates UML class diagrams by building all possible object diagrams that can be created from it (and if there is recursion, that's not even possible).<br></div><br></div> * One or more real-life serializations from actual or prototype services covering the use cases the model was designed to fulfill.<br><br></div><div> * Client software that can read the serializations and prove that it can "understand" the information that was encoded in them.<br><br></div><div>I would argue that if there are no concrete, real-life implementations of parts of a data model, then the model was over-specified and covers more use cases than were needed. If those parts have not been tested, they should not be there to begin with. In this sense, while it is fine if a reference implementation does not cover the whole model, the union of all of the implementations (minus the possible "fake" serialization, which works as some sort of "unit test", so it doesn't count in this context) should.<br><br></div><div>Reference implementations are supposed to be complete. If they (alone or together) are not complete, the standard does not have enough reference implementations by definition.<br><br></div><div>Omar.<br></div><div><br></div></div><div class="gmail_extra"><br><div class="gmail_quote">On Tue, May 10, 2016 at 12:32 PM, Mireille Louys <span dir="ltr"><<a href="mailto:mireille.louys@unistra.fr" target="_blank">mireille.louys@unistra.fr</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
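(Aside: the coverage rule above — each real implementation may be partial, but their union, excluding the "fake" unit-test serialization, must touch every entity and relationship — can be sketched as a simple set check. All entity, relationship, and implementation names below are hypothetical, purely for illustration:)

```python
# Hypothetical model: which entities and relationships exist.
MODEL = {
    "entities": {"Source", "Position", "Flux"},
    "relationships": {"Source->Position", "Source->Flux"},
}

# Model elements touched by each (hypothetical) serialization.
IMPLEMENTATIONS = {
    "fake-unit-test": {"Source", "Position", "Flux",
                       "Source->Position", "Source->Flux"},
    "archive-A": {"Source", "Position", "Source->Position"},
    "service-B": {"Source", "Flux", "Source->Flux"},
}

def union_covers_model(model, implementations, exclude=("fake-unit-test",)):
    """True iff the real implementations together touch every model element."""
    covered = set()
    for name, touched in implementations.items():
        if name not in exclude:  # the "fake" serialization doesn't count
            covered |= touched
    required = model["entities"] | model["relationships"]
    return required <= covered

print(union_covers_model(MODEL, IMPLEMENTATIONS))  # True: A and B together cover everything
```

Here `archive-A` and `service-B` are each incomplete on their own, but their union covers the whole model, which is the situation the paragraph above calls acceptable.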
<div text="#000000" bgcolor="#FFFFFF">
Hi Gerard, Hi all, <br><span class="">
<br>
<div>Le 10/05/2016 18:04, Gerard Lemson a
écrit :<br>
</div>
<blockquote type="cite">
<div dir="ltr">Hi Tom
<div>I think that writing a model in VO-DML is *not* an
implementation of the model, but its definition. It *is* an
implementation of VO-DML,</div>
<div><br>
</div>
</div>
</blockquote></span>
Yes, I agree. <br><span class="">
<blockquote type="cite">
<div dir="ltr">
<div>An *instance* of the model is a valid implementation of the
model, but we have no generic standard way (yet) for
representing instances of data models. The mapping document
will fit that, but protocols can also do that.</div>
<div>Showing interoperability of the data model could be an
application that uses two or more independent instances of the
data model serialized in some standard way.</div>
<div>One way this might happen is that votables annotated with
the same data model are interpreted as instances of the data
model.</div>
<div>And it would be nice if something interesting is done with
them. Ideally this could be an implementation of one of the use cases
the model was supposed to support.</div>
</div>
</blockquote></span>
The different scenarios proposed in the use-cases should be
checked, I guess. <br>
This is sometimes difficult if the model tries to encompass many
different situations (as Char tried, and ND-Cube probably will).<br>
Could we envisage a partial validation where the main scenarios are
checked first and the secondary ones later?<br>
<br>
In other words, should we be happy with a 75% validation rate for
serialisations obtained from a new data model?<br>
<br>
my 2c, Mireille.<div><div class="h5"><br>
<br>
<blockquote type="cite">
<div dir="ltr">
<div>Gerard</div>
<div class="gmail_extra"><br>
</div>
<div class="gmail_extra"><br>
<div class="gmail_quote">On Tue, May 10, 2016 at 10:46 AM, Tom
McGlynn (NASA/GSFC Code 660.1) <span dir="ltr"><<a href="mailto:tom.mcglynn@nasa.gov" target="_blank">tom.mcglynn@nasa.gov</a>></span>
wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">If you
feel that data models are not subject to the requirement
of two reference implementations, that's fine but then
this discussion is moot regardless. If you think they are
then you are quibbling about my choice of words. Feel free
to substitute whichever you like for 'protocol'.<br>
<br>
Regards,<br>
Tom<br>
<br>
Matthew Graham wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
But a data model is not a protocol.<br>
<br>
-- Matthew<br>
<br>
On May 10, 2016, at 4:37 PM, Tom McGlynn (NASA/GSFC Code
660.1) wrote:<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Counting the fact that you have written a
definition of the model as an implementation
of the model sounds awfully incestuous. I have always
read the requirement as having two different groups
using the protocol in some service, ideally one that
supports doing astronomy.<br>
<br>
Tom McGlynn<br>
<br>
Matthew Graham wrote:<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
So once we have a mapping standard with reference
implementations any DM model specified in VO-DML
could automatically have a reference implementation
(according to these criteria) which would make life
easier. Time to get that mapping spec out :)<br>
<br>
-- Matthew<br>
<br>
<br>
<br>
On May 10, 2016, at 4:00 PM, Laurino, Omar wrote:<br>
<br>
<blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex">
Hi,<br>
<br>
I agree with Gerard that items 1) and 2) look like
the same thing, unless one brings in a standard for
instance serializations. Since we can have
standardized mapping strategies (as different
recommendations) that map instances of valid
VODML/XML models to their standard
serializations, I don't think a valid
serialization of an instance should be required
for data models: this should be guaranteed by the
mapping standard(s) and their reference
implementations.<br>
<br>
As for the ModelImport requirement, it makes sense
for "low level" models but not for "high level"
ones, plus there is no way to guarantee that all
types defined by a model are extendable/usable by
other models. It gets too complicated.<br>
<br>
I would suggest we include the "model import"
evidence as a "soft requirement", to be evaluated
on a case-by-case basis depending on the use cases
of the model. For STC, it makes a lot of sense to
require this additional proof of interoperability,
because the model is intended to be a building
block for other models.<br>
<br>
Or, we might want to *require* that at least one
reference implementation is a
mission/archive/service-specific model that
extends the standard one. So, if we had a model
for Sources/Catalogs, at least one reference
implementation should be a
mission/archive/system-specific model that proves
the model can be meaningfully extended by actual
specializations. Other than properly validating as
VODML/XML, this model should be evaluated for its
domain-specific content, and that's probably not
something you can automate. This should also be a
good way to involve implementors from the
community, so it's probably my preferred option.<br>
<br>
Omar.<br>
<br>
<br>
On Tue, May 10, 2016 at 9:38 AM, Gerard Lemson
<<a href="mailto:gerard.lemson@gmail.com" target="_blank">gerard.lemson@gmail.com</a>><br>
wrote:<br>
<br>
Hi<br>
What is meant by item 2, "An XML serialization
of the DM".<br>
The standard representation (serialization?)
of a VO-DML data<br>
model is VO-DML/XML, i.e. XML. And that is the
representation<br>
that can be validated (step 1?) using
automated means, for<br>
example using XSLT scripts in the vo-dml/xslt
folder on volute@gavo.<br>
If 2) is meant to imply an XML serialization
of an *instance* of<br>
the model, then that is something we can only do once we have a
standard XML<br>
representation of instances of models. That
does not yet exist.<br>
The original VO-URP framework does contain an
automated XML<br>
Schema generator for its version of VO-DML,
but that has not yet been
ported to VO-DML.<br>
And of course the mapping document describes
how one can describe<br>
instances serialized in VOTable, but that is a
different standard.<br>
<br>
For what it's worth, I think that an
"implementation of VO-DML"<br>
is a data model expressed using that language
(in VO-DML/XML to<br>
be precise) and validated using software. The
latter requires
that the language should allow automated
validation.<br>
I think interoperable implementations of
VO-DML are two or more<br>
valid models that are linked by "modelimport"
relationships.<br>
I.e. one model "imports" the other(s) and uses
types from the<br>
other as roles or super types in the
definition of its own types.<br>
This is supported by the VODMLID/VODMLREF
mechanism of the language.<br>
<br>
Cheers<br>
Gerard<br>
<br>
<br>
On Tue, May 10, 2016 at 9:04 AM, Matthew
Graham<br>
<<a href="mailto:mjg@cd3.caltech.edu" target="_blank">mjg@cd3.caltech.edu</a>><br>
wrote:<br>
<br>
Hi,<br>
<br>
We're trying to define specifically what
would satisfy the<br>
reference implementation requirement for
an IVOA Spec in the<br>
context of a data model. The proposal is
that:<br>
<br>
(1) If the DM has been described using
VO-DML it can be<br>
validated as valid VO-DML<br>
<br>
(2) An XML serialization of the DM can be
validated<br>
<br>
so the question is whether the combination of the two is
sufficient to<br>
demonstrate the validity and potential
interoperability of<br>
the data model (which is the purpose of
the reference<br>
implementations).<br>
<br>
Cheers,<br>
<br>
Matthew<br>
<br>
<br>
<br>
<br>
<span><font color="#888888">
<br>
-- <br>
Omar Laurino<br>
Smithsonian Astrophysical Observatory<br>
Harvard-Smithsonian Center for Astrophysics<br>
100 Acorn Park Dr. R-377 MS-81<br>
02140 Cambridge, MA<br>
<a href="tel:%28617%29%20495-7227" value="+16174957227" target="_blank">(617)
495-7227</a><br>
</font></span></blockquote>
</blockquote>
</blockquote>
</blockquote>
<br>
</blockquote>
</div>
<br>
</div>
</div>
</blockquote>
<br>
</div></div></div>
</blockquote></div><br><br clear="all"><br>-- <br><div class="gmail_signature"><div dir="ltr">Omar Laurino<br>Smithsonian Astrophysical Observatory<br>Harvard-Smithsonian Center for Astrophysics<div><font color="#999999">100 Acorn Park Dr. R-377 MS-81</font></div><div><font color="#999999">02140 Cambridge, MA</font><br><span style="color:rgb(153,153,153)"><a value="+16174957227" style="color:rgb(17,85,204)">(617) 495-7227</a></span></div></div></div>
</div>