VOEvent at Heidelberg InterOp

Frederic V. Hessman Hessman at Astro.physik.Uni-Goettingen.DE
Fri May 17 00:26:47 PDT 2013


Howdy!

Having discussed these concerns with Mario at Heidelberg, I agree with John that this is basically a non-problem for VOEvent: why worry about a several-hundred-byte overhead on 42 bytes of info when you're thinking of shipping whole images with the messages? To be fair, Mario responded to this, saying that a similar overhead would then also occur.  The point is that they want to use an Event Broker anyway: the database at the telescope is only supposed to publish.  A private broker could easily strip off and cache the images and pass the slimmed-down event on to the rest of us, or the internal representation of the events could be passed to a more intelligent broker, which then sends out the slimmed-down events and responds to queries.
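The broker idea above can be sketched in a few lines. This is a minimal, hypothetical illustration, assuming the thumbnail rides inline as a base64-encoded Param in the event's What section; the "thumbnail" Param name, the ivorn, and the in-memory cache are all made up for the example.

```python
# Minimal sketch of a private broker stripping an inline (hypothetical)
# base64 thumbnail Param from a VOEvent and relaying the slim event.
# Element layout loosely follows VOEvent 2.0's What/Param convention.
import xml.etree.ElementTree as ET

RAW_EVENT = """<voe:VOEvent xmlns:voe="http://www.ivoa.net/xml/VOEvent/v2.0"
  ivorn="ivo://example.org/demo#1" role="test" version="2.0">
  <What>
    <Param name="mag" value="18.3"/>
    <Param name="thumbnail" value="aGVhdnkgYmFzZTY0IHBheWxvYWQ="/>
  </What>
</voe:VOEvent>"""

cache = {}  # ivorn -> heavy payloads, served later on request

def slim(xml_text):
    """Cache any inline thumbnail Param, return the slimmed-down event."""
    root = ET.fromstring(xml_text)
    ivorn = root.get("ivorn")
    what = root.find("What")
    for param in list(what):
        if param.get("name") == "thumbnail":
            cache[ivorn] = param.get("value")  # keep the heavy bit locally
            what.remove(param)                 # relay only the slim event
    return ET.tostring(root, encoding="unicode")

slimmed = slim(RAW_EVENT)
```

The upstream database still just publishes; only the broker needs to know how to answer queries for the cached content.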

I can understand Mario saying that he'd rather just throw his heavy-weight events out of his VOEvent window and then forget about them, but doesn't the creator of an event have SOME amount of responsibility for having produced one? If you want to publish only, then set up or use the services of a broker capable of handling responses.   Since there are thousands of brokers out there waiting to be used…

Rick


On 16 May 2013, at 21:07, Roy Williams <roy at caltech.edu> wrote:

> John
> Thanks for the update from Heidelberg.
> 
> VOEventNet is like Twitter in that messages are deliberately kept short. I note that a lot of tweets include links to other content, and that there is no resulting meltdown of the internet. Justin Bieber has 39,000,000 followers! You might say that Justin doesn't make tweets at the rate that LSST will make VOEvents, but then Youtube has 4,500 uploaded seconds per second, which is a pretty high rate.
> 
> If we can find out more about these big systems, how they replicate and cache and distribute content, perhaps it will inform the VOEvent community about scaling up?
> Roy
> 


On 16 May 2013, at 19:14, John Swinbank <swinbank at transientskp.org> wrote:

> Dear all,
> 
> I'm currently on the train home from the (ongoing) InterOp in Heidelberg. I hope that Matthew (or possibly his successor as TDIG chair, depending on the outcome of today's Exec meeting) will provide some commentary on the general TDIG relevant discussion at the meeting, but there were a few VOEvent-specific items which I thought might be worthy of further discussion. I apologize in advance for the somewhat rambling nature of the following.
> 
> First (and with apologies to Bob, in particular, that this has been on my back burner for so long), the latest draft of the VOEvent Transport Protocol document is available from <http://tinyurl.com/20130513vtp>. It's my hope that we can agree upon a version of this that we're happy to take forward to the standardization process soon. Your comments are very much welcome.
> 
> Secondly, Mario Juric of LSST raised a couple of interesting points about future VOEvent development in today's time domain discussion session. I guess I should emphasize that VOEvent 2.0 seems "good enough" for the time being at least: I see no harm in musing over future extensions, and have no doubt they'll be necessary some day, but evolving too rapidly will simply hurt adoption rates.
> 
> With that out of the way: firstly, Mario worried about the relatively heavyweight nature of the XML serialization, pointing to a particular example from the IVOA website where a several-hundred-character VOEvent provides about 40 characters of actually useful information. In the LSST world of 2e6 events/night, that's obviously a substantial overhead. This naturally makes me recall previous discussions of alternative VOEvent serializations (JSON, anybody…?).
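The overhead Mario describes is easy to see side by side. The following is illustrative only: a toy XML packet in the style of a VOEvent (the ivorn, Param names, and the size of the elided WhereWhen block are invented) against a hypothetical compact JSON rendering of the same few fields.

```python
# Illustrative comparison of XML envelope overhead versus a compact,
# hypothetical JSON serialization of the same ~40 characters of payload.
import json

xml_packet = (
    '<?xml version="1.0" encoding="UTF-8"?>'
    '<voe:VOEvent xmlns:voe="http://www.ivoa.net/xml/VOEvent/v2.0" '
    'ivorn="ivo://example.org/survey#2013-05-17/0001" '
    'role="observation" version="2.0">'
    '<What><Param name="mag" value="18.3" ucd="phot.mag"/></What>'
    '<WhereWhen><!-- ~500 more characters of ObsDataLocation... --></WhereWhen>'
    '</voe:VOEvent>'
)

json_packet = json.dumps({"ivorn": "ivo://example.org/survey#0001",
                          "ra": 123.456, "dec": -54.321, "mag": 18.3})

overhead_ratio = len(xml_packet) / len(json_packet)
```

Even in this toy case the XML is several times larger, before the WhereWhen block is filled in.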
> 
> Of course, by ~2020 (indeed, by 2013) shifting a couple of million messages a night, even if they contain kilobytes of XML overhead, doesn't seem prohibitively expensive – and even if that were an issue within some part of the LSST system, they could presumably define their own internal event representation, and only reserialize it to XML for broadcast to the rest of the world.
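A back-of-envelope check of that claim, taking ~1 kB of XML overhead per event as a round illustrative figure:

```python
# Rough scale of the nightly overhead: 2e6 events with ~1 kB of XML
# boilerplate each. The 1 kB figure is an assumption for illustration.
events_per_night = 2_000_000
overhead_bytes = 1_000
total_gb = events_per_night * overhead_bytes / 1e9
# A couple of gigabytes per night -- modest by 2013 network standards,
# let alone those of ~2020.
```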
> 
> [On the other hand, defining an alternative (easily canonicalizable...) VOEvent representation might have other advantages regarding event signatures. But I think that's out of scope for the time being.]
> 
> Another issue Mario raised was that of embedding richer content, such as thumbnail images, into VOEvent packets. The argument here is that the existing reference mechanism isn't necessarily scalable to the volumes LSST needs: they don't want to have to field 2e6 * n_subscribers * n_references call-back requests every night, and, further, worry about the additional latency for event consumers (who, rather than immediately making a decision on whether to perform follow-up, now have to request additional information and wait for that to be delivered before they can proceed).
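The scale of the callback load in that formula is worth spelling out. The subscriber and reference counts below are illustrative guesses, not LSST figures:

```python
# Rough arithmetic for the 2e6 * n_subscribers * n_references callback
# estimate. n_subscribers and n_references are assumed values.
events_per_night = 2_000_000
n_subscribers = 100
n_references = 3

callbacks_per_night = events_per_night * n_subscribers * n_references
callbacks_per_second = callbacks_per_night / 86_400
# ~6e8 requests/night, ~7000/s averaged over 24 h -- the load in
# question if every subscriber dereferences every reference.
```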
> 
> While I see the argument, my gut is rather sceptical of the above: it seems to me that, rather than event authors fielding millions of callbacks, they can easily and (relatively…) cheaply make their content available on content-distribution networks for which that number of requests is fairly trivial (let S3 take the strain!), and rather focus on driving down latency by keeping the VOEvent packets themselves small and distribution networks fast.
> 
> Indeed, the above makes me wonder if there should be a standardized upper limit on the size of VOEvent messages being distributed over VTP. Of course, such a limit already exists in that we use a 32 bit integer to specify the size of the packet being transmitted. But should we mandate that any brokers signing up to the "VOEvent backbone" are obliged to carry messages up to that size? Or up to some other limit?
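The 32-bit size field mentioned above is simply a length prefix on each packet. A minimal sketch of that framing, assuming a big-endian unsigned integer (the helper names are mine; in practice the bytes would be read from and written to a socket):

```python
# Sketch of length-prefixed framing: a 4-byte big-endian integer
# carries the packet size ahead of the payload, implicitly capping
# any single message at 2**32 - 1 bytes (2**31 - 1 if read as signed).
import struct

def frame(payload: bytes) -> bytes:
    """Prefix a serialized event with its 4-byte big-endian length."""
    return struct.pack("!I", len(payload)) + payload

def unframe(data: bytes) -> bytes:
    """Recover the payload from a framed message."""
    (size,) = struct.unpack("!I", data[:4])
    return data[4 : 4 + size]

msg = frame(b"<voe:VOEvent ...>...</voe:VOEvent>")
```

Any mandated backbone limit would then be a policy choice below that hard ceiling, not a change to the wire format.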
> 
> Thoughts on any of the above?
> 
> John
