Multi-conference report: VO and SW

Norman Gray norman at astro.gla.ac.uk
Fri Dec 9 03:52:26 PST 2005


Doug,

On 2005 Dec 8, at 21.42, Doug Tody wrote:

> A given URI may resolve into multiple URLs pointing to multiple  
> instances.

That's the difference!  I had completely forgotten about the
one-to-many resolution.
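
(To make that concrete for myself: roughly the following toy Python
sketch, where the identifier, the replica URLs and the resolve()
function are all invented for illustration -- the point is only the
one-to-many shape of the mapping.)

    # Hypothetical registry: one abstract identifier, several concrete
    # copies, each with its own host and transport protocol.
    REPLICAS = {
        "ivo://example.org/survey/field42": [
            "http://archive.example.org/data/field42.fits",
            "gridftp://mirror.example.ac.uk/data/field42.fits",
        ],
    }

    def resolve(uri):
        """Return every URL known to hold a copy of this resource."""
        return REPLICAS.get(uri, [])

    print(resolve("ivo://example.org/survey/field42"))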

I'm working this through out loud here, Doug, for my benefit rather
than yours, as I imagine you've been through this already, and
because it might be useful (to me if no one else) to have the whole
argument in one place.

The underlying reason is that the resources in question are biggish.   
This breaks the assumptions of the best practice/architecture  
analysis in two independent ways:

1. The resources are replicated, and large enough that the client's  
location on the network matters.

2. The size means that HTTP is probably not the best transport
mechanism; GridFTP, BitTorrent, or something else would likely serve
better.

In both cases, the client can't be expected to make a good decision
about which source to use (because that will depend on details of the
national and intercontinental network, which will moreover change
over time), nor which protocol to use (which will also depend on the
network environment and change over time).  A local resolver can be
expected to know these things, either by discovery or configuration.
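
Put in code, the sort of thing I mean is roughly this (Python again;
the configuration keys and choose_url() are entirely made up -- the
point is just that the preference order lives in the local
configuration, not in the client):

    # Hypothetical site configuration: transports this site can use,
    # in order of preference, plus hosts regarded as 'near'.
    SITE_CONFIG = {
        "protocols": ["gridftp", "http"],
        "near_hosts": ["mirror.example.ac.uk"],
    }

    def choose_url(urls, config=SITE_CONFIG):
        """Pick the replica URL that local knowledge prefers."""
        def score(url):
            protocol = url.split("://", 1)[0]
            protocols = config["protocols"]
            rank = (protocols.index(protocol)
                    if protocol in protocols else len(protocols))
            near = any(host in url for host in config["near_hosts"])
            return (rank, 0 if near else 1)
        return min(urls, key=score)

    # e.g. choose_url(resolve("ivo://example.org/survey/field42"))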

What is broken is the single, almost hidden, assumption that the
transport issue is solved -- `use HTTP'.  Even if that were sorted
out, and everyone decided that GridFTP (say) was the single best
transport, the analysis also assumes that there is a single source --
a single DNS host -- for the resource; the replication in (1) means
that we're not assuming that.  That can also be worked around, by
having a single DNS name served by multiple geographically dispersed
IP addresses (Google is well known to do this), but this is
technically complicated and therefore fragile, and also centralised.
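
(For what it's worth, that one-name-many-addresses arrangement is
visible with nothing more exotic than a standard name lookup; the
host name below is just an example:)

    import socket

    # A single DNS name may resolve to several IP addresses; which one
    # a given client ends up talking to is largely out of its hands.
    addresses = {info[4][0]
                 for info in socket.getaddrinfo("www.google.com", 80)}
    print(addresses)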

Even if the TAG (the W3C Technical Architecture Group, authors of the
Web Architecture document) acknowledged the first, HTTP, point, their
response to this second point would be to say that HTTP caching
already provides, implicitly, the replication in (1).  One of the
good features of HTTP is that it is stateless, which means that it is
very friendly to caches and proxies, so you _can_ have a simple
single source, and just rely on caches to speed things up -- don't
try to outsmart the network!  But the sizes undermine that argument,
too: few places have the resources to cache lots of multi-GB files,
and if regional or national centres were set up which could handle
that, it would require configuration cleverness to use them.  Thus
the replication is essentially a type of preemptive caching.

On the other hand: I suppose there is still one case for using HTTP  
with a (nominally) single source, along with a smart local proxy,  
which spots when you're requesting a resource/source it knows about,  
and satisfies those requests using (transparently) a separate network  
of replicas and protocols.  That way, the client gets all the  
simplicity, predictability and API advantages of using HTTP naively  
(because that would work fine over a local network).  The proxy is  
effectively acting as a resolver, but the client is interacting with  
it using an extremely simple and possibly built-in protocol/API, and  
so doesn't have to care.  Is there mileage in that?
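
Something along these lines is what I mean (a very rough Python
sketch: the KNOWN map, the replica paths and the ResolvingProxy class
are all invented, and a real proxy would presumably fetch the replica
over GridFTP or whatever rather than from local disk):

    from http.server import BaseHTTPRequestHandler, HTTPServer
    import urllib.request

    # Hypothetical map from public URLs to locally held replica copies.
    KNOWN = {
        "http://archive.example.org/data/field42.fits":
            "/data/replicas/field42.fits",
    }

    class ResolvingProxy(BaseHTTPRequestHandler):
        def do_GET(self):
            url = self.path        # proxied requests carry the full URL
            local = KNOWN.get(url)
            if local is not None:
                # A resource we know about: serve it from the replica,
                # transparently to the client.
                with open(local, "rb") as f:
                    body = f.read()
            else:
                # Anything else goes to the ordinary origin server.
                with urllib.request.urlopen(url) as response:
                    body = response.read()
            self.send_response(200)
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("localhost", 8080), ResolvingProxy).serve_forever()

Point the client's HTTP proxy setting at localhost:8080 and it never
needs to know that the replica network exists at all.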

...but I think I'm going on at too much length now, so I'll shut up!

All the best,

Norman


-- 
----------------------------------------------------------------
Norman Gray  /  http://nxg.me.uk
eurovotech.org  /  University of Leicester, UK




