Draft CORS guidance for an IVOA JSON protocol

Markus Demleitner msdemlei at ari.uni-heidelberg.de
Fri May 31 11:06:52 CEST 2024


Dear Russ,

On Wed, May 29, 2024 at 08:03:52PM -0700, Russ Allbery via grid wrote:
> Markus Demleitner via grid <grid at ivoa.net> writes:
> >     <interface xsi:type="vs:JSONPost">
> >       <!-- CSRF-hardened new-fangled thing -->
> >       <accessURL>http://example.org/images/sia.xml</accessURL>
> >     </interface>
> >   </capability>
>
> > I don't forsee much trouble there *if* clients have a good way
> > to know when to request which type of interface.
>
> I think you meant the URL of that second entry to be sia.json?  If so,
> then yes, this sort of thing is what I had in mind.

Yes, the sia.xml in the second URI was an oversight.

[Digression]
> HTTP Basic Auth *with passwords* is less secure, but only because
> passwords are not very secure (and also annoying to deal with for a bunch
> of other reasons, which is why people generally do some variation of OAuth
> or SAML these days for authentication of humans).  But there is no
> requirement that the "password" field in HTTP Basic Auth be a password.

Yeah, ok, so you *can* use HTTP Basic in ways that don't sling
around full credentials in (essentially) clear text with every
request; it's perhaps not immediately obvious in the associated
browser UIs, but I grant you that these by and large suck in any
case.

So, let me qualify my original statement to something like: "If you
transmit your full user credentials with every request, you probably
are not *very* concerned about security anyway."  Which, mind you,
I'd consider an attitude very appropriate for the no-personal-data,
no-commercial-value world most of us have the privilege to live in
professionally.  [/Digression]
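To make the "password need not be a password" point concrete, here
is a minimal sketch of building an HTTP Basic Authorization header
around a short-lived token; the user name, token value, and helper
function are made up for illustration, not part of any IVOA
standard:

```python
import base64

def basic_auth_header(username: str, secret: str) -> str:
    # RFC 7617: the header carries base64("username:secret").  Nothing
    # requires the second field to be a long-lived password; a
    # short-lived token issued by an auth service limits what an
    # eavesdropper gains from any single request.
    raw = f"{username}:{secret}".encode("utf-8")
    return "Basic " + base64.b64encode(raw).decode("ascii")

# Hypothetical token; in practice it would come from a login endpoint.
print(basic_auth_header("alice", "short-lived-token"))
```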

> > Let me suggest that this observation might point the way to a nice
> > compromise between not wanting to break everything that works relatively
> > nicely now and wanting to plug gaping CSRF holes in our protocols:
> > Define a way to derive JSON-posting interfaces from our "normal"
> > form-posting interfaces and then tell people: "If you want to do
> > SSO-compliant Auth, write a vs:JSONPost interface rather than a
> > vs:ParamHTTP one".
>
> If the only goal were addressing CSRF issues, this may make sense, but I
> don't think this approach would provide several of the other, more
> significant motivating benefits of this work.

My conclusion from this thread would, I think, be: Dear P3T, perhaps
you can at least evaluate whether the blast radius of what you're
doing can be limited to authenticated services, and whether blowing
up anything beyond that would actually be beneficial enough to
warrant the destruction[1].  From what transpired at the Interop,
you don't want to do away with VOTable, and if we can save
form-posting in this way, perhaps we don't need to tear anything
apart at all?
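For reference, the CSRF hardening of a JSON-posting interface comes
from the Fetch standard's CORS rules: a cross-origin POST with
Content-Type: application/json is not a "simple request" and so
forces a preflight.  A small sketch of that rule (the function name
is mine):

```python
# Content types that qualify a cross-origin POST as a CORS "simple
# request" (no preflight) under the WHATWG Fetch standard.
SIMPLE_CONTENT_TYPES = {
    "application/x-www-form-urlencoded",
    "multipart/form-data",
    "text/plain",
}

def needs_preflight(method: str, content_type: str) -> bool:
    # Anything beyond GET/HEAD/POST, or a POST with a non-simple
    # content type (e.g. application/json), triggers an OPTIONS
    # preflight -- which is what stops a plain HTML form on a hostile
    # site from silently driving the service.
    if method.upper() not in {"GET", "HEAD", "POST"}:
        return True
    media_type = content_type.split(";")[0].strip().lower()
    return media_type not in SIMPLE_CONTENT_TYPES

print(needs_preflight("POST", "application/json"))                   # True
print(needs_preflight("POST", "application/x-www-form-urlencoded"))  # False
```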

> The downside to this CSRF approach that this means such services cannot be
> driven by HTML forms, with possibly the special exception of HTML forms
> hosted in the same origin as the service and that use HTTP Basic Auth
> (although that browser UI is not great and I wouldn't recommend it).

Auth combined with (third-party, on top of that) simple web forms
is probably not a mixture I'd care about at all.

[another Digression]
> features that are supported by all of the browsers that you care about.
> As one very obvious example, many web sites require TLS and are simply
> inaccessible to any browser that doesn't implement it.

... which is one of the reasons I'm arguing against forced HTTPS
redirects unless you have a strong reason (like credentials
protection) for them.  The right way to do this, and one that
doesn't lock out reasonably secure (because JavaScript-free)
clients, is evaluating the Upgrade-Insecure-Requests header, which
keeps this under client control:
<http://blog.tfiu.de/foced-https-redirects-considered-harmful.html>
[/another Digression]
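The server-side logic that argument asks for fits in a few lines;
this is a sketch under my own naming, not code taken from the linked
post:

```python
def should_redirect_to_https(headers: dict, over_tls: bool) -> bool:
    # Redirect to https only when the client itself signals, via the
    # Upgrade-Insecure-Requests header (W3C Upgrade Insecure Requests
    # spec), that it prefers the secure scheme.  Plain-http clients
    # that never sent the header keep working unmolested.
    if over_tls:
        return False
    lowered = {k.lower(): v for k, v in headers.items()}
    return lowered.get("upgrade-insecure-requests") == "1"

print(should_redirect_to_https({"Upgrade-Insecure-Requests": "1"}, False))  # True
print(should_redirect_to_https({}, False))                                  # False
```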

Finally, perhaps as a segue into a new, non-security related thread:

> use XML is fairly deeply entwined into the standards.  I think one of the
> big pieces of this work is figuring out how to separate the encoding from
> the data model so that it's easier to add new encodings in the future.
> JSON won't be the last.

It would take *quite* a bit of effort to convince me that layering
the protocols in this way is realistically feasible or even a good
idea.  You see, SOAP was bad in many ways, but the fact that its
authors seriously considered SOAP via SMTP made the standard
particularly unreadable.  And I think this kind of complication is
almost inevitable as you introduce extra abstraction layers.  But I
don't think that's something we can settle by discussion: try
writing a standard with such an abstraction (Datalink, as I said,
would be my obvious choice), and then let's see if it remains
understandable.

Datalink would also be a good choice because I think the current
standard is not very accessible to start with, so there is some
low-hanging fruit in improving its readability in the process of
updating it.

            -- Markus


[1] Rant in a footnote; if anyone wants to reply to that, please do
so in a new thread, as it's unrelated to *our protocols'* security
properties:  Of course the cost-benefit calculation looks
particularly dire in my book because I have not seen convincing
advantages in the current bleeding-edge industrial tooling. Instead,
I'm mainly seeing a mess of curlbashware that tells you to pull 500
GB of random executable stuff from all over the net just to get
started, and then still asks you to make use of about a dozen
commercial network services with unclear revenue streams (i.e.,
presumably surveillance economy in Shoshana Zuboff's words).  Oh, and
of course deploying the stuff usually asks you to run a Kubernetes
cluster where, with less tooling, a vintage-2008 Raspi would have
done.  But again, take that as a rant; if what comes out of the P3T
actually provides benefits I can reap while running Debian stable, I'll
publicly apologise for this footnote.
