Hi Russ,

I have expanded on a few points below - but chopping a fair bit of intervening text out to make it easier to follow.

On 23 May 2023, at 17:28, Russ Allbery <eagle@eyrie.org> wrote:

> Paul Harrison <paul.harrison@manchester.ac.uk> writes:
>
>> We were encouraged not to discuss the specifics of
>> https://sqr-063.lsst.io/ at the interop, but there was one general topic
>> that should have been discussed, because as a result of that note there
>> is a perception that the “VO is insecure” - this is a dangerous
>> reputational message to be circulating that I believe is based on an
>> incorrect analysis and is only true in the sense that “the Internet is
>> insecure”.
>
> I want to be very clear what SQR-063 is: it's a collection of informal
> notes from someone (me) new to the IVOA protocols of things I noticed
> while writing an implementation. It's not anything more than that, and in
> particular it was never intended to be a formal security analysis. The
> heading in SQR-063 is "Security concerns" quite intentionally; these are
> just things I was concerned about over a year ago when I wrote a SODA
> implementation. Concerns are not vulnerabilities and may not even be
> correct (as indeed one was not).
>
> I am in absolutely no way asserting that "VO is insecure" or anything like
> that. I have serious professional objections as a security person to that
> sort of statement about any protocol.

I can see that you get the subtleties here, but unfortunately I think that there is a danger that others are interpreting your document in a more absolutist fashion - I am pretty sure that I did hear the phrase “the VO is insecure” during the https://wiki.ivoa.net/twiki/bin/view/IVOA/InterOpMay2023GWS session, though it might have only been in a conditional clause.

>> * Secondly the whole point of the VO was to make data interoperable in a
>>   public way.
>
> A quick aside on this, since Rubin has a couple of constraints here. My
> background isn't in astronomy, so I don't know how unusual they are.
>
> * The US taxpayers via the US government have decided that our data can't
>   be public for some time after it's gathered.

Having a proprietary period on data is a very common (majority?) situation amongst observatories. My comment was that VO protocols were originally designed for the “public archive” situation after that initial period. I do think that developments like science platforms and FAIR mean that it should be a principle that astronomers do not have to work differently to access data in the “proprietary” or “archive” situations.
Having said that, I think that my first principle - that security is orthogonal to the VO protocol specifications - should still be an aim, and I believe it is achievable for the technical reason I gave about the parts of the HTTP protocol in which security concerns are implemented.

>> 3.1 - This whole argument is made from a very 'browser client only'
>> perspective. As stated above in the second general principle of course
>> VO protocols can be called from anywhere, so they will be inherently
>> susceptible to “CSRF attacks” - that is normal usage. Of course it would
>> be a bit surprising if when clicking on a picture of a cute cat an
>> astronomer ran a TAP query, but when using Topcat they do want to be
>> able to query multiple servers in different locations.
>
> I'm not sure that I successfully communicated the point of this section if
> you're thinking of Topcat requests as cross-site requests. The discussion
> is about browsers because cross-site requests are a concept specific to
> web browsers. They don't normally apply to non-browser clients; no Topcat
> request would be a cross-site request in the normal HTTP sense, at least
> unless I'm wildly wrong about how Topcat works.

I was being facetious with the language, but the real point is that a VO service that is co-hosted with a web portal for an observatory should not only be expecting calls from the portal.

> The background here for both 3.1 and 3.2 is that we plan to enable, via
> IVOA protocols or extensions that are faithful to the protocols, various
> operations that may be quite expensive (eight-hour TAP queries, large
> batch image processing) or destructive (user table deletion). I don't
> want it to be possible to trigger such actions via cross-site requests,
> mostly because of the risk of denial of service attacks. Those don't have
> to be malicious and in fact often aren't; think of, for instance, web
> crawlers that don't honor robots.txt (sadly more common than any of us
> would like), or some JavaScript-triggered request that gets into a refresh
> loop and spams requests.

I think that the only foolproof way to prevent this is to always require authentication - that might be becoming more acceptable to the VO world nowadays.
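To make the concern concrete (and to tie into your point about “simple requests” just below): here, roughly, is all it takes for a script on a completely unrelated web page to start an expensive job on an open TAP service. The endpoint and query are invented, but the browser mechanics are real - because the POST body is ordinary form encoding, the browser sends it cross-origin without asking the service first:

    // Hypothetical illustration only - the endpoint, table and query are
    // made up.  Script running on some unrelated web page (not the
    // observatory portal) kicking off an asynchronous TAP query.
    const params = new URLSearchParams({
      LANG: "ADQL",
      QUERY: "SELECT * FROM big.expensive_table", // the eight-hour query
      PHASE: "RUN",
    });

    // application/x-www-form-urlencoded is one of the CORS "simple" content
    // types, so the browser sends this cross-origin POST with no preflight.
    // The calling page can never read the response, but the job is still
    // created and run on the server - which is the denial-of-service worry.
    fetch("https://data.example-observatory.org/tap/async", {
      method: "POST",
      headers: { "Content-Type": "application/x-www-form-urlencoded" },
      body: params.toString(),
    });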
> The most common way that web services disallow cross-site requests in
> these situations is that the protocol uses PUT, PATCH, or DELETE, which
> inherently force a JavaScript single-origin policy, or uses POST with body
> content type that isn't one of the white-listed simple request types.
> However, the IVOA protocols use GET or POST with
> application/x-www-form-urlencoded, both of which are classified as simple
> requests, so that method of preventing cross-site requests isn't
> available.

There is quite a history to UWS - my preference was not to have had the GET or application/x-www-form-urlencoded parameters; however, the consensus was to include them. I still think that security is not a UWS concern, though - basically because there is no acceptable CSRF prevention mechanism that alters the POST body (as you set out below).

> The next most common way to disallow cross-site requests is to require
> that all such requests contain some random token in both the form body and
> a cookie. (OWASP calls this the "double submit cookie" technique, which I
> am not fond of as a name, but standard terminology is good.) This works
> great for things intended to be used via a browser, but as you point out
> IVOA protocols mostly aren't, so naively using this would break Topcat and
> similar clients that wouldn't know how to set the cookie (nor should
> they). Similarly, the "synchronizer token pattern" requires reading a
> token from the server and reflecting it in the form submission, but the
> IVOA UWS protocol has no way to tell a client that's happening and
> existing clients don't know how to do it.
>
> The variant that's often used for protocols where most requests are
> expected to be from non-browser clients is the custom request header
> approach, where all state-changing requests are required to include a
> header containing a token that was obtained from an earlier API call.
> However, this requires the client understand this protocol well enough to
> include that header, so again, we're concerned about breaking existing
> clients.
>
> One simple variation of the custom request header approach that we could
> use here is to require every request contain an Authorization header.
> This forces single-origin policy and thus prevents CSRF in exactly the
> same way the custom header approach does, and it works fine with Topcat,
> which already knows how to send Authorization headers and will be sending
> them in the normal case for Rubin during the period where we have to
> require authentication for data access. However, the drawback of this
> approach is that it prohibits using simple forms to make UWS requests; you
> *have* to use a client like Topcat (or at least curl). This makes it
> harder to make ad hoc UIs, although that may be an acceptable price to
> pay.
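Just to illustrate, from the browser side, why the Authorization-header requirement does the job: the moment a script has to attach an Authorization header the request is no longer “simple”, so the browser sends an OPTIONS preflight first and, unless the service explicitly allows the calling origin, the POST itself is never delivered. A rough sketch against the same invented endpoint as above:

    // Hypothetical illustration, same invented endpoint as above.  The
    // Authorization header makes this a non-simple request: the browser
    // sends an OPTIONS preflight first, and unless the service opts in to
    // the calling origin the POST below never reaches it.
    fetch("https://data.example-observatory.org/tap/async", {
      method: "POST",
      headers: {
        "Content-Type": "application/x-www-form-urlencoded",
        Authorization: "Bearer <token obtained from whatever SSO flow>",
      },
      body: new URLSearchParams({
        LANG: "ADQL",
        QUERY: "SELECT * FROM big.expensive_table",
      }).toString(),
    });

Non-browser clients such as Topcat or curl are untouched by any of this - they simply add the header and are not subject to a single-origin policy.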
> I'm not sure the best way to tackle this, but I can think of a few
> possibilities. Some of these conflict with each other, so we'd also need
> a way to clearly communicate (presumably via the registry) which is in
> use.
>
> * Obviously protocols using bodies in any format other than text/plain,
>   multipart/form-data, or application/x-www-form-urlencoded force the
>   JavaScript single-origin policy and avoid nearly all CSRF problems as
>   long as one enforces, on the server, the presence of a correct
>   Content-Type header in the request. But of course that's a significant
>   protocol change, and while it may have merits for other reasons, I
>   wouldn't advocate a change of request serialization protocol solely for
>   CSRF protection. (This also prohibits using simple forms.)
>
> * In many cases, it may make sense for a service to require an
>   Authorization header and not allow cookie-authenticated browser access
>   at all. I believe this case may already be covered by IVOA SSO
>   protocols, and the only thing preventing us from using that approach was
>   having an SSO profile for bearer tokens, which I believe is being worked
>   on.

I think that the conclusion is that the only measures that can be used to try to mitigate CSRF attacks are header-based tokens, and generating and transporting these securely is basically equivalent to the SSO protocols.

However, I think that it is unlikely to be acceptable for a long time that an astronomer is required to log on to the VO, mainly because a globally acceptable identity federator is difficult to agree on.

I think that an observatory that wants to use VO protocols to distribute its proprietary data will have to do that on a different authenticated endpoint to the “open” data. This still leaves the problem of trying to mitigate CSRF on the open data endpoints.

> * Currently, my understanding is that DALI *requires* a compliant server
>   to support both GET and POST. While both GET and POST requests have the
>   same theoretical security properties, in practice GET requests are much
>   more likely to produce those unintentional denial of service attacks I'm
>   the most worried about, and there are some cases where I don't think we
>   will have any obvious need to support GET. In those cases, it would be
>   nice to have a way to say "this service is POST-only" so that clients
>   will know to not attempt GET. (This was the point of 3.2; the entire
>   request there is just that servers not be *required* to support GET to
>   be standard-compliant.)

This might be the easiest area to try to push on - though it should be noted that there are consequences for other things like DataLink. There are mechanisms there for parameter passing outside the URL query parameters, which might actually lower the level of desire for everything to be captured in the query string - the DataLink could point at the equivalent POST.

It might even be acceptable to return a DataLink response for any IVOA protocol…

Paul.
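P.S. In case a concrete sketch helps: below is very roughly what the server-side half of your first two bullets could look like for an open, state-changing endpoint - refuse any POST that a browser could have delivered cross-site without a CORS preflight. This is only an illustration (in TypeScript on Node), not a proposal, and certainly not a description of any existing service.

    // Very rough sketch of the idea in your first two bullets above - reject
    // any POST that could have been a cross-site "simple" request, i.e. one
    // with neither an Authorization header nor a non-simple Content-Type.
    import * as http from "node:http";

    // The content types a browser may send cross-origin with no preflight.
    const SIMPLE_TYPES = [
      "application/x-www-form-urlencoded",
      "multipart/form-data",
      "text/plain",
    ];

    const server = http.createServer((req, res) => {
      const type = (req.headers["content-type"] ?? "")
        .split(";")[0]
        .trim()
        .toLowerCase();

      // A browser will not deliver either of these cross-origin without
      // first asking the service via an OPTIONS preflight.
      const preflightProtected =
        req.headers["authorization"] !== undefined ||
        (type !== "" && !SIMPLE_TYPES.includes(type));

      if (req.method === "POST" && !preflightProtected) {
        // Could have been an auto-submitted form, a script stuck in a
        // refresh loop, a crawler... so refuse to act on it.
        res.writeHead(403, { "content-type": "text/plain" });
        res.end("POST requests here must carry an Authorization header\n");
        return;
      }

      // ... hand the request on to the real UWS/TAP job machinery here ...
      // (GET is not guarded: a GET is always a "simple" request, so on an
      // open endpoint there is nothing comparable to check.)
      res.writeHead(200, { "content-type": "text/plain" });
      res.end("ok\n");
    });

    server.listen(8080);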