Web Profile and security
Mark Taylor
m.b.taylor at bristol.ac.uk
Fri Dec 17 04:24:32 PST 2010
Ray,
thanks for setting out the issues so clearly. I can illuminate a
couple of them.
On Thu, 16 Dec 2010, Ray Plante wrote:
> Let me back up a bit. There are three basic security questions we need to ask
> ourselves:
>
> 1. How can malicious code gain trusted access?
> 2. What can malicious code do once it gains trusted access?
> 3. What can we do to prevent it?
>
> My concerns surrounding #1 are as follows: the window pops up and asks for
> confirmation to connect to the hub. The window identifies the source of the
> web samp client application (by its IP name); based on this the user decides
> whether to trust the client and allow the connection. There are two common
> ways to get a user to trust malicious code:
>
> o spoof the identity by claiming to be who it isn't. This is often
> done via man-in-the-middle attacks, including DNS spoofing.
>
> o phish the identity, e.g. use a domain name that is nearly identical
> to a trusted one.
>
> It is human nature for folks, when repeatedly presented with the same hoop on
> the way to getting what they want, to just jump through it without care. I would argue
> that in the limit of our success in the VO, particularly with a Web-SAMP
> profile, the confirmation window is not a security feature (though it may be
> useful in other ways). In that limit (imagine a popular web-samp enabled EO
> site), malicious access will occur.
First, the popup window may not even be able to report the origin
of the web application requesting registration. Of the three
sandbox-busting technologies that I've suggested we use, only one
(CORS = Cross-Origin Resource Sharing) forwards this information in
outgoing cross-domain requests, i.e. to the hub. The others (used by
Flash, Silverlight, Java and possibly others) provide no way for the
hub to know where the web app was served from. We could in principle
restrict the Web Profile to use only CORS, but this would cut out
many (maybe most) web app/browser combinations - not good.
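To make the CORS point concrete, here is a rough Java sketch of a
hypothetical hub-side HTTP handler (class and variable names are mine,
not taken from any existing hub implementation). Only a CORS request
carries an Origin header saying where the page was served from, so only
in that case does the hub have an origin it could show in the popup:

    import com.sun.net.httpserver.HttpExchange;
    import com.sun.net.httpserver.HttpHandler;
    import java.io.IOException;

    // Hypothetical Web Profile endpoint handler.
    public class WebHubHandler implements HttpHandler {
        public void handle(HttpExchange exchange) throws IOException {
            // Present for CORS (XMLHttpRequest) requests; absent for
            // requests relayed via Flash/Silverlight/Java helpers.
            String origin = exchange.getRequestHeaders().getFirst("Origin");
            if (origin != null) {
                // Echo the origin back so the browser lets the page read the reply.
                exchange.getResponseHeaders()
                        .set("Access-Control-Allow-Origin", origin);
            }
            String report = (origin != null) ? origin : "<origin unknown>";
            byte[] body = ("origin=" + report).getBytes("UTF-8");
            exchange.sendResponseHeaders(200, body.length);
            exchange.getResponseBody().write(body);
            exchange.close();
        }
    }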
The only thing the popup window can report in general is a name
(arbitrary string) that the web app provides for itself. There is
absolutely no way to prevent a web app from reporting its name as
something misleading.
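By way of a toy illustration (my code here, not part of the proposal), a
confirmation popup built from the client-declared name shows why that
name proves nothing - the page is free to declare itself as "TOPCAT" or
anything else plausible-looking:

    import javax.swing.JOptionPane;

    // Toy illustration: the hub can only display whatever name the
    // web application chooses to declare for itself.
    public class RegistrationPrompt {
        public static boolean confirmRegistration(String declaredName) {
            String msg = "The web application \"" + declaredName + "\""
                       + " is requesting SAMP hub registration.\n"
                       + "Only accept if you have just done something"
                       + " in your browser that should cause this.";
            int answer = JOptionPane.showConfirmDialog(
                null, msg, "SAMP Hub Registration",
                JOptionPane.YES_NO_OPTION, JOptionPane.WARNING_MESSAGE);
            return answer == JOptionPane.YES_OPTION;
        }

        public static void main(String[] args) {
            // A malicious page can declare any name it likes.
            confirmRegistration("TOPCAT");
        }
    }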
The main security, such as it is, comes from the fact that the
user is only going to be expecting a registration request popup
when he has just done something to cause one, i.e. opened, or clicked
a suitable button on, a web page that looks like it's going to need
SAMP access. If the registration happens at any other time
(e.g. on loading a page from evilhacker.com), the user will
be suspicious and hopefully reject the request. In other words
the URL which gives the user a (not necessarily reliable) clue about
whether to trust the registration request is not in the popup,
but in the browser window where activity was just taking place.
I could change the text in the popup box to clarify that it's only
a good idea to accept if you've just been doing something relevant
in the web browser. I can't think of a way that a malicious app
could know when a legitimate app is about to make a registration
request, so that it could send its own request at the same time
and have it mistaken for the legitimate one - can anyone?
This arrangement is certainly open to phishing - a malicious
author can write a web page that looks like a VO tool but performs
malicious activities, and publicise it to a VO audience.
At the current state of the VO, I don't think this is very likely.
In the limit of our success, it could be.
Since the hub SHOULD only accept connections which come from the
localhost (presumably, though not verifiably, from a process
running a web browser), I don't *think* that spoofing is an issue.
If you think it is, can you elaborate?
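For reference, the localhost rule amounts to no more than a check like
this (a sketch with hypothetical names); note that it says nothing about
which local process actually made the connection:

    import java.net.InetAddress;
    import java.net.InetSocketAddress;

    // Sketch of the localhost-only rule: reject any connection whose
    // remote address is not a loopback address.
    public class LocalhostFilter {
        public static boolean isAllowed(InetSocketAddress remote) {
            InetAddress addr = remote.getAddress();
            return addr != null && addr.isLoopbackAddress();
        }
    }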
> As for #2, I'm less familiar with SAMP and what the common mtypes can do. I
> see that clients can request other clients to load files via a URL; e.g.
> TOPCAT may instruct a web client to read a file from local disk (yes?). Can a
> web client tell another web client to read from an arbitrary file on disk
> (like /etc/passwd)?
>
> Part of the danger, of course, is that mtypes are extensible, and new ones can
> be defined to do anything on the local system. Now developers should apply
> prudence when defining new mtypes (because there could be rogue desktop apps,
> albeit being harder to deliver to users); nevertheless, there is no such
> control implied when Web-SAMP is in use.
>
> Another general danger has to do with the hole into the local system that the
> Web-SAMP hub opens to the outside world. Even if the hub, say, filters
> dangerous requests, its implementation also needs to worry about illegal uses of
> the protocol to break out of the hub's "sandbox" (the analog to SQL-injection
> attacks or overflow attacks) and gain the power of the user.
Your fears in this area are quite justified. It will be possible for
a registered web client to read files on the local filesystem with
the privileges of the user running the hub, since the hub provides
a proxying service for arbitrary URLs, including file-protocol URLs
(see sec 1.1.4 and 2.3 of my proposal
http://www.star.bristol.ac.uk/~mbt/websamp/websamp.html).
I have thought about restricting this in some way (e.g. only provide
proxying for a file-protocol URL if it has already appeared in the
content of a SAMP message); we could investigate this further,
but I suspect it would be difficult to make such a scheme watertight.
Although there is no obvious way using existing MTypes to do other
dangerous things like deleting user files, one can imagine tools
which provide scripting interfaces via SAMP which might allow
free use of user privileges. I think it's safest to assume that
once a client is registered with SAMP it can in principle do
anything that the user can.
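To illustrate the restriction idea floated above (only proxy a
file-protocol URL if it has already appeared in the content of a SAMP
message), a hub might keep a whitelist along the following lines. This
is a hypothetical sketch of mine, and the "watertight" worry is whether
such a whitelist can be populated completely, and only from genuine
message traffic:

    import java.net.URL;
    import java.util.Collections;
    import java.util.HashSet;
    import java.util.Set;

    // Sketch: refuse to proxy file-protocol URLs unless they have
    // previously been seen in the content of a SAMP message.
    public class UrlProxyPolicy {
        private final Set<String> seenFileUrls =
            Collections.synchronizedSet(new HashSet<String>());

        /** Called whenever a file-protocol URL appears in a message. */
        public void noteMessageUrl(URL url) {
            if ("file".equalsIgnoreCase(url.getProtocol())) {
                seenFileUrls.add(url.toString());
            }
        }

        /** Decides whether the proxy endpoint may dereference this URL. */
        public boolean isProxyAllowed(URL url) {
            if ("file".equalsIgnoreCase(url.getProtocol())) {
                return seenFileUrls.contains(url.toString());
            }
            return true;   // http/https etc. left to other checks
        }
    }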
> For #3, we need to look into what can be done to prevent #1 and #2. Perhaps
> the danger can be mitigated with extra rules for handling client messages
> (presumably in the hub, since client applications already exist). However, it
> seems to me the weak point is in the trust established when the connection is
> made.
I agree. The assumption has always been that the hub forwards messages
with no comprehension of their content, and it would be best to keep
it that way.
> As I said above, certificates allow one to validate that a client (or a
> server) is who they say they are. It works because only the owner of the
> identity has the private key associated with the signature on the certificate.
> HTTPS provides a means for a client to present such a certificate. To make
> the confirmation window effective against the attacks in #1, we need a
> certificate identifying the provider of the web client to be provided to the
> hub. This has a few challenges:
> o The JavaScript language does not support (to my knowledge)
> presenting a cert with an HTTPS URL.
> o We have to get the cert plus a private key to the JS VM in the
> browser in a secure way (this is known as credential delegation)
>
> At the moment, I don't know exactly how to address these; however, I believe
> that others in the web community have been grappling with the same problem and
> solutions are in the works. (E.g. modern browsers are picking up support for
> extensions to JavaScript that can do more.) Of course, X.509 certificates are
> not the only way to accomplish the validation. Perhaps the answer is
> incorporating credential delegation or authentication challenges into the hub.
The trouble with relying on new JavaScript extensions is that we restrict
usage to a small (though admittedly hopefully growing) proportion of
client/browser combinations. The original proposal would work on
almost all. As to whether this could work at all: I don't understand
how credential delegation works, so I'll go and do some reading.
> My above stabs at the three security questions are most definitely incomplete
> and need more work. Nevertheless, my answer to #1 suggests to me that these
> should not be ignored.
I'll repeat what I said in my presentation (see also sec 1.2 of
http://www.star.bristol.ac.uk/~mbt/websamp/websamp.html),
which is that *in practice*, my proposal doesn't present any more
serious security issues than the current best/only way of
getting web pages to do SAMP, namely signed Java applets.
My reasoning for this is that although signed Java
applets use certificates, in practice VO users (myself included, I'm
afraid, though I'm starting to learn) don't look at the details of
these certificates or understand how they work. By way of example,
the WebStart version of TOPCAT has a certificate which is self-signed,
untrusted, unverified and expired (as an added bonus, it's signed by
"Peter W. Draper", a name unfamiliar to most users - though this is
irrelevant to the security since a self-signed cert can use any
name it wants). Lots of people have used this, but I've never had
anybody query it. Aladin appears to be in a similar situation.
I am prepared to consider, however, that this situation might change
if *both* the VO becomes higher profile, and hence a target for
malicious authors, *and* parts of the VO (e.g. the VAO) get serious
about providing VO-certified identities, and service providers
get serious about using them. If most or all of the certifiable
services that VO users use do in fact have certificates that prove
they are plausibly trustworthy as far as VO users are concerned,
a culture or habit of trusting only such services would probably arise.
That certainly hasn't happened yet, but it might.
Given all that, we need to work out where to go from here. I have a
few thoughts, but this message is long enough already. I would
welcome contributions from any and all who have an opinion about
whether this is something that we need to address actively, and if
so how.
Mark
--
Mark Taylor Astronomical Programmer Physics, Bristol University, UK
m.b.taylor at bris.ac.uk +44-117-928-8776 http://www.star.bris.ac.uk/~mbt/