Re: [WEB SECURITY] countermeasure against attacks through HTML shared files
Hello,

I have revised the paper based on the comments and put the revised version on the Pomcor site, at http://www.pomcor.com/whitepapers/file_sharing_security.pdf (look for a revision date of November 10; there was an earlier version). The changes include an improvement based on the last post by Bil Corry (see Section 5.1).

Thanks for all the comments!

Francisco
Re: [WEB SECURITY] countermeasure against attacks through HTML shared files
Bil,

> > If the browser displayed the file and the user takes no precautions,
> > the file should be in the browser's cache.
>
> Yngve Pettersen of Opera is working on a proposed browser
> specification for "Context Cache" that would allow cached items to
> expire/be discarded immediately upon logging out:
>
> http://my.opera.com/yngve/blog/2007/02/27/introducing-cache-contexts-or-why-the
> http://www.ietf.org/internet-drafts/draft-pettersen-cache-context-03.txt

An interesting proposal.

> I know he's looking for feedback on the idea. And of course, all the
> new "stealth" modes being built into browsers would also help (they do
> have use beyond surfing adult-content).
>
> > To tell you the truth, the original motivation was just that it's
> > not a good idea to have a valid authentication token (the file
> > retrieval session ID) embedded in a URL.
>
> Sure, it can show up in logs, referer, etc. If you don't mind
> JavaScript, it's easy enough to use JavaScript to submit a POST.
>
> > There is also a more exotic scenario: the attacker reads the
> > authentication token from the user's computer display, as it is
> > shown in the address box of the browser. These days, with a camera
> > phone, the attacker does not have to be James Bond to pull that off.
>
> You could insert as the first param random junk that's 100 characters
> long that will "push" the real token off-screen.

Yes.

> > In any case, I do think now that the file retrieval session ID must
> > remain valid while the login session is valid, in case the browser
> > issues multiple requests for the same file.
>
> No, the thing to do here is a one-time, limited-duration key. When the
> browser first hits the download page using the key, the user is
> assigned an internal session by the file download site, and the
> one-time key is voided. No replay attacks. The internal session is
> used for all subsequent requests. And the key is limited in duration
> (maybe a minute), so if the user's browser dies or can't reach the
> download site, the key expires after the time limit.

Yes, good idea. (I assume that what you mean by "key" is what I called
"file retrieval session ID", that the "internal session" is for the
purpose of authenticating subsequent requests ***for the same file***,
and that "the user is assigned an internal session by the download
site" means that such an internal session record is created on the
server side, and a cookie referring to the internal session is set in
the user's browser; this cookie would be specific to the file, and it
would be used in addition to the cookie that authenticates application
pages and the cookie that authenticates standard-URL requests for user
files.)

> > Actually, I think there may be another case where a browser may
> > issue multiple requests (besides the case where a large file
> > download is interrupted), namely to implement sniffing. A browser
> > may download an initial portion of the file to determine its type,
> > and then download the rest. It's not clear to me why a second
> > request would be needed to download the rest, rather than just
> > continuing the download; but I think I remember seeing some version
> > of IE issue a second request when downloading MS Office documents.
>
> Switching from the one-time key to an internal session ID (as
> described above) solves these issues.

Yes. (Same assumptions.)

Thanks!

Francisco
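For what it's worth, the one-time-key scheme described above could be
sketched roughly as follows. This is only a minimal in-memory
illustration of the idea, not anything from the paper; the names
`issue_key` and `redeem_key`, and the dictionaries standing in for
server-side records, are my own invention:

```python
import secrets
import time

KEY_TTL = 60  # seconds; the "limited duration (maybe a minute)"

one_time_keys = {}  # key -> (filepath, expiry time)
sessions = {}       # internal session ID -> filepath

def issue_key(filepath):
    """Create a one-time, limited-duration download key for a file."""
    key = secrets.token_urlsafe(32)
    one_time_keys[key] = (filepath, time.time() + KEY_TTL)
    return key

def redeem_key(key):
    """First request: void the key and open an internal session.

    Returns a session ID usable for all subsequent requests for the
    same file, or None if the key is unknown, expired, or replayed.
    """
    entry = one_time_keys.pop(key, None)  # voided on first use
    if entry is None:
        return None
    filepath, expiry = entry
    if time.time() > expiry:
        return None
    session_id = secrets.token_urlsafe(32)
    sessions[session_id] = filepath
    return session_id
```

A replayed key fails because `pop` already removed it on first use,
which is the "no replay attacks" property.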
Re: [WEB SECURITY] countermeasure against attacks through HTML shared files
Hi Bil,

> > My motivation for deleting the file retrieval session record was
> > that the extended hostname is recorded in the browser history. So if
> > the user neglects to log out, and is using a laptop, and the laptop
> > is stolen (even if turned off), the thief can access the file from
> > the history until the login session times out.
>
> Is the thought that once downloaded, the user is storing the file
> securely on the hard drive? If not, then I think the attacker will
> simply lift the file off the laptop rather than trying to re-download
> the file again.

Well, the user could have deleted the file. But you're right, the file
is likely to be in the stolen laptop. If the browser displayed the file
and the user took no precautions, the file should be in the browser's
cache.

To tell you the truth, the original motivation was just that it's not a
good idea to have a valid authentication token (the file retrieval
session ID) embedded in a URL. The stolen laptop scenario was an
afterthought. (There is also a more exotic scenario: the attacker reads
the authentication token from the user's computer display, as it is
shown in the address box of the browser. These days, with a camera
phone, the attacker does not have to be James Bond to pull that off.)

In any case, I do now think that the file retrieval session ID must
remain valid while the login session is valid, in case the browser
issues multiple requests for the same file. Actually, I think there may
be another case where a browser may issue multiple requests (besides
the case where a large file download is interrupted), namely to
implement sniffing. A browser may download an initial portion of the
file to determine its type, and then download the rest. It's not clear
to me why a second request would be needed to download the rest, rather
than just continuing the download; but I think I remember seeing some
version of IE issue a second request when downloading MS Office
documents.

Francisco
Re: countermeasure against attacks through HTML shared files
Hi Peter,

Thanks for your comments!

> The gist of your suggestion is to use different base URLs for the
> untrusted content, so that "same origin" policies act as a sort of
> firewall. You propose different hostnames; back in 2001, the acmemail
> webmail project did something similar, but rather than hostnames, we
> chose to offer the option of using different port numbers. Many of us
> ran acmemail on https URLs, and that meant either using wildcard certs
> for https (which would expose other hosts to any flaws in acmemail) or
> different ports. You can see the source here:
>
> http://acmemail.cvs.sourceforge.net/viewvc/acmemail/acmemail/AcmemailConf.pm?view=log
>
> Revision 1.27 on 18 Aug 2001 introduced the change:
>
>   # For better protection against JavaScript attacks in messages
>   # and attachments, it is recommended that you configure your
>   # Web server to listen to two ports. One of these ports should
>   # be designated as the "control" port, where acmemail will display
>   # pages it has high confidence have safe content. The other will
>   # be designated the "message" port, and will be used to display
>   # emails and their attachments
>
> IIRC, acmemail used querystring/URL arguments to pass authentication
> tokens in the requests to the "message" host:port requests; our hope
> was that all (important?) cookies would only go to the "control" URLs.

Interesting. I'll mention this in the revised paper.

> Using different ports can be a little tricky; corporate firewall
> admins are very fond of disallowing https to atypical ports, for
> instance. Your hostname suggestion has other benefits if you're able
> to mitigate other risks (e.g., SSO cookies scoped for all
> RegisteredDomain hostnames) --

Good point, but this should not be a problem if the application service
provider uses a dedicated RegisteredDomain for the particular
application.

> being able to sandbox each document+viewer combo is great.
> I think you should do some usability testing with your suggestion that
> the file retrieval session record be deleted when the document is
> accessed, though. This is very likely to cause problems with user
> agents like Internet Explorer that have aggressive anti-caching
> stances for https content, and I imagine could easily cause trouble
> for things like chunked partial requests.

Very good point!!! Plus, this makes me think of another problem with
deleting the record: what if the user wants to go back to the file
using the back arrow, or the browser's history, or a bookmark?

> I'd tend to treat the retrieval keys more like typical web session
> objects -- in fact, I'd probably stick a hashtable of filename ->
> hostkey values in each user's web session objects, so the keys would
> remain valid as long as the user was still logged in.

My motivation for deleting the file retrieval session record was that
the extended hostname is recorded in the browser history. So if the
user neglects to log out, and is using a laptop, and the laptop is
stolen (even if turned off), the thief can access the file from the
history until the login session times out. But the chunked request
problem you brought up trumps this. So I now think that the file
retrieval session record should not be deleted until the login session
record is deleted, and the user will have to be careful to log out
before leaving the laptop unattended.

Also, the file retrieval session record should now be specific to a
particular file, so it should have a field for the filepath, which
should be checked before downloading the file. (This is equivalent to
your hashtable, but I like to think of sessions as implemented by
normalized relational database records, in this case by the login
session record plus the collection of file retrieval session records
that refer to the login session record.)

It remains to solve the back-arrow/history/bookmark problem.
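To make the normalized-records view concrete, here is one way the two
tables could look, as an illustrative sqlite sketch. The table and
column names are my own invention, not from the paper; the point is
just that deleting the login session record cascades to the file
retrieval session records that refer to it:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")  # sqlite needs this per connection

# One row per logged-in user session.
conn.execute("""
    CREATE TABLE login_session (
        login_session_id TEXT PRIMARY KEY,
        user_id          TEXT NOT NULL
    )""")

# File retrieval sessions are specific to a particular file (hence the
# filepath field, checked before download) and refer to the login
# session, so they live exactly as long as the user stays logged in.
conn.execute("""
    CREATE TABLE file_retrieval_session (
        retrieval_session_id TEXT PRIMARY KEY,
        login_session_id     TEXT NOT NULL
            REFERENCES login_session ON DELETE CASCADE,
        filepath             TEXT NOT NULL
    )""")
```

Deleting a `login_session` row then removes all of that user's file
retrieval session records in one stroke.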
Here is what I propose for the back-arrow/history/bookmark problem: if
the file retrieval session ID does not map to a file retrieval session
record, the application redirects the browser to the standard user-file
URL. If the user is logged in, the redirected request will come in with
the user-file authentication cookie, and the application will create a
file retrieval session record and redirect to a new extended user-file
URL. Yes, that's two redirects for each download from a bookmark, but
hopefully that will not cause a noticeable additional delay, especially
if keepalive is used. Will add all this to the revised paper.

Thanks again,

Francisco
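The two-redirect fallback could be sketched like this. Again a rough
illustration under my own assumptions, not the paper's actual
interfaces: the handler names, the `app.example.com` /
`*.files.example.com` hostnames, and the `(action, url)` return values
are all hypothetical.

```python
import secrets

retrieval_sessions = {}  # retrieval session ID -> filepath
login_sessions = set()   # valid login-session cookies

def standard_url(filepath):
    return "https://app.example.com" + filepath

def extended_url(rid, filepath):
    # The retrieval session ID is carried in an extended hostname, so
    # the file is served from its own origin (hex keeps it DNS-safe).
    return "https://" + rid + ".files.example.com" + filepath

def handle_extended_url(rid, filepath):
    """Request to an extended user-file URL."""
    if retrieval_sessions.get(rid) == filepath:
        return ("serve", filepath)
    # Stale bookmark/history entry: redirect 1, to the standard URL.
    return ("redirect", standard_url(filepath))

def handle_standard_url(filepath, login_cookie):
    """Request to the standard user-file URL."""
    if login_cookie not in login_sessions:
        return ("login-page", None)
    # Logged in: mint a fresh retrieval session record and issue
    # redirect 2, to a new extended user-file URL.
    rid = secrets.token_hex(16)
    retrieval_sessions[rid] = filepath
    return ("redirect", extended_url(rid, filepath))
```

After the second redirect the browser holds a working extended URL
again, so a dead bookmark recovers transparently for a logged-in user.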
Re: [WEB SECURITY] countermeasure against attacks through HTML shared files
Hi Adrian,

> It would have been cool to mention Microsoft SharePoint as an example
> of a popular file sharing system that allows persistent XSS through
> shared HTML files. i.e.:

Thanks for pointing this out. I didn't look at SharePoint, actually. I
did look at many others, and didn't find any that took any explicit
precautions against XSS through shared files. But I thought there was
no need to mention any names in the paper.

Francisco
countermeasure against attacks through HTML shared files
Hello,

I wanted to announce a Pomcor white paper that looks at attacks through
HTML shared files in Web applications and proposes a countermeasure.
These are essentially XSS attacks, but the usual defenses against XSS
are typically not available, because shared files cannot be sanitized.
The paper is available at:

http://www.pomcor.com/whitepapers/file_sharing_security.pdf

I have not been able to find much prior work. What I've found is
discussed in Section 2 of the paper. If I've missed something, please
let me know.

Thanks,

Francisco Corella