Re: [webkit-dev] Timing attacks on CSS Shaders (was Re: Security problems with CSS shaders)

2011-12-08 Thread Adam Barth
On Thu, Dec 8, 2011 at 1:25 PM, Rik Cabanier  wrote:
> This might no longer be true, but isn't it the case that shaders are
> designed to take the same amount of time to execute, no matter what input
> they get?
> i.e., if you have an if/else block, the time of the shader would be that of
> whichever block takes the longest. This was done so you can schedule many of them at
> the same time without having to worry about synchronizing them.

That used to be true in the initial version of fragment shaders for
OpenGL, but apparently it's no longer the case.  The old attack against
WebGL demonstrates that shaders can take a variable amount of time to
execute.

Adam


> On Mon, Dec 5, 2011 at 3:34 PM, Chris Marrin  wrote:
>>
>>
>> On Dec 5, 2011, at 11:32 AM, Adam Barth wrote:
>>
>> > On Mon, Dec 5, 2011 at 10:53 AM, Chris Marrin  wrote:
>> >> To be clear, it's not the difference between white and black pixels,
>> >> it's
>> >> the difference between pixels with transparency and those without.
>> >
>> > Can you explain why the attack is limited to distinguishing between
>> > black and transparent pixels?  My understanding is that these attacks
>> > are capable of distinguishing arbitrary pixel values.
>>
>> This is my misunderstanding. I was referring to the attacks using WebGL,
>> which measure the difference between rendering alpha and non-alpha pixels.
>> But I think there is another, more dangerous attack vector specific to CSS
>> shaders. Shaders have the source image (the image of that part of the page)
>> available. So it is an easy thing to make a certain color pixel take a lot
>> longer to render (your "1000x slower" case). So you can easily and quickly
>> detect, for instance, the color of a link.
>>
>> So I take back my statement that CSS Shaders are less dangerous than
>> WebGL. They are more!!! As I've said many times (with many more expletives),
>> I hate the Internet.
>>
>> I think the solution is clear. We should create a whole new internet where
>> we only let in people we trust.  :-)
>>
>> -
>> ~Chris
>> cmar...@apple.com
>>
>>
>>
>>


Re: [webkit-dev] Timing attacks on CSS Shaders (was Re: Security problems with CSS shaders)

2011-12-08 Thread Oliver Hunt

On Dec 8, 2011, at 1:25 PM, Rik Cabanier wrote:

> This might no longer be true, but isn't it the case that shaders are designed 
> to take the same amount of time to execute, no matter what input they get?
> i.e., if you have an if/else block, the time of the shader would be that of
> whichever block takes the longest. This was done so you can schedule many of them at
> the same time without having to worry about synchronizing them.
> 
> Rik

That was only true in the early days of GLSL, etc., when the hardware did not
actually support branching.  Now the hardware does support branching, so these
timing attacks are relatively trivial (see 
http://www.contextis.co.uk/resources/blog/webgl/poc/index.html).
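
For concreteness, here is a minimal sketch (illustrative only, not taken from
that PoC) of the kind of data-dependent branch current hardware will happily
execute. All identifiers and the iteration count are made up; the point is just
that a fragment's cost can vary enormously with the value it reads.

    // Hypothetical GLSL ES fragment shader, held in a TypeScript string.
    // Transparent texels take the cheap path; opaque texels spin in a loop,
    // so per-fragment time leaks the alpha of whatever was sampled.
    const slowOnOpaqueFragmentShader = `
      precision mediump float;
      uniform sampler2D u_source;    // assumed name for the input texture
      varying vec2 v_texCoord;

      void main() {
        vec4 texel = texture2D(u_source, v_texCoord);
        if (texel.a > 0.5) {
          float acc = 0.0;
          for (int i = 0; i < 4096; i++) {   // expensive branch
            acc += sin(float(i) * texel.r);
          }
          gl_FragColor = vec4(vec3(acc * 0.000001), 1.0);
        } else {
          gl_FragColor = texel;              // cheap branch
        }
      }`;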

--Oliver



Re: [webkit-dev] Timing attacks on CSS Shaders (was Re: Security problems with CSS shaders)

2011-12-08 Thread Rik Cabanier
This might no longer be true, but isn't it the case that shaders are
designed to take the same amount of time to execute, no matter what input
they get?
i.e., if you have an if/else block, the time of the shader would be that of
whichever block takes the longest. This was done so you can schedule many of them at
the same time without having to worry about synchronizing them.

Rik

On Mon, Dec 5, 2011 at 3:34 PM, Chris Marrin  wrote:

>
> On Dec 5, 2011, at 11:32 AM, Adam Barth wrote:
>
> > On Mon, Dec 5, 2011 at 10:53 AM, Chris Marrin  wrote:
> >> To be clear, it's not the difference between white and black pixels, it's
> >> the difference between pixels with transparency and those without.
> >
> > Can you explain why the attack is limited to distinguishing between
> > black and transparent pixels?  My understanding is that these attacks
> > are capable of distinguishing arbitrary pixel values.
>
> This is my misunderstanding. I was referring to the attacks using WebGL,
> which measure the difference between rendering alpha and non-alpha pixels.
> But I think there is another, more dangerous attack vector specific to CSS
> shaders. Shaders have the source image (the image of that part of the page)
> available. So it is an easy thing to make a certain color pixel take a lot
> longer to render (your "1000x slower" case). So you can easily and quickly
> detect, for instance, the color of a link.
>
> So I take back my statement that CSS Shaders are less dangerous than
> WebGL. They are more!!! As I've said many times (with many more
> expletives), I hate the Internet.
>
> I think the solution is clear. We should create a whole new internet where
> we only let in people we trust.  :-)
>
> -
> ~Chris
> cmar...@apple.com
>
>
>
>


Re: [webkit-dev] Timing attacks on CSS Shaders (was Re: Security problems with CSS shaders)

2011-12-05 Thread Charles Pritchard

On 12/5/11 3:34 PM, Chris Marrin wrote:
> On Dec 5, 2011, at 11:32 AM, Adam Barth wrote:
>> On Mon, Dec 5, 2011 at 10:53 AM, Chris Marrin  wrote:
>>> To be clear, it's not the difference between white and black pixels, it's
>>> the difference between pixels with transparency and those without.
>>
>> Can you explain why the attack is limited to distinguishing between
>> black and transparent pixels?  My understanding is that these attacks
>> are capable of distinguishing arbitrary pixel values.
>
> This is my misunderstanding. I was referring to the attacks using WebGL, which measure
> the difference between rendering alpha and non-alpha pixels. But I think there is
> another, more dangerous attack vector specific to CSS shaders. Shaders have the source
> image (the image of that part of the page) available. So it is an easy thing to make a
> certain color pixel take a lot longer to render (your "1000x slower" case). So
> you can easily and quickly detect, for instance, the color of a link.


Can this proposal be moved forward on CORS + HTMLMediaElement, 
HTMLImageElement and HTMLCanvasElement?


The proposal would really benefit users and authors on those media 
types, even if it falls short of applying to general HTML elements and 
CSS URLs in the first draft.
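
As a rough sketch of the authoring side of that idea (the names and the gating
behavior below are assumptions, not part of any current draft), a CORS-approved
image is requested explicitly and only then treated as eligible filter input:

    // Hedged sketch: request an image with CORS so a UA could treat it as
    // safe input to a filter or canvas. Whether a custom filter would be
    // allowed on it is up to the UA; this only shows the opt-in request.
    function loadCorsImage(url: string): Promise<HTMLImageElement> {
      return new Promise((resolve, reject) => {
        const img = new Image();
        img.crossOrigin = "anonymous";        // ask for a CORS response
        img.onload = () => resolve(img);      // server opted in (ACAO header)
        img.onerror = () => reject(new Error("CORS load failed: " + url));
        img.src = url;
      });
    }

    // Usage: an image that fails the CORS check simply never becomes
    // filter input; same-origin content is unaffected.
    loadCorsImage("https://example.com/photo.png")
      .then((img) => document.body.appendChild(img))
      .catch((err) => console.warn(err));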


I realize that it falls short of the lofty goals of the presentation, 
but it would make a good impact and set the stage for further work. It 
seems entirely doable to disable a:visited on elements that have custom
filters applied, but, like the timing issues, there needs to be some 
empirical data on risks before moving forward on them.



> So I take back my statement that CSS Shaders are less dangerous than WebGL.
> They are more!!! As I've said many times (with many more expletives), I hate
> the Internet.
>
> I think the solution is clear. We should create a whole new internet where we
> only let in people we trust.  :-)
>
> -
> ~Chris
> cmar...@apple.com


I still love my iPhone. ;-)


-Charles



Re: [webkit-dev] Timing attacks on CSS Shaders (was Re: Security problems with CSS shaders)

2011-12-05 Thread Chris Marrin

On Dec 5, 2011, at 11:32 AM, Adam Barth wrote:

> On Mon, Dec 5, 2011 at 10:53 AM, Chris Marrin  wrote:
>> To be clear, it's not the difference between white and black pixels, it's
>> the difference between pixels with transparency and those without.
> 
> Can you explain why the attack is limited to distinguishing between
> black and transparent pixels?  My understanding is that these attacks
> are capable of distinguishing arbitrary pixel values.

This is my misunderstanding. I was referring to the attacks using WebGL, which 
measure the difference between rendering alpha and non-alpha pixels. But I 
think there is another, more dangerous attack vector specific to CSS shaders. 
Shaders have the source image (the image of that part of the page) available. 
So it is an easy thing to make a certain color pixel take a lot longer to 
render (your "1000x slower" case). So you can easily and quickly detect, for 
instance, the color of a link. 
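
A hedged sketch of the kind of fragment shader being described follows; the
uniform names and the assumption that the shader can sample the element's
rendered image come from the description above, while the exact color test is
invented. The output hardly matters; the rendering time is the signal.

    // Hypothetical fragment shader for the attack described above: sample the
    // source image of the filtered element and burn time only where the texel
    // matches an assumed :visited link color. Every identifier is made up.
    const linkColorTimingShader = `
      precision mediump float;
      uniform sampler2D u_sourceImage;   // rendered content of the element
      uniform vec3 u_visitedColor;       // the color being tested for
      varying vec2 v_texCoord;

      void main() {
        vec4 texel = texture2D(u_sourceImage, v_texCoord);
        float acc = 0.0;
        if (distance(texel.rgb, u_visitedColor) < 0.01) {
          for (int i = 0; i < 8192; i++) {   // the "1000x slower" path
            acc += sin(float(i));
          }
        }
        gl_FragColor = texel + vec4(acc * 0.000001);
      }`;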

So I take back my statement that CSS Shaders are less dangerous than WebGL. 
They are more!!! As I've said many times (with many more expletives), I hate 
the Internet.

I think the solution is clear. We should create a whole new internet where we 
only let in people we trust.  :-)

-
~Chris
cmar...@apple.com






Re: [webkit-dev] Timing attacks on CSS Shaders (was Re: Security problems with CSS shaders)

2011-12-05 Thread Adam Barth
On Mon, Dec 5, 2011 at 10:53 AM, Chris Marrin  wrote:
> To be clear, it's not the difference between white and black pixels, it's
> the difference between pixels with transparency and those without.

Can you explain why the attack is limited to distinguishing between
black and transparent pixels?  My understanding is that these attacks
are capable of distinguishing arbitrary pixel values.

> And I've
> never seen a renderer that runs "1000x slower" when rendering a pixel with
> transparency. It may run a few times slower, and maybe even 10x slower. But
> that's about it.

As I wrote above, I don't have a proof-of-concept, so I can't give you
exact figures on how many bits the attacker can extract per second.

> I'm still waiting to see an actual "compelling" attack. The one you mention
> here:
>
> http://www.contextis.co.uk/resources/blog/webgl/poc/index.html
>
> has never seemed very compelling to me. At the default "medium" quality
> setting the image still takes over a minute to be generated and it's barely
> recognizable. You can't read the text in the image or even really tell what
> the image is unless you had the reference next to it. For something big,
> like the WebGL logo, you can see the shape. But that's because it's a lot of
> big solid color. And of course the demo only captures black and white, so
> none of the colors in an image come through. If you turn it to its highest
> quality mode you can make out the blocks of text, but that takes well over
> 15 minutes to generate.

A few points:

1) Attacks only get better, never worse.  It's unlikely that this demo
is the best possible attack.  It just gives us a feel for what kinds
of attacks are within the realm of possibility.

2) That attack isn't optimized for extracting text.  For the attack
I'm worried about, the attacker is interested in computing a binary
predicate over each pixel, which is much easier than computing the
value of the pixel.  Moreover, the attacker doesn't need to extract
every pixel.  He or she just needs to extract enough information to
distinguish glyphs in a known typeface (a rough sketch of that matching
step follows after point 3).

3) According to the data we gathered for this paper, an attacker can easily
spend four or five minutes executing an attack like this without losing too
many users.
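
To illustrate point 2 (this is purely hypothetical; no proof-of-concept
exists), once an attacker has a coarse grid of one-bit "dark or not"
measurements, recovering text is a nearest-neighbor match against glyphs
pre-rendered in the known typeface:

    // Hedged sketch of the matching step only. BitGrid holds the per-pixel
    // binary predicate recovered via timing; reference glyph bitmaps would be
    // rasterized offline in the typeface the target site is known to use.
    type BitGrid = number[][]; // 0/1 values, rows x columns

    function hammingDistance(a: BitGrid, b: BitGrid): number {
      let d = 0;
      for (let r = 0; r < a.length; r++) {
        for (let c = 0; c < a[r].length; c++) {
          if (a[r][c] !== b[r][c]) d++;
        }
      }
      return d;
    }

    function classifyGlyph(measured: BitGrid,
                           referenceGlyphs: Map<string, BitGrid>): string {
      let best = "?";
      let bestDistance = Infinity;
      referenceGlyphs.forEach((bitmap, glyph) => {
        const d = hammingDistance(measured, bitmap);
        if (d < bestDistance) {
          bestDistance = d;
          best = glyph;
        }
      });
      return best;
    }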

> And this exploit is using WebGL, where the author has a huge amount of
> control over the rendering. CSS Shaders (and other types of rendering on the
> page) give you much less control over when rendering occurs so it makes it
> much more difficult to time the operations. I stand behind the statement,
> "... it seems difficult to mount such an attack with CSS shaders because the
> means to measure the time taken by a cross-domain shader are limited.",
> which you dismissed as dubious in your missive. With WebGL you can render a
> single triangle, wait for it to finish, and time it. Even if you tuned a CSS
> attack to a given browser whose rendering behavior you understand, it would
> take many frame times to determine the value of a single pixel and even then
> I think the accuracy and repeatability would be very low. I'm happy to be
> proven wrong about this, but I've never seen a convincing demo of any CSS
> rendering exploit.
>
> This all begs the question. What is an "exploit"? If I can reproduce
> with 90% accuracy a 100x100 block of RGB pixels in 2 seconds, then I think
> we'd all agree that we have a pretty severe exploit. But what if I can
> determine the color of a single pixel on the page with 50% accuracy in 30
> seconds. Is that an exploit? Some would say yes, because that can give back
> information (half the time) about visited links. If that's the case, then
> our solution is very different than in the first case.

It's a matter of risk.  I'm not sure it's helpful to set a hard
cut-off for what bit rate constitutes an exploit.  We'd never be able
to figure out the exact bit rate anyway.  Instead, we should view more
efficient attack vectors as being higher risk.

> I think we need to agree on the problem we're trying to solve and then prove
> that the problem actually exists before trying to solve it. In fact, I think
> that's a general rule I live my life by :-)

I think it's clear what problem we're trying to solve.  We do not want
to provide web attackers a mechanism for extracting sensitive data
from the browser.  Here are a couple examples of sensitive data:

1) The text of other web sites.
2) Whether the user has visited another site previously.

I presume that's all non-controversial.  The part we seem to disagree
about is whether CSS Shaders cause this problem.  Based on everything
we know today, it seems quite likely they do.

Adam


Re: [webkit-dev] Timing attacks on CSS Shaders (was Re: Security problems with CSS shaders)

2011-12-05 Thread Chris Marrin

On Dec 3, 2011, at 11:57 PM, Adam Barth wrote:

> On Sat, Dec 3, 2011 at 11:37 PM, Dean Jackson  wrote:
>> On 04/12/2011, at 6:06 PM, Adam Barth wrote:
>>> On Mon, Oct 24, 2011 at 9:51 PM, Adam Barth  wrote:
>>>> Personally, I don't believe it's possible to implement this feature
>>>> securely, at least not using the approach prototyped by Adobe.
>>>> However, I would love to be proven wrong because this is certainly a
>>>> powerful primitive with many use cases.
>>> 
>>> I spent some more time looking into timing attacks on CSS Shaders.  I
>>> haven't created a proof-of-concept exploit, but I believe the current
>>> design is vulnerable to timing attacks.  I've written up a blog post
>>> explaining the issue:
>>> 
>>> http://www.schemehostport.com/2011/12/timing-attacks-on-css-shaders.html
>> 
>> Thanks for writing this up.
>> 
>> I'm still interested to know what the potential rate of data leakage is.
>> Like I mentioned before, there are plenty of existing techniques that could
>> expose information to a timing attack. For example, SVG Filters can
>> manipulate the color channels of cross-domain images, and using CSS overflow
>> on an iframe could potentially detect rendering slowdowns as particular
>> colors/elements/images come into view.
> 
> My understanding is that shader languages allow several orders of
> magnitude greater differences in rendering times than these
> approaches.  However, as I wrote in the post, I don't have a
> proof-of-concept, so I cannot give you exact figures.

>> CSS shaders increase the rate of leakage
>> because they execute fast and can be tweaked to exaggerate the timing, but
>> one could imagine that the GPU renderers now being used in many of WebKit's 
>> ports
>> could be exploited in the same manner (e.g. discover a CSS "trick" that drops
>> the renderer into software mode).
> 
> I don't understand how those attacks would work without shaders.  Can
> you explain in more detail?  Specifically, how would an attacker
> extract the user's identity from a Facebook Like button?
> 
> In the CSS Shader scenario, I can write a shader that runs 1000x
> slower on a black pixel than on a white pixel, which means I can
> extract the text that accompanies the Like button.  Once I have the
> text, I'm sure you'd agree I'd have little trouble identifying the
> user.

To be clear, it's not the difference between white and black pixels, it's the 
difference between pixels with transparency and those without. And I've never 
seen a renderer that runs "1000x slower" when rendering a pixel with 
transparency. It may run a few times slower, and maybe even 10x slower. But
that's about it.

I'm still waiting to see an actual "compelling" attack. The one you mention 
here:

http://www.contextis.co.uk/resources/blog/webgl/poc/index.html

has never seemed very compelling to me. At the default "medium" quality setting 
the image still takes over a minute to be generated and it's barely 
recognizable. You can't read the text in the image or even really tell what the 
image is unless you had the reference next to it. For something big, like the 
WebGL logo, you can see the shape. But that's because it's a lot of big solid 
color. And of course the demo only captures black and white, so none of the 
colors in an image come through. If you turn it to its highest quality mode you 
can make out the blocks of text, but that takes well over 15 minutes to 
generate.

And this exploit is using WebGL, where the author has a huge amount of control 
over the rendering. CSS Shaders (and other types of rendering on the page) give 
you much less control over when rendering occurs so it makes it much more 
difficult to time the operations. I stand behind the statement, "... it seems 
difficult to mount such an attack with CSS shaders because the means to measure 
the time taken by a cross-domain shader are limited.", which you dismissed as 
dubious in your missive. With WebGL you can render a single triangle, wait for 
it to finish, and time it. Even if you tuned a CSS attack to a given browser 
whose rendering behavior you understand, it would take many frame times to 
determine the value of a single pixel and even then I think the accuracy and 
repeatability would be very low. I'm happy to be proven wrong about this, but 
I've never seen a convincing demo of any CSS rendering exploit.
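
For comparison, the explicit WebGL timing being referred to is roughly the
following sketch (it assumes `gl` is an initialized WebGLRenderingContext with
the suspect program and a triangle's vertex buffer already bound):

    // Hedged sketch: draw one triangle, force completion, and time it from
    // script. CSS shaders offer no equivalent "draw and wait" hook, which is
    // why timing them requires indirect measurements such as frame timing.
    function timeOneDraw(gl: WebGLRenderingContext): number {
      const start = performance.now();
      gl.drawArrays(gl.TRIANGLES, 0, 3);   // render a single triangle
      gl.finish();                         // wait for the GPU to finish
      // Reading back a pixel is another common way to force completion:
      // gl.readPixels(0, 0, 1, 1, gl.RGBA, gl.UNSIGNED_BYTE, new Uint8Array(4));
      return performance.now() - start;    // elapsed milliseconds
    }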

This all begs the question. What is an "exploit"? If I can reproduce with 90% 
accuracy a 100x100 block of RGB pixels in 2 seconds, then I think we'd all 
agree that we have a pretty severe exploit. But what if I can determine the 
color of a single pixel on the page with 50% accuracy in 30 seconds. Is that an 
exploit? Some would say yes, because that can give back information (half the 
time) about visited links. If that's the case, then our solution is very 
different than in the first case. 

I think we need to agree on the problem we're trying to solve and then prove 
that the problem actually exists before trying to solve it. In fact, I think
that's a general rule I live my life by :-)

Re: [webkit-dev] Timing attacks on CSS Shaders (was Re: Security problems with CSS shaders)

2011-12-04 Thread Charles Pritchard

On 12/3/11 11:06 PM, Adam Barth wrote:
> On Mon, Oct 24, 2011 at 9:51 PM, Adam Barth  wrote:
>> Personally, I don't believe it's possible to implement this feature
>> securely, at least not using the approach prototyped by Adobe.
>> However, I would love to be proven wrong because this is certainly a
>> powerful primitive with many use cases.
>
> I spent some more time looking into timing attacks on CSS Shaders.  I
> haven't created a proof-of-concept exploit, but I believe the current
> design is vulnerable to timing attacks.  I've written up a blog post
> explaining the issue:
>
> http://www.schemehostport.com/2011/12/timing-attacks-on-css-shaders.html
>
> Jonas Sicking seems to have a similar concern:
>
> https://twitter.com/#!/SickingJ/status/143161375823380480
>
> It's probably worth addressing this concern sooner rather than later.
> Ignoring it certainly won't cause the vulnerability to go away.


What was the verdict on CORS + Web Fonts? As I understood things, they 
were introduced for cross-domain use (much like WebGL) and that's been
an issue that I think vendors are backpedaling on. I'm fully supportive
of discovering just what the relative security issues are here... While 
that's going on: it seems like this feature can be made CORS-aware in 
subsequent prototypes while we wait on a verdict about timing issues.


I'm bringing up fonts, as they'd be the first technology [that I'm aware of]
where CSS has integrated CORS.


There are many -many, many- quirks that authors will have to deal with
when using programmable shaders. If everything were restricted to the CPU,
we'd know that, well, low-end systems run at 1GHz and high-end systems
have multiple cores, but the performance and compatibility spread is
something reasonable. Once GPUs are in the mix, we're talking about a
100x difference and all sorts of visual glitches; it's a mess.


I'm very much for getting this CSS shader proposal through. Between 
object-fit (and some other values) and custom shaders, I could rid 
myself of a thousand lines of code handling some basic image 
manipulation tasks. There are benefits to developers to weigh with the 
risks. I would be willing to accept CORS+CSS shaders as a compromise. 
There are  good opt-out mechanisms for secure sites; HTTP headers for 
nosniff and the like.


I do think the security issues that Dean Jackson has brought up are 
fascinating. It does seem to me that documenting attacks of various 
sorts is a worthwhile venture. I see it happening in a manner similar to 
how WCAG-TECHS exists. I don't think that those documented attacks spell 
doom.


Anecdote: I brought up Web Storage to the postgresql hackers mailing 
list awhile back. At least one developer was absolutely aghast that 
sites could launch attacks by creating thousands of 5 megabyte storage 
entries. The 5 meg per-origin limit works in practice, but explaining 
that fact was difficult.


There seems to be broad consensus/desire for facts about known attack 
vectors. I think it'd benefit all interested parties if something were 
created, in the style of WCAG-TECHS. http://www.w3.org/TR/WCAG-TECHS/


Such as, "Techniques and Failures for Web Security".





Re: [webkit-dev] Timing attacks on CSS Shaders (was Re: Security problems with CSS shaders)

2011-12-04 Thread Adam Barth
On a personal note, Dean, please don't feel like I'm singling you
or your colleagues out.  More or less this exact feature request has
come up internally within Google at least three separate times.
I'm telling you now exactly what I told those folks then.  (Although I
did do some more research this time to make sure I had my ducks in a
row.)

I would very much like for this feature to succeed.  The demos of
what you can build with this feature are really impressive.  I'm sorry
that I don't know how to make it secure.

Adam


On Sat, Dec 3, 2011 at 11:57 PM, Adam Barth  wrote:
> On Sat, Dec 3, 2011 at 11:37 PM, Dean Jackson  wrote:
>> On 04/12/2011, at 6:06 PM, Adam Barth wrote:
>>> On Mon, Oct 24, 2011 at 9:51 PM, Adam Barth  wrote:
>>>> Personally, I don't believe it's possible to implement this feature
>>>> securely, at least not using the approach prototyped by Adobe.
>>>> However, I would love to be proven wrong because this is certainly a
>>>> powerful primitive with many use cases.
>>>
>>> I spent some more time looking into timing attacks on CSS Shaders.  I
>>> haven't created a proof-of-concept exploit, but I believe the current
>>> design is vulnerable to timing attacks.  I've written up a blog post
>>> explaining the issue:
>>>
>>> http://www.schemehostport.com/2011/12/timing-attacks-on-css-shaders.html
>>
>> Thanks for writing this up.
>>
>> I'm still interested to know what the potential rate of data leakage is.
>> Like I mentioned before, there are plenty of existing techniques that could
>> expose information to a timing attack. For example, SVG Filters can
>> manipulate the color channels of cross-domain images, and using CSS overflow
>> on an iframe could potentially detect rendering slowdowns as particular
>> colors/elements/images come into view.
>
> My understanding is that shader languages allow several orders of
> magnitude greater differences in rendering times than these
> approaches.  However, as I wrote in the post, I don't have a
> proof-of-concept, so I cannot give you exact figures.
>
>> CSS shaders increase the rate of leakage
>> because they execute fast and can be tweaked to exaggerate the timing, but
>> one could imagine that the GPU renderers now being used in many of WebKit's 
>> ports
>> could be exploited in the same manner (e.g. discover a CSS "trick" that drops
>> the renderer into software mode).
>
> I don't understand how those attacks would work without shaders.  Can
> you explain in more detail?  Specifically, how would an attacker
> extract the user's identity from a Facebook Like button?
>
> In the CSS Shader scenario, I can write a shader that runs 1000x
> slower on a black pixel than on a white pixel, which means I can
> extract the text that accompanies the Like button.  Once I have the
> text, I'm sure you'd agree I'd have little trouble identifying the
> user.
>
>> Obviously at a minimum we'll need to be careful about cross-domain content,
>> and give input to filters (not just CSS shaders, and moz-element or 
>> ctx2d.drawElement)
>> that doesn't expose user info like history.
>
> As discussed in Bug 69044, I do not believe the blacklisting approach
> will lead to security.
>
>> But I wonder if there is also some more general approach to take here.
>> You mention Mozilla's paint events and requestAnimationFrame. Without those
>> it would be much more difficult to get timing information. The original
>> exploit on WebGL was especially easy because you could explicitly time a
>> drawing operation. This is more difficult with CSS (and in Safari, we
>> don't necessarily draw on the same thread, so even rAF data might not
>> be accurate enough).
>>
>> Is there something we can do to make rendering-based timing attacks
>> less feasible?
>
> As discussed in the blog post, once the sensitive data has entered the
> timing channel, it is extremely difficult to prevent the attacker from
> observing it.  Preventing the sensitive data from entering the timing
> channel in the first place is the most likely approach to security.
>
> Here's an idea I heard floated internally: since the rAF-based attack would be
>> trying to trigger cases where the framerate drops from 60fps to 30fps, is
>> there some way we can detect this and do something about it? For example,
>> once you drop, don't return to 60fps for some random amount of time even if
>> it is possible. This might sound annoying to developers, but I expect anyone
>> legitimately concerned with framerate is going to want to do what they can
>> to keep at the higher value (i.e. they'll want to rewrite their code to
>> avoid the stutter). This doesn't stop the leak, but it slows it down. And as
>> far as I can tell everything is leaky - we're just concerned about the
>> rate. I know there won't be a single solution to everything.
>
> Approaches analogous to those have been tried, unsuccessfully, in many
> settings since the mid seventies.  In fact, I'm not aware of any cases
where they were successful.  It's possible that this setting is
different from all those other settings, of course, but I'm dubious.

Re: [webkit-dev] Timing attacks on CSS Shaders (was Re: Security problems with CSS shaders)

2011-12-03 Thread Adam Barth
On Sat, Dec 3, 2011 at 11:37 PM, Dean Jackson  wrote:
> On 04/12/2011, at 6:06 PM, Adam Barth wrote:
>> On Mon, Oct 24, 2011 at 9:51 PM, Adam Barth  wrote:
>>> Personally, I don't believe it's possible to implement this feature
>>> securely, at least not using the approach prototyped by Adobe.
>>> However, I would love to be proven wrong because this is certainly a
>>> powerful primitive with many use cases.
>>
>> I spent some more time looking into timing attacks on CSS Shaders.  I
>> haven't created a proof-of-concept exploit, but I believe the current
>> design is vulnerable to timing attacks.  I've written up a blog post
>> explaining the issue:
>>
>> http://www.schemehostport.com/2011/12/timing-attacks-on-css-shaders.html
>
> Thanks for writing this up.
>
> I'm still interested to know what the potential rate of data leakage is.
> Like I mentioned before, there are plenty of existing techniques that could
> expose information to a timing attack. For example, SVG Filters can
> manipulate the color channels of cross-domain images, and using CSS overflow
> on an iframe could potentially detect rendering slowdowns as particular
> colors/elements/images come into view.

My understanding is that shader languages allow several orders of
magnitude greater differences in rendering times than these
approaches.  However, as I wrote in the post, I don't have a
proof-of-concept, so I cannot give you exact figures.

> CSS shaders increase the rate of leakage
> because they execute fast and can be tweaked to exaggerate the timing, but
> one could imagine that the GPU renderers now being used in many of WebKit's 
> ports
> could be exploited in the same manner (e.g. discover a CSS "trick" that drops
> the renderer into software mode).

I don't understand how those attacks would work without shaders.  Can
you explain in more detail?  Specifically, how would an attacker
extract the user's identity from a Facebook Like button?

In the CSS Shader scenario, I can write a shader that runs 1000x
slower on a black pixel than on a white pixel, which means I can
extract the text that accompanies the Like button.  Once I have the
text, I'm sure you'd agree I'd have little trouble identifying the
user.

> Obviously at a minimum we'll need to be careful about cross-domain content,
> and give input to filters (not just CSS shaders, and moz-element or 
> ctx2d.drawElement)
> that doesn't expose user info like history.

As discussed in Bug 69044, I do not believe the blacklisting approach
will lead to security.

> But I wonder if there is also some more general approach to take here.
> You mention Mozilla's paint events and requestAnimationFrame. Without those
> it would be much more difficult to get timing information. The original
> exploit on WebGL was especially easy because you could explicitly time a
> drawing operation. This is more difficult with CSS (and in Safari, we
> don't necessarily draw on the same thread, so even rAF data might not
> be accurate enough).
>
> Is there something we can do to make rendering-based timing attacks
> less feasible?

As discussed in the blog post, once the sensitive data has entered the
timing channel, it is extremely difficult to prevent the attacker from
observing it.  Preventing the sensitive data from entering the timing
channel in the first place is the most likely approach to security.

> Here's an idea I heard floated internally: since the rAF-based attack would be
> trying to trigger cases where the framerate drops from 60fps to 30fps, is
> there some way we can detect this and do something about it? For example,
> once you drop, don't return to 60fps for some random amount of time even if
> it is possible. This might sound annoying to developers, but I expect anyone
> legitimately concerned with framerate is going to want to do what they can
> to keep at the higher value (i.e. they'll want to rewrite their code to
> avoid the stutter). This doesn't stop the leak, but it slows it down. And as
> far as I can tell everything is leaky - we're just concerned about the
> rate. I know there won't be a single solution to everything.

Approaches analogous to those have been tried, unsuccessfully, in many
settings since the mid seventies.  In fact, I'm not aware of any cases
where they were successful.  It's possible that this setting is
different from all those other settings, of course, but I'm dubious.

> Or maybe rAF is inherently insecure?

I don't know what "inherently" insecure means.  I don't believe
removing requestAnimationFrame from the platform would be sufficient
to make adding CSS Shaders secure, if that's what you mean.

Adam


>> Jonas Sicking seems to have a similar concern:
>>
>> https://twitter.com/#!/SickingJ/status/143161375823380480
>>
>> It's probably worth addressing this concern sooner rather than later.
>> Ignoring it certainly won't cause the vulnerability to go away.
>>
>> Adam
>

Re: [webkit-dev] Timing attacks on CSS Shaders (was Re: Security problems with CSS shaders)

2011-12-03 Thread Dean Jackson

On 04/12/2011, at 6:06 PM, Adam Barth wrote:

> On Mon, Oct 24, 2011 at 9:51 PM, Adam Barth  wrote:
>> Personally, I don't believe it's possible to implement this feature
>> securely, at least not using the approach prototyped by Adobe.
>> However, I would love to be proven wrong because this is certainly a
>> powerful primitive with many use cases.
> 
> I spent some more time looking into timing attacks on CSS Shaders.  I
> haven't created a proof-of-concept exploit, but I believe the current
> design is vulnerable to timing attacks.  I've written up a blog post
> explaining the issue:
> 
> http://www.schemehostport.com/2011/12/timing-attacks-on-css-shaders.html

Thanks for writing this up.

I'm still interested to know what the potential rate of data leakage is.
Like I mentioned before, there are plenty of existing techniques that could
expose information to a timing attack. For example, SVG Filters can
manipulate the color channels of cross-domain images, and using CSS overflow
on an iframe could potentially detect rendering slowdowns as particular
colors/elements/images come into view. CSS shaders increase the rate of leakage
because they execute fast and can be tweaked to exaggerate the timing, but
one could imagine that the GPU renderers now being used in many of WebKit's 
ports
could be exploited in the same manner (e.g. discover a CSS "trick" that drops
the renderer into software mode).

Obviously at a minimum we'll need to be careful about cross-domain content,
and give input to filters (not just CSS shaders, and moz-element or 
ctx2d.drawElement)
that doesn't expose user info like history. 

But I wonder if there is also some more general approach to take here.
You mention Mozilla's paint events and requestAnimationFrame. Without those
it would be much more difficult to get timing information. The original
exploit on WebGL was especially easy because you could explicitly time a
drawing operation. This is more difficult with CSS (and in Safari, we
don't necessarily draw on the same thread, so even rAF data might not
be accurate enough).
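
For reference, the rAF-based measurement being discussed is roughly the
following sketch (illustrative only; it assumes the suspect filter is already
applied to some element, and, as noted above, frame callbacks may not track
actual paint times on every port):

    // Hedged sketch: sample inter-frame intervals with requestAnimationFrame.
    // A shift from ~16ms to ~33ms suggests the frame missed its 60fps deadline,
    // i.e. the filtered content was expensive to render.
    function measureFrameIntervals(sampleCount: number): Promise<number[]> {
      return new Promise((resolve) => {
        const deltas: number[] = [];
        let last = performance.now();
        function tick(now: number): void {
          deltas.push(now - last);
          last = now;
          if (deltas.length >= sampleCount) {
            resolve(deltas);
          } else {
            requestAnimationFrame(tick);
          }
        }
        requestAnimationFrame(tick);
      });
    }

    // Usage: compare the median interval with and without the suspect filter.
    measureFrameIntervals(120).then((deltas) => {
      const sorted = deltas.slice().sort((a, b) => a - b);
      console.log("median frame interval (ms):",
                  sorted[Math.floor(sorted.length / 2)]);
    });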

Is there something we can do to make rendering-based timing attacks
less feasible?

Here's an idea I heard floated internally: since the rAF-based attack would be
trying to trigger cases where the framerate drops from 60fps to 30fps, is
there some way we can detect this and do something about it? For example,
once you drop, don't return to 60fps for some random amount of time even if
it is possible. This might sound annoying to developers, but I expect anyone
legitimately concerned with framerate is going to want to do what they can
to keep at the higher value (i.e. they'll want to rewrite their code to
avoid the stutter). This doesn't stop the leak, but it slows it down. And as
far as I can tell everything is leaky - we're just concerned about the
rate. I know there won't be a single solution to everything.
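
In hypothetical, browser-internal pseudologic (nothing like this exists in
WebKit; every name and constant below is invented), the idea would look
roughly like this:

    // Hedged sketch of the floated mitigation: after a missed 60fps deadline,
    // hold the compositor at 30fps for a random interval rather than snapping
    // back, so the attacker's timing channel becomes noisier and slower.
    class FrameRateGovernor {
      private holdUntilMs = 0;

      // Called once per frame with how long that frame took to produce.
      targetIntervalMs(frameCostMs: number, nowMs: number): number {
        const FAST = 1000 / 60;
        const SLOW = 1000 / 30;
        if (frameCostMs > FAST) {
          // Hold at 30fps for 0.5-2s (made-up range) after any missed deadline.
          this.holdUntilMs = nowMs + 500 + Math.random() * 1500;
        }
        return nowMs < this.holdUntilMs ? SLOW : FAST;
      }
    }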

Or maybe rAF is inherently insecure?

Dean



> 
> Jonas Sicking seems to have a similar concern:
> 
> https://twitter.com/#!/SickingJ/status/143161375823380480
> 
> It's probably worth addressing this concern sooner rather than later.
> Ignoring it certainly won't cause the vulnerability to go away.
> 
> Adam



[webkit-dev] Timing attacks on CSS Shaders (was Re: Security problems with CSS shaders)

2011-12-03 Thread Adam Barth
On Mon, Oct 24, 2011 at 9:51 PM, Adam Barth  wrote:
> Personally, I don't believe it's possible to implement this feature
> securely, at least not using the approach prototyped by Adobe.
> However, I would love to be proven wrong because this is certainly a
> powerful primitive with many use cases.

I spent some more time looking into timing attacks on CSS Shaders.  I
haven't created a proof-of-concept exploit, but I believe the current
design is vulnerable to timing attacks.  I've written up a blog post
explaining the issue:

http://www.schemehostport.com/2011/12/timing-attacks-on-css-shaders.html

Jonas Sicking seems to have a similar concern:

https://twitter.com/#!/SickingJ/status/143161375823380480

It's probably worth addressing this concern sooner rather than later.
Ignoring it certainly won't cause the vulnerability to go away.

Adam