On Dec 3, 2011, at 11:57 PM, Adam Barth wrote:

> On Sat, Dec 3, 2011 at 11:37 PM, Dean Jackson <d...@apple.com> wrote:
>> On 04/12/2011, at 6:06 PM, Adam Barth wrote:
>>> On Mon, Oct 24, 2011 at 9:51 PM, Adam Barth <aba...@webkit.org> wrote:
>>>> Personally, I don't believe it's possible to implement this feature
>>>> securely, at least not using the approach prototyped by Adobe.
>>>> However, I would love to be proven wrong because this is certainly a
>>>> powerful primitive with many use cases.
>>> 
>>> I spent some more time looking into timing attacks on CSS Shaders.  I
>>> haven't created a proof-of-concept exploit, but I believe the current
>>> design is vulnerable to timing attacks.  I've written up a blog post
>>> explaining the issue:
>>> 
>>> http://www.schemehostport.com/2011/12/timing-attacks-on-css-shaders.html
>> 
>> Thanks for writing this up.
>> 
>> I'm still interested to know what the potential rate of data leakage is.
>> Like I mentioned before, there are plenty of existing techniques that could
>> expose information to a timing attack. For example, SVG Filters can
>> manipulate the color channels of cross-domain images, and using CSS overflow
>> on an iframe could potentially detect rendering slowdowns as particular
>> colors/elements/images come into view.
> 
> My understanding is that shader languages allow several orders of
> magnitude greater differences in rendering times than these
> approaches.  However, as I wrote in the post, I don't have a
> proof-of-concept, so I cannot give you exact figures.

>> CSS shaders increase the rate of leakage
>> because they execute fast and can be tweaked to exaggerate the timing, but
>> one could imagine that the GPU renderers now being used in many of WebKit's 
>> ports
>> could be exploited in the same manner (e.g. discover a CSS "trick" that drops
>> the renderer into software mode).
> 
> I don't understand how those attacks would work without shaders.  Can
> you explain in more detail?  Specifically, how would an attacker
> extract the user's identity from a Facebook Like button?
> 
> In the CSS Shader scenario, I can write a shader that runs 1000x
> slower on a black pixel than on a white pixel, which means I can
> extract the text that accompanies the Like button.  Once I have the
> text, I'm sure you'd agree I'd have little trouble identifying the
> user.

To be clear, it's not the difference between white and black pixels, it's the 
difference between pixels with transparency and those without. And I've never 
seen a renderer that runs "1000x slower" when rendering a pixel with 
transparency. It may run a few times slower, and maybe even 10x slower. But 
that's about it.
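To make the disagreement concrete, here's a toy simulation (plain Python, not browser code; every constant is made up for illustration). It shows why the magnitude of the slowdown is the whole argument: with a hypothetical 1000x per-pixel cost difference, an attacker who can only time whole frames recovers a secret bit per pixel easily.

```python
import random

random.seed(0)  # seeded so the demo is reproducible

# Assumed numbers, not measurements from any real renderer.
FRAME_BASE_MS = 10.0   # fixed cost of rendering the rest of the frame
PER_PIXEL_MS = 0.001   # cost of shading one "fast" pixel
SLOW_FACTOR = 1000.0   # the hypothetical 1000x slowdown on a "slow" pixel
NOISE_MS = 1.0         # timing jitter per frame

def render_time(secret_bit: bool) -> float:
    """Simulated frame time with the shader applied to one target pixel."""
    pixel_cost = PER_PIXEL_MS * (SLOW_FACTOR if secret_bit else 1.0)
    return FRAME_BASE_MS + pixel_cost + random.uniform(0.0, NOISE_MS)

def recover_bit(secret_bit: bool, samples: int = 20) -> bool:
    # Average several frames, then threshold halfway between the two cases.
    avg = sum(render_time(secret_bit) for _ in range(samples)) / samples
    threshold = FRAME_BASE_MS + NOISE_MS / 2 + (PER_PIXEL_MS * SLOW_FACTOR) / 2
    return avg > threshold

secret = [random.random() < 0.5 for _ in range(64)]
guessed = [recover_bit(b) for b in secret]
print(sum(g == s for g, s in zip(guessed, secret)), "of", len(secret), "bits correct")
```

Drop SLOW_FACTOR to 10 and the per-frame difference shrinks to ~0.009 ms, far below the assumed jitter, and the recovery falls apart. That is exactly the point in dispute.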

I'm still waiting to see an actual "compelling" attack. The one you mention 
here:

        http://www.contextis.co.uk/resources/blog/webgl/poc/index.html

has never seemed very compelling to me. At the default "medium" quality setting 
the image still takes over a minute to be generated and it's barely 
recognizable. You can't read the text in the image or even really tell what the 
image is unless you have the reference next to it. For something big, like the 
WebGL logo, you can see the shape. But that's because it's a lot of big solid 
color. And of course the demo only captures black and white, so none of the 
colors in an image come through. If you turn it to its highest quality mode you 
can make out the blocks of text, but that takes well over 15 minutes to 
generate.

And this exploit is using WebGL, where the author has a huge amount of control 
over the rendering. CSS Shaders (and other types of rendering on the page) give 
you much less control over when rendering occurs, which makes it much more 
difficult to time the operations. I stand behind the statement, "... it seems 
difficult to mount such an attack with CSS shaders because the means to measure 
the time taken by a cross-domain shader are limited.", which you dismissed as 
dubious in your missive. With WebGL you can render a single triangle, wait for 
it to finish, and time it. Even if you tuned a CSS attack to a given browser 
whose rendering behavior you understand, it would take many frame times to 
determine the value of a single pixel and even then I think the accuracy and 
repeatability would be very low. I'm happy to be proven wrong about this, but 
I've never seen a convincing demo of any CSS rendering exploit.
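To put rough numbers on that (assumptions, not measurements): if CSS timing is quantized to whole 60 Hz frames and an attacker needs several frames of averaging per pixel, even an optimistic attack is slow.

```python
# Back-of-envelope only; both constants are assumptions.
FRAME_MS = 16.7        # one 60 Hz frame: the coarsest timing unit for CSS rendering
FRAMES_PER_PIXEL = 10  # assumed frames of averaging needed to call one pixel
PIXELS = 100 * 100

seconds = PIXELS * FRAMES_PER_PIXEL * FRAME_MS / 1000.0
print(f"{seconds:.0f} s (~{seconds / 60:.0f} minutes) to read a 100x100 block")
```

Compare that with WebGL, where a finished draw can be timed directly with sub-millisecond precision instead of once per frame.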

This all raises the question: what is an "exploit"? If I can reproduce with 90% 
accuracy a 100x100 block of RGB pixels in 2 seconds, then I think we'd all 
agree that we have a pretty severe exploit. But what if I can determine the 
color of a single pixel on the page with 50% accuracy in 30 seconds? Is that an 
exploit? Some would say yes, because that can give back information (half the 
time) about visited links. If that's the case, then our solution is very 
different from the one in the first case.
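Those two cases differ by orders of magnitude in raw rate. A quick calculation (treating each pixel as 24 bits and ignoring the accuracy discount, which only widens the gap) makes that plain:

```python
# Case 1: a 100x100 RGB block (24 bits/pixel) leaked in 2 seconds.
case1_rate = 100 * 100 * 24 / 2.0   # bits per second

# Case 2: one pixel (24 bits at best) leaked in 30 seconds.
case2_rate = 24 / 30.0              # bits per second

print(f"case 1: {case1_rate:,.0f} b/s, case 2: {case2_rate:.1f} b/s")
print(f"ratio: {case1_rate / case2_rate:,.0f}x")
```

And case 2's 50% accuracy discounts its 0.8 b/s even further, since a coin-flip guess carries almost no information.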

I think we need to agree on the problem we're trying to solve and then prove 
that the problem actually exists before trying to solve it. In fact, I think 
that's a general rule I live my life by :-)

-----
~Chris
cmar...@apple.com




_______________________________________________
webkit-dev mailing list
webkit-dev@lists.webkit.org
http://lists.webkit.org/mailman/listinfo.cgi/webkit-dev
