On Fri, Aug 09, 2013 at 09:14:51AM -0700, Paul Querna wrote:
> In this case, I don't know if any of the proposed mitigations help;
> I'd love to have an easy way to validate that, so we could bring data
> to the discussion: if it increases the attack time by multiple hours
> and causes a <1% performance drop, isn't that the kind of thing that
> is useful?

I sympathise with Stefan, but I agree we should do something if we can 
find something cheap, effective and reliable.

Length hiding seems the most promising avenue.  The paper notes that 
simply adding rand(0..n) bytes to the response does not prevent the 
attack; it only increases the cost (time/requests) of executing it.
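
For illustration, a minimal sketch of that idea (the X-Padding header 
name and the maximum of 256 bytes are my assumptions, nothing httpd 
does today):

    #include <stdio.h>
    #include <stdlib.h>

    /* Sketch: append a header of rand(0..N) padding bytes to each
     * response.  N = 256 is arbitrary; per the paper this only
     * multiplies the requests the attacker needs, it does not stop
     * the attack. */
    static void emit_padding_header(FILE *out)
    {
        int n = rand() % 257;              /* 0..256 padding bytes */
        fprintf(out, "X-Padding: ");
        for (int i = 0; i < n; i++)
            fputc('X', out);
        fputs("\r\n", out);
    }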

Adding a random number of leading zeroes to the chunk-size line would 
perhaps be the most reliable option (i.e. least likely to have interop 
issues), though it can only introduce relatively small variability in 
the total response length.  We could maybe add 0-5 leading zeroes per 
chunk safely?  Possibly even that breaks some client already.  It's 
probably not effective.
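
Roughly what I have in mind, as a plain-C sketch (the helper name is 
made up; real code would live in the chunk filter):

    #include <stdio.h>
    #include <stdlib.h>

    /* Sketch: format a chunk-size line with 0-5 random leading
     * zeroes.  At most 5 extra bytes per chunk, so the total
     * variability stays small, which is why this is probably not
     * effective on its own. */
    static int chunk_size_line(char *buf, size_t bufsz, size_t chunk_len)
    {
        int nzeros = rand() % 6;           /* 0..5 leading zeroes */
        return snprintf(buf, bufsz, "%.*s%zx\r\n",
                        nzeros, "00000", chunk_len);
    }

So a 10-byte chunk might go out as "000a\r\n" one time and "a\r\n" the 
next.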

We could also randomly vary the maximum number of bytes of application 
data per TLS record using the 2.4 mod_ssl "coalesce" filter.  I'm not 
sure that actually produces length hiding at the right level, though, 
and it hurts performance.  (Any crypto experts listening?)
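
Something like this, very roughly; record_write() is a hypothetical 
stand-in for whatever the coalesce filter ultimately hands to OpenSSL, 
and the 4-16 KiB range is an arbitrary assumption:

    #include <stdlib.h>
    #include <stddef.h>

    extern void record_write(const char *buf, size_t n);  /* hypothetical */

    /* Sketch: split application data into TLS records whose payload
     * size varies randomly instead of always coalescing up to the
     * 16 KiB maximum.  More records per response means more overhead,
     * hence the performance cost. */
    static void write_varied(const char *data, size_t len)
    {
        while (len > 0) {
            size_t max = 4096 + (size_t)(rand() % 12288); /* ~4-16 KiB */
            size_t n = len < max ? len : max;
            record_write(data, n);
            data += n;
            len -= n;
        }
    }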

Really, though, this is a TLS problem.  Crypto experts should solve it 
in TLS! :)

Regards, Joe
