On 09/30/2010 10:41 AM, Thor Lancelot Simon wrote:
> On Wed, Sep 29, 2010 at 09:22:38PM -0700, Chris Palmer wrote:

>> Thor Lancelot Simon writes:
>>> a significant net loss of security, since the huge increase in computation
>>> required will delay or prevent the deployment of "SSL everywhere".

>> That would only happen if we (as security experts) allowed web developers to
>> believe that the speed of RSA is the limiting factor for web application
>> performance.

+1.

Why are multi-core GHz server-oriented CPUs providing hardware acceleration for AES rather than RSA?

There may be reasons: AES side channels, patents, marketing, etc.

But if RSA really were such a big limitation, you'd think hardware acceleration for it would be a selling point for server chips by now. Maybe in a sense it already is. What else are you going to do with that sixth core you stick behind the same shared main-memory bus?

> At 1024 bits, it is not.  But you are looking at a factor of *9* increase
> in computational cost when you go immediately to 2048 bits.  At that point,
> the bottleneck for many applications shifts, particularly those which are
> served by offload engines specifically to move the bottleneck so it's not
> RSA in the first place.
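The factor of 9 is at least the right order. An RSA private-key operation is dominated by modular exponentiation, whose cost grows roughly cubically with modulus size, so doubling from 1024 to 2048 bits predicts about 8x. A quick sanity check using nothing but Python's built-in three-argument pow() (not real RSA: no CRT, no optimized bignum library, so the absolute numbers are meaningless, but the growth rate is illustrative):

```python
# Illustrative timing of bare modular exponentiation (the core of an
# RSA private-key operation) at 1024- and 2048-bit modulus sizes.
import secrets
import time

def modexp_time(bits, reps=20):
    """Rough average wall-clock cost of one `bits`-bit modular exponentiation."""
    mod = secrets.randbits(bits) | (1 << (bits - 1)) | 1  # full-size, odd
    base = secrets.randbits(bits) % mod
    exp = secrets.randbits(bits)  # private exponents are full-size
    start = time.perf_counter()
    for _ in range(reps):
        pow(base, exp, mod)
    return (time.perf_counter() - start) / reps

t1024 = modexp_time(1024)
t2048 = modexp_time(2048)
print(f"1024-bit: {t1024 * 1000:.2f} ms, 2048-bit: {t2048 * 1000:.2f} ms, "
      f"ratio: {t2048 / t1024:.1f}x")
```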

I could be wrong, but I get the sense that there's not really a high proportion of sites which are:

A. currently running within an order of magnitude of maxing out server CPU utilization on 1024-bit RSA, and

B. using session resumption to its fullest (it eliminates the RSA operation whenever it can be used), and

C. in a position where an upgrade to raw CPU power would be a big problem for their budget.
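Point B matters more than it might look. As a back-of-the-envelope sketch (all numbers are made-up assumptions, purely for illustration): only connections that miss the resumption cache cost a full RSA handshake, so the RSA load scales with the miss rate, not the raw connection rate.

```python
# Toy model: RSA private-key operations per second as a function of the
# session-resumption hit rate. The 1000 conns/sec figure is an assumption.
def full_handshakes_per_sec(new_conns_per_sec, resumption_hit_rate):
    """Connections that still require a full RSA handshake."""
    return new_conns_per_sec * (1.0 - resumption_hit_rate)

conns = 1000.0  # assumed new TLS connections per second
for hit_rate in (0.0, 0.5, 0.9):
    rsa_ops = full_handshakes_per_sec(conns, hit_rate)
    print(f"resumption hit rate {hit_rate:.0%}: "
          f"{rsa_ops:.0f} RSA private-key ops/sec")
```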

OTOH, if it increased the latency and/or power consumption for battery-powered mobile client devices that could be noticeable for a lot of people.

> Also, consider devices such as deep-inspection firewalls or application
> traffic managers which must by their nature offload SSL processing in
> order to inspect and possibly modify data before application servers see
> it.  The inspection or modification function often does not parallelize
> nearly as well as the web application logic itself, and so it is often
> not practical to handle it in a distributed way and "just add more CPU".

The unwrapping of the SSL should parallelize just fine. I think the IT term for that is "scalability". We should be so lucky that all our problems could be solved by throwing more silicon at them!

Well, if there are higher-layer inspection methods (say, virus scanning) which don't parallelize, wouldn't they have the same issue without encryption?

> At present, these devices use the highest performance modular-math ASICs
> available and can just about keep up with current web applications'
> transaction rates.  Make the modular math an order of magnitude slower
> and suddenly you will find you can't put these devices in front of some
> applications at all.

Or the vendors get to sell a whole new generation of boxes again.

> This too will hinder the deployment of "SSL everywhere",

It doesn't bother me the least if deployment of dragnet-scale interception-friendly SSL is hindered. But you may be right that it has some kind of effect on overall adoption.

> and handwaving
> about how for some particular application, the bottleneck won't be at
> the front-end server even if it is an order of magnitude slower for it
> to do the RSA operation itself will not make that problem go away.

Most sites do run "some particular application". For them, it's either a problem, an annoyance, or not noticeable at all. The question is what proportion of situations will be noticeably impacted.

I imagine increasing the per-handshake costs from, say, 40 core-ms to 300 core-ms will have wildly varying effects depending on the system. It might not manifest as a linear increase of anything that people care to measure.
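As a rough sketch of why the effect varies (the 40 and 300 core-ms figures are just the assumed numbers above, not measurements): the per-core ceiling on full handshakes drops by 300/40 = 7.5x, but whether anyone notices depends on what fraction of total CPU time the handshake was in the first place.

```python
# Toy capacity model: if a full handshake costs C core-milliseconds, one
# core can do at most 1000/C full handshakes per second, ignoring all the
# other work it has to do. The 40 and 300 ms costs are assumptions.
def max_handshakes_per_core(cost_core_ms):
    return 1000.0 / cost_core_ms

before = max_handshakes_per_core(40.0)   # assumed 1024-bit-era cost
after = max_handshakes_per_core(300.0)   # assumed 2048-bit-era cost
print(f"{before:.1f} vs {after:.1f} full handshakes/sec/core "
      f"({before / after:.1f}x drop in the ceiling)")
```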

I agree, it does sound a bit hand-wavy though. :-)

- Marsh

---------------------------------------------------------------------
The Cryptography Mailing List
Unsubscribe by sending "unsubscribe cryptography" to majord...@metzdowd.com