On Apr 1, 2014, at 9:05 AM, Nicholas Weaver <nwea...@icsi.berkeley.edu> wrote:

> 
> On Apr 1, 2014, at 5:39 AM, Olafur Gudmundsson <o...@ogud.com> wrote:
>> 
>> Doing these big jumps is the wrong thing to do, increasing the key size 
>> increases three things:
>>      time to generate signatures  
>>      bits on the wire
>>      verification time. 
>> 
>> I care more about verification time than bits on the wire (as I think that 
>> is a red herring).
>> Signing time increase is a self inflicted wound so that is immaterial. 
>> 
>>                 sign    verify    sign/s verify/s
>> rsa 1024 bits 0.000256s 0.000016s   3902.8  62233.2
>> rsa 2048 bits 0.001722s 0.000053s    580.7  18852.8
>> rsa 4096 bits 0.012506s 0.000199s     80.0   5016.8
>> 
>> Thus doubling the key size decreases the verification performance by 
>> roughly 70%. 
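A quick sanity check of that "roughly 70%" figure, using the verify/s column above (such numbers come from an `openssl speed rsa` run; exact values vary by machine):

```python
# verify/s column from the openssl speed table above
verify_per_sec = {1024: 62233.2, 2048: 18852.8, 4096: 5016.8}

def throughput_drop(from_bits, to_bits):
    """Fractional decrease in verification throughput when doubling the key."""
    return 1 - verify_per_sec[to_bits] / verify_per_sec[from_bits]

print(f"1024 -> 2048: {throughput_drop(1024, 2048):.0%} slower")  # ~70%
print(f"2048 -> 4096: {throughput_drop(2048, 4096):.0%} slower")  # ~73%
```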
>> 
>> KSKs' verification times affect the time to traverse the DNS tree, thus: 
>> if 1024 is too short, 1280 is fine for now; 
>> if 2048 is too short, a 2400-bit key is much harder to break and should be 
>> fine. 
>> 
>> just a plea for key use policy sanity not picking on Bill in any way.
> 
> NO!  FUCK THAT SHIT.  Seriously.

Watch your language; just because I'm calling you on the carpet for a 
simplistic world view does not mean you need to use foul language. 

> 
> There is far far far too much worrying about "performance" of crypto, in 
> cases like this where the performance just doesn't matter!
> 

I disagree strongly; you are only looking at part of the whole picture. 
Verification adds resolution latency, and validation adds extra queries, which 
means more latency.
        latency == unhappy eyeballs.  
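A back-of-envelope sketch of where that latency comes from on a cold cache (every number below is an illustrative assumption, not a measurement):

```python
# Cold-cache validated lookup: the resolver must fetch and verify
# DNSKEY/DS RRsets at each level of the tree it has not yet validated.
# All numbers below are illustrative assumptions.
verify_ms = 0.053        # one RSA-2048 verify (verify column above)
rtt_ms = 30.0            # assumed round trip to an authoritative server
rrsigs_per_level = 2     # e.g. DS and DNSKEY RRset signatures
levels = 3               # root -> TLD -> leaf zone

crypto_ms = levels * rrsigs_per_level * verify_ms
extra_query_ms = levels * rtt_ms   # extra DNSKEY/DS fetches on a cold cache
print(f"crypto {crypto_ms:.2f} ms + extra queries {extra_query_ms:.0f} ms")
```

Under these assumptions the extra queries, not the crypto, dominate the added latency.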

> Yes, you can only do 18K verifies per CPU per second for 2048b keys.  Cry me 
> a river.  Bite the bullet, go to 2048 bits NOW, especially since the servers 
> do NOT have resistance to roll-back-the-clock attacks.

Why not go to a good ECC curve instead? (Not sure which one, but not P-256 or P-384.) 

18K answers/second is a fraction of what larger resolver servers handle today 
during peak times; granted, not all answers need validation.
BUT you need to take into account that in large resolver clusters there can be 
a lot of redundancy in verification: if your query stream hits 5 different 
hosts, all of them may end up doing almost 5x the work, so adding servers does 
not scale. 
Yes, people can build anycast clusters in depth where only the front-end 
servers do verification and the back ends only answer queries, but that has 
different implications. 
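A toy simulation of that redundancy, under the assumption that queries are load-balanced uniformly across hosts with independent per-host caches:

```python
import random

# 5 independent caches behind a load balancer: a popular name ends up
# being verified once per host, not once per cluster. (Toy model;
# uniform random balancing is an assumption.)
random.seed(1)
hosts = 5
names = [f"name{i}.example" for i in range(100)]
caches = [set() for _ in range(hosts)]
verifies = 0
for _ in range(10_000):
    name, host = random.choice(names), random.randrange(hosts)
    if name not in caches[host]:       # cache miss on this host only
        caches[host].add(name)
        verifies += 1
print(verifies)  # approaches hosts * 100 = 500, not 100
```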

Remember, it is not average load that matters, it is peak load, even if the 
peak lasts only 30 minutes on one day of the year. 

Over the years I have been saying: use keys that are appropriate. For someone 
like Paypal it makes sense to have strong keys, but for my private domain, 
does it matter what key size I use? 
I do not buy the one-size-fits-all model for crypto. People should not use 
unsafe crypto, but one size fits all is not the right answer, just as not 
every zone needs a KSK and ZSK split. (I use a single 1280-bit RSA key with 
NSEC.) 

A real world analogy is that not everyone needs the same kind of car, some 
people need big cars, others small ones or even no car. 

Furthermore, using larger keys than your parent's is nonsensical, as the 
cheapest point of attack for key cracking simply moves to the parent. 
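The weakest-link argument in code form (zone names and key sizes below are made up for illustration):

```python
# A validated answer is only as strong as the weakest key in its chain
# of trust: forging any one link lets an attacker forge the final answer.
# Key sizes are hypothetical.
chain_bits = {"root KSK": 2048, "org ZSK": 1024, "example.org KSK": 4096}
weakest = min(chain_bits, key=chain_bits.get)
print(f"cheapest attack point: {weakest} at {chain_bits[weakest]} bits")
```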
 
> In a major cluster validating recursive resolver, like what Comcast runs with 
> Nominum or Google uses with Public DNS, the question is not how many verifies 
> it can do per second per CPU core, but how many verifies it needs to do per 
> second per CPU core.

I have no doubt that CPUs can keep up, but the point I was trying to make is 
that increasing key sizes by this big a jump invalidates people's assumptions 
about what the load is going to be in the near term. 

> 
> And at the same time, this is a problem we already know how to parallelize, 
> and which is obscenely parallel, and which also caches…

Do we? Some high-performance DNS software is still single-threaded, and many 
resolvers run in VMs with only a small number of cores exported to the VM. 

> 
> Lets assume a typical day of 1 billion external lookups for a major ISP 
> centralized resolver, and that all are verified.  Thats less 1 CPU core-day 
> to validate every DNSSEC lookup that day at 2048b keys.  

1B is low, due to low TTLs and the synthetic names used for transactions, and 
as I said before, it is peak load that matters, not average. 
DNSSEC processing is just one part of the whole processing model. 
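For what it's worth, the arithmetic behind both positions (the peak factor and verifies-per-answer are assumptions, not measured values):

```python
lookups_per_day = 1_000_000_000
avg_qps = lookups_per_day / 86_400       # ~11.6K queries/second average
peak_factor = 4                          # assumed peak-to-average ratio
verifies_per_answer = 2                  # assumed RRSIGs checked per answer
verifies_per_core = 18_852               # RSA-2048, one core (table above)

cores_needed = avg_qps * peak_factor * verifies_per_answer / verifies_per_core
print(f"average {avg_qps:,.0f} qps; ~{cores_needed:.1f} cores at peak")
```

A handful of cores at peak under these assumptions, though the margin shrinks quickly if peaks are sharper or answers carry more signatures.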

> 
> And yeah, DNS is peaky, but that's also why this task is being run on a 
> cluster already, and each cluster node has a lot of CPUs.

That costs money, and effort to operate.

        Olafur

_______________________________________________
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop
