Re: [DNSOP] Whiskey Tango Foxtrot on key lengths...

2014-04-02 Thread Phillip Hallam-Baker
On Tue, Apr 1, 2014 at 10:48 PM, Paul Hoffman paul.hoff...@vpnc.org wrote:

 On Apr 1, 2014, at 7:37 PM, Olafur Gudmundsson o...@ogud.com wrote:

  Why not go to a good ECC instead ? (not sure which one, but not P256 or
 P384)

 Why not P256 or P384? They are the most-studied curves. Some of the newer
 curves do have advantages, but they are also newer.


Same answer as always: a patent troll with the most worthless claim ever is
still going to cost $4 million to get a declaratory judgment against.

RIM is on the verge of bankruptcy and it is very likely the patents will be
acquired by a troll.

And the new tactic is to go after the customers, not the technology
providers. So without a declaratory judgment we would be swapping a
technology we know we have no problem with for one with an expensive
liability. So we definitely need a declaratory judgment.


If the size of the signatures versus the packet size were the issue, we could
go to DSA. It has some implementation issues, but I'll take 2048-bit DSA over
1024-bit RSA.

Alternatively, we can forget the ICANN root as the primary validation path
and have people publish a 2048-bit cert in a WebPKI-validated chain in their
zone. We already have the records for that.

-- 
Website: http://hallambaker.com/
___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


[DNSOP] Current DNSOP thread and why 1024 bits

2014-04-02 Thread Edward Lewis
After a break of a few months I rejoined the DNSOP WG mail list. After the very 
first message I sent a complaint to the chairs over the tone and language.  I 
feel I should send a note about that to the open list itself.

It’s not that I have a puritan tongue.  It’s that such low-brow language,
from the subject line’s oblique reference to an obscene expression to the
outright vulgar expression later in the thread, calls into doubt the level of
professionalism of this WG.  It’s hard to defend the work that results from a
group that exhibits juvenile behavior.  I realize that I am perhaps reacting
to the writing of one person (perhaps; I never tried to verify the
originator) and perhaps to the interaction of a small group of individuals -
even if some are well-intentioned here.  But I think that it is up to the WG
to defend its stature and not accommodate this low level of discourse.

As to the matter at hand: I have been an operator of DNS services for about
the past decade, designed the initial rollout of DNSSEC in places, and ran a
measurement experiment for a few years to quantify the choices.

I found that there are two primary reasons why 1024 bits is used in zone 
signing keys.

One - peer pressure.  Most other operators start out with 1024 bits.  I know
of some cases where operators wanted to choose other sizes but were told to
“follow the flock.”

Two - it works.  No one has ever demonstrated a failure of a 1024 bit key to 
provide as-expected protection.

From these two main reasons (and you’ll notice nothing about cryptographic
strength in there) a third very important influence must be understood - the
tools operators use more or less nudge operators toward the 1024-bit size,
perhaps via the default settings, or perhaps in the tutorials and
documentation that operators read.

Why do operators seem to ignore the input of cryptographers?  I can tell you
from personal experience.  Cryptographers, whenever given a straight question
on DNSSEC, have failed to give straight answers.  As is evident in this
thread, theoretical statements are made and the discussion veers off into
recursive (really cache-protection) behaviors, but never winds up with a
result that is clearly presented and defended.  In my personal experience,
when I recommended 1024 bits, it was after consulting cryptographic experts
who would just waffle on what size was needed, and after falling back on what
we did in workshops 15 years ago.

What does it matter from a security perspective?  DNS messages are
short-lived.  It’s not like we are encrypting a novel to be kept secret for
100 years.  With zone signing keys lasting a month, six months, or so, and
the ability to disallow them fairly quickly, what does this so-called 80- or
112-bit strength difference matter?  Yes, I understand the doomsday scenario
that someone might “guess” my private key and forge messages.  But an attack
is not as simple as forging messages; it takes the ability to inject them
too.  That can be done - but chaining all these things together makes the
attack that much less prevalent.

Saving space and time does matter.  Roughly half the operators I studied
would include a backup key on-line because “they could” with the shorter
length.  And performance does matter - ask the web browser people.

It nets out to this - cryptographers urge longer lengths but can’t come up
with a specific, clearly rationalized recommendation.  DNS operators want
smaller; web performance wants quicker.  Putting all that together, the
smaller key size makes sense.  In operations.

PS - Yes, some operators do use longer keys.  Generally, those that do have
decent “excuses” (read: unusual use cases), and so they are not used in the
peer-pressure arguments.



Re: [DNSOP] Current DNSOP thread and why 1024 bits

2014-04-02 Thread Nicholas Weaver
The profanity is deliberate.  The same discredited performance arguments have
come up for a decade or more.  It gets very frustrating to see the same
ignorance again and again.


On Apr 2, 2014, at 6:30 AM, Edward Lewis edlewis.subscri...@cox.net wrote:
 From these two main reasons (and you’ll notice nothing about cryptographic 
 strength in there) a third very important influence must be understood - the 
 tools operators use more or less nudge operators to the 1024 bit size.  
 Perhaps via the default settings or perhaps in the tutorials and 
 documentation that is read.
 
 Why do operators seem to ignore the input of cryptographers?  I can tell you 
 that from personal experience.  Cryptographers, whenever given a straight 
 question on DNSSEC have failed to give straight answers.  As is evident in 
 the thread, theoretical statements are made and the discussion will veer off 
 into recursive (really cache-protection) behaviors, but never wind up with a 
 result that is clearly presented and defended.  In my personal experience, 
 when I recommended 1024 bits, it was after consulting cryptographic experts 
 who would just waffle on what size is needed and then relying on what we did 
 in workshops 15 years ago.

Well, it’s because, for the most part, cryptographers do seem to understand
that DNSSEC is a bit of a joke when it comes to actually securing
conventional DNS records.

And the NIST crypto recommendations have existed for years.  1024-bit RSA was
deprecated in 2010 and eliminated completely in 2013.  There may be doubt
about NIST now, but two years ago, ignoring the standard recommendations was
foolish.

 What does it matter from a security perspective?  DNS messages are short 
 lived.  It’s not like we are encrypting a novel to be kept secret for 100 
 years.  With zone signing keys lasting a month, 6 months, or so, and the 
 ability to disallow them fairly quickly, what’s the difference between this 
 so-called 80 or 112 bit strength difference?  Yes, I understand the doomsday 
 scenario that someone might “guess” my private key and forge messages.  But 
 an attack is not as simple as forging messages, it takes the ability to 
 inject them too.  That can be done - but chaining all these things together 
 just makes the attack that much less prevalent.

Do your resolvers have protection against roll-back-the-clock attacks?  If
not, you do not gain protection from the short-lived nature of the ZSK for
the root, .com, etc. (well, really, a few months - they don't roll the actual
key every two weeks).

 Saving space and time does matter.  Roughly half the operators I studied 
 would include a backup key on-line because “they could” with the shorter 
 length.  And performance does matter - ask the web browser people.

Amdahl's law is something that computer science in general always seems to
forget.  The performance impact, both in size and cryptographic overhead, of
shifting to 2048-bit keys is negligible in almost all cases.

And the step function in DNS cost, the "Internet can't do fragments" problem,
doesn't really come into play at 2048 bits.

 It nets to this - cryptographers urge for longer lengths but can’t come up 
 with a specific, clearly rational, recommendation.  

Yes they have.  2048b.

 DNS operators want smaller, web performance wants quicker.  Putting all that 
 together, the smaller key size makes sense.  In operations.

The real dirty secret.  

DNSSEC is actually useless for, well, DNS.  A records and the like do not 
benefit from cryptographic protection against a MitM adversary, as that 
adversary can just as easily attack the final protocol.

Thus the only actual use for DNSSEC is not protecting A records, but protecting 
cryptographic material and other similar operations: DANE is probably the best 
example to date, but there is also substantial utility in, e.g., email keys.  

DNSSEC is unique in that it is a PKI with constrained and enforced path of 
trust along existing business relationships.

Building the root of this foundation on the sand of short keys, keys that we
know are well within range of nation-state adversaries, at the root and TLDs,
is a recipe for ensuring that DNSSEC is, rightly, treated by the rest of the
world as a pointless joke.


 PS - Yes, some operators do use longer keys.  Generally, those that do have 
 decent “excuses” (read: unusual use cases) and so they are not used in the 
 peer pressure arguments.

And that does no good unless the upstream in the path of trust, starting at
the root, actually uses real-length keys.

The difference between 2^80 and 2^100 effort is huge.  2^80 is in range today 
of nation states, and near the range of academics.
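That gap is easy to underestimate; a quick sketch of the arithmetic, taking
the 2^80 and 2^100 figures above as given:

```python
# How much more work is a 2^100 attack than a 2^80 attack?
# If effort scales with the count of basic operations, the gap is a flat
# multiplicative factor of 2^(100 - 80).
effort_80 = 2**80
effort_100 = 2**100
ratio = effort_100 // effort_80
print(ratio)  # 1048576, about a million times the work
```

So an attack that is borderline feasible at 2^80 is roughly a million times
out of reach at 2^100, which is why these strength categories matter more
than the small-looking exponent difference suggests.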

--
Nicholas Weaver                 "it is a tale, told by an idiot,
nwea...@icsi.berkeley.edu        full of sound and fury,
510-666-2903                     signifying nothing"
PGP: http://www1.icsi.berkeley.edu/~nweaver/data/nweaver_pub.asc




Re: [DNSOP] Whiskey Tango Foxtrot on key lengths...

2014-04-02 Thread Stephane Bortzmeyer
On Tue, Apr 01, 2014 at 10:37:54PM -0400,
 Olafur Gudmundsson o...@ogud.com wrote 
 a message of 158 lines which said:

 Furthermore using larger keys than your parents is non-sensical as
 that moves the cheap point of key cracking attack.

Mostly true, but still too strong a statement, in my opinion. This is
because, if you are an attacker and have managed to crack a key somewhere
between the root (inclusive) and your real target, the higher you are in
the tree, the more things you have to emulate or simulate below. If you are
after ogud.com and you have cracked the root key, you need either to create
a false DS for .com (and then the resolver will croak on most .com
responses, detecting that something is wrong) or a false NSEC proving that
.com is not signed (but the fact that .com is signed is rapidly cached in
validating resolvers).

So, yes, basically you are right: since DNSSEC is tree-based, the security
of the weakest node is what matters. But, in practice, exploiting a cracked
key higher in the tree is a bit more difficult than it seems.




Re: [DNSOP] Whiskey Tango Foxtrot on key lengths...

2014-04-02 Thread Paul Hoffman
On Apr 1, 2014, at 8:02 PM, Olafur Gudmundsson o...@ogud.com wrote:

 
 On Apr 1, 2014, at 10:48 PM, Paul Hoffman paul.hoff...@vpnc.org wrote:
 
 On Apr 1, 2014, at 7:37 PM, Olafur Gudmundsson o...@ogud.com wrote:
 
 Why not go to a good ECC instead ? (not sure which one, but not P256 or 
 P384) 
 
 Why not P256 or P384? They are the most-studied curves. Some of the newer 
 curves do have advantages, but they are also newer.
 
 --Paul Hoffman
 
 
 The verification performance is bad: P256 takes 24x longer to verify a 
 signature than a 2048-bit RSA key. 
 Studied != good performance

I believe that there are no elliptic curves that get *much* better
verification speeds than P256/P384. Some are a bit faster, but not even close
to RSA-2048. From your question "Why not go to a good ECC instead", I assumed
you cared about predictability against attacks and key length, which are the
strengths of elliptic curve cryptography.

--Paul Hoffman


Re: [DNSOP] key lengths for DNSSEC

2014-04-02 Thread Ted Lemon
On Apr 2, 2014, at 10:19 AM, Jim Reid j...@rfc1035.com wrote:
 My gut feel is large ZSKs are overkill because the signatures should be 
 short-lived and the keys rotated frequently. Though the trade-offs here are 
 unclear: is a 512-bit key that changes daily (say) better than a 2048-bit key 
 that gets rotated once a week/month/whatever? Remember too we're not talking 
 about keys to launch ICBMs or authenticate billion dollar transactions. I 
 doubt it matters if a previous key can be cracked provided it gets retired 
 before the bad guys can throw enough CPU-years to break it.

The problem with the way you've phrased this question is that there does not
seem to be agreement among the parties to this discussion about whether old
keys matter.  If you think they do, you need longer keys.  If you think they
don't, you need shorter keys.  So rather than talking about key lengths
first, it would be more productive to come to a consensus about which threat
model we are trying to address.



Re: [DNSOP] key lengths for DNSSEC

2014-04-02 Thread Joe Abley

On 2 Apr 2014, at 10:26, Ted Lemon ted.le...@nominum.com wrote:

 The problem with the way you've phrased this question is that there does not 
 seem to be agreement amongst the parties to this discussion whether old keys 
 matter.   If you think they do, you need longer keys.   If you think they 
 don't, you need shorter keys.   So rather than talking about key lengths 
 first, it would be more productive to come to a consensus about which threat 
 model we are trying to address.

I'm trying to understand the time-based attack, but I'm not seeing it.

The gist seems to be that if I can turn back the clock on a remote resolver, I 
can pollute its cache with old signatures (made with an old, presumably 
compromised key) and the results will appear to clients of the resolver to 
validate.

This sounds plausible, but without administrative compromise of the remote 
resolver (in which case you have much simpler options) this attack seems to 
involve:

1. subverting sufficient NTP responses over a long enough period to cause the
remote resolver's clock to turn back in time (a long period is suggested
because many/most? implementations refuse large steps in time, and hence many
smaller steps might be required)

2. replacing every secure response that would normally arrive at the resolver
with a new one that will validate properly at whatever the resolver's idea of
the time and date is (or, if not every one, sufficiently many that the client
population doesn't see validation failures for non-target queries). This
potentially involves having factored or otherwise recovered every ZSK and KSK
that might be used to generate a signature in a response to the resolver, for
the time period between now and then.

This seems like an intractably difficult thing to accomplish.

What am I missing?


Joe


[DNSOP] mailing list behavior Re: Current DNSOP thread and why 1024 bits

2014-04-02 Thread Suzanne Woolf
Colleagues,

We've noted the tone some participants in the key length discussion have taken, 
and the complaints about it. 

We're handling it off-list, in accordance with IETF mailing list moderation
policy.

Understood that this is a frustrating topic, but less heat, more light, please.


thanks,
Suzanne  Tim


Re: [DNSOP] key lengths for DNSSEC

2014-04-02 Thread  Roy Arends
On 02 Apr 2014, at 15:19, Jim Reid j...@rfc1035.com wrote:

 There's been a lot of noise and very little signal in the recent discussion.
 
 It would be helpful if there was real data on this topic. Is an RSA key of N 
 bits too weak or too strong? I don't know. Is N bits good enough? 
 Probably. Change the algorithm and/or value of N to taste.
 
 My gut feel is large ZSKs are overkill because the signatures should be 
 short-lived and the keys rotated frequently. Though the trade-offs here are 
 unclear: is a 512-bit key that changes daily (say) better than a 2048-bit key 
 that gets rotated once a week/month/whatever? Remember too we're not talking 
 about keys to launch ICBMs or authenticate billion dollar transactions. I 
 doubt it matters if a previous key can be cracked provided it gets retired 
 before the bad guys can throw enough CPU-years to break it.
 
 However I'm just going on my own gut feel and common sense which could be 
 wrong. Large keys might well be advisable at the root and/or for TLD KSKs. 
 But so far there does not appear to have been much science or engineering on 
 just how large those keys should be or how frequently they change. So in the 
 absence of other firm foundations the established wisdom becomes do what 
 gets done for the root.
 
 If there is a threat or risk here, please present solid evidence. Or, better 
 still, an actual example of how any DNSSEC key has been compromised and then 
 used for a real-world (or proof of concept) spoofing attack. 
 
 
 BTW, the apparent profanity on an earlier thread was annoying because it 
 didn't spell whisky correctly. As every drinker of fine single malt knows. 
 :-)

:-)

Jim,

Just a thought that occurred to me. Crypto-mafia folk are looking for a
minimum (i.e. at least so many bits, otherwise it's insecure). DNS-mafia folk
are looking for a maximum (i.e. at most so many bits, otherwise
fragmentation/fallback to TCP). It seems that the crypto-mafia’s minimum
might actually be larger than the DNS-mafia’s maximum.

As an example (dns-op perspective). 

Average case: 2 keys (KSK/ZSK) + 1 sig (by KSK) with 2048-bit keys is at
least 768 bytes (and then some).
Roll case: 3 keys (2 KSK / 1 ZSK) + 2 sigs (by KSK) with 2048-bit keys is at
least 1280 bytes (and then some).
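Those byte counts can be reproduced with back-of-envelope wire-format
arithmetic (a sketch: the helpers below approximate DNSKEY and RRSIG RDATA
sizes for RSA per RFC 4034, ignoring owner names, headers, and name
compression, and assuming a 3-byte public exponent and a short signer name):

```python
def rsa_dnskey_rdata(bits, exponent_len=3):
    # flags(2) + protocol(1) + algorithm(1) + exponent-length(1)
    # + public exponent + modulus
    return 2 + 1 + 1 + 1 + exponent_len + bits // 8

def rrsig_rdata(bits, signer_name_len=13):
    # 18 bytes of fixed fields + signer's name + signature
    return 18 + signer_name_len + bits // 8

average = 2 * rsa_dnskey_rdata(2048) + rrsig_rdata(2048)       # 2 keys + 1 sig
rollover = 3 * rsa_dnskey_rdata(2048) + 2 * rrsig_rdata(2048)  # 3 keys + 2 sigs
print(average, rollover)  # 815 1366
```

Both figures already land above the 768- and 1280-byte marks before any
per-record overhead is added, which is the "and then some".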

Then there is this section in SAC063, “Interaction of Response Size and IPv6
Fragmentation”, which relates to response sizes larger than 1280 bytes, IPv6,
and blackhole effects:

https://www.icann.org/en/groups/ssac/documents/sac-063-en.pdf

Hope this helps

Roy







Re: [DNSOP] key lengths for DNSSEC

2014-04-02 Thread Phil Regnauld
Joe Abley (jabley) writes:
 
 
 1. subverting sufficient NTP responses over a long enough period to cause 
 the remote resolver's clock to turn back in time (a long period is suggested 
 because many/most? implementations refuse large steps in time, and hence 
 many smaller steps might be required)

Many systems will run ntpdate on startup.

 This seems like an intractably difficult thing to accomplish.

It does seem far fetched.

 What am I missing?

	There may be good reasons to increase key length, but this is not one
	I'm worried about (then again, no one worried about source port
	randomization before 2008 :)

P.



Re: [DNSOP] Current DNSOP thread and why 1024 bits

2014-04-02 Thread Colm MacCárthaigh
On Wed, Apr 2, 2014 at 6:30 AM, Edward Lewis edlewis.subscri...@cox.netwrote:

 I found that there are two primary reasons why 1024 bits is used in zone
 signing keys.

  One - peer pressure.  Most other operators start out with 1024 bits.  I
 know of some cases where operators wanted to choose other sizes but were
 told to follow the flock.

 Two - it works.  No one has ever demonstrated a failure of a 1024 bit key
 to provide as-expected protection.


Cryptographic failures are often undemonstrated for decades. If a state
actor has broken 1024b keys, they're unlikely to advertise that, just use
it now and then as quietly as they can.

Secondly, the application of signatures in DNS and the nature of the DNS
protocol itself presents significant risks that don't make a
straightforward comparison easy.

Suppose your goal is to intercept traffic, and you'd like to cause
www.example.com, a signed domain, to resolve to an IP address that you
control.  Now suppose you also happen to have a /16, not unreasonable for a
large actor - small even. If you can craft a matching signature for
www.example.com with even one of your 2^16 IP addresses, you've succeeded.
You don't have to care which particular IP address you happened to craft a
matching signature for.  This property makes it easier to sieve for
matching signatures.
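Independent of any particular algorithm's details, the counting argument
above reduces to a standard multi-target speedup (illustrative numbers only;
the 2^80 figure is a placeholder cost, not a claim about a specific scheme):

```python
# If any one of T outcomes counts as a hit, the expected brute-force work
# drops by roughly a factor of T versus hitting one exact target.
T = 2**16                    # acceptable addresses in a /16
single_target_work = 2**80   # illustrative cost of one exact match
any_target_work = single_target_work // T
print(any_target_work)  # 18446744073709551616, i.e. 2**64
```

In other words, controlling a large address block shaves a meaningful factor
off the effective security margin of an already-short key.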

From these two main reasons (and you'll notice nothing about cryptographic
 strength in there) a third very import influence must be understood - the
 tools operators use more or less nudge operators to the 1024 bit size.
  Perhaps via the default settings or perhaps in the tutorials and
 documentation that is read.


Do you think that this would be as relevant to the root zone and large TLDs
though?

-- 
Colm


Re: [DNSOP] key lengths for DNSSEC

2014-04-02 Thread Christopher Morrow
On Wed, Apr 2, 2014 at 11:19 AM,  Roy Arends r...@dnss.ec wrote:

 Just a thought that occured to me. Crypto-maffia folk are looking for a 
 minimum (i.e. at least so many bits otherwise its insecure). DNS-maffia folk 
 are looking for a maximum (i.e. at most soo many bits otherwise 
 fragmentation/fallback to tcp). It seems that the cryptomaffia’s minimum 
 might actually be larger than the DNS-maffia’s maximum.

 As an example (dns-op perspective).

 Average case: 2 keys (KSK/ZSK) + 1 sig (by KSK) with 2048 bit keys is at 
 least 768 bytes (and then some).
 Roll case: 3 keys(2 KSK/1 ZSK) + 2 sig (by KSK) with 2048 bit keys is at 
 least 1280 bytes (and then some).


Part of Jim's query is of interest:
  Where are the requirements? (boiled down some, to that, I think)

There's also a point I asked about previously in Jim's note:
  Where's the PoC?

I don't think anyone's going to change anything without your referred-to
2008-like incident... and without some requirements, at least as a swag,
right?

I'd expect the key-length discussion relates pretty closely to:
  If I can factor the key in less time than you will rotate keys...

So, how often do the keys rotate? At least every 30 days? Then you have to
be 'secure' for longer than 30 days of compute-resource time, right?

 Then there is this section in SAC63: Interaction of Response Size and IPv6 
 Fragmentation”

 Which relates to response sizes larger than 1280 and IPv6 and blackhole 
 effects.

 https://www.icann.org/en/groups/ssac/documents/sac-063-en.pdf

good times :(



Re: [DNSOP] key lengths for DNSSEC

2014-04-02 Thread Christopher Morrow
On Wed, Apr 2, 2014 at 11:31 AM, Christopher Morrow
morrowc.li...@gmail.com wrote:
 On Wed, Apr 2, 2014 at 11:19 AM,  Roy Arends r...@dnss.ec wrote:

 Just a thought that occurred to me. Crypto-mafia folk are looking for a 
 minimum (i.e. at least so many bits, otherwise it's insecure). DNS-mafia 
 folk are looking for a maximum (i.e. at most so many bits, otherwise 
 fragmentation/fallback to TCP). It seems that the crypto-mafia’s minimum 
 might actually be larger than the DNS-mafia’s maximum.

 As an example (dns-op perspective).

 Average case: 2 keys (KSK/ZSK) + 1 sig (by KSK) with 2048 bit keys is at 
 least 768 bytes (and then some).
 Roll case: 3 keys(2 KSK/1 ZSK) + 2 sig (by KSK) with 2048 bit keys is at 
 least 1280 bytes (and then some).


 Part of jim's query is of interest:
   Where are the requirements? (boiled down some to that I think)

 There's also a point I asked about previously in jim's note:
   Where's the POC at?

 I don't think anyone's going to change anything without your referred
 to 2008-like incident... and without some requirements at least as a
 swag, right?

Oops, apologies: Phil's 2008 reference.


 I'd expect the key length discussion relates pretty closely to:
   If I can factor the key in less time than you will rotate keys...

 So, how often do the keys rotate? at least every 30 days? So you have
 to be able to be 'secure' longer than 30 days of compute resources
 time, right?

 Then there is this section in SAC63: Interaction of Response Size and IPv6 
 Fragmentation”

 Which relates to response sizes larger than 1280 and IPv6 and blackhole 
 effects.

 https://www.icann.org/en/groups/ssac/documents/sac-063-en.pdf

 good times :(



Re: [DNSOP] key lengths for DNSSEC

2014-04-02 Thread Ted Lemon
On Apr 2, 2014, at 10:49 AM, Joe Abley jab...@hopcount.ca wrote:
 This seems like an intractably difficult thing to accomplish.

Bear in mind that all you _really_ have to do is get a bogus ZSK with the
current time into the resolver, which you may be able to do with some clever
NTP shenanigans over a relatively short timescale.  But yeah, this isn't
likely to be useful except in cases where a device has been powered off,
doesn't have an accurate battery-backed clock, and does DNSSEC, which is a
weird set of circumstances.



Re: [DNSOP] Current DNSOP thread and why 1024 bits

2014-04-02 Thread Paul Wouters

On Wed, 2 Apr 2014, Nicholas Weaver wrote:


Well, it's because, for the most part, cryptographers do seem to understand
that DNSSEC is a bit of a joke when it comes to actually securing
conventional DNS records.


Funny, the cryptographers I talk to, like those at the University of
Waterloo, have always told me our key sizes are giant for the deployment
time we use them, and that we're far above the minimum.


And the NIST crypto recommendations have existed for years.  1024b RSA was 
deprecated in 2010, eliminate completely in 2013.  There may be doubt in NIST 
now, but 2 years ago, to ignore the standard recommendations is foolish.


I love how people now cherry-pick NIST. If it agrees with them, they quote
it. If it disagrees with them, they shout that the NSA subverted it.


Do your resolvers have protection against roll back the clock attacks?  If not, you do 
not gain protection from the short-lived (well, really, a few month, they don't roll 
the actual key every 2 weeks) nature of the ZSK for root, .com, etc.


Yes, there is RFC 5011 unbound-anchor verification on some of them. And ntpd
doesn't let you jump months in a second either, so only systems that run
ntpdate before ntpd are vulnerable, and only those systems without a
battery-backed clock should do that.


Saving space and time does matter.  Roughly half the operators I studied would 
include a backup key on-line because “they could” with the shorted length.  And 
performance does matter - ask the web browser people.


Because we want to make security decisions based on a 1ms latency browser war?


Amdahl's law seems to be something that computer science in general always 
seems to forget.  The performance impact, both in size and cryptographic 
overhead, to shift to 2048b keys is negligible in almost all cases.


Mind you, I agree we can move to 2048 for ZSKs.


And the step function in DNS cost, the Internet can't do fragments problem, 
doesn't really come into play at 2048b.


I don't think the entire "cannot do fragments" issue is really still a big
problem. Networks using/depending on 8.8.8.8 have actually cleaned up a lot
of the transport issues involved, I think.


It nets to this - cryptographers urge for longer lengths but can’t come up with 
a specific, clearly rational, recommendation.


Yes they have.  2048b.


Actually my Waterloo cryptographer said you can pick something smaller
than 2048 and larger than 1024 too.


DNSSEC is actually useless for, well, DNS.  A records and the like do not 
benefit from cryptographic protection against a MitM adversary, as that 
adversary can just as easily attack the final protocol.


You are mixing up local and global attacks. For example, consider Brazil's
attack, where the majority of cable modems had the DNS settings handed out
by their DHCP server changed. If hosts had been running DNSSEC validation
instead of relying on their ISP as forwarders, this massive attack affecting
millions would have failed. So your statement is rather simplistic and
wrong.


Building the root of this foundation on the sand of short keys, keys that we 
know that are well within range for nation-state adversaries, from the root and 
TLDs is a recipe to ensure that DNSSEC is, rightly, treated by the rest of the 
world as a pointless joke.


Actually, I would LOVE to see a rogue entry signed by the root ZSK. It's a
giant Enigma problem if the USG can make these. Please do invite me to the
root-servers meeting following that exposure.

I don't think there is a reason not to switch to 2048 for ZSKs. But I'm
basing that on there being no significant transport obstacles left.

Paul



Re: [DNSOP] Current DNSOP thread and why 1024 bits

2014-04-02 Thread S Moonesamy

Hi Ed,
At 06:30 02-04-2014, Edward Lewis wrote:
I found that there are two primary reasons why 1024 bits is used in 
zone signing keys.


 One - peer pressure.  Most other operators start out with 1024 
bits.  I know of some cases where operators wanted to choose other 
sizes but were told to follow the flock.


Two - it works.  No one has ever demonstrated a failure of a 1024 
bit key to provide as-expected protection.


My short comment would be "yes" to the above.

The problem might be the "follow the flock" part, as there is an assumption 
that someone looked at the details before choosing the 1024-bit key.


What does it matter from a security perspective?  DNS messages are 
short lived.  It's not like we are encrypting a novel to be kept 
secret for 100 years.  With zone signing keys lasting a month, 6 
months, or so, and the ability to disallow them fairly quickly, 
what's the difference between this so-called 80 or 112 bit strength 
difference?  Yes, I understand the doomsday scenario that someone 
might guess my private key and forge messages.  But an attack is 
not as simple as forging messages, it takes the ability to inject 
them too.  That can be done - but chaining all these things together 
just makes the attack that much less prevalent.


For context, the discussion is about a ZSK.  There is a theory that 
it would take under a year and several million (U.S.) dollars to 
break 1024 bits.  It has been said (not on this mailing list) that an 
organization could do it within a shorter time.  It's not a good idea 
to wait for the demonstration as it can raise concerns about the 
entity which chose the key.


As a general comment, I tried to find out which NIST recommendations are 
being discussed with respect to DNSSEC.  The requirements mentioned by Joe 
Abley refer to NIST SP 800-78.  That document is about "Cryptographic 
Algorithms and Key Sizes for Personal Identity Verification".  Is that the 
NIST recommendation on which this discussion is based?


Regards,
S. Moonesamy  




Re: [DNSOP] key lengths for DNSSEC

2014-04-02 Thread Evan Hunt
On Wed, Apr 02, 2014 at 11:33:20AM -0400, Ted Lemon wrote:
 Bear in mind that all you _really_ have to do is get a bogus ZSK with the
 current time into the resolver, which you may be able to do with some
 clever NTP shenanigans over a relatively short timescale.   But yeah,
 this isn't likely to be useful except in cases where a device has been
 powered off, doesn't have an accurate battery-backed-up clock, and does
 DNSSEC, which is a weird set of circumstances.

I predict that will be a less weird set of circumstances in a year or
so: dnsmasq now has DNSSEC validation in beta.

(Tony Finch has a nifty idea to replace ntpdate with a quorum of tlsdate
responses; it might still be subvertible but it would be a much harder
nut to crack. https://git.csx.cam.ac.uk/x/ucs/u/fanf2/temporum.git)
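[A minimal sketch of the quorum idea, for illustration only: the function name, the three-source minimum, and the 60-second tolerance are assumptions, not how temporum itself works.]

```python
import statistics

def quorum_time(samples, min_sources=3, tolerance=60):
    """Pick a trusted time from several independently fetched
    TLS-derived timestamps (unix seconds).  Returns the median of
    the agreeing samples if at least min_sources of them fall
    within `tolerance` seconds of the overall median; otherwise
    None, meaning no quorum, so do not step the clock."""
    if len(samples) < min_sources:
        return None
    median = statistics.median(samples)
    agreeing = [t for t in samples if abs(t - median) <= tolerance]
    if len(agreeing) < min_sources:
        return None
    return statistics.median(agreeing)
```

The point of the quorum: an attacker now has to subvert a majority of independent sources at once rather than a single ntpdate exchange.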

-- 
Evan Hunt -- e...@isc.org
Internet Systems Consortium, Inc.

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Current DNSOP thread and why 1024 bits

2014-04-02 Thread Rose, Scott
On Apr 2, 2014, at 12:06 PM, S Moonesamy sm+i...@elandsys.com wrote:

 
 What does it matter from a security perspective?  DNS messages are short 
 lived.  It's not like we are encrypting a novel to be kept secret for 100 
 years.  With zone signing keys lasting a month, 6 months, or so, and the 
 ability to disallow them fairly quickly, what's the difference between this 
 so-called 80 or 112 bit strength difference?  Yes, I understand the doomsday 
 scenario that someone might guess my private key and forge messages.  But 
 an attack is not as simple as forging messages, it takes the ability to 
 inject them too.  That can be done - but chaining all these things together 
 just makes the attack that much less prevalent.
 
 For context, the discussion is about a ZSK.  There is a theory that it would 
 take under a year and several million (U.S.) dollars to break 1024 bits.  It 
 has been said (not on this mailing list) that an organization could do it 
 within a shorter time.  It's not a good idea to wait for the demonstration as 
 it can raise concerns about the entity which chose the key.
 
 As a general comment I tried to find out which NIST recommendations are being 
 discussed in respect to DNSSEC.  The requirements mentioned by Joe Abley 
 refers to NIST SP 800-78.  That document is about Cryptographic Algorithms 
 and Key Sizes for Personal Identity
 Verification.  Is that the NIST recommendation on which this discussion is 
 based?
 

The only DNSSEC-related NIST SPs are 800-57 and 800-81-2.  SP 800-57 is in 3 
parts: part one covers general key considerations and part 3 covers specific uses 
like DNSSEC.  It's showing its age though.  

The US Federal policy (now) is 2048 bit RSA for all uses; DNSSEC has a special 
exemption for 1024 bit ZSKs if desired (to reduce the risk of fragmented 
packets).  I do know of some .gov zones using 2048 bit KSKs and ZSKs, as local 
policies can call for stronger keys.  By 2015, .gov/.mil zones should migrate to 
ECDSA.  Not sure if that will happen given the track record, but that is the 
roadmap.  

Scott

 Regards,
 S. Moonesamy  
 ___
 DNSOP mailing list
 DNSOP@ietf.org
 https://www.ietf.org/mailman/listinfo/dnsop

===
Scott Rose
NIST
scott.r...@nist.gov
+1 301-975-8439
Google Voice: +1 571-249-3671
http://www.dnsops.gov/
===

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


[DNSOP] Working Group Last call for draft-ietf-dnsop-delegation-trust-maintainance

2014-04-02 Thread Tim Wicinski

Greetings,

This is the starting of the WGLC on Automating DNSSEC delegation trust 
maintenance.  This was briefly covered in London and these are ready for 
WGLC.   The current versions of this documents can be found here:


https://datatracker.ietf.org/doc/draft-ietf-dnsop-delegation-trust-maintainance/
http://www.ietf.org/id/draft-ietf-dnsop-delegation-trust-maintainance-03.txt


We'll have a 2 week period for comments, closing on April 16th, 2014.

thanks
tim

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


[DNSOP] Working Group Last Call for draft-ietf-dnsop-child-syncronization

2014-04-02 Thread Tim Wicinski


All,

This is the beginning of the Working Group Last Call on Child To Parent 
Synchronization in DNS.
The London update showed that this work is complete and ready to move 
forward.


The document can be found here:
https://datatracker.ietf.org/doc/draft-ietf-dnsop-child-syncronization/
http://www.ietf.org/id/draft-ietf-dnsop-child-syncronization-00.txt

Please take a moment to review the final versions and send up any comments.

This document will have a 2 week period for comments, closing on April 
16th, 2014.


thanks
tim

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] key lengths for DNSSEC

2014-04-02 Thread Nicholas Weaver

On Apr 2, 2014, at 11:19 AM,  Roy Arends r...@dnss.ec wrote:
 
 Just a thought that occurred to me. Crypto-maffia folk are looking for a 
 minimum (i.e. at least so many bits otherwise its insecure). DNS-maffia folk 
 are looking for a maximum (i.e. at most soo many bits otherwise 
 fragmentation/fallback to tcp). It seems that the cryptomaffia’s minimum 
 might actually be larger than the DNS-maffia’s maximum.

The problem from the dns-op maximalist viewpoint is that there are basically two 
magic numbers: 512B and ~1400B.  As someone who's measured this, the 512B limit is 
not a problem, but the ~1400B "here be fragments" threshold is.  Yet at the same 
time, the current 1024b ZSK / 2048b KSK configuration on TLDs does blow through 
it: I reported in the previous thread how org's DNSKEY record already blew past 
that limit.


And even in that case, resolvers can handle the situation where fragments don't 
work, albeit with a latency penalty.  So it's not a "DNSSEC fails" point but 
simply "performance degraded".


So the real question is whether the common answers (the ones with short 
TTLs that are accessed a lot) have a fragmentation problem.  With 2048b keys, they 
don't: the one that gets you is NSEC3, and that only blows up in your face with 
4096b keys.  (But boy does it: those 3 RRSIGs get big when you're using 4096b 
keys.)
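[For a rough feel of where those magic numbers bite, here is a back-of-envelope estimator. The constants are assumptions for illustration: real responses depend on owner names, EDNS options, NSEC3 records, and algorithm, none of which this models.]

```python
def rsa_dnskey_response_estimate(zsk_bits, ksk_bits, n_sigs=1, overhead=100):
    """Very rough size estimate (bytes) of a DNSKEY response:
    one ZSK + one KSK RDATA plus n_sigs RRSIGs made with the KSK.
    Fixed per-record costs (name, type, class, TTL, rdlen, RRSIG
    header fields) are folded into a crude per-record constant."""
    per_record = 30                          # crude guess at fixed cost
    zsk = zsk_bits // 8 + 4 + per_record     # modulus + exponent/flags
    ksk = ksk_bits // 8 + 4 + per_record
    sigs = n_sigs * (ksk_bits // 8 + per_record)
    return overhead + zsk + ksk + sigs

FRAGMENT_RISK = 1400  # the typical "here be fragments" threshold (bytes)

for zsk, ksk in [(1024, 2048), (2048, 2048), (4096, 4096)]:
    size = rsa_dnskey_response_estimate(zsk, ksk)
    print(zsk, ksk, size, "fragment risk" if size > FRAGMENT_RISK else "ok")
```

Even this crude model reproduces the pattern described above: the 1024/2048 and 2048/2048 configurations stay comfortably under ~1400B, while 4096b keys push past it.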


And please don't discount the psychology of the issue.  If DNSSEC wants to be 
taken seriously, it needs to show it.  Using short keys for root and the major 
TLDs, under the assumptions that it can't be cracked quickly (IMO, we have to 
assume 1024b can be.) and that old keys don't matter [1], is something that 
really does draw criticism.



[1] IMO they do, until validators record and use a 'root key ratchet': never 
accept a key whose expiration is older than the inception date of the RRSIG on 
the youngest root ZSK seen, or have some other defense against roll-back-the-clock 
attacks.
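[A sketch of what such a ratchet check might look like. Everything here is illustrative: the class and method names and the use of unix timestamps are assumptions, not an implemented validator API.]

```python
class RootKeyRatchet:
    """Remember the newest RRSIG inception time ever observed on a
    root ZSK, and refuse any key whose expiration predates that
    watermark, even if the local clock has been turned back.
    All times are unix timestamps."""

    def __init__(self):
        self.watermark = 0  # newest root-ZSK RRSIG inception seen

    def observe_root_rrsig(self, inception):
        # The watermark only ever moves forward in time.
        self.watermark = max(self.watermark, inception)

    def key_acceptable(self, key_expiration):
        # A key that expired before the newest signature we have
        # ever seen must be stale: reject it.
        return key_expiration >= self.watermark
```

The design point is that the watermark is monotonic state kept by the validator, so a roll-back-the-clock attack cannot resurrect an old, possibly factored key.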

--
Nicholas Weaver               "it is a tale, told by an idiot,
nwea...@icsi.berkeley.edu      full of sound and fury,
510-666-2903                   signifying nothing"
PGP: http://www1.icsi.berkeley.edu/~nweaver/data/nweaver_pub.asc



___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] key lengths for DNSSEC

2014-04-02 Thread Frederico A C Neves
Nicholas,

On Wed, Apr 02, 2014 at 04:25:10PM -0400, Nicholas Weaver wrote:
 
... 
 And please don't discount the psychology of the issue.  If DNSSEC
 wants to be taken seriously, it needs to show it.  Using short keys
 for root and the major TLDs, under the assumptions that it can't be
 cracked quickly (IMO, we have to assume 1024b can be.) and that old
 keys don't matter [1], is something that really does draw criticism.

 [1] IMO they do until validators record and use a 'root key
 ratchet': never accept a key who's expiration is older than the
 inception date of the RRSIG on the youngest root ZSK seen, or have
 some other defense to roll-back-the-clock attacks.

What do you mean by "..key whose expiration is.."?  Is that a new property 
recorded by this ratchet?  And by the way, what is the ratchet?

Fred

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] key lengths for DNSSEC

2014-04-02 Thread Richard Lamb
Speaking for myself:

First:  Thank you Jim and Joe for seeking to increase the signal-to-noise ratio 
on this thread and for explaining what the attack vector would be for lower IQ 
folk like myself.

Second: I have always taken my instructions from the community. So regardless 
of what I believe I will faithfully do my part (with your help) to make it 
happen.

Third: From my vantage point and as author of the code used for the KSK side of 
things, I do not see any immediate barriers  to increasing key lengths.  The 
members of the original root DNSSEC design team have enjoyed a very good 
working relationship and I expect that to continue.  However, like any other 
change at this level it must be one that is approached conservatively and 
thoroughly tested before deployed (software, increased RRSet sizes, IPv6 
impact, new ZSK generation).  This will take human resources and time.

I look forward to following further discussions on this topic.

-Rick



-Original Message-
From: DNSOP [mailto:dnsop-boun...@ietf.org] On Behalf Of Joe Abley
Sent: Wednesday, April 02, 2014 7:50 AM
To: Ted Lemon
Cc: IETF DNSOP WG
Subject: Re: [DNSOP] key lengths for DNSSEC


On 2 Apr 2014, at 10:26, Ted Lemon ted.le...@nominum.com wrote:

 The problem with the way you've phrased this question is that there does not 
 seem to be agreement amongst the parties to this discussion whether old keys 
 matter.   If you think they do, you need longer keys.   If you think they 
 don't, you need shorter keys.   So rather than talking about key lengths 
 first, it would be more productive to come to a consensus about which threat 
 model we are trying to address.

I'm trying to understand the time-based attack, but I'm not seeing it.

The gist seems to be that if I can turn back the clock on a remote resolver, I 
can pollute its cache with old signatures (made with an old, presumably 
compromised key) and the results will appear to clients of the resolver to 
validate.

This sounds plausible, but without administrative compromise of the remote 
resolver (in which case you have much simpler options) this attack seems to 
involve:

1. subverting sufficient NTP responses over a long enough period to cause the 
remote resolver's clock to turn back in time (a long period is suggested because 
many/most? implementations refuse large steps in time, and hence many smaller 
steps might be required)

2. replacing every secure response that would normally arrive at the resolver 
with a new one that will validate properly at whatever the resolver's idea of 
the time and date is (or, if not every, sufficient that the client population 
don't see validation failures for non-target queries). This potentially 
involves having factored or otherwise recovered every ZSK and KSK that might be 
used to generate a signature in a response to the resolver, for the time period 
between now and then.

This seems like an intractably difficult thing to accomplish.

What am I missing?


Joe
___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] CD (Re: Whiskey Tango Foxtrot on key lengths...)

2014-04-02 Thread Mark Andrews

In message CAAF6GDcP77MBBUJbEdQgOqOLh2UHPEOmxYNTaAO-8F=odly...@mail.gmail.com
, Colm MacCárthaigh writes:
 
 On Tue, Apr 1, 2014 at 7:49 PM, Evan Hunt e...@isc.org wrote:
 
  On Tue, Apr 01, 2014 at 06:25:12PM -0700, Colm MacCárthaigh wrote:
   DNSSEC is a mitigation against spoofed responses, man-in-the-middle
   interception-and-rewriting and cache compromises. These threats are
   endpoint and path specific, so it's entirely possible that one of your
   resolvers (or its path) has been compromised, but not others. If all of
   your paths have been compromised, then there is no recovery; only
   detection. But that is always true for DNSSEC.
 
  Consider the scenario in which one authoritative server for a zone
  has been compromised and the others have not, and that one happens to
  have the lowest round-trip time, so it's favored by your resolver.
 
 
  If you query with CD=0, a validating resolver detects the problem
  and tries again with another auth server.  It doesn't give up until
  the whole NS RRset has failed.
 
 
  If you query with CD=1, you get the bogus data and it won't validate.
 
 
 I don't think this makes much sense for a coherent resolver. If I were
 writing a resolver, the behaviour would instead be;  try really hard to
 find a valid response, exhaust every reasonable possibility. If it can't
 get a valid response, then if CD=1 it's ok to pass back the invalid
  response and its supposed signatures - maybe the stub will know better, at
 least fail open. If CD=0, then SERVFAIL, fail closed.

Guess what, resolvers do not work like that.  They are not required
to work like that.  They are, however, required to search if CD=0.
SERVFAIL should be a rare event.  A SERVFAIL that gets fixed with
CD=1 and then validates successfully should be an even rarer
event.

We know that there are cases where some of the authoritative servers
are broken DNSSEC-wise, yet you still want to optimise for the case of a
bad time / trust anchor in the recursive server.

 Although CD means checking disabled, I wouldn't actually disable
 checking, simply because that's stupid (I don't mean to be impolite, but I
 don't have a better word to use here). But by preserving the on-the-wire
 semantics of the CD bit, I'd preserve effectiveness as a cache, and pass on
 what's needed to validate even the failure cases.
 
 
 -- 
 Colm
 

Re: [DNSOP] CD (Re: Whiskey Tango Foxtrot on key lengths...)

2014-04-02 Thread Colm MacCárthaigh
On Wed, Apr 2, 2014 at 2:40 PM, Mark Andrews ma...@isc.org wrote:

  I don't think this makes much sense for a coherent resolver. If I were
  writing a resolver, the behaviour would instead be;  try really hard to
  find a valid response, exhaust every reasonable possibility. If it can't
  get a valid response, then if CD=1 it's ok to pass back the invalid
  response and its supposed signatures - maybe the stub will know better, at
  least fail open. If CD=0, then SERVFAIL, fail closed.

 Guess what, resolvers do not work like that.  They are not required
 to work like that.


Nothing can compel any particular resolver to choose a particular
implementation - but I take note of
https://tools.ietf.org/html/rfc6840#section-5.9 and
https://tools.ietf.org/html/rfc6840#appendix-B which recommends it (as a
SHOULD) and I generally agree with the good reasoning that's in the RFC.

As I wrote, if it were me writing a validating stub resolver, I would
always set CD=1 - and when acting as an intermediate resolver, I would
always make a reasonable effort to find a validating response, even if CD=0
is on the incoming query. I'm certain that at least one resolver does work
like this, and I suspect it's also how Google Public DNS works, based on
some experimentation.
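[The behaviour described here, which RFC 6840 recommends, can be sketched roughly as follows. This is a hypothetical illustration: `fetch_and_validate` is a stand-in for the resolver's full try-every-server validation machinery, not a real API.]

```python
def answer_query(cd_flag, fetch_and_validate):
    """CD does not disable checking; it only changes what happens
    when validation ultimately fails.  fetch_and_validate() returns
    (response, valid) after trying hard to find a valid answer."""
    response, valid = fetch_and_validate()
    if valid:
        return response      # best case: a validated answer
    if cd_flag:
        return response      # CD=1: pass back the (bogus) data and
                             # signatures; the stub may know better
    return "SERVFAIL"        # CD=0: fail closed
```

Note that in this model the cache stays coherent: the resolver always validates internally, and the CD bit only selects fail-open versus fail-closed at the last step.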


-- 
Colm
___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Current DNSOP thread and why 1024 bits

2014-04-02 Thread S Moonesamy

Hi Scott,
At 10:13 02-04-2014, Rose, Scott wrote:
The only DNSSEC related NIST SP's are 800-57 and 800-81-2.  SP 
800-57 is in 3 parts, part one is general key considerations and 
part 3 covers specific uses like DNSSEC.  It's showing its age though.


The US Federal policy (now) is 2048 bit RSA for all uses, DNSSEC has 
a special exemption for 1024 bit ZSK's if desired (to reduce risks 
of fragmented packets).  I do know some .gov zones using 2048 bit 
KSK and ZSK's as local policies can call for stronger keys.  By 
2015, .gov/mil zones should migrate to ECDSA.  Not sure if that will 
happen given the track record, but that is the roadmap.


Thanks for the above information.  Adding to it, 1024-bit RSA keys 
are allowed until 2015.  There is an explanation about that 
recommendation, i.e. it's not only about packet size.


Regards,
S. Moonesamy 


___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] key lengths for DNSSEC

2014-04-02 Thread Phillip Hallam-Baker
On Wed, Apr 2, 2014 at 11:19 AM,  Roy Arends r...@dnss.ec wrote:

 On 02 Apr 2014, at 15:19, Jim Reid j...@rfc1035.com wrote:

  There's been a lot of noise and very little signal in the recent
 discussion.
 
  It would be helpful if there was real data on this topic. Is an RSA key
 of N bits too weak or too strong? I don't know. Is N bits good
 enough? Probably. Change the algorithm and/or value of N to taste.
 
  My gut feel is large ZSKs are overkill because the signatures should be
 short-lived and the keys rotated frequently. Though the trade-offs here are
 unclear: is a 512-bit key that changes daily (say) better than a 2048-bit
 key that gets rotated once a week/month/whatever? Remember too we're not
 talking about keys to launch ICBMs or authenticate billion dollar
 transactions. I doubt it matters if a previous key can be cracked provided
 it gets retired before the bad guys can throw enough CPU-years to break it.
 
  However I'm just going on my own gut feel and common sense which could
 be wrong. Large keys might well be advisable at the root and/or for TLD
 KSKs. But so far there does not appear to have been much science or
 engineering on just how large those keys should be or how frequently they
 change. So in the absence of other firm foundations the established wisdom
 becomes do what gets done for the root.
 
  If there is a threat or risk here, please present solid evidence. Or,
 better still, an actual example of how any DNSSEC key has been compromised
 and then used for a real-world (or proof of concept) spoofing attack.
 
 
  BTW, the apparent profanity on an earlier thread was annoying because it
 didn't spell whisky correctly. As every drinker of fine single malt
 knows. :-)

 :-)

 Jim,

 Just a thought that occurred to me. Crypto-maffia folk are looking for a
 minimum (i.e. at least so many bits otherwise its insecure). DNS-maffia
 folk are looking for a maximum (i.e. at most soo many bits otherwise
 fragmentation/fallback to tcp). It seems that the cryptomaffia’s minimum
 might actually be larger than the DNS-maffia’s maximum.

 As an example (dns-op perspective).

 Average case: 2 keys (KSK/ZSK) + 1 sig (by KSK) with 2048 bit keys is at
 least 768 bytes (and then some).
 Roll case: 3 keys(2 KSK/1 ZSK) + 2 sig (by KSK) with 2048 bit keys is at
 least 1280 bytes (and then some).

 Then there is this section in SAC63: Interaction of Response Size and
 IPv6 Fragmentation”

 Which relates to response sizes larger than 1280 and IPv6 and blackhole
 effects.

 https://www.icann.org/en/groups/ssac/documents/sac-063-en.pdf
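[Roy's lower bounds are easy to reproduce; here is a quick sketch counting only the raw RSA key and signature material. The "and then some" covers what this deliberately omits: owner names, RRSIG header fields, TTLs, and so on.]

```python
def dnskey_rrset_floor(n_keys, n_sigs, key_bits=2048):
    """Lower bound (bytes) on a DNSKEY answer: each RSA key RDATA
    and each RRSIG carries at least a modulus worth of bytes."""
    key_bytes = key_bits // 8   # 2048 bits -> 256 bytes
    return n_keys * key_bytes + n_sigs * key_bytes

# Average case: KSK + ZSK, one RRSIG by the KSK -> 3 * 256 = 768 bytes
print(dnskey_rrset_floor(2, 1))
# Roll case: 2 KSKs + 1 ZSK, two RRSIGs        -> 5 * 256 = 1280 bytes
print(dnskey_rrset_floor(3, 2))
```

The roll case lands exactly at the 1280-byte IPv6 minimum MTU that SAC063 flags, before any of the per-record overhead is counted.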


There is no doubt that we can get close to the limit on response sizes, 
which is why I have been pushing the notion that if we are going to do DNSE, 
then part of the DNSE solution should be to get us out of the 
single-response-packet straitjacket.

Its not just crypto that gets crippled by this issue.

We are not in 1995 any more. We have bigger computing resources and bigger
security challenges. The Internet isn't a science project any more.


Too much of the debate here has been for one security approach versus
another.  That is obsolete thinking; we have been moving to multiple layers
of cryptography for some time now.



-- 
Website: http://hallambaker.com/
___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


[DNSOP] DNSng-ish (was Re: key lengths for DNSSEC)

2014-04-02 Thread Andrew Sullivan
On Wed, Apr 02, 2014 at 07:21:11PM -0400, Phillip Hallam-Baker wrote:

 Which is why I have been pushing the notion that if we are going to do DNSE
 then part of the DNSE solution should be to get us out of the single
 response packet straightjacket.

I've seen what you've had to say on that, and what I just don't
understand yet is how that answer is deployable.  That is, how is what
you are suggesting there (and in your other discussions of this topic)
not a replacement for DNS?  Or, if it is, why don't we just do a new protocol
completely?  We could fix the internationalization issues.  We could
ditch UDP and in a single blow eliminate a major source of DDoS on the
Internet.  And so on.

The only problem is getting everyone to upgrade.  No?

A

-- 
Andrew Sullivan
a...@anvilwalrusden.com

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] DNSng-ish (was Re: key lengths for DNSSEC)

2014-04-02 Thread Andrew Sullivan
On Wed, Apr 02, 2014 at 09:07:07PM -0400, Phillip Hallam-Baker wrote:
 1) Client - Resolver

 Changing 1 is the easiest and also the part that is most in need.

From where I sit, that project appears to reduce to roughly "upgrade
all the computers on Earth."  It may be that we do not have a common
meaning of "easiest."  Perhaps you could say more.

Best regards,

A

-- 
Andrew Sullivan
a...@anvilwalrusden.com

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Current DNSOP thread and why 1024 bits

2014-04-02 Thread David Conrad
Paul,

On Apr 3, 2014, at 12:38 AM, Paul Wouters p...@nohats.ca wrote:
 Saving space and time does matter.  Roughly half the operators I studied 
 would include a backup key on-line because “they could” with the shorter 
 length.  And performance does matter - ask the web browser people.
 Because we want to make security decisions based on a 1ms latency browser war?

We want to make security decisions that actually improve security.

Making a decision that results in people turning security off because the 
(perceived at least) performance impact is too large does not improve security.

People are already doing insanely stupid things (e.g., not following TTLs) 
because they eke out a couple of extra milliseconds in reduced RTT per query 
(which, multiplied by the zillions of queries today's high content websites 
require, does actually make a difference).

Having not looked into it sufficiently, I do not have a strong opinion as to 
whether increasing key lengths will result in people either not signing or 
turning off validation, but I believe it wrong to disregard performance 
considerations.

Regards,
-drc



___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] DNSng-ish (was Re: key lengths for DNSSEC)

2014-04-02 Thread Phillip Hallam-Baker
On Wed, Apr 2, 2014 at 10:48 PM, Andrew Sullivan a...@anvilwalrusden.com wrote:

 On Wed, Apr 02, 2014 at 09:07:07PM -0400, Phillip Hallam-Baker wrote:
  1) Client - Resolver

  Changing 1 is the easiest and also the part that is most in need.

 From where I sit, that project appears to reduce to roughly upgrade
 all the computers on Earth.  It may be that we do not have a common
 meaning of easiest.  Perhaps you could say more.


Nope, just the gateway devices and the main DNS servers.

Legacy DNS over raw UDP will be around for decades to come. But DNS over a
privacy protected transport is quite viable.

The privacy issues are most acute at the network gateway device, the
firewall or the WiFi router.


Privacy protection plus anti-censorship protection is in big demand right
now.

-- 
Website: http://hallambaker.com/
___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop


Re: [DNSOP] Current DNSOP thread and why 1024 bits

2014-04-02 Thread Paul Wouters

On Thu, 3 Apr 2014, David Conrad wrote:


We want to make security decisions that actually improve security.

Making a decision that results in people turning security off because the 
(perceived at least) performance impact is too large does not improve security.


I'm happy to hear the browser vendors taking DNS latency seriously, and
look forward to their contributions towards solving that, with solutions
such as http://datatracker.ietf.org/doc/draft-wouters-edns-tcp-chain-query/

Perhaps they will even advise running resolvers on the stubs with
pre-fetching of low TTL records so they can get out of the DNS caching
business themselves.


People are already doing insanely stupid things (e.g., not following TTLs) 
because they eke out a couple of extra milliseconds in reduced RTT per query 
(which, multiplied by the zillions of queries today's high content websites 
require, does actually make a difference).


Luckily, I think we've seen the chrome/speed pendulum is already
swinging back, and the browser vendors are seeing that users do
care about more than just about latency.


Having not looked into it sufficiently, I do not have a strong opinion as to 
whether increasing key lengths will result in people either not signing or 
turning off validation, but I believe it wrong to disregard performance 
considerations.


My previous email explained why I believe those performance considerations
were wrong.  I am not disregarding them out of principle; I'm disregarding
them because I don't agree with the reasons offered.  Big resolvers can add more
hardware without pain.  End nodes like phones have plenty of CPU to spare
while waiting on network latency, and then some.

Paul

___
DNSOP mailing list
DNSOP@ietf.org
https://www.ietf.org/mailman/listinfo/dnsop