Since we agreed to decouple path validity from origin validity (and return
both as distinct validation results), we should probably clean this up.

That is, we can no longer rely on the origin being invalid to invalidate
this path manipulation.

And even if we had not decoupled those two, being authorized to announce
102::/16 to a peer is different from saying that you actually announced it.

I realize that the threat scenario here is getting diminishingly small, but
still, we should clean this up to reduce it to zero.

dougm
-- 
Doug Montgomery, Mgr Internet & Scalable Systems Research @ NIST/ITL/ANTD

On 2/13/15, 8:03 PM, "Keyur Patel (keyupate)" <keyup...@cisco.com> wrote:

>Hi David,
>
>
>On 2/10/15, 1:48 PM, "David Mandelberg" <da...@mandelberg.org> wrote:
>
>>All, while coming up with the example below, I realized another issue.
>>The structure in 4.1 doesn't include an Address Family Identifier.
>>Unless I missed something, this means that a signature for 1.2.0.0/16
>>would be exactly the same as a signature for 102::/16. This would be a
>>much more practical attack than the one I originally thought of.
>
>
>But, isn't this issue covered by origin-AS validation?
>
>Regards,
>Keyur
>
>
>
>>
>>Michael, response to your comment is below.
>>
>>On 02/10/2015 12:09 PM, Michael Baer wrote:
>>> I don't believe this is a problem.  The signature is calculated by
>>> creating a digest of the data and then creating a signature from that
>>> digest.  I'm definitely not a cryptography expert, but my understanding
>>> of digest functions generally is that with even slightly differing
>>> input, the resulting set of bits should be completely different.
>>> Assuming the digest function chosen is not flawed, there shouldn't be a
>>> set of bits from the digest of 4.1 that could be used to successfully
>>> replace the digest of 4.2, except by chance.
>>
>>You're right about digest algorithms being highly sensitive to changes
>>in the input, but the issue I described is when the two inputs are
>>equal, not just similar. For example, if a router signed the below
>>values in the structure from 4.2:
>>
>>Target AS Number = 0x01020304
>>Origin AS Number = 0x05060708
>>pCount = 0x01
>>Flags = 0x00
>>Most Recent Sig Field = 0x00700102030405060708090a0b0c0d0e (See Sriram's
>>email for why this would never actually happen with the current
>>algorithm suite's signature length.)
>>
>>Then the router signed the digest of the bytes
>>0x0102030405060708010000700102030405060708090a0b0c0d0e. However, these
>>exact same bytes could appear to have come from the structure in 4.1
>>with these values:
>>
>>Target AS Number = 0x01020304
>>Origin AS Number = 0x05060708
>>pCount = 0x01
>>Flags = 0x00
>>Algorithm Suite Id = 0x00
>>NLRI Length = 0x70 (112 bits = 14 bytes)
>>NLRI Prefix = 0x0102030405060708090a0b0c0d0e
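[Not part of the original thread: a minimal sketch of the collision described above, using the field values from this email. The `struct` packing order is an assumption (network byte order, fields in the order listed); the point is only that the two structures serialize to identical bytes, so they produce an identical digest and signature.]

```python
import struct

# Section 4.2 structure, with the example values above
target_as = 0x01020304
origin_as = 0x05060708
pcount = 0x01
flags = 0x00
most_recent_sig = bytes.fromhex("00700102030405060708090a0b0c0d0e")

sig_input_4_2 = (struct.pack("!IIBB", target_as, origin_as, pcount, flags)
                 + most_recent_sig)

# Section 4.1 structure: same leading fields, then Algorithm Suite Id,
# NLRI Length (in bits), and NLRI Prefix
alg_suite = 0x00
nlri_len_bits = 0x70  # 112 bits = 14 bytes
nlri_prefix = bytes.fromhex("0102030405060708090a0b0c0d0e")

sig_input_4_1 = (struct.pack("!IIBB", target_as, origin_as, pcount, flags)
                 + bytes([alg_suite, nlri_len_bits])
                 + nlri_prefix)

# Identical bytes in, identical digest and signature out
assert sig_input_4_2 == sig_input_4_1
print(sig_input_4_2.hex())
# -> 0102030405060708010000700102030405060708090a0b0c0d0e
```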
>>
>>Note that the first 16 bits of 4.2's Most Recent Sig Field can't take
>>arbitrary values. The first 8 have to match the Algorithm Suite ID (1 possible
>>value). The next 8 have to be a valid number of bits for the number of
>>bytes in the prefix (8 possible values). This means that there's only a
>>2^-13 chance that a single random Most Recent Sig Field of the
>>appropriate length could be reinterpreted successfully. However, with
>>more than 2^13 signatures floating around the Internet, that's not good
>>odds.
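[Not part of the original thread: the 2^-13 figure above, worked out as in the email: the first byte must match the single valid Algorithm Suite Id (1 of 256 values), and the second byte must encode a bit length consistent with a 14-byte prefix (8 of 256 values, i.e. 105..112 bits).]

```python
# Probability that a random 16-bit prefix of a Most Recent Sig Field
# reinterprets successfully as (Algorithm Suite Id, NLRI Length)
p_alg = 1 / 256   # one valid Algorithm Suite Id -> 2**-8
p_len = 8 / 256   # 8 valid bit lengths (105..112) for 14 bytes -> 2**-5
p = p_alg * p_len
assert p == 2 ** -13
print(p)
# -> 0.0001220703125
```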
>>
>>-- 
>>David Eric Mandelberg / dseomn
>>http://david.mandelberg.org/
>>
>
>_______________________________________________
>sidr mailing list
>sidr@ietf.org
>https://www.ietf.org/mailman/listinfo/sidr
