Brian Dickson <brian.peter.dick...@gmail.com> writes:

Hi Brian,
I believe I understand your system as functionally creating a BGPSEC_Path_Signatures attribute with an algorithm/signature set that is based on adding random numbers to the signature field(s) instead of adding cryptographically generated hashes. I.e., we end up with something like this being received by an RP:

  |----------------------|
  | AS Number 1          |
  | Random Bytes from #1 |
  |----------------------|
  | AS Number 2          |
  | Random Bytes from #2 |
  |----------------------|
  | ...                  |

This obviously greatly simplifies the diagram from the -protocol document section 3, but it is sufficient for the discussion. The point is that the BGPSEC_Path_Signatures attribute is, roughly, a list of the ASes that the update has gone through.

Now, I'm all for updating the signature field to the fastest cryptography that meets the goals. We just need to agree on what the goals are, and on how the resulting system is subject to attacks against those goals. So, let's consider a few things first:

1) Random number generation using a PRF is an interesting concept. And you're right that if you assume an OOB cache is available that can verify the results, it'll offload much of the computation from the RP router. That's a worthy goal. [However, I'd argue that you could do that without changing the crypto as well, though it may still be a goal to lower the crypto cost of the OOB cache machine anyway.]

2) A random number generator can only be compared against an existing system in terms of speed if it provides a similar amount of security, or if the reduced security is an acceptable trade-off for the speed gain.

So, let me start with the concept of generating a nonce from a 32-bit PRF (or, as you stated, N instances of a 32-bit PRF). The problem with a 32-bit PRF repeated N times is that you don't get 32*N bits of entropy; you only get 32 bits. Specifically, if I'm given M random numbers that were concatenated into a nonce, what do I have to do in order to predict the next 32*N-bit field?
Functionally, I need to figure out the original seed so that I can calculate the next 32*N-bit field. But to calculate all N fields, I really only need to sweep through the entire 32-bit seed space looking for a seed that produces the N observed values. Using a single 2.4GHz i7 core, I calculate it would take me about 1.4 hours to search the entire 32-bit space of seeds. And much of that time is devoted just to initializing the random number generator: producing 4 random numbers from a single seed (say you drop the first bunch of outputs so it's harder for me to work backward) doesn't really help much, as the time needed only jumps to 1.5 hours when calculating 4 times the number of random numbers per seed.

Then you have to think about dictionary attacks. Storing the full output set of any 32-bit PRF (2^32 four-byte values) requires only 16GB of storage, which is probably about the same as your RAM limit, without even going into disk storage.

The other problem with a simple PRF implementation (such as random() on most systems) is that it's not designed to be irreversible. I.e., it's not designed to be cryptographically secure, and a random generator should be picked with care if you're using it for "proof" purposes.

The end result of this is that if you want to compare the speed of using random numbers against the speed of a real crypto function, you probably want to use a random number generator that won't let me predict the next output so easily. You need a random number generator that can output a much larger set of bits (i.e., one that returns a number much bigger than a 32-bit int and requires a random seed, if any, much bigger than a 32-bit int).
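To make the seed-sweep attack concrete, here is a minimal sketch. The LCG constants and helper names are my own illustration, not from any draft or proposal, and the search is capped so it runs in seconds rather than sweeping all 2^32 seeds (the ~1.4 hour figure above). The shape of the attack is the same: observe a few "random" nonces, recover the seed, then predict every future nonce.

```python
# Toy 32-bit LCG standing in for the PRF (illustrative constants).
MASK = 0xFFFFFFFF

def prf_outputs(seed, n):
    """Return the first n 32-bit outputs of the toy PRF for a seed."""
    state = seed & MASK
    out = []
    for _ in range(n):
        state = (1664525 * state + 1013904223) & MASK
        out.append(state)
    return out

def recover_seed(observed, search_limit=1 << 16):
    """Brute-force the seed that produced the observed outputs.

    A real attack sweeps all 2^32 seeds; the cap here just keeps
    this sketch fast."""
    for seed in range(search_limit):
        if prf_outputs(seed, len(observed)) == observed:
            return seed
    return None

# The attacker watches a few nonces go by on the wire...
leaked = prf_outputs(54321, 3)
# ...recovers the seed...
assert recover_seed(leaked) == 54321
# ...and can now compute the AS's next nonce before the AS emits it.
next_nonce = prf_outputs(54321, 4)[3]
```

Dropping some initial outputs only shifts where the attacker starts matching; it doesn't grow the 32-bit search space, which is why the cost barely moves.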
But let's say you come up with a random number generator that is strong enough to do what you want, and we agree the OOB problem can be solved in a way that everyone can be happy with (which I know is one of your positions: worry about it later; I'm not sure we can meet that goal either, but I'm willing to worry about it later for the time being). I.e., let us say an architecture is found that allows an OOB cache or the router itself to authoritatively say "AS A really did use random number R a while ago". What does the resulting algorithm really let you say?

3) It lets you say that "AS A really did use random number R a while ago". I.e., if you see this:

  |------|
  | AS A |
  | R    |
  |------|
  | AS B |
  | Q    |
  |------|
  | ...  |

then you can say "yep, R was generated by A".

4) But... does it really tell you that R was generated by A for that particular update path? It doesn't, right? All you know is that at some point in the past AS A really did issue R. And possibly, assuming you keep a running memory cache to avoid replay, that R hasn't been seen yet, so you should trust it. But it may have been monitored/stolen from a different path instead, etc. E.g., consider two trust caches, neither of which has seen R yet: they could each vouch for completely different advertised paths.

5) And in fact, there is no assurance that the random number generated matches anything about the path. Thus any of the following paths would be treated the same:

  |------|  |------|  |------|  |------|  |------|
  | AS A |  | AS A |  | AS C |  | AS A |  | AS B |
  | R    |  | R    |  | N    |  | R    |  | Q    |
  |------|  |------|  |------|  |------|  |------|
  | AS B |  | AS C |  | AS A |
  | Q    |  | Q    |  | R    |
  |------|  |------|  |------|
  | ...  |  | ...  |  | ...  |

I.e., any update with AS A/num=R in it will verify, no matter whether the path has been added to, reversed, deleted from, dropped, etc.
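Point 5 can be sketched in a few lines. This is my own illustration (the names and cache shape are assumptions, not from any draft): because verification reduces to a per-hop membership check ("did this AS ever emit this value?"), any reordering or splicing of previously vouched (AS, nonce) pairs still verifies.

```python
def verify_path(path, vouched):
    """path: list of (asn, nonce) hops.
    vouched: set of (asn, nonce) pairs the OOB cache has confirmed.
    Verification is only per-hop membership -- nothing binds the hops
    to each other or to their order."""
    return all(hop in vouched for hop in path)

# Pairs the caches have seen AS A, B, and C legitimately emit.
vouched = {("A", 111), ("B", 222), ("C", 333)}

original  = [("A", 111), ("B", 222)]
reordered = [("B", 222), ("A", 111)]
spliced   = [("C", 333), ("A", 111)]   # hops lifted from other updates

assert verify_path(original, vouched)
assert verify_path(reordered, vouched)   # reversal goes undetected
assert verify_path(spliced, vouched)     # splicing goes undetected
```

Contrast this with the chained design, where each signature covers the preceding signature and the next-hop AS, so any of these manipulations breaks verification.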
In the end, it doesn't provide the protection service that the original cryptographically chained algorithm provides: proof that the path wasn't modified.

So, as I said, I'm all for a system that is equally secure, or that provides the minimal security properties we need, at a much faster speed. But I fail to see how your system provides any security, at least as you've defined it to date. And even the 50,000x potential improvement you claimed doesn't make me happier if we're getting nothing out of it. Even with a more secure random number generator in use (which would likely be slower than the one you were testing, though still faster than an ECDSA signing system), I fail to see how the designed system will help, because we're losing all the path-binding properties that BGPSEC is trying to secure in the first place.

-- 
Wes Hardaker
SPARTA, Inc.

_______________________________________________
sidr mailing list
sidr@ietf.org
https://www.ietf.org/mailman/listinfo/sidr