On Tuesday 15 January 2008 16:22, Michael Tänzer wrote:
> Matthew Toseland wrote:
> | On Saturday 05 January 2008 05:15, Michael Rogers wrote:
> |> Matthew Toseland wrote:
> |>> The 50% chance of decrementing
> |>> at 10 means that 3 resets equals 6 nodes with HTL 10...
> |> It's more complicated than that... if the HTL is 10 and the closest
> |> location isn't the previous hop's location, then the attacker knows the
> |> previous hop doesn't decrement at 10.
> |
> | It's a function of the input node, not the output node. Although
> | that's problematic on rerouting, so we may change it to be a function
> | of the input node for the first node tried, and then a function of the
> | output node... Hmmm. Why did we want to choose it once per node again?
> | I'm sure there was a good reason, but I don't remember it...

Ah yes, easy statistical attacks: if the coin were re-flipped per request, a 
50% HTL 10 / 50% HTL 9 split over many requests would tell the attacker 
whether it is the first or the second hop.
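
For reference, roughly what "decide once" means - a sketch only, the class 
and field names are mine, not the actual ones in fred:

  class PeerHtlPolicy {
      // Decided once when the peer connects, never re-flipped per request.
      private final boolean decrementAtMax = Math.random() < 0.5;

      int nextHtl(int incomingHtl, int maxHtl) {
          if (incomingHtl >= maxHtl)
              return decrementAtMax ? maxHtl - 1 : maxHtl; // same answer every time
          return Math.max(incomingHtl - 1, 0);
      }
  }

Because the answer for a given peer never changes, the attacker gains nothing 
by averaging over many requests through that peer.
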
> |
> |> Likewise if the HTL is 9 and the
> |> closest location isn't the previous hop's location, the previous hop
> |> decrements at 10. Either way, the attacker can use that knowledge for
> |> the rest of the session.
> |
> | Okay, so for a single request, if HTL is 10, and closestLocation is
> | equal to the node's location, the node could be the originator or it
> | could have reset it. If HTL is 10, and closestLocation is not equal to
> | the node's location, the node cannot be the originator and cannot have
> | reset the HTL.
> |
> | So without keeping any state, we just look for requests which have
> | closestLocation equal to the previous node's location, and we have a
> | 1 in (# resets per request + 1) chance that it's the originator (1 in 4
> | on the previous assumption). Ouch! I'm not sure how the additional
> | state we could keep from the above would help the attacker.
> |
> | For a non-resetting HTL, the odds are even worse, unless we have a
> | long probabilistic htl=max stage. So plainly a weighted coin would
> | help significantly. The problem is that a weighted coin would increase
> | the variance in path length *significantly*, cause more no-fault
> | timeouts, cause more no-fault failures (too short a path), make
> | request coalescing extremely difficult, and likewise with ULPRs. Is
> | there an alternative?
> |
> | [snip]
> 
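
To put a number on the estimate quoted above: with roughly 3 resets per 
request, 4 nodes forward HTL 10 with closestLocation equal to their own 
location (the originator plus the 3 resetters), so the attacker's stateless 
test looks something like this (purely illustrative, the names are mine):

  class OriginatorGuess {
      static final int MAX_HTL = 10;

      // The attacker's stateless check on one observed request.
      static double originatorProbability(int htl, double closestLocation,
              double previousPeerLocation, int resetsPerRequest) {
          boolean matches = htl == MAX_HTL && closestLocation == previousPeerLocation;
          // resetsPerRequest = 3 reproduces the "1 in 4" figure quoted above.
          return matches ? 1.0 / (resetsPerRequest + 1) : 0.0;
      }
  }
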
> How about a combination of HTL and weighted coin: we don't only
> decrement probabilistically at 10 and 1 but all the way down from 10 to
> 0, and flip the coin each time, so no (100% sure) assumption can be made
> about what we will do next time. 

See above: in the current code we decide once, to thwart the easier statistical 
attacks. You are still vulnerable here: on the first (second?) hop, 80% will 
be HTL 9 and 20% HTL 10. On the second, 64% will be HTL 8, 32% HTL 9 and 4% 
HTL 10. And so on: you get a very easy to spot signature. Having said that, 
the current mechanism has a similar problem, as we've been discussing - the 
combination of HTL and nearest-location tells you a great deal.
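
To illustrate: with an 80% per-hop coin, the HTL seen after n hops is just 
10 minus a Binomial(n, 0.8) draw, so the hop-1, hop-2 and hop-3 distributions 
barely overlap. A throwaway check (the 0.8 is your figure, the rest is only 
illustration):

  public class HtlSignature {
      public static void main(String[] args) {
          double p = 0.8; // assumed per-hop decrement probability
          for (int hops = 1; hops <= 3; hops++) {
              System.out.print("after " + hops + " hop(s): ");
              for (int d = hops; d >= 0; d--) {
                  // P(exactly d decrements in 'hops' independent coin flips)
                  double prob = binomial(hops, d) * Math.pow(p, d) * Math.pow(1 - p, hops - d);
                  System.out.printf("HTL %d: %.1f%%  ", 10 - d, 100 * prob);
              }
              System.out.println();
          }
      }

      // n choose k for small n, exact in integer arithmetic
      static long binomial(int n, int k) {
          long r = 1;
          for (int i = 1; i <= k; i++) r = r * (n - k + i) / i;
          return r;
      }
  }

Which prints exactly the 80/20, 64/32/4, ... pattern above.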

> In order not to travel too far we weight the 
> coin (let's say at 80% for a decrement). If our location is closer than
> the closestLocation we don't set closestLocation to our actual location
> but to a location a little bit _closer_ (if we set it to a location
> farther away the next node knows we reset closestLocation); this is also
> done probabilistically, 

This looks detectable to me - how to generate such a location? Especially when 
the attacker knows the locations of nodes for a few hops because of swapping?

It's also rather messy alchemy.
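
For concreteness, here is roughly how I read the proposal - the keyspace 
arithmetic is my assumption, not something you specified - and it shows the 
problem: the fabricated location generally doesn't correspond to any node the 
attacker has learned about through swapping.

  // Hypothetical sketch of "nudge closestLocation a bit closer",
  // assuming the usual circular [0,1) keyspace.
  class LocationFuzz {
      static double circularDistance(double a, double b) {
          double d = Math.abs(a - b);
          return Math.min(d, 1.0 - d);
      }

      static double fuzzedClosest(double oldClosest, double myLocation, double target) {
          double oldDist = circularDistance(oldClosest, target);
          double myDist = circularDistance(myLocation, target);
          if (myDist >= oldDist) return oldClosest; // we aren't closer, nothing to reset
          // Claim a distance somewhere strictly between ours and the old one...
          double claimed = myDist + Math.random() * (oldDist - myDist);
          // ...and put the claimed location on one (arbitrary) side of the target.
          double loc = target + claimed;
          return loc - Math.floor(loc); // wrap into [0,1)
      }
  }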

> and set the HTL to a value between 8 and 10 (more 
> likely to be 10 than 8 - maybe we could even set it to a value between 1
> and 10 but make 1 extremely unlikely, e.g.
> (int)(11-10*Math.pow(PROBABILITY_TUNER,(-Math.random()))) where
> PROBABILITY_TUNER is 10^(1/(1-(probability of HTL=10)))).

An exponential distribution? Interesting idea... you get some very short paths 
though...
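
A quick sanity check of that formula - the formula and PROBABILITY_TUNER are 
yours, the 80% figure for starting at HTL 10 and the harness are just my 
assumptions:

  public class InitialHtlSketch {
      public static void main(String[] args) {
          double pMax = 0.8; // assumed probability of starting at HTL 10
          double tuner = Math.pow(10, 1.0 / (1.0 - pMax)); // 10^(1/(1-p))
          int[] counts = new int[11];
          int trials = 1000000;
          for (int i = 0; i < trials; i++)
              counts[(int) (11 - 10 * Math.pow(tuner, -Math.random()))]++;
          for (int htl = 10; htl >= 1; htl--)
              System.out.printf("HTL %2d: %6.3f%%%n", htl, 100.0 * counts[htl] / trials);
      }
  }

With those numbers it does give ~80% at HTL 10, but also roughly 1% of 
requests starting at HTL 1 - hence the very short paths.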

> To add even more fuzz we could even make the probabilities themselves
> partly random. So every time the node is started it calculates the
> probability of decrementing the HTL for each possible HTL value. Of
> course we would have to limit that, but again I would argue for letting
> them be anything while making a suitable value more likely.

Beyond a certain point fuzz becomes just fuzz with no real value.
> 
> If we do the above the attacker can't rely on anything being certain;
> he can only estimate the probability, and even that varies every time
> the peer node is restarted.
> But for routing purposes it should be reliable enough. If we get a
> data-not-found we just have to try again (if we decide that we still
> need it - that might not be the case if we're searching for the next 5
> USK editions and find none of them; then we could just retry the first
> one, and if it's still not found we assume we have found the last
> edition) and it is likely that the data will be found if it's present.

Even for USKs, we want to get the data eventually. If the request is killed 
after 3 hops, that's not very useful, especially if we have no way of knowing 
that.
> 
> regards
> Neo at NHNG