On Jan 16 2008, Matthew Toseland wrote:
> If they are both local they'd both be HTL 10 iirc.

Right, if they're both local they'll have the same HTL, which means if
they don't have the same HTL they can't both be local... so local
traffic has less traffic to hide in.

> A simple weighted coin would generate very few timeouts: If we assume
> the data isn't found (typical on *long* requests), 2 seconds per hop
> seems a reasonable upper estimate, so there is a 0.2% chance that a
> request goes more than 60 hops and therefore causes a timeout... on
> the other hand we want it to generally respond in much less than that.

That sounds good - and if it doesn't time out we'll get a reply of some
kind within 60 seconds so we can move on to another peer.

>> DNF after a short search isn't a big deal, we can always try again.
>
> True. But to the same node? And how do we detect this anyway? Simply
> by time?

If we get a DNF from one peer we move on to the next - DNF is
equivalent to RNF because there's no hop counter.

> Report the number of hops on a DNF? Is that safe?

It's not safe, and we don't know the number of hops anyway.

> On either a DNF or a timeout, we'd like to route differently the next
> time, because both could be caused by a single malicious node
> attempting to censor a specific key. If short DNFs and timeouts are
> too common, this could be a problem.

In the case of a DNF we can just move on to the next peer... in the
case of a timeout, maybe we should remember which peers we tried last
time for a few seconds (in other words failure tables)?

> This again appears solvable: allow more than one request before
> blocking it - block the request only if there have been N requests
> over the timeout period. This is probably wise anyway, since Bad
> Things may have happened downstream.

Sounds sensible - so one timeout or several DNFs will cause further
requests to block. We can work out the number of DNFs to tolerate using
the weight of the coin, e.g. if there's a 95% chance that at least one
of the requests travelled 10 hops downstream, then block further
requests.

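A rough sketch of that arithmetic, assuming the 10% per-hop termination
probability you suggest below and independent flips; the class name and
the "travelled at least k hops = survived k flips" convention are just
my assumptions for the sake of the sketch:

    // Sanity-check the weighted-coin arithmetic in this thread.
    // Assumes a coin that terminates the request with probability
    // p = 0.1 at each hop, flipped independently per hop.
    public class CoinSketch {
        public static void main(String[] args) {
            double p = 0.1;              // per-hop termination probability
            double survive = 1.0 - p;    // chance of continuing past a hop

            // Chance a request goes more than 60 hops (the timeout case):
            // 0.9^60 ~ 0.18%, i.e. roughly the 0.2% quoted above.
            double over60 = Math.pow(survive, 60);
            System.out.printf("P(more than 60 hops) = %.2f%%%n", over60 * 100);

            // Chance a single request travels at least 10 hops: ~35%.
            double atLeast10 = Math.pow(survive, 10);

            // Smallest N with a 95% chance that at least one of N requests
            // travelled 10+ hops downstream: 1 - (1 - atLeast10)^N >= 0.95.
            int n = (int) Math.ceil(Math.log(0.05) / Math.log(1.0 - atLeast10));
            System.out.println("DNFs to tolerate before blocking: " + n);
        }
    }

That comes out at about seven DNFs before we block; the exact number
obviously follows from whatever coin weight we actually settle on.
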
>> IMO it shouldn't be customisable. If some of the traffic flowing
>> through my node belongs to unusually paranoid or unusually confident
>> people who modify the weight of the coin, my anonymity is reduced
>> even though I stuck with the default.
>
> I was referring to tunnels IIRC. In terms of HTL, I agree.

Ah, I see. We'll also need a way to decide when to terminate the tunnel
(i.e. switch from random routing to greedy routing) - I was thinking of
a weighted coin for that as well.

> Okay, supposing we implement a simple weighted coin (say 10%
> probability). This would eliminate the *urgent* need for tunnels.

I don't agree - replacing the hop counter without introducing tunnels
might actually make things worse: if we flip a separate coin for each
request then the attacker can quickly gather predecessor samples, even
if each one only provides a probability of 10%. We could flip one coin
per peer at startup, but then repeated requests will follow exactly the
same path as the original request unless we introduce failure tables.

> It would dramatically reduce the predecessor confidence,

Yup, that's definitely worthwhile as long as we don't increase the
number of samples at the same time.

Cheers,
Michael