So far, various solutions to FOAF-routing vulnerabilities have been proposed:

1) Limit an opennet peer to advertising 19 of its peers' locations (plus 
ours). Nextgens has implemented this, but it's not a full solution: we need 
to impose some limit on darknet peers too, and even then a clever node could 
still capture most of our traffic.
2) Limit any single node to no more than X% of the keyspace, or detect and 
disconnect from nodes which occupy more than X% (a rough sketch of such a 
check follows this list). The problem here is that it is entirely legitimate 
for a node to have most of its neighbours near its specialisation, and a few 
"long links" covering large areas of the keyspace. This is in fact exactly 
what is supposed to happen!
3) Limit any single node to no more than 30% of our outgoing requests. This 
would help in that capturing 100% of a node's outgoing requests would no 
longer be possible... but it wouldn't solve the problem. If an attacker's 
objective is to capture all the locally originated traffic, he just needs to 
grab as large a part of the keyspace as possible while excluding the target's 
specialisation: most of the traffic the target forwards for others falls in 
that area and keeps going to honest peers, so the attacker's share of total 
outgoing requests stays low while he still receives nearly all of the locally 
originated requests.
4) Limit any single node to no more than 30% of our outgoing *locally 
originated* requests, in addition to a limit applying to all requests. The 
worry here is that treating local requests differently might itself give an 
attacker a way to distinguish locally from remotely originated requests.
5) Attempt to enforce a 1/n distribution of locations. IMHO this is probably 
unrealistic in real world routing...
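
To make option 2 (and the keyspace-proportion idea in the quoted mail below) 
concrete, here is a minimal sketch of the kind of check I mean. It is not 
based on the actual routing code: the PeerView class, the key-sampling 
approach and the 0.30 threshold are placeholders for illustration. It 
estimates, for each directly connected peer, what fraction of the [0,1) 
keyspace that peer would win under closest-location routing, counting its 
own location plus every FOAF location it advertises:

import java.util.Arrays;
import java.util.List;

/**
 * Sketch of option 2: estimate what fraction of the circular [0,1) keyspace
 * each directly connected peer would "win" under closest-location routing,
 * counting its own location plus every FOAF location it advertises.
 * PeerView, the sampling and the 0.30 threshold are made up for illustration;
 * this is not the real PeerNode API.
 */
public class KeyspaceShareCheck {

    static class PeerView {
        final String name;
        final List<Double> locations; // own location + advertised FOAF locations
        PeerView(String name, List<Double> locations) {
            this.name = name;
            this.locations = locations;
        }
    }

    /** Circular distance on the [0,1) keyspace. */
    static double distance(double a, double b) {
        double d = Math.abs(a - b);
        return Math.min(d, 1.0 - d);
    }

    /** Approximate, by sampling keys, the fraction of the keyspace a peer wins. */
    static double shareOf(PeerView peer, List<PeerView> all, int samples) {
        int wins = 0;
        for (int i = 0; i < samples; i++) {
            double key = (double) i / samples;
            PeerView best = null;
            double bestDist = Double.MAX_VALUE;
            for (PeerView p : all) {
                for (double loc : p.locations) {
                    double d = distance(key, loc);
                    if (d < bestDist) { bestDist = d; best = p; }
                }
            }
            if (best == peer) wins++;
        }
        return (double) wins / samples;
    }

    public static void main(String[] args) {
        // Hypothetical topology: one honest peer, and an attacker advertising
        // FOAF locations bracketing the honest locations at +/- 1e-9
        // (i.e. the attack described in the quoted mail below).
        PeerView honest = new PeerView("honest", Arrays.asList(0.25, 0.6));
        PeerView attacker = new PeerView("attacker", Arrays.asList(
                0.25 - 1e-9, 0.25 + 1e-9, 0.6 - 1e-9, 0.6 + 1e-9));
        List<PeerView> peers = Arrays.asList(honest, attacker);
        for (PeerView p : peers) {
            double share = shareOf(p, peers, 100000);
            System.out.printf("%s controls ~%.1f%% of the keyspace%n", p.name, 100 * share);
            if (share > 0.30) // the X% threshold would have to come from simulation
                System.out.println("  -> over the limit: alert, randomize, or disconnect");
        }
    }
}

Fed the toy data above, the attacker's share comes out near 100% and the 
honest peer's near 0%, so even a crude check of this kind would make the 
naive bracketing attack visible; the open question from option 2 is whether 
a legitimately specialised peer with a few long links would also trip it.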

Background: even without FOAF routing, an attacker who is directly connected 
to the target can probably identify any known content with reasonable 
certainty, by performing a correlation attack. This won't dramatically change 
until we implement some form of encrypted tunnels...

Thoughts? Nextgens simulated the current (disabled) code on a perfect 500-node 
network and saw the average hop count drop from 5 to 3...

Should we enable FOAF routing anyway, and if so, which mitigation measures do 
we need to implement first? Note that encrypted tunnels would not solve this 
problem, as they are affected by it too, at least if we rendezvous at a key 
and use FOAF-routing to reach it; a random-walk rendezvous wouldn't be 
affected.
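
For comparison, options 3 and 4 would bite at request-forwarding time rather 
than when peers advertise locations. Below is a minimal sketch of what such a 
cap might look like; the class names, the 30% figure and the window size are 
placeholders, not anything simulated or taken from the existing code:

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/**
 * Sketch of options 3/4: don't let any single peer receive more than
 * MAX_SHARE of our recent outgoing requests (for option 4, feed it only
 * locally originated requests and pair it with an overall limit).
 * The class and method names are illustrative, not the real routing code.
 */
public class OutgoingShareLimiter {
    private static final double MAX_SHARE = 0.30; // the 30% from option 3
    private static final int WINDOW = 1000;       // recent requests remembered

    private final Deque<String> recent = new ArrayDeque<String>();
    private final Map<String, Integer> counts = new HashMap<String, Integer>();

    private int countOf(String peer) {
        Integer c = counts.get(peer);
        return c == null ? 0 : c;
    }

    /** Fraction of the recent window that went to this peer. */
    private double shareOf(String peer) {
        return recent.isEmpty() ? 0.0 : (double) countOf(peer) / recent.size();
    }

    /** Record that a request was routed to this peer, trimming the window. */
    private void record(String peer) {
        recent.addLast(peer);
        counts.put(peer, countOf(peer) + 1);
        if (recent.size() > WINDOW) {
            String oldest = recent.removeFirst();
            counts.put(oldest, counts.get(oldest) - 1);
        }
    }

    /**
     * Pick the routing target from candidates already sorted by distance to
     * the key: skip any peer that is over the cap, unless every candidate is
     * over it (we still have to route the request somewhere).
     */
    public String choosePeer(List<String> peersByDistance) {
        for (String peer : peersByDistance) {
            if (shareOf(peer) <= MAX_SHARE) {
                record(peer);
                return peer;
            }
        }
        String fallback = peersByDistance.get(0);
        record(fallback);
        return fallback;
    }
}

As option 3 notes above, this only blunts the attack: an attacker who avoids 
the target's specialisation can still capture most of the locally originated 
traffic while his share of total outgoing requests stays under the cap.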

On Friday 18 July 2008 15:42, Florent Daignière wrote:
> Yesterday I implemented and committed the naive implementation of foaf
> into the trunk... (foaf routing:
> http://archives.freenetproject.org/message/20080707.111733.13824377.en.html)
> 
> I am reluctant to enable it by default as there are some major security
> implications. As far as I understand, the logic was: "the swapping
> algorithm can already be subverted by an attacker to extract our peers'
> locations... hence implementing foaf won't harm much: it will just
> provide more accurate data to a potential attacker".
> 
> Foaf-routing is about two things:
>       1) publishing our peers' locations
>       2) using the intelligence our peers provide us to route more
>       effectively
> 
> While the old logic covers point 1, it doesn't cover point 2, and we have
> to ask ourselves how point 2 can be used by a bad guy...
> 
> The obvious attack scenario is:
>       The attacker has a direct link to my node. For his attack to
>       succeed he would like to capture all my outgoing traffic (in
>       which case it's obvious I don't have any anonymity). That is
>       trivial to do on a foaf-enabled node: the routing algorithm
>       always routes to "the closest location it can find", and the bad
>       guy can advertise several locations for his node (pretending he
>       is peered to some nodes which have the locations he wants them
>       to have). The bad guy also has an accurate view of my peers'
>       locations, as I have cleverly sent them to him...
> 
>       If he advertises two peers for each of my peers, with locations
>       slightly closer and slightly further on the keyspace (say +/-
>       0.000000001), my node *will* send every request to his node!
> 
> It's obviously a problem we will want to address somehow...
> 
> One option is to limit the number of locations he can advertise. It is
> an inefficient mitigation measure as he doesn't need many of them: the
> average node has 20 links... so he would only need to advertise 2*20=40
> locations to cover the whole keyspace.
> 
> Another solution would be to compute the proportion of the keyspace
> each node controls... That would be an efficient mitigation measure...
> Moreover we could use that metric to determine whether a clustering
> attack is going on or not... and decide whether we should randomize our
> location or not. Of course it means introducing some more alchemy into
> the algorithm and the code... but I don't regard the current solution as
> acceptable.
> 
> Can anyone think of some other mitigation measure we could use?
> Is anyone willing to run some simulations and find out the magic values
> we are going to use in the mitigation algorithm?
> 
> NextGen$
> 
