On Friday 13 June 2008 01:16, Ian Clarke wrote:
> On Thu, Jun 12, 2008 at 5:53 PM, Matthew Toseland
> <toad at amphibian.dyndns.org> wrote:
> > IMHO we should, in the near future, try a prototype of this.
> 
> Woah, wait a minute!  This is voodoo security, we need to think this
> through before we implement anything, there are some fundamental
> questions that must be answered first.

I tried to answer these in the original proposal, but I will try to answer 
your questions here since the original post was long.
> 
> For example:
> 
> What are the specific threats that we are currently vulnerable to,
> that this approach will prevent, and how does it prevent them?

Correlation attacks and adaptive search. Details below.
> 
> What are the assumptions they are making about the capabilities of the
> attacker?

For a correlation attack you need to be connected to the target. You spider 
the network and/or watch FMS, so that you can identify blocks belonging to 
the target splitfile or identity. Then you determine what proportion of the 
total comes from your peer, and hence how plausible it is that the peer is 
merely a relay. You have moderately accurate data on the local topology from 
swapping, which helps, and on opennet you know that pretty much every node 
will have around 20 peers. So either way, you can be reasonably confident 
that if a sufficiently large fraction of the splitfile is being requested 
from you, the connected peer is probably the originator. (Too small a 
fraction doesn't necessarily exonerate the node, however, for various 
reasons.)
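To make the test concrete, here is a minimal sketch of that correlation check. All names, block counts, and the relay baseline are invented for illustration; they are not from the proposal:

```python
# Hypothetical sketch of the correlation attack described above.
# Assumes the attacker has already identified which blocks belong to the
# target splitfile (by spidering / watching FMS); all numbers are invented.

def correlation_score(blocks_seen_from_peer, total_blocks_in_splitfile,
                      peer_count=20):
    """Ratio of observed traffic share to the share a pure relay would carry.

    If the peer were merely relaying, the splitfile's requests would be
    spread across roughly `peer_count` links, so a relay should account
    for about 1/peer_count of the blocks we can observe.
    """
    observed = blocks_seen_from_peer / total_blocks_in_splitfile
    expected_if_relay = 1.0 / peer_count
    return observed / expected_if_relay  # >> 1 suggests the peer originates

# Example: we see 600 of 4000 blocks from one opennet peer with ~20 links:
# observed share 15% vs an expected ~5% for a relay.
score = correlation_score(600, 4000)
print(round(score, 2))  # 3.0 -- this peer carries 3x what a pure relay would
```

As noted above, a low score does not exonerate the peer; the baseline itself is only as good as the attacker's view of the topology.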

For an adaptive search you need to be able to identify the target splitfile 
(or a series of splitfiles connected to a single identity), and you need to 
be able to change your connections, presumably by accepting some path 
folding and keeping those peers closest to your current view of the target's 
location. Each request you intercept gives you a sample of the originator's 
location on the keyspace (because you know the request reached you, you can 
work out a vague idea of where it might be). You combine these samples to 
get an increasingly accurate view, which you use to connect to nodes closer 
and closer to the target, resulting in intercepting more and more of the 
requests you are tracing, and therefore collecting more and more samples. 
Thus once you have a few requests/inserts from the target, you can rapidly 
get more, and for a big insert you should be able to converge on his 
location in a reasonable time. Of course, getting connections closer to the 
target is much more expensive on a true darknet, but right now no such thing 
exists: almost everyone uses opennet. And even on darknet, this is the 
strategy an attacker would take; he'd just have to be a lot more careful, 
since each link would have a significant cost.
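The sample-combining step above can be sketched as follows. This assumes each intercepted request yields the target's location plus some routing noise; the noise model, sample count, and target location are all invented for illustration:

```python
# Hypothetical sketch of the adaptive-search estimator described above.
# Each intercepted request is modelled as a noisy sample of the
# originator's location on the circular keyspace [0, 1); a circular mean
# combines the samples into an increasingly accurate estimate.
import cmath
import math
import random

def circular_mean(samples):
    """Mean of points on the unit keyspace circle, robust to wraparound."""
    z = sum(cmath.exp(2j * math.pi * s) for s in samples)
    return (cmath.phase(z) / (2 * math.pi)) % 1.0

random.seed(1)
target = 0.42           # invented target location
noise_sd = 0.05         # invented per-sample routing noise

# 50 intercepted requests, each a noisy observation of the target.
samples = [(target + random.gauss(0, noise_sd)) % 1.0 for _ in range(50)]
estimate = circular_mean(samples)
print(estimate)  # should land near 0.42
```

This is exactly why the feedback loop is dangerous: every link moved closer to the estimate increases the interception rate, which yields more samples, which tightens the estimate further.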
> 
> What will be the real-world performance impact of implementing
> something like this?  

This depends on the tunnel length. The objective of implementing a prototype 
is simply to determine what the tunnel length will be. I don't intend to 
actually create tunnels and send requests down them, merely to execute the 
rendezvous and see how many hops are needed. If the tunnel length is long, 
we can wait to implement tunneling until performance has improved, find ways 
to shorten it (e.g. with mrogers' proposal for random-routing rendezvous 
instead of key-based rendezvous), or make it an option for the really 
paranoid (e.g. inserters of controversial content).

If on the other hand the average tunnel length is short (say 5 hops or less), 
we should implement it immediately, initially as a paranoid option, and then 
look into what performance benefits we can derive from not having to worry 
about various attacks - for example, if we lose 5 hops from this, but then 
gain 3 hops from Bloom filters, it becomes very tempting to turn it on by 
default.

Note also that tunnels will be long-lived: the tunnel setup process can be 
fairly involved, so we will use a small number of tunnels for each 
splitfile. In the long run we would tag all requests with an identity and 
therefore reuse tunnels for the same identity. Ideally we'd use only one 
tunnel, but in practice we would use several in order to achieve reasonable 
performance. Even with many tunnels, the attacker will see hundreds to 
thousands of times fewer location or predecessor samples: one per tunnel, 
not one per request.
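Back-of-the-envelope arithmetic for that reduction; the block and tunnel counts here are illustrative, not from the proposal:

```python
# Illustrative numbers only: a large splitfile of 4000 blocks fetched
# through 4 long-lived tunnels instead of per-request routing.
blocks = 4000    # requests in one large splitfile
tunnels = 4      # long-lived tunnels used for that splitfile

samples_without_tunnels = blocks   # one location/predecessor sample per request
samples_with_tunnels = tunnels     # one sample per tunnel

print(samples_without_tunnels // samples_with_tunnels)  # 1000x fewer samples
```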

> Are we now going to be talking about minutes 
> rather than seconds to respond to requests?  If so, we may make
> Freenet more secure, but this will be a pyrrhic victory as we will
> have also made Freenet useless.

If the tunnels are short, then tunneling will not result in a large 
reduction in performance, and may enable some significant optimisations 
which we could not implement before for security reasons.
> 
> This requires a lot more discussion and consideration before a line of
> code is written.

And threatening to implement it is a great way to get people's attention! :)
> 
> Ian.