Problem 1) There are many nodes which have a lot more output bandwidth than 
Freenet can use.
Problem 2) Requests can in theory be traced across the network by advanced 
traffic analysis, even on darknet.
(Note that there are rather more powerful attacks that need to be dealt with 
first; I have discussed these in other mails.)

The best way to address problem 1 is:
- New load management which takes better account of local circumstances.
- A bulk requests flag: requests flagged as bulk are allowed much longer 
timeouts, so many more of them can run at once, and we are therefore more 
likely to fill our bandwidth.

It is interesting to note that right now requests are typically limited by 
projected worst-case bandwidth usage (i.e. output bandwidth liability), not by 
actual output bandwidth usage. Bulk requests should largely solve this.
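To make the liability argument concrete, here is a minimal sketch of worst-case admission control. The numbers (32 KiB blocks, 16 KiB/s link, 60 s vs 600 s timeouts) are illustrative assumptions, not the node's real constants, and the real accounting is more involved:

```python
# Sketch of output-bandwidth-liability admission control.
# All constants below are hypothetical, for illustration only.

BLOCK_SIZE = 32 * 1024  # bytes a request might have to forward back

def max_concurrent_requests(output_bps: int, timeout_s: float) -> int:
    """Worst case: every in-flight request returns a full block, and all
    of those blocks must be sendable within the timeout."""
    return int(output_bps * timeout_s) // BLOCK_SIZE

# Short-timeout requests on a 16 KiB/s link:
realtime = max_concurrent_requests(16 * 1024, 60)   # -> 30 concurrent
# Bulk requests with a 10x longer timeout on the same link:
bulk = max_concurrent_requests(16 * 1024, 600)      # -> 300 concurrent
```

This is why longer bulk timeouts translate directly into many more requests in flight, and hence better utilisation of actual bandwidth.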

However, there is likely to be some remnant even with these measures. And the 
second problem remains.

The old email mixnets were said to be secure at a given minimum traffic level: 
an outgoing email packet could not be matched to an incoming one, because 
there were several candidates it might correspond to. With added delays this 
was even more secure.

Theoretically only full Constant Bit Rate links resist powerful traffic 
analysis. However, IMHO we can get quite close to CBR in practice, without a 
massive performance cost:

First option: Queue blocks and only send one block when there are a bunch of 
others to send. The threshold should be configurable.

Sub-problems:
a) We would have to send entire blocks at once, i.e. we could not forward a 
block until we had received it in full. This may however be acceptable for 
bulk requests, and it is very much compatible with the trickle-back secure 
bursting proposals. On fast nodes it should not normally be a problem anyway; 
the special-case logic should only trigger occasionally.
b) What if there aren't any others? Solution: Create them from padding or see 
below.

Obviously this does leak some information, not being full CBR.
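A minimal sketch of the first option, covering both sub-problems. The class name, threshold and block size are hypothetical choices for illustration; a real transport would also encrypt, schedule ticks, and mark padding so the receiver can discard it:

```python
import os
from collections import deque

BLOCK_SIZE = 32 * 1024  # assumed fixed wire block size
THRESHOLD = 4           # configurable: queued blocks needed before sending

class BatchingSender:
    """Only release a block when several others are queued alongside it,
    so an observer cannot match one incoming block to one outgoing one.
    A sketch of the idea, not Freenet's actual transport code."""

    def __init__(self, threshold=THRESHOLD):
        self.threshold = threshold
        self.queue = deque()

    def enqueue(self, block: bytes):
        self.queue.append(block)

    def flush(self, pad_if_short=False):
        """Send one block if the threshold is met; optionally top the
        queue up with padding blocks (sub-problem b) when it is not."""
        if pad_if_short:
            while len(self.queue) < self.threshold:
                self.queue.append(os.urandom(BLOCK_SIZE))  # padding block
        if len(self.queue) >= self.threshold:
            return self.queue.popleft()
        return None
```

With padding disabled, a lone block simply waits until enough company arrives; with padding enabled, the threshold is always met at the cost of extra bandwidth.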

Second option: Create useful traffic. Ensure that most peers constantly have 
data to send, so that throughput is limited by the capacity of the node's 
internet connection. Thus we obscure any data travelling along a given link. 
The main caveat is that if we let message priorities compete between peers, 
there will still be some information leakage. Right now we don't let message 
priorities compete between peers, but if we implement the ideas below we will 
want to. On nodes with relatively high security settings which have a lot of 
output bandwidth, it may be acceptable to turn this off.
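The design point about priorities not competing between peers can be sketched as one priority queue per peer, rather than a single shared one. The class and method names are hypothetical; the point is only that one peer's urgent traffic never perturbs the visible flow to another peer:

```python
from heapq import heappush, heappop
from itertools import count

class PerPeerScheduler:
    """Each peer gets its own priority queue, so a burst of high-priority
    traffic for one peer cannot starve (and thereby visibly perturb) the
    flow to another peer. Illustrative sketch only."""

    def __init__(self):
        self.queues = {}   # peer -> heap of (priority, seq, message)
        self.seq = count() # tie-breaker so equal priorities stay FIFO

    def enqueue(self, peer, priority, message):
        heappush(self.queues.setdefault(peer, []),
                 (priority, next(self.seq), message))

    def next_for(self, peer):
        """Pick the best message for this peer only; other peers'
        priorities never enter the comparison."""
        heap = self.queues.get(peer)
        return heappop(heap)[2] if heap else None
```

A shared queue would be the opposite trade-off: better global prioritisation, but each peer's observed throughput then leaks information about traffic to other peers.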

Main sources:
a) Bulk requests, see above - the more requests are running at once the greater 
the chance that a given peer constantly has data to send.
b) Bloom filter sharing. This requires that the bloom filters or similar 
structures be divided into chunks which are independently useful, but that is 
fairly essential anyway for reasons of e.g. memory usage.
c) Opportunistic datastore filling.
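One way to make bloom filter chunks independently useful is to route each key to a chunk by hash, so a peer holding only some chunks can still answer membership queries for the keys those chunks cover. A sketch under assumed (and unrealistically small) sizes; the real structures and parameters would differ:

```python
import hashlib

class ChunkedBloomFilter:
    """A Bloom filter split into independently useful chunks: each key
    hashes to exactly one chunk, so a peer that has received only some
    chunks can still answer 'might this node have key X?' for keys that
    fall into them. Illustrative sketch only."""

    def __init__(self, num_chunks=16, bits_per_chunk=1024, hashes=3):
        self.num_chunks = num_chunks
        self.bits = bits_per_chunk
        self.hashes = hashes
        self.chunks = [bytearray(bits_per_chunk // 8)
                       for _ in range(num_chunks)]

    def _positions(self, key: bytes):
        digest = hashlib.sha256(key).digest()
        chunk = digest[0] % self.num_chunks
        positions = [int.from_bytes(digest[1 + 4*i:5 + 4*i], "big")
                     % self.bits for i in range(self.hashes)]
        return chunk, positions

    def add(self, key: bytes):
        chunk, positions = self._positions(key)
        for p in positions:
            self.chunks[chunk][p // 8] |= 1 << (p % 8)

    def might_contain(self, key: bytes) -> bool:
        chunk, positions = self._positions(key)
        return all(self.chunks[chunk][p // 8] & (1 << (p % 8))
                   for p in positions)
```

Each chunk can then be transferred (and queried) on its own, which is also what keeps per-transfer memory usage bounded.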

Obviously the latter two have security impacts themselves, however:
1) Both have the potential to significantly shorten average paths, and 
therefore to improve not only performance but also security, especially 
against a distant attacker.
2) Our local requests are not cached in our datastore anyway. They could 
migrate back to it with opportunistic datastore filling, but no more so than 
they would migrate to other nodes in the area - if an attacker can encircle us 
and find the local data requests stored on a circle of nodes at 2-4 hops out, 
he can probably get some idea where we are anyway. The only way to stop that 
sort of thing is tunnels, or not letting the attacker get close in the first 
place. Both are planned, although likely not by default for all parts of all 
requests.

Bloom filters can be quite large, but opportunistic datastore filling is 
likely to involve a huge amount of traffic, especially if peers are dynamic, 
i.e. it works slightly better on opennet than on darknet. Bloom filter 
sharing, on the other hand, will probably only work where the overhead is 
low, i.e. where there is spare bandwidth on the side of the node sending the 
filters, or where there is a very long-term relationship (e.g. darknet), so 
the cost is divided over a long period.

Third option: Full-blown enforced CBR. This may be acceptable for some really 
paranoid darknet users, but it's clearly not something we want by default.
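For completeness, enforced CBR reduces to a very simple loop: exactly one fixed-size block leaves per tick, padding when there is nothing real to send. Block size and tick rate below are made-up constants; on an encrypted link the padding is indistinguishable from data to an outside observer:

```python
import os
from collections import deque

BLOCK_SIZE = 1024    # assumed fixed wire unit; everything sent is this size
TICK_SECONDS = 0.25  # one block per tick => constant rate regardless of load

class CBRSender:
    """Enforced constant bit rate: exactly one block per tick, padding
    when idle, so the observed traffic pattern is identical whether the
    link is busy or not. Sketch only."""

    def __init__(self):
        self.queue = deque()

    def enqueue(self, data: bytes):
        assert len(data) <= BLOCK_SIZE
        self.queue.append(data.ljust(BLOCK_SIZE, b"\0"))  # pad to full block

    def tick(self) -> bytes:
        """Called once per TICK_SECONDS by the transport loop."""
        if self.queue:
            return self.queue.popleft()
        return os.urandom(BLOCK_SIZE)  # pure padding block when idle
```

The cost is obvious: the link burns its full configured rate even when completely idle, which is exactly why this is opt-in rather than the default.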

Bursts: The more bursty the network, the more obvious the traffic flows will 
be in the absence of full CBR. On the other hand, the shorter the bursts are, 
both in path length and in time, the less likely it is that attackers can 
exploit them even if present on a random proportion of nodes (which is very 
hard to achieve on darknet), and the easier it is to take countermeasures 
against mobile-attacker attacks.