Freenet can never compete on speed with traditional peer-to-peer networks, 
for several reasons, of which at least one is intractable:
1. Freenet assumes high uptime. This does not happen in practice, at least not 
for the mass market. To some degree we can resolve this with e.g. persistent 
requests in 0.10.
2. Freenet returns data via intermediaries, on both darknet and opennet. This 
is what makes our caching model work, and it's a good thing for security; 
however, broadcasting a search (or using some more efficient form of lookup) 
and then having the nodes that hold the data contact you directly will always 
be faster, often much, much faster. Caching may well cancel out this 
advantage in practice, at least in the medium term.
3. Freenet has a relatively low peer count. Hence the maximum transfer rate 
is determined by the output bandwidth of our peers, which is low. Increasing 
the number of peers would increase various costs, especially if they are 
slow, and make it harder to see whether the network can scale; on the other 
hand, it would increase maximum download rates.
4. Freenet avoids ubernodes. Very fast nodes are rightly seen as a threat, 
because over-reliance on them makes the network very vulnerable. Practically 
speaking, they may be attacked; if such attacks are common, this again 
neutralises the corresponding advantage of "traditional" p2p.
5. FREENET DOESN'T BURST.
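Point 3 can be made concrete with some back-of-the-envelope arithmetic (all 
numbers here are illustrative assumptions, not measurements of any real 
node):

```python
# Hedged illustration of point 3: a node's download rate is bounded by
# what its peers can upload to it, not by its own link speed.
peers = 20                       # assumed number of connected peers
upstream_per_peer_kBps = 15      # assumed total upload capacity per peer
share_to_us = 1 / peers          # each peer also serves ~20 other peers

max_download_kBps = peers * upstream_per_peer_kBps * share_to_us
print(max_download_kBps)  # -> 15.0, however fast our own downstream is
```

Adding more peers raises the bound only if their upload capacity is not 
already spread across just as many extra connections.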

The last is the fundamental, intractable issue IMHO. Freenet sends requests at 
a constant rate, and exchanges data between peers at a roughly constant rate. 
On something like Perfect Dark (which admittedly has much higher average 
upstream bandwidth and bigger stores), you start a request, and you get a 
huge great spike until the transfer is complete. It's similar on bittorrent, 
provided the file is popular. On Freenet, our load management is all designed 
to send requests constantly, and in practice, up to a certain level, it will 
use as much bandwidth as you allow it. We could introduce a monthly transfer 
limit as well as the upstream limit, but this would not help much, because 
bursting is inherently dangerous for Freenet's architecture. If you are Eve, 
and you see a big burst of traffic spreading out from Alice, with tons of 
traffic on the first hop, lots on the second, elevated levels on the third, 
you can guess that Alice is making a big request. But it's a lot worse than 
that: If you also own a node where the spike is perceptible, or can get one 
there before the spike ends, you can immediately identify what Alice is 
fetching! The more spiky the traffic, the more security is obliterated. And 
encrypted tunnels do not solve the problem, because they still have to carry 
the same data spike. Ultimately only constant-bit-rate (CBR) links solve the 
problem completely; what we have right now is the hope that most of the 
network is busy enough to hide traffic flows, which is the same assumption 
that many other systems rely on. But big spikes - which are necessary if the 
user wants to queue a large download and have it delivered at link speed - 
make it much worse.
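To illustrate what a CBR link means in practice, here is a minimal sketch 
(the class name, frame size, and tick model are my own assumptions for 
illustration, not Freenet code): the link emits exactly one fixed-size frame 
per tick, padding with dummy bytes when there is nothing to send, so an 
eavesdropper sees the same traffic pattern whether or not real data is 
flowing.

```python
from collections import deque

FRAME_SIZE = 1024  # assumed frame size in bytes; one frame per tick


class CBRLink:
    """Constant-bit-rate link sketch: every tick emits exactly one
    fixed-size frame, real data if queued, otherwise pure padding."""

    def __init__(self):
        self.queue = deque()

    def send(self, payload: bytes) -> None:
        # Fragment real data into frame-sized chunks, padding the last one.
        for i in range(0, len(payload), FRAME_SIZE):
            chunk = payload[i:i + FRAME_SIZE]
            self.queue.append(chunk.ljust(FRAME_SIZE, b"\0"))

    def tick(self) -> bytes:
        # One frame per tick regardless of load: this is what hides spikes.
        if self.queue:
            return self.queue.popleft()
        return b"\0" * FRAME_SIZE
```

The cost is obvious: padding wastes bandwidth when idle, and real transfers 
can never go faster than the fixed rate, which is exactly the burstiness 
trade-off described above.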

There are lots of ways we can improve Freenet's performance, and we will 
implement some of the more interesting ones in 0.9: For example, sharing 
Bloom filters of our datastore with our peers will gain us a lot, although to 
what degree it can work on opennet is an open question, and encrypted tunnels 
may eat up most of the hops we gain from Bloom filters. And new load 
management will help too when we eventually get there. However, at least for 
popular data, we can never achieve the high, transient download rates that 
bursty filesharing networks can. How does that affect our target audience and 
our strategy for getting people to use Freenet in general? Does it affect it?
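As a rough sketch of the Bloom-filter idea (this is illustrative Python, not 
the actual Freenet implementation; the sizes and SHA-256-based hash scheme 
are assumptions): each node hands its peers a compact bit array summarising 
its datastore, so a request can go straight to a peer that probably holds 
the key, at the cost of occasional false positives.

```python
import hashlib


class BloomFilter:
    """Fixed-size Bloom filter; k hash functions derived from SHA-256."""

    def __init__(self, size_bits: int = 8192, num_hashes: int = 5):
        self.size = size_bits
        self.k = num_hashes
        self.bits = bytearray(size_bits // 8)

    def _indexes(self, key: bytes):
        # Derive k independent bit positions by salting SHA-256 with i.
        for i in range(self.k):
            h = hashlib.sha256(i.to_bytes(4, "big") + key).digest()
            yield int.from_bytes(h[:8], "big") % self.size

    def add(self, key: bytes) -> None:
        for idx in self._indexes(key):
            self.bits[idx // 8] |= 1 << (idx % 8)

    def might_contain(self, key: bytes) -> bool:
        # False means definitely absent; True means probably present.
        return all(self.bits[idx // 8] & (1 << (idx % 8))
                   for idx in self._indexes(key))
```

A peer that answers "might contain" for a requested key gets tried first, 
saving hops; a false positive just costs one wasted request.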