* Gregory Maxwell <gmaxw...@gmail.com> [2009-12-15 18:39:24]:

> On Mon, Dec 14, 2009 at 11:51 AM, Florent Daigniere
> <nextg...@freenetproject.org> wrote:
> > Modern compression algorithms allow FAST decompression. We are talking
> > 10 to 20 times faster here!
> >
> > http://en.wikipedia.org/wiki/Lempel-Ziv-Markov_chain_algorithm
> > # Compression speed: approximately 1 MiB per second on a 2 GHz CPU
> > # Decompression speed: 10-20 MiB per second on a 2 GHz CPU
> >
> > Anyway, the assumption is and has always been that CPU cycles are cheap
> > compared to network traffic. Moore's law doesn't apply to networks.
> 
> It does, in fact. Networks are just a combination of processing,
> storage, and connectivity. Each of those is itself a combination of
> processing, storage, and connectivity. At the limit of this recursion
> the performance of all of these, except the connectivity, is driven by
> transistor density -- Moore's law.
> 

The keyword here is "except". What characterizes a bottleneck is the weakest
part of the chain...
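
To make the weakest-link point concrete, here is a toy calculation in Java
(every throughput figure is an illustrative assumption, not a measurement):

public class WeakestLink {
    public static void main(String[] args) {
        // Illustrative stage throughputs in MiB/s (assumed, not measured):
        double cpuDecompress = 15.0;   // LZMA decompression on a ~2 GHz CPU
        double diskStore     = 50.0;   // local datastore I/O
        double localLoop     = 0.125;  // ~1 Mbit/s ADSL downlink

        // A store-and-forward chain runs at the speed of its slowest stage.
        double effective = Math.min(cpuDecompress,
                           Math.min(diskStore, localLoop));
        System.out.printf("effective throughput: %.3f MiB/s%n", effective);
        // Doubling the CPU or the disk (Moore's law) changes nothing here;
        // only the local loop matters.
    }
}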

> There is a *ton* of available bandwidth on optical fibers. For the
> communication part we have Butters' law: "Butters' Law says the amount
> of data coming out of an optical fiber is doubling every nine
> months"[1]
> 

Very informative... except that you're quoting transit connectivity. We are
writing p2p software here; what matters is what the average user has on his
local loop.

ADSL for most of them... or worse. If Freenet were run from servers with fiber
connectivity and high uptimes, it would perform much better.
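
Back-of-the-envelope, with assumed numbers (a 100 MiB file, a 0.7 compression
ratio, a ~1 Mbit/s ADSL downlink, and the decompression speed from the wiki
page quoted above):

public class CompressionPayoff {
    public static void main(String[] args) {
        double sizeMiB    = 100.0;  // original file size (assumed)
        double ratio      = 0.7;    // compressed/original ratio (assumed)
        double linkMiBs   = 0.125;  // ~1 Mbit/s ADSL downlink
        double decompMiBs = 15.0;   // LZMA decompression, per the wiki figures

        double plain = sizeMiB / linkMiBs;
        // Pessimistic model: download everything, then decompress
        // (no pipelining of the two stages).
        double compressed = (sizeMiB * ratio) / linkMiBs
                          + (sizeMiB * ratio) / decompMiBs;
        System.out.printf("uncompressed: %.0f s, compressed: %.0f s%n",
                          plain, compressed);
        // 800 s vs ~565 s: on a slow local loop even modest compression
        // wins, because decompression time is noise next to transfer time.
    }
}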

> It took me a day to find a graph of historical wholesale internet transit
> prices:
> 
> http://www.drpeering.net/a/Peering_vs_Transit___The_Business_Case_for_Peering_files/droppedImage_1.png
> (In fact, this graph appears to be overstating the current cost for
> bulk rate transit. Advertised pricing at the gbit port level is down
> to $2/mbit/sec/month from some cut-rate providers; negotiated prices
> can be lower still)
> 
> Of course, Freenet does a lot of network criss-crossing... this shifts
> the balance in favour of stronger compression but that doesn't
> magically make compression that only gives a 1% reduction a win.
> 
> [1] http://www.eetimes.com/story/OEG20000926S0065

As is often the case, we are arguing over not much: Freenet does heavy
encryption and FEC anyway... adding compression to the mix is not much
overhead compared to the rest.
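
For what it's worth, stacking compression under the existing crypto is one
extra layer in a stream pipeline. A minimal sketch using only JDK classes --
this is not Freenet's actual code path, and the key handling is deliberately
naive:

import java.io.FileOutputStream;
import java.io.OutputStream;
import java.util.zip.DeflaterOutputStream;
import javax.crypto.Cipher;
import javax.crypto.CipherOutputStream;
import javax.crypto.KeyGenerator;
import javax.crypto.SecretKey;

public class CompressThenEncrypt {
    public static void main(String[] args) throws Exception {
        // Throwaway key, for illustration only. A real implementation
        // would also persist cipher.getIV() so the data can be decrypted.
        SecretKey key = KeyGenerator.getInstance("AES").generateKey();
        Cipher cipher = Cipher.getInstance("AES/CBC/PKCS5Padding");
        cipher.init(Cipher.ENCRYPT_MODE, key);

        OutputStream file = new FileOutputStream("out.bin");
        // Compress first, then encrypt: ciphertext is incompressible,
        // so the order of the layers matters.
        OutputStream out = new DeflaterOutputStream(
                new CipherOutputStream(file, cipher));
        out.write("payload goes here".getBytes("UTF-8"));
        out.close();
    }
}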

Actually I'm surprised no one suggested getting rid of encryption
altogether; it would be waaaaaayyyy faster for sure.

All I gathered from Ian's report is that we probably shouldn't show a
COMPRESSION stage... in the user interface. Users obviously know better, but
they still don't understand what is being achieved here and press cancel.
If we weren't saying what we were doing, they would just complain about speed,
which we are used to.

It's not like this is the first time we've argued over it:
Feb 11 Todd Walton      (24) [Tech] CHKs, Metadata, Encryption, Compression, Hashing
Aug 12 Matthew Toseland (55) [Tech] Should we try to compress all files?
=> that one is worth reading; Ian already says he wants the client to choose
whether to compress or not (see the sketch after this list)... and we already
argued over recompressing video:
http://archives.freenetproject.org/message/20060812.161518.1148b5c5.en.html
Jun 03 Matthew Toseland (50) [freenet-dev] Memory overhead of compression codecs
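
On letting the client choose: one cheap heuristic (a sketch of the idea, not
anything Freenet does today) is to trial-compress a prefix and skip the codec
when the ratio is poor; that also takes care of already-compressed video:

import java.util.Random;
import java.util.zip.Deflater;

public class ShouldCompress {
    /** Trial-compress up to 64 KiB and report whether it's worth it. */
    static boolean worthCompressing(byte[] data) {
        int n = Math.min(data.length, 64 * 1024);
        Deflater def = new Deflater(Deflater.BEST_SPEED);
        def.setInput(data, 0, n);
        def.finish();
        byte[] buf = new byte[n + 64]; // slack for incompressible input
        int clen = 0;
        while (!def.finished() && clen < buf.length)
            clen += def.deflate(buf, clen, buf.length - clen);
        boolean done = def.finished();
        def.end();
        // Arbitrary threshold: only bother if we save at least 5%.
        return done && clen < n * 0.95;
    }

    public static void main(String[] args) {
        byte[] text = new byte[65536];  // zeros: very compressible
        byte[] video = new byte[65536];
        new Random(42).nextBytes(video); // random: incompressible
        System.out.println("text-like:  " + worthCompressing(text));  // true
        System.out.println("video-like: " + worthCompressing(video)); // false
    }
}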

Florent