------- Begin forwarded message ------------
I wish to draw attention to a number of problems in Freenet, and offer a solution. Please forward this to the dev mailing list if you think it has merit.
Apparently the current version of Fred now supports Freesites inside ZIP files. IMHO this is a _bad_ idea, because it will increase load times _a lot_ (a ZIP file has to be downloaded completely before any of its files can be extracted), and Freenet latency is already bad. Also, consider what happens if the activelink images are inside the ZIP archives, as suggested: accessing TFE would cause your node to download every last Freesite on Freenet, pushing all other data out of your node (or a significant amount of it, anyway).
On the other hand, the ZIP files do solve the problem of bitrot, that is, parts of a Freesite (anything but the activelink and index page) dropping off the network. Bitrot is a logical consequence of how Freenet and current browsers work. Freesites are made of several files, which the browser loads in the order they appear on the page, giving up if a file takes too long. This works great on the regular Internet, but on Freenet it causes the images near the bottom of the page to be accessed very rarely, and thus they fall off the network. Also, on sites containing multiple pages, the pages after the index are likely to be accessed far less often and thus fall off. To combat this, various precaching methods have been suggested. Precaching is the solution to the problem; I'll get back to it later.
There is also the issue of parallelism. On the regular Internet, it's desirable to open only one or two connections per server and download files serially (one after another), to reduce open connections and thus server load. On Freenet, on the other hand, it's desirable to request many files in parallel (at the same time), since most of the time is spent waiting for data to be found in the network, during which the local Fred basically sits idle (AFAIK), and there is no central server to overload. Browsers, having been designed for the normal Internet, typically allow only 2 connections per server, and because all connections go to FProxy at localhost, they can only request two files at once. This hurts Freenet's performance _a_lot_. As a solution, it has been suggested that users reconfigure their browser to allow more connections; this, however, makes the browser behave very badly on the regular Internet (and is difficult to do besides).
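To illustrate why parallel requests matter so much here, consider this toy Python sketch (Fred itself is Java; the `fetch` function below is a made-up stand-in for a Freenet key request, not a real API, and the 0.2-second latency is an arbitrary assumption). When waiting dominates, N overlapped requests finish in roughly the time of one:

```python
import time
from concurrent.futures import ThreadPoolExecutor

# Hypothetical stand-in for a Freenet key fetch: latency-dominated,
# so the local node mostly waits rather than computes.
def fetch(key, latency=0.2):
    time.sleep(latency)  # simulate time spent searching the network
    return f"data-for-{key}"

keys = [f"CHK@file{i}" for i in range(8)]

# Serial: total time ~ 8 * latency (what a 2-connection browser approaches).
start = time.time()
serial = [fetch(k) for k in keys]
serial_time = time.time() - start

# Parallel: total time ~ 1 * latency, since the waits overlap.
start = time.time()
with ThreadPoolExecutor(max_workers=8) as pool:
    parallel = list(pool.map(fetch, keys))
parallel_time = time.time() - start

print(f"serial:   {serial_time:.2f}s")
print(f"parallel: {parallel_time:.2f}s")
```

Since there is no shared server to overload, the usual argument against opening many connections simply doesn't apply.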
Fortunately, there is a (relatively) simple solution to all these problems. When FProxy retrieves an HTML file, it is passed through an anonymity filter. This filter tries to find all content that could make the browser compromise the user's anonymity, mainly by blocking any non-Freenet links and preventing the loading of images from the regular Internet. I suggest that, after this phase, we start downloading everything the page links to in the background, in random order. For example, if the page has three hundred img tags and two href links, we request them all in random order (the number of parallel requests could be adjusted based on current node load, and no, we don't follow the links in any of the pages we download). This performs the same function as the ZIP files (binds the data together) with none of the problems, and it requires no changes to insertion tools. It would immediately increase the performance of every existing Freesite with no other changes necessary. Because the request order is random, the pictures at the bottom are more likely to be available (unlike now, when they never load). Because every HTML page linked from the main page is also precached, those pages are much more likely to be available. While the pictures on those pages aren't cached, at least the page itself is viewable (and, of course, if it uses the same decorative graphics as the main page, those are already precached). And it overcomes the browser's parallel request limit.
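A minimal sketch of the link-gathering step, in Python for brevity (Fred is Java, and a real implementation would reuse the anonymity filter's own parse rather than re-parsing the page; `LinkCollector` and `precache_order` are hypothetical names):

```python
import random
from html.parser import HTMLParser

# Collect every src/href URI from an already-filtered page.  We do NOT
# parse the documents we fetch later, so links are followed only one
# level deep from the page the user is actually viewing.
class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.uris = []

    def handle_starttag(self, tag, attrs):
        for name, value in attrs:
            if name in ("src", "href") and value:
                self.uris.append(value)

def precache_order(filtered_html, rng=random):
    collector = LinkCollector()
    collector.feed(filtered_html)
    uris = list(dict.fromkeys(collector.uris))  # de-duplicate, keep first
    # Random order: images at the bottom of the page get the same chance
    # of being requested (and thus kept alive) as those at the top.
    rng.shuffle(uris)
    return uris

page = '<a href="page2.html">next</a><img src="top.png"><img src="bottom.png">'
print(precache_order(page))
```

The shuffle is the whole point: a fixed top-to-bottom order would just reproduce the existing bias that causes bitrot.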
Someone might wonder what happens if a 600-meg file is linked from the main page. Do we download it? The answer is no. Freenet can only store files up to 2 megs. Any file bigger than this is split into parts, which are inserted separately. The CHK keys and the order of the pieces are recorded in a text file, which is then inserted as well. Since FProxy precaching only requests files (and discards them upon retrieval), without processing them, we only download the text file, not the split parts themselves.
Technical note: if the user moves from page to page often, we get new URIs to precache faster than we can retrieve them. Because of this, we need an upper limit on the number of waiting-to-be-cached files, and, if more arrive, we remove randomly chosen waiting entries. We _must_not_ write this list to disk (security hazard).
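The bounded waiting list with random eviction might look like this (again an illustrative Python sketch with made-up names; the list lives purely in memory, per the note above):

```python
import random

# In-memory only: the pending list is never written to disk.
class PrecacheQueue:
    def __init__(self, limit, rng=random):
        self.limit = limit
        self.rng = rng
        self.pending = []

    def add(self, uri):
        if uri in self.pending:
            return
        if len(self.pending) >= self.limit:
            # Over the cap: evict a randomly chosen waiting entry
            # rather than the oldest, so no position is privileged.
            victim = self.rng.randrange(len(self.pending))
            self.pending.pop(victim)
        self.pending.append(uri)

    def next(self):
        return self.pending.pop(0) if self.pending else None

q = PrecacheQueue(limit=3)
for i in range(10):
    q.add(f"CHK@file{i}")
print(len(q.pending))  # never exceeds the limit
```

Random eviction also means a burst of new pages can't deterministically flush out everything queued from the page the user is still reading.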
I'm posting this on the "freenet" board on Frost, in the hope that it reaches the developers. I'm currently unable to use email; and, after posting this, I can't send it via email without giving up my anonymity. I guess this is a test case of whether Freenet can be used to communicate something more useful than porn and trolling...
If this is a bad idea, then please tell me why. ------- End forwarded message ---------------
_______________________________________________ devl mailing list [EMAIL PROTECTED] http://hawk.freenetproject.org:8080/cgi-bin/mailman/listinfo/devl
