On Thu, Sep 01, 2005 at 10:17:51AM +0200, Alex R. Mosteo wrote:
> junk at giantblob.com wrote:
> >What about this for a glorious kludge?
> >
> > large file downloads from fproxy actually trigger the 'download' of a
> >modest size dummy file, the progress of which is matched to the progress
> >of the actual split-file download - this provides the users' standard
> >browser UI/feedback for a download
> >
> > when the download is 99% complete, fproxy redirects the browser to a
> >link to the actual download, which then completes instantly
>
> Seen that this idea hasn't been well received, I suppose mine will
> neither, but I must try ;)
>
> Given that all the downloading/assembling is done inside the node, you
> just give to the browser a standard download connection[*], and make
> available the data that is available contiguous from the start of the
> file. This surely means that the download will go in jumps when holes
> are filled, and it may seem stalled for some time, but at least you have
> a regular download whichever the browser.
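For concreteness, I read the proposal in [*] as roughly the loop below.
ContiguousSource and every other name in it are made up purely to
illustrate the idea - this is not existing fproxy or node code, just a
sketch of "reply with a normal HTTP response and push bytes as they
become available contiguously from the start of the file":

    import java.io.IOException;
    import java.io.OutputStream;

    // Hypothetical view of the node's reassembly buffer for one file.
    interface ContiguousSource {
        long totalLength();                // final size of the file
        long contiguousBytesAvailable();   // bytes assembled starting at offset 0
        int read(long offset, byte[] buf, int len) throws IOException;
    }

    class SequentialDownloadWriter {
        // Copy the file to the client, waiting whenever there is no new
        // contiguous data. Content-Length would be totalLength(), so the
        // browser shows its normal progress bar.
        static void serve(ContiguousSource src, OutputStream out)
                throws IOException, InterruptedException {
            byte[] buf = new byte[32 * 1024];
            long sent = 0;
            while (sent < src.totalLength()) {
                long avail = src.contiguousBytesAvailable();
                if (avail <= sent) {
                    Thread.sleep(1000);    // looks "stalled" to the browser
                    continue;
                }
                int n = (int) Math.min(buf.length, avail - sent);
                n = src.read(sent, buf, n);
                out.write(buf, 0, n);
                out.flush();
                sent += n;
            }
        }
    }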
You do get a regular download; however, the user will assume it is
stalled and cancel it, since it won't return any data at all for
minutes to hours on end.

> The API for 3rd party apps allows for more detailed download
> managers/multi-node downloads for those entrepreneur developers.
>
> I really think that, even if some quantity of rounding squares is
> needed, providing a standard download is in the benefit of freenet.
> However, a "download feedback page" is too a deviation from standard
> behavior, and I agree that it should not be repeated.
>
> Giving a standard download connection would allow too for cancelling
> downloads in the usual way.

And they would *always* be cancelled. The ONLY way to make this work
would be to download the data in an order that actually produces
contiguous data from the start of the file - i.e. choose the random
blocks first, then download the data in sequence. The problem with this
is that then everyone will be able to get the first few megs of the
file, but later parts will fall out. We need all parts of the file to
drop out of the cache at the same rate. That means fetching in random
order. That means we could well have most of the download before we
have much sequential data - especially if we use the onion codec. It's
fine for small files, where we can fetch every block at the same time.
But it's not fine for large files.

> Depending on the circumstances, I could even have some time to implement
> this if feasible and nobody is interested in future months.
>
> [*] I just mean answering to the request with a standard HTTP reply, and
> provide the sequential data when it's available.
>
> >-- jeek

-- 
Matthew J Toseland - toad at amphibian.dyndns.org
Freenet Project Official Codemonkey - http://freenetproject.org/
ICTHUS - Nothing is impossible. Our Boss says so.
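P.S. To put a rough number on "most of the download before we have much
sequential data": if a splitfile has n blocks and we have fetched k of
them in uniformly random order, the expected contiguous run from the
start of the file is only about k/(n-k+1) blocks. The throwaway
simulation below (standalone, with made-up sizes, nothing to do with the
real splitfile code) shows this for a 32MB file in 1024 blocks of 32KB:

    import java.util.Random;

    // Illustration only: how much *sequential* data a random-order fetch
    // of k out of n blocks typically yields. Sizes are invented.
    public class RandomOrderPrefix {
        public static void main(String[] args) {
            final int n = 1024;          // 32KB blocks => 32MB file
            final int trials = 1000;
            Random rng = new Random();
            int[] percents = { 50, 75, 90, 99 };
            for (int percent : percents) {
                int k = n * percent / 100;
                long prefixSum = 0;
                for (int t = 0; t < trials; t++) {
                    // pick k distinct blocks uniformly at random (partial Fisher-Yates)
                    int[] order = new int[n];
                    for (int i = 0; i < n; i++) order[i] = i;
                    boolean[] have = new boolean[n];
                    for (int i = 0; i < k; i++) {
                        int j = i + rng.nextInt(n - i);
                        int tmp = order[i]; order[i] = order[j]; order[j] = tmp;
                        have[order[i]] = true;
                    }
                    int prefix = 0;
                    while (prefix < n && have[prefix]) prefix++;
                    prefixSum += prefix;
                }
                double avg = prefixSum / (double) trials;
                System.out.printf("%2d%% of blocks fetched -> ~%.1f contiguous blocks (~%.0f KB of 32768 KB)%n",
                        percent, avg, avg * 32);
            }
        }
    }

Even at 90% of blocks fetched that is typically well under 1% of the
file, which is why a plain sequential HTTP response would sit apparently
dead for almost the whole download.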
