Matthew Toseland wrote:
> On Thu, Sep 01, 2005 at 10:17:51AM +0200, Alex R. Mosteo wrote:

>>Given that all the downloading/assembling is done inside the node, you 
>>just give to the browser a standard download connection[*], and make 
>>available the data that is available contiguous from the start of the 
>>file. This surely means that the download will go in jumps when holes 
>>are filled, and it may seem stalled for some time, but at least you have 
>>a regular download whichever the browser.
> 
> You do; however, the user will assume it is stalled and cancel it,
> since it won't return any data at all for minutes or even hours on end.

Well, it's a matter of advertising that Freenet downloads can appear 
stalled for long periods, just as new nodes are already expected to 
take a long time to integrate into the network.

But I wonder if there are browser-side timeouts.
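To make the idea concrete, here is a sketch of how the node side of such a standard download connection could work: collect blocks as they arrive in arbitrary order, and let the HTTP handler block until the next sequential piece exists. This is an illustration under assumed names, not actual Freenet code.

```python
import threading

class ContiguousStream:
    """Collects blocks arriving in arbitrary order and hands out only
    the prefix that is contiguous from the start of the file.
    (Hypothetical sketch for this discussion, not Freenet code.)"""

    def __init__(self, num_blocks):
        self.blocks = [None] * num_blocks
        self.next_to_serve = 0               # first block not yet handed out
        self.cond = threading.Condition()

    def block_arrived(self, index, data):
        """Called by the downloader whenever any block completes."""
        with self.cond:
            self.blocks[index] = data
            self.cond.notify_all()           # wake the HTTP writer

    def read_next(self, timeout=None):
        """Block until the next sequential block exists, then return it.
        Returns None on timeout, so the HTTP handler can decide whether
        to keep waiting or give up (the browser-timeout problem above).
        Returns b"" once every block has been handed out."""
        with self.cond:
            if self.next_to_serve >= len(self.blocks):
                return b""                   # end of file
            while self.blocks[self.next_to_serve] is None:
                if not self.cond.wait(timeout):
                    return None
            data = self.blocks[self.next_to_serve]
            self.next_to_serve += 1
            return data
```

The browser-timeout worry maps onto the `timeout` parameter: if `read_next` keeps returning None for too long, the connection simply produces no bytes, which is exactly when browsers give up.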

>>The API for 3rd party apps allows for more detailed download 
>>managers/multi-node downloads for enterprising developers.
>>
>>I really think that, even if it takes some squaring of circles, 
>>providing a standard download is to Freenet's benefit. However, a 
>>"download feedback page" is also a deviation from standard behavior, 
>>and I agree that it should not be repeated.
>>
>>A standard download connection would also allow downloads to be 
>>cancelled in the usual way.
> 
> 
> And they would *always* be cancelled. The ONLY way to make this work
> would be to download the data in such a way that it works - i.e. to
> choose the random blocks first, then download the data. The problem with
> this is that then everyone will be able to get the first few megs of the
> file, but later parts will fall out. We need all parts of the file to
> drop out of the cache at the same rate. That means fetching in random
> order. That means we could well have most of the download before we have
> much sequential data - especially if we use the onion codec. It's fine
> for small files, where we can fetch every block at the same time. But
> it's not fine for large files.
> 
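The effect described above, that with random-order fetching almost the whole file can be present before much of it is sequentially available, is easy to demonstrate with a short simulation (a sketch for illustration only, not Freenet code; block counts are invented):

```python
import random

def contiguous_prefix_history(num_blocks, seed=None):
    """Fetch blocks in uniformly random order and record, after each
    arrival, how many blocks are contiguous from the start.  This is
    the amount a plain sequential HTTP stream could actually serve."""
    rng = random.Random(seed)
    order = list(range(num_blocks))
    rng.shuffle(order)                 # random fetch order
    have = [False] * num_blocks
    prefix = 0
    history = []
    for idx in order:
        have[idx] = True
        while prefix < num_blocks and have[prefix]:
            prefix += 1
        history.append(prefix)
    return history
```

With k of n blocks fetched uniformly at random, the expected contiguous prefix is k/(n-k+1) blocks, so even at 90% of a 1000-block file only about 9 blocks would typically be servable from the start: the stream would indeed look stalled for almost the whole download.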
>>Depending on the circumstances, I might even have time to implement 
>>this in the coming months, if it proves feasible and nobody else is 
>>interested.
>>
>>[*] I just mean answering the request with a standard HTTP reply and 
>>providing the sequential data as it becomes available.
>>
>>
>>>-- jeek
>>>_______________________________________________
>>>Devl mailing list
>>>Devl at freenetproject.org
>>>http://dodo.freenetproject.org/cgi-bin/mailman/listinfo/devl

