> 
>>This idea may be rather unpopular, but too bad... :))
>>
>>--------
>>Rateless codes are like FEC except that they support an infinite number 
>>of check blocks, and the file becomes a stream.  A simple implementation 
>>is just to wrap the check blocks from FEC in a repeating sequence and 
>>insert them in a channel like Frost boards, etc.  More sophisticated 
>>implementations don't exist AFAIK; the theory is at
>>
>>http://www.scs.cs.nyu.edu/~petar/oncodes.pdf
>>http://www.scs.cs.nyu.edu/~petar/msdtalk.pdf
>>
>>and if you have postscript,
>>http://www.scs.cs.nyu.edu/~petar/msd.ps
>>
>>------------
>>
>>I believe this will provide better performance for large files than the 
>>current FEC algorithm, as it will ensure that as long as someone is 
>>inserting the file stream it will be retrievable and reconstructible. 
>>Of course mass adoption may have disastrous effects on the network (but 
>>then again the same has been said for the polling mechanism Frost uses). 
>>It has different uses than standard FEC, most notably a large rare file 
>>that is not going to be very popular.  When the file stream stops being 
>>requested the inserter will effectively become a DDoS-er, but Freenet 
>>can handle that ;)  It may be integrated with the Frost feedback 
>>mechanism (i.e. turn the source off when x successful downloads complete).
> 
> 
> Why is it better to insert an infinite number of different blocks than
> to insert the original N blocks? If they cease to be reachable, reinsert
> them with skipDS, or pull them through with skipDS, or pull them from
> another node with skipDS. I don't see the advantage.

1. Reachability can be measured only from those nodes to which the 
inserter has FCP access; with properly functioning routing, the blocks 
should be reachable from the inserting node most of the time even with 
skipDS.
2. Pulling them after some have ceased to be reachable will only 
refresh them on the path from the inserting node(s) to wherever the 
blocks were stored.

This mechanism is intended for large files that are not likely to be 
popular, and it ensures that even a topologically very distant node can 
reconstruct the entire file.  As long as the network is not totally 
disjoint, any node will be able to get the entire file, precisely 
because the number of chunks is infinite and they're likely to get 
routed evenly across the network.
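
Roughly, a check block is just an XOR of a few randomly chosen source 
blocks, and the inserter can keep minting fresh ones for as long as it 
wants to seed.  Here is a minimal sketch of that inserter loop in Java; 
the degree distribution and the insertBlock() call are placeholders (a 
real online code uses the distribution from the paper, and the real 
thing would be a ClientPut over FCP), so treat it as an illustration 
rather than the actual client:

    import java.util.Random;

    /**
     * Minimal sketch of a rateless (online-code style) check block
     * generator.  insertBlock() is a placeholder, not a real
     * Freenet/FCP call, and the uniform degree choice is only for
     * brevity.
     */
    public class RatelessInserter {
        private final byte[][] sourceBlocks;  // the file split into k equal blocks
        private final int blockSize;

        public RatelessInserter(byte[][] sourceBlocks, int blockSize) {
            this.sourceBlocks = sourceBlocks;
            this.blockSize = blockSize;
        }

        /** Build one check block from a seed; the downloader re-runs the
         *  same PRNG with the seed to learn which source blocks were XORed. */
        public byte[] checkBlock(long seed) {
            Random rng = new Random(seed);
            int degree = 1 + rng.nextInt(sourceBlocks.length); // placeholder distribution
            byte[] out = new byte[blockSize];
            for (int i = 0; i < degree; i++) {
                byte[] src = sourceBlocks[rng.nextInt(sourceBlocks.length)];
                for (int j = 0; j < blockSize; j++) out[j] ^= src[j];
            }
            return out;
        }

        /** Keep inserting fresh check blocks for as long as we want to seed. */
        public void seed(long howMany) {
            for (long seed = 0; seed < howMany; seed++) {
                insertBlock(seed, checkBlock(seed));
            }
        }

        private void insertBlock(long seed, byte[] block) {
            // placeholder for inserting the (seed, block) pair as a CHK
        }
    }

Since every seed produces a different combination of source blocks, each 
inserted block is new data and gets its own CHK, which is what turns the 
file into a stream instead of a fixed set of N keys.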

If a finite number of chunks is inserted, and then re-inserted with the 
same CHKs, then if the requesting node cannot reach some of the 
necessary chunks due to a crappy routing table it won't ever be able to 
reach them unless the topology of the nodes around the inserter changes 
(but with good routing we expect the same key to go to a similar place 
no matter how dynamic the network is).

Having an infinite number of chunks ends up being a broadcast of sorts; 
however, Freenet is designed to handle exactly this kind of DDoS.  As 
long as the requesting node has a chance greater than 0 of getting a 
single chunk from the file stream, it's guaranteed to eventually get the 
entire file as long as it is being "seeded".  (infinity * anything over 
0 = infinity)
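
To put a rough number on that handwave: with an online code a downloader 
needs about k*(1+eps) distinct check blocks to decode, so for any 
per-request success probability p > 0 the expected number of requests is 
finite.  The figures below are made up for illustration, not measured 
Freenet numbers:

    /** Back-of-the-envelope check of the "infinity * p > 0" claim.
     *  k, eps and p are assumed values, not measurements. */
    public class SeedingMath {
        public static void main(String[] args) {
            int k = 2048;        // source blocks in the file (assumed)
            double eps = 0.03;   // online-code decoding overhead (assumed)
            double p = 0.05;     // assumed chance a single chunk request succeeds
            double needed = k * (1 + eps);
            System.out.printf("expected requests ~ %.0f%n", needed / p);
            // probability of no success at all after n tries: (1-p)^n -> 0
            int n = 200;
            System.out.printf("P(no success in %d tries) = %.2e%n", n, Math.pow(1 - p, n));
        }
    }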

> 
>>As soon as I hear back from the author of the paper I will start 
>>implementing a client which does that.  Forward all hatemail to 
>>/dev/null :-pp
> 
> 
> Hehe, we haven't got to that stage yet, first we have to argue about
> it, then it degenerates into hatemail. :)

And what happens after it's implemented and totally screws everybody's 
freesite browsing experience?  Death threats? ;))))
