On Mon, Jun 16, 2003 at 11:40:58AM -0500, Tom Kaitchuck wrote:
> On Monday 16 June 2003 07:37 am, Toad wrote:
> > No, that information is not available to routing. If large chunks are a
> > problem we should impose a limit - we do not want only a few nodes to
> > cache a really large chunk.
> 
> Yeah, I agree with you there. But would it be possible to do the following:
> Suppose larger chunks are accessed less frequently in general. (I'd like to 
> see some hard evidence of that, but bear with me...) Then when a node receives 
> an insert request for a piece of data, it looks at its routing table and 
> finds a few nodes that it thinks are fairly good candidates for the data. 
> Then it picks one of them, but in its decision it weights its perception of 
> that node's available bandwidth. If it thinks that a node has a slow upload 
> rate and it is a large datafile, then that node is more likely to be 
> forwarded the data. This is because we are assuming large data is less 
> popular, so it wouldn't need to upload as often. (I.e.: a node with 1000 1KB 
> chunks in its data store, uploaded an average of 10 times each, generates a 
> LOT more traffic than a node with 1 1MB chunk in its data store that is 
> uploaded 9 times.)
> 
> The number of nodes that have a particular piece of data is the same. The 
> large pieces of data from split files are more frequently on slow 
> connections. But this does not harm the overall download rate as the 
> requester can spawn more threads. It also means that the low bandwidth users 
> would on average receive fewer requests but still use all of their available 
> store space.
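The weighting scheme proposed above could be sketched roughly as follows. This is only an illustration of the idea, not Freenet code; the node records, field names, and the size threshold are all made up for the example.

```python
import random

# Hypothetical candidate records from the routing table; "upload_kbps"
# stands in for the node's perceived upload bandwidth.
candidates = [
    {"name": "A", "upload_kbps": 256},
    {"name": "B", "upload_kbps": 32},
    {"name": "C", "upload_kbps": 8},
]

def pick_store_node(candidates, data_size_bytes, large_threshold=256 * 1024):
    """Pick one candidate to forward an insert to.

    For large data, weight the choice toward slow uploaders, on the
    assumption that large chunks are requested less often; small,
    popular data goes preferentially to fast uploaders.
    """
    if data_size_bytes >= large_threshold:
        # Slower upload rate => higher selection weight.
        weights = [1.0 / n["upload_kbps"] for n in candidates]
    else:
        # Faster upload rate => higher selection weight.
        weights = [float(n["upload_kbps"]) for n in candidates]
    return random.choices(candidates, weights=weights, k=1)[0]
```

The choice stays probabilistic rather than always picking the slowest node, so no single slow node soaks up every large insert.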

We will soon implement a new routing algorithm which decides which node
to route to based on how fast it expects it to be for that particular
key. This should make more efficient use of nodes with different
bandwidths and store sizes. The issue with caching is that we have to
ensure there is space for caching files being transferred, so nodes will
not cache files larger than approximately 1/100th of the datastore size.
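The caching rule described here amounts to a simple size check. A minimal sketch, assuming the 1/100th figure from above (the function name and exact fraction are illustrative, not the actual implementation):

```python
def should_cache(file_size_bytes, datastore_size_bytes, fraction=0.01):
    """Cache a file only if it is no larger than ~1/100th of the
    datastore, so there is always room to cache files in transit."""
    return file_size_bytes <= datastore_size_bytes * fraction
```

For example, a node with a 100MB datastore would cache a 1MB file but refuse a 2MB one.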
> 
> Is there any reason that we couldn't/shouldn't make the datasize available to 
> the requesting node?

We would have to change the key spec... we could do it if we had a
_really_ good reason.

-- 
Matthew J Toseland - [EMAIL PROTECTED]
Freenet Project Official Codemonkey - http://freenetproject.org/
GPG key lost in last few weeks, new key on keyservers
ICTHUS - Nothing is impossible. Our Boss says so.
