I am somewhat concerned that the datastore data size limit of 1/200th of
the total datastore size is less than an optimal solution.

The current behavior is that if someone decides to have a datastore of
less than 200MB (or is it 256?), then even 1MB chunks of data won't be
cached on that node, although the user will still be able to download
such data.
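
To make that rule concrete, here's a rough sketch in Java of what the
admission check amounts to (the class and method names are made up; this
isn't the actual node code, just the arithmetic as I understand it):

// Rough sketch of the current 1/200th admission rule; hypothetical names,
// not the real datastore code.
public final class RatioAdmission {

    private static final int SIZE_DIVISOR = 200; // the 1/200th ratio

    /** Would a key of dataSize bytes be cached in a store of storeSize bytes? */
    public static boolean willCache(long dataSize, long storeSize) {
        return dataSize <= storeSize / SIZE_DIVISOR;
    }

    public static void main(String[] args) {
        long MB = 1024L * 1024L;
        System.out.println(willCache(1 * MB, 200 * MB));    // true:  a 200MB store caches up to 1MB
        System.out.println(willCache(1 * MB, 50 * MB));     // false: a 50MB store refuses it
        System.out.println(willCache(250 * 1024, 50 * MB)); // true:  but it keeps smaller keys
    }
}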

Let's think about that: any user who initially decides to give Freenet
less than 200MB will automatically start leeching larger chunks of data
without storing them locally.  Someone who opted (as I have) to devote
50MB to Freenet will not cache anything larger than about 250k, even
though I will still be able to get such data from other users in the
network who do.

I see this as a problem: it doesn't really disadvantage those who opt
for smaller datastores, yet it ensures that the network as a whole is
significantly disadvantaged by such users.

This can't be the only way to do this.  I dislike choosing arbitrary
limits where they can be avoided, but if we must have one, I think
setting a fixed maximum size on the data a datastore will cache (say
1MB) would be better than setting a variable limit based on the somewhat
arbitrary 1/200th ratio that we currently employ.
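
By comparison, the fixed cap I'm suggesting reduces to something like
this (again only a sketch, and the 1MB figure is just an example):

// Rough sketch of the proposed fixed cap; hypothetical names, and the
// 1MB figure is only an example.
public final class FixedCapAdmission {

    private static final long MAX_CACHEABLE_BYTES = 1024L * 1024L; // e.g. 1MB

    /** Cache anything up to the fixed cap, regardless of the datastore's size. */
    public static boolean willCache(long dataSize) {
        return dataSize <= MAX_CACHEABLE_BYTES;
    }
}

The point is that a node's willingness to cache a given key would no
longer depend on how much disk its operator happened to donate.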

Comments?

Ian.

-- 
Ian Clarke                [EMAIL PROTECTED]|locut.us|cematics.com]
Latest Project                                          http://locut.us/
Personal Homepage                                   http://locut.us/ian/

