On Thu, Feb 27, 2003 at 08:39:59PM -0800, Ian Clarke wrote:
> I am somewhat concerned that the datastore data size limit of 1/200th of
> the total datastore size is less than an optimal solution.

I have changed it to 1/100th; it will be in CVS soon. I believe this is
adequate (in the absence of fproxy etc. temp files, which are not a
problem because the user can always enlarge the store or explicitly use
an external temp dir).
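To make the new rule concrete, here is a minimal sketch of the ratio described above; the function name and the exact rounding are illustrative, not taken from the Freenet source:

```python
# Sketch of the 1/100th rule: the largest file a node will cache
# is a fixed fraction of its total datastore size.
# (The previous ratio was 1/200th.)

def max_cacheable_size(store_size_bytes: int) -> int:
    """Largest single file (in bytes) this datastore will cache,
    under the 1/100th ratio (illustrative floor division)."""
    return store_size_bytes // 100

# With the 256MB default store, files up to ~2.56MB are cacheable;
# a 50MB store caches files only up to 512kB ("about 500k" below).
print(max_cacheable_size(256 * 1024 * 1024))
print(max_cacheable_size(50 * 1024 * 1024))
```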
> 
> Current behavior is that if someone decides to have a datastore of less 
> than 200MB (or is it 256?), then even 1MB chunks of data won't be cached 
> on that node, although the user will still be able to download such 
> data.
The default is 256MB. The strict minimum would be about 201MB, assuming
files of 1MB never have more than 5kB of fields plus metadata (at the
old 1/200th ratio, 200 x 1.005MB = 201MB).
> 
> Let's think about that, any user who initially decides to give Freenet
> less than 200MB will automatically start leeching larger chunks of data
> without storing them locally.  Someone who opted (as I do) to devote
> 50MB to Freenet will not cache anything larger than about 500k, even
> though I will be able to get such data from other users in the network
> who do.
> 
> I see this as a problem, it doesn't really disadvantage those who opt
> for smaller datastores yet it ensures that the network as a whole is
> significantly disadvantaged by such users.  
Anything that harms your node's ability to route requests and/or cache
data harms your node: nodes that are well-connected and get lots of
requests tend to learn how to find stuff quicker. Witness the disparity
between a transient and a permanent node.
> 
> This can't be the only way to do this.  I dislike chosing arbitrary 
> limits where it can be avoided, but if we must have one, I think setting 
> a fixed maximum size on data which a datastore will cache (say 1MB) 
> would be better than setting a variable limit based on the somewhat 
> arbitrary 1/200th ratio that we currently employ.
I have set it to 1/100th. A separate issue is whether we should have a
hard-coded upper key size limit of 1MB. If you have a 50MB store, then
with the default settings it is _possible_ to have 48 open connections,
each of them transferring a unique file in each direction. With a
maximum file size of 1MB, that would require nearly 100MB of disk space
(48 x 2 x 1MB = 96MB). So either Freenet will have to grab more space,
which may not be available and in any case is deeply user-hostile (or
unmanageable, or whatever you want to call it), or it will have to
reject connections, which is made harder by the fact that we do not know
how big a key is when deciding whether or not to accept the query.
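A quick back-of-envelope check of that worst case (the variable names are mine, purely for illustration):

```python
# Worst case sketched above: 48 open connections, each transferring a
# distinct file in each direction, with a 1MB upper key size limit.
MAX_KEY_SIZE_MB = 1
connections = 48
directions = 2  # a unique file flowing each way per connection

worst_case_mb = connections * directions * MAX_KEY_SIZE_MB
# Roughly 100MB of transfer buffer space, nearly double a 50MB store.
print(worst_case_mb)
```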
> 
> Comments?
> 
> Ian.
> 
> -- 
> Ian Clarke                [EMAIL PROTECTED]|locut.us|cematics.com]
> Latest Project                                          http://locut.us/
> Personal Homepage                                 http://locut.us/ian/



-- 
Matthew Toseland
[EMAIL PROTECTED]/[EMAIL PROTECTED]
Full time freenet hacker.
http://freenetproject.org/
Freenet Distribution Node (temporary) at 
http://80-192-4-23.cable.ubr09.na.blueyonder.co.uk:8889/fkdUH8mp7-g/
ICTHUS.
