I have thought up a potential solution to the issue of how we give
small data priority in the datastore.  For those of you who don't
remember, the aim is to prevent one huge piece of data from displacing
loads of much smaller data in the datastore.  Think about it: a 50MB
piece of data would displace all other data in most nodes it passes
through, since 50MB is the current default datastore size.

Basically it goes like this: rather than inserting new or freshly
requested data right back at the top of the datastore, we place it in
a position such that the total size of the data above it is just above
the size of the data we are inserting (ie. if it were placed any
higher, the total size of the data above it would be lower than the
size of the data itself).  If the data is larger than the total size
of the data already in the datastore, and there is insufficient free
space in the datastore to accommodate it, then it won't be inserted.
I may be wrong, but I think this means that provided the data is
smaller than half the size-limit of the datastore, it will always be
inserted: either the data already stored totals at least its size, or
the free space must be more than half the limit and is therefore big
enough to hold it.
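
To make this concrete, here is a rough Java sketch of the policy as
described above.  All the names here (SizedItem, DataStore, insert)
are made up for illustration; this is not the node's actual datastore
code:

import java.util.ArrayList;
import java.util.List;

// Hypothetical item wrapper: just a key and a size in bytes.
class SizedItem {
    final String key;
    final long size;
    SizedItem(String key, long size) { this.key = key; this.size = size; }
}

class DataStore {
    private final long maxSize;   // total capacity in bytes
    private long usedSize = 0;    // bytes currently stored
    // Index 0 is the top of the store; eviction happens at the end.
    private final List<SizedItem> items = new ArrayList<>();

    DataStore(long maxSize) { this.maxSize = maxSize; }

    // Insert the item so that the total size of the data above it is
    // just above item.size.  Returns false if the item was rejected
    // (or was evicted again immediately, which amounts to the same).
    boolean insert(SizedItem item) {
        // Larger than everything stored, and no room for it: reject.
        if (item.size > usedSize && maxSize - usedSize < item.size)
            return false;

        // Walk down from the top, accumulating size, and stop at the
        // first position where the data above exceeds item.size.
        long sizeAbove = 0;
        int pos = 0;
        while (pos < items.size() && sizeAbove <= item.size) {
            sizeAbove += items.get(pos).size;
            pos++;
        }
        items.add(pos, item);
        usedSize += item.size;

        // Evict from the bottom until we are under the limit again.
        // In the corner case where the new item itself lands at the
        // bottom and the store overflows, it is evicted straight away.
        boolean survived = true;
        while (usedSize > maxSize) {
            SizedItem evicted = items.remove(items.size() - 1);
            usedSize -= evicted.size;
            if (evicted == item) survived = false;
        }
        return survived;
    }
}

For example, with a 100KB store holding items of 40KB, 30KB and 20KB
(top to bottom), a new 25KB item lands just below the 40KB one, and
the 20KB item at the bottom is evicted to make room.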

This has the nice property that it is computationally inexpensive
(finding the insertion point is just a linear walk down the store
index), and there are no scaling issues.  It should also be really
easy to implement.

If nobody can point out any flaws in this, or suggest a better way to
do it, then I see no reason why it shouldn't go into 0.3.

Ian.