> Failed requests are slow for the user, and costly for the system (5-10
> times that of a successful one). 

Then the requests that succeed early are used.  With simultaneous requesting, 
if two or more of the requested versions actually exist, one of them is 
probabilistically likely to return quicker than the average single request, 
although the node should wait a short while for a few more requests to come 
through.
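
For illustration, here is a minimal sketch of that simultaneous-request idea 
in Python.  The fetch_version() helper and the grace-period value are my own 
placeholders, not anything in the actual Freenet code:

import concurrent.futures
import time

def fetch_version(key, n):
    """Placeholder for a real Freenet request for version n of key;
    assumed to return the data on a hit and raise KeyError on a miss."""
    raise KeyError(n)

def fetch_latest(key, candidates, grace=2.0):
    """Request several candidate versions at once; after the first hit,
    wait a short grace period so slower (possibly newer) hits can land."""
    best = None       # (version, data) of the newest hit so far
    deadline = None   # set once the first hit arrives
    with concurrent.futures.ThreadPoolExecutor(len(candidates)) as pool:
        futures = {pool.submit(fetch_version, key, n): n for n in candidates}
        pending = set(futures)
        while pending:
            timeout = (None if deadline is None
                       else max(0.0, deadline - time.monotonic()))
            done, pending = concurrent.futures.wait(
                pending, timeout=timeout,
                return_when=concurrent.futures.FIRST_COMPLETED)
            if not done:   # grace period expired with no new results
                break
            for fut in done:
                n = futures[fut]
                try:
                    data = fut.result()
                except KeyError:   # a "miss": that version was never inserted
                    continue
                if best is None or n > best[0]:
                    best = (n, data)
            if best is not None and deadline is None:
                deadline = time.monotonic() + grace
    return best   # None if every candidate missed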

One thing I'm not sure of is whether a failed request for a key harms 
routing for that key if it is eventually inserted.  If that is the case, 
then someone should submit it as a bug :-)

> We have enough failed requests going
> around as it is, we do not need people using any more hit and miss type
> strategies - even if the number of misses is only close to logarithmic.

With what I've come up with so far, this can be made almost constant instead 
of logarithmic.  Exactly 3 to 4 requests are needed if the insert behavior is 
the same as the current DBR system - this is the best-case scenario (I know 
this is not as good as the current DBR's 1-request lookup, but the features 
would bring more publishers to Freenet).  The typical case would probably be 
about 2-10 returned requests plus 2-5 "misses," depending on how irregular 
the insert interval is.  The worst-case scenario would only happen if a user 
who has only inserted once before inserts 127 versions of the same file in 
the same 2^-31 second interval.  That would result in 127 hits and 128 
misses, which is the absolute maximum (unless you count rollovers into a new 
epoch).  This is highly atypical, and can even be optimized for, so we don't 
need to worry about it.
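
To make the "almost constant" claim concrete, here is one standard way to 
realize a hit-and-miss probe with these characteristics: gallop forward from 
the last version you know exists, then binary-search the gap.  This is only 
an illustration of the technique class under my own assumptions (consecutive 
version numbers, and the same placeholder fetch_version() helper from the 
sketch above); it isn't necessarily the exact probing scheme:

def find_newest(key, last_known=0):
    """Return the highest existing version of key, plus the hit/miss
    counts, assuming versions are inserted consecutively and that
    last_known itself exists."""
    hits = misses = 0

    def exists(n):
        nonlocal hits, misses
        try:
            fetch_version(key, n)
            hits += 1
            return True
        except KeyError:
            misses += 1
            return False

    # Gallop: double the step until a probe misses.
    step, newest = 1, last_known
    while exists(newest + step):
        newest += step
        step *= 2
    low, high = newest, newest + step   # newest version lies in [low, high)

    # Binary-search the gap between the last hit and the first miss.
    while high - low > 1:
        mid = (low + high) // 2
        if exists(mid):
            low = mid
        else:
            high = mid
    return low, hits, misses

When the publisher has only inserted a few versions since last_known, the 
gallop stops after a couple of steps, so the totals stay in the small 
hit-plus-miss ranges quoted above.
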
There are MANY pattern-analysis optimizations that could be applied to get 
requests down to the best case, but I'll leave those for later.

Scott Young

