On Monday 17 November 2003 06:56 pm, Toad wrote:
> On Mon, Nov 17, 2003 at 02:47:59PM -0500, Ken Corson wrote:
> > Martin Stone Davis wrote:
> > >Martin Stone Davis wrote:
> > >>We start at the top of the list, and see who is going to make our will
> > >>the fastest.  Since our lawyer is "backed off" at the moment, we go
> > >>with our chef.
> >
> > important: "at the moment" . How big do we consider this "moment" to
> > be ? 100ms , 5 seconds, a singular point in time ? hmmm....
> >
> > >>The solution is to look ahead in our list until we find a nice
> > >>query/node combination.  This could be done by sampling a certain
> > >>number of queries from the ticker randomly.  Then, for each one
> > >>sampled, and for each node in the RT, calculate estimate().  Pick the
> > >>*combination* with the smallest value.
> >
> > This is a form of queueing, even though we use a 'bin' instead of
> > a formal 'queue.' I like it muchly, however, the timeout 'period'
> > on the requesting end needs to be considered / adjusted. What is
> > the upper limit of loiter time on the requestee ? This clearly
> > introduces some latency along the query path, at each hop. Which
> > probably has the payoff of vastly better query throughput in the
> > freenet network. But increases wait time for someone sitting at
> > a browser. Which is more important for this project ? Only Ian
> > could tell us - it is (originally) his design, after all. It
> > seems that people (would like to) use this network for file
> > distribution (other than HTML files) ... The obvious difference
> > here is that we would be shifting from a "forward instantaneously"
> > strategy to a more efficient queueing model. Plus we are making
> > this tradeoff in order to preserve our "route to the best node
> > with best effort" concept, rather than trading off best routing
> > for best speed.
>
> What you are suggesting will result in fast queries getting faster, and
> slow queries timing out - but producing considerable CPU load in the
> process. Is this a good thing?

No, a better way to do it would be to process each request as it comes in, and 
still consider the nodes that are backed off, but add to their estimate the 
length of time remaining until the backoff expires. That way the routing code 
can still decide whether a backed-off node is worth waiting for. (Of course, 
while requests are waiting we would need to put them in another queue, so they 
don't consume threads.)
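
Something like the following is what I have in mind -- just a sketch, and 
NodeRef / estimate() / backoffExpiryTime() are stand-in names for this example, 
not the real routing-table interfaces:

    import java.util.List;

    public class BackoffAwareRouter {

        /** Hypothetical view of a routing-table entry. */
        public interface NodeRef {
            /** Estimated time (ms) for this node to serve the key. */
            long estimate(byte[] key);
            /** Absolute time (ms) when its backoff ends; <= now if not backed off. */
            long backoffExpiryTime();
        }

        /**
         * Pick the node with the lowest adjusted estimate.  A backed-off node
         * is not skipped outright; the time left on its backoff is added to
         * its estimate, so it is only chosen if it is still expected to be
         * faster than every node that is available right now.
         */
        public static NodeRef route(List<NodeRef> routingTable, byte[] key) {
            long now = System.currentTimeMillis();
            NodeRef best = null;
            long bestCost = Long.MAX_VALUE;
            for (NodeRef node : routingTable) {
                long waitForBackoff = Math.max(0, node.backoffExpiryTime() - now);
                long cost = node.estimate(key) + waitForBackoff;
                if (cost < bestCost) {
                    bestCost = cost;
                    best = node;
                }
            }
            return best; // null only if the routing table is empty
        }
    }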

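For the parking part, a delay-queue structure would let a single dispatcher 
thread wake requests when their chosen node's backoff expires, instead of each 
waiting request holding a thread. Again only a sketch with made-up names 
(PendingRequest, park, takeReady), built on java.util.concurrent.DelayQueue:

    import java.util.concurrent.DelayQueue;
    import java.util.concurrent.Delayed;
    import java.util.concurrent.TimeUnit;

    public class ParkedRequestQueue {

        /** A request waiting for its chosen node's backoff to expire. */
        public static class PendingRequest implements Delayed {
            final Object request;        // whatever the node passes around internally
            final long readyAtMillis;    // when the chosen node's backoff ends

            PendingRequest(Object request, long readyAtMillis) {
                this.request = request;
                this.readyAtMillis = readyAtMillis;
            }

            public long getDelay(TimeUnit unit) {
                return unit.convert(readyAtMillis - System.currentTimeMillis(),
                                    TimeUnit.MILLISECONDS);
            }

            public int compareTo(Delayed other) {
                return Long.compare(getDelay(TimeUnit.MILLISECONDS),
                                    other.getDelay(TimeUnit.MILLISECONDS));
            }
        }

        private final DelayQueue<PendingRequest> parked =
            new DelayQueue<PendingRequest>();

        /** Called by the routing code instead of blocking the worker thread. */
        public void park(Object request, long readyAtMillis) {
            parked.put(new PendingRequest(request, readyAtMillis));
        }

        /** One dispatcher thread drains this; take() blocks until one is ready. */
        public PendingRequest takeReady() throws InterruptedException {
            return parked.take();
        }
    }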
