On Thursday 18 October 2001 16:42, Oskar wrote:
> On Thu, Oct 18, 2001 at 04:42:43PM -0400, Gianni Johansson wrote:
> > On Thursday 18 October 2001 15:24, you wrote:
>
> < >
>
> > The fundamental problem with the current CP approach is that it doesn't
> > take time into account in the way it models contact reliability.  No
> > amount of tuning will fix this.
How is time taken into account in the current system?

The phenomenon being modeled is time dependent, and I can't figure out where 
time is factored in, even implicitly.
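To illustrate the distinction I mean, here is a minimal sketch contrasting a 
time-free success ratio with a recency-weighted estimate.  The class and 
method names, and the exponential-moving-average update, are invented for 
illustration -- this is not Fred's actual CP code:

```java
// Invented illustration: a plain success ratio (no notion of time)
// versus an exponentially decayed estimate where recent outcomes
// dominate.  Neither is the actual CP implementation.
public class ContactEstimator {

    // Time-free estimate: lifetime successes / attempts.  A failure
    // from a week ago weighs exactly as much as one from a minute ago.
    static double plainRatio(int successes, int attempts) {
        return attempts == 0 ? 0.5 : (double) successes / attempts;
    }

    // Time-aware estimate: each new outcome (success = 1.0,
    // failure = 0.0) gets weight alpha; old history decays away.
    static double decayed(double cp, boolean success, double alpha) {
        return (1 - alpha) * cp + alpha * (success ? 1.0 : 0.0);
    }

    public static void main(String[] args) {
        // A node that answered 100 requests and then failed the last 10.
        double cp = 1.0;
        for (int i = 0; i < 10; i++) {
            cp = decayed(cp, false, 0.1);
        }
        System.out.printf("time-free: %.3f%n", plainRatio(100, 110)); // 0.909
        System.out.printf("decayed:   %.3f%n", cp);                   // 0.349
    }
}
```

The point is just that the decayed figure reacts to the last few minutes of 
behavior, while the lifetime ratio barely moves.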

> >
> > A noderef that was responding to 100% of requests until 20 minutes ago
> > but has failed to respond to the last 10 requests is qualitatively
> > different from a noderef which has failed all 10 requests that were
> > made to it since the node was started a week ago.  The former is much
> > more likely to respond than the latter.
>
> The only reason that would have happened was if the former node got
> picked in the RoutingTable as many times in 20 minutes as the latter did
> in 7 days. 
Which is what should happen if CPs are doing their job.

>And since it continues to get picked 504 times as often, it
> will obviously be routed to more often even though the CP is the same.
>

? 

I don't follow your analysis.

The RoutingTable *depends* on the CP of the node refs. 

TreeRoutingTable.findRoutes
   -> TreeRoutingTable.RouteWalker.nextElement()
        ->  TreeRoutingTable.RouteWalker.step()

Take the case where there are only two noderefs: the one that always responded
until 20 minutes ago, and the one that has never responded.

Once the good node starts to fail, its CP will erode, causing more requests 
to get routed to the bad node and driving the bad node's CP down yet further 
as it continues to fail.
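A toy run of that scenario makes the erosion visible.  The greedy 
"route to the higher CP" rule, the multiplicative decay, and all the numbers 
here are my simplifications for illustration, not the real RouteWalker logic:

```java
// Toy model of the erosion: two noderefs, route each request to the
// one with the higher CP, and decay the chosen node's CP when it fails.
// The update rule and starting values are invented for illustration.
public class Erosion {

    static double[] erode(double cpGood, double cpBad, double alpha, int requests) {
        for (int req = 0; req < requests; req++) {
            if (cpGood >= cpBad) {
                cpGood *= (1 - alpha); // the formerly good node now fails too
            } else {
                cpBad *= (1 - alpha);  // the bad node keeps failing as always
            }
        }
        return new double[] { cpGood, cpBad };
    }

    public static void main(String[] args) {
        // cpGood: responded until just now, but is currently down;
        // cpBad: has never responded at all.
        double[] cps = erode(0.95, 0.10, 0.1, 100);
        // Both end up so low that nearly every request will RNF.
        System.out.printf("good=%.4f bad=%.4f%n", cps[0], cps[1]);
    }
}
```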

The CP of the good node will always be better than the CP of the bad one.  But 
that doesn't matter, since they are absolute probabilities.

What does matter is that both CPs have been driven so low that almost all 
requests RNF without ever getting a chance to retry the node that might 
actually respond.  You could try to wheedle out of this by enforcing a minimum 
floor on CP, but if you do that, you have lost the information that the 
previously good node is the better one.
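To make the floor objection concrete -- the 0.05 floor and the two sample CPs 
below are numbers I made up, not anything in the code:

```java
// Clamping to a minimum floor keeps requests flowing, but the two
// nodes become indistinguishable to the router.  All numbers here
// are invented for illustration.
public class FloorClamp {

    static double clamp(double cp, double floor) {
        return Math.max(cp, floor);
    }

    public static void main(String[] args) {
        double previouslyGood = 0.02;  // eroded, but still the better bet
        double alwaysBad      = 0.001; // has never responded at all
        double floor = 0.05;
        System.out.println(clamp(previouslyGood, floor)); // prints 0.05
        System.out.println(clamp(alwaysBad, floor));      // prints 0.05
    }
}
```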

--gj


> <>
>
> > > Well, I'm not entirely sure how the ThreadPool works, but I thought
> > > that the pool number was the number of threads that were always kept
> > > alive (though I guess "minPool" would be a better name for that), the
> > > maxThreads number was the maximum that could ever be alive, and I
> > > don't see the need to enqueue any jobs at all (there is no sense in
> > > leaving jobs hanging we don't have threads for; all new Threads except
> > > connections come from the Ticker).
> > >
> > > It seems logical to me that we keep an active pool of about half the
> > > allowable threads - why would we only keep 5?
> >
> > Back in the .3 days people expressed dismay that all of those "unused"
> > threads were being kept around.
>
> That sounds silly, a couple of idle threads aren't a resource issue on
> modern computers.
>
> <>

-- 
Freesites
(0.3) freenet:MSK at SSK@enI8YFo3gj8UVh-Au0HpKMftf6QQAgE/homepage//
(0.4) freenet:SSK at npfV5XQijFkF6sXZvuO0o~kG4wEPAgM/homepage//

_______________________________________________
Devl mailing list
Devl at freenetproject.org
http://lists.freenetproject.org/mailman/listinfo/devl
