On Mon, Dec 01, 2003 at 10:51:25PM -0800, Martin Stone Davis wrote:
> Toad wrote:
> >On Sun, Nov 30, 2003 at 10:09:40AM -0800, Martin Stone Davis wrote:
> <snip>
> >>However, I think we can implement the other ideas right away.  We know 
> >>(Toad has proven) that we have more queries than we can actually deal 
> >>with, so your back-off scheme should be implemented.  We know (I have 
> >>proven) that the current estimate() formula is incorrect, so we should 
> >>fix it to match its original purpose.  And as for Unobtanium routing, I 
> >>can't prove to you that it would cure a major ill, but it would not be 
> >>hard to implement, it couldn't hurt, and it just might help.
> >
> >
> >We should think about tRetry. Maybe it needs to be infinite. I.e. maybe
> >we need to route strictly by success probability. And if we don't, we
> >have to somehow get rid of the HTL, because that's the only way not to
> >have tRetry (failures such as connection failure could lead to a
> >restart - what we'd get rid of is not just DNFs, it's all the varied
> >reasons to kill a request).
> 
> Routing strictly on success probability would lead to us favoring a 
> node which fails often, takes a LONG time to fail, but whose probability 
> of success is somewhat (or even just slightly) better than the others'. 
> Say A takes 10 minutes to fail, 2 minutes to succeed, and succeeds 20% 
> of the time. 
> Say B takes 1 minute to fail, 2 minutes to succeed, and succeeds 10% of 
> the time.  Now, let's compute the total time we will spend if we try A 
> first vs if we try B first.
> 
> TryAFirstThenB = 20%*2 + 80% * (10 + (2*10% +  1*90%)) = 9.28 minutes
> TryBFirstThenA = 10%*2 + 90% * (1 +  (2*20% + 10*80%)) = 8.66 minutes
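
Your expected-time arithmetic checks out, for what it's worth. A rough
sketch of the same computation in Java - purely illustrative, the class
and variable names are made up, the numbers are just the ones assumed
above:

public class RetryOrder {
    public static void main(String[] args) {
        // Assumed figures from the example above:
        // A: 20% success, 2 min to succeed, 10 min to fail
        // B: 10% success, 2 min to succeed, 1 min to fail
        double pA = 0.2, tASucc = 2, tAFail = 10;
        double pB = 0.1, tBSucc = 2, tBFail = 1;

        // Expected total time: try one node, fall back to the other on failure.
        double tryAFirst = pA * tASucc
            + (1 - pA) * (tAFail + pB * tBSucc + (1 - pB) * tBFail);
        double tryBFirst = pB * tBSucc
            + (1 - pB) * (tBFail + pA * tASucc + (1 - pA) * tAFail);

        System.out.println("A first: " + tryAFirst + " min"); // ~9.28
        System.out.println("B first: " + tryBFirst + " min"); // ~8.66
    }
}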

I don't agree with your reasoning here, because in the current network,
if a node fails with a DataNotFound, we have to terminate the entire
request. Thus we should route to A, because of the massive cost of
routing to the wrong node.
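
To put it another way: if a DNF from the node we pick can kill the whole
request, a failure costs far more than that node's time-to-fail. A rough
sketch - the penalty constant is entirely made up and is not anything in
the current code, it just stands for "the request is dead":

public class DnfKillsRequest {
    public static void main(String[] args) {
        // Arbitrary, illustrative penalty for a terminated request (minutes).
        double requestKilledPenalty = 120;

        // cost(node) = p * tSuccess + (1 - p) * (tFail + penalty)
        double costA = 0.2 * 2 + 0.8 * (10 + requestKilledPenalty); // 104.4
        double costB = 0.1 * 2 + 0.9 * (1 + requestKilledPenalty);  // 109.1

        System.out.println("A: " + costA + "  B: " + costB);
    }
}

Once the penalty dominates, the node with the better success probability
(A) wins regardless of its slow time-to-fail.
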
> 
> So, it's to our advantage to try B first.  If you changed the example 
> to include a bunch of nodes just like B (call them B1, B2, ..., BN), 
> the difference between trying the Bx's first and trying A first would 
> be much more striking, due to the high likelihood of success on one of 
> the Bx's and their relatively low time to fail.  (If you need me to 
> prove this, I will, but it's a pain in the ass to compute.)
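
For what it's worth, a rough sketch of that generalisation - purely
illustrative, same assumed numbers, five B-like nodes - puts the B-first
order at about 9.5 minutes against about 12 for A-first, so the gap does
grow:

public class ManyCheapNodes {
    // Each node is { pSuccess, tSuccess, tFail }; these are the assumed
    // numbers from the example above, nothing real.
    static final double[] A = { 0.2, 2, 10 };
    static final double[] B = { 0.1, 2, 1 };

    // Expected total time trying nodes[i..] in order, falling through to
    // the next node on failure, giving up after the last one.
    static double expectedTime(double[][] nodes, int i) {
        if (i == nodes.length) return 0;
        double p = nodes[i][0], tSucc = nodes[i][1], tFail = nodes[i][2];
        return p * tSucc + (1 - p) * (tFail + expectedTime(nodes, i + 1));
    }

    public static void main(String[] args) {
        int n = 5; // number of B-like nodes - arbitrary, for illustration
        double[][] aFirst = new double[n + 1][];
        double[][] bFirst = new double[n + 1][];
        aFirst[0] = A;
        bFirst[n] = A;
        for (int i = 0; i < n; i++) {
            aFirst[i + 1] = B;
            bFirst[i] = B;
        }
        System.out.println("A then B1..B5: " + expectedTime(aFirst, 0)); // ~12.0
        System.out.println("B1..B5 then A: " + expectedTime(bFirst, 0)); // ~9.46
    }
}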

If we don't kill requests on DNF, and instead implement a separate
load-balancing system to kill requests (such as unobtanium), we can
safely route to fast, unreliable nodes, and greatly simplify NGRouting.
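
Roughly speaking, once a failure only costs the failing node's own
time-to-fail (plus a retry somewhere else, which is much the same
whichever node we tried first), the per-node comparison collapses to
expected time per attempt. A sketch, again illustrative only and not the
real estimate():

public class CheapFailureCost {
    // First-order cost of trying a node when failures are cheap:
    // just the expected time spent on that node per attempt.
    static double costPerAttempt(double pSuccess, double tSuccess, double tFail) {
        return pSuccess * tSuccess + (1 - pSuccess) * tFail;
    }

    public static void main(String[] args) {
        System.out.println("A: " + costPerAttempt(0.2, 2, 10) + " min"); // 8.4
        System.out.println("B: " + costPerAttempt(0.1, 2, 1) + " min");  // 1.1
    }
}

The fast, unreliable node wins by a wide margin once failing is cheap.
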
> 
> Based on the above analysis, routing strictly on success probability 
> would be vulnerable to malicious exploits.  A well-established node 
> could become a black hole by suddenly taking an enormously long time to 
> fail while succeeding at the same rate as before.

Unfortunately this is the case. So that is a further argument in favour
of iTTL.
> 
> So, before we try some alchemical modification of tRetry, shouldn't we 
> consider using the new computation of estimate() I derived (see 
> http://article.gmane.org/gmane.network.freenet.devel/8143)?

If we have to have a tRetry based on failing the whole request, it has
to be made alchemically enormous, which is vulnerable to the attacks
outlined above.
> 
> -Martin
-- 
Matthew J Toseland - [EMAIL PROTECTED]
Freenet Project Official Codemonkey - http://freenetproject.org/
ICTHUS - Nothing is impossible. Our Boss says so.
