On Sun, Jun 17, 2001 at 09:44:33AM -0700, Ian Clarke wrote:
> On Sun, Jun 17, 2001 at 01:34:58PM +0200, Oskar Sandberg wrote:
> > There is no guesswork involved.
> 
> I was referring to:
> 
> > The
> > second is the values we are using for the Expected value and variance,
> > which are based on some pretty rough experiments from last spring.
> 
> You say estimates, I say guesswork.  It is very likely that the
> measurements now would be radically different from measurements taken
> months ago.

Actually they should be based on experiment. If people run the
experiments I will be glad to change the numbers.
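
To be concrete about what those numbers feed into: the kind of
calculation involved is, roughly, "expected time for htl hops plus a
couple of standard deviations". Something of this shape (a sketch only -
the names and constants below are placeholders, not the actual code or
the measured values):

    // Sketch of a restart timeout built from per-hop statistics.
    // MEAN_HOP_MILLIS and VAR_HOP_MILLIS are made-up placeholders; the
    // whole point is that they should come from real measurements.
    static final double MEAN_HOP_MILLIS = 2000.0;
    static final double VAR_HOP_MILLIS  = 1000.0 * 1000.0;

    static long restartTimeout(int htl) {
        // Treat the hops as independent: mean and variance both scale
        // linearly with the number of hops.
        double mean   = htl * MEAN_HOP_MILLIS;
        double stddev = Math.sqrt(htl * VAR_HOP_MILLIS);
        // Give up and restart a couple of standard deviations past the
        // expected completion time.
        return (long) (mean + 2.0 * stddev);
    }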

> 
> > The reason we would have to fucking wait a long time to restart when HTL
> > is 100, is because it takes a fucking long time to make 100 hops even if
> > everything goes right. To say that is because of how we calculate it is
> > silly beyond words.
> 
> Not true.  Ask anyone who requests at an HTL of 100, you will find that
> most requests return in a reasonable amount of time, even failed
> requests.

I have tried it myself, and it is hardly reasonable. It's certainly
encouraging that 100 hops finish comparatively fast - it gives hope that
we can reach truly reasonable (i.e. surfable) speeds at some point in
the future - but the current speed is anything but reasonable.
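
Just to put a number on it (the per-hop figure here is one I am making
up for illustration, not a measurement): at, say, two seconds per hop, a
request that really uses all 100 hops is on the order of 200 seconds
before anything comes back. Compared to what it could be, that is fast,
but nobody is going to call several minutes per fetch surfable.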

> > However, that would be a bitch to implement, add lots
> > of load, and be an invaluable gift to anyone trying to use traffic
> > analysis on the network - which is not something I am willing to accept
> > for what I see as a half-assed workaround for the fact the routing is
> > currently not working.
> 
> If routing wasn't working, then nobody would be able to retrieve
> anything, and our simulations would have demonstrated this.  The reality
> is that multiple independent simulations have demonstrated that routing
> does work.  It is probable that overzealous caching is currently
> degrading document longevity, and we will address this.  But in the
> meantime, people want to know how they can improve performance *now*,
> and in everybody's experience, it seems that increasing the HTL does
> this.  It may be a kludge, and it hopefully won't be necessary in 0.4,
> but it does appear beneficial now.  Do you advocate trying to hide this
> fact from people?

Firstly, I believe this is a bed-pissing solution (nice and warm at
first, cold and sticky soon thereafter). Greater HTL values mean more
caching, not less, so if that truly is the problem then increasing the
HTL will only make it worse. The argument that high HTL values flatten
the search trees also seems reasonable.
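
To spell out the caching point: on a successful request, every node on
the reply path stores its own copy of the data on the way back, so the
number of fresh copies per successful fetch grows roughly with the path
length, which is bounded by the HTL. Going from an HTL of 25 to one of
100 can therefore mean on the order of four times as many copies
scattered per request (rough reasoning, obviously, since failed and
short paths cache less).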

Secondly, if all you want is a short-term solution to make searches
succeed more often, why doesn't somebody just add request broadcasting
instead? That way at least something will pop up in fproxy and you will
be able to go "Look, it works", although we all know that (for whatever
reason, fundamental or detail) it sure as hell doesn't ATM.
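
By request broadcasting I mean something of this shape (just a sketch
with made-up names - Peer, Request and peers() don't exist - and a real
version would need duplicate suppression and load limiting before
anyone sane would run it):

    // Sketch: instead of routing a request to the single best next
    // node, flood it to every connected peer, with a small depth limit
    // so it dies out quickly.
    void broadcast(Request req, int depth) {
        if (depth <= 0)
            return;                  // flood no deeper than the limit
        Peer[] neighbours = peers();
        for (int i = 0; i < neighbours.length; i++) {
            // every neighbour gets a copy with the depth decremented
            neighbours[i].send(req.withDepth(depth - 1));
        }
    }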

Thirdly, where does it end? 100 just happens to be the number I pulled
out of my ass to set as the maximum hops to live (because I figured that
if we needed an HTL that high, the network was useless). If you set it
to 100 and that helps for a couple of weeks, then some more users join
and it starts getting worse, would you want to increase it further? 200?
300? Where does it end?
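
For the record, the cap itself is nothing more than a clamp on the
requested value, something like this (names made up):

    // The 100 is not derived from anything; it is just a ceiling on
    // how deep a single request is allowed to go.
    static final int MAX_HOPS_TO_LIVE = 100;

    static int clampHopsToLive(int requested) {
        return Math.min(requested, MAX_HOPS_TO_LIVE);
    }

So if 100 stops being enough, the only "fix" in this direction is to
keep raising that constant.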

-- 
'DeCSS would be fine. Where is it?'
'Here,' Montag touched his head.
'Ah,' Granger smiled and nodded.

Oskar Sandberg
oskar at freenetproject.org

_______________________________________________
Devl mailing list
Devl at freenetproject.org
http://lists.freenetproject.org/mailman/listinfo/devl
