On Thu, Nov 21, 2002 at 06:17:26PM -0600, Edgar Friendly wrote:
> Matthew Toseland <toad at amphibian.dyndns.org> writes:
> 
> > On Thu, Nov 21, 2002 at 07:39:54AM -0600, Edgar Friendly wrote:
> > That doesn't empty your routing table, does it? If so, it's a bug.
> > References are deleted only when the table fills up (now), AFAIK.
> 
> So the code that deletes references when their backoff count gets
> above 6 is gone?  If so, that's great.  If not, it should be whacked.
And replaced with what? We can't "move nodes offline" and resurrect them
later unless we have some sort of metric for comparing an offline node
that we just managed to contact to an online node.
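What exponential backoff would look like, roughly - this is a sketch
only, not the actual NodeReference code, and all the names and constants
here are invented:

    // Sketch: back off a flaky reference exponentially instead of
    // deleting it after N failures. Names and timings are made up.
    class ReferenceBackoff {
        private int consecutiveFailures = 0;
        private long retryAt = 0; // earliest time to try this node again (ms)
        private static final long BASE_DELAY = 60 * 1000L;           // 1 minute
        private static final long MAX_DELAY  = 24 * 60 * 60 * 1000L; // 1 day cap

        synchronized void onContactFailed() {
            consecutiveFailures++;
            long delay = BASE_DELAY << Math.min(consecutiveFailures, 10);
            retryAt = System.currentTimeMillis() + Math.min(delay, MAX_DELAY);
        }

        synchronized void onContactSucceeded() {
            consecutiveFailures = 0;
            retryAt = 0;
        }

        // Routing only considers this reference while this returns true.
        synchronized boolean routable() {
            return System.currentTimeMillis() >= retryAt;
        }
    }

A flaky node gets tried less and less often, but the reference never
actually disappears until the table genuinely needs the slot.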
> > > 
> > > If a page has DNFed at HTL=25 after 3 or 4 attempts, it should be
> > > "not-retrievable", and given up on.  If the key is in freenet and
> > Maybe so. But IMHO exponential backoff is cleaner.
> 
> 
> > > isn't being found, the solution is *not* to add an auto-retry "feature"
> > > into fproxy; that's just patching over the symptom of the problem.
> > > The real problem is that the key isn't being found, and it's this that
> > > needs to be worked on, either by tweaking routing table settings so
> > > that nodes (in general) use more information to route, or by making
> > Oh yes, making nodes route to hosts less likely to find the key just
> > because they are faster is obviously going to improve network
> > performance. This is the line you have always advocated; it may work in
> > terms of speed for successful requests but it certainly doesn't for
> > reliability.
> 
> I wouldn't route to a T3 node over a 56k node just because the T3 is
> faster, but I would insist that the average hoptime of requests can be
> improved significantly, and once this is done, higher HTLs will not be
> as much of a drain on the network.
How? The average hoptime should have improved significantly in recent
months, modulo overloading and network upfuckage caused by lots of new
users and the seednodes and so on.
> 
> > > requests more lightweight so that higher HTLs can be supported.
> > HAH. How? 
> 
> if average hoptime is reduced to 1/2 of what it is now, then HTLs can
> be doubled without requests taking longer.  In fact, deep requests may
> be vital to the working of the network, as they bring data to a lot of
> nodes.  Of course, DNFs need to be as efficient as possible, so that
> the higher HTLs don't allow request flooding.
Yes, but as the network gets bigger HTLs will have to increase anyway. If
we increase them a lot now, we lose the headroom we will need.
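To put numbers on it (purely illustrative, not measurements):

    request wall-time  ~  HTL x average time per hop
      25 hops x 2.0 s/hop = 50 s
      50 hops x 1.0 s/hop = 50 s   (same latency, twice the depth)

That's the latency tradeoff you're describing. But every hop is still a
message some node has to handle, so doubling HTL roughly doubles the
worst-case per-request load even though the user doesn't wait any
longer - which is the headroom I'm worried about.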
> 
> > Also: what makes you think that a max HTL of 25 is
> > insufficient for the current network, given a bit more time to evolve?
> > DNF does not necessarily mean the request visited that many nodes - it
> > only means that it ran out of HTL. The request can lose a hop by not
> > connecting somewhere, or being rejected, or whatever.
> > > 
> "given more time to evolve"?  The network will always be changing; I
> assert that the routes through the network for different parts of the
> keyspace are already as stable/evolved as they're going to get.  As
> for why HTL 25 isn't sufficient for the current network, the proof is
> clear: people aren't able to find data that was just recently inserted
> into the network.  (i.e. the problem that we're trying to solve)
Yeah. There are many possible reasons for this. The presence of old
buggy nodes doesn't help, but there are lots and lots of potentially
serious problems affecting the network at the moment that might cause
DNFs.
> 
> > > 
> > > Frost is *not* distributed as part of the node.  Yes, many people are
> > > using frost to flood the network, but that doesn't mean we should make
> > > people flood the network each time they browse to an edition-based
> > Given exponential backoff I would argue that this is not flooding in any
> > meaningful sense.
> 
> Exponential backoff would make me much happier about this idea, but I
> still see it taking a request that's not succeeding and turning it
> into 10 (or more) requests that aren't going to succeed.
Only if misused. And if you're gonna misuse Freenet, you'll go download
Frost. The common-case usage will likely be to get sites that are
eventually found. For example, trying to load TFE at five minutes past
midnight, when it hasn't fully propagated yet, often takes a few tries
but almost always gets there in the end.
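The kind of schedule I have in mind - a sketch only, not what fproxy
does today, and the numbers are made up:

    // Sketch: exponential backoff between automatic fproxy retries.
    final class RetrySchedule {
        static final int BASE_SECONDS = 30;  // first retry after 30 s
        static final int MAX_ATTEMPTS = 6;   // then show a plain DNF page

        // Seconds to wait before the Nth retry, or -1 to stop retrying.
        static int refreshSeconds(int attempt) {
            if (attempt >= MAX_ATTEMPTS)
                return -1;
            return BASE_SECONDS << attempt;  // 30, 60, 120, 240, 480, 960
        }
    }

A key that never turns up costs half a dozen requests spread over about
half an hour, which is a long way from Frost-style hammering.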
> 
> > > site.  Yes, I'm talking about the links to future editions, which will
> > > set fproxy working forever trying to find data that's not in the
> > > network.  As for your characterization of "one request ever couple to
> > Only if you click the link. The broken images will NOT cause a retry.
> > It's in the HTML, not the headers.
> 
> what headers does the link need to be in to be retried?  fproxy
> doesn't distinguish between a request for an inline image and an HTML
> page, does it?
No, but the browser does. The refresh goes into the HTML generated to
tell the user about a DNF. The browser will not do anything with it if
it's expecting an image.
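To be explicit about the mechanism (illustrative markup only, not the
exact page fproxy generates, and the key in the URL is made up):

    <html><head>
      <meta http-equiv="refresh" content="60; url=/KSK@some-example-key">
    </head><body>
      Data Not Found - retrying in 60 seconds...
    </body></html>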
> 
> Thelema
> -- 
> E-mail: thelema314 at swbell.net                         Raabu and Piisu
> GPG 1024D/36352AAB fpr:756D F615 B4F3 BFFC 02C7  84B7 D8D7 6ECE 3635 2AAB
> 

-- 
Matthew Toseland
toad at amphibian.dyndns.org
amphibian at users.sourceforge.net
Freenet/Coldstore open source hacker.
Employed full time by Freenet Project Inc. from 11/9/02 to 11/1/03
http://freenetproject.org/