Matthew Toseland <toad at amphibian.dyndns.org> writes:

> On Thu, Nov 21, 2002 at 07:39:54AM -0600, Edgar Friendly wrote:
> That doesn't empty your routing table, does it? If so, it's a bug.
> References are deleted only when the table fills up (now), AFAIK.
So the code that deletes references when their backoff count gets above
6 is gone?  If so, that's great.  If not, it should be whacked.

> > If a page has DNFed at HTL=25 after 3 or 4 attempts, it should be
> > "not-retrievable", and given up on.  If the key is in freenet and
> Maybe so. But IMHO exponential backoff is cleaner.
> > isn't being found, the solution is *not* to add an auto-retry "feature"
> > into fproxy; that's just patching over the symptom of the problem.
> > The real problem is that the key isn't being found, and it's this that
> > needs to be worked on, either by tweaking routing table settings so
> > that nodes (in general) use more information to route, or by making
> Oh yes, making nodes route to hosts less likely to find the key just
> because they are faster is obviously going to improve network
> performance. This is the line you have always advocated, it may work in
> terms of speed for successful requests but it certainly doesn't for
> reliability.

I wouldn't route to a T3 node over a 56k node just because the T3 is
faster, but I would insist that the average hoptime of requests can be
improved significantly, and once this is done, higher HTLs will not be
as much of a drain on the network.

> > requests more lightweight so that higher HTLs can be supported.
> HAH. How?

If average hoptime is reduced to half of what it is now, then HTLs can
be doubled without requests taking any longer.  In fact, deep requests
may be vital to the working of the network, as they bring data to a lot
of nodes.  Of course, DNFs need to be as efficient as possible, so that
the higher HTLs don't allow request flooding.

> Also: what makes you think that a max HTL of 25 is
> insufficient for the current network, given a bit more time to evolve?
> DNF does not necessarily mean the request visited that many nodes - it
> only means that it ran out of HTL. The request can lose a hop by not
> connecting somewhere, or being rejected, or whatever.

"given more time to evolve"?  The network will always be changing; I
assert that the routes through the network for different parts of the
keyspace are already as stable/evolved as they're going to get.  As for
why HTL 25 isn't sufficient for the current network, the proof is
clear: people aren't able to find data that was just recently inserted
into the network (i.e. the problem that we're trying to solve).

> > Frost is *not* distributed as part of the node.  Yes, many people are
> > using frost to flood the network, but that doesn't mean we should make
> > people flood the network each time they browse to an edition-based
> Given exponential backoff I would argue that this is not flooding in any
> meaningful sense.

Exponential backoff would make me much happier about this idea (a rough
sketch of what I mean is at the end of this mail), but I still see it
taking a request that isn't succeeding and turning it into 10 (or more)
requests that aren't going to succeed either.

> > site.  Yes, I'm talking about the links to future editions, which will
> > set fproxy working forever trying to find data that's not in the
> > network.  As for your characterization of "one request every couple to
> Only if you click the link. The broken images will NOT cause a retry.
> It's in the HTML, not the headers.

What headers does the link need to be in to be retried?  fproxy doesn't
distinguish between a request for an inline image and an HTML page,
does it?
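For concreteness, here is roughly the shape of backoff-with-a-cutoff I
would be comfortable with in fproxy.  This is only a sketch of the idea,
not real node or fproxy code; RequestClient, DataNotFoundException and
fetchWithBackoff are made-up names.

// Rough sketch only -- RequestClient and DataNotFoundException are
// stand-ins for whatever the real interfaces end up being.
public class BackoffRetry {

    /** Stand-in for whatever actually issues a request for a key. */
    public interface RequestClient {
        byte[] fetch(String key, int htl) throws DataNotFoundException;
    }

    public static class DataNotFoundException extends Exception {}

    /**
     * Retry a key with exponentially increasing delays.  After
     * maxAttempts DNFs the key is treated as not-retrievable and we give
     * up, so one failing request never becomes an open-ended stream of
     * requests.
     */
    public static byte[] fetchWithBackoff(RequestClient client, String key,
                                          int htl, int maxAttempts,
                                          long initialDelayMillis)
            throws DataNotFoundException, InterruptedException {
        long delay = initialDelayMillis;
        for (int attempt = 1; ; attempt++) {
            try {
                return client.fetch(key, htl);
            } catch (DataNotFoundException dnf) {
                if (attempt >= maxAttempts)
                    throw dnf;           // give up: "not-retrievable"
                Thread.sleep(delay);     // wait before retrying
                delay *= 2;              // exponential backoff
            }
        }
    }
}

With, say, an initial delay of a minute and 4 or 5 attempts, a link to a
not-yet-inserted edition costs a handful of requests spread over a few
minutes instead of an unbounded stream of them.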
Thelema
--
E-mail: thelema314 at swbell.net         Raabu and Piisu
GPG 1024D/36352AAB fpr: 756D F615 B4F3 BFFC 02C7 84B7 D8D7 6ECE 3635 2AAB
