Oskar has just about convinced me that the backoff code makes no sense,
and is a hangover from the 0.4 overload problems, which were eventually
resolved by a set of bug fixes rather than by better load handling.
I propose that we route purely on CPs. Hence:
* Unreliable nodes _will_ be retried eventually.
* Even unreliable nodes' CPs will rarely fall so far that they are never
  retried: when a node's CP is low it is tried less often, so further
  failures accumulate more slowly and its CP effectively never drops to
  zero.
* We will not be "making too many connections to bad nodes".
* When the node comes back online, its CP will rise again (a rough
  sketch of this follows below).
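
To make this concrete, here is a rough sketch in Java. It is not the
actual Freenet code; the class names, constants and update rules are
invented for illustration. The point is that selection is proportional
to CP, so low-CP nodes are tried less often but never excluded, and a
node that comes back online sees its CP climb again as successes are
reported.

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

/**
 * Illustrative sketch of routing purely on contact-probability-style
 * estimates (CPs): CP rises on success, decays on failure, and a node
 * is picked with probability proportional to its CP.
 */
public class CpRoutingSketch {

    /** Hypothetical node reference holding only a CP estimate. */
    static class NodeRef {
        final String address;
        double cp;   // CP estimate, kept strictly above zero

        NodeRef(String address, double initialCp) {
            this.address = address;
            this.cp = initialCp;
        }

        void reportSuccess() {
            // Exponential moving average toward 1.0 (constants are illustrative).
            cp = 0.9 * cp + 0.1;
        }

        void reportFailure() {
            // Decay toward zero on failure, but never clamp to exactly zero,
            // so the node always retains a small chance of being retried.
            cp = Math.max(0.9 * cp, 0.001);
        }
    }

    private final List<NodeRef> table = new ArrayList<>();
    private final Random random = new Random();

    void add(NodeRef node) {
        table.add(node);
    }

    /**
     * Pick a node with probability proportional to its CP.  Unreliable
     * nodes are tried less often, so their CPs also fall more slowly,
     * and they are never permanently excluded.
     */
    NodeRef pickNode() {
        double total = 0.0;
        for (NodeRef n : table) {
            total += n.cp;
        }
        double target = random.nextDouble() * total;
        for (NodeRef n : table) {
            target -= n.cp;
            if (target <= 0) {
                return n;
            }
        }
        return table.get(table.size() - 1);  // fallback for rounding error
    }

    public static void main(String[] args) {
        CpRoutingSketch router = new CpRoutingSketch();
        router.add(new NodeRef("node-a", 0.8));
        router.add(new NodeRef("node-b", 0.2));

        // Simulate a few routing attempts: node-a succeeds, node-b is down.
        for (int i = 0; i < 10; i++) {
            NodeRef n = router.pickNode();
            if (n.address.equals("node-a")) {
                n.reportSuccess();
            } else {
                n.reportFailure();
            }
            System.out.printf("tried %s, cp now %.3f%n", n.address, n.cp);
        }
    }
}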

As part of this strategy, we would also remove the failure intervals
code, and the 7-times-rule, so that nodes are never removed from the
routing table, only replaced by superior ones.
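
Again as a rough sketch (hypothetical names, not the real routing table
code): with the failure intervals and the 7-times-rule gone, no failure
count ever evicts an entry; a node is only displaced when a candidate
with a strictly higher CP turns up.

import java.util.ArrayList;
import java.util.List;

/**
 * Illustrative replacement-only table: entries are never dropped for
 * failing, only displaced by superior (higher-CP) candidates.
 */
public class ReplacementOnlyTable {

    static class Entry {
        final String address;
        double cp;

        Entry(String address, double cp) {
            this.address = address;
            this.cp = cp;
        }
    }

    private final int capacity;
    private final List<Entry> entries = new ArrayList<>();

    ReplacementOnlyTable(int capacity) {
        this.capacity = capacity;
    }

    /**
     * Offer a node to the table.  If there is free space it is simply
     * added; otherwise it replaces the current lowest-CP entry, but only
     * if its own CP is strictly higher.
     */
    boolean offer(Entry candidate) {
        if (entries.size() < capacity) {
            entries.add(candidate);
            return true;
        }
        Entry weakest = entries.get(0);
        for (Entry e : entries) {
            if (e.cp < weakest.cp) {
                weakest = e;
            }
        }
        if (candidate.cp > weakest.cp) {
            entries.remove(weakest);
            entries.add(candidate);
            return true;
        }
        return false;  // candidate not superior; table unchanged
    }

    public static void main(String[] args) {
        ReplacementOnlyTable table = new ReplacementOnlyTable(2);
        table.offer(new Entry("node-a", 0.6));
        table.offer(new Entry("node-b", 0.3));
        System.out.println(table.offer(new Entry("node-c", 0.5)));  // true: replaces node-b
        System.out.println(table.offer(new Entry("node-d", 0.1)));  // false: not superior
    }
}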

The idea here is to make the network more dynamic. We need to make it
possible for every Freenet user who is not behind a NAT to run a node
that is useful to the network.
-- 
Matthew Toseland
toad at amphibian.dyndns.org
amphibian at users.sourceforge.net
Freenet/Coldstore open source hacker.
Employed full time by Freenet Project Inc. from 11/9/02 to 11/1/03
http://freenetproject.org/