> You'll notice that for an online node, most of the time connections will
> succeed, so a constant factor might want to be a little lower than
> 0.1.  Doubling the factor each time might be a bad idea, since a possibly
> unreliable node could become 'reliable' too quickly.  A server should have
> to 'prove its love' before being respected as reliable again.  
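(For concreteness, I read the quoted scheme as something like an exponential
moving average over connection outcomes; the class and names below are made
up, and the factor is just the kind of value being talked about, not anything
from the actual code.)

public class ReliabilityEstimate {
    // Lower factor => slower recovery: a node has to "prove its love"
    // across many successful contacts before it looks reliable again.
    private static final double FACTOR = 0.05; // "a little lower than 0.1"

    private double reliability = 0.5; // start out neutral

    public void report(boolean connected) {
        double sample = connected ? 1.0 : 0.0;
        reliability = (1.0 - FACTOR) * reliability + FACTOR * sample;
    }

    public double getReliability() {
        return reliability;
    }
}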

In our particular case, however, most unreliable nodes are probably
unreliable because they are switching IPs, or at least they will probably
switch IPs if they crash. Modems, cable modems, and ADSL connections often
use DHCP, so they frequently come back with a new IP after a reboot or
disconnect.

So in this case the reliability formula should be one where nodes that are
around a lot stay on top, nodes that reappear jump quickly back up to a
fairly high rating (though still below the truly reliable ones), and nodes
that have been dead for a while get deleted fairly quickly. I think the way
to solve this is to have two classes, very reliable and somewhat reliable,
with a lot required to move from one class to the other. A very reliable
node can go for days without being downgraded to somewhat reliable, but a
somewhat reliable node will be deleted after a day of downtime. On the
other hand, a somewhat reliable node will have to put in a marathon of
uptime to become a very reliable node.
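Roughly, something like this; it's only a sketch, and all the names and
thresholds are placeholder numbers I'm making up:

public class NodeRecord {
    enum Trust { SOMEWHAT_RELIABLE, VERY_RELIABLE }

    Trust trust = Trust.SOMEWHAT_RELIABLE;
    long lastSeenMillis;          // last successful contact
    long continuousUptimeMillis;  // uptime observed since the last failure

    static final long DAY = 24L * 60 * 60 * 1000;

    // Promotion takes a marathon of uptime; demotion depends on how long
    // the node has been unreachable.
    void update(long nowMillis) {
        long downtime = nowMillis - lastSeenMillis;
        if (trust == Trust.VERY_RELIABLE) {
            if (downtime > 3 * DAY) {              // tolerated for days
                trust = Trust.SOMEWHAT_RELIABLE;
            }
        } else if (continuousUptimeMillis > 7 * DAY) { // long proving period
            trust = Trust.VERY_RELIABLE;
        }
    }

    boolean shouldDelete(long nowMillis) {
        // Somewhat reliable nodes get dropped after about a day of downtime.
        return trust == Trust.SOMEWHAT_RELIABLE
                && nowMillis - lastSeenMillis > DAY;
    }
}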

Of course, we are entirely ignoring closeness. The algorithm for choosing
nodes needs to take both closeness and reliability into account for either
to have meaning. A simple multiplication of the two values might be good
enough, with a constant weighting the outcome towards one value or the
other.
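For example (again only a sketch; the weighted-product form is just one way
to read "a constant weighting the outcome", and both inputs are assumed to
be normalized to [0, 1]):

public final class NodeScore {
    private NodeScore() {}

    // weight in (0, 1): higher weight leans the score toward reliability,
    // lower weight leans it toward closeness.
    public static double score(double closeness, double reliability,
                               double weight) {
        return Math.pow(closeness, 1.0 - weight)
                * Math.pow(reliability, weight);
    }
}

With weight = 0.5 this treats the two values equally; nudging it either way
biases node choice toward closeness or toward reliability.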


