On Wed, 06 Jun 2007 19:11:27 +0100, Matthew Toseland wrote:

> Recent probe data suggests a theory:
> 
> Parts of the network are "rabbit holes" or "dungeons", i.e. sub-networks
> which are only weakly connected to the larger network. These cover a small
> chunk of the keyspace, say 0.36-0.41 (roughly, in the trace I had). A
> request for 0.5 got stuck down the rabbit hole. The gateway node at 0.436
> was backed off; if the request had been able to reach that node, it would
> have been able to get much closer to where it should be. So the request
> bounced around in the dungeon, and eventually DNFed (Data Not Found).
> 
> What we need is some backtracking. At the moment backtracking only occurs
> when we actually run out of nodes in a pocket, i.e. when it is really
> small. We track the best location seen so far on the request, not
> including dead ends (RNFs, Route Not Found), and whenever this improves
> we reset the HTL back to the maximum (10); otherwise it is decremented.
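
For concreteness, the HTL rule described there amounts to roughly the
following (a Java sketch; the class and method names are mine, not the
actual Freenet source, and "distance" is the usual circular distance on
the keyspace):

    class HtlPolicy {
        static final int MAX_HTL = 10;

        // Circular distance on the [0,1) keyspace.
        static double distance(double a, double b) {
            double d = Math.abs(a - b);
            return Math.min(d, 1.0 - d);
        }

        // Reset HTL whenever the request gets closer to the target than
        // it has ever been; otherwise burn a hop. Dead ends (RNFs) do
        // not count as progress.
        static int update(int htl, double here, double target, double bestSoFar) {
            if (distance(here, target) < distance(bestSoFar, target))
                return MAX_HTL;
            return htl - 1;
        }
    }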

No, what you need is an opennet. Having a "sparse" network with plenty of
"leaves" connected to the rest of the network only through a single node
is a natural feature of a darknet. The leaves are made of people who
happened to be in #freenet-refs at the same time, exchanged refs, and
left; the connecting nodes are the ones running automated ref-exchange
scripts, which therefore get connected to the new leaves as they form.
Simply backing away from the leaves is going to overload the single
connecting node, since all the traffic to and from the leaves has to go
through it; and of course, if that node goes offline for any reason, the
network will splinter.

A darknet is never going to work well; the network topology forces the
majority of requests through a small number of well-connected nodes.
Those nodes are going to get overloaded, and even if they don't, the
network will splinter if they are taken offline, which is easy since
they have to be semi-public to get well connected in the first place.
This makes the "dark" Freenet easy to take down.

So the darknet is fragile, will never perform well, and is a pain in the
ass to get into and stay in. No amount of tweaking can get around these
fundamental flaws of the darknet approach. Only an enabled-by-default
opennet can solve them.

Oh well, it's not my problem, since I'm not running Freenet anymore; the
hassle of getting new refs from #freenet-refs every few days as the old
ones went offline was simply too much. I guess I'll check back every
couple of months to see if the sanity of openness has returned or if
Freenet is still in the dark.

> It wasn't always this way. What we did at the beginning of 0.7 was this:
> Limit the number of linear hops (HTL) to 10. Only route to a node closer
> to the target than we are (actually I think it was: only route to a node
> closer than the best-so-far). The problem with this approach is that we'd
> tend to dance around at the beginning, looking for a route, wasting all
> our HTL. So we changed it to the above algorithm, which does have the
> advantage that we won't run out of hops if we are actually getting closer
> to the target, and we'll have a few hops left over after we reach the
> best node, to look around in case there have been short-range network
> changes. The problem is, it gets stuck down rabbit holes, because there
> is essentially no backtracking (unless the network is really small).
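
The old rule is just a greedy filter over the peer list; roughly this (a
sketch with my own names, treating peers as bare keyspace locations):

    import java.util.List;

    class GreedyRouting {
        // Circular distance on the [0,1) keyspace.
        static double dist(double a, double b) {
            double d = Math.abs(a - b);
            return Math.min(d, 1.0 - d);
        }

        // Early-0.7 behaviour: consider only peers strictly closer to
        // the target than this node; null means nowhere left to route.
        static Double pickNext(double here, double target, List<Double> peers) {
            Double best = null;
            for (double p : peers) {
                if (dist(p, target) >= dist(here, target)) continue;
                if (best == null || dist(p, target) < dist(best, target))
                    best = p;
            }
            return best;
        }
    }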
> 
> Both approaches are wrong. But we can have the best of both worlds:
> - The first time on a new node, we always route to the closest node to
>   the target, even if it is further away from the target than the
>   best-so-far and than this node is. Because the following hop will
>   often be better.
> - When we go say 4 hops without advancing (without the best-so-far
>   improving), backtrack to the previous node with something like a DNF.
>   This may include the new best-so-far location, if any.
> - After a DNF, we will try other nodes, but only if they are closer to
>   the target than the node which is rerouting is (or the previous
>   best-so-far, if this node was the best-so-far when first entered).
> - A normal completion will not cause very much backtracking, because we
>   keep the new best-so-far, and many nodes we would visit we have
>   already visited (because of the triangles property, and this does
>   seem to happen in practice). Having said that, there will be some
>   forking on the way back.
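
If I read that proposal right, the per-node decision comes out something
like this (again my names; the stall limit of 4 comes straight from the
list above):

    import java.util.List;

    class HybridRouting {
        static final int STALL_LIMIT = 4; // hops allowed without progress

        // Circular distance on the [0,1) keyspace.
        static double dist(double a, double b) {
            double d = Math.abs(a - b);
            return Math.min(d, 1.0 - d);
        }

        // One routing decision at a node. Returns the peer to route to,
        // or null to signal "backtrack to the previous node with a DNF".
        static Double route(double here, double target, int hopsSinceProgress,
                            boolean reroutingAfterDNF, List<Double> peers) {
            if (hopsSinceProgress >= STALL_LIMIT)
                return null; // stuck in a dungeon: give up and backtrack
            Double closest = null;
            for (double p : peers) {
                // After a DNF, only fork to peers strictly closer to the
                // target than this node; on first entry, consider all.
                if (reroutingAfterDNF && dist(p, target) >= dist(here, target))
                    continue;
                if (closest == null || dist(p, target) < dist(closest, target))
                    closest = p;
            }
            return closest; // first entry: the closest peer, full stop
        }
    }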
> 
> "Fork if closer than this node" is better than "fork if closer than the
> best so far" in simulations on a related project, although there are some
> differences.
> 
> Note that dungeons are IMHO inevitable on a true darknet. Also, this
> should not generally increase the linear path length along which data is
> returned. We can probably afford the extra hops since we're not sending
> data down them. One worry is that inserts would send data to all the
> nodes; we should probably limit this. We might also want to have a total
> nodes visited count, with a 
> maximum.
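
That last safeguard is trivial to bolt on; something like this (the cap
of 30 is an arbitrary number of mine, not from the post):

    class VisitBudget {
        // Arbitrary cap; a real value would come from simulation.
        static final int MAX_VISITED = 30;

        // With forking and backtracking a single request can touch many
        // nodes; a hard cap on the total visited keeps that bounded.
        static boolean mayVisitAnother(int nodesVisited) {
            return nodesVisited < MAX_VISITED;
        }
    }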