Roger Hayter wrote:

In message <[EMAIL PROTECTED]>, Martin Stone Davis <[EMAIL PROTECTED]> writes

Roger Hayter wrote:

As I understand it, the present routing method tries to pick the node to contact that will give it the quickest overall response to its request.


Almost. It chooses the node whose estimate() is lowest. As I understand it (please someone correct me if I'm wrong), estimate() is *roughly* defined as:

estimate = pSuccess*tSuccess + pFailure*(tFailure + tGlobalSuccess)

where all of the following are functions of the key's location in the keyspace:

pSuccess = chance of the node returning the data
tSuccess = time it will take the node to return the data if successful
pFailure = 1 - pSuccess
tFailure = time it will take the node to fail
tGlobalSuccess = time it would take to retrieve the data successfully if we went to another node. I believe this is estimated by averaging all the past tSuccesses *starting from our node*. So if, in the past, we had to try 10 different nodes to get a success for a key in this area, all the time we spent trying those 10 nodes is added into tGlobalSuccess.
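
To make that concrete, here is a minimal Java sketch of the estimate as I've described it (hypothetical names and a made-up example; the real NGRouting code will differ):

    public final class NodeEstimate {
        // Expected overall time to get the data if we route to this
        // node first, per the formula above.
        static double estimate(double pSuccess, double tSuccess,
                               double tFailure, double tGlobalSuccess) {
            double pFailure = 1.0 - pSuccess;
            return pSuccess * tSuccess + pFailure * (tFailure + tGlobalSuccess);
        }

        public static void main(String[] args) {
            // Example: 50% success odds, 2s on success, 1s to fail,
            // 12s to fetch globally after a failure.
            System.out.println(estimate(0.5, 2.0, 1.0, 12.0)); // 0.5*2 + 0.5*13 = 7.5
        }
    }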


This is (I will take as given) the individual node's best strategy. However, inherent in this strategy is a high likelihood of any given request being QRed (because a quick QR is better than a very slow response from a node which might ultimately be somewhat more likely to reach the needed key than the first-choice node, but which is known to be less likely to QR quickly).

The speed of the QR affects tFailure in the formula above. In your example, let's also assume that the node which returns a quick QR (let's call it the "hare") will also be less likely to return a success, but will return the data more quickly if it does succeed; the other node (let's call it the "tortoise") will be more likely to succeed but slower to respond either successfully or unsuccessfully.


The best option will not always be the hare. The higher tGlobalSuccess is, the better the tortoise becomes; the lower it is, the better the hare becomes.
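
To put made-up numbers on it: suppose the hare has pSuccess=0.2, tSuccess=1s, tFailure=0.5s, and the tortoise has pSuccess=0.6, tSuccess=5s, tFailure=5s (illustrative figures only, not measurements):

    public final class HareVsTortoise {
        static double estimate(double pSuccess, double tSuccess,
                               double tFailure, double tGlobal) {
            return pSuccess * tSuccess + (1.0 - pSuccess) * (tFailure + tGlobal);
        }

        public static void main(String[] args) {
            for (double tGlobal : new double[] {5, 10, 20, 40}) {
                double hare     = estimate(0.2, 1.0, 0.5, tGlobal); // fast QR, low pSuccess
                double tortoise = estimate(0.6, 5.0, 5.0, tGlobal); // slow, high pSuccess
                System.out.printf("tGlobal=%3.0fs  hare=%5.1fs  tortoise=%5.1fs%n",
                                  tGlobal, hare, tortoise);
            }
        }
    }

With these numbers the hare wins until tGlobalSuccess passes about 11 seconds; beyond that the tortoise is the better bet.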

The system should balance itself out. Suppose we started off trying only hares. We would eventually see tGlobalSuccess grow large because of all the failed initial attempts, many of which ended with us finally succeeding at a tortoise. Since tGlobalSuccess would then be larger, we would tend to start favoring tortoises more. The reverse is also true, so we should reach a balance.
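
Roughly how I imagine that feedback working (a hypothetical sketch; I don't know the exact bookkeeping NGR uses, so take the decaying average as an assumption):

    public final class GlobalSuccessTracker {
        private double tGlobalSuccess = 10.0;     // seconds; arbitrary starting guess
        private static final double ALPHA = 0.05; // weight of each new sample

        // totalSeconds = time from the first attempt to the final
        // success, including every failed hare along the way.
        public void reportSuccess(double totalSeconds) {
            tGlobalSuccess = (1.0 - ALPHA) * tGlobalSuccess + ALPHA * totalSeconds;
        }

        public double get() { return tGlobalSuccess; }
    }

Lots of failed hare attempts inflate the recorded whole-request times, which raises tGlobalSuccess, which in turn penalizes every unreliable node's pFailure term and makes the tortoises look relatively better.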

But a large number of QRs on the network adds to network load and, arguably, may greatly reduce the chances of a fast node ever specialising, as the requests coming to it are likely to be from all areas, "hoping" for a quick QR. Perhaps an altruistic node would route to avoid QRs for the benefit of the network, even if that meant it was likely to take much longer to get the data it wants, by trying to predict where the data would be found without QRs.

Or maybe it doesn't need to be altruistic. If the hares get overloaded, their pSuccess will get even worse, tGlobalSuccess will then start to rise, and we will start favoring tortoises.


The paradox (of course) would be apparent if a network of altruistic nodes in fact achieved better routing and much less pointless traffic (I know they are not *necessarily* the same thing), and worked better.
Have I got a point here, or am I just arguing for bringing the original routing algorithm back? Is there a possible variant of NGRouting that looks for the quickest route but with time-to-QR weighted at x1000 seconds or some such?

See if the above answers your concerns.


-Martin

Potentially, yes it does: but I gathered that some people were concerned that the noise from QRs, and the requests eliciting them, was so great as to drown out any stimulus to specialisation. Presumably, if the specialisation stimulus is smaller than the random variation in effectiveness resulting from, say, caching a particular file, then routing will never have any net effect on a node's specialisation: the granularity of a node's randomly varying "specialisation" will be greater than the size of any net stimulus to alter it.

Specialization:
---------------
I guess the idea is that if there are 10 areas of specialization, and we know of 5 nodes that specialize in the first area, but all 5 regularly QR, then we begin diluting the specialization of the other nodes, which aren't specialized in the first area. Did I state the problem correctly?


If so, I would say the problem here is that all 5 nodes specialized near this key are hares. If the hares become burdened with QRs, shouldn't NGR eventually lead us to favor a sixth node (a tortoise), which would then also become specialized?

Altruism:
---------
I think the problem with any altruistic strategy is that it lets non-altruistic nodes get the better of us, and hordes of people would choose a non-altruistic hack of Freenet. Also, with your altruistic strategy, couldn't it be argued that we would be unnecessarily slowing down the tortoises even more? NGR takes care of load balancing naturally.


-Martin


