Ultra-Lightweight Passive Requests and per-node failure tables have been 
implemented, and are working well in the live-code simulations I have been 
running (many nodes running the real code in one JVM; see node.simulator). 
There is more work to be done, but we should be able to release a new 
build with both soon. There is no request quenching at the moment: if 
there are bazillions of requests for a specific key, they will be rerouted 
according to the failure tables, producing an exhaustive search of the 
network, and when the data is found, it will be rapidly propagated to all 
requestors/subscribers.

What ULPRs and per-node failure tables do, for background:

ULPRs:

When a request fails, the node remembers which nodes asked it for the 
data. When it eventually finds the data, it offers it to those nodes (and 
also to the nodes it routed the request to, since they may also have 
subscribers and may be closer to the target key). If the offered node 
wants the data itself, or if any other nodes have recently asked it for 
the data, it accepts the offer, downloads the data, and then offers it in 
turn to any nodes that have recently asked for it. The downloading process 
is integrated with the client queue: if we want the data ourselves, we 
fetch it at the priority of our own request, and if a remote node wants 
it, we fetch it at IMMEDIATE_SPLITFILE_PRIORITY_CLASS (there are two 
higher priorities).
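
For illustration, a minimal sketch of this bookkeeping; the names 
(UlprTable, requestFailed, peersToOffer, String peer identifiers) are 
hypothetical, not the actual Freenet classes:

    import java.util.*;

    class UlprTable {
        static class Entry {
            final Set<String> subscribers = new HashSet<>(); // peers that asked us
            final Set<String> routedTo = new HashSet<>();    // peers we asked
        }

        // key -> peers interested in it while the request was failing
        private final Map<String, Entry> byKey = new HashMap<>();

        // Called when a request for key fails: remember who wanted it and
        // who we asked, so we can offer the data to both when it turns up.
        synchronized void requestFailed(String key, String requestor,
                                        Collection<String> routedTo) {
            Entry e = byKey.computeIfAbsent(key, k -> new Entry());
            if (requestor != null)
                e.subscribers.add(requestor);
            e.routedTo.addAll(routedTo);
        }

        // Called when the data for key is finally found: nodes we routed
        // to are offered too, since they may have subscribers of their
        // own and may be closer to the target key.
        synchronized List<String> peersToOffer(String key) {
            Entry e = byKey.remove(key);
            if (e == null)
                return Collections.emptyList();
            List<String> offer = new ArrayList<>(e.subscribers);
            offer.addAll(e.routedTo);
            return offer;
        }
    }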

In my simulations, I set up a network of 10 nodes, make a key, ask each 
node for the key, then insert it on one node. On average, all nodes have 
the key within 3 seconds.

Per-node failure tables:

When a request fails, we record which node we routed it to. If we get 
another request for the same key within a timeout period, we avoid that 
node, unless we have no other option. If there are several recently-failed 
nodes, we alternate between them, i.e. we route to the node whose failure 
is oldest.
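
A minimal sketch of the table itself, again with hypothetical names 
(FailureTable, recordFailure, lastFailure); the per-entry timeout is the 
subject of "Timeout periods" below:

    import java.util.*;

    class FailureTable {
        private static class Failure {
            long failedAt;   // when the peer failed the request
            long expiresAt;  // when we forget about it
        }

        // key -> (peer -> most recent failure for that key)
        private final Map<String, Map<String, Failure>> table = new HashMap<>();

        synchronized void recordFailure(String key, String peer,
                                        long timeoutMillis) {
            Failure f = new Failure();
            f.failedAt = System.currentTimeMillis();
            f.expiresAt = f.failedAt + timeoutMillis;
            table.computeIfAbsent(key, k -> new HashMap<>()).put(peer, f);
        }

        // Time of the last unexpired failure of key by peer, or -1 if
        // none: peers returning -1 are preferred, and among the rest we
        // route to the smallest (oldest) value first.
        synchronized long lastFailure(String key, String peer) {
            Map<String, Failure> peers = table.get(key);
            if (peers == null)
                return -1;
            Failure f = peers.get(peer);
            if (f == null || f.expiresAt <= System.currentTimeMillis())
                return -1;
            return f.failedAt;
        }
    }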

Details: Routing:
Our routing order is now (a comparator sketch follows the list):
- Non-backed-off nodes which we haven't had failures from, nearest location to 
the target first.
- Non-backed-off nodes which we have had failures from, oldest failure first.
- Backed-off nodes which we haven't had failures from, nearest location to the 
target first.
- Backed-off nodes which we have had failures from, oldest failure first.
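
The four tiers collapse into a single comparator. A sketch, assuming a 
hypothetical Peer type carrying the backed-off flag, the keyspace distance 
to the target, and the lastFailure value from the failure table above 
(-1 meaning no recent failure):

    import java.util.Comparator;

    class RoutingOrder {
        static class Peer {
            boolean backedOff;
            double distanceToTarget; // circular keyspace distance to target
            long lastFailure;        // -1 if no recent failure for this key
        }

        // Sort the peer list with this and route to the first element.
        static final Comparator<Peer> ORDER = Comparator
            // non-backed-off before backed-off (false sorts first)
            .comparing((Peer p) -> p.backedOff)
            // within that, peers with no recent failure first
            .thenComparing(p -> p.lastFailure >= 0)
            // no-failure peers: nearest location to the target first;
            // failed peers: oldest failure first (safe to mix the two
            // scales, since the previous key already separated the groups)
            .thenComparingDouble(p -> p.lastFailure >= 0
                    ? (double) p.lastFailure
                    : p.distanceToTarget);
    }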

Timeout periods (a sketch follows the list):
- On a fatal error (DNF, transfer failure, verify failure): 10 minutes.
- On a nonfatal error: The time taken to send the request to the node and get 
the error back.
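
As a sketch (hypothetical names), the choice of timeout is simply:

    class FailureTimeouts {
        static final long FATAL_TIMEOUT = 10 * 60 * 1000L; // 10 minutes

        // Fatal errors (DNF, transfer failure, verify failure) are
        // remembered for 10 minutes; nonfatal errors only for as long as
        // the failed request took (send it, get the error back).
        static long timeoutFor(boolean fatal, long sentAt, long errorAt) {
            return fatal ? FATAL_TIMEOUT : errorAt - sentAt;
        }
    }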

ADVANTAGES
========

* Much greater resistance to selective key censorship attacks by cancer nodes, 
at least on popular keys or with requestors willing to retry. (Per-node 
failure tables)
* Much more thorough search of the network for frequently requested data (at 
negligible performance cost per request). (Both)
* Much faster propagation of frequently requested data once it is found. 
(ULPRs)
* Hopefully much more efficient polling, enabling, for example, FMS to 
work well. (ULPRs)

FUTURE
=====

First off, we need a way to conveniently access this from the client layer: 
see my other mail.

Secondly, it may be useful to enable the other part of the original ULPRs 
design: the RecentlyFailed message. This would mean that when a node 
requests a key from us which we have recently requested ourselves 
(probably on behalf of a different node), we would fail the request with 
a RecentlyFailed. Since we are a "subscriber" for the key, and since the 
requesting node is now one too, this should cut the load on the network 
from polling without (significantly) reducing performance. We would 
probably allow a few requests through before closing the route.
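
A minimal sketch of how that gate might look; the class, method names and 
let-a-few-through threshold are all hypothetical, and expiry of entries 
(presumably tied to the failure-table timeout) is omitted for brevity:

    import java.util.*;

    class RecentlyFailedGate {
        private static final int LET_THROUGH = 2; // illustrative threshold

        // key -> requests routed onwards since our own request failed
        private final Map<String, Integer> sinceFailure = new HashMap<>();

        // Called when our own request for key fails: we are now a
        // subscriber via the ULPR, so later requests can be gated.
        synchronized void ourRequestFailed(String key) {
            sinceFailure.put(key, 0);
        }

        // Should an incoming request for key be answered with
        // RecentlyFailed instead of being routed onwards?
        synchronized boolean rejectWithRecentlyFailed(String key) {
            Integer n = sinceFailure.get(key);
            if (n == null)
                return false;                // no recent failure of ours
            if (n < LET_THROUGH) {           // allow a few through first
                sinceFailure.put(key, n + 1);
                return false;
            }
            return true; // requestor will get the data via our ULPR offer
        }
    }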