On Tuesday 18 November 2003 06:41 am, Edward J. Huff wrote:
> Well, what I want to do is to allow the node to be in control of
> the backoff time, as follows:
>
> When a fluctuation of the success rate results in excess trailers,
> the node estimates how long they will take to transmit.  Then it
> informs all of its connected peers that it is going to ignore all
> queries _that it has not already responded to_.  And if there are
> no sequence numbers in messages, this application requires them
> because the other node needs to know exactly which queries this
> statement applies to.  (This solves the "in transit" problem).
>
> The message sent to the connected peers is a contract:  the
> node promises to ignore a specific set of messages, starting
> with a particular sequence number, and as yet open ended.
> Call this the overload message.  It amounts to a reply to
> all of the in-transit messages, and a promise to ignore any
> and all subsequent messages.
>
> When the node estimates that it can safely accept queries, it
> will send a second message to all connected peers, call it an
> overload cleared message.  This message contains a token which
> the peers must present in the next request.  Any requests still
> in the pipeline will be ignored until one with the token arrives.
>
> Can anyone explain how this protocol is vulnerable to attack?
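The handshake described in the quote could be sketched roughly as follows. This is a minimal illustration, not actual Freenet code; the class and method names, and the use of a random hex token, are my own assumptions.

```python
import secrets

class Peer:
    """Receiving side of the contract: decides which queries to ignore."""

    def __init__(self):
        self.overloaded = False
        self.ignore_from = None     # first sequence number covered by the contract
        self.expected_token = None  # token required once the overload is cleared

    def send_overload(self, next_seq):
        # The contract: every query with seq >= next_seq will be ignored,
        # open-ended, until an overload-cleared message is sent.
        self.overloaded = True
        self.ignore_from = next_seq
        return ("OVERLOAD", next_seq)

    def send_cleared(self):
        # Token the peer must present in its next request.  Queries still
        # in the pipeline were sent before the peer saw this message, so
        # they won't carry the token and are dropped.
        self.expected_token = secrets.token_hex(8)
        return ("CLEARED", self.expected_token)

    def handle_query(self, seq, token=None):
        if self.overloaded and seq >= self.ignore_from:
            if token is not None and token == self.expected_token:
                # First token-bearing request ends the contract.
                self.overloaded = False
                self.ignore_from = None
                return "ACCEPTED"
            return "IGNORED"
        return "ACCEPTED"
```

The sequence numbers do the work the quote describes: they tell the sender exactly which in-transit queries the overload message covers, and the token fences off pipelined requests sent before the cleared message arrived.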

No, but I think it would be a lot better if this were done non-alchemistically. 
(Sometimes it's worthwhile to try a node even if it is overloaded, sometimes it 
is better to wait until it's not loaded, and sometimes it's best to use another 
node. The only way to tell which situation applies is for the requesting node 
to know the relevant information.)

There are a lot of other ideas on how to do this, but if you or anyone else 
thinks their idea is better, please explain what advantages it would have over 
my proposal. If you don't like doing a split LIFO/FIFO, that's fine. But the 
point is that the requesting node should know the load on the other node, and 
it should know the percentage of the queries being processed that are its own. 
The probability of being rejected is the product of the two. Then the node 
knows exactly the probability of being rejected, AND the exact probability 
that the new request, or any of its other pending requests, will be passed 
over because its node is making too many. Then the node can CALCULATE the 
total time lost for all pending requests, including the one it is trying to 
send, if it sends the request to that node.

_______________________________________________
Devl mailing list
[EMAIL PROTECTED]
http://dodo.freenetproject.org/cgi-bin/mailman/listinfo/devl
