On Friday 30 November 2007 16:28, Michael Rogers wrote:
> On Nov 30 2007, Matthew Toseland wrote:
> > Increasing MAX_PING_TIME would have no effect, for example, because most 
> > nodes mostly reject on bandwidth liability.
> 
> MAX_PING_TIME was just an example - my point is that if we know most nodes 
> aren't using the available bandwidth, we should tweak the rejection 
> thresholds until most nodes hit their bandwidth limits. That doesn't 
> require any new algorithms, just tuning the constants of the existing ones.
> 
> > But the point I am making is 
> > *we don't even limit effectively on bandwidth liability* : busy-looping 
> > until a request gets through shouldRejectRequest() improves performance 
> > significantly, therefore backoff and AIMD is not supplying enough 
> > requests to the front end of the current load limiting system.
> 
> To play devil's advocate for a minute: maybe it only improves performance 
> because we're hammering our peers with so many requests that probabilistic 
> rejection is effectively circumvented (sooner or later the coin will come 
> up heads). This isn't necessarily a good strategy.

Not true either: most rejects are due to bandwidth liability (which isn't 
probabilistic). We could make bandwidth liability less aggressive, but IMHO 
it wouldn't help, because as I demonstrated above, we get more bandwidth from 
a constant stream of local requests than from an occasional trickle.
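
That said, the coin-flip arithmetic is worth spelling out, since it applies to 
whatever fraction of rejects *is* probabilistic. A hypothetical illustration 
(not Freenet code):

```java
// Hypothetical illustration (not actual Freenet code): with per-request
// acceptance probability p, the chance that at least one of k retries gets
// through is 1 - (1 - p)^k, so a busy-looping sender circumvents
// probabilistic rejection quickly even when p is small.
public class RetryOdds {
    static double acceptChance(double p, int k) {
        return 1.0 - Math.pow(1.0 - p, k);
    }

    public static void main(String[] args) {
        // Even at 10% acceptance, 50 retries succeed roughly 99.5% of the time.
        System.out.printf("p=0.10, k=50 -> %.3f%n", acceptChance(0.10, 50));
    }
}
```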
>
> I'm not opposed to disabling AIMD and replacing backoff with explicit 
> "start/stop" signals, I'm just not convinced it will fix anything either.
> 
> > Yes. Well really it's a form of token passing, but I'm trying to make it 
> > simple and obviously correct.
> 
> It's not really token passing - a peer that receives the "start" signal can 
> send unlimited requests until it receives the "stop" signal (pre-emptive 
> rejection). With token passing the peer knows how many requests it can 
> send, so there's no need for pre-emptive rejection.

No, the proposal was that the node sends a solicitation for a single request, 
or perhaps for a number of requests. It's essentially token passing.
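
A minimal sketch of what I mean, with hypothetical names (not actual Freenet 
code): each solicitation grants a fixed number of tokens, the peer spends one 
per request, and a peer with no tokens simply sends nothing, so pre-emptive 
rejection never comes up:

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of solicitation-as-token-passing: the downstream node
// grants n request tokens; the upstream peer spends one token per request
// and holds further requests once they run out.
public class TokenGrant {
    private final AtomicInteger tokens = new AtomicInteger(0);

    // Called when a solicitation ("send me up to n requests") arrives.
    void grant(int n) { tokens.addAndGet(n); }

    // Called before forwarding a request; false means hold the request.
    boolean trySpend() {
        int t;
        do {
            t = tokens.get();
            if (t == 0) return false;
        } while (!tokens.compareAndSet(t, t - 1));
        return true;
    }
}
```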
> 
> That's not to say that I think token passing is better than your proposal - 
> we never settled the question of how many tokens to hand out or how to 
> allocate them, for example. A simple solution is definitely preferable. 
> However, there's a reason most protocols don't use simple start/stop flow 
> control: it's hard to get good performance because the peer's response is 
> delayed by one RTT and you can't make smooth adjustments (it's all or 
> nothing).
> 
> To be honest I think we're just trying to compensate for a broken transport 
> layer. Look at the way HTTP handles flow control: it doesn't. Flow control 
> is left to the transport layer. Requests can be pipelined; if you're busy 
> processing the last request, don't read another one from the socket. To 
> handle timeouts, add a timestamp to the request and skip it if the 
> timestamp indicates that the previous hop will have timed out and moved on.

This is exactly what I am trying to achieve: A good transport layer. But for 
Freenet, the transport layer is *everything from the request source to the 
data source*.

What I propose above is the exact transposition of your "don't read any more 
packets from the socket" to something that can actually be implemented at the 
request layer. We don't accept (read) another request (packet) until we have 
space in our queue (buffer). It's exactly the same.
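
As a sketch of the analogy, with hypothetical names (not actual Freenet 
code): the bounded request queue plays the role of the socket buffer, and 
admission fails exactly when the buffer is full:

```java
import java.util.concurrent.ArrayBlockingQueue;

// Hypothetical sketch: the request queue is the socket buffer. offer()
// fails (reject the request) when the queue is full, and forwarding a
// request drains the queue, freeing a slot for the next one.
public class RequestBuffer {
    private final ArrayBlockingQueue<String> queue;

    RequestBuffer(int capacity) {
        queue = new ArrayBlockingQueue<>(capacity);
    }

    // "Read another packet" only if there is buffer space.
    boolean accept(String requestId) { return queue.offer(requestId); }

    // Forwarding the head of the queue frees a slot; null if empty.
    String forwardNext() { return queue.poll(); }
}
```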
> 
> > I mean that requests queued may not be successfully forwarded because 
> > they are too far away from any of our peers' locations, yet since they 
> > don't go away, our peers cannot send us any more requests which are 
> > closer to the target. I believe what I said about this is sufficient.
> 
> I must have missed something - does the twice-the-median limit only apply 
> to misrouted requests? If it applies to all requests, then either we can 
> send the head of the queue to *someone* or we can't send anything to 
> anyone. Either way there's no way for a "bad" request to block a "good" 
> request.

It would apply to all requests. If we accept a "bad" request, it occupies a 
slot that could be used by a "good" request, and it may occupy that slot for 
a very long time, because we won't accept any more potentially "good" 
requests in the meantime. Hence the need for a timeout leading to 
backtracking.
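
A hypothetical sketch of that timeout (not actual Freenet code): each queued 
request carries a deadline, and expired entries are dropped on the assumption 
that the previous hop has timed out and backtracked, so a "bad" request 
cannot hold its slot forever:

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Hypothetical sketch: queue entries carry a deadline. Expired entries are
// dropped before admission is checked, so a stuck "bad" request frees its
// slot once the previous hop would have timed out and moved on.
public class TimedSlotQueue {
    static final class Entry {
        final String id;
        final long deadlineMillis;
        Entry(String id, long deadlineMillis) {
            this.id = id;
            this.deadlineMillis = deadlineMillis;
        }
    }

    private final Deque<Entry> queue = new ArrayDeque<>();
    private final int capacity;

    TimedSlotQueue(int capacity) { this.capacity = capacity; }

    // Accept a request only if a slot is free after expiring stale entries.
    boolean accept(String id, long deadlineMillis, long now) {
        expire(now);
        if (queue.size() >= capacity) return false;
        return queue.add(new Entry(id, deadlineMillis));
    }

    // Drop entries whose previous hop has already timed out.
    private void expire(long now) {
        queue.removeIf(e -> e.deadlineMillis <= now);
    }
}
```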
> 
> Cheers,
> Michael