Matthew Toseland wrote:
> When a request is *completed*. Otherwise we will be creating too many
> tokens when a request is forwarded more than once!
OK, is it safe to generate a token when the request has been accepted by
the next hop?

> As far as congestion and CPU load go, there is only a
> problem when we actually lose a request, correct? That would be a
> timeout. Which should still generate a token; we know what requests are
> in flight, and if one is lost it times out, and we still have a
> completion so we still make a token, allowing another request to be sent
> - but only after the timeout has expired, so usually a long time.

This sounds right... I may have been conflating the number of tokens
handed out with the rate at which they're handed out.

> Hmmm. Okay, what exactly are we talking about with rejected requests? I
> had assumed that if a request was rejected, it would just remain on the
> previous queue; if it keeps being rejected eventually it will timeout.
> We don't have to keep things as they are...

Ah, I was assuming rejected requests would die. But it's probably better
if they remain on the previous queue; it uses less bandwidth if we assume
the source will just retransmit the request anyway.

> Why does it not control load? If it takes ages for requests to complete,
> then we are compelled to wait ages between sending requests. This does
> indeed propagate back across the network, because of the policy on
> deserving nodes. Doesn't it?

You're right. In that case, forget about my proposal; I'll just simulate
queues vs. no queues, with a token handed out every time a request is
accepted by the next hop, answered locally, or timed out.

Thanks for clearing that up. :-)

Cheers,
Michael
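P.S. For concreteness, here's a rough sketch of the token rule I'd be
simulating: a request spends a token to be sent, and the token comes back
when the request is accepted by the next hop, answered locally, or times
out. Class and method names are made up for illustration; this is not
actual Freenet code.

    import java.util.concurrent.Semaphore;

    class TokenLimiter {
        // Initial allowance of requests we may have in flight.
        private final Semaphore tokens;

        TokenLimiter(int initialTokens) {
            tokens = new Semaphore(initialTokens);
        }

        // Block until a token is available, then spend it to send a request.
        void acquireToSend() throws InterruptedException {
            tokens.acquire();
        }

        // The next hop accepted the request: hand the token back.
        void onAcceptedByNextHop() { tokens.release(); }

        // We answered the request from our own store: hand the token back.
        void onAnsweredLocally() { tokens.release(); }

        // The request timed out (e.g. lost downstream): hand the token back,
        // but note this only happens after the full timeout has expired.
        void onTimeout() { tokens.release(); }
    }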
