Quoting Michael Rogers <m.rogers at cs.ucl.ac.uk>:

> Robert Hailey wrote:
>> Prioritizing accepted messages over bulk data may help this case, but
>> the more general solution would be to prioritize data less than control
>> packets, no? Find the data fast (Request/DF/DNF/RNF/...), transfer it
>> slow (packetTransmit/???).
>
> We'd have to be careful not to let the low-priority traffic starve,
> otherwise we'd keep accepting new requests while existing transfers
> repeatedly got pushed to the back of the queue and eventually timed out.
>
> Cheers,
> Michael
I disagree. I think it should be a strict priority: if there are any queued
control packets, they are sent first. Both the number of pending requests
(represented by thread count, if nothing else) and the length of the data
send queue are parameters for rejection, so rejecting new requests would
keep the data-transfer traffic from being starved. If my understanding is
correct that these control packets are small (and therefore easily
coalescable), there would be no chance of starvation.

And it only makes sense... the data packets want high throughput and the
control packets want low latency. I imagine that once this is implemented,
data searches timing out will be a thing of the past.
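
For concreteness, here is a rough sketch of the kind of send queue I have in
mind. The class and method names are made up for illustration (this is not
the actual Freenet packet-sender code), but it shows strict priority for
control packets combined with rejecting new requests based on the data
backlog:

import java.util.ArrayDeque;
import java.util.Queue;

// Hypothetical sketch: control packets always go out before bulk data, and
// new requests are refused (rather than letting data starve) once the data
// backlog grows too long.
public class StrictPrioritySendQueue {
    private final Queue<byte[]> controlQueue = new ArrayDeque<byte[]>(); // Request/DF/DNF/RNF/...
    private final Queue<byte[]> dataQueue = new ArrayDeque<byte[]>();    // packetTransmit etc.
    private final int maxDataBacklog;

    public StrictPrioritySendQueue(int maxDataBacklog) {
        this.maxDataBacklog = maxDataBacklog;
    }

    /** Admission control: refuse new requests while the data backlog is long. */
    public synchronized boolean acceptNewRequest() {
        return dataQueue.size() < maxDataBacklog;
    }

    public synchronized void enqueueControl(byte[] packet) {
        controlQueue.add(packet);
    }

    public synchronized void enqueueData(byte[] packet) {
        dataQueue.add(packet);
    }

    /** Strict priority: any queued control packet is sent before any data packet. */
    public synchronized byte[] nextPacketToSend() {
        if (!controlQueue.isEmpty()) {
            return controlQueue.poll();
        }
        return dataQueue.poll(); // null if there is nothing to send
    }
}

The point is that strict priority cannot starve the data queue here, because
acceptNewRequest() throttles the source of new control traffic (and of new
transfers) whenever the data backlog gets long.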
--
Robert Hailey