A number of people have reported low bandwidth usage recently, possibly 
starting in 1141 or a few builds before that, although I've heard a number 
of low payload reports from much further back. It is correlated with low 
payload usage (but we know this; it is expected given our high packet 
overhead), and it generally results from output bandwidth liability.

I have some theories:

1. It's all the SSKs.

This does not fully explain it. I see a 1:3 or 1:4 ratio of CHKs to SSKs, 
but an SSK uses roughly ten times less bandwidth than a CHK. So it 
shouldn't be a huge hit: the CHKs should still account for most of the 
bandwidth. We can reduce the volume of SSK polling by implementing 
RecentlyFailed.
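To see why the SSKs can't fully explain it, here's the back-of-envelope arithmetic: with 3-4 SSKs per CHK and each SSK costing a tenth of a CHK's bandwidth, SSKs only account for roughly a quarter of the total. (The class and method names below are just for illustration; the ratio and cost factor come from the observations above.)

```java
public class SskShare {
    /**
     * Fraction of total bandwidth consumed by SSKs, given ssksPerChk SSK
     * requests per CHK request, where each SSK costs sskCostFactor times
     * the bandwidth of one CHK.
     */
    static double sskBandwidthShare(double ssksPerChk, double sskCostFactor) {
        double chkCost = 1.0; // one CHK as the unit of bandwidth
        double sskCost = ssksPerChk * sskCostFactor;
        return sskCost / (chkCost + sskCost);
    }

    public static void main(String[] args) {
        // 1:3 and 1:4 CHK:SSK ratios, SSKs using 10x less bandwidth each.
        System.out.printf("1:3 ratio -> SSKs use %.0f%% of bandwidth%n",
                100 * sskBandwidthShare(3, 0.1)); // ~23%
        System.out.printf("1:4 ratio -> SSKs use %.0f%% of bandwidth%n",
                100 * sskBandwidthShare(4, 0.1)); // ~29%
    }
}
```

So even at the worse 1:4 ratio, CHKs should still be using over 70% of the bandwidth.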

2. Local requests.

When I run a node with a 12K/sec limit, if I have queued requests I get maybe 
5-8K/sec average output. If I have no queued requests I get 13K/sec average 
output, fairly consistently. The same pattern appears to happen with higher 
bandwidth limits.

Local requests obviously use far less *output* bandwidth than remote 
requests. However, output bandwidth liability treats them the same, for 
privacy reasons. Maybe we should reexamine this. Obviously, if a lot of 
input bandwidth is being used, a node is probably downloading stuff; if a 
lot of output bandwidth is being used, it is probably uploading stuff. We 
have an opportunity to send a local request every X millis, where X is 
determined by our AIMDs; whether we actually do so is decided by 
shouldRejectRequest(), which is generally dominated by output liability 
limiting. On my node X is around 800ms for requests. But each request can 
go to any of our peers, so it's a noisy signal.

In the short term, the fix appears to be to take into account whether a 
request is local or remote when deciding on output bandwidth liability.
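A minimal sketch of what that fix might look like: when the acceptance check computes output bandwidth liability, charge a local request a much smaller expected-output cost than a remote one, since a local request's reply is consumed locally rather than forwarded to a peer. All names and constants below are illustrative assumptions, not the actual Freenet shouldRejectRequest() code.

```java
public class OutputLiability {
    // Assumed per-request expected output bytes (illustrative values):
    // a remote request forwards the whole transfer downstream, a local
    // request mostly sends small upstream messages.
    static final int REMOTE_EXPECTED_OUTPUT = 32 * 1024;
    static final int LOCAL_EXPECTED_OUTPUT = 2 * 1024;

    private long committedOutputBytes = 0;
    private final long outputLimitBytes;

    OutputLiability(long outputLimitBytes) {
        this.outputLimitBytes = outputLimitBytes;
    }

    /**
     * Returns true if accepting this request would push the committed
     * output liability past the limit; otherwise commits the request's
     * expected output cost and accepts it.
     */
    boolean shouldRejectRequest(boolean isLocal) {
        long cost = isLocal ? LOCAL_EXPECTED_OUTPUT : REMOTE_EXPECTED_OUTPUT;
        if (committedOutputBytes + cost > outputLimitBytes)
            return true;
        committedOutputBytes += cost;
        return false;
    }

    public static void main(String[] args) {
        OutputLiability liability = new OutputLiability(64 * 1024);
        // Two remote requests fill the 64K window; a third is rejected,
        // but local requests would still fit at their lower cost.
        System.out.println(liability.shouldRejectRequest(false)); // false
        System.out.println(liability.shouldRejectRequest(false)); // false
        System.out.println(liability.shouldRejectRequest(false)); // true
    }
}
```

The point is just that the liability window stops starving local requests: the same output limit admits many more of them, which matches the observation that queued local requests currently halve the achieved output.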