On 22/01/2011, at 7:34 AM, Dennis Nezic wrote:

> On Sat, 22 Jan 2011 07:26:56 +1300, Phillip Hutchings wrote:
>> 
>> On 22/01/2011, at 7:21 AM, Dennis Nezic wrote:
>> 
>>> On Fri, 21 Jan 2011 05:59:06 +0100, David ‘Bombe’ Roden wrote:
>>>> a “simple thing” like bandwidth limiting
>>> 
>>> Can someone explain why bandwidth limiting might not be such a
>>> simple thing? Volodya tried, with his massive-incoming-packet theory
>>> (40KiB :p), but that's not true -- Freenet packets are about 1KiB.
>>> So, is there not a central class/wrapper in place that feeds the
>>> node at most X KiB/second? I.e. one that will only read X UDP
>>> packets per second?
>> 
>> It doesn't matter if you only read X packets a second; they've still
>> been sent to you, so the bandwidth has already been used. If you don't
>> read the UDP packets, all that happens is that your OS queues them for
>> a while, then starts dropping them.
> 
> Exactly. Why isn't this being done?

Why isn't what being done? There's absolutely no point in letting the OS drop the 
packets. By the time they could be dropped they have already been transmitted and 
are sitting in the receiver's memory, so the bandwidth has already been spent. 
Dropping them just wastes time and resources; to actually limit bandwidth you have 
to stop the packets before they're transmitted, i.e. rate-limit on the sending side.
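
For what it's worth, the usual way to do that is a token bucket applied to
outgoing packets. The sketch below is purely illustrative -- it is not
Freenet's actual packet-sending code, and the class name, parameters and
numbers are made up -- but it shows the idea: a send is only handed to the
socket once enough "byte tokens" have accumulated, so the limit is enforced
before anything reaches the wire.

    import java.net.DatagramPacket;
    import java.net.DatagramSocket;
    import java.net.InetAddress;

    // Illustrative sender-side token-bucket limiter (not Freenet code).
    public class TokenBucketUdpSender {
        private final DatagramSocket socket;
        private final long bytesPerSecond;   // configured output limit
        private final long burstBytes;       // maximum bucket size
        private double tokens;               // bytes we may send right now
        private long lastRefillNanos;

        public TokenBucketUdpSender(DatagramSocket socket,
                                    long bytesPerSecond, long burstBytes) {
            this.socket = socket;
            this.bytesPerSecond = bytesPerSecond;
            this.burstBytes = burstBytes;
            this.tokens = burstBytes;
            this.lastRefillNanos = System.nanoTime();
        }

        // Top the bucket up according to how much time has passed.
        private void refill() {
            long now = System.nanoTime();
            double elapsedSeconds = (now - lastRefillNanos) / 1e9;
            tokens = Math.min(burstBytes, tokens + elapsedSeconds * bytesPerSecond);
            lastRefillNanos = now;
        }

        // Blocks until the packet fits under the limit, then sends it.
        public synchronized void send(byte[] payload, InetAddress addr, int port)
                throws Exception {
            while (true) {
                refill();
                if (tokens >= payload.length) {
                    tokens -= payload.length;
                    socket.send(new DatagramPacket(payload, payload.length, addr, port));
                    return;
                }
                // Not enough tokens yet: sleep roughly until enough accumulate.
                double deficit = payload.length - tokens;
                long sleepMillis = (long) Math.ceil(deficit * 1000.0 / bytesPerSecond);
                Thread.sleep(Math.max(1, sleepMillis));
            }
        }
    }

With something like that on every sending node, the receiving side never has
to drop anything for rate-limiting purposes, because each peer is already
keeping itself under the limit before transmitting.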