One minor issue: if localDelay is much greater than minPacketDelay, the sender should wait out the difference itself before allocating a packet, rather than advancing the global send time and slowing down other sends.
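The fix suggested above can be sketched as a tiny helper. All names here are hypothetical (this is not Freenet code); it assumes a shared scheduler that charges every reservation max(localDelay, minPacketDelay), as described in the quoted post below:

```java
// Sketch of the suggestion above. If a transfer's AIMD delay is much
// larger than the global minimum, the sender waits out the difference
// on its own before touching the shared scheduler, so the global send
// clock only advances by minPacketDelay and other transfers are not
// pushed back. All names are hypothetical.
public final class ThrottleMath {
    /**
     * How long this sender should wait locally, in milliseconds,
     * before reserving a packet slot from the shared scheduler.
     */
    public static long localWaitBeforeReserve(long localDelay,
                                              long minPacketDelay) {
        return Math.max(0, localDelay - minPacketDelay);
    }
}
```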
On Wed, Nov 30, 2005 at 08:07:06PM +0000, Matthew Toseland wrote:
> We only throttle data packets.
>
> Ian has suggested that we can avoid a global data packet queue by the
> following:
>
> When we want to send a data packet, we already have a delay; let's
> call it localDelay. This depends on the AIMD throttle for that
> particular transfer.
>
> Under no circumstances do we want to send a packet more frequently
> than once every minPacketDelay milliseconds. This is specified by the
> bandwidth limiter.
>
> We keep a global variable, lastPacketSendTime. This is the time at
> which the next packet will be sent.
>
> So: the earliest time at which we can send another packet is
> lastPacketSendTime + Math.max(localDelay, minPacketDelay).
> This then becomes the new lastPacketSendTime.
>
> However, this may be in the past. If we haven't sent any packets for
> a while, lastPacketSendTime could be a long time ago, and when we did
> send a packet we'd end up sending a whole bunch of them in rapid
> succession, without any throttling, until lastPacketSendTime caught
> up with the present.
>
> Now, such a burst is clearly not acceptable for the "hard" general
> bandwidth limit. The solution is simply to make it:
> max(lastPacketSendTime + max(localDelay, minPacketDelay), currentTime())
>
> This would allow us to send a packet immediately if we haven't sent
> one for ages. Oh, and this update must happen synchronized.
>
> However, bursting *could* be beneficial for other applications: soft
> bandwidth limiting for people on X GB/month transfer limits, for
> example.
>
> So, we could have two lastPacketSendTime variables and two
> minPacketDelay values.
>
> The first is the hard limit. We must update it every time we send a
> packet (only if it is in the past, since packet sends will sometimes
> be delayed), and as above we don't allow it to fall into the past
> when updating it.
>
> The second is the soft limit. It will have a larger minPacketDelay,
> because it enforces a lower rate.
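The hard-limit update described in the quote above can be sketched as follows. The class and method names are mine, not Freenet's, and the current time is passed in as a parameter (rather than read from System.currentTimeMillis() inside the method) purely to keep the sketch deterministic and testable:

```java
// Sketch of the hard per-packet scheduler described above. Names are
// illustrative; this is not Freenet's actual code.
public class PacketScheduler {
    private final long minPacketDelay; // ms; from the hard bandwidth limiter
    private long lastPacketSendTime;   // ms; time the next packet may be sent

    public PacketScheduler(long minPacketDelay, long startTime) {
        this.minPacketDelay = minPacketDelay;
        this.lastPacketSendTime = startTime;
    }

    /**
     * Reserve a send slot for one packet and return the absolute time
     * (ms) at which the caller may send it. localDelay comes from the
     * AIMD throttle of this particular transfer. Synchronized, as the
     * post requires: the read-modify-write of lastPacketSendTime must
     * be atomic.
     */
    public synchronized long reserveSendSlot(long localDelay, long now) {
        long delay = Math.max(localDelay, minPacketDelay);
        // Clamp to the present: after an idle period we may send
        // immediately, but we never release an unthrottled burst.
        lastPacketSendTime = Math.max(lastPacketSendTime + delay, now);
        return lastPacketSendTime;
    }
}
```

Because the new value is clamped to `now`, a long idle gap yields exactly one immediate send slot, after which the normal spacing resumes.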
> However, we don't need to be as strict with timekeeping here. It is
> perfectly acceptable to have a burst of packets if we haven't sent
> any for ages. The two parameters are:
> - What the limit is
> - How big the burst can be
>
> If we want, for example, never to send more than 500,000 packets in a
> given 24 hour period, how can we achieve this?
>
> If the last packet was sent 24 hours ago, we could send half a
> million packets immediately. However, we would then be unable to send
> any more for another 24 hours. Let us suppose instead that we make
> lastPacketSendTime never be more than 1 hour behind the present.
> Suppose we send some packets, then there is a gap in transmission of
> 1 hour. We can now send a burst of up to 1 hour's worth of packets,
> i.e. around 20,833 packets. In the hour leading up to the end of the
> burst, we have exactly met our quota. In the hour after the start of
> the burst, we can send packets at the nominal rate. The result is two
> hours' worth of packets sent in the hour starting at the start of the
> burst. However, this is balanced by the idle hour preceding it; over
> the two hours around the burst, we meet our target exactly.
>
> So to get accurate, averaged bandwidth limiting, all we have to do is
> set the latency limit to half of the period over which we want it to
> average out. If we have a limit of 5GB per 28 days, which is around
> 3.8 million packets (assuming lots of overhead), or roughly 1.6
> packets per second, we set minPacketDelaySoft to about 0.64 seconds
> (the reciprocal of that rate) and the maximum latency to 14 days.
>
> Practical issues? lastPacketSendTimeSoft would have to be saved to
> disk over such a long period. If we save it every 60 seconds, then on
> restart we can calculate the maximum number of packets that could
> have been sent since the last save, assume that they all were and
> that the node was shut down at the instant it would have written the
> next update, and update lastPacketSendTimeSoft accordingly. That is,
> assuming the maximum latency is sufficiently large.
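The soft limiter described above differs from the hard one in only the clamp: lastPacketSendTimeSoft may lag the present, but never by more than maxLatency, which bounds the burst to maxLatency's worth of packets. A minimal sketch, with illustrative names and the current time again passed in explicitly for testability:

```java
// Sketch of the soft ("averaged") limiter described above. The send
// clock may sit up to maxLatency behind the present, which is what
// permits bursts after idle periods while keeping the long-run
// average at one packet per minPacketDelaySoft. Names are
// illustrative; this is not Freenet's actual code.
public class SoftPacketLimiter {
    private final long minPacketDelaySoft; // ms between packets, on average
    private final long maxLatency;         // ms; half the averaging period
    private long lastPacketSendTimeSoft;

    public SoftPacketLimiter(long minPacketDelaySoft, long maxLatency,
                             long startTime) {
        this.minPacketDelaySoft = minPacketDelaySoft;
        this.maxLatency = maxLatency;
        this.lastPacketSendTimeSoft = startTime;
    }

    /**
     * Reserve a soft send slot and return the time (ms) it is
     * scheduled for. A returned value <= now means the packet may go
     * immediately; otherwise the caller waits until that time.
     */
    public synchronized long reserveSendSlot(long now) {
        // Unlike the hard limit, we allow the clock to lag the
        // present - but never by more than maxLatency.
        lastPacketSendTimeSoft = Math.max(
                lastPacketSendTimeSoft + minPacketDelaySoft,
                now - maxLatency);
        return lastPacketSendTimeSoft;
    }
}
```

After a long idle period the clamp leaves the clock exactly maxLatency behind the present, so roughly maxLatency / minPacketDelaySoft packets can be reserved before the returned times move ahead of `now`, matching the one-hour-burst example in the quote.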
-- 
Matthew J Toseland - toad at amphibian.dyndns.org
Freenet Project Official Codemonkey - http://freenetproject.org/
ICTHUS - Nothing is impossible. Our Boss says so.
_______________________________________________
Tech mailing list
Tech at freenetproject.org
http://emu.freenetproject.org/cgi-bin/mailman/listinfo/tech
