The 0.7 load balancing and limiting mechanisms will be based on the
following metaphor:

  Node to node / link layer = Ethernet
  Requests                  = TCP/IP

So we get load limiting from the latter, and load balancing from the
former.

Load limiting: TCP/IP employs a strategy called Additive Increase
Multiplicative Decrease (AIMD). Basically it measures the current round
trip time (the time a request or insert takes), and keeps a window size,
which is the nominal number of requests/inserts in flight at a given
time. The window is increased by a certain amount on a success (defined
as anything other than an overload message or an internal error), and
decreased by a certain fraction on an overload message; the two
parameters are related by a formula. We use this for data transmissions,
so that our traffic is "compatible" with TCP's congestion control, as
well as for requests.
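To be concrete, here is roughly the sort of AIMD bookkeeping I mean, in
Java. All the names, constants and the exact increase/decrease formulas
below are placeholders for illustration, not a design:

/**
 * Sketch only: a window of allowed in-flight requests/inserts, grown
 * additively on success and cut multiplicatively on an overload message.
 */
public class AIMDWindow {

    private double windowSize = 1.0; // nominal number of requests/inserts in flight
    private int inFlight = 0;        // currently outstanding

    private static final double ALPHA = 1.0; // additive increase per "round trip"
    private static final double BETA  = 0.5; // multiplicative decrease factor
    private static final double MIN_WINDOW = 1.0;

    /** May we start another request or insert right now? */
    public synchronized boolean canSend() {
        return inFlight < windowSize;
    }

    public synchronized void onStart() {
        inFlight++;
    }

    /** Success = anything other than an overload message or internal error. */
    public synchronized void onSuccess() {
        inFlight--;
        // +ALPHA/window per completion is roughly +ALPHA per window's worth
        // of requests, i.e. linear growth per round trip, as in TCP
        // congestion avoidance.
        windowSize += ALPHA / windowSize;
    }

    /** Overload rejection message received. */
    public synchronized void onOverload() {
        inFlight--;
        windowSize = Math.max(MIN_WINDOW, windowSize * BETA);
    }
}

The point is only the shape: small additive growth while things succeed,
a sharp multiplicative cut on overload.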
Load balancing: Ethernet employs a simple randomized exponential backoff
mechanism. This works well in practice - but only because most of the
traffic running on top of it is TCP and therefore does load limiting. We
will emulate this: when we get a locally generated overload rejection
message from a node, we back that node off for a while. If we get the
same problem the next time we send it a request, we back off for twice
as long, and so on - except that the delay is randomized.

Any comments? Input from anyone with experience of these mechanisms
would be useful: what formula does Ethernet use for randomizing? E.g. do
they track the "base" backoff and then multiply it by 1+random*k? Or do
they just track the backoff and multiply it by e.g. 1.5+random each
time?
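To make the randomization question concrete, here is one possible shape
for the per-node backoff, using the first of the two formulas above
(base * (1 + random*k)). Again, every name, constant and the
reset-on-success behaviour here is an assumption, not a spec:

import java.util.Random;

/** Sketch only: randomized exponential backoff, one instance per peer node. */
public class NodeBackoff {

    private static final long INITIAL_BACKOFF_MS = 1000;
    private static final long MAX_BACKOFF_MS = 60 * 60 * 1000; // arbitrary cap
    private static final double K = 0.5;                       // spread factor

    private final Random random = new Random();
    private long baseBackoffMs = INITIAL_BACKOFF_MS;
    private long backedOffUntil = 0; // wall-clock millis when we may retry

    /** Is this node currently backed off? */
    public synchronized boolean isBackedOff() {
        return System.currentTimeMillis() < backedOffUntil;
    }

    /** Called on a locally generated overload rejection from this node. */
    public synchronized void onOverloadRejection() {
        // Track the "base" backoff, doubling it on each consecutive rejection...
        baseBackoffMs = Math.min(MAX_BACKOFF_MS, baseBackoffMs * 2);
        // ...and randomize the actual delay: base * (1 + random * K).
        long delay = (long) (baseBackoffMs * (1.0 + random.nextDouble() * K));
        backedOffUntil = System.currentTimeMillis() + delay;
    }

    /** Called when a request to this node completes without overload. */
    public synchronized void onSuccess() {
        baseBackoffMs = INITIAL_BACKOFF_MS;
        backedOffUntil = 0;
    }
}

Whether the base resets on the first success or decays more gradually is
exactly the kind of detail I'd like input on.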
-- 
Matthew J Toseland - toad at amphibian.dyndns.org
Freenet Project Official Codemonkey - http://freenetproject.org/
ICTHUS - Nothing is impossible. Our Boss says so.