The 0.7 load balancing and limiting mechanisms will be based on the following metaphor:

  Node to node / link layer = Ethernet
  Requests = TCP/IP
So, we get load limiting from the latter, and load balancing from the former.

Load limiting: TCP/IP employs a strategy called Additive Increase / Multiplicative Decrease (AIMD). Basically it measures the current round trip time (the time a request or insert takes), and keeps a window size: the nominal number of requests/inserts in flight at any given time. The window is increased by a fixed amount on each success (defined as anything other than an overload message or an internal error), and cut by a fixed fraction on each overload message. The two parameters are related by a formula. We use this for data transmissions, so that our traffic is "compatible" with TCP's congestion control, as well as for requests.

Load balancing: Ethernet employs a simple randomized exponential backoff mechanism. This works well in practice, but only because most of the traffic running on top of it uses TCP and therefore does load limiting. We will therefore emulate it: when we get a locally generated overload rejection message, we back off the node for a while. If we get the same problem the next time we send a request, we back off for twice as long, and so on. Except that it's randomized.

Any comments? Anyone with experience of these mechanisms would be helpful. What formula does Ethernet use for randomizing? E.g. does it track the "base" backoff, and then multiply it by 1 + random*k? Or does it just track the backoff, and multiply it by e.g. 1.5 + random each time?
-- 
Matthew J Toseland - [EMAIL PROTECTED]
Freenet Project Official Codemonkey - http://freenetproject.org/
ICTHUS - Nothing is impossible. Our Boss says so.
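The AIMD window described above can be sketched as follows. This is a minimal illustration of the technique, not the Freenet implementation; the class name, the default increment of 1, and the default decrease factor of 0.5 (TCP's classic halving) are all assumptions for the example.

```python
class AIMDWindow:
    """Additive Increase / Multiplicative Decrease window: the nominal
    number of requests/inserts allowed in flight at a given time.
    Constants are illustrative, not taken from any real codebase."""

    def __init__(self, increment=1.0, decrease_factor=0.5, min_window=1.0):
        self.window = min_window
        self.increment = increment            # added on each success
        self.decrease_factor = decrease_factor  # applied on each overload
        self.min_window = min_window

    def on_success(self):
        # Success = anything other than an overload message or internal error.
        self.window += self.increment

    def on_overload(self):
        # Overload message: cut the window by a fixed fraction,
        # but never drop below the minimum.
        self.window = max(self.min_window, self.window * self.decrease_factor)
```

For example, ten successes from a window of 1 grow it linearly to 11; a single overload message then halves it to 5.5, which is what makes AIMD flows back off sharply under congestion while probing upward slowly.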
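The per-node backoff proposal can be sketched too. This follows the first randomization scheme raised in the question (track the base backoff, multiply by 1 + random*k); the base delay, jitter factor k, and cap are invented for the example, and the `rng` parameter exists only so the behaviour is testable.

```python
import random


def backoff_delay_ms(failures, base_ms=1000, k=0.5,
                     cap_ms=3_600_000, rng=random.random):
    """Delay before retrying a node after `failures` consecutive locally
    generated overload rejections. The deterministic base doubles on each
    consecutive failure; jitter multiplies it by (1 + random*k).
    All constants here are illustrative assumptions."""
    base = min(base_ms * (2 ** failures), cap_ms)
    return min(base * (1.0 + rng() * k), cap_ms)
```

So the first rejection waits roughly 1-1.5 seconds, the fourth roughly 8-12 seconds, and the delay is clamped at the cap. (For reference, classic Ethernet CSMA/CD uses truncated binary exponential backoff: after the nth collision it waits a uniformly random number of slot times in [0, 2^min(n,10)), which is closer to "randomize the whole delay" than "jitter a tracked base".)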
_______________________________________________
Devl mailing list
[email protected]
http://emu.freenetproject.org/cgi-bin/mailman/listinfo/devl
