On Wed, Apr 12, 2006 at 11:05:01PM +0000, Thomas Bruderer wrote:
> > > The rationale was to treat the network collectively as a single
> > > receiver. Do you see reasons why that approach won't work?
> >
> > I'm not saying it definitely won't work, but it's so far outside of
> > what TCP was designed for that I don't think it can really be
> > considered a well-tested system.
>
> I personally think it won't work. The slow nodes will slow down the
> whole network, because they always back-propagate the "please stop
> sending" messages of the same nodes. As I said in the other thread, I
> don't care what happens 20 nodes away. Matthew asked whether we should
> ignore them completely?
They won't "always". They will only send a RejectedOverload if they
actually get a request, which will be rare because they will be backed
off most of the time.

> Well, one single node contributes only a tiny amount of the overall
> reject probability. Maybe those reject messages should be
> back-propagated as a probability: the likelihood that a message can't
> find its target.
>
> So if a server says "sorry, I am overloaded" (100% reject probability
> at my node), and the next node in the chain has, say, 5 links, then the
> only reject probability available to it is that one rejecting link,
> plus its own status. Let's say it is not very busy either. It will
> back-propagate max(#rejected / #links, myRejectProbability) - in this
> case 20%. Say I am the originator and get back 20%.
>
> (Does this sound familiar?)
>
> What does this mean to me? It means there is load on this link, but it
> also tells me there is at least 80% hope for the next insert.
>
> Well, this sounds nice, but it's late and I have not really thought
> through the consequences... A node 20 hops away still has an influence
> on my sending rate, but the further away it is, the smaller that
> influence becomes.
>
> A tested algorithm should definitely be preferred, though. I still
> believe there is no way around good caching at the nodes.

I don't know what you mean by this.

> > >>> * TCP's congestion control also assumes the sender is well
> > >>> behaved - a badly behaved sender can cause all other flows to
> > >>> back off, for selfish or malicious reasons
>
> I think Freenet nodes have to be a bit more egoistic. Well-behaved is
> good in theory, for the maths, but in practice everyone wants to get
> the most out of the network. Think of Fuqid, which just floods the
> network with hundreds of thousands of requests. You have to be a bit
> aggressive in your tactics, because if you aren't, someone else will be
> with a modified node.
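Thomas's proposal above reduces to a one-line formula. As a minimal sketch (the function and variable names are illustrative, not from the Freenet codebase), it reproduces his worked example: one fully overloaded peer among five links, on a node that is itself idle, yields a back-propagated reject probability of 20%:

```python
def reject_probability(link_reject_probs, own_reject_prob):
    """Back-propagate max(#rejected / #links, own reject probability),
    counting a link as 'rejected' if its reported probability is 1.0."""
    rejected = sum(1 for p in link_reject_probs if p >= 1.0)
    return max(rejected / len(link_reject_probs), own_reject_prob)

# Worked example from the email: one overloaded server (100%) among
# 5 links, intermediate node itself not busy (0.0) -> 0.2 (20%).
print(reject_probability([1.0, 0.0, 0.0, 0.0, 0.0], 0.0))
```

Note that because each hop takes the max over its links, the signal from a single distant node is diluted at every hop, which matches Thomas's observation that influence shrinks with distance.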
> (Because the client level no longer has the influence it used to have.)

If you have a way to manage load that doesn't rely on the nodes being
honest, I'd be very interested to hear about it, but so far in this
email I haven't seen anything like that.

> > I think Matthew's right about pushing load back to the sender - the
> > question is how to do this over multiple hops in a way that doesn't
> > reveal the identity of the sender and gives the sender an incentive
> > to slow down (rather than a polite request to do so).
>
> I agree completely. Most users won't be malicious, but most want to
> maximize their profit.
>
> > >>> * "Route as greedily as possible, given the available capacity"
> > >
> > > The problem here, and it is one we have faced before, is that this
> > > degrades routing.
>
> I believe in queues. If a node is not permanently overloaded, you will
> find a window in which you can send the data. Wait at those links, but
> don't slow down the packets behind them, because it is unlikely that
> all nodes are overloaded at the same time. If the queue gets too long,
> OK, then we need to slow down...

It is likely that all nodes will be requesting all of the time, if
everyone has large download/upload queues. This means that we will need
to tell them to slow down.

> We have to use the resources well, and right now they are not used
> well: most connections are idle. I heard about the bug. I'll see if
> that helps, but I don't believe it will help much, because the node
> still listens mostly to the slow nodes. Maybe we should back-propagate
> idle messages as well... "hey, my node is idle, it's boring, send me as
> much as you can" :)

There are certainly bugs, but it is more important that we minimize the
number of RejectedOverload's, timeouts and consequent backoffs (within
reason), and therefore get reasonable routing, than that we use every
link at 100%. IMHO we *can* make fairly good usage of links.
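The "incentive to slow down" quoted above is essentially what TCP achieves with AIMD (additive increase, multiplicative decrease): a sender that ignores congestion signals only earns itself more rejections and timeouts. A minimal sketch, assuming a per-sender request window driven by success and RejectedOverload events (the class, method names and constants are assumptions for illustration, not Freenet code):

```python
class SenderWindow:
    """AIMD-style cap on requests in flight for one sender."""

    def __init__(self, increase=1.0, decrease=0.5, floor=1.0):
        self.window = floor       # allowed requests in flight
        self.increase = increase  # additive step per successful request
        self.decrease = decrease  # multiplicative cut on RejectedOverload
        self.floor = floor        # never starve the sender completely

    def on_success(self):
        # Probe for more capacity, one request at a time.
        self.window += self.increase

    def on_rejected_overload(self):
        # Cut sharply: the sharper the cut, the stronger the sender's
        # incentive to avoid triggering rejections in the first place.
        self.window = max(self.floor, self.window * self.decrease)

w = SenderWindow()
for _ in range(8):
    w.on_success()        # window grows additively from 1 to 9
w.on_rejected_overload()  # one rejection halves it to 4.5
```

This only covers the rate side; it does not by itself solve the harder problems raised above (enforcing the backoff on dishonest nodes, or propagating the signal over multiple hops without revealing the originator).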
-- 
Matthew J Toseland - toad at amphibian.dyndns.org
Freenet Project Official Codemonkey - http://freenetproject.org/
ICTHUS - Nothing is impossible. Our Boss says so.
