On Saturday 13 June 2009 20:43:58 Evan Daniel wrote:
> On Sat, Jun 13, 2009 at 2:54 PM, Matthew Toseland<t...@amphibian.dyndns.org> wrote:
> > On Saturday 13 June 2009 19:05:36 Evan Daniel wrote:
> >> On Sat, Jun 13, 2009 at 1:08 PM, Matthew Toseland<t...@amphibian.dyndns.org> wrote:
> >> > Now that 0.7.5 has shipped, we can start making disruptive changes again in a few days. The number one item on freenet.uservoice.com has for some time been to allow more opennet peers for fast nodes. We have discussed this in the past; the conclusions, which I and some others agree with, are:
> >> > - This is feasible.
> >> > - It will not seriously break routing.
> >> > - Reducing the number of connections on slow nodes may actually be a gain in security, by increasing opportunities for coalescing. It will improve payload percentages, improve average transfer rates, let slow nodes accept more requests from each connection, and should improve overall performance.
> >> > - The network should be less impacted by the speed of the slower nodes.
> >> > - But we have tested using fewer connections on slow nodes in the past and had anecdotal evidence that it is slower. We need to evaluate it more rigorously somehow.
> >> > - Increasing the number of peers allowed for fast opennet nodes, within reason, should not have a severe security impact. It should improve routing (by reducing the network diameter). It will of course allow fast nodes to contribute more to the network. We do need to be careful to avoid overreliance on ubernodes (hence an upper limit of maybe 50 peers).
> >> > - Routing security: FOAF routing already allows you to capture most of the traffic from a node; the only thing stopping this is the 30%-to-one-peer limit.
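[To make the 30%-to-one-peer limit concrete: the idea is that no single peer should receive more than 30% of the requests a node routes. A minimal sketch of such a check, assuming simple per-peer request counters; the function name and bookkeeping are illustrative, not Freenet's actual code:]

```python
def would_exceed_cap(routed_counts: dict, peer: str, cap: float = 0.30) -> bool:
    """Return True if routing one more request to `peer` would push its
    share of all routed requests above `cap` (the 30%-to-one-peer limit).

    `routed_counts` maps peer name -> requests recently routed to it.
    (Hypothetical bookkeeping, for illustration only.)
    """
    total_after = sum(routed_counts.values()) + 1  # totals after the candidate request
    peer_share = (routed_counts.get(peer, 0) + 1) / total_after
    return peer_share > cap
```

Under this sketch a node would fall back to its next-best peer whenever the check fires for the FOAF-preferred one.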
> >> > - Coalescing security: Increasing the number of peers without increasing bandwidth usage does increase vulnerability to traffic analysis, by doing less coalescing. On the other hand, this is not a problem if bandwidth usage scales with the number of peers.
> >> >
> >> > How can we move forward? We need some reliable test results on whether a 10KB/sec node is better off with 10 peers or with 20 peers. I think it's a fair assumption for faster nodes. Suggestions?
> >>
> >> I haven't tested at numbers that low. At 15KiB/s, the stats page suggests you're slightly better off with 12-15 peers than with 20. I saw no subjective difference in browsing speed either way.
> >
> > Which stats are you comparing?
>
> Output bandwidth (average), payload %, and nodeAveragePingTime. I'd be happy to track others as well.
Request success rates, maybe? I dunno.

> >> I'm happy to do some testing here, if you tell me what data you want me to collect. More testers would obviously be good.
> >
> > That would be a good start. It would be useful to compare:
> > - 12KB/sec with 10, 12, 20 peers.
> > - 8KB/sec with 8, 10, 20 peers.
> > - 20KB/sec with 10, 15, 20 peers.
>
> 10 peers on each setting (proposed minimum), 20 peers (current setting), and 1 peer per KiB/s... What's the rationale behind 20KiB/s with 15 peers?

We might not want 1KB/sec/peer?

> The huge variable is what sort of load I put on the node. Nothing? A few queued downloads? Run the spider? Some test files inserted for the purpose by someone else? Other ideas?

Yeah... dunno.

> >> > We also need to set some arbitrary parameters. There is an argument for linearity, to avoid penalising nodes with different bandwidth levels, but nodes with more peers and the same amount of bandwidth per peer are likely to be favoured by opennet anyway... Non-linearity, in the sense of having a lower threshold and an upper threshold and linearly adding peers between them (but not necessarily at the same rate as below the lower threshold), would mean fewer nodes with lots of peers, and might achieve better results? E.g.:
> >> >
> >> > 10 peers at 10KB/sec ... 20 peers at 20KB/sec (1 per KB/sec)
> >> > 20 peers at 20KB/sec ... 50 peers at 80KB/sec (1 per 3KB/sec)
> >>
> >> I wouldn't go as low as 10 peers, simply because I haven't tested it.
> >
> > Well, maybe the lower bound should be different. Testing should help. It might very well be that there is a minimum number of opennet connections below which it just doesn't work well.
>
> I suspect that is the case. I have no idea where that limit is, though. I suspect having the 30% limit become relevant just due to normal routing policy would be bad.

Well yeah. It would be worth finding out what the threshold is.
> Also, your math above is off: 20 KiB/s to 80 KiB/s is a 60 KiB/s jump; adding 30 peers is 1 peer per 2 KiB/s.

Okay, then 50 peers at 110KB/sec. :) Or, at 80KB/sec, it may be that 2KB/sec/peer is reasonable.

> >> Other than that, those seem perfectly sensible to me.
> >>
> >> We should also watch for excessive CPU usage. If there's lots of bw available, we'd want to have just enough connections to not quite limit on available CPU power. Of course, I don't really know how many connections / how much bw it is before that becomes a concern.
> >
> > Maybe... just routing requests isn't necessarily a big part of our overall CPU usage; the client layer stuff tends to be pretty heavy. IMHO, if people have CPU problems they can just reduce their bandwidth limits. To some degree ping time will keep it in check, but that's a crude measure, in that it can't do much until the situation is pretty bad already...
>
> That just means we need a better control law ;) I think I agree, though: make this the users' problem.

Maybe. Ping time is affected by lots of things, both network and CPU.
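[Putting the corrected numbers together, the two-slope schedule under discussion can be sketched as follows. This is a sketch only: the function name is hypothetical, and the upper segment uses 1 peer per 2 KiB/s (50 peers at 80 KiB/s), as corrected above, rather than the original "1 per 3KB/sec" figure.]

```python
def max_opennet_peers(bw_kib: float) -> int:
    """Piecewise-linear opennet peer limit from output bandwidth (KiB/s).

    Thresholds under discussion: 10 peers at 10 KiB/s, 20 peers at
    20 KiB/s (1 peer per KiB/s), 50 peers at 80 KiB/s (1 peer per
    2 KiB/s), then a hard cap to avoid ubernodes.  Illustrative only.
    """
    if bw_kib <= 10:
        return 10                               # floor: minimum peer count
    if bw_kib <= 20:
        return round(bw_kib)                    # 1 peer per KiB/s
    if bw_kib <= 80:
        return round(20 + (bw_kib - 20) / 2)    # 1 peer per 2 KiB/s
    return 50                                   # cap: avoid ubernodes
```

For example, a 50 KiB/s node would get 35 peers under this schedule, and anything above 80 KiB/s stays pinned at 50.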
_______________________________________________
Devl mailing list
Devl@freenetproject.org
http://emu.freenetproject.org/cgi-bin/mailman/listinfo/devl