On Thu, Jun 18, 2009 at 8:00 PM, Matthew
Toseland<toad at amphibian.dyndns.org> wrote:
> Are you doing more testing?
>
> On Saturday 13 June 2009 19:05:36 Evan Daniel wrote:
>> On Sat, Jun 13, 2009 at 1:08 PM, Matthew
>> Toseland<toad at amphibian.dyndns.org> wrote:
>> > Now that 0.7.5 has shipped, we can start making disruptive changes again 
>> > in a few days. The number one item on freenet.uservoice.com has been for 
>> > some time to allow more opennet peers for fast nodes. We have discussed 
>> > this in the past; the conclusions, which I agree with (as do some
>> > others), are:
>> > - This is feasible.
>> > - It will not seriously break routing.
>> > - Reducing the number of connections on slow nodes may actually be a gain 
>> > in security, by increasing opportunities for coalescing. It will improve 
>> > payload percentages, improve average transfer rates, let slow nodes accept 
>> > more requests from each connection, and should improve overall performance.
>> > - The network should be less impacted by the speed of the slower nodes.
>> > - But we have tested using fewer connections on slow nodes in the past and 
>> > had anecdotal evidence that it is slower. We need to evaluate it more 
>> > rigorously somehow.
>> > - Increasing the number of peers allowed for fast opennet nodes, within 
>> > reason, should not have a severe security impact. It should improve 
>> > routing (by reducing the network diameter). It will of course allow fast 
>> > nodes to contribute more to the network. We do need to be careful to avoid 
>> > overreliance on ubernodes (hence an upper limit of maybe 50 peers).
>> > - Routing security: FOAF routing allows you to capture most of the traffic 
>> > from a node already; the only thing stopping this is the 30%-to-one-peer 
>> > limit.
>> > - Coalescing security: Increasing the number of peers without increasing 
>> > the bandwidth usage does increase vulnerability to traffic analysis by 
>> > doing less coalescing. On the other hand, this is not a problem if the 
>> > bandwidth usage scales with the number of nodes.
>> >
>> > How can we move forward? We need some reliable test results on whether a 
>> > 10KB/sec node is better off with 10 peers or with 20 peers. I think it's a 
>> > fair assumption that more peers help for faster nodes. Suggestions?
>>
>> I haven't tested at numbers that low.  At 15KiB/s, the stats page
>> suggests you're slightly better off with 12-15 peers than with 20.  I saw no
>> subjective difference in browsing speed either way.
>>
>> I'm happy to do some testing here, if you tell me what data you want
>> me to collect.  More testers would obviously be good.
>>
>> >
>> > We also need to set some arbitrary parameters. There is an argument for 
>> > linearity, to avoid penalising nodes with different bandwidth levels, but 
>> > nodes with more peers and the same amount of bandwidth per peer are likely 
>> > to be favoured by opennet anyway... Non-linearity, in the sense of having
>> > a lower threshold and an upper threshold and linearly adding peers between
>> > them (though not necessarily at the same rate as on the lower segment),
>> > would mean fewer nodes with lots of peers, and might achieve better
>> > results? E.g.
>> >
>> > 10 peers at 10KB/sec ... 20 peers at 20KB/sec (1 per KB/sec)
>> > 20 peers at 20KB/sec ... 50 peers at 80KB/sec (1 per 3KB/sec)
>>
>> I wouldn't go as low as 10 peers, simply because I haven't tested it.
>> Other than that, those seem perfectly sensible to me.
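
For concreteness, here's a rough sketch (in Java, since that's what the
node is written in) of how that piecewise-linear scaling could look.  The
class and method names are made up for illustration, and the 10/20/80
KB/sec breakpoints are just the example values quoted above; note that
interpolating between the two upper endpoints works out to roughly one
extra peer per 2KB/sec rather than 3.

public class OpennetPeerScaling {
    // Breakpoints from the example above: output limit (KB/sec) -> target peers.
    private static final int[] BW    = { 10, 20, 80 };
    private static final int[] PEERS = { 10, 20, 50 };

    /** Target opennet peer count for a given output bandwidth limit in KB/sec. */
    public static int peersFor(int bwKBps) {
        if (bwKBps <= BW[0]) return PEERS[0];                             // floor at 10 peers
        if (bwKBps >= BW[BW.length - 1]) return PEERS[PEERS.length - 1];  // cap at 50 peers
        int i = 1;
        while (bwKBps > BW[i]) i++;
        // Linear interpolation between the two surrounding breakpoints.
        return PEERS[i - 1]
                + (bwKBps - BW[i - 1]) * (PEERS[i] - PEERS[i - 1]) / (BW[i] - BW[i - 1]);
    }

    public static void main(String[] args) {
        for (int bw : new int[] { 8, 12, 15, 20, 40, 80, 200 })
            System.out.println(bw + " KB/sec -> " + peersFor(bw) + " peers");
    }
}
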
>>
>> We should also watch for excessive CPU usage.  If there's lots of bw
>> available, we'd want just enough connections that we don't quite become
>> CPU-limited.  Of course, I don't really know how many connections / how
>> much bw it takes before that becomes a concern.
>>
>> Evan Daniel
>

I'd been running the Spider, and trying to get a complete run out of
it in order to provide a full set of bug reports.  Unfortunately,
after spidering over 100k keys (representing over a week of runtime),
the .dbs file became unrecoverably corrupted, and it won't write index
files.  I had started rerunning it; I've since paused that and started
taking data on connections.

I've got a little data so far at a 12KiB/s limit, with 10 and 12 peers.
Basically, I don't see a difference between 10 and 12 peers.  Both
produce reasonable performance numbers.  My node has 2 darknet peers;
the remainder are opennet.  I'm not using the node much during these
tests; it has a few MiB of downloads queued that aren't making progress
(old files that have probably dropped off the network).

Evan Daniel


12 peers, 12 KiB/s limit

# bwlimitDelayTime: 91ms
# nodeAveragePingTime: 408ms
# darknetSizeEstimateSession: 0 nodes
# opennetSizeEstimateSession: 63 nodes
# nodeUptime: 1h37m

# Connected: 10
# Backed off: 2

# Input Rate: 2.54 KiB/s (of 60.0 KiB/s)
# Output Rate: 12.9 KiB/s (of 12.0 KiB/s)
# Total Input: 31.3 MiB (5.5 KiB/s average)
# Total Output: 47.5 MiB (8.34 KiB/s average)
# Payload Output: 32.6 MiB (5.73 KiB/sec)(68%)

1469    Output bandwidth liability
18      >SUB_MAX_PING_TIME

Success rates
Group   P(success)      Count
All requests    3.329%  10,633
CHKs    9.654%  3,377
SSKs    0.386%  7,256
Local requests  2.022%  2,176
Remote requests         3.666%  8,457
Block transfers         95.646%         666
Turtled downstream      87.500%         8
Transfers timed out     0.000%  8
Turtle requests         100.000%        6

Detailed timings (local CHK fetches)
Successful      9.503s
Unsuccessful    6.656s
Average         6.700s
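
(A quick sanity check on how those figures relate, in case it helps other
testers: the payload percentage is just payload output over total output,
32.6 / 47.5 MiB = about 69%, which the page shows as 68%, presumably a
rounding difference in the displayed totals; likewise the 8.34 KiB/s
average output is total output over uptime, 47.5 MiB over roughly 5820 s
= about 8.4 KiB/s.)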



12 peers, 12 KiB/s limit

# bwlimitDelayTime: 108ms
# nodeAveragePingTime: 380ms
# darknetSizeEstimateSession: 20 nodes
# opennetSizeEstimateSession: 107 nodes
# nodeUptime: 4h4m

# Connected: 10
# Backed off: 2

# Input Rate: 4.51 KiB/s (of 60.0 KiB/s)
# Output Rate: 12.9 KiB/s (of 12.0 KiB/s)
# Total Input: 95.3 MiB (6.64 KiB/s average)
# Total Output: 144 MiB (10.0 KiB/s average)
# Payload Output: 102 MiB (7.13 KiB/sec)(70%)

8661    Output bandwidth liability
26      >SUB_MAX_PING_TIME

Success rates
Group   P(success)      Count
All requests    3.497%  30,738
CHKs    9.048%  11,119
SSKs    0.352%  19,619
Local requests  4.497%  4,025
Remote requests         3.347%  26,713
Block transfers         96.388%         2,021
Turtled downstream      79.545%         44
Transfers timed out     0.000%  44
Turtle requests         50.000%         16

Detailed timings (local CHK fetches)
Successful      12.386s
Unsuccessful    7.091s
Average         7.128s



10 peers, 12 KiB/s limit

# bwlimitDelayTime: 78ms
# nodeAveragePingTime: 385ms
# darknetSizeEstimateSession: 0 nodes
# opennetSizeEstimateSession: 79 nodes
# nodeUptime: 3h31m

# Connected: 9
# Backed off: 1

# Input Rate: 8.25 KiB/s (of 60.0 KiB/s)
# Output Rate: 13.1 KiB/s (of 12.0 KiB/s)
# Total Input: 91.2 MiB (7.37 KiB/s average)
# Total Output: 127 MiB (10.3 KiB/s average)
# Payload Output: 92.9 MiB (7.50 KiB/sec)(72%)

4208    Output bandwidth liability
8       >SUB_MAX_PING_TIME
1       Insufficient output bandwidth

Success rates
Group   P(success)      Count
All requests    4.811%  25,400
CHKs    11.310%         10,115
SSKs    0.510%  15,285
Local requests  8.772%  2,679
Remote requests         4.344%  22,721
Block transfers         95.744%         2,091
Turtled downstream      81.481%         54
Transfers timed out     0.000%  54
Turtle requests         33.333%         9

Detailed timings (local CHK fetches)
Successful      7.040s
Unsuccessful    13.240s
Average         13.220s
