Is it still true in Tahoe-LAFS 1.9.1 that throughput for immutable
uploads = segment size / RTT?

Given our preliminary results below, what would be a good segment size?
What is a reasonable upper limit for segment size? We're just planning
for immutable uploads and downloads, so alacrity isn't an issue. What
else might we break by setting segment size too large?
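
For reference, here is the naive model we have in mind. The
one-round-trip-per-segment assumption is ours and may not match what
1.9.1 actually does, and the 128KiB default segment size is from
memory:

    # Back-of-envelope model (our assumption): the uploader completes one
    # segment's round trip before starting the next, so throughput is
    # capped at segment_size / RTT regardless of raw bandwidth.

    def latency_bound_throughput(segment_size_bytes, rtt_seconds):
        """Upper bound on bytes/sec if each segment costs one round trip."""
        return segment_size_bytes / rtt_seconds

    def segment_size_for_target(target_bytes_per_sec, rtt_seconds):
        """Segment size needed to hit a target rate under the model."""
        return target_bytes_per_sec * rtt_seconds

    RTT = 0.4                 # 400ms, the worst ping from our OpenVPN server
    DEFAULT_SEG = 128 * 1024  # Tahoe's default max segment size (128KiB)

    print(latency_bound_throughput(DEFAULT_SEG, RTT))  # ~327KBps ceiling
    print(segment_size_for_target(250e3, RTT))         # ~100KB to sustain 2Mbps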

Our test grid is on a private VPN. A remote dedicated server with one
10Mbps port hosts our OpenVPN server and a storage node, with helper
enabled. The introducer, test client and other storage nodes are all
VMs (some local, some on remote VPSes).

The test client and most of the storage nodes connect to our OpenVPN
server through nested multi-hop commercial VPN services, and the
physical diameter of the grid (not counting VPN detours) is roughly
20,000km. Although bandwidth in both directions is typically
1.5-2.5Mbps, latencies are large (up to 400ms ping from our OpenVPN
server).

It's not surprising that helper upload saturates at 73KBps (0.6Mbps)
for single immutable uploads (1MB-100MB of random data). For three
simultaneous uploads we get 217KBps (1.7Mbps) aggregate, so it seems
clear that segment size, not raw bandwidth, is the issue.
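
Inverting the model above, and assuming the segment size is still the
128KiB default (we haven't changed it), our single-stream number
implies an effective round trip of almost two seconds per segment,
far worse than the raw ping:

    # Effective RTT implied by observed single-stream throughput,
    # assuming the default 128KiB segment size.
    seg = 128 * 1024       # bytes
    observed = 73 * 1000   # 73KBps for a single immutable upload
    print(seg / observed)  # ~1.8s implied round trip per segment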

For single immutable uploads, the reported push rate saturates at
200KBps. But the reported figure counts only file bytes; with the
default 3-of-10 erasure coding the helper transmits N/k = 10/3 ≈ 3.3
share bytes per file byte, so it is really pushing 660KBps (5.3Mbps).
For three simultaneous uploads, the reported push rate saturates at
240KBps, or about 790KBps (6.3Mbps) counting all shares. That's not
far short of the server's 10Mbps port.
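
The 3.3 factor is just the expansion of the default 3-of-10 erasure
coding (N/k = 10/3); checking our arithmetic:

    # Expansion factor of the default k=3, N=10 encoding: the helper
    # pushes N/k share bytes for every file byte in the reported rate.
    k, N = 3, 10
    expansion = N / k         # ~3.33
    print(200e3 * expansion)  # ~667KBps, matching the ~660KBps above
    print(240e3 * expansion)  # ~800KBps, close to the ~790KBps above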

Everything else seems OK: peer selection = 2.2s; encoding = 33MBps;
sending hash and closing = 0.98s + 0.04s/MB.
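
Putting those figures together for a hypothetical 100MB upload makes
it clear how completely the push stage dominates (our own rough
arithmetic, using the numbers above):

    # Rough end-to-end time for a 100MB immutable upload, from the
    # stage figures measured above. The push stage dwarfs the rest.
    size_mb = 100
    push = size_mb * 1e6 / 200e3            # ~500s at 200KBps reported
    peer_selection = 2.2                    # seconds
    encoding = size_mb * 1e6 / 33e6         # ~3s at 33MBps
    hash_and_close = 0.98 + 0.04 * size_mb  # ~5s
    print(push)                                        # ~500s
    print(peer_selection + encoding + hash_and_close)  # ~10s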

Thank you for any comments. Eventually we'll write these results up
properly and post them somewhere.