Multiple threads would indeed make your life nicer.

GNU parallel with rsync is a quick and dirty option; bbcp and UDT +
Sector/Sphere are also good parallel transfer options.
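
For the parallel rsync route, something roughly like this is the idea -- a
sketch only, where the mount point, destination path, and the -j 8 job count
are placeholders you'd tune for your tree:

  # one rsync per top-level subdirectory of the NFS mount, 8 at a time
  # (paths and job count are placeholders, not from your setup)
  cd /mnt/path
  ls -d */ | parallel -j 8 rsync -a {} /path/dst/{}

Each rsync keeps its own stream of outstanding NFS requests in flight, so
you're not serialized behind a single reader the way the tar pipe is.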

- Rich

On Thu, Sep 13, 2012 at 1:32 PM, Ray Van Dolson <[email protected]> wrote:
> Hello all;
>
> Got a couple of Dell R720s running NexentaStor 3.1.4 on a 10GbE
> network with jumbo frames enabled (MTU 9000).
>
> iperf gives me over 9Gbps, so I'm confident the network itself is
> capable.
>
> We are dealing with a scenario where we need to move lots of data
> around via NAS protocols, and right now I'm focused on NFS.
>
> For example, we have a tree of ~10TB or so of relatively large files
> (around 200MB each).  I've made the following changes to the networking
> stack on both server and client:
>
>   ndd -set /dev/tcp tcp_max_buf 2097152
>   ndd -set /dev/tcp tcp_cwnd_max 2097152
>   ndd -set /dev/tcp tcp_recv_hiwat 400000
>   ndd -set /dev/tcp tcp_xmit_hiwat 400000
>
> On the client:
>
>   set ip:tcp_squeue_wput=1
>   set rpcmod:clnt_max_conns = 8
>   set nfs:nfs3_bsize=0x100000
>   set nfs:nfs3_max_transfer_size_cots=1048576
>   set nfs:nfs3_max_transfer_size=1048576
>   set nfs:nfs4_bsize=0x100000
>   set nfs:nfs4_max_transfer_size_cots=1048576
>   set nfs:nfs4_max_transfer_size=1048576
>
> And am using the following options on the NFS client to mount the
> export off the server:
>
>   mount -o wsize=1048576,rsize=1048576,vers=4 \
>     10.212.100.16:/volumes/datapool/DG_US_CONVERTED /mnt
>
> I'm initiating a copy of the tree from the client by doing a good ol':
>
>   # cd /mnt/path
>   # tar cf - . | (cd /path/dst ; tar xvf -)
>
> I'm only getting around 200MB/sec (~1.6Gbps).  Does this sound like an
> expected ceiling?
>
> This is obviously only one thread of NFS.  Should I be able to get
> better throughput?  Do I need to focus on ways to multiplex the
> transfer?  Are there any file copy tools that use multiple TCP
> connections to transport a single data stream off my disk (maybe
> lftp could work here)?
>
> Thanks for any thoughts.
>
> Ray
