Oh dear god. Sorry folks, it looks like the new Hotmail really doesn't play 
well with the list. Trying again in plain text:
 
 
> Try to separate the two things:
> 
> (1) Try /dev/zero -> mbuffer --- network ---> mbuffer > /dev/null
> That should give you wirespeed
 
I tried that already.  It still gets just 10-11MB/s from this server.
I can get zfs send / receive and mbuffer working at 30MB/s from a couple of 
test servers (with much lower specs), though.
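 
For reference, this is roughly the pipeline I'm using for that test.  The 
port, hostname and buffer settings below are just placeholders / my usual 
defaults, nothing definitive:
 
  # on the receiving box: listen on a TCP port and throw the data away
  mbuffer -s 128k -m 1G -I 9090 > /dev/null
 
  # on this server: stream zeroes across the gigabit link to the receiver
  dd if=/dev/zero bs=128k | mbuffer -s 128k -m 1G -O backupserver:9090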
 
> (2) Try zfs send | mbuffer > /dev/null
> That should give you an idea how fast zfs send really is locally.
 
Hmm, that's better than 10MB/s, but the average is still only around 20MB/s:
summary:  942 MByte in 47.4 sec - average of 19.9 MB/s
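 
That summary line came from a pipeline along these lines (the snapshot name 
is a placeholder, and the buffer settings match the network test above):
 
  # read the stream locally and discard it, just to time zfs send itself
  zfs send rc-pool/fs@snap | mbuffer -s 128k -m 1G > /dev/null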
 
I think that points to another problem though, as the send-side mbuffer is 100% 
full.  Certainly the pool itself doesn't appear to be under any strain at all 
while this is going on:
 
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
rc-pool      732G  1.55T    171     85  21.3M  1.01M
  mirror     144G   320G     38      0  4.78M      0
    c1t1d0      -      -      6      0   779K      0
    c1t2d0      -      -     17      0  2.17M      0
    c2t1d0      -      -     14      0  1.85M      0
  mirror     146G   318G     39      0  4.89M      0
    c1t3d0      -      -     20      0  2.50M      0
    c2t2d0      -      -     13      0  1.63M      0
    c2t0d0      -      -      6      0   779K      0
  mirror     146G   318G     34      0  4.35M      0
    c2t3d0      -      -     19      0  2.39M      0
    c1t5d0      -      -      7      0  1002K      0
    c1t4d0      -      -      7      0  1002K      0
  mirror     148G   316G     23      0  2.93M      0
    c2t4d0      -      -      8      0  1.09M      0
    c2t5d0      -      -      6      0   890K      0
    c1t6d0      -      -      7      0  1002K      0
  mirror     148G   316G     35      0  4.35M      0
    c1t7d0      -      -      6      0   779K      0
    c2t6d0      -      -     12      0  1.52M      0
    c2t7d0      -      -     17      0  2.07M      0
  c3d1p0      12K   504M      0     85      0  1.01M
----------  -----  -----  -----  -----  -----  -----
 
Especially when compared to the zfs send stats on my backup server, which 
managed 30MB/s via mbuffer (being received on a single virtual SATA disk):
               capacity     operations    bandwidth
pool         used  avail   read  write   read  write
----------  -----  -----  -----  -----  -----  -----
rpool       5.12G  42.6G      0      5      0  27.1K
  c4t0d0s0  5.12G  42.6G      0      5      0  27.1K
----------  -----  -----  -----  -----  -----  -----
zfspool      431G  4.11T    261      0  31.4M      0
  raidz2     431G  4.11T    261      0  31.4M      0
    c4t1d0      -      -    155      0  6.28M      0
    c4t2d0      -      -    155      0  6.27M      0
    c4t3d0      -      -    155      0  6.27M      0
    c4t4d0      -      -    155      0  6.27M      0
    c4t5d0      -      -    155      0  6.27M      0
----------  -----  -----  -----  -----  -----  -----
The really ironic thing is that the 30MB/s send / receive was sending to a 
virtual SATA disk which is stored (via sync NFS) on the server I'm having 
problems with...
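 
In case anyone wants to compare like for like: the pool figures above are 
just zpool iostat output sampled while the send is in flight; something 
along these lines, with the interval being an arbitrary choice:
 
  # watch per-vdev activity every 5 seconds while the send runs
  zpool iostat -v rc-pool 5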
 
Ross

 

> Date: Thu, 16 Oct 2008 14:27:49 +0200
> From: [EMAIL PROTECTED]
> To: [EMAIL PROTECTED]
> CC: zfs-discuss@opensolaris.org
> Subject: Re: [zfs-discuss] Improving zfs send performance
> 
> Hi Ross
> 
> Ross wrote:
>> Now though I don't think it's network at all. The end result from that 
>> thread is that we can't see any errors in the network setup, and using 
>> nicstat and NFS I can show that the server is capable of 50-60MB/s over the 
>> gigabit link. Nicstat also shows clearly that both zfs send / receive and 
>> mbuffer are only sending 1/5 of that amount of data over the network.
>> 
>> I've completely run out of ideas of my own (but I do half expect there's a 
>> simple explanation I haven't thought of). Can anybody think of a reason why 
>> both zfs send / receive and mbuffer would be so slow?
> 
> Try to separate the two things:
> 
> (1) Try /dev/zero -> mbuffer --- network ---> mbuffer > /dev/null
> 
> That should give you wirespeed
> 
> (2) Try zfs send | mbuffer > /dev/null
> 
> That should give you an idea how fast zfs send really is locally.
> 
> Carsten
