And one comment:
When we run a write workload (the dd command below), heavy read activity appears on every disk, jumping from zero to about 3M per disk, and the write bandwidth is poor.
The disk busy percentage (%b in iostat) rises from 0 to about 60.

I don't understand why this happened.
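
For reference, the per-vdev numbers below came from zpool iostat and the busy
percentage from iostat; a minimal sketch of the commands used (the 5 second
interval is an assumption):

    zpool iostat -v datapool 5    # per-vdev operations and bandwidth, Ctrl-C to stop
    iostat -xn 5                  # per-disk service times and the %b busy column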

                                           capacity     operations    bandwidth
pool                                     used  avail   read  write   read  write
--------------------------------------  -----  -----  -----  -----  -----  -----
datapool                                19.8T  5.48T    543     47  1.74M  5.89M
  raidz1                                5.64T   687G    146     13   480K  1.66M
    c3t6002219000854867000003B2490FB009d0  -      -       49     13  3.26M   293K
    c3t6002219000854867000003B4490FB063d0  -      -       48     13  3.19M   296K
    c3t60022190008528890000055F4CB79C10d0  -      -       48     13  3.19M   293K
    c3t6002219000854867000003B8490FB0FFd0  -      -       50     13  3.28M   284K
    c3t6002219000854867000003BA490FB14Fd0  -      -       50     13  3.31M   287K
    c3t60022190008528890000041C490FAFA0d0  -      -       49     14  3.27M   297K
    c3t6002219000854867000003C0490FB27Dd0  -      -       48     14  3.24M   300K
  raidz1                                5.73T   594G    102      7   337K   996K
    c3t6002219000854867000003C2490FB2BFd0  -      -       52      5  3.59M   166K
    c3t60022190008528890000041F490FAFD0d0  -      -       54      5  3.72M   166K
    c3t600221900085288900000428490FB0D8d0  -      -       55      5  3.79M   166K
    c3t600221900085288900000422490FB02Cd0  -      -       52      5  3.57M   166K
    c3t600221900085288900000425490FB07Cd0  -      -       53      5  3.64M   166K
    c3t600221900085288900000434490FB24Ed0  -      -       55      5  3.76M   166K
    c3t60022190008528890000043949100968d0  -      -       55      5  3.83M   166K
  raidz1                                5.81T   519G    117     10   388K  1.26M
    c3t60022190008528890000056B4CB79D66d0  -      -       46      9  3.09M   215K
    c3t6002219000854867000004B94CB79F91d0  -      -       44      9  2.91M   215K
    c3t6002219000854867000004BB4CB79FE1d0  -      -       44      9  2.97M   224K
    c3t6002219000854867000004BD4CB7A035d0  -      -       44      9  2.96M   215K
    c3t6002219000854867000004BF4CB7A0ABd0  -      -       44      9  2.97M   216K
    c3t60022190008528890000055C4CB79BB8d0  -      -       45      9  3.04M   215K
    c3t6002219000854867000004C14CB7A0FDd0  -      -       46      9  3.02M   215K
  raidz1                                2.59T  3.72T    176     16   581K  2.00M
    c3t60022190008528890000042B490FB124d0  -      -       48      5  3.21M   342K
    c3t6002219000854867000004C54CB7A199d0  -      -       46      5  2.99M   342K
    c3t6002219000854867000004C74CB7A1D5d0  -      -       49      5  3.27M   342K
    c3t6002219000852889000005594CB79B64d0  -      -       46      6  3.00M   342K
    c3t6002219000852889000005624CB79C86d0  -      -       47      6  3.11M   342K
    c3t6002219000852889000005654CB79CCCd0  -      -       50      6  3.29M   342K
    c3t6002219000852889000005684CB79D1Ed0  -      -       45      5  2.98M   342K
  c3t6B8AC6F0000F8376000005864DC9E9F1d0    4K   928G      0      0      0      0
--------------------------------------  -----  -----  -----  -----  -----  -----

^C
root@nas-hz-01:~#


On 06/08/2011 11:07 AM, Ding Honghui wrote:
Hi,

I'm seeing weird write performance and need your help.

One day, the write performance of zfs degraded:
sequential write throughput dropped from about 60MB/s to roughly 6MB/s.

Command:
date;dd if=/dev/zero of=block bs=1024*128 count=10000;date
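
(The two date calls bracket the transfer: 10000 blocks of 128KB is roughly
1.3GB, so about 22 seconds at 60MB/s versus roughly 220 seconds at 6MB/s.)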

The hardware is one Dell MD3000 and one MD1000, with 30 disks.
The OS is Solaris 10U8, with zpool version 15 and zfs version 4.

I ran DTrace to measure the zfs_write latency:

/* Record the entry timestamp of each zfs_write call. */
fbt:zfs:zfs_write:entry
{
        self->ts = timestamp;
}

/* On return, add the elapsed time (in nanoseconds) to a
   power-of-two latency histogram. */
fbt:zfs:zfs_write:return
/self->ts/
{
        @time = quantize(timestamp - self->ts);
        self->ts = 0;
}
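
A minimal way to run it (zfs_write.d is just an assumed file name for the
probes above): start it alongside the dd test and press Ctrl-C when dd
finishes, and DTrace prints the @time histogram.

    dtrace -s zfs_write.d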

It shows (values are in nanoseconds):
           value  ------------- Distribution ------------- count
            8192 |                                         0
           16384 |                                         16
           32768 |@@@@@@@@@@@@@@@@@@@@@@@@                 3270
           65536 |@@@@@@@                                  898
          131072 |@@@@@@@                                  985
          262144 |                                         33
          524288 |                                         1
         1048576 |                                         1
         2097152 |                                         3
         4194304 |                                         0
         8388608 |@                                        180
        16777216 |                                         33
        33554432 |                                         0
        67108864 |                                         0
       134217728 |                                         0
       268435456 |                                         1
       536870912 |                                         1
      1073741824 |                                         2
      2147483648 |                                         0
      4294967296 |                                         0
      8589934592 |                                         0
     17179869184 |                                         2
     34359738368 |                                         3
     68719476736 |                                         0

For comparison, on a storage system that performs well (a single MD3000), the maximum zfs_write time falls in the 4294967296 ns bucket, roughly 10 times faster than the slowest writes here.
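
If it helps narrow things down, a follow-up script in the same style should
show whether the multi-second zfs_write outliers line up with long
transaction-group syncs (assuming spa_sync is visible to fbt on this build):

/* Time each transaction-group sync pass. */
fbt:zfs:spa_sync:entry
{
        self->sync = timestamp;
}

fbt:zfs:spa_sync:return
/self->sync/
{
        @sync = quantize(timestamp - self->sync);
        self->sync = 0;
}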

Any suggestions?

Thanks
Ding

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
