In a message of 5 August 2010 14:19:59, you wrote:
> Can you please remove use of the zpool entirely (e.g. zpool destroy
> tank) and do a write test to each disk itself? E.g.:
>
> dd if=/dev/zero of=/dev/ad8 bs=64k count=1000000
> dd if=/dev/zero of=/dev/ad10 bs=64k count=1000000
> dd if=/dev/zero of=/dev/ad12 bs=64k count=1000000
>
> I don't recommend using large block sizes (e.g. bs=1M, bs=3M).

dd if=/dev/zero of=/dev/ad8 bs=64k count=1000000
1000000+0 records in
1000000+0 records out
65536000000 bytes transferred in 604.849406 secs (108350937 bytes/sec)

dd if=/dev/zero of=/dev/ad10 bs=64k count=1000000
1000000+0 records in
1000000+0 records out
65536000000 bytes transferred in 757.755459 secs (86487005 bytes/sec)

dd if=/dev/zero of=/dev/ad12 bs=64k count=1000000
1000000+0 records in
1000000+0 records out
65536000000 bytes transferred in 604.857282 secs (108349526 bytes/sec)

> If all of the above dds show good/decent throughput, then there's
> something strange going on with ZFS. If this is the case, I would
> recommend filing a PR and posting to freebsd-fs about the problem,
> pointing folks to this thread.
>
> If all of the dds show bad throughput, then could you please do the
> following:
>
> - Provide vmstat -i output
> - Install ports/sysutils/smartmontools and run smartctl -a /dev/ad8,
>   smartctl -a /dev/ad10, and smartctl -a /dev/ad12
>
> If only one of the dds shows bad throughput, then please:
>
> - Install ports/sysutils/smartmontools and run smartctl -a /dev/XXX,
>   where XXX is the disk which has bad throughput
> - Try making a ZFS pool with all 3 disks, but then do "zpool offline
>   tank XXX" and then re-attempt the following dd:
>   dd if=/dev/zero of=/tank/test.zero bs=64k count=1000000
>   And see what throughput looks like.
>
> Thanks.

-----
Alex V. Petrov
_______________________________________________
freebsd-stable@freebsd.org mailing list
http://lists.freebsd.org/mailman/listinfo/freebsd-stable
To unsubscribe, send any mail to "freebsd-stable-unsubscr...@freebsd.org"
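For reference, dd's bytes/sec figure is just bytes transferred divided by elapsed seconds; a quick awk sketch (not part of the original thread, using the ad8 numbers quoted above) reproduces it:

```shell
# Recompute dd's reported rate from the ad8 run:
# 65536000000 bytes in 604.849406 secs -> bytes/sec and approximate MB/s.
awk 'BEGIN {
  bytes = 65536000000
  secs  = 604.849406
  printf "%.0f bytes/sec (~%.0f MB/s)\n", bytes / secs, bytes / secs / 1000000
}'
```

All three disks land near 86-108 MB/s, i.e. normal sequential-write territory for SATA drives of that era, which is what makes the earlier ZFS numbers look suspicious.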