Hi all,

Just a note that I ran into the same issue with zfs send on an X4500 whose vdevs had 11 drives each. Reducing the number of drives per vdev cleared it up.
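For what it's worth, a narrower layout would look roughly like the sketch below (illustrative only -- the c#t#d# names are placeholders rather than the real disk mapping on this box, and the 6-disk vdev width is just an example):

  # Sketch: several narrow raidz2 vdevs instead of a few 11+ disk ones.
  # Each additional top-level vdev adds roughly one disk's worth of
  # random IOPS to the pool.
  zpool create tank \
      raidz2 c0t0d0 c1t0d0 c4t0d0 c5t0d0 c6t0d0 c7t0d0 \
      raidz2 c0t1d0 c1t1d0 c4t1d0 c5t1d0 c6t1d0 c7t1d0 \
      raidz2 c0t2d0 c1t2d0 c4t2d0 c5t2d0 c6t2d0 c7t2d0
  # ...and continue the same pattern for the remaining disks.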
One vdev is IOPS-limited to the speed of one drive in that vdev, according to this post <http://opensolaris.org/jive/thread.jspa?threadID=74033> (see the comment from ptribble).

On Wed, Oct 15, 2008 at 3:07 PM, Carsten Aulbert <[EMAIL PROTECTED]> wrote:
> Hi Richard,
>
> Richard Elling wrote:
> > Since you are reading, it depends on where the data was written.
> > Remember, ZFS dynamic striping != RAID-0.
> > I would expect something like this if the pool was expanded at some
> > point in time.
>
> No, the RAID was set up in one go right after jumpstarting the box.
>
> >> (2) The disks should be able to perform much much faster than they
> >> currently output data at, I believe it's 2008 and not 1995.
> >>
> >
> > X4500? Those disks are good for about 75-80 random iops,
> > which seems to be about what they are delivering. The dtrace
> > tool, iopattern, will show the random/sequential nature of the
> > workload.
> >
>
> I need to read about this a bit and will try to analyze it.
>
> >> (3) The four cores of the X4500 are dying of boredom, i.e. idle >95% all
> >> the time.
> >>
> >> Has anyone a good idea where the bottleneck could be? I'm running out
> >> of ideas.
> >>
> >
> > I would suspect the disks. 30 second samples are not very useful
> > to try and debug such things -- even 1 second samples can be
> > too coarse. But you should take a look at 1 second samples
> > to see if there is a consistent I/O workload.
> >  -- richard
> >
>
> Without doing too much statistics (yet, if needed I can easily do that)
> it looks like this:
>
>                 capacity     operations    bandwidth
> pool         used  avail   read  write   read  write
> ----------  -----  -----  -----  -----  -----  -----
> atlashome   3.54T  17.3T    256      0  7.97M      0
>   raidz2     833G  6.00T      0      0      0      0
>     c0t0d0      -      -      0      0      0      0
>     c1t0d0      -      -      0      0      0      0
>     c4t0d0      -      -      0      0      0      0
>     c6t0d0      -      -      0      0      0      0
>     c7t0d0      -      -      0      0      0      0
>     c0t1d0      -      -      0      0      0      0
>     c1t1d0      -      -      0      0      0      0
>     c4t1d0      -      -      0      0      0      0
>     c5t1d0      -      -      0      0      0      0
>     c6t1d0      -      -      0      0      0      0
>     c7t1d0      -      -      0      0      0      0
>     c0t2d0      -      -      0      0      0      0
>     c1t2d0      -      -      0      0      0      0
>     c4t2d0      -      -      0      0      0      0
>     c5t2d0      -      -      0      0      0      0
>   raidz2    1.29T  5.52T    133      0  4.14M      0
>     c6t2d0      -      -    117      0   285K      0
>     c7t2d0      -      -    114      0   279K      0
>     c0t3d0      -      -    106      0   261K      0
>     c1t3d0      -      -    114      0   282K      0
>     c4t3d0      -      -    118      0   294K      0
>     c5t3d0      -      -    125      0   308K      0
>     c6t3d0      -      -    126      0   311K      0
>     c7t3d0      -      -    118      0   293K      0
>     c0t4d0      -      -    119      0   295K      0
>     c1t4d0      -      -    120      0   298K      0
>     c4t4d0      -      -    120      0   291K      0
>     c6t4d0      -      -    106      0   257K      0
>     c7t4d0      -      -     96      0   236K      0
>     c0t5d0      -      -    109      0   267K      0
>     c1t5d0      -      -    114      0   282K      0
>   raidz2    1.43T  5.82T    123      0  3.83M      0
>     c4t5d0      -      -    108      0   242K      0
>     c5t5d0      -      -    104      0   236K      0
>     c6t5d0      -      -    104      0   239K      0
>     c7t5d0      -      -    107      0   245K      0
>     c0t6d0      -      -    108      0   248K      0
>     c1t6d0      -      -    106      0   245K      0
>     c4t6d0      -      -    108      0   250K      0
>     c5t6d0      -      -    112      0   258K      0
>     c6t6d0      -      -    114      0   261K      0
>     c7t6d0      -      -    110      0   253K      0
>     c0t7d0      -      -    109      0   248K      0
>     c1t7d0      -      -    109      0   246K      0
>     c4t7d0      -      -    108      0   243K      0
>     c5t7d0      -      -    108      0   244K      0
>     c6t7d0      -      -    106      0   240K      0
>     c7t7d0      -      -    109      0   244K      0
> ----------  -----  -----  -----  -----  -----  -----
>
> The iops vary between about 70 - 140; the interesting bit is that the
> first raidz2 does not get any hits at all :(
>
> Cheers
>
> Carsten
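As a rough cross-check of the numbers above (back-of-the-envelope only): the two busy raidz2 vdevs deliver 133 and 123 reads/s, i.e. roughly one disk's worth of random IOPS each, and the pool total of 256 is simply their sum, which fits that rule of thumb. Per Richard's suggestion, 1-second samples and iopattern make it easy to confirm whether the load is really random, e.g.:

  # 1-second samples of per-vdev activity (Ctrl-C to stop)
  zpool iostat -v atlashome 1

  # random vs. sequential breakdown over ten 1-second samples
  # (iopattern is part of the DTrace Toolkit, assumed installed here)
  ./iopattern 1 10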
_______________________________________________ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss