Marko,
Matt and I discussed this offline some more and he had a couple of ideas
about double-checking your hardware.

It looks like your controller (or possibly the disks) is having trouble
with multiple simultaneous I/Os to the same disk, and prefetch seems to
aggravate the problem.

When I asked Matt how we could verify that it's the number of
concurrent I/Os that is hurting performance, he had the following
suggestions:

        Set zfs_vdev_{min,max}_pending=1 and run with prefetch on;
        iostat should then show one outstanding I/O and performance
        should be good.

        Or turn prefetch off and have multiple threads reading
        concurrently; iostat should then show multiple outstanding
        I/Os and performance should be poor.
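If it helps, both checks can be set up with kernel tunables. Here's a
sketch as /etc/system lines (reboot to apply; I believe the same
variables can also be patched live with mdb -kw):

```
* /etc/system sketch for the first check: pin the vdev queue to one
* outstanding I/O per device and leave prefetch at its default (on).
set zfs:zfs_vdev_min_pending = 1
set zfs:zfs_vdev_max_pending = 1

* For the second check, remove the two lines above and instead
* disable prefetch, then run several dd readers concurrently:
* set zfs:zfs_prefetch_disable = 1
```

Watch the actv column in iostat -x while the test runs: the first
configuration should hold it near 1 with good throughput, and the
second should show several outstanding I/Os along with the slowdown.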

Let me know if you have any additional questions.

-j
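P.S. As a sanity check on the dd figures in the runs quoted below, the
reported wall-clock times do work out to roughly 62 and 65 MiB/s (each
run reads 10000 records of 128 KiB). A quick script to verify the
arithmetic; the labels are mine:

```python
# Sanity-check the throughput figures from the quoted dd runs:
# each run reads 10000 records of 128 KiB over the reported wall time.
bytes_read = 10000 * 128 * 1024  # 1,310,720,000 bytes

runs = {
    "ZFS, 64-bit": 20.1,
    "raw disk, 64-bit": 19.0,
    "ZFS, 32-bit": 20.1,
    "raw disk, 32-bit": 19.1,
}

for label, seconds in runs.items():
    mib_per_s = bytes_read / seconds / (1024 * 1024)
    print(f"{label}: {mib_per_s:.1f} MiB/s")
```

This prints about 62.2 MiB/s for the ZFS runs and 65.4-65.8 MiB/s for
the raw-disk runs, matching the 62 and 65 figures reported.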

On Wed, May 16, 2007 at 11:38:24AM -0700, [EMAIL PROTECTED] wrote:
> At Matt's request, I did some further experiments and have found that
> this appears to be particular to your hardware.  This is not a general
> 32-bit problem.  I re-ran this experiment on a 1-disk pool using a 32
> and 64-bit kernel.  I got identical results:
> 
> 64-bit
> ======
> 
> $ /usr/bin/time dd if=/testpool1/filebench/testfile of=/dev/null bs=128k count=10000
> 10000+0 records in
> 10000+0 records out
> 
> real       20.1
> user        0.0
> sys         1.2
> 
> 62 MB/s
> 
> # /usr/bin/time dd if=/dev/dsk/c1t3d0 of=/dev/null bs=128k count=10000
> 10000+0 records in
> 10000+0 records out
> 
> real       19.0
> user        0.0
> sys         2.6
> 
> 65 MB/s
> 
> 32-bit
> ======
> 
> $ /usr/bin/time dd if=/testpool1/filebench/testfile of=/dev/null bs=128k count=10000
> 10000+0 records in
> 10000+0 records out
> 
> real       20.1
> user        0.0
> sys         1.7
> 
> 62 MB/s
> 
> # /usr/bin/time dd if=/dev/dsk/c1t3d0 of=/dev/null bs=128k count=10000
> 10000+0 records in
> 10000+0 records out
> 
> real       19.1
> user        0.0
> sys         4.3
> 
> 65 MB/s
> 
> -j
> 
> On Wed, May 16, 2007 at 09:32:35AM -0700, Matthew Ahrens wrote:
> > Marko Milisavljevic wrote:
> > >now let's try:
> > >set zfs:zfs_prefetch_disable=1
> > >
> > >bingo!
> > >
> > >   r/s    w/s   kr/s   kw/s wait actv wsvc_t asvc_t  %w  %b device
> > > 609.0    0.0 77910.0    0.0  0.0  0.8    0.0    1.4   0  83 c0d0
> > >
> > >only 1-2% slower than dd from /dev/dsk. Do you think this is a general
> > >32-bit problem, or specific to this combination of hardware?
> > 
> > I suspect that it's fairly generic, but more analysis will be necessary.
> > 
> > >Finally, should I file a bug somewhere regarding prefetch, or is this
> > >a known issue?
> > 
> > It may be related to 6469558, but yes, please do file another bug report.
> > I'll have someone on the ZFS team take a look at it.
> > 
> > --matt
> > _______________________________________________
> > zfs-discuss mailing list
> > zfs-discuss@opensolaris.org
> > http://mail.opensolaris.org/mailman/listinfo/zfs-discuss