Re: [zfs-discuss] Re: [storage-discuss] NCQ performance

2007-05-30 Thread Robert B. Wood


On May 29, 2007, at 2:59 PM, [EMAIL PROTECTED] wrote:

When sequential I/O is done to the disk directly there is no performance
degradation at all.


All filesystems impose some overhead compared to the rate of raw disk
I/O.  It's going to be hard to store data on a disk unless some kind of
filesystem is used.  All the tests that Eric and I have performed show
regressions for multiple sequential I/O streams.  If you have data that
shows otherwise, please feel free to share.


[I]t does not take any additional time in ldi_strategy(),
bdev_strategy(), mv_rw_dma_start().  In some instances it actually
takes less time.  The only thing that sometimes takes additional time
is waiting for the disk I/O.


Let's be precise about what was actually observed.  Eric and I saw
increased service times for the I/O on devices with NCQ enabled when
running multiple sequential I/O streams.  Everything that we observed
indicated that it actually took the disk longer to service requests when
many sequential I/Os were queued.

-j

It could very well be that the on-disc cache is being partitioned
differently when NCQ is enabled in certain implementations.  For
example, with NCQ disabled, on-disc look-ahead may be enabled, netting
sequential I/O improvements.  Just guessing, as this level of disc
implementation detail is vendor-specific and generally proprietary.  I
would not expect the elevator sort algorithm to impose any performance
penalty unless it were fundamentally flawed.


There's a bit of related discussion here

I'm actually struck by the minimal gains being seen in random I/O.  A
few years ago, when NCQ was in prototype, I saw better than 50%
improvement in average random I/O response time with large queue
depths.  My gut feeling is that the issue is farther up the stack.

Bob






[zfs-discuss] Re: [storage-discuss] NCQ performance

2007-05-29 Thread Lida Horn

Point one, the comments that Eric made do not give the complete picture.
All the tests that Eric's referring to were done through the ZFS filesystem.
When sequential I/O is done to the disk directly there is no performance
degradation at all.  Second point, it does not take any additional
time in ldi_strategy(), bdev_strategy(), mv_rw_dma_start().  In some
instances it actually takes less time.  The only thing that sometimes
takes additional time is waiting for the disk I/O.
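
(For reference, a minimal sketch of the sort of direct-to-disk sequential
read described above, bypassing any filesystem; the raw device path and
sizes below are placeholders, not the actual test configuration.)

#!/usr/bin/env python
# Minimal sketch: single-stream sequential read straight from a raw device,
# bypassing the filesystem.  The device path is a placeholder; point it at
# the raw device under test and run with sufficient privileges.
import os
import time

DEV = "/dev/rdsk/c0t0d0s2"     # hypothetical raw device path
BLOCK = 1024 * 1024            # 1 MB per read
TOTAL = 256 * 1024 * 1024      # stop after 256 MB

fd = os.open(DEV, os.O_RDONLY)
start = time.time()
done = 0
while done < TOTAL:
    buf = os.read(fd, BLOCK)
    if not buf:                # end of device
        break
    done += len(buf)
elapsed = time.time() - start
os.close(fd)

print("read %d MB in %.2f s (%.1f MB/s)"
      % (done // (1024 * 1024), elapsed, done / (1024 * 1024) / elapsed))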

Regards,
Lida

eric kustarz wrote:

I've been looking into the performance impact of NCQ.  Here's what I
found out:

http://blogs.sun.com/erickustarz/entry/ncq_performance_analysis

Curiously, there's not too much performance data on NCQ available via a
Google search ...


enjoy,
eric



Re: [zfs-discuss] Re: [storage-discuss] NCQ performance

2007-05-29 Thread johansen-osdev
 When sequential I/O is done to the disk directly there is no performance
 degradation at all.  

All filesystems impose some overhead compared to the rate of raw disk
I/O.  It's going to be hard to store data on a disk unless some kind of
filesystem is used.  All the tests that Eric and I have performed show
regressions for multiple sequential I/O streams.  If you have data that
shows otherwise, please feel free to share.

 [I]t does not take any additional time in ldi_strategy(),
 bdev_strategy(), mv_rw_dma_start().  In some instances it actually
 takes less time.  The only thing that sometimes takes additional time
 is waiting for the disk I/O.

Let's be precise about what was actually observed.  Eric and I saw
increased service times for the I/O on devices with NCQ enabled when
running multiple sequential I/O streams.  Everything that we observed
indicated that it actually took the disk longer to service requests when
many sequential I/Os were queued.

-j
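
(A rough sketch of the multi-stream sequential read workload being
described above; the file paths, stream count, and block size are
illustrative only and not the tooling actually used for these
measurements.)

#!/usr/bin/env python
# Rough sketch: several concurrent sequential read streams, each over its
# own file, reporting per-stream throughput.  Paths and sizes are
# illustrative; the tests referenced in this thread used other tooling.
import os
import threading
import time

FILES = ["/pool/stream%d" % i for i in range(4)]   # hypothetical test files
BLOCK = 128 * 1024                                 # 128 KB per read

def read_stream(path, results, idx):
    fd = os.open(path, os.O_RDONLY)
    done = 0
    start = time.time()
    while True:
        buf = os.read(fd, BLOCK)
        if not buf:
            break
        done += len(buf)
    os.close(fd)
    results[idx] = done / (1024.0 * 1024.0) / (time.time() - start)

results = [0.0] * len(FILES)
threads = [threading.Thread(target=read_stream, args=(p, results, i))
           for i, p in enumerate(FILES)]
for t in threads:
    t.start()
for t in threads:
    t.join()
for i, mbps in enumerate(results):
    print("stream %d: %.1f MB/s" % (i, mbps))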




[zfs-discuss] Re: [storage-discuss] NCQ performance

2007-05-29 Thread eric kustarz


On May 29, 2007, at 1:25 PM, Lida Horn wrote:

Point one, the comments that Eric made do not give the complete picture.
All the tests that Eric's referring to were done through the ZFS filesystem.
When sequential I/O is done to the disk directly there is no performance
degradation at all.


Doing what test exactly?  Single stream from a single disk?  What
type of disk are you using?  My blog explicitly shows that there's no
difference with NCQ enabled or disabled when it's just a single-stream
sequential read (using the Hitachi AJOA disk).




Second point, it does not take any additional
time in ldi_strategy(), bdev_strategy(), mv_rw_dma_start().  In some
instances it actually takes less time.  The only thing that sometimes
takes additional time is waiting for the disk I/O.


Right, this shows that most likely the disk (firmware) is doing
pointless work trying to re-order I/Os that were already sequential.


always a pleasure lida,
eric



Re: [zfs-discuss] Re: [storage-discuss] NCQ performance

2007-05-29 Thread Lida Horn

Roch Bourbonnais wrote:


On 29 May 07, at 22:59, [EMAIL PROTECTED] wrote:

When sequential I/O is done to the disk directly there is no performance
degradation at all.


All filesystems impose some overhead compared to the rate of raw disk
I/O.  It's going to be hard to store data on a disk unless some kind of
filesystem is used.  All the tests that Eric and I have performed show
regressions for multiple sequential I/O streams.  If you have data that
shows otherwise, please feel free to share.


[I]t does not take any additional time in ldi_strategy(),
bdev_strategy(), mv_rw_dma_start().  In some instances it actually
takes less time.  The only thing that sometimes takes additional time
is waiting for the disk I/O.


Let's be precise about what was actually observed.  Eric and I saw
increased service times for the I/O on devices with NCQ enabled when
running multiple sequential I/O streams.  Everything that we observed
indicated that it actually took the disk longer to service requests when
many sequential I/Os were queued.

-j




I just posted a comment which might reconcile the positions.

It is taking longer to run the I/O because (possibly) the I/O
completion interrupt is delayed until _all_ of the N queued I/Os are
effectively done.  This is compatible with the data showing that 32
queued I/Os each take ~25 times longer with NCQ while causing only a
10-20% (or thereabouts) performance degradation.  NCQ, as currently
done, interferes with the staged I/O pipelining that ZFS tries to do.
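
(A back-of-envelope reading of those numbers via Little's law, with purely
illustrative figures; it only shows why a large per-I/O latency increase
need not mean a comparably large throughput change when many I/Os are kept
in flight.)

# Little's law: sustained throughput = outstanding I/Os / mean completion time.
# The numbers below are illustrative only, not the measured data.
w1 = 1.0                  # completion time of a lone I/O (arbitrary units)
rate_1 = 1 / w1           # throughput with a single I/O outstanding

n = 32                    # I/Os kept outstanding with NCQ
w_n = 25 * w1             # per-I/O completion time inflated ~25x
rate_n = n / w_n          # aggregate throughput with 32 outstanding

print(rate_n / rate_1)    # ~1.28: latency rose ~25x, yet aggregate throughput
                          # barely moved; whether that nets out as a 10-20%
                          # loss depends on the non-NCQ baseline at the same
                          # queue depth.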


Is it possible to have NCQ not coalesce interrupts that much?  I
suspect this would provide the best of both worlds (raw and ZFS).


-r


As I posted in a reply to your (I believe yours) blog comment, this
sort of coalescing is not occurring.  Nice idea though.

Regards,
Lida