> When sequential I/O is done to the disk directly there is no performance
> degradation at all.  

All filesystems impose some overhead compared to the rate of raw disk
I/O.  It's going to be hard to store data on a disk unless some kind of
filesystem is used.  All the tests that Eric and I have performed show
regressions for multiple sequential I/O streams.  If you have data that
shows otherwise, please feel free to share.
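
For concreteness, here is a rough sketch of the kind of workload we're
talking about: several readers, each streaming sequentially through its
own large file, with aggregate throughput reported at the end.  The
paths, stream count, block size, and duration below are made up for
illustration and are not our actual test setup.

    #!/usr/bin/env python
    # Sketch only: N concurrent sequential readers, aggregate throughput
    # at the end.  Paths, stream count, block size, and duration are all
    # hypothetical.
    import threading, time

    PATHS = ["/testpool/stream%d" % i for i in range(4)]  # hypothetical files
    BLOCK = 128 * 1024                                     # 128 KB per read
    SECONDS = 30

    totals = [0] * len(PATHS)

    def reader(idx, path):
        deadline = time.time() + SECONDS
        with open(path, "rb", buffering=0) as f:
            while time.time() < deadline:
                buf = f.read(BLOCK)
                if not buf:          # hit EOF; wrap around and keep streaming
                    f.seek(0)
                    continue
                totals[idx] += len(buf)

    threads = [threading.Thread(target=reader, args=(i, p))
               for i, p in enumerate(PATHS)]
    start = time.time()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.time() - start
    print("%d streams, %.1f MB/s aggregate"
          % (len(PATHS), sum(totals) / elapsed / 1e6))

Pointing PATHS at the raw device node instead of files in the
filesystem gives the raw-disk side of the comparison.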

> [I]t does not take any additional time in ldi_strategy(),
> bdev_strategy(), or mv_rw_dma_start().  In some instances it actually
> takes less time.  The only thing that sometimes takes additional time
> is waiting for the disk I/O.

Let's be precise about what was actually observed.  Eric and I saw
increased per-I/O service times on devices with NCQ enabled when
running multiple sequential I/O streams.  Everything we observed
indicated that the disk actually took longer to service requests when
many sequential I/Os were queued.
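
As a rough illustration of what "per-request service time" means here,
the sketch below times each read in a single sequential stream from
user level and summarizes the distribution.  It is only a proxy for the
in-driver service times being discussed (getting at the routines quoted
above would take something like DTrace); the path and block size are
again made up.

    #!/usr/bin/env python
    # Sketch only: time each read in one sequential stream and summarize
    # the latency distribution.  A userspace proxy for per-request service
    # time; path and block size are hypothetical.
    import time

    PATH = "/testpool/stream0"     # hypothetical file
    BLOCK = 128 * 1024
    COUNT = 10000

    latencies = []
    with open(PATH, "rb", buffering=0) as f:
        for _ in range(COUNT):
            t0 = time.time()
            if not f.read(BLOCK):  # hit EOF; wrap around
                f.seek(0)
            latencies.append(time.time() - t0)

    latencies.sort()
    mean_ms = 1000.0 * sum(latencies) / len(latencies)
    p99_ms = 1000.0 * latencies[int(0.99 * (len(latencies) - 1))]
    print("%d reads: mean %.2f ms, p99 %.2f ms"
          % (len(latencies), mean_ms, p99_ms))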

-j

