Frank Hofmann writes:
 > On Tue, 27 Feb 2007, Jeff Davis wrote:
 > 
 > >>
 > >> Given your question are you about to come back with a
 > >> case where you are not
 > >> seeing this?
 > >>
 > >
 > > As a follow-up, I tested this on UFS and ZFS. UFS does very poorly: the 
 > > I/O rate drops off quickly when you add processes while reading the same 
 > > blocks from the same file at the same time. I don't know why this is, and 
 > > it would be helpful if someone explained it to me.
 > 
 > UFS readahead isn't MT-aware - it starts trashing when multiple threads 
 > perform reads of the same blocks. UFS readahead only works with a 
 > single thread per file, as the readahead state, i_nextr, is per-inode 
 > (not per-thread) state. Multiple concurrent readers trash this for 
 > each other, as there's only one per file.
 > 

To qualify 'trashing': UFS loses track of the access pattern,
considers the workload random, and so does not do any
readahead.
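
In case it helps to visualize it, here is a minimal sketch (not the
actual UFS source; start_readahead/do_read are made-up stand-ins) of
how a single per-inode cursor gets trashed by concurrent readers:

    /*
     * Sketch only: one readahead cursor per inode, shared by all
     * readers of the file.
     */
    #include <sys/types.h>

    struct inode {
        off_t i_nextr;   /* next offset expected by sequential detection */
    };

    /* hypothetical helpers, stand-ins for the real block I/O paths */
    void start_readahead(struct inode *ip, off_t off);
    void do_read(struct inode *ip, off_t off, size_t len);

    void
    read_with_readahead(struct inode *ip, off_t off, size_t len)
    {
        if (off == ip->i_nextr) {
            /* looks sequential: prefetch the blocks that follow */
            start_readahead(ip, off + len);
        }
        /*
         * Two readers at different offsets keep overwriting this one
         * cursor, so neither ever matches it again; every read looks
         * random and nobody gets readahead.
         */
        ip->i_nextr = off + len;

        do_read(ip, off, len);
    }

Each reader clobbers i_nextr for the other, so the "sequential" test
stops firing even though every individual stream is sequential.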

 > >
 > > ZFS did a lot better. There did not appear to be any drop-off after the 
 > > first process. There was a drop in I/O rate as I kept adding processes, 
 > > but in that case the CPU was at 100%. I haven't had a chance to test this 
 > > on a bigger box, but I suspect ZFS is able to keep the sequential read 
 > > going at full speed (at least if the blocks happen to be written 
 > > sequentially).
 > 
 > ZFS caches multiple readahead states - see the leading comment in
 > usr/src/uts/common/fs/zfs/vdev_cache.c in your favourite workspace.
 > 

The vdev_cache is where the low-level, device-level prefetch happens
(issue an I/O for 8K, read 64K of whatever happens to be under the
disk head).

dmu_zfetch.c is where the logical prefetching occurs.
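
As a rough illustration of the vdev_cache idea (the real code in
vdev_cache.c differs; region_cached/issue_device_read and friends are
made-up helpers, and 64K is just the number from the example above):

    /*
     * Sketch only: inflate a small read to the enclosing 64K-aligned
     * region, on the theory that whatever is under the disk head is
     * nearly free to pull in.
     */
    #include <stdint.h>
    #include <stddef.h>

    #define REGION  (64 * 1024)   /* illustrative 64K prefetch window */

    /* hypothetical cache primitives over 64K-aligned regions */
    int  region_cached(uint64_t region_off);
    void issue_device_read(uint64_t off, size_t len);
    void insert_region(uint64_t region_off);
    void copy_from_cache(uint64_t off, size_t len);

    void
    device_read(uint64_t off, size_t len)
    {
        uint64_t start = off & ~(uint64_t)(REGION - 1);
        uint64_t end = (off + len + REGION - 1) & ~(uint64_t)(REGION - 1);

        for (uint64_t r = start; r < end; r += REGION) {
            if (!region_cached(r)) {
                /* one big read instead of the small request; later
                 * small reads within the region hit the cache */
                issue_device_read(r, REGION);
                insert_region(r);
            }
        }
        copy_from_cache(off, len);   /* satisfy the original small request */
    }

dmu_zfetch, by contrast, tracks prefetch streams per file/object, which
is what lets multiple sequential readers of the same file keep their
readahead going (the "multiple readahead states" Frank mentions above).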


-r

