Jonathan Edwards writes:
 > 
 > On Jan 5, 2007, at 11:10, Anton B. Rang wrote:
 > 
 > >> DIRECT IO is a set of performance optimisations to circumvent  
 > >> shortcomings of a given filesystem.
 > >
 > > Direct I/O as generally understood (i.e. not UFS-specific) is an  
 > > optimization which allows data to be transferred directly between  
 > > user data buffers and disk, without a memory-to-memory copy.
 > >
 > > This isn't related to a particular file system.
 > >
 > 
 > true .. directio(3) is generally used in the context of *any* given  
 > filesystem to advise it that an application buffer to system buffer  
 > copy may get in the way or add additional overhead (particularly if  
 > the filesystem buffer is doing additional copies.)  You can also look  
 > at it as a way of reducing more layers of indirection particularly if  
 > I want the application overhead to be higher than the subsystem  
 > overhead.  Programmatically .. less is more.
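
For readers who haven't used it, the per-file form of that advice on
Solaris looks roughly like this (the path is made up and error handling
is minimal; a sketch, not a recommendation):

    #include <sys/types.h>
    #include <sys/fcntl.h>
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
            int fd = open("/ufs/datafile", O_RDWR);  /* hypothetical path */

            if (fd < 0) {
                    perror("open");
                    return (1);
            }
            /* Advise the filesystem to bypass its buffering for this file. */
            if (directio(fd, DIRECTIO_ON) != 0)
                    perror("directio");  /* fails if the fs doesn't support it */

            /* ... reads/writes on fd now avoid the extra copy where possible ... */

            (void) directio(fd, DIRECTIO_OFF);
            (void) close(fd);
            return (0);
    }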

Direct I/O makes good sense when the target disk sectors are
set a priori. But in the context of ZFS, would you rather
have 10 direct disk I/Os, or 10 bcopies and 2 I/Os (supposing
that were possible)?
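
A back-of-the-envelope version of that trade-off (all numbers assumed,
not measured: ~8 ms per random disk I/O, 128K records, copies running
at a few GB/s of memory bandwidth, so ~40 us per copy):

    10 direct I/Os      : 10 x 8 ms             ~= 80 ms of device time
    10 copies + 2 I/Os  : 10 x 40 us + 2 x 8 ms ~= 16.4 ms

The copies are noise next to the seeks; the win is in issuing fewer,
larger I/Os.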

As for reads, I can see that when the load is cached in the
disk array and we're running at 100% CPU, the extra copy might
be noticeable. Is this the situation that longs for DIO?
What % of a system's CPU time is spent in the copy? What is the
added latency that comes from the copy? Is DIO the best way to
reduce the CPU cost of ZFS?
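
For the latency part of that question, a first-order number is easy to
get with something like the sketch below. The 128K record size and the
iteration count are assumptions, and it times a plain user-land memcpy,
not the actual in-kernel copy path, so treat the result as a ballpark:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <sys/time.h>

    #define RECSZ   (128 * 1024)    /* assumed 128K recordsize */
    #define ITERS   10000

    int
    main(void)
    {
            char *src = malloc(RECSZ);
            char *dst = malloc(RECSZ);
            hrtime_t start, end;
            int i;

            if (src == NULL || dst == NULL)
                    return (1);
            memset(src, 0xab, RECSZ);       /* fault the pages in first */
            memset(dst, 0, RECSZ);

            start = gethrtime();            /* Solaris high-res timer, ns */
            for (i = 0; i < ITERS; i++)
                    memcpy(dst, src, RECSZ);
            end = gethrtime();

            printf("~%lld ns per %d-byte copy\n",
                (long long)((end - start) / ITERS), RECSZ);
            return (0);
    }

Comparing that number against the per-block service time of the device
behind it gives a feel for how much the copy can matter.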

The current Nevada code base has quite nice performance
characteristics (and certainly quirks); there are many
further efficiency gains to be reaped from ZFS. I just don't
see DIO at the top of that list for now. Or at least someone
needs to spell out what ZFS/DIO would be and how much better it
is expected to be (back-of-the-envelope calculations accepted).

Reading RAID-Z subblocks on filesystems that have checksums
disabled might be interesting. That would avoid some disk
seeks. Whether to serve the subblocks directly or not is a
separate matter; it's a small deal compared to the feature
itself. How about disabling the DB checksum (it can't fix
the block anyway) and doing mirroring?

-r



_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
