Writes using the character interface (/dev/zvol/rdsk) are synchronous.
If you want caching, you can go through the block interface
(/dev/zvol/dsk) instead.
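For example (pool/volume names here are hypothetical):

    # raw (character) device: each write is synchronous
    dd if=/dev/zero of=/dev/zvol/rdsk/tank/testvol bs=128k count=8192

    # block device: writes may be cached by the system
    dd if=/dev/zero of=/dev/zvol/dsk/tank/testvol bs=128k count=8192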
- Eric
After reading many, many threads on ZFS performance today (top of the list in the forum, and some chains of references), I applied a bit of tuning to the server. In particular, I've set zfs_write_limit_override to 384 MB so my cache is spooled to disks more frequently (if streaming lots of w…
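For reference, this is roughly how that tunable is set (the /etc/system entry needs a reboot; the mdb form is the usual way to change it live; the value shown is the 384 MB I used):

    # /etc/system: cap a transaction group at 384 MB (0x18000000 bytes)
    set zfs:zfs_write_limit_override = 0x18000000

    # or live via mdb (takes effect immediately, not persistent):
    echo zfs_write_limit_override/W0t402653184 | mdb -kw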
On Jul 9, 2009, at 4:22 AM, Jim Klimov wrote:
To tell the truth, I expected zvols to be faster than filesystem datasets. They seem to have less overhead without inodes, POSIX, ACLs and so on. So I'm puzzled by the test results.
I'm now considering the dd I/O block size, and it means a lot indeed, especially if compared to zvol results with sm…
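To illustrate the kind of comparison I mean (names and sizes are made up):

    # 1 GiB to the zvol's raw device, small vs. large blocks
    dd if=/dev/zero of=/dev/zvol/rdsk/tank/testvol bs=4k count=262144
    dd if=/dev/zero of=/dev/zvol/rdsk/tank/testvol bs=1024k count=1024

    # same amount into a file on a filesystem dataset
    dd if=/dev/zero of=/tank/testfs/bigfile bs=1024k count=1024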
Hmm, scratch that. Maybe.
At first I didn't get the point that your writes to a filesystem dataset work quickly. Perhaps the filesystem is indeed (better) cached, i.e. *maybe* zvol writes are synchronous while zfs filesystem writes can be cached and thus async? Try playing around with the relevant dataset attributes.
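A minimal sketch of what I mean by "playing around" (property names as in current OpenSolaris builds; the zil_disable tunable is for testing only, since it makes all sync writes async and is unsafe for real data):

    # compare caching-related properties on the two datasets
    zfs get primarycache,compression,checksum tank/testfs tank/testvol

    # /etc/system, testing only: disable the ZIL globally
    set zfs:zil_disable = 1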
Do you have any older benchmarks on these cards and arrays (from their pre-ZFS life)? Perhaps this is not a ZFS regression but a hardware config issue? Perhaps there's some caching (like per-disk write-through) not enabled on the arrays? As you may know, the ability (and reliability) of such cache…
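For instance, the per-disk write cache can be inspected from format's expert mode (interactive; the cache menu is only offered for disks whose driver supports it):

    # format -e, then: select disk -> cache -> write_cache -> display
    format -e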