Moinak Ghosh wrote:
Roland Mainz wrote:
Ian Murdock wrote:
Shawn Walker wrote:
However, initially the most sane approach is to pick one filesystem
so that the initial effort can be focused on a great experience and
can take advantage of that filesystem's features.
Agreed. This is all about keeping the focus tight. In terms of "bang
for the buck", we can't go far wrong with ZFS, since it's what the
industry is abuzz over, and we can do all kinds of neat things with it
from an end user point of view (as is currently being explored in this
thread). For other workloads we care about, such as HPC, other file
systems might make more sense.
It's not only about "HPC" and high-end scenarios. There are other
examples, like virtualised hardware (e.g. VMware or Xen), where ZFS's
checksumming eats lots of CPU cycles without any benefit, since the
underlying host OS may already do the same. For example, in a typical
VMware installation with 384 MB of RAM, ZFS drains so many resources
(CPU and memory) that normal development work becomes painful.
You do have an interesting point. Some of this can, however, be tuned:
- Disable checksumming: zfs set checksum=off <dataset>
- Disable the ZIL: set zfs:zil_disable = 1 in /etc/system
- Clamp the maximum memory used by the ARC: set zfs:zfs_arc_max =
  <bytes> in /etc/system
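Roughly like this, as a sketch — the dataset name and the ARC cap are
placeholders you'd pick for your own VM, and the /etc/system settings
need a reboot to take effect:

```shell
# Disable per-block checksumming on one dataset
# ("tank/dev" is a placeholder for your dataset name)
zfs set checksum=off tank/dev

# Lines for /etc/system (applied at next boot):
#
#   * Disable the ZFS intent log (trades synchronous-write
#     durability for speed -- fine for throwaway dev VMs only)
# set zfs:zil_disable = 1
#
#   * Cap the ARC; value is in bytes (0x8000000 = 128 MB here,
#     chosen only as an example for a small VM)
# set zfs:zfs_arc_max = 0x8000000
```

Note that checksum=off only affects newly written blocks, and
zil_disable/zfs_arc_max are system-wide, not per-pool.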
Another possibility is to assign physical disks or partitions to the
VMware machine and create a pool on those, so that you bypass the host
OS filesystem. In that case, however, you'd have to create fdisk
partitions from within the VM and use those for the pool; otherwise
ZFS will put an EFI label on the disk, and the host machine may refuse
to boot.
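A minimal sketch of that second approach from inside the VM — the
device name c1d1 is an assumption and will differ on your system:

```shell
# Write a default Solaris fdisk partition table (and SMI label)
# across the whole raw disk; p0 addresses the entire disk
fdisk -B /dev/rdsk/c1d1p0

# Build the pool on the fdisk partition (p1), not on the bare
# disk (c1d1) -- giving ZFS the whole disk is what triggers the
# EFI relabel that can confuse the host at boot
zpool create vmpool c1d1p1
```

After that, zpool status should show the pool backed by c1d1p1 and the
disk keeps its PC-style fdisk/SMI labeling.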
Regards,
Moinak.
_______________________________________________
indiana-discuss mailing list
[email protected]
http://opensolaris.org/mailman/listinfo/indiana-discuss