On 2012-11-08 17:49, Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris) wrote:

>> From: zfs-discuss-boun...@opensolaris.org
>> [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Karl Wagner
>>
>> I am just wondering why you export the ZFS system through NFS? I have
>> had much better results (albeit spending more time setting up) using
>> iSCSI. I found that performance was much better,
> A couple years ago, I tested and benchmarked both configurations on
> the same system. I found that the performance was equal both ways
> (which surprised me, because I expected NFS to be slower due to FS
> overhead). I cannot say whether CPU utilization was different, but the
> IO measurements were the same; at least, indistinguishably different.
>
> Based on those findings, I opted to use NFS, for several weak reasons:
>
> If I wanted to, I could export NFS to more different systems. I know
> everything nowadays supports iSCSI initiation, but it's not as easy to
> set up as an NFS client.
>
> If you want to expand the guest disk under iSCSI... I'm not completely
> sure you *can* expand a zvol, but if you can, you at least have to shut
> everything down, expand, bring it all back up, and then have the iSCSI
> initiator expand to occupy the new space. But with NFS, the client can
> simply expand, no hassle.
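
As it happens, a zvol can be grown in place without a full shutdown; a
rough sketch, with hypothetical pool/volume names and a Linux initiator
assumed:

    # On the ZFS host: grow the zvol (name is hypothetical)
    zfs set volsize=30G tank/vm/guest1

    # On a Linux initiator: rescan the session so the larger LUN shows up
    iscsiadm -m session --rescan

    # Then grow the partition and filesystem inside the guest as usual
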
> I like being able to look in a filesystem and see the guests listed
> there as files, knowing I could, if I wanted to, copy those things out
> to any type of storage I wish. Someday, perhaps I'll want to move some
> guest VMs over to a BTRFS server instead of ZFS; that would be more
> difficult with iSCSI.
> For what it's worth, in more recent times I've opted to use iSCSI, and
> here are the reasons:
>
> When you create a guest file in a ZFS filesystem, it doesn't
> automatically get a refreservation. This means that if you run out of
> disk space thanks to snapshots and such, the guest OS suddenly can't
> write to disk, and that's a hard guest crash/failure. Yes, you can
> manually set the refreservation if you're clever, but it's easy to get
> wrong. If you create a zvol, by default it has an appropriately sized
> refreservation that guarantees the guest will always be able to write
> to disk.
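
That difference shows up directly in the dataset properties; a quick
sketch, with hypothetical dataset names:

    # A zvol gets a refreservation sized to cover the volume by default
    zfs create -V 20G tank/vm/guest1
    zfs get refreservation tank/vm/guest1

    # A filesystem holding guest image files gets none; you must size
    # and set one yourself, which is the easy-to-get-wrong part
    zfs set refreservation=25G tank/vmfs
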
> Although I got the same performance using iSCSI or NFS with ESXi... I
> did NOT get the same result using VirtualBox. In VirtualBox, if I use a
> *.vdi file, the performance is *way* slower than using a *.vmdk wrapper
> for a physical device (zvol), created with "VBoxManage internalcommands
> createrawvmdk".
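
For reference, the wrapper is created roughly like this (file and
device paths hypothetical):

    # Wrap the zvol's raw device node in a .vmdk VirtualBox can attach
    VBoxManage internalcommands createrawvmdk \
        -filename /vm/guest1.vmdk \
        -rawdisk /dev/zvol/rdsk/tank/vm/guest1
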
> The only problem with the zvol/vmdk idea in VirtualBox is that after
> every reboot (or remount) the zvol becomes owned by root again, so I
> have to manually chown the zvol for each guest each time I restart the
> host.
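
A crude workaround would be a boot-time script that re-applies the
ownership; a sketch, with a hypothetical user and device path:

    #!/bin/sh
    # Re-chown each guest's zvol device after boot/remount, since the
    # device nodes revert to root (user and paths are hypothetical)
    for vol in /dev/zvol/rdsk/tank/vm/*; do
        chown vboxuser "$vol"
    done
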

Fair enough, thanks for the info. 

As I say, it was quite a while back, and I was using either Xen or KVM
(can't remember which). It may be that the performance profiles are/were
just very different. I was also just using an old desktop for testing
purposes, which skews the performance too (it was far too memory- and
CPU-limited to be used for real).

If I were doing this now, I would probably run the ZFS-aware OS on bare
metal, but I still think I would use iSCSI to export the zvols (mainly
for the ability to use it across a real network, allowing guests to be
migrated simply).
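
On an illumos-style host that's only a handful of COMSTAR commands; a
sketch, with hypothetical pool/volume names:

    # Enable the SCSI target framework and the iSCSI target service
    svcadm enable stmf
    svcadm enable -r svc:/network/iscsi/target:default

    # Back a logical unit with the zvol, expose it, and create a target
    stmfadm create-lu /dev/zvol/rdsk/tank/vm/guest1
    stmfadm add-view 600144f0...   # LU GUID printed by create-lu
    itadm create-target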
