-----Original Message-----
From: Edward Ned Harvey (opensolarisisdeadlongliveopensolaris)
[mailto:opensolarisisdeadlongliveopensola...@nedharvey.com] 
Sent: Wednesday, November 07, 2012 11:44 PM
To: Dan Swartzendruber; Edward Ned Harvey
(opensolarisisdeadlongliveopensolaris)
Cc: Tiernan OToole; zfs-discuss@opensolaris.org
Subject: RE: [zfs-discuss] Dedicated server running ESXi with no RAID card,
ZFS for storage?

> From: Dan Swartzendruber [mailto:dswa...@druber.com]
> 
> I'm curious here.  Your experience is 180 degrees opposite from mine.
> I run an all-in-one in production and I get native disk performance, 
> and ESXi virtual disk I/O is faster than with a physical SAN/NAS for 
> the NFS datastore, since the traffic never leaves the host (I get 
> 3 Gb/sec or so of usable throughput.)

What is an "all in one"?
I wonder if we crossed wires somehow...  I thought Tiernan said he was
running Nexenta inside of ESXi, where Nexenta exports NFS back to the ESXi
machine, so ESXi will have the benefit of ZFS underneath its storage.

*** This is what we mean by 'all in one': ESXi with a single storage guest
(OI, say) running on a small local disk.  That guest has one or more HBAs
passed to it via PCI passthrough, with the real data disks attached.  It runs
ZFS with a data pool on those disks and serves a datastore back to ESXi via
NFS.  The other guests, with their vmdks, reside on that datastore.  So yes,
we're talking about the same thing.
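
*** For anyone following along at home, here's roughly what the storage-guest
side looks like.  This is only a sketch; the pool name, disk names, and
addresses (tank, c3t0d0/c3t1d0, 192.168.1.0/24, 192.168.1.20) are made up for
illustration and will differ on your box:

    # inside the OI storage guest, using the disks on the passed-through HBA
    zpool create tank mirror c3t0d0 c3t1d0
    zfs create tank/vmstore
    zfs set sharenfs='rw=@192.168.1.0/24,root=@192.168.1.0/24' tank/vmstore

    # on the ESXi host, mount that export as an NFS datastore
    esxcli storage nfs add -H 192.168.1.20 -s /tank/vmstore -v zfs-datastore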

That's what I used to do.

When I said performance was abysmal, I meant: if you dig right down and
pressure the system for throughput to disk, you've got a Linux or Windows VM
inside of ESX, which is writing to a virtual disk, which ESX is then
wrapping up inside NFS and TCP, talking on the virtual LAN to the ZFS
server, which unwraps the TCP and NFS, pushes it all through the ZFS/zpool
layer, writing back to the virtual disk that ESX gave it, which is itself a
layer on top of Ext3, before it finally hits disk.  Based purely on CPU and
memory throughput, my VM guests were seeing a max throughput of around 2-3
Gbit/sec.  That's not *horribly* abysmal.  But it's bad to be CPU/memory/bus
limited when you could just eliminate all those extra layers and do the
virtualization directly inside a system that supports ZFS.

*** I guess I don't think 300 MB/sec of aggregate disk I/O for your guests is
abysmal.  Also, your analysis misses the crucial point that none of us are
talking about the virtualized SAN/NAS writing to vmdks passed to it, but
rather to actual disks via PCI passthrough.  As I said, I can get near-native
disk I/O this way.  As far as the ESXi vs. VirtualBox thing goes, I think
that's a matter of taste...
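
*** If anyone wants to sanity-check the near-native claim on their own setup,
a quick-and-dirty comparison (the pool name 'tank' and the file path are just
placeholders):

    # inside the OI storage guest: watch the physical disks directly
    zpool iostat -v tank 5

    # inside a Linux guest whose vmdk lives on the NFS datastore:
    # rough sequential write test (flushes to disk at the end)
    dd if=/dev/zero of=/var/tmp/ddtest bs=1M count=4096 conv=fsync

If the dd numbers in the guest track what zpool iostat shows on the raw
disks, the extra NFS/vmdk layers aren't costing you much.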

_______________________________________________
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss