Re: [zfs-discuss] maczfs / ZEVO

2013-02-15 Thread Hearn, Christopher
On Feb 15, 2013, at 11:08 AM, Edward Ned Harvey 
(opensolarisisdeadlongliveopensolaris) 
opensolarisisdeadlongliveopensola...@nedharvey.com
 wrote:

Anybody using maczfs / ZEVO?  Have good or bad things to say, in terms of 
reliability, performance, features?

My main reason for asking is this:  I have a Mac, I use Time Machine, and I have 
VMs on it.  Time Machine, while great in general, has the limitation of being 
unable to intelligently identify changed bits inside a VM file.  So you have to 
exclude the VM from Time Machine, and you have to run backup software inside the 
VM.

I would greatly prefer, if it's reliable, to let the VMs reside on ZFS and use 
zfs send to back up my guest VMs.
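
Roughly what I have in mind, assuming a dedicated dataset for the guest images - 
the pool and dataset names (tank, backup, vms) are just placeholders:

  # one-time: put the guest images on their own dataset
  sudo zfs create tank/vms
  # per backup: snapshot, then replicate the snapshot to a backup pool
  sudo zfs snapshot tank/vms@2013-02-15
  sudo zfs send tank/vms@2013-02-15 | sudo zfs receive backup/vms
  # later backups only ship the delta since the previous snapshot
  sudo zfs snapshot tank/vms@2013-02-16
  sudo zfs send -i @2013-02-15 tank/vms@2013-02-16 | sudo zfs receive backup/vms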

I am not looking to replace HFS+ as the primary filesystem of the Mac; although 
that would be cool, there's often a reliability benefit to staying on the 
supported, beaten-path, standard configuration.  But if ZFS can be used to hold 
the guest VM storage reliably, I would benefit from that.

Thanks...
___

ZEVO's great as long as you don't mind managing everything from the command 
line.  I had to figure out how to identify the disks, since it handles them a 
little differently on Mac OS.  I had some minor issues hosting iTunes/iPhoto 
libraries on ZFS volumes: they were a little more sluggish and would freeze up 
slightly from time to time.  Other than that it worked fine.  I'm back on HFS+ 
for iTunes/iPhoto for now, but I'm hopeful that will be resolved in a future 
release so I can switch back again.
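
For what it's worth, finding the right disk was just a matter of cross-checking 
diskutil against zpool - roughly like this, with the device name obviously just 
an example:

  diskutil list                      # find the BSD device for the target disk, e.g. disk2
  sudo zpool create tank /dev/disk2  # whole-disk names here, not the c0t0d0 style you'd see on Solaris
  zpool status tank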

MacZFS gives you zpool version 8 and zfs version 2, whereas ZEVO gives zpool 
version 28 and zfs version 5, so make your decision accordingly.  I have not 
tried MacZFS in a long time, so I couldn't say whether it is better or worse 
than ZEVO.
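
If you want to check what a given build supports, the stock commands will tell 
you ('tank' is just a placeholder pool name):

  zpool upgrade -v        # lists the pool versions this build understands
  zpool get version tank  # version of an existing pool
  zfs get version tank    # filesystem (zfs) version of a dataset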

Chris
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss


Re: [zfs-discuss] all in one server

2012-09-18 Thread Hearn, Christopher
On Sep 18, 2012, at 10:40 AM, Dan Swartzendruber wrote:

On 9/18/2012 10:31 AM, Eugen Leitl wrote:
I'm currently thinking about rolling a variant of

http://www.napp-it.org/napp-it/all-in-one/index_en.html

with remote backup (via snapshot and send) to 2-3
other (HP N40L-based) ZFS boxes for production in
our organisation. The systems themselves would
be either Dell or Supermicro (the latter with
ZIL/L2ARC on SSD plus SAS disks in mirrored pools,
all with hardware pass-through).

The idea is to use ZFS for data integrity and
backup via snapshots (especially important
data will also be backed up to conventional DLT
tapes).
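
Per box, the replication step would be roughly along these lines (hostnames,
pool and dataset names are just placeholders):

  # nightly: recursive snapshot, then ship the delta to one of the N40L backup boxes
  zfs snapshot -r tank/data@2012-09-18
  zfs send -R -i @2012-09-17 tank/data@2012-09-18 | ssh backup1 zfs receive -F backup/data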

Before I test this --

Is anyone using this in production? Any caveats?

I run an all-in-one and it works fine: a Supermicro X9SCL-F with 32GB of ECC 
RAM, 20GB of which goes to the OpenIndiana SAN VM, with an IBM M1015 passed 
through via VMDirectPath (PCI passthrough).  Four nearline SAS drives in a 2x2 
mirror config in a JBOD chassis, plus two Samsung 830 128GB SSDs as L2ARC.  The 
main caveat is to order the VMs properly for auto-start (assuming you use that, 
as I do): the OI VM goes first, and I give it a good 120 seconds before starting 
the other VMs.  For auto shutdown, all VMs except OI do a suspend; OI does a 
shutdown.  The big caveat: do NOT use iSCSI for the datastore, use NFS.  Maybe 
there's a way to fix this, but I found that on startup ESXi would time out the 
iSCSI datastore mount before the virtualized SAN VM was up and serving the 
share - bad news.  NFS seems to be more resilient there.  vmxnet3 vNICs should 
work fine for the OI VM, but you might want to stick with e1000.
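
On the OI side the export is just the usual sharenfs dataset property - 
something like this (the dataset name and addressing are examples):

  zfs create tank/vmstore
  zfs set sharenfs=on tank/vmstore   # or a more restrictive rw=@<esxi-subnet> access list
  # then add an NFS datastore in ESXi pointing at <OI VM IP>:/tank/vmstore
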
Can I actually have a year's worth of snapshots in
zfs without too much performance degradation?

Dunno about that.


I did something similar:  
http://churnd.wordpress.com/2011/06/27/zfsesxi-all-in-one-part-1/

Works great, though I need to bump the RAM up to 32GB.