[…] use or modify it for our purposes.
On Sat, Nov 5, 2011 at 9:18 AM, Jim Klimov wrote:
> 2011-11-05 2:12, HUGE | David Stahl wrote:
>> Our problem is that we need to use -R to snapshot and send all
>> the child zvols, yet since we have a lot of data (3.5 TB), the hourly
>> snapshots […], yet we seem to use that feature, as people here tend to
>> accidentally delete stuff off the server. Or perhaps we could disable
>> the hourlies service at the beginning of the script and re-enable it
>> at the end.
>> Or is there a better way of doing this that I am not seeing?
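A rough sketch of the disable/send/re-enable idea above, assuming the OpenSolaris auto-snapshot SMF service; the pool name `tank` and the host `backuphost` are placeholders, not details from the thread:

```shell
# Sketch only: pool name, target host, and receive-side dataset are assumptions.
# Pause the hourly auto-snapshots so none are created or destroyed while the
# recursive send is running, then turn them back on afterwards.
SNAP="tank@backup-$(date +%Y%m%d%H%M)"

svcadm disable svc:/system/filesystem/zfs/auto-snapshot:hourly
zfs snapshot -r "$SNAP"                                   # pool plus all child zvols
zfs send -R "$SNAP" | ssh backuphost "zfs recv -dF backup"
svcadm enable svc:/system/filesystem/zfs/auto-snapshot:hourly
```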
Does anyone have any thoughts?
--
HUGE
David Stahl
Sr. Systems Administrator
718 233 9164
www.hugeinc.com <http://www.hugeinc.com>
[…] done for mounting these kinds of SSDs inside
a PowerEdge case?
Any suggestions on anything else?
-D
From: Vladimir Novakovic
Date: Wed, 12 Aug 2009 17:45:11 +0200
To: zfs-discuss
Subject: Re: [zfs-discuss] zpool import -f rpool hangs
Hi David,
Thanks for the tip.
I wonder if one problem is that you already have an rpool when you are booted off
the CD.
Could you do
zpool import rpool rpool2
to rename it?
Also, if the system keeps rebooting on crash, you could add these to your /etc/system
(but not if you are booting from disk):
set zfs:zfs_recover=1
set aok=1
The real benefit of using a separate zvol for each VM is the
instantaneous cloning of a machine, and the clone will take almost no
additional space initially. In our case we build a template VM and then
provision our development machines from it.
However, the limit of 32 NFS mounts per […]
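The template-and-clone provisioning described above might look like the following; the dataset names are made up for illustration:

```shell
# Hypothetical dataset names. A clone initially shares all of its blocks
# with the snapshot it was created from, so it consumes almost no extra
# space until the clone starts diverging from the template.
zfs snapshot tank/vm/template@gold                 # freeze the template VM
zfs clone tank/vm/template@gold tank/vm/dev01      # near-instant, near-zero-cost copy
zfs clone tank/vm/template@gold tank/vm/dev02
```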
I would think you would run into the same problem I have, where you can't
view child zvols from a parent zvol NFS share.
> From: Scott Meilicke
> Date: Fri, 19 Jun 2009 08:29:29 PDT
> To:
> Subject: Re: [zfs-discuss] ZFS, ESX ,and NFS. oh my!
>
> So how are folks getting around the NFS speed
I actually prefer NFS for right now. We had an issue with iSCSI where we
lost some data and were unable to recover it because Solaris cannot read
the proprietary VMFS.
[…] mounts, period?
> From: Ryan Arneson
> Date: Tue, 16 Jun 2009 15:14:31 -0600
> To: HUGE | David Stahl
> Cc:
> Subject: Re: [zfs-discuss] ZFS, ESX ,and NFS. oh my!
[…] server. But
you cannot have more than one VMkernel on the same subnet.
Does anyone have any experience with overcoming these limitations?