If your pool is at version 19 or later, you should be able to import a pool
with a missing log device by using the '-m' option to 'zpool import'.
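For anyone following along, the sequence looks roughly like this (the pool name 'tank' and the device GUID are placeholders — use whatever 'zpool status'/'zpool import' report on your system):

```shell
# List importable pools; a pool with a missing slog shows the log
# device as UNAVAIL and normally refuses to import.
zpool import

# Import anyway, discarding the missing log device.
zpool import -m tank

# On pool version 19+ the dead slog can then be removed from the
# configuration, using the device name or GUID from 'zpool status'.
zpool remove tank <log-device-or-guid>
```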
- George
On Sat, Oct 23, 2010 at 10:03 PM, David Ehrmann wrote:
> > > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@o
On 10/23/2010 8:22 PM, Anil wrote:
We have Sun STK RAID cards in our x4170 servers. These are battery-backed
with a 256 MB cache.
What is the recommended ZFS configuration for these cards?
Right now, I have created a one-to-one logical volume to disk mapping
on the RAID card (one disk == one volume
> > From: zfs-discuss-boun...@opensolaris.org [mailto:zfs-discuss-boun...@opensolaris.org] On Behalf Of Roy Sigurd Karlsbakk
> >
> > Last I checked, you lose the pool if you lose the slog on zpool
> > versions < 19. I don't think there is a trivial way around this.
>
> The actual dat
We have Sun STK RAID cards in our x4170 servers. These are battery-backed
with a 256 MB cache.
What is the recommended ZFS configuration for these cards?
Right now, I have created a one-to-one logical volume to disk mapping
on the RAID card (one disk == one volume on RAID card). Then, I mirror
them u
I set up a slog and cache device using slices on a 30g OCZ Vertex 2
about a year ago. In late August, I decided that it wasn't worth it to
have the slog on a drive that might fail to honor cache flushes,
especially since most of the writes I'm doing are via smb which don't
need a slog as direly as
On Sun, Sep 19, 2010 at 12:37 AM, Steve Arkley wrote:
> is there any way to get the data over onto the other drive at all?
You can create a new pool on the replacement drive and use send | recv
to populate it.
There are a few properties that are normally set on a boot pool,
namely bootfs, which p
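A minimal sketch of that migration — pool, device, and dataset names here are all placeholders, not anything from the original thread:

```shell
# Create a pool on the replacement drive.
zpool create newpool c1t1d0s0

# Snapshot the old pool recursively and stream it into the new one.
zfs snapshot -r oldpool@migrate
zfs send -R oldpool@migrate | zfs recv -Fdu newpool

# For a boot pool, point bootfs at the root dataset on the new pool.
zpool set bootfs=newpool/ROOT/snv_134 newpool
```

(For a bootable pool you would also need to install boot blocks on the new drive, e.g. with installgrub on x86.)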
On Thu, Sep 30, 2010 at 9:41 AM, Ian Levesque wrote:
> Thanks for your email; I noticed that link before sending this to the list.
> Unfortunately, I'm running b134+ and there aren't any clones reported via zdb.
Are there holds on any of the snapshots that need to be removed? I
remember that giv
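Checking for holds is quick (dataset and snapshot names are placeholders):

```shell
# List user holds on a snapshot.
zfs holds tank/fs@snap

# Release the hold ('keep' stands in for whatever tag 'zfs holds'
# reports), after which the snapshot can be destroyed.
zfs release keep tank/fs@snap
zfs destroy tank/fs@snap
```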
Cindy Swearingen writes:
> Hi Harry,
>
> Generally, you need to use zpool clear to clear the pool errors, but I
> can't reproduce the removed files reappearing in zpool status on my own
> system when I corrupt data so I'm not sure this will help. Some other
> larger problem is going on here...
>
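For reference, the sequence Cindy is describing looks roughly like this ('tank' is a placeholder pool name):

```shell
# Show which files are flagged as permanently damaged.
zpool status -v tank

# After removing or restoring the affected files, reset the error counters.
zpool clear tank

# A scrub confirms whether the errors are really gone.
zpool scrub tank
zpool status -v tank
```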
On Oct 22, 2010, at 10:40 AM, Miles Nordin wrote:
>> "re" == Richard Elling writes:
>
> re> The risk here is not really different than that faced by
> re> normal disk drives which have nonvolatile buffers (eg
> re> virtually all HDDs and some SSDs). This is why applications
comments at the bottom...
On Oct 23, 2010, at 1:48 AM, Erik Trimble wrote:
> On 10/22/2010 8:44 PM, Haudy Kazemi wrote:
>> Never Best wrote:
>>> Sorry I couldn't find this anywhere yet. For deduping it is best to have
>>> the lookup table in RAM, but I wasn't too sure how much RAM is suggested?
On Oct 23, 2010, at 4:31 AM, Ian D wrote:
>> Likely you don't have enough ram or CPU in the box.
>
> The Nexenta box has 256G of RAM and the latest X7500 series CPUs. That said,
> the load does get crazy high (like 35+) very quickly. We can't figure out
> what's taking so much CPU. It happen
Greetings,
First off, I'm new to this and don't quite understand what I'm doing.
I would like different groups in my workplace to have their own folders.
I would like each file and folder underneath the parent folders to
inherit the ACL and group ownership of the directory.
I'm using ACL's i
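One way to sketch this setup on a ZFS filesystem — the paths and group names below are made up for the example:

```shell
# Group-owned parent directory; the setgid bit makes new files
# inherit the directory's group ownership.
mkdir -p /tank/projects/engineering
chgrp engineering /tank/projects/engineering
chmod g+s /tank/projects/engineering

# Add an inheritable ACE: the file_inherit/dir_inherit flags make new
# files and subdirectories pick up the entry automatically.
chmod A+group:engineering:full_set:file_inherit/dir_inherit:allow \
    /tank/projects/engineering

# Verify the resulting ACL.
ls -dV /tank/projects/engineering
```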
> I don't think the switch model was ever identified...perhaps it is a 1 GbE
> switch with a few 10 GbE ports? (Drawing at straws.)
It is a Dell 8024F. It has 24 SFP+ 10GbE ports and all the NICs we connect to
it are Intel X520. One issue we do have with it is when we turn jumbo frames
on,
Hi list,
while preparing for the changed ACL/mode_t mapping semantics coming
with onnv-147 [1], I discovered that in onnv-134 on my system ACLs are
not inherited when aclmode is set to passthrough for the filesystem.
This very much puzzles me. Example:
$ uname -a
SunOS os 5.11 snv_134 i86pc i386
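One thing worth double-checking here (dataset name below is a placeholder): inheritance of ACEs is governed by the aclinherit property, while aclmode only controls what happens to an ACL when chmod(2) is applied, so passthrough may need to be set on aclinherit as well:

```shell
# Show both ACL-related properties on the filesystem.
zfs get aclmode,aclinherit tank/fs

# aclinherit is the property that controls inheritance of ACEs.
zfs set aclinherit=passthrough tank/fs
```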
> Likely you don't have enough ram or CPU in the box.
The Nexenta box has 256G of RAM and the latest X7500 series CPUs. That said,
the load does get crazy high (like 35+) very quickly. We can't figure out
what's taking so much CPU. It happens even when checksum/compression/dedup
are off.
> A network switch that is being maxed out? Some switches cannot switch
> at rated line speed on all their ports all at the same time. Their
> internal buses simply don't have the bandwidth needed for that. Maybe
> you are running into that limit? (I know you mentioned bypassing the
On 10/22/2010 8:44 PM, Haudy Kazemi wrote:
Never Best wrote:
Sorry I couldn't find this anywhere yet. For deduping it is best to
have the lookup table in RAM, but I wasn't too sure how much RAM is
suggested?
::Assuming 128KB Block Sizes, and 100% unique data:
1TB*1024*1024*1024/128 = 8388608
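Carrying that arithmetic one step further: a commonly quoted rule of thumb (an approximation, not an exact figure) is roughly 320 bytes of RAM per DDT entry:

```shell
# 1 TiB of unique data at 128 KB records, as in the calculation above.
blocks=$(( 1024**4 / (128 * 1024) ))
echo "$blocks"    # 8388608 entries

# At an assumed ~320 bytes per DDT entry:
echo "$(( blocks * 320 / 1024 / 1024 )) MiB"    # 2560 MiB, i.e. ~2.5 GiB
```

So even fully unique data at the default recordsize implies a few GiB of dedup table per TiB, and smaller block sizes multiply that figure.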
I actually have three Dell R610 boxes running OSol snv134 and since I
switched from the internal Broadcom NICs to Intel ones, I didn't have
any issue with them.
budy
___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/