> at least one location:
>
> When adding a new dva node into the tree, a kmem_alloc is done with
> a KM_SLEEP argument.
>
> Thus, this thread could block waiting for memory.
>
> I would suggest adding a pre-allocated pool of dva nodes.
This is how the Solari
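To make the suggestion above concrete, here is a minimal kernel-style C sketch of a pre-allocated node pool built on the Solaris kmem_cache and list interfaces. The names dva_node_t, dva_node_cache, dva_node_reserve_fill() and dva_node_alloc() are hypothetical, not taken from the ZFS source, and the setup calls (kmem_cache_create(), list_create(), mutex_init()) are omitted; this is only an illustration of the idea, not the actual fix.

#include <sys/kmem.h>
#include <sys/list.h>
#include <sys/mutex.h>

typedef struct dva_node {
	list_node_t	dn_link;	/* linkage while on the reserve list */
	/* ... payload fields for the tree node would go here ... */
} dva_node_t;

static kmem_cache_t	*dva_node_cache;	/* backing kmem cache */
static list_t		dva_node_reserve;	/* pre-allocated nodes */
static kmutex_t		dva_node_lock;

/* Refill the reserve from a context where sleeping is acceptable. */
static void
dva_node_reserve_fill(int want)
{
	while (want-- > 0) {
		dva_node_t *dn = kmem_cache_alloc(dva_node_cache, KM_SLEEP);

		mutex_enter(&dva_node_lock);
		list_insert_head(&dva_node_reserve, dn);
		mutex_exit(&dva_node_lock);
	}
}

/* Insert path: never sleeps; falls back to the reserve if memory is tight. */
static dva_node_t *
dva_node_alloc(void)
{
	dva_node_t *dn = kmem_cache_alloc(dva_node_cache, KM_NOSLEEP);

	if (dn == NULL) {
		mutex_enter(&dva_node_lock);
		dn = list_remove_head(&dva_node_reserve);
		mutex_exit(&dva_node_lock);
	}
	return (dn);	/* may still be NULL if the reserve is exhausted */
}

The point is simply that the path adding nodes to the tree never has to block; a background or open-time call to dva_node_reserve_fill() keeps the reserve topped up.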
Hi Experts,
I got two questions below.
1. Is there any mechanism to protect the ZFS mount point from being renamed
via the mv command? Currently I can use "mv" to rename a mount point that has a
ZFS filesystem mounted on it. Of course, Solaris will then find no mount point
for the ZFS filesystem.
group,
at least one location:
When adding a new dva node into the tree, a kmem_alloc is done with
a KM_SLEEP argument.
Thus, this thread could block waiting for memory.
I would suggest adding a pre-allocated pool of dva nodes.
When a new
Michael Phua - PTS wrote:
Hi,
Our customer has a Sun Fire X4100 with Solaris 10 using ZFS and a HW RAID
array (STK D280).
He has extended a LUN on the storage array and wants to make this new size
known to ZFS and Solaris.
Does anyone know whether this can be done, and if so, how?
Unfortun
ZFS will currently panic on a write failure to a non-replicated pool.
In the case below, the intent log (though it could have been any module)
could not write an intent log block. Here's a previous response from Eric
Schrock explaining how ZFS intends to handle this:
Yes,
Hi,
No, I can't offer insight, but I do have some questions
that are not really on topic.
What version of Solaris are you running? Is this
the console output at the time of the panic? When did the
panic code (or mdb) learn about frame recycling?
Or are you using scat to get this output?
thanks,
max
On
Could someone offer insight into this panic, please?
panic string: ZFS: I/O failure (write on off 0: zio 6000c5fbc00
[L0 ZIL intent log] 1000L/1000P DVA[0]=<1:249b68000:1000> zilog uncompressed
BE contiguous birth=318892 fill=0 cksum=3b8f19730caa4327:9e102
panic kernel thread: 0x2a101
I just updated the ZFS FAQ with what little info we have on third-party
backup support in ZFS.
http://www.opensolaris.org/os/community/zfs/faq/
-Mark
eric kustarz <[EMAIL PROTECTED]> wrote:
> Ben Rockwood wrote:
> I imagine what's happening is that tar is a single-threaded application
> and it's basically doing: open, asynchronous write, close. This will go
> really fast locally. But for NFS, due to the way it does cache
> consistency, on
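To make the pattern Eric describes concrete, here is a small C sketch (not from the thread) of the serial per-file loop that tar effectively runs while extracting; extract_one() and its arguments are made up for illustration. Locally the write is asynchronous and close() is cheap, but NFS close-to-open cache consistency forces the client to commit the file's data to the server on close(), so every file in the loop waits on a synchronous server-side write (which, on a ZFS backend, goes through the ZIL).

#include <sys/types.h>
#include <fcntl.h>
#include <unistd.h>

/* One iteration of the open / write / close loop tar performs per file. */
static void
extract_one(const char *path, const char *buf, size_t len)
{
	int fd = open(path, O_WRONLY | O_CREAT | O_TRUNC, 0644);

	if (fd == -1)
		return;
	(void) write(fd, buf, len);	/* asynchronous when local */
	(void) close(fd);		/* over NFS: data is committed here */
}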
On Tue, eric kustarz wrote:
> Ben Rockwood wrote:
> >I was really hoping for some option other than ZIL_DISABLE, but finally
> >gave up the fight. Some people suggested NFSv4 would help over NFSv3, but it
> >didn't... at least not enough to matter.
> >
> >ZIL_DISABLE was the solution, sadly. I'm ru
Ben Rockwood wrote:
I was really hoping for some option other than ZIL_DISABLE, but finally gave up
the fight. Some people suggested NFSv4 would help over NFSv3, but it didn't... at
least not enough to matter.
ZIL_DISABLE was the solution, sadly. I'm running B43/X86 and hoping to get up
to 48 o
On Tue, Oct 03, 2006 at 10:23:29AM -0500, Keith Clay wrote:
> Folks,
>
> If I have a mirrored configuration, I can add hot spares to the pool
> and if a mirror should fail, zfs will automatically replace the
> failed drive with one of the hot spares. Is this correct?
Yes, that is correct. H
On Oct 3, 2006, at 11:15 AM, Keith Clay wrote:
Folks,
Would it be wise to buy 2 JBOD boxes and place one side of the mirror
on each one? Would that make sense?
Of course that makes sense. Doing so will give you chassis-level
redundancy. If one JBOD were to, say, lose power or in some way
Folks,
If I have a mirrored configuration, I can add hot spares to the pool
and if a mirror should fail, zfs will automatically replace the
failed drive with one of the hot spares. Is this correct?
keith
Folks,
Would it be wise to buy 2 JBOD boxes and place one side of the mirror
on each one? Would that make sense?
Also, we are looking at SATA to FC to hook into our SAN. Any
comments/admonitions/advice?
keith
I was really hoping for some option other than ZIL_DISABLE, but finally gave up
the fight. Some people suggested NFSv4 would help over NFSv3, but it didn't... at
least not enough to matter.
ZIL_DISABLE was the solution, sadly. I'm running B43/X86 and hoping to get up
to 48 or so soonish (I BFU'd
Need a bit of help salvaging a perfectly working ZFS
mirror that I've managed to render unbootable.
I've had a ZFS root (x86, mirrored zpool, SXCR b46) working fine for months.
I very foolishly decided to mirror /grub using SVM
(so I could boot easily if a disk died). Shrank swap partitions
to m