Re: [zfs-discuss] single memory allocation in the ZFS intent log

2006-10-03 Thread Casper . Dik
> at least one location: When adding a new dva node into the tree, a kmem_alloc is done with a KM_SLEEP argument. Thus, this process thread could block waiting for memory. I would suggest adding a pre-allocated pool of dva nodes. This is how the Solari

[zfs-discuss] Questions of ZFS mount point and import Messages

2006-10-03 Thread Tejung Ted Chiang
Hi Experts, I have two questions below. 1. Is there any mechanism to protect a ZFS mount point from being renamed via the command mv? Currently I can use "mv" to rename a mount point that has a ZFS filesystem mounted on it. Of course, Solaris will then find no mount point to mount the ZFS filesystem.

[zfs-discuss] single memory allocation in the ZFS intent log

2006-10-03 Thread Erblichs
Group, at least one location: when adding a new dva node into the tree, a kmem_alloc is done with a KM_SLEEP argument. Thus, this process thread could block waiting for memory. I would suggest adding a pre-allocated pool of dva nodes. When a new

Re: [zfs-discuss] How to make an extended LUN size known to ZFS and Solaris

2006-10-03 Thread Matthew Ahrens
Michael Phua - PTS wrote: Hi, our customer has a Sun Fire X4100 with Solaris 10 using ZFS and a HW RAID array (STK D280). He has extended a LUN on the storage array and wants to make this new size known to ZFS and Solaris. Does anyone know whether this can be done, and how? Unfortun

Re: [zfs-discuss] panic string assistance

2006-10-03 Thread Neil Perrin
ZFS will currently panic on a write failure to a non replicated pool. In the case below the Intent Log (though it could have been any module) could not write an intent log block. Here's a previous response from Eric Schrock explaining how ZFS intends to handle this: Yes,

Re: [zfs-discuss] panic string assistance

2006-10-03 Thread Max Bruning
Hi, No, I can't offer insight, but I do have some questions that are not really on topic. What version of Solaris are you running? Is this the console output at the time of panic? When did the panic code (or mdb) learn about frame recycling? Or are you using scat to get this output? Thanks, max On

[zfs-discuss] panic string assistance

2006-10-03 Thread Frank Leers
Could someone offer insight into this panic, please? panic string: ZFS: I/O failure (write on off 0: zio 6000c5fbc0 0 [L0 ZIL intent log] 1000L/1000P DVA[0]=<1:249b68000:1000> zilog uncompressed BE contiguous birth=318892 fill=0 cksum=3b8f19730caa4327:9e102 panic kernel thread: 0x2a101

[zfs-discuss] Updated ZFS Faq with 3rd party backup info

2006-10-03 Thread Mark Shellenbaum
I just updated the ZFS FAQ with what little info we have on third-party backup support in ZFS. http://www.opensolaris.org/os/community/zfs/faq/ -Mark ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/list

Re: [zfs-discuss] Re: NFS Performance and Tar

2006-10-03 Thread Joerg Schilling
eric kustarz <[EMAIL PROTECTED]> wrote: > Ben Rockwood wrote: > I imagine what's happening is that tar is a single-threaded application and it's basically doing: open, asynchronous write, close. This will go really fast locally. But for NFS, due to the way it does cache consistency, on

Re: [zfs-discuss] Re: NFS Performance and Tar

2006-10-03 Thread Spencer Shepler
On Tue, eric kustarz wrote: > Ben Rockwood wrote: > > I was really hoping for some option other than ZIL_DISABLE, but finally gave up the fight. Some people suggested NFSv4 helping over NFSv3 but it didn't... at least not enough to matter. ZIL_DISABLE was the solution, sadly. I'm ru

Re: [zfs-discuss] Re: NFS Performance and Tar

2006-10-03 Thread eric kustarz
Ben Rockwood wrote: I was really hoping for some option other than ZIL_DISABLE, but finally gave up the fight. Some people suggested NFSv4 helping over NFSv3 but it didn't... at least not enough to matter. ZIL_DISABLE was the solution, sadly. I'm running B43/X86 and hoping to get up to 48 o

Re: [zfs-discuss] hot spares in a mirrored config

2006-10-03 Thread Eric Schrock
On Tue, Oct 03, 2006 at 10:23:29AM -0500, Keith Clay wrote: > Folks, if I have a mirrored configuration, I can add hot spares to the pool, and if a mirror should fail, ZFS will automatically replace the failed drive with one of the hot spares. Is this correct? Yes, that is correct. H
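For reference, attaching spares to an existing mirrored pool looks roughly like the following; the pool and device names (tank, c1t2d0, etc.) are placeholders for your own configuration:

```shell
# Create a mirrored pool, then add a hot spare to it.
# Device names are placeholders -- substitute your own disks.
zpool create tank mirror c1t0d0 c1t1d0
zpool add tank spare c1t2d0

# Spares show up in the AVAIL state until a device fails and
# ZFS activates one automatically.
zpool status tank
```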

Re: [zfs-discuss] jbod questions

2006-10-03 Thread Dale Ghent
On Oct 3, 2006, at 11:15 AM, Keith Clay wrote: Folks, would it be wise to buy two JBOD boxes and place one side of the mirror on each one? Would that make sense? Of course that makes sense. Doing so will give you chassis-level redundancy. If one JBOD were to, say, lose power or in some way
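The chassis-level redundancy Dale describes comes from pairing each mirror across the two enclosures. A hedged sketch, assuming the c1* targets sit in JBOD 1 and the c2* targets in JBOD 2 (device names are placeholders):

```shell
# Each mirror pair spans both JBODs, so losing an entire chassis
# degrades every mirror but takes none of them offline.
zpool create tank mirror c1t0d0 c2t0d0 \
                  mirror c1t1d0 c2t1d0 \
                  mirror c1t2d0 c2t2d0
```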

[zfs-discuss] hot spares in a mirrored config

2006-10-03 Thread Keith Clay
Folks, if I have a mirrored configuration, I can add hot spares to the pool, and if a mirror should fail, ZFS will automatically replace the failed drive with one of the hot spares. Is this correct? keith

Re: [zfs-discuss] jbod questions

2006-10-03 Thread Keith Clay
Folks, would it be wise to buy two JBOD boxes and place one side of the mirror on each one? Would that make sense? Also, we are looking at SATA-to-FC to hook into our SAN. Any comments/admonitions/advice? keith

[zfs-discuss] Re: NFS Performance and Tar

2006-10-03 Thread Ben Rockwood
I was really hoping for some option other than ZIL_DISABLE, but finally gave up the fight. Some people suggested NFSv4 helping over NFSv3 but it didn't... at least not enough to matter. ZIL_DISABLE was the solution, sadly. I'm running B43/X86 and hoping to get up to 48 or so soonish (I BFU'd
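On OpenSolaris builds of this era, "ZIL_DISABLE" referred to the zil_disable kernel tunable. A sketch of how it was typically set, hedged with the obvious warning: disabling the ZIL discards synchronous-write guarantees, so data that an NFS client has been told is stable can be lost on a crash:

```shell
# Persistent across reboots: add to /etc/system.
#   set zfs:zil_disable = 1

# Live change on a running kernel (reverts at reboot):
echo zil_disable/W 1 | mdb -kw
```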

[zfs-discuss] zfs mirror resurrection

2006-10-03 Thread Dick Davies
Need a bit of help salvaging a perfectly working ZFS mirror that I've managed to render unbootable. I've had a ZFS root (x86, mirrored zpool, SXCR b46) working fine for months. I very foolishly decided to mirror /grub using SVM (so I could boot easily if a disk died). Shrank swap partitions to m