Re: [zfs-discuss] ZFS mount fails at boot

2007-03-21 Thread Justin Stringfellow
Matt, I can't see anything wrong with that procedure. However, could the problem be that you're trying to mount on /home, which is usually used by the automounter? e.g. $ grep home /etc/auto_master → /home auto_home -nobrowse. Maybe you need to deconfigure this from your automounter?
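The suggested check and workaround can be sketched as follows (the dataset name `tank/home` is illustrative):

```shell
# See whether /home is under automounter control:
grep home /etc/auto_master
# typically shows:  /home  auto_home  -nobrowse

# If so, comment out that /home entry in /etc/auto_master,
# then restart the automounter so it releases /home:
svcadm restart svc:/system/filesystem/autofs:default

# Now a ZFS dataset can claim the mountpoint:
zfs set mountpoint=/home tank/home
```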

[zfs-discuss] ZFS mount fails at boot

2007-03-21 Thread Matt B
I have about a dozen two-disk systems that were all set up the same, using a combination of SVM and ZFS: s0 = / SVM mirror, s1 = swap, s3 = /tmp, s4 = metadb, s5 = ZFS mirror. The system does boot, but once it gets to ZFS, ZFS fails and all subsequent services fail as well (including ssh): /home, /tmp, …

[zfs-discuss] Re: ZFS performance with Oracle

2007-03-21 Thread Matt B
Did you try using ZFS compression on Oracle filesystems? This message posted from opensolaris.org ___ zfs-discuss mailing list zfs-discuss@opensolaris.org http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
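For anyone trying this, enabling compression is a one-liner per dataset (the dataset name is illustrative; note only newly written blocks get compressed):

```shell
# Turn on compression for the Oracle data filesystem:
zfs set compression=on tank/oradata

# Check the achieved ratio after the DB has written some data:
zfs get compressratio tank/oradata
```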

[zfs-discuss] Re: Proposal: ZFS hotplug support and autoconfiguration

2007-03-21 Thread Matt B
Autoreplace is currently the biggest advantage that H/W RAID controllers have over ZFS and other less advanced forms of S/W RAID. I would even go so far as to promote this issue to the forefront as a leading deficiency hindering ZFS adoption. Regarding H/W RAID controllers, things are k…

[zfs-discuss] Re: Proposal: ZFS hotplug support and autoconfiguration

2007-03-21 Thread Anton B. Rang
A couple of questions/comments. Why is the REMOVED state not persistent? It seems that, if ZFS knows that an administrator pulled a disk deliberately, that's still useful information after a reboot. Changing the state to FAULTED is non-intuitive, at least to me. What happens with autoreplace…

[zfs-discuss] Re: Re: ZFS memory and swap usage

2007-03-21 Thread Rainer Heilke
> So why don't you state the actual time it takes to "come up"? I can't, because I don't know. The DBAs have been very difficult about sharing the information. It took several emails and a meeting before we even found out that the 10GB SGA DB didn't start up "quick enough". We also hone…

[zfs-discuss] s10u3 (125101-03) iscsi zfs status

2007-03-21 Thread Frank Cusack
I'm strongly considering using iSCSI with ZFS. What is the current status w.r.t. bugs or bad configurations, for S10 U3 patched to 125101-03? I mean things like: if you have 2 iSCSI target hosts as ZFS mirrors and one goes away, will Solaris panic? Will the data be safe after reboot? Or, if you…
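A minimal sketch of the configuration being asked about (target addresses and device names are illustrative):

```shell
# Point the initiator at the two iSCSI target hosts:
iscsiadm add discovery-address 192.168.1.10:3260
iscsiadm add discovery-address 192.168.1.11:3260
iscsiadm modify discovery --sendtargets enable
devfsadm -i iscsi    # create device nodes for the discovered LUNs

# Mirror one LUN from each host, so losing either host leaves the
# pool degraded rather than dead:
zpool create tank mirror c2t0d0 c3t0d0
```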

Re[2]: [zfs-discuss] Proposal: ZFS hotplug support and autoconfiguration

2007-03-21 Thread Robert Milkowski
Hello Eric, Thursday, March 22, 2007, 1:13:19 AM, you wrote: ES> On Thu, Mar 22, 2007 at 01:03:48AM +0100, Robert Milkowski wrote: >> What if I have a failing drive (still works but I want it to be replaced) and I have a replacement drive on a shelf. All I want is to remove the failing…

[zfs-discuss] migration/acl4 problem

2007-03-21 Thread Jens Elkner
Hi, S10U3: It seems that UFS POSIX ACLs are not properly translated to ZFS ACL4 entries when one transfers a directory tree from UFS to ZFS. Test case: assuming one has users A and B, both belonging to group G and having their umask set to 022: 1) On UFS, as user A, do: mkdir /dir…
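The test case can be reproduced roughly like this (paths and the group name G are illustrative):

```shell
# On UFS, as user A with umask 022: create a directory and grant
# group G write access via a POSIX ACL:
mkdir /ufs/dir
setfacl -m group:G:rwx,mask:rwx /ufs/dir
getfacl /ufs/dir          # show the POSIX ACL on UFS

# Copy the tree onto ZFS and inspect the resulting NFSv4/ACL4 entries:
cp -rp /ufs/dir /tank/
ls -dV /tank/dir          # -V prints the full ZFS ACL for comparison
```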

Re: [zfs-discuss] Re: ZFS with raidz

2007-03-21 Thread Richard Elling
Kory, I'm sorry that you had to go through this. We're all working very hard to make ZFS better for everyone. We've noted this problem on the ZFS Best Practices wiki to try to help avoid future problems until we can get the quotas issue resolved. -- richard Kory Wheatley wrote: Richard, I a…

[zfs-discuss] intel SSR212CC wasabi

2007-03-21 Thread Frank Cusack
Anyone have any experience with this?

Re: [zfs-discuss] Proposal: ZFS hotplug support and autoconfiguration

2007-03-21 Thread Eric Schrock
On Thu, Mar 22, 2007 at 01:03:48AM +0100, Robert Milkowski wrote: > What if I have a failing drive (still works but I want it to be replaced) and I have a replacement drive on a shelf. All I want is to remove the failing drive, insert a new one and resilver. I do not want a hot spare to automatically kick in.

Re: [zfs-discuss] Proposal: ZFS hotplug support and autoconfiguration

2007-03-21 Thread Robert Milkowski
Hello Eric, What if I have a failing drive (still works, but I want it replaced) and I have a replacement drive on a shelf. All I want is to remove the failing drive, insert the new one, and resilver. I do not want a hot spare to automatically kick in. -- Best regards, Robert
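What Robert describes is already expressible with a manual replace, no hot spare involved (pool and device names are illustrative):

```shell
# Replacement disk sits in a different slot:
zpool replace tank c1t2d0 c1t3d0

# Or, if the new disk takes over the old disk's device name:
zpool replace tank c1t2d0

zpool status tank    # watch the resilver progress
```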

Re: [zfs-discuss] Re: Re: ZFS performance with Oracle

2007-03-21 Thread Richard Elling
JS wrote: I'd definitely prefer owning a sort of SAN solution that would basically just be trays of JBODs exported through redundant controllers, with enterprise-level service. The world is still playing catch-up to integrate with all the possibilities of ZFS. It was called the A5000, later…

[zfs-discuss] Re: Re: ZFS performance with Oracle

2007-03-21 Thread JS
I'd definitely prefer owning a sort of SAN solution that would basically just be trays of JBODs exported through redundant controllers, with enterprise level service. The world is still playing catch up to integrate with all the possibilities of zfs.

Re: [zfs-discuss] Heads up: 'zpool history' on-disk version change

2007-03-21 Thread eric kustarz
Is this the same panic I observed when moving a FireWire disk from a SPARC system running snv_57 to an x86 laptop with snv_42a? 6533369 panic in dnode_buf_byteswap importing zpool Yep, thanks - I was looking for that bug :) I'll close it out as a dup. eric

Re[2]: [zfs-discuss] ditto blocks for user data integrated in b61

2007-03-21 Thread Robert Milkowski
Hello Richard, Wednesday, March 21, 2007, 6:23:05 PM, you wrote: RE> Robert Milkowski wrote: RE> Wouldn't that fall under the generic rewrite/shrink functionality we're also RE> anxiously waiting for? Note that this also brings up a nasty edge case where RE> the rewrite may cause you to run out…

Re: [zfs-discuss] Heads up: 'zpool history' on-disk version change

2007-03-21 Thread Rainer Orth
eric kustarz <[EMAIL PROTECTED]> writes: > I just integrated into snv_62: > 6529406 zpool history needs to bump the on-disk version > > The original CR for 'zpool history': > 6343741 want to store a command history on disk > was integrated into snv_51. > > Both of these are planned to make s10u4…
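For reference, the feature being discussed looks like this from the command line (pool name is illustrative):

```shell
# Show the command history recorded on disk (pools created or
# upgraded on snv_51 or later):
zpool history tank

# After installing a build with the new on-disk version, upgrade
# the pool to enable it:
zpool upgrade -v     # list the versions this software supports
zpool upgrade tank
```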

[zfs-discuss] Re: Re: ZFS with raidz

2007-03-21 Thread Douglas R. McCallum
The fix for CR 6491973 won't have much effect on boot time, since it is more specific to the act of setting the sharenfs property, but as Tom said, we are looking at anything that can reduce the time to share out large numbers of shares. The time to share is separate from the mount times since…

Re: [zfs-discuss] Proposal: ZFS hotplug support and autoconfiguration

2007-03-21 Thread Eric Schrock
On Wed, Mar 21, 2007 at 02:37:16PM -0400, Bill Sommerfeld wrote: > 1) What happens if the hotplugged replacement device is too small? The replace will fail, just as if the administrator tried to issue a 'zpool replace' with a smaller drive. In the auto-replace case, the result will be a faulted…

Re: [zfs-discuss] Proposal: ZFS hotplug support and autoconfiguration

2007-03-21 Thread Bill Sommerfeld
This is tangential, but then ARC review is all about feature interaction. 1) What happens if the hotplugged replacement device is too small? 2) What's the interaction between autoreplace and automatic vdev growth (when the underlying device gets bigger)? Since we can't yet shrink a pool, I'm wondering…

Re: [zfs-discuss] Re: ZFS with raidz

2007-03-21 Thread Wade . Stuart
[EMAIL PROTECTED] wrote on 03/21/2007 11:00:43 AM: > The problem is that in order to restrict disk usage, ZFS *requires* that you create this many filesystems. I think most in this situation would prefer not to have to do that. The two solutions I see would be to add user quotas…

[zfs-discuss] Proposal: ZFS hotplug support and autoconfiguration

2007-03-21 Thread Eric Schrock
Folks - I'm preparing to submit the attached PSARC case to provide better support for device removal and insertion within ZFS. Since this is a rather complex issue, with a fair share of corner cases, I thought I'd send the proposal out to the ZFS community at large for further comment before submitting…

Re: [zfs-discuss] ditto blocks for user data integrated in b61

2007-03-21 Thread Matthew Ahrens
Richard Elling wrote: Robert Milkowski wrote: Hello Richard, Wednesday, March 21, 2007, 1:48:23 AM, you wrote: RE> Yes, PSARC 2007/121 integrated into build 61 (and there was much rejoicing :-) RE> I'm working on some models which will show the effect on various RAID RE> configurations and intend…

Re: [zfs-discuss] ditto blocks for user data integrated in b61

2007-03-21 Thread Richard Elling
Robert Milkowski wrote: Hello Richard, Wednesday, March 21, 2007, 1:48:23 AM, you wrote: RE> Yes, PSARC 2007/121 integrated into build 61 (and there was much rejoicing :-) RE> I'm working on some models which will show the effect on various RAID RE> configurations and intend to post some results…

Re: [zfs-discuss] Re: ZFS with raidz

2007-03-21 Thread Casper . Dik
>The problem is that in order to restrict disk usage, ZFS *requires* that you create this many filesystems. I think most in this situation would prefer not to have to do that. The two solutions I see would be to add user quotas to ZFS, or to be able to set a quota on a directory without it becoming…
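Until per-user quotas exist, the one-filesystem-per-user pattern being objected to looks like this (pool, user names and sizes are illustrative):

```shell
# One dataset per user, each with its own quota:
zfs create -o quota=10G -o mountpoint=/home/alice tank/home/alice
zfs create -o quota=10G -o mountpoint=/home/bob   tank/home/bob

zfs get -r quota tank/home    # review all the quotas at once
```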

Re: [zfs-discuss] Re: ZFS with raidz

2007-03-21 Thread James F. Hranicky
Richard Elling wrote: > I think this is a systems engineering problem, not just a ZFS problem. Few have bothered to look at mount performance in the past because most systems have only a few mounted file systems[1]. Since ZFS does file system quotas instead of user quotas, now we have the situation…

Re: [zfs-discuss] Re: ZFS memory and swap usage

2007-03-21 Thread Al Hopper
On Wed, 21 Mar 2007, Rainer Heilke wrote: [... reformatted] > We're running Update 3. Note that the DB _does_ come up, just not in the two minutes they were expecting. If they wait a few moments after their two-minute start-up attempt, it comes up just fine. So why don't you state the actual…

[zfs-discuss] Re: ZFS memory and swap usage

2007-03-21 Thread Rainer Heilke
We're running Update 3. Note that the DB _does_ come up, just not in the two minutes they were expecting. If they wait a few moments after their two-minute start-up attempt, it comes up just fine. I was looking at vmstat, and it seems to tell me what I need. It's just that I need to present the…

Re: [zfs-discuss] HELP!! I can't mount my zpool!!

2007-03-21 Thread Victor Latushkin
Gino, Gino Ruopolo writes: Victor, 1) the crash dump dir was moved onto the crashed zpool a few days ago. Anyway, we think the crash is related to MPxIO. We had tens of crashes in the last weeks but never lost a zpool!! 2) That particular unit is out of Sun contract. We hope there is a way to rec…

Re: [zfs-discuss] HELP!! I can't mount my zpool!!

2007-03-21 Thread Victor Latushkin
Gino, Gino Ruopolo writes: Victor, can we try to mount the zpool on a S10U3 system? No, this may require using one of the recent Solaris Nevada builds. I'm trying to check the relevant build number. What about answers to my other questions? Wbr, Victor From: Victor Latushkin <[EMAIL PROTECTED]…

Re: [zfs-discuss] HELP!! I can't mount my zpool!!

2007-03-21 Thread Victor Latushkin
Gino, S10U2 Ok, then if you have a support contract for this system, you may want to open a new case for this issue. Unfortunately we have nothing in the logs about the first panic! This is not good... Without it, it may be impossible to find out what went wrong. You may have nothing in the logs, but…

[zfs-discuss] Re: ZFS memory and swap usage

2007-03-21 Thread Bjorn Munch
Did you say what version of Solaris 10 you were using? I had similar problems on Sol10 U2, booting a database. This involved first initializing the data files (a few GB), then starting the server(s), which tried to allocate a large chunk of shared memory. This failed miserably, since ZFS had go…
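A workaround often suggested for this failure mode is to cap the ZFS ARC so that a large shared-memory allocation (such as an Oracle SGA) can succeed; the 2 GB value below is only an example and must be tuned to the machine:

```shell
# Limit the ARC to 2 GB (takes effect after reboot):
echo 'set zfs:zfs_arc_max = 0x80000000' >> /etc/system
```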

Re: [zfs-discuss] HELP!! I can't mount my zpool!!

2007-03-21 Thread Victor Latushkin
Hi Gino, What version of Solaris is your server running? What happens here is that while opening your pool, ZFS tries to process the ZFS Intent Log of this pool and discovers some inconsistency between the on-disk state and the ZIL contents. What was the first panic you refer to? Wbr, Victor Gino Ruopol…

[zfs-discuss] HELP!! I can't mount my zpool!!

2007-03-21 Thread Gino Ruopolo
Hi all. One of our servers had a panic and now can't mount the zpool anymore! Here is what I get at boot: Mar 21 11:09:17 SERVER142 ^Mpanic[cpu1]/thread=90878200: Mar 21 11:09:17 SERVER142 genunix: [ID 603766 kern.notice] assertion failed: ss->ss_start <= start (0x67b800 <= 0x679…

[zfs-discuss] Re: Large ZFS-bug...

2007-03-21 Thread Peter Eriksson
Ah :-) Btw, that bug note is a bit misleading - our usage case had nothing to do with ZFS root filesystems - he was trying to install in a completely separate filesystem - a very large one. And yes, he found out that setting a quota was a good workaround :-)

Re: [zfs-discuss] Re: ZFS performance with Oracle

2007-03-21 Thread Roch - PAE
JS writes: > The big problem is that if you don't do your redundancy in the zpool, then the loss of a single device flatlines the system. This occurs in single-device pools or stripes or concats. Sun support has said in support calls and Sunsolve docs that this is by design, but I've never…

Re[3]: [zfs-discuss] Re: ZFS checksum error detection

2007-03-21 Thread Robert Milkowski
Hello Robert, Saturday, March 17, 2007, 6:49:05 PM, you wrote: RM> Hello Thomas, RM> Saturday, March 17, 2007, 11:46:14 AM, you wrote: TN>> On Fri, 16 Mar 2007, Anton B. Rang wrote: >>> It's possible (if unlikely) that you are only getting checksum errors on metadata. Since ZFS always inte…