Re: [zfs-discuss] Help replacing dual identity disk in ZFS raidz and SVM mirror

2007-12-06 Thread Matt B
Anyone? I really need some help here.

[zfs-discuss] Help replacing dual identity disk in ZFS raidz and SVM mirror

2007-12-03 Thread Matt B
S) that is sliced up. Any help or pointers to good documentation would be much appreciated. Thanks, Matt B. Below is a metastat dump:

d3: Mirror
    Submirror 0: d13
      State: Okay
    Submirror 1: d23
      State: Needs maintenance
    Submirror 2: d33
      State: Okay
    Submirro
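For a disk that carries both an SVM submirror and a ZFS raidz member, recovery runs in two passes, one per volume manager. A minimal sketch, assuming the failed disk is c1t1d0, the survivor is c1t0d0, and the pool is named tank (all hypothetical names; the slice assignments are illustrative and would need to match the actual layout behind the metastat output above):

    # Copy the surviving disk's label onto the replacement
    prtvtoc /dev/rdsk/c1t0d0s2 | fmthard -s - /dev/rdsk/c1t1d0s2

    # SVM side: re-enable the failed component of the submirror
    metareplace -e d3 c1t1d0s3

    # ZFS side: resilver the raidz member on its slice
    zpool replace tank c1t1d0s5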

Re: [zfs-discuss] Single SAN Lun presented to 4 Hosts

2007-08-25 Thread Matt B
Here is what seems to be the best course of action, assuming IP over FC is supported by the HBAs (which I am pretty sure it is, since this is all brand new equipment): mount the shared backup LUN on Node 1 via the FC link to the SAN as a non-redundant ZFS volume. On Node 1, RMAN (Oracle bac
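Assuming the HBAs do handle it, the ZFS side of that plan is straightforward. A minimal sketch, with a hypothetical device name for the backup LUN:

    # Node 1: build a non-redundant pool on the shared backup LUN
    zpool create backup c4t0d0

    # A dataset for RMAN to write its backup sets into
    zfs create backup/rman

RMAN would then be pointed at /backup/rman as its destination; the pool must only ever be imported on one node at a time.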

Re: [zfs-discuss] Single SAN Lun presented to 4 Hosts

2007-08-25 Thread Matt B
I'm not sure what you mean.

Re: [zfs-discuss] Single SAN Lun presented to 4 Hosts

2007-08-25 Thread Matt B
The 4 database servers are part of an Oracle RAC configuration. 3 databases are hosted on these servers: BIGDB1 on all 4, littledb1 on the first 2, and littledb2 on the last 2. The Oracle backup system spawns DB backup jobs that could occur on any node based on traffic and load. All nodes are

Re: [zfs-discuss] Single SAN Lun presented to 4 Hosts

2007-08-24 Thread Matt B
Can't use the network, because these 4 hosts are database servers that will be dumping close to a terabyte every night. If we put that over the network, all the other servers would be starved.

Re: [zfs-discuss] Single SAN Lun presented to 4 Hosts

2007-08-24 Thread Matt B
That is what I was afraid of. Regarding QFS and NFS, isn't QFS something that must be purchased? I looked on the Sun website and it appears to be a little pricey. NFS is free, but is there a way to use NFS without traversing the network? We already have our SAN presenting this disk to each o

[zfs-discuss] Single SAN Lun presented to 4 Hosts

2007-08-24 Thread Matt B
Is it a supported configuration to have a single LUN presented to 4 different Sun servers over a Fibre Channel network, and then to mount that LUN on each host as the same ZFS filesystem? We need any of the 4 servers to be able to write data to this shared FC disk. We are not using NFS as we do

[zfs-discuss] Re: Re: Re: Re: Re: /tmp on ZFS?

2007-03-23 Thread Matt B
Worked great. Thanks!

[zfs-discuss] Re: Re: Re: Re: /tmp on ZFS?

2007-03-23 Thread Matt B
Oh, one other thing... s1 (8GB swap) is part of an SVM mirror (on d1).

[zfs-discuss] Re: Re: Re: Re: /tmp on ZFS?

2007-03-23 Thread Matt B
And just doing this will automatically target my /tmp at my 8GB swap slice on s1, as well as putting the quota in place?

[zfs-discuss] Re: Re: Re: /tmp on ZFS?

2007-03-23 Thread Matt B
OK, since I already have an 8GB swap slice I'd like to use, what would be the best way of setting up /tmp on this existing swap slice as tmpfs and then applying the 1GB quota limit? I know how to get rid of the zpool/tmp filesystem in ZFS, but I'm not sure how to actually get to the above in a pos
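A minimal sketch of that setup, assuming the pool is named zpool as in the earlier posts: destroy the ZFS dataset, then add an /etc/vfstab entry that mounts tmpfs on /tmp with a size cap. Note that the size= option limits only the tmpfs filesystem, not the system's use of the swap slice for paging.

    # remove the ZFS-backed /tmp (hypothetical dataset name)
    zfs destroy zpool/tmp

    # /etc/vfstab entry: tmpfs on /tmp, capped at 1GB
    swap    -    /tmp    tmpfs    -    yes    size=1024m

A reboot (or an unmount/mount cycle) picks up the new entry.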

[zfs-discuss] Re: Re: /tmp on ZFS?

2007-03-23 Thread Matt B
For reference... here is my current disk layout (one disk of two, but both are identical). s4 is for the metadb; s5 is dedicated to ZFS.

partition> print
Current partition table (original):
Total disk cylinders available: 8921 + 2 (reserved cylinders)
Part      Tag    Flag     Cylinders         Si

[zfs-discuss] Re: Re: /tmp on ZFS?

2007-03-23 Thread Matt B
OK, so you are suggesting that I simply mount /tmp as tmpfs on my existing 8GB swap slice and then put the VM limit on /tmp? Will that limit only affect users writing data to /tmp, or will it also affect the system's use of swap?

[zfs-discuss] Re: /tmp on ZFS?

2007-03-23 Thread Matt B
Well, I am aware that /tmp can be mounted on swap as tmpfs and that this is really fast, as most writes go straight to memory, but this is of little to no value to the server in question, which is running 2 enterprise third-party applications. No compilers are installed... in

[zfs-discuss] /tmp on ZFS?

2007-03-22 Thread Matt B
Is this something that should work? The assumption is that there is a dedicated raw swap slice, and that after install, /tmp (which will be on /) will be unmounted and remounted on zpool/tmp (just like zpool/home). Thoughts on this?
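If /tmp is to live on ZFS, a minimal sketch (assuming a pool named zpool, as in the post, and adding a quota so runaway temp files can't fill the pool):

    zfs create -o mountpoint=/tmp zpool/tmp
    zfs set quota=1g zpool/tmp
    chmod 1777 /tmp

The chmod matters: /tmp must be world-writable with the sticky bit set, which a fresh ZFS dataset won't be by default.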

[zfs-discuss] ZFS mount fails at boot

2007-03-21 Thread Matt B
I have about a dozen two-disk systems that were all set up the same way using a combination of SVM and ZFS:

s0 = / (SVM mirror)
s1 = swap
s3 = /tmp
s4 = metadb
s5 = ZFS mirror

The system does boot, but once it gets to ZFS, ZFS fails and all subsequent services fail as well (including ssh). /home, /tmp,
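A first-pass diagnostic sketch from the console of one of the failing systems:

    # show which services failed and why
    svcs -xv

    # check whether the pool was imported and its devices found
    zpool status

    # once the pool is healthy, clear the failed filesystem service
    svcadm clear svc:/system/filesystem/local:default

If zpool status shows the mirror's devices as missing, the device paths may have changed; a zpool export followed by zpool import can re-scan them.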

[zfs-discuss] Re: ZFS performance with Oracle

2007-03-21 Thread Matt B
Did you try using ZFS compression on Oracle filesystems?
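For anyone trying this, compression is a per-dataset property. A minimal sketch with a hypothetical dataset name:

    zfs set compression=on tank/oradata
    zfs get compressratio tank/oradata   # observed ratio after some writes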

[zfs-discuss] Re: Proposal: ZFS hotplug support and autoconfiguration

2007-03-21 Thread Matt B
Autoreplace is currently the biggest advantage that H/W RAID controllers have over ZFS and other less advanced forms of S/W RAID. I would even go so far as to promote this issue to the forefront as a leading deficiency that is hindering ZFS adoption. Regarding H/W RAID controllers, things are k
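Under the proposal, opting in would presumably look like setting a pool property, along these lines (a sketch against a hypothetical pool named tank):

    zpool set autoreplace=on tank
    zpool get autoreplace tank

With autoreplace on, a new disk inserted into the slot of a failed vdev member would be labeled and resilvered automatically, matching what H/W RAID controllers do today.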

[zfs-discuss] Re: ZFS/UFS layout for 4 disk servers

2007-03-07 Thread Matt B
So it sounds like the consensus is that I should not worry about using slices with ZFS, and the swap best practice doesn't really apply to my situation of a 4-disk X4200. So, in summary (please confirm), this is what we are saying is a safe bet for use in a highly available production environment
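One reading of that consensus as a concrete layout, with hypothetical device names:

    # disks 0 and 1: UFS root and swap slices, mirrored with SVM
    # disks 2 and 3: whole disks given to a ZFS mirror
    zpool create tank mirror c0t2d0 c0t3d0

Using whole disks for the pool lets ZFS enable the disk write cache, which is part of why slices are discouraged.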

[zfs-discuss] Re: ZFS/UFS layout for 4 disk servers

2007-03-07 Thread Matt B
Thanks for the responses. There is a lot there I am looking forward to digesting. Right off the bat, though, I wanted to bring up something I found just before reading this reply, as the answer to this question would automatically answer some other questions. There is a ZFS best practices wiki at http:

[zfs-discuss] ZFS/UFS layout for 4 disk servers

2007-03-06 Thread Matt B
I am trying to determine the best way to move forward with about 35 x86 X4200s. Each box has 4x 73GB internal drives. All the boxes will be built using Solaris 10 11/06. Additionally, these boxes are part of a highly available production environment with an uptime expectation of six nines (just a f