Anyone? Really need some help here
S) that is sliced up
Any help or pointing to good documentation would be much appreciated.
Thanks
Matt B
Below I included a metastat dump
d3: Mirror
Submirror 0: d13
State: Okay
Submirror 1: d23
State: Needs maintenance
Submirror 2: d33
State: Okay
Submirro
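(Not from the original dump, but for a submirror in the "Needs maintenance" state, the usual first step is to re-enable the failed component with metareplace; the component name below is a placeholder, since the full metastat output is truncated here:)

  # re-enable the failed component in the broken submirror of d3
  metareplace -e d3 c0t1d0s0
  # then watch the resync progress
  metastat d3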
Here is what seems to be the best course of action, assuming IP over FC is
supported by the HBAs (which I am pretty sure they do, since this is all brand-new
equipment)
Mount the shared-disk backup LUN on Node 1 via the FC link to the SAN as a
non-redundant ZFS volume.
On node 1 RMAN (oracle bac
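(A rough sketch of what that first step might look like; the device, pool, and dataset names are placeholders, not from the original post:)

  # single-device (non-redundant) pool on the SAN-presented backup LUN
  zpool create backup cXtYdZ
  # dataset to receive the RMAN dumps
  zfs create backup/rman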
I'm not sure what you mean
The 4 database servers are part of an Oracle RAC configuration. 3 databases are
hosted on these servers: BIGDB1 on all 4, littledb1 on the first 2, and
littledb2 on the last 2. The Oracle backup system spawns DB backup jobs that
could run on any node based on traffic and load. All nodes are
Can't use the network because these 4 hosts are database servers that will be
dumping close to a terabyte every night. If we put that over the network, all
the other servers would be starved.
That is what I was afraid of.
Regarding QFS and NFS, isn't QFS something that must be purchased? I looked
on the Sun website and it appears to be a little pricey.
NFS is free, but is there a way to use NFS without traversing the network? We
already have our SAN presenting this disk to each o
Is it a supported configuration to have a single LUN presented to 4 different
Sun servers over a Fibre Channel network and then to mount that LUN on each
host as the same ZFS filesystem?
We need any of the 4 servers to be able to write data to this shared FC disk.
We are not using NFS as we do
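(One point worth noting, although it isn't stated above: a ZFS pool can only be imported on one host at a time, so sharing a LUN this way normally means an explicit handoff between nodes, roughly as follows, with "backup" standing in for the real pool name:)

  # on the node that currently owns the pool
  zpool export backup
  # on the node that needs to write next
  zpool import backup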
Worked great. Thanks
Oh, one other thing...s1 (8GB swap) is part of an SVM mirror (on d1)
And just doing this will automatically put my /tmp on my 8GB swap slice on s1,
as well as putting the quota in place?
OK, since I already have an 8GB swap slice I'd like to use, what would be the
best way of setting up /tmp on this existing swap slice as tmpfs and then
applying the 1GB quota limit?
I know how to get rid of the zpool/tmp filesystem in ZFS, but I'm not sure how
to actually get to the above in a pos
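(One common way to get this, assuming the swap slice is already configured, is a size-capped tmpfs entry in /etc/vfstab; the 1024m cap matches the 1GB limit asked about:)

  # /etc/vfstab entry: /tmp on tmpfs, capped at 1GB
  swap    -    /tmp    tmpfs    -    yes    size=1024m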
For reference...here is my disk layout currently (one disk of two, but both are
identical)
s4 is for the MetaDB
s5 is dedicated for ZFS
partition> print
Current partition table (original):
Total disk cylinders available: 8921 + 2 (reserved cylinders)
Part      Tag    Flag     Cylinders        Si
OK, so you are suggesting that I simply mount /tmp as tmpfs on my existing 8GB
swap slice and then put the VM limit on /tmp? Will that limit only affect
users writing data to /tmp, or will it also affect the system's use of swap?
Well, I am aware that /tmp can be mounted on swap as tmpfs and that this is
really fast since almost all writes go straight to memory, but this is of
little to no value to the server in question.
The server in question is running 2 enterprise third-party applications. No
compilers are installed...in
Is this something that should work? The assumption is that there is a
dedicated raw swap slice, and that after install /tmp (which will be on /) will
be unmounted and remounted to zpool/tmp (just like zpool/home).
Thoughts on this?
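(If this does work, the steps would presumably look something like the following; the pool name and the 1GB figure come from the surrounding discussion and are not verified:)

  zfs create zpool/tmp
  zfs set quota=1g zpool/tmp           # cap /tmp at 1GB
  zfs set mountpoint=/tmp zpool/tmp    # takes over /tmp once the old one is unmounted
  chmod 1777 /tmp                      # /tmp needs the sticky bit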
I have about a dozen two-disk systems that were all set up the same way using
a combination of SVM and ZFS.
s0 = / SVM mirror
s1 = swap
s3 = /tmp
s4 = metadb
s5 = zfs mirror
The system does boot, but once it gets to ZFS, ZFS fails and all subsequent
services fail as well (including ssh).
/home,/tmp,
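(For context, the s5 ZFS mirror in a layout like this would have been created along these lines; the pool name follows the thread and the device names are placeholders:)

  # two-way ZFS mirror on the dedicated s5 slices
  zpool create zpool mirror c0t0d0s5 c0t1d0s5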
Did you try using ZFS compression on Oracle filesystems?
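(For anyone trying it, enabling compression is just a property set; the dataset name here is made up:)

  zfs set compression=on zpool/oradata
  zfs get compressratio zpool/oradata    # see how much it actually saves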
Autoreplace is currently the biggest advantage that H/W RAID controllers have
over ZFS and other less advanced forms of S/W RAID.
I would even go so far as to promote this issue to the forefront as a leading
deficiency that is hindering ZFS adoption.
Regarding H/W RAID controllers, things are k
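(A hedged aside, not from the original message: hot spares are the nearest built-in ZFS mechanism on releases that support them; the pool and device names below are placeholders:)

  # dedicate a spare that ZFS can pull in when a device faults
  zpool add tank spare c1t5d0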
So it sounds like the consensus is that I should not worry about using slices
with ZFS, and that the swap best practice doesn't really apply to my situation
of a 4-disk X4200.
So, in summary (please confirm), this is what we are saying is a safe bet for
use in a highly available production environment
Thanks for the responses. There is a lot there I am looking forward to digesting.
Right off the bat, though, I wanted to bring up something I found just before
reading this reply, as the answer to this question would automatically answer
some other questions
There is a ZFS best practices wiki at
http:
I am trying to determine the best way to move forward with about 35 x86 X4200s.
Each box has 4x 73GB internal drives.
All the boxes will be built using Solaris 10 11/06. Additionally, these boxes
are part of a highly available production environment with an uptime
expectation of six 9's (just a f