Re: [zfs-discuss] Strategies to avoid single point of failure w/ X45x0 Servers?
Solaris wrote:

> Perhaps a better solution would be to front a J4500 with a pair of
> X4100s with Sun Cluster? Hrrm...

That's a much better solution, since you can then build a clustered
setup for basically the same price. You should also think about using
2x J4400 instead of 1x J4500 to eliminate the storage as a SPoF, too.

-- 
Ralf Ramge
Senior Solaris Administrator, SCNA, SCSA

Tel. +49-721-91374-3963
[EMAIL PROTECTED] - http://web.de/

1&1 Internet AG
Brauerstraße 48
76135 Karlsruhe

Amtsgericht Montabaur HRB 6484
Vorstand: Henning Ahlert, Ralph Dommermuth, Matthias Ehrlich, Thomas Gottschlich, Matthias Greve, Robert Hoffmann, Markus Huhn, Oliver Mauss, Achim Weiss
Aufsichtsratsvorsitzender: Michael Scheeren

___
zfs-discuss mailing list
zfs-discuss@opensolaris.org
http://mail.opensolaris.org/mailman/listinfo/zfs-discuss
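[Editor's note: the 2x J4400 idea above can be sketched roughly as follows. Each mirror vdev takes one disk from each enclosure, so losing an entire shelf degrades the pool instead of killing it. The controller/target names are made-up placeholders, not an actual J4400 enumeration.]

```shell
# c1 = first J4400, c2 = second J4400 (hypothetical controller numbers).
# Every mirror spans both enclosures, so either shelf can fail whole.
zpool create tank \
    mirror c1t0d0 c2t0d0 \
    mirror c1t1d0 c2t1d0 \
    mirror c1t2d0 c2t2d0
```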
Re: [zfs-discuss] Strategies to avoid single point of failure w/ X45x0 Servers?
Hi,

Maybe this might be an option too?
http://blogs.sun.com/storage/entry/mike_shapiro_and_steve_o

Original Message
Subject: [zfs-discuss] Strategies to avoid single point of failure w/ X45x0 Servers?
From: Solaris <[EMAIL PROTECTED]>
To: zfs-discuss@opensolaris.org
Date: Thu Oct 9 13:09:28 2008

> I have been leading the charge in my IT department to evaluate the Sun
> Fire X45x0 as a commodity storage platform [...]
>
> The EMC solution is completely redundant with no single point of
> failure. What are some good strategies for providing a Thumper
> solution with no single point of failure?
Re: [zfs-discuss] Strategies to avoid single point of failure w/ X45x0 Servers?
If you are having trouble booting to the mirrored drive, the following
is what we had to do to correctly boot off the mirrored drive in a
Thumper mirrored with disksuite. The root drive is c5t0d0 and the
mirror is c5t4d0. The BIOS will try those 2 drives. Just a note: if it
ever switches to c5t4d0 as the primary boot device, the BIOS will not
swap back automatically. You will have to change this back by hand in
the BIOS. Most likely you'll find this out on your next OS upgrade,
when you upgrade c5t0d0 only to still be booting off c5t4d0.

The /tmp/disk-vtoc.out file was created with prtvtoc on the primary
root drive, and the partitions were added to SVM and synced one by one
in a for loop.

MIRROR=c5t4d0
echo "y" | /usr/sbin/fdisk -B /dev/rdsk/${MIRROR}s2
echo "5" | /usr/sbin/fdisk -b /usr/lib/fs/ufs/mboot /dev/rdsk/${MIRROR}p0
/usr/sbin/fmthard -s /tmp/disk-vtoc.out /dev/rdsk/${MIRROR}s2
/sbin/installgrub /boot/grub/stage1 /boot/grub/stage2 /dev/rdsk/${MIRROR}s0

-- 
This message posted from opensolaris.org
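[Editor's note: the "for loop" mentioned above, which attaches each slice of the mirror disk into SVM, would look roughly like this. The metadevice names (d00, d01, ...) and slice list are made-up placeholders; adjust them to the actual SVM layout.]

```shell
# Hypothetical sketch: mirror each slice of c5t0d0 onto c5t4d0 with SVM.
ROOT=c5t0d0
MIRROR=c5t4d0
for s in 0 1 3; do                                    # slices carrying filesystems
    metainit -f d${s}1 1 1 /dev/dsk/${ROOT}s${s}      # existing half (forced, in use)
    metainit d${s}2 1 1 /dev/dsk/${MIRROR}s${s}       # new half on the mirror disk
    metainit d${s}0 -m d${s}1                         # create one-way mirror
    metattach d${s}0 d${s}2                           # attach second half; starts resync
done
```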
Re: [zfs-discuss] Strategies to avoid single point of failure w/ X45x0 Servers?
Perhaps a better solution would be to front a J4500 with a pair of
X4100s with Sun Cluster? Hrrm...

On Thu, Oct 9, 2008 at 4:30 PM, Glaser, David <[EMAIL PROTECTED]> wrote:
> As shipped, our x4500s have 8 raidz pools with 6 disks each in them.
> If spaced right, you can lose 6(?) disks without the pool dying. The
> root disk is mirrored, so if one dies it's not the end of the world.
> With the exception that grub is thoroughly fraked up in that if the 0
> disk dies, you have to manually make the darn thing boot. You can't
> hot swap CPU or memory, but you can swap drives, fans, network links,
> and power supplies.
>
> With the rest of the hardware redundancy built in, they have been
> working pretty well for us here. We did have some issues with a
> failure of the machine (software related) but with a decent support
> contract, you should be ok.
>
> Our windows group purchased their BlueArc SAN and spent 100k for 15TB
> (raw)... I spent 50K for 33TB (usable)...
>
> David

-- 
Ignorance: America's most abundant and costly commodity.
Re: [zfs-discuss] Strategies to avoid single point of failure w/ X45x0 Servers?
As shipped, our x4500s have 8 raidz pools with 6 disks each in them. If
spaced right, you can lose 6(?) disks without the pool dying. The root
disk is mirrored, so if one dies it's not the end of the world. With
the exception that grub is thoroughly fraked up in that if the 0 disk
dies, you have to manually make the darn thing boot. You can't hot swap
CPU or memory, but you can swap drives, fans, network links, and power
supplies.

With the rest of the hardware redundancy built in, they have been
working pretty well for us here. We did have some issues with a failure
of the machine (software related) but with a decent support contract,
you should be ok.

Our windows group purchased their BlueArc SAN and spent 100k for 15TB
(raw)... I spent 50K for 33TB (usable)...

David

David Glaser
Systems Administrator
LSA Information Technology
University of Michigan

-----Original Message-----
From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] On Behalf Of Solaris
Sent: Thursday, October 09, 2008 4:09 PM
To: zfs-discuss@opensolaris.org
Subject: [zfs-discuss] Strategies to avoid single point of failure w/ X45x0 Servers?

> I have been leading the charge in my IT department to evaluate the Sun
> Fire X45x0 as a commodity storage platform [...]
>
> The EMC solution is completely redundant with no single point of
> failure. What are some good strategies for providing a Thumper
> solution with no single point of failure?
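[Editor's note: the layout David describes, eight single-parity raidz vdevs of six disks each, can be sketched as below. Device names are illustrative; on a real x4500 you would take one disk per controller for each vdev, so a single controller failure never hits two disks of the same vdev. With one parity disk per vdev, the pool can in fact survive up to eight failures, as long as no two land in the same vdev.]

```shell
# Hypothetical device names: controllers c0-c5, one disk per controller per vdev.
zpool create tank \
    raidz c0t0d0 c1t0d0 c2t0d0 c3t0d0 c4t0d0 c5t0d0 \
    raidz c0t1d0 c1t1d0 c2t1d0 c3t1d0 c4t1d0 c5t1d0 \
    raidz c0t2d0 c1t2d0 c2t2d0 c3t2d0 c4t2d0 c5t2d0
    # ... five more raidz vdevs in the same pattern
```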
Re: [zfs-discuss] Strategies to avoid single point of failure w/ X45x0 Servers?
On Thu, Oct 9, 2008 at 3:09 PM, Solaris <[EMAIL PROTECTED]> wrote:
> I have been leading the charge in my IT department to evaluate the Sun
> Fire X45x0 as a commodity storage platform [...]
>
> The EMC solution is completely redundant with no single point of
> failure. What are some good strategies for providing a Thumper
> solution with no single point of failure?
>
> The storage folks are poo-poo'ing this concept because of the chances
> for an Operating System failure... I'd like to come up with some
> reasonable methods to put them in their place :)

Unless you're talking about buying multiple thumpers and mirroring
them, there are none. The motherboard is a single point of failure.

--Tim
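[Editor's note: short of a full cluster, the closest thing to "mirroring thumpers" at the data level is periodic zfs send/receive to a second box. This is asynchronous, so it narrows the loss window rather than eliminating it. Hostnames and snapshot names below are placeholders.]

```shell
# On the primary; assumes the snapshot tank/data@last already exists
# on both machines from the previous run.
zfs snapshot tank/data@now
zfs send -i tank/data@last tank/data@now | \
    ssh standby-thumper zfs receive -F tank/data
# Rotate the snapshot names afterwards so the next run has a common base.
```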
[zfs-discuss] Strategies to avoid single point of failure w/ X45x0 Servers?
I have been leading the charge in my IT department to evaluate the Sun
Fire X45x0 as a commodity storage platform, in order to leverage
capacity and cost against our current NAS solution, which is backed by
an EMC Fibre Channel SAN. For our corporate environments, it would seem
like a single machine would supply more than triple our current usable
capacity on our NAS, and the cost is significantly less per GB. I am
also working to prove that the multi-protocol shared storage
capabilities of the Thumper significantly outperform those of our
current solution (which is notoriously bad from the end user
perspective).

The EMC solution is completely redundant with no single point of
failure. What are some good strategies for providing a Thumper
solution with no single point of failure?

The storage folks are poo-poo'ing this concept because of the chances
for an Operating System failure... I'd like to come up with some
reasonable methods to put them in their place :)