Re: gmirror disks vs partitions
On Wed, Jan 17, 2007 at 02:29:33PM +0300, Andrew Pantyukhin wrote:
> On 1/17/07, Josef Karthauser <[EMAIL PROTECTED]> wrote:
> > A poll for opinions if I may?

i suppose i'm asking the same here as well ...

> > I've got a few gmirrors running on various machines, all of which
> > pair up two drives at the physical level (i.e. mirror /dev/ad0s1
> > with /dev/ad1s1). Of course there are other ways of doing it too,
> > like mirroring at the partition level, ie pairing /dev/ad0s1a with
> > /dev/ad1s1a, /dev/ad0s1e with /dev/ad1s1e, etc.
> >
> > Apart from potentially avoiding a whole disk from being copied
> > during a resync after a crash, are there any other advantages to
> > using partition level mirroring instead of drive level mirroring?
>
> I can imagine people using partition-level raid to
> implement a popular configuration:
>
> You divide a couple of identical drives proportionally
> in two partitions each, place a couple of the first
> partitions into gmirror and a couple of the second
> ones into gstripe. This way you get both reliable and
> fast storage with just two drives. Some strings are
> attached.

my situation is somewhat different, in that i am providing internet services for a (private) group to access tcp/ip based communications (we are all disabled and couldn't find reasonably priced and competently serviced ISP offerings in our part of the world, so we decided to do it for ourselves). sorry, that is the history and the reason behind my participation in/with freebsd (over the last 10 or so years).

we have just received several older machines. a PIII compaq proliant 5500 with hardware raid works quite nicely once it settled down and its batteries regained working voltage, so to speak. it is running freebsd 6.1-release, ms windows 2003 server, and debian linux (sarge v3.1). it is a multi-boot fixit box as well as being the basic "fileserver/nfs host" and kernel builder; with its 4-cpu architecture it works well.
also received were several 233 mhz machines with 2 ide drives and 2 rom drives (cd and dvd), and an 800 mhz PIII similarly equipped. all are intel hardware of some 8-10 years vintage. this is now the basic network backbone, an upgrade from several intel 386dx33 and intel 486dx33/50 machines that have served this network for over 20 years now.

now that andrew has 'opened' my eyes, so to speak, to the world of software raid, and after some extensive reading, i discovered RAIDFrame, which looked to provide all that i am looking for. yes, i played with vinum and got burned so badly that i was only going to use hardware raid, and that was the basis of my comments to andrew. i too have seen that raid in freebsd has moved on, so i guess it's time for me to move on as well. it looks like software raid might just fit the bill for these multiple-drive machines. all have several largish (for me) ide hard disks, mainly 6-8 gb, and i have relic 4 gb scsi hard disks. (as i read in RAIDFrame for freebsd) i'm hoping that i could build some sort of basic media platform for each of the machines, instead of constantly worrying about how to cut up the operating system software load over the available spindle count. it's not fun anymore working out where the system was loading up the spindles and dragging down the system as a whole; i'm sure many of the readers here have experienced this before from time to time, at least.

i've seen lots of posts about RAIDFrame for freebsd up to about 2002 and perhaps 2003. is the port stabilised and not in need of any more work, or has it been canned and/or dropped? from what i have read, the raidframe package would be an ideal solution; i like very much mr long's introduction on the freebsd (people) page. this discussion on the whole has been most enlightening and i hope it will bear much fruit for the geom project in the long term.
i've been following the gstripe discussion (here in -stable). i need to keep reminding myself that the software is not bad; it is being developed, and that's why all the bad/bug/things-going-wrong reports are here in -stable. that's what -stable is for/all about.

sorry for my post, i'm not very good at communicating; it's one of the parts of my brain that doesn't work too well, and that is why i'm (struggling) on the invalid pension. umm, i'd also like to take this opportunity to say thank you for all the support freebsd has given me over the years. it has been a most wonderful experience. the stability and reliability have been a shining light that i take with me wherever i go, in the software world and in general, as it's produced because people band together and care about what they do, and that is what makes freebsd what it is .. not superior code and all these other things, which i'm sure help, ok just a tiny little bit (grin). much appreciation, thanks and gratitude.

most kind regards
jonathan and caamora dot com dot au
--
powered by .. QNX, OS9 and freeBSD
--
http://caamora com
Re: gmirror disks vs partitions
- Vulpes Velox <[EMAIL PROTECTED]> wrote:
> On Thu, 18 Jan 2007 10:15:56 +0900
> "Adrian Chadd" <[EMAIL PROTECTED]> wrote:
> > On 17/01/07, Andrew Pantyukhin <[EMAIL PROTECTED]> wrote:
> > > [...after reading the slashdotter's piece of wisdom...]
> > >
> > > Yes, but that's the kind of functionality I have always
> > > expected to be present in software raid solutions. I
> > > hope I'll live to see this implemented in geom.
> >
> > That made my eyes bleed.
> >
> > Bring on ZFS and its method of managing JBODs.
>
> I second that. I have been way less than impressed with software raid
> and LVM on linux.

But LVM by itself is a good volume manager. The block-level snapshot ability is especially good: LVM can actually notify dependent filesystems so that they flush all data when the block-level snapshot is created. ext3 does not support filesystem-based snapshots (like ufs2 does), but LVM snapshots are better than most filesystem snapshots.

ZFS is clearly better than LVM+ext3, and is really the only option for really big filesystems right now. ufs2 doesn't support journaling, and background fsck isn't a complete replacement for journaling. ext3 is stable, but it doesn't really scale well or have leading performance, and it doesn't really work on FreeBSD anyway. XFS is virtually unsupported, as SGI laid off all their filesystem developers when they went into chapter 11. ReiserFS, besides having some dodgy reliability issues, has a head of development who is currently in jail on suspicion of murder. So besides being the best, ZFS is nearly the only choice for really big filesystems.

Tom
___ freebsd-stable@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-stable To unsubscribe, send any mail to "[EMAIL PROTECTED]"
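Since the LVM snapshot behaviour comes up here, a minimal sketch (Linux LVM2 tools; the volume group, LV names, and sizes below are made up for illustration, not from the thread):

```shell
# Create a snapshot LV "homesnap" of the hypothetical volume /dev/vg0/home,
# reserving 1 GB of copy-on-write space to hold blocks changed after the
# snapshot is taken. LVM quiesces the filesystem on the origin LV first.
lvcreate --snapshot --size 1G --name homesnap /dev/vg0/home

# The snapshot can then be mounted read-only for a consistent backup:
mount -o ro /dev/vg0/homesnap /mnt/backup

# ...and discarded once the backup is done:
umount /mnt/backup
lvremove -f /dev/vg0/homesnap
```

If the copy-on-write area fills up before the snapshot is removed, the snapshot is invalidated, so size it for the expected write rate.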
Re: gmirror disks vs partitions
On Jan 19, 2007, at 12:42 AM, Vulpes Velox wrote:
> When ZFS becomes available, I plan to actually run it across multiple
> mirrors. It has built-in JBOD, but it does not do mirroring. It just
> does striping.

I think you misunderstand ZFS. It is robust against multiple disk failures. It doesn't do full-disk mirroring, but it does place multiple copies of data on multiple drives.
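For what it's worth, the "run it across multiple mirrors" layout is straightforward in ZFS. A sketch, assuming hypothetical pool and device names (not from the thread):

```shell
# Create a pool striped across two two-way mirror vdevs (RAID 1+0 style).
# "tank" and the ad* device names are placeholders.
zpool create tank mirror ad0 ad1 mirror ad2 ad3

# Independently of the vdev layout, ZFS can keep extra copies of a
# dataset's data blocks via the "copies" property:
zfs create tank/important
zfs set copies=2 tank/important
```

So mirroring is very much there; it is configured per vdev at pool creation time rather than per whole disk.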
Re: gmirror disks vs partitions
Andrew Pantyukhin wrote:
> Yes, but that's the kind of functionality I have always
> expected to be present in software raid solutions. I
> hope I'll live to see this implemented in geom.

For adding drives there's gconcat; for resizing (well, you currently have to decide on the maximum size in advance) there's gvirstor (http://wikitest.freebsd.org/gvirstor).
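A hedged sketch of both tools (device and volume names are invented; gvirstor was still experimental at the time, so the wiki page is authoritative for its exact syntax):

```shell
# gconcat: grow storage by concatenating whole providers into one device.
gconcat label data /dev/ad2 /dev/ad3
newfs /dev/concat/data

# gvirstor: declare a large virtual size up front, then back it with
# physical providers as they become available. Flags shown here follow
# the gvirstor documentation of the era and may differ in your version.
gvirstor label -s 2T bigvol /dev/ad2
gvirstor add bigvol /dev/ad3    # attach more physical space later
```

The gvirstor approach sidesteps filesystem growing entirely: newfs the full virtual size once, and only add physical chunks as usage approaches them.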
Re: gmirror disks vs partitions
On Thu, 18 Jan 2007 10:15:56 +0900 "Adrian Chadd" <[EMAIL PROTECTED]> wrote:
> On 17/01/07, Andrew Pantyukhin <[EMAIL PROTECTED]> wrote:
> > [...after reading the slashdotter's piece of wisdom...]
> >
> > Yes, but that's the kind of functionality I have always
> > expected to be present in software raid solutions. I
> > hope I'll live to see this implemented in geom.
>
> That made my eyes bleed.
>
> Bring on ZFS and its method of managing JBODs.

I second that. I have been way less than impressed with software raid and LVM on linux. I have always found mirroring whole disks rather than partitions to be way better: it makes the thing way easier to repair, in fewer steps. When ZFS becomes available, I plan to actually run it across multiple mirrors. It has built-in JBOD, but it does not do mirroring. It just does striping. ___ freebsd-stable@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-stable To unsubscribe, send any mail to "[EMAIL PROTECTED]"
Re: gmirror disks vs partitions
On 17/01/07, Andrew Pantyukhin <[EMAIL PROTECTED]> wrote:
> [...after reading the slashdotter's piece of wisdom...]
>
> Yes, but that's the kind of functionality I have always
> expected to be present in software raid solutions. I
> hope I'll live to see this implemented in geom.

That made my eyes bleed.

Bring on ZFS and its method of managing JBODs.

Adrian
Re: gmirror disks vs partitions
On 1/18/07, Scott Long <[EMAIL PROTECTED]> wrote:
> Andrew Pantyukhin wrote:
> > On 1/17/07, Josef Karthauser <[EMAIL PROTECTED]> wrote:
> >> A poll for opinions if I may?
> >>
> >> I've got a few gmirrors running on various machines, all of which
> >> pair up two drives at the physical level (i.e. mirror /dev/ad0s1
> >> with /dev/ad1s1). Of course there are other ways of doing it too,
> >> like mirroring at the partition level, ie pairing /dev/ad0s1a with
> >> /dev/ad1s1a, /dev/ad0s1e with /dev/ad1s1e, etc.
> >>
> >> Apart from potentially avoiding a whole disk from being copied
> >> during a resync after a crash, are there any other advantages to
> >> using partition level mirroring instead of drive level mirroring?
> >
> > I can imagine people using partition-level raid to
> > implement a popular configuration:
> >
> > You divide a couple of identical drives proportionally
> > in two partitions each, place a couple of the first
> > partitions into gmirror and a couple of the second
> > ones into gstripe. This way you get both reliable and
> > fast storage with just two drives. Some strings are
> > attached.
>
> The head movement that this causes makes it a poor performer. It is
> an option, but not a terribly popular one.

I hear many desktops and laptops nowadays come (or used to come) preconfigured this way.
Re: gmirror disks vs partitions
Andrew Pantyukhin wrote:
> On 1/17/07, Josef Karthauser <[EMAIL PROTECTED]> wrote:
>> A poll for opinions if I may?
>>
>> I've got a few gmirrors running on various machines, all of which
>> pair up two drives at the physical level (i.e. mirror /dev/ad0s1
>> with /dev/ad1s1). Of course there are other ways of doing it too,
>> like mirroring at the partition level, ie pairing /dev/ad0s1a with
>> /dev/ad1s1a, /dev/ad0s1e with /dev/ad1s1e, etc.
>>
>> Apart from potentially avoiding a whole disk from being copied
>> during a resync after a crash, are there any other advantages to
>> using partition level mirroring instead of drive level mirroring?
>
> I can imagine people using partition-level raid to
> implement a popular configuration:
>
> You divide a couple of identical drives proportionally
> in two partitions each, place a couple of the first
> partitions into gmirror and a couple of the second
> ones into gstripe. This way you get both reliable and
> fast storage with just two drives. Some strings are
> attached.

The head movement that this causes makes it a poor performer. It is an option, but not a terribly popular one.

Scott
Re: gmirror disks vs partitions
On Wednesday 17 January 2007 06:29, Andrew Pantyukhin wrote:
> On 1/17/07, Josef Karthauser <[EMAIL PROTECTED]> wrote:
> > A poll for opinions if I may?
> >
> > I've got a few gmirrors running on various machines, all of which
> > pair up two drives at the physical level (i.e. mirror /dev/ad0s1
> > with /dev/ad1s1). Of course there are other ways of doing it too,
> > like mirroring at the partition level, ie pairing /dev/ad0s1a with
> > /dev/ad1s1a, /dev/ad0s1e with /dev/ad1s1e, etc.
> >
> > Apart from potentially avoiding a whole disk from being copied
> > during a resync after a crash, are there any other advantages to
> > using partition level mirroring instead of drive level mirroring?
>
> I can imagine people using partition-level raid to
> implement a popular configuration:
>
> You divide a couple of identical drives proportionally
> in two partitions each, place a couple of the first
> partitions into gmirror and a couple of the second
> ones into gstripe. This way you get both reliable and
> fast storage with just two drives. Some strings are
> attached.

The reduced likelihood of needing to rebuild a given volume is usually enough of an argument for me to mirror at the partition level. Of course, the other side of the coin is that if more than one volume on a given pair of disks needs to be rebuilt, the disks will be twice (or more) as hammered (and less efficient due to the greater number of seeks) during the rebuild(s).

If you want to be creative/exotic, then it's sometimes useful to use partitions as building blocks for odd (or "advanced") volume configurations. For instance, let's say you're trying to get some disk redundancy for your workstation but you're limited to whatever drives you can scrounge up. (Have _I_ ever been in this position? nah... :) ) You have a 40GB disk, a 60GB disk, and an 80GB disk.
If you partition them up right and use gmirror with gstripe, it's possible to use all of the space and still be able to survive the failure of any one disk. Divide everything up into partitions of equal sizes. For an even number of disks you can use the GCD of the sizes as the partition size, but since there's an odd number of disks in this example we'll use GCD/2 or ~10GB. Pair one partition on the 40GB disk with one on the 60GB disk. Then pair all of the partitions on the 80GB disk with the remaining partitions on the 40 and 60 GB disks. Make each pair into a gmirror volume.

If you need to boot from the array, pick one pair to be your system volume. The rest of the gmirrors can all be added into a gstripe volume, so you end up with 90GB (or 80+10) of redundant storage with quite good performance (not that I would know, of course). You can use the leftover bits for swap, etc.

The two drawbacks to this approach vs a two-disk mirror are increased likelihood of drive failure (due to the greater number of disks) and a more complex recovery procedure if a drive fails (especially if you don't have a spare identical to or slightly larger than the one that failed).

Just some thoughts..

JN
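The arithmetic above can be checked quickly, and the resulting geom commands sketched. The disk and partition names below are hypothetical, not from the post:

```shell
# 10GB chunks: 4 on the 40GB disk, 6 on the 60GB disk, 8 on the 80GB disk.
chunks=$(( 40/10 + 60/10 + 80/10 ))     # 18 chunks total
pairs=$(( chunks / 2 ))                 # 9 gmirror volumes of 10GB each
echo "usable: $(( pairs * 10 ))GB"      # prints "usable: 90GB"

# The pairing itself, assuming ad0/ad1/ad2 are the 40/60/80GB disks and
# each 10GB chunk is its own bsdlabel partition (letters invented):
#   gmirror label gm0 /dev/ad0s1d /dev/ad1s1d     # system volume (40<->60)
#   gmirror label gm1 /dev/ad0s1e /dev/ad2s1d    # remaining 40GB chunks...
#   gmirror label gm4 /dev/ad1s1e /dev/ad2s1g    # ...and 60GB chunks pair
#   ...                                          # with the 80GB disk's 8
#   gstripe label st0 /dev/mirror/gm1 /dev/mirror/gm2 ... /dev/mirror/gm8
```

Each mirror spans two different physical disks, so losing any one disk degrades some mirrors but destroys none, which is what makes the 90GB figure survivable.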
Re: gmirror disks vs partitions
On 1/17/07, Matthew X. Economou <[EMAIL PROTECTED]> wrote:
> > Apart from potentially avoiding a whole disk from being copied
> > during a resync after a crash, are there any other advantages to
> > using partition level mirroring instead of drive level mirroring?
>
> Joe,
>
> Partition-level software RAID plus LVM is how the following Slashdot
> poster manages extendable (and unequally sized disk) arrays on Linux:
>
> http://ask.slashdot.org/comments.pl?sid=169386&cid=14117414

[...after reading the slashdotter's piece of wisdom...]

Yes, but that's the kind of functionality I have always expected to be present in software raid solutions. I hope I'll live to see this implemented in geom.
RE: gmirror disks vs partitions
> Apart from potentially avoiding a whole disk from being copied
> during a resync after a crash, are there any other advantages to
> using partition level mirroring instead of drive level mirroring?

Joe,

Partition-level software RAID plus LVM is how the following Slashdot poster manages extendable (and unequally sized disk) arrays on Linux:

http://ask.slashdot.org/comments.pl?sid=169386&cid=14117414

Best wishes,
Matthew

--
"Rogues are very keen in their profession, and know already much more than we can teach them respecting their several kinds of roguery." - A. C. Hobbs in _Locks and Safes_ (1853)
Re: gmirror disks vs partitions
On 1/17/07, Josef Karthauser <[EMAIL PROTECTED]> wrote:
> A poll for opinions if I may?
>
> I've got a few gmirrors running on various machines, all of which
> pair up two drives at the physical level (i.e. mirror /dev/ad0s1
> with /dev/ad1s1). Of course there are other ways of doing it too,
> like mirroring at the partition level, ie pairing /dev/ad0s1a with
> /dev/ad1s1a, /dev/ad0s1e with /dev/ad1s1e, etc.
>
> Apart from potentially avoiding a whole disk from being copied
> during a resync after a crash, are there any other advantages to
> using partition level mirroring instead of drive level mirroring?

I can imagine people using partition-level raid to implement a popular configuration:

You divide a couple of identical drives proportionally in two partitions each, place a couple of the first partitions into gmirror and a couple of the second ones into gstripe. This way you get both reliable and fast storage with just two drives. Some strings are attached.
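A sketch of that two-drive layout in geom terms (slice/partition names are illustrative, not from the post; assumes the gmirror and gstripe kernel modules are loaded):

```shell
# Two identical disks ad0 and ad1, each carved into two slices.
# Mirror the first slices for the reliable half of the storage...
gmirror label -v safe /dev/ad0s1 /dev/ad1s1

# ...and stripe the second slices for the fast (but non-redundant) half.
gstripe label -v fast /dev/ad0s2 /dev/ad1s2

# New devices appear under /dev/mirror and /dev/stripe:
newfs /dev/mirror/safe
newfs /dev/stripe/fast
```

One of the attached strings: reads and writes to the two halves compete for the same two sets of disk heads, which is exactly the seek-thrashing objection raised later in the thread.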