Re: Software raid VS hardware raid
Artem Kuchin wrote:
> My other concern is what happens when one drive goes down if we use
> gmirror? Is it completely transparent? Can the bad drive be hot-swapped
> while the server is running and a rebuild started? I am now thinking
> about GPT + gmirror (including boot and swap).

Yes. In fact, you can test this by unplugging the data or power cable to a drive while the server is running. I've done this with consumer SATA drives and, so far, not had a problem. The server stays up and running, and disk access is not interrupted. I can then plug in a new disk, add it to the gmirror, and the array rebuilds. I've not tried this with GPT, so I can't comment there.

-Modulok-

___ freebsd-questions@freebsd.org mailing list http://lists.freebsd.org/mailman/listinfo/freebsd-questions To unsubscribe, send any mail to freebsd-questions-unsubscr...@freebsd.org
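A sketch of the swap procedure described above, assuming a mirror named gm0 and a replacement disk appearing as ada1 (both names are placeholders; substitute your own):

```shell
# A failed component can't be removed with 'gmirror remove'; instead
# tell the mirror to stop waiting for the missing disk:
gmirror forget gm0

# After hot-plugging the replacement drive, add it to the mirror.
# Synchronization starts automatically:
gmirror insert gm0 ada1

# Watch the rebuild progress:
gmirror status gm0
```

These commands need root and real disks, so treat this as an illustration of the sequence rather than something to paste blindly.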
Re: Software raid VS hardware raid
30.01.2013 1:01, Warren Block:
> On Tue, 29 Jan 2013, Artem Kuchin wrote:
>> So: gmirror + GPT = conflict on the last sector; GPT + gmirror = hard
>> drive head kill. Nice... So, for disks of no more than 2TB, is the
>> best way to go a gmirror of the whole drive with partitions on top of
>> it?
> GPT partitions should work, just limit it to one mirrored partition
> per drive.

Please clarify what you mean here.

>> Or maybe there is a way to instruct gmirror to rebuild only what I say
>> (a manual rebuild)?
> 'gmirror configure -n'? Have not tried it. The trick would be to do
> that before multiple mirrors start rebuilding, which they will as soon
> as geom_mirror.ko is loaded.

As I understand from the man page, -n sets up the device to never rebuild automatically. So this is probably the thing I want. I need to set up a test system and play with it a bit.

Artem
Re: Software raid VS hardware raid
On 01/28/13 21:43, Artem Kuchin wrote:
> I am planning to use mirror configuration of two SATA 7200rpm 2TB
> disks.

I personally vote for gmirror in this case; I've used it a lot and found it very good with respect to both performance and robustness. You can spend the money you save on the controller buying good disks; as someone else pointed out, don't get desktop-class ones, but 24x7 ones.

Just my 2c.

bye
av.
Re: Software raid VS hardware raid
30.01.2013 18:06, Warren Block:
> On Wed, 30 Jan 2013, Artem Kuchin wrote:
>> Please clarify what you mean here.
> If only one GPT partition on a drive is mirrored with another GPT
> partition on another drive, head contention never comes up. There is
> only one mirror. It does nearly eliminate the usefulness of GPT
> partitioning.

Um... and how can I do that if I have a simple mirror with two drives and want to mirror everything on them? As I understand it, I will have at least boot, swap and UFS partitions on those drives; that is three partitions at least.

Artem
Re: Software raid VS hardware raid
On Jan 30, 2013, at 8:10 AM, Andrea Venturoli wrote:
> You can spend the money you save on the controller buying good disks;
> as someone else pointed out, don't get desktop-class ones, but 24x7
> ones.

Server-class drives buy you some improvement, but my recent experience with Seagate Barracuda ES.2 drives is not that good. I have had 50% of them fail within the 5-year warranty period. My disks run 24x7 and I use ZFS under FreeBSD 9, so I have not lost any data. I have:

2 x Seagate ES.2 250 GB (one has failed)
4 x Seagate ES.2 1 TB (two have failed)
2 x Hitachi UltraStar 1 TB (pre-WD acquisition), no failures, but they are less than 2 years old. They are also noticeably faster than the Seagate ES.2.

I just ordered 2 x WD RE4 500 GB; we'll see how those do. I go out of my way to purchase disks with a 5-year warranty. They are still out there, but you have to look for them.

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company
Re: Software raid VS hardware raid
There seems to be one more advantage to gmirror. If I understood correctly,

    gmirror label -v -b split -s 2048 data da0 da1 da2

will create a triple-mirror RAID 1, that is, triple redundancy, which is hardly available on any hardware RAID. Am I correct here? Also, does anyone know how to choose the split threshold (-s 2048) correctly?

Artem
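For reference, a sketch of the three-way setup and how to verify it. Device names are placeholders; as I read gmirror(8), -s is the slice size in bytes for the 'split' balance algorithm (reads larger than this are split across components), with 4096 as the default when it is omitted:

```shell
# Three-way RAID 1 ("data") using the 'split' balance algorithm.
# Read requests larger than 2048 bytes are split across all three
# components; every write still goes to all of them.
gmirror label -v -b split -s 2048 data da0 da1 da2

# The mirror appears under /dev/mirror/:
newfs -U /dev/mirror/data

# All three components should show up as ACTIVE:
gmirror status data
```

Needs root and three real disks, so this is only an illustration of the commands involved.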
Re: Software raid VS hardware raid
On Wed, 30 Jan 2013, Artem Kuchin wrote:
> Um... and how can I do that if I have a simple mirror with two drives
> and want to mirror everything on them? As I understand it, I will have
> at least boot, swap and UFS partitions on those drives; that is three
> partitions at least.

If you want to use the same drive for booting, it's possible. Create all three partitions on both drives manually. Then mirror the freebsd-ufs partition only. The contents of the freebsd-boot partition don't change often, and swap does not have to be mirrored. Not that it's easy or convenient, but it's an option.
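The layout described above might be sketched like this. Purely illustrative: the device names (ada0/ada1), partition sizes, and the mirror name gm0 are placeholders, and the bootcode step assumes the stock UFS GPT boot blocks:

```shell
# Identical GPT layout on both drives: boot, swap, UFS.
for disk in ada0 ada1; do
    gpart create -s gpt $disk
    gpart add -t freebsd-boot -s 512k $disk
    gpart add -t freebsd-swap -s 4g $disk
    gpart add -t freebsd-ufs $disk
    gpart bootcode -b /boot/pmbr -p /boot/gptboot -i 1 $disk
done

# Mirror ONLY the freebsd-ufs partitions (one mirror per drive pair,
# so a rebuild never thrashes the heads between competing mirrors):
gmirror label -v gm0 ada0p3 ada1p3
newfs -U /dev/mirror/gm0
```

The boot and swap partitions stay unmirrored, as the reply notes; you would re-install bootcode on a replacement drive by hand.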
Re: Software raid VS hardware raid
On Jan 30, 2013, at 10:22 AM, Warren Block wrote:
> If you want to use the same drive for booting, it's possible. Create
> all three partitions on both drives manually. Then mirror the
> freebsd-ufs partition only. The contents of the freebsd-boot partition
> don't change often, and swap does not have to be mirrored.

Note that if you do NOT mirror swap, then in the event of a disk failure you will most likely crash when the system tries to swap in some data from the failed drive. If you mirror swap, then you do not risk a crash due to missing swap data.

--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company
Re: Software raid VS hardware raid
30.01.2013 19:28, Paul Kraus:
> Note that if you do NOT mirror swap, then in the event of a disk
> failure you will most likely crash when the system tries to swap in
> some data from the failed drive. If you mirror swap, then you do not
> risk a crash due to missing swap data.

Yes, that's what I wanted to say. Also, not being able to boot because the first disk has an error in its boot sector, or is just strangely dead, is not acceptable either. However, I was just thinking: if I use gmirror, then the BIOS does not know anything about it. I may set both hard disks as boot disks, but if the first disk is brain-damaged, the BIOS may just get stuck trying to boot from it and never pass the boot attempt to the second disk. I don't know; it depends on the BIOS, of course. But this seems to be a disadvantage of software RAID.

Artem
Re: Software raid VS hardware raid
On Wed, 30 Jan 2013, Artem Kuchin wrote:
> I may set both hard disks as boot disks, but if the first disk is
> brain-damaged, the BIOS may just get stuck trying to boot from it and
> never pass the boot attempt to the second disk. I don't know; it
> depends on the BIOS, of course. But this seems to be a disadvantage of
> software RAID.

That's true. The similar situation with hardware RAID is when the controller fails. The metadata is probably specific to that manufacturer and maybe to that model of controller. It's a good idea to get spares, because as Murphy is my witness, in an emergency that controller will not be available in the same town, district, country, or continent. More likely it will have been long discontinued, with no data migration path.
Re: Software raid VS hardware raid
29.01.2013 11:54, Michael Powell:
> I guess what I'm trying to point out is that low performance wrt
> software RAID will stem from other things besides just simply consuming
> a few CPU cycles. Today's CPUs have the cycles to spare. I've been
> using gmirror for RAID 1 mirrors for a few years now and am happy with
> this. I have had a few old drives die and the servers stayed up and
> online. This allowed me to defer the actual drive replacement and not
> have to drop everything and fight fire.

Thank you everyone for replying. I realize that many other things affect the performance, not only the CPU power. For example, disk I/O kernel multithreading is one of those things, but I guess in FreeBSD 9 it is more or less solved.

The server is going to be a web server with many sites and with MySQL running on it. Nothing really heavy. Currently we run all this on our own server with 8 cores, 16GB RAM and a 3ware RAID 1, and the CPU load is about 5% :) Everything is quick and responsive. I hope to see the same on a software RAID.

I really don't want to deploy ZFS on a new server where all these sites need to migrate, because I am kind of a "don't fix it if it is not broken" kind of guy. UFS + journaling + soft updates has served us well for years, and snapshots are available on UFS too.

My other concern is what happens when one drive goes down if we use gmirror? Is it completely transparent? Can the bad drive be hot-swapped while the server is running and a rebuild started? I am now thinking about GPT + gmirror (including boot and swap).

Artem
Re: Software raid VS hardware raid
Artem Kuchin wrote:
[snip]
> The server is going to be a web server with many sites and with MySQL
> running on it. Nothing really heavy. Currently we run all this on our
> own server with 8 cores, 16GB RAM and a 3ware RAID 1, and the CPU load
> is about 5% :) Everything is quick and responsive. I hope to see the
> same on a software RAID.

The controller would be a slight concern, but for what you've described doing, I doubt it will be a big deal. The 3ware may have a faster processor on it than, say, a generic onboard built-in. But since all we're talking about here is a RAID 1 mirror, my guess is it may not be a big enough difference to see. Writes will be just as if you are writing to one drive; reads will be faster. Maybe that 5% CPU load turns into 6% or 7%.

> I really don't want to deploy ZFS on a new server where all these sites
> need to migrate, because I am kind of a "don't fix it if it is not
> broken" kind of guy. UFS + journaling + soft updates has served us well
> for years, and snapshots are available on UFS too.

I understand; I've only played around with ZFS some on Solaris. I may move in that direction some day, but not for now.

> My other concern is what happens when one drive goes down if we use
> gmirror? Is it completely transparent? Can the bad drive be hot-swapped
> while the server is running and a rebuild started? I am now thinking
> about GPT + gmirror (including boot and swap).

I've never actually hot-swapped one, but I can't see any reason why not. You can't use the gmirror remove directive when a drive has failed, but you can do a "gmirror forget <device>", swap it, then just do "gmirror insert <device>" to insert the replacement drive into the mirror. When everything is working as it should, gmirror is mostly 'automatic', e.g. after the insert the rebuild just starts. The main thing I appreciated about this is that the server stayed up and online after one drive died.

My two servers at home are my testbeds to test things out before doing stuff to the ones at work. I just installed both with 9.1. The difference now is that I've used GPT (gpart), and this is new to me. Previously everything was always fdisk and disklabel. Both these machines are set up on one drive at this point, and I haven't gotten into the mirroring yet. With the old fdisk/disklabel it was simple to just mirror the entire drive itself (slice). The other approach is to mirror partitions. I think I may need to do this, as I think this is the way you have to proceed in order to avoid having GPT and gmirror both trying to claim the last sector on the drive (metadata storage).

-Mike
Re: Software raid VS hardware raid
On Tue, 29 Jan 2013, Artem Kuchin wrote:
> My other concern is what happens when one drive goes down if we use
> gmirror? Is it completely transparent? Can the bad drive be hot-swapped
> while the server is running and a rebuild started? I am now thinking
> about GPT + gmirror (including boot and swap).

As far as gmirror is concerned, yes, drives can be removed and new drives inserted while the mirror is running. Hot swap is more of an issue with the hardware. I have not tried it with SATA drives, although I think it should work.

The Handbook chapter on gmirror talks about the problems with GPT and GEOM metadata. In short: right now, they conflict. It's possible to mirror GPT partitions, but be aware that if you mirror more than one partition on a drive, a rebuild after replacing a drive could thrash the heads as mirrors are rebuilt simultaneously.

http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/geom-mirror.html
Re: Software raid VS hardware raid
29.01.2013 18:57, Warren Block:
> The Handbook chapter on gmirror talks about the problems with GPT and
> GEOM metadata. In short: right now, they conflict. It's possible to
> mirror GPT partitions, but be aware that if you mirror more than one
> partition on a drive, a rebuild after replacing a drive could thrash
> the heads as mirrors are rebuilt simultaneously.
> http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/geom-mirror.html

So: gmirror + GPT = conflict on the last sector; GPT + gmirror = hard drive head kill. Nice... So, for disks of no more than 2TB, is the best way to go a gmirror of the whole drive with partitions on top of it? Or maybe there is a way to instruct gmirror to rebuild only what I say (a manual rebuild)?

Artem
Re: Software raid VS hardware raid
On Tue, 29 Jan 2013 08:57:31 -0600, Warren Block <wbl...@wonkity.com> wrote:
> As far as gmirror is concerned, yes, drives can be removed and new
> drives inserted while the mirror is running. Hot swap is more of an
> issue with the hardware. I have not tried it with SATA drives, although
> I think it should work.
>
> The Handbook chapter on gmirror talks about the problems with GPT and
> GEOM metadata. In short: right now, they conflict. It's possible to
> mirror GPT partitions, but be aware that if you mirror more than one
> partition on a drive, a rebuild after replacing a drive could thrash
> the heads as mirrors are rebuilt simultaneously.
> http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/geom-mirror.html

Why isn't gmirror more intelligent? I hate to use Linux as an example, but mdadm won't simultaneously rebuild multiple RAID sets if they use the same physical providers, precisely to prevent this. Could this be added as a feature? Even a sysctl toggle?
Re: Software raid VS hardware raid
On Tue, 29 Jan 2013, Artem Kuchin wrote:
> So: gmirror + GPT = conflict on the last sector; GPT + gmirror = hard
> drive head kill. Nice... So, for disks of no more than 2TB, is the best
> way to go a gmirror of the whole drive with partitions on top of it?

GPT partitions should work, just limit it to one mirrored partition per drive.

> Or maybe there is a way to instruct gmirror to rebuild only what I say
> (a manual rebuild)?

'gmirror configure -n'? Have not tried it. The trick would be to do that before multiple mirrors start rebuilding, which they will as soon as geom_mirror.ko is loaded.
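Untested, as the reply says, but the idea might look like this (gm0 and gm1 are placeholder mirror names, and whether re-enabling autosync kicks off a pending rebuild immediately would need to be verified on a test box):

```shell
# Turn off autosynchronization on each mirror, so rebuilds don't all
# start at once when geom_mirror.ko is loaded:
gmirror configure -n gm0
gmirror configure -n gm1

# Then rebuild one mirror at a time, re-enabling autosync individually:
gmirror configure -a gm0
gmirror status gm0          # ...wait until gm0 reports COMPLETE...
gmirror configure -a gm1
```

This keeps the heads working on one mirror at a time instead of seeking between two simultaneous rebuilds on the same spindles.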
Software raid VS hardware raid
Hello!

I have to make a decision on choosing a dedicated server. The problem I see is that while I can find very affordable and good options, they do not provide hardware RAID, or even if they do, it is not the best hardware for FreeBSD. The server base configuration is 8 cores, 32GB RAM, 2.8+ GHz.

So, maybe someone has personal experience with both worlds and can tell me whether it really matters in such a configuration if I go for software RAID. What are the benefits and what are the negatives of software RAID? How big is the performance penalty? I am planning to use a mirror configuration of two SATA 7200rpm 2TB disks. Nothing fancy. The file system planned is UFS with journaling.

Artem
Re: Software raid VS hardware raid
On 01/28/13 21:43, Artem Kuchin wrote:
> I have to make a decision on choosing a dedicated server. The problem I
> see is that while I can find very affordable and good options, they do
> not provide hardware RAID, or even if they do, it is not the best
> hardware for FreeBSD. The server base configuration is 8 cores, 32GB
> RAM, 2.8+ GHz.

I won't delve into detail here, but if the data is important, HW RAID is where you want to be. Perhaps you could give us a few more details about what the purpose of the server is? Mission-critical or low cost? Those two tend to be mutually exclusive... We are HP-only, but have good experience with LSI as well.

Just my $0.02.

//per
Re: Software raid VS hardware raid
On Mon, 28 Jan 2013, Per olof Ljungmark wrote:
> I won't delve into detail here, but if the data is important, HW RAID
> is where you want to be. Perhaps you could give us a few more details
> about what the purpose of the server is? Mission-critical or low cost?
> Those two tend to be mutually exclusive...

A problem with HW RAID is that if the controller breaks, you need to get an identical controller to replace it, or the data will be lost. With software RAID, you can read the data on any machine that will boot FreeBSD. That is a great convenience compared to searching eBay for an obsolete controller with the proper rev level.

We haven't noticed any speed disadvantage on modern multi-core hardware with RAID 1. The advantages of HW RAID escape me - I understand that years ago it provided OS independence and reduced CPU load, but it no longer provides the former, and with 8 cores do you need the latter while waiting for a disk platter to spin?

ZFS is worthwhile, too, especially since you have a good amount of memory. That would give you snapshots and some other desirable features, such as background scanning for defects, that UFS doesn't have.

Surely the presence of SATA drives shows that low cost is essential. Mirroring and ZFS provide very important advantages. HW RAID seems to fill a much needed gap (apologies to Brian Kernighan).

daniel feenberg

> We are HP-only, but have good experience with LSI as well. Just my
> $0.02.
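The "background scanning for defects" mentioned above is ZFS's scrub. A sketch, with 'tank' as a placeholder pool name:

```shell
# A scrub reads every allocated block in the pool and verifies its
# checksum, repairing from the redundant copy where possible:
zpool scrub tank

# Progress and any repaired/unrecoverable errors show up here:
zpool status tank

# A periodic scrub can be scheduled from cron, e.g. weekly:
# 0 3 * * 0  root  /sbin/zpool scrub tank
```

UFS has no equivalent; fsck only checks metadata consistency, not the integrity of file data.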
Re: Software raid VS hardware raid
On Jan 28, 2013, at 3:43 PM, Artem Kuchin wrote:
> I have to make a decision on choosing a dedicated server. The problem I
> see is that while I can find very affordable and good options, they do
> not provide hardware RAID, or even if they do, it is not the best
> hardware for FreeBSD.

I prefer SW RAID, specifically ZFS, for two very large reasons:

1) Visibility: From the OS layer you have very good visibility into the health of the RAID set and the underlying drives. All of the lower-end HW RAID solutions I have seen require proprietary software to manage the RAID configuration, usually from the physical system's BIOS layer. Finding good OS-layer software to monitor the RAID and the drives has been very painful. If you don't know you have a failure, then you can't do anything about it, and when you have a second failure you lose data. Running a HW RAID system and not being able to issue a simple command from the OS to see the status of the RAID scares me.

2) Error Detection and Correction: HW RAID relies on the drives to report read and write errors. With uncorrectable error rates of 10^-14 to 10^-15 and LARGE (1 TB plus) drives, you are almost statistically guaranteed to run into uncorrectable errors over the life of a typical drive. ZFS has end-to-end checksums and can detect a single bad bit from a drive; if the set is redundant, it can recreate the correct data and rewrite it, effectively correcting the bad data on disk.

NOTE: Larger, more expensive HW RAID systems address both of the above issues, but at a much higher cost in terms of money and management overhead.

DISCLAIMER: I have been managing mission-critical, cannot-afford-to-lose-it data under ZFS for over 5 years, with no loss of data (even with some horribly unreliable low-cost HW RAID systems under the ZFS layer... if we had not used ZFS we would have lost data multiple times).
--
Paul Kraus
Deputy Technical Director, LoneStarCon 3
Sound Coordinator, Schenectady Light Opera Company
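The visibility point above can be illustrated with the standard ZFS status commands ('tank' is a placeholder pool name):

```shell
# One command from the OS shows pool health, every member disk,
# and running error counts -- no vendor BIOS tool required:
zpool status -v tank

# Capacity and health summary for all pools:
zpool list
```

Non-zero READ/WRITE/CKSUM counters against a member disk are the early warning that low-end HW RAID controllers often keep hidden behind proprietary management software.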
Re: Software raid VS hardware raid
Artem Kuchin wrote:
> Hello! I have to make a decision on choosing a dedicated server. The
> problem I see is that while I can find very affordable and good
> options, they do not provide hardware RAID, or even if they do, it is
> not the best hardware for FreeBSD. The server base configuration is 8
> cores, 32GB RAM, 2.8+ GHz. [...] I am planning to use a mirror
> configuration of two SATA 7200rpm 2TB disks. Nothing fancy. The file
> system planned is UFS with journaling.

I can't say for sure exactly what's best for your needs; however, please allow me to toss out some very generic tidbits which may aid you in some way.

Historically, back when RAID was new, hardware controllers were the only way to go. Back then I would never have looked at software RAID for a server machine: best to offload as much work away from the CPU as possible to free it up for running the OS. What has changed is the amount of raw horsepower available from modern-day processors compared to when RAID first came out. On the multi-core monster CPUs of today, software RAID is a perfectly viable consideration because there are CPU cycles to spare, so the performance penalty is less now than it once was.

Having said that, there are several other considerations to keep in mind as well. The type of RAID required matters. If you want/need RAID 5/6 it is definitely better to go with hardware RAID because of the horsepower required for the XOR parity generation. You would want RAID 5/6 running on a hardware controller and not on the CPU. On the other hand, RAID 0, 1, and 10 are fine candidates for software RAID. One thing I've noticed that seems to somewhat get lost in this discussion is equating software-based RAID with not needing to spend money on an expensive RAID controller.
At first glance it does seem like quite a waste to spend hundreds of dollars on a really fast RAID controller and then turn all its functionality off and just use it JBOD-style. If you truly want performance you still need the processing power of the hardware chip on the (expensive) controller. Most central to this is I/Os per second. This matters more to some workloads than others, with database serving probably at the top of the list, where I/Os per second is king. The better the chip on the controller card, the more I/Os per second.

Another thing, which matters less with server hardware, is the third kind of RAID known as fake or pseudo RAID. This is mostly found on desktop PC motherboards and some low-end (cheap) cards. There is a config in the BIOS to set up so-called RAID, but that is only half of the matter - the other half is in the driver. FreeBSD does indeed have support for some of these fake-RAID things, but I stay far, far away from them. Either go hardware or pure software only - the fake RAID is crap.

Another thing I'd warn you about is the drives themselves. Take a look:

http://wdc.custhelp.com/app/answers/detail/a_id/1397

Many people get lucky much of the time and don't experience problems with this, but using drives designed for desktop PCs with RAID can be prone to problems. Drives designed for servers are more expensive, but I've always felt it is better to put server drives in servers. :-)

In terms of a 'performance penalty', what you will find is that it gets shifted away from just losing a few CPU cycles into other areas. If the drives are Advanced Format 4k-sector critters and they aren't properly aligned in the partitioning phase of setup, performance will take a hit. If the controller chip they are hooked up to is slow, then the entire drive subsystem will suffer. Another problem area that will surface is the shift away from the old-style DOS MBR scheme towards GPT.
Software RAID (and indeed hardware controllers too) stores its metadata at the end of the drive, and it needs to be outside the file system. The problem arises when both the software RAID and the GPT partitioning try to store metadata in the same location and collide. Just knowing about this in advance, and spending some quality reading time on it before trying to set up the box, will help greatly. Plenty has been written (even on this list) about this subject by people smarter than me, so the info you need is out there, albeit it can be confusing at first.

I guess what I'm trying to point out is that low performance wrt software RAID will stem from other things besides just simply consuming a few CPU cycles. Today's CPUs have the cycles to spare. I've been using gmirror for RAID 1 mirrors for a few years now and am happy with this. I have had a few old drives die and the servers stayed up and online. This allowed me to defer the actual drive replacement and not have to drop everything and fight fire.
software raid
Does FreeBSD support any type of software RAID? I have an old rack-mount server which has 8 bays, but all SATA, and NO RAID. It sure would be nice to have a software RAID to create a NAS device.

--
Jim Pazarena fqu...@paz.bz
Re: software raid
> Does FreeBSD support any type of software RAID? I have an old
> rack-mount server which has 8 bays, but all SATA, and NO RAID. It sure
> would be nice to have a software RAID to create a NAS device.

Yes! An example of setting up a 3-disk raidz might look like this:

    zpool create myfancyraid raidz ad4 ad6 ad8
    zfs create myfancyraid/foo
    zfs set mountpoint=/usr/foo myfancyraid/foo
    zfs mount -a
    cd /usr/foo
    echo "hello world" > hello.txt

Yay! Then edit /etc/rc.conf to enable ZFS at boot time:

    echo 'zfs_enable="YES"' >> /etc/rc.conf

How's my RAID doing today? Cake:

    zpool status
    zfs list

You can even mix and match RAID and encryption. Below, I put a raidz on top of a geli encryption layer on three devices. (There are other ways to do this too.) When it comes time to decommission disks, there are no company data leaks (depending on your needs):

    # Create the geli providers:
    geli init -b -e AES -l 256 /dev/ad4
    geli init -b -e AES -l 256 /dev/ad6
    geli init -b -e AES -l 256 /dev/ad8

    # Attach them (or reboot):
    geli attach ad4
    geli attach ad6
    geli attach ad8

    # Make the zpool and ZFS file system:
    zpool create myfancyraid raidz ad4.eli ad6.eli ad8.eli
    zfs create myfancyraid/foo
    zfs set mountpoint=/usr/foo myfancyraid/foo
    zfs mount -a

Then edit /boot/loader.conf to load geli at boot time:

    echo 'geom_eli_load="YES"' >> /boot/loader.conf

Finally, add the bit about ZFS to /etc/rc.conf:

    echo 'zfs_enable="YES"' >> /etc/rc.conf

You'll be asked for the password for each provider (disk) at boot time before the system enters multi-user mode. Make sure you have console access and a backup copy of the password somewhere!

A word on graid3: for a multi-user file server serving lots of small requests, graid3 is about the worst performance you can get due to its RAID 3 nature. Requests have to be served sequentially, using all disks in the array. Slow, in my experience.

Good luck!
-Modulok-
Re: software raid
On Tue, 7 Feb 2012, Jim Pazarena wrote: Does FreeBSD support any type of software raid? I have an old rack mount server which has 8 bays, but all SATA, and NO raid. Sure would be nice to have a software raid to create a NAS device. Sure, multiple ways, in fact:

http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/geom-striping.html
http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/geom-mirror.html
http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/geom-raid3.html
http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/filesystems-zfs.html
http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/vinum-vinum.html
http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/disks-hast.html

That's a start. gmirror and ZFS are probably the most common.
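For the gmirror route, a minimal whole-disk mirror might look like the sketch below. This is illustrative only: the device names ada0/ada1 and the label gm0 are assumptions, so adjust them to your hardware, and read the Handbook chapter linked above before running any of this on disks with real data.

```
gmirror load                                        # load geom_mirror.ko
gmirror label -v gm0 /dev/ada0 /dev/ada1            # create the two-disk mirror
echo 'geom_mirror_load="YES"' >> /boot/loader.conf  # load the module at boot
newfs -U /dev/mirror/gm0                            # filesystem on the mirror device
mount /dev/mirror/gm0 /mnt
gmirror status                                      # watch initial synchronization
```

After a disk failure, `gmirror forget gm0` followed by `gmirror insert gm0 /dev/adaX` on the replacement starts the rebuild.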
Re: Software RAID options
On Saturday 30 January 2010, Danny Edge wrote: Thanks, Glen, I should have mentioned that I did see gmirror mentioned in the HB. Pending further suggestions, I will try gmirror for software RAID 1 (yes, as large as the smallest disk). It's also possible to mirror individual slices rather than an entire disk http://people.freebsd.org/~rse/mirror/ so you could create matching slices on the disks and still have the spare space of the larger disk available for use as non-mirrored space. -- Mike Clarke
Software RAID options
What works for you and can you suggest a guide? I haven't setup a BSD server in 8 years, but my environment will be: FreeBSD 7.2 Release x2 HD's (not the same size, if I need to spend the money, on two like drives, kindly insist) DNS cache and auth Postfix MTA 1 user/1 IMAP mailbox less than 10GB's of data I also plan on backing up via newbie rsync and SSH scripts. Thanks. -- CPDE - Certified Petroleum Distribution Engineer CCBC - Certified Canadian Beer Consumer
Re: Software RAID options
Hi, Danny Edge wrote: What works for you and can you suggest a guide? I haven't setup a BSD server in 8 years, but my environment will be: I've been using gmirror for some time, without problems. http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/geom-mirror.html FreeBSD 7.2 Release x2 HD's (not the same size, if I need to spend the money, on two like You really never specified your RAID type. If it is RAID-0 (striping), as the cliche goes, size doesn't matter. If it is RAID-1, if you do not have identically sized disks, the mirror will only be as large as the smallest disk. (This is mentioned in the handbook, as well.) I also plan on backing up via newbie rsync and SSH scripts. May I suggest rsnapshot? Regards, -- Glen Barber
Re: Software RAID options
On Fri, Jan 29, 2010 at 11:18 PM, Glen Barber glen.j.bar...@gmail.comwrote: Hi, Danny Edge wrote: What works for you and can you suggest a guide? I haven't setup a BSD server in 8 years, but my environment will be: I've been using gmirror for some time, without problems. http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/geom-mirror.html Thanks, Glen, I should have mentioned that I did see gmirror mentioned in the HB. Pending further suggestions, I will try gmirror for software RAID 1 (yes, as large as the smallest disk). [Snip...] . I also plan on backing up via newbie rsync and SSH scripts. May I suggest rsnapshot? I will look into rsnapshot. All these new tools that I didn't have 10 years ago! -- CPDE - Certified Petroleum Distribution Engineer CCBC - Certified Canadian Beer Consumer
Re: FreeBSD Software RAID
Gary Gatten wrote: What about with PAE and/or other extension schemes? Doesn't help with the KVM requirement, and still only provides a 4GB address space for any single process. If it's just memory requirements, can I assume if I don't have a $hit load of storage and billions of files it will work ok with 4GB of RAM? I guess I'm just making sure there isn't some bug that only exists on the i386 architecture? ZFS should work on i386. As far as I know there aren't any killer bugs that are architecture specific, but I'm no expert. Unless your aim is to learn about ZFS I personally wouldn't bother with it on an i386 system: you'll almost certainly get a lot better performance and a lot less grief out of UFS under those conditions. Cheers, Matthew -- Dr Matthew J Seaman MA, D.Phil. 7 Priory Courtyard Flat 3 PGP: http://www.infracaninophile.co.uk/pgpkey Ramsgate Kent, CT11 9PW
Re: FreeBSD Software RAID
I really don't have any hard data on ZFS performance relative to UFS + geom, so please test yourself :)
Re: FreeBSD Software RAID
ZFS should work on i386. As far as I know there aren't any killer bugs that are architecture specific, but I'm no expert. Unless your aim is to learn Unless someone assumes that pointers are 4 bytes and writes their C program accordingly, it will work as well in 64-bit mode as in 32-bit mode.
Re: FreeBSD Software RAID
On Wednesday 27 May 2009 09:52:42 am Wojciech Puchar wrote: ZFS should work on i386. As far as I know there aren't any killer bugs that are architecture specific, but I'm no expert. Unless your aim is to learn unless someone assume than size of pointers are 4 bytes, and write program in C, there will work as good in 64-bit mode and in 32-bit mode. Wojciech, I have to ask: are you actually a programmer or are you repeating things you've read elsewhere? I can think of a whole list of reasons why code written to target a 64-bit system would be non-trivial to port to 32-bit, particularly if performance is an issue. -- Kirk Strauser
Re: FreeBSD Software RAID
in C, there will work as good in 64-bit mode and in 32-bit mode. Wojciech, I have to ask: are you actually a programmer or are you repeating Yes, I am. If you are interested: I wrote programs for x86, ARM (ARM7TDMI), MIPS32 (4Kc), and once for Alpha. I have quite good knowledge of ARM and MIPS assembly; for x86 it is quite outdated, as I wrote my last assembly program when the 486 was a new CPU. things you've read elsewhere? You have probably mistaken me for some people on this list who do that. If you read my posts on this list (and maybe others) you know that the last thing I do is repeat known and popular opinions :) I can think of a whole list of reasons why code written to target a 64-bit system would be non-trivial to port to 32-bit, Do you mean performance, or whether it works at all? I have already written a lot of programs, and after moving to 64-bit (amd64) only one wasn't working right after recompiling, because I had assumed a pointer is 4 bytes long. Do you have any other examples of code non-portability between amd64 and i386? I say between amd64 and i386 because there are more issues with other archs, where for example non-aligned memory access is not allowed.
Re: FreeBSD Software RAID
On Wednesday 27 May 2009 11:40:51 am Wojciech Puchar wrote: you talk about performance or if it work at all? Both, really. If they have to code up macros to support identical operations (such as addition) on both platforms, and accidentally forget to use the macro in some place, then voila: untested code. do you have any other examples of code non-portability between amd64 and i386? You're also forgetting that this isn't high-level programming where you get to lean on a cross-platform libc or similar. This is literally interfacing with the hardware, and there are a whole boatload of subtle incompatibilities when handling stuff at that level. -- Kirk Strauser
Re: FreeBSD Software RAID
you talk about performance or if it work at all? Both, really. If they have to code up macros to support identical operations OK, talking about performance:
- 64-bit addition/subtraction on a 32-bit computer: 2 instructions instead of one (ADD+ADC)
- 64-bit NOT, XOR, AND, OR, compare/test etc.: 2 instead of one
- multiply: depends on the machine, something like 7-8 times longer (4 multiplies + additions) to do a 64bit x 64bit multiply. But how often do you multiply 2 longs in C? Actually VERY rarely. The only exception I can think of now is RSA/DSA asymmetric key generation and processing.
- every operation on 32-bit or smaller values: same
- every branch: same
- external memory access: depends on the chipset/CPU, not the mode - same

Now do cc -O2 -S on some C program and look at the resulting assembly output to see how much performance could really be gained. About checksumming in ZFS: it could be much faster on a 64-bit arch, if only memory speed and latency weren't the limit. But they are, so any performance difference in that case would be rather marginal. (such as addition) on both platforms, and accidentally forget to use the macro in some place, then voila: untested code. do you have any other examples of code non-portability between amd64 and i386? You're also forgetting that this isn't high-level programming where you get to lean on a cross-platform libc or similar. This is literally interfacing with the hardware, and there are a whole boatload of subtle incompatibilities when handling stuff at that level. We talked about C code. If not, please be clearer, as I don't understand what you are talking about. And no, ZFS is not on the interface level; it doesn't talk directly to hardware.
Re: FreeBSD Software RAID
On Wed, May 27, 2009 at 11:52:33AM -0500, Kirk Strauser wrote: On Wednesday 27 May 2009 11:40:51 am Wojciech Puchar wrote: you talk about performance or if it work at all? Both, really. If they have to code up macros to support identical operations (such as addition) on both platforms, and accidentally forget to use the macro in some place, then voila: untested code. I haven't looked at the ZFS code but this sort of thing is exactly why all code I write uses int8_t, int16_t, int32_t, uint8_t, ... even when the first thing I have to do with a new compiler is to work out the proper typedefs to create them. -- David Kelly N4HHE, dke...@hiwaay.net Whom computers would destroy, they must first drive mad.
Re: FreeBSD Software RAID
I haven't looked at the ZFS code but this sort of thing is exactly why all code I write uses int8_t, int16_t, int32_t, uint8_t, ... even when the first thing I have to do with a new compiler is to work out the proper typedefs to create them. int, short and char are portable; only other things must be defined this way. int8_t, int16_t is just unneeded work. Anyway, it's just defines, having no effect on compiled code and its performance.
Re: FreeBSD Software RAID
On Wed, May 27, 2009 at 09:24:17PM +0200, Wojciech Puchar wrote: I haven't looked at the ZFS code but this sort of thing is exactly why all code I write uses int8_t, int16_t, int32_t, uint8_t, ... even when the first thing I have to do with a new compiler is to work out the proper typedefs to create them. int, short and char are portable, only other things must be defined this way. No, they are not portable. int is 16 bits on many systems I work with. char is sometimes signed, sometimes not. uint8_t is never signed and always unambiguous. int8_t int16_t is just unneeded work. anyway - it's just defines, having no effect on compiled code and it's performance. No, they are not just defines, I said typedef. Typedef is subject to stricter checking by the compiler. Packing and alignment in structs is a big portability problem. -- David Kelly N4HHE, dke...@hiwaay.net Whom computers would destroy, they must first drive mad.
Re: FreeBSD Software RAID
On Wed, May 27, 2009 at 09:24:17PM +0200, Wojciech Puchar wrote: I haven't looked at the ZFS code but this sort of thing is exactly why all code I write uses int8_t, int16_t, int32_t, uint8_t, ... even when the first thing I have to do with a new compiler is to work out the proper typedefs to create them. int, short and char are portable, Not completely, at least as far as C is concerned. I'd say that char and long are portable, but not short and int. According to K&R (and I don't think this has changed in later standards), a char is defined as one byte. Short, int and long can vary, but short and int must be at least 16 bits, and a long must be at least 32 bits. Additionally, a short may not be longer than an int, which may not be longer than a long. But the size of an int depends on the hardware platform and compiler data model. Roland -- R.F.Smith http://www.xs4all.nl/~rsmith/ [plain text _non-HTML_ PGP/GnuPG encrypted/signed email much appreciated] pgp: 1A2B 477F 9970 BA3C 2914 B7CE 1277 EFB0 C321 A725 (KeyID: C321A725)
Re: FreeBSD Software RAID
Wojciech Puchar wrote: you are right. you can't be happy of warm house without getting really cold some time :) that's why it's excellent that ZFS (and few other things) is included in FreeBSD but it's COMPLETELY optional. Well, I switched from the heater that doesn't work and is poorly documented (gvinum) to the one that does and is (zfs, albeit mostly documented by Sun), and so far I am warm :-) Once I'd increased kmem, at least. I did get a panic before that, but now I am shuffling data happily and slightly faster than gvinum did, and memory has levelled off at about 160MB for zfs. I'll be keeping my previous hardware RAID in one piece for a little while though, I think, just in case! (old Adaptec card with a 2TB limit on containers).
Re: FreeBSD Software RAID
Howard Jones wrote: [Snip...] I moved my AMANDA tapeless backup system to ZFS well over a year ago. It's got four 500GB SATA drives. At first, it would panic frequently sometime during the backup. The backups peak at ~400Mbps of network traffic. I adopted the following script to write out the memory usage during the backup, so I could better tune the system (sorry, I can't recall where I found this code snip):

#!/bin/sh
TEXT=`/sbin/kldstat | /usr/bin/awk 'BEGIN {print "16i 0";} NR>1 \
    {print toupper($4) "+"} END {print "p"}' | dc`
DATA=`/usr/bin/vmstat -m | sed -Ee \
    '1s/.*/0/;s/.* ([0-9]+)K.*/\1+/;$s/$/1024*p/' | dc`
TOTAL=$((DATA + TEXT))
DATE=`/bin/date | awk '{print $4}'`
/bin/echo $DATE `/bin/echo $TOTAL | \
    /usr/bin/awk '{print $1/1048576}'` >> /home/steve/mem.usage

Cronned every minute, I'd end up with a file like this:

19:16:01 500.205
19:17:02 485.699
19:18:01 474.305
19:19:01 473.265
19:20:01 471.874
19:21:02 471.94

...the next day, I'd be able to review this file to see what the memory usage was at the time of the panic/reboot. I found that:

vm.kmem_size=1536M
vm.kmem_size_max=1536M

made the system extremely stable, and since then:

amanda# uptime
9:01AM up 81 days, 17:06

I'm about to upgrade the system to -STABLE today...
Steve
Re: FreeBSD Software RAID
Sweet thanks for the info. Building one of those boxes is next in the list. On 5/26/09, Steve Bertrand st...@ibctech.ca wrote: [Snip...] -- Adam Vande More Systems Administrator Mobility Sales
Re: FreeBSD Software RAID
On Monday 25 May 2009 08:57:48 am Howard Jones wrote: I was half-considering switching to ZFS, but the most positive thing I could find written about that (as implemented on FreeBSD) is that it doesn't crash that much, so perhaps not. That was from a while ago though. Wojciech hates it for some reason, but I wouldn't let that deter you. I'm using ZFS on several production machines now and it's been beautifully solid the whole time. It has several huge advantages over UFS:
- Filesystem sizes are dynamic. They all grow and shrink inside the same pool, so you don't have to worry about making one too large or too small.
- You can sort of think of a ZFS filesystem as a directory with a set of configurable, inheritable attributes. Set your /usr/ports to use compression, and tell /home to keep two copies of everything for safety's sake.
- Snapshots aren't painful.
It's been 100% reliable on every amd64 machine I've put it on (but avoid it on x86!). 7-STABLE hasn't required any tuning since February or so. UFS and gstripe/gmirror/graid* are good, but ZFS has spoiled me and I won't be going back. -- Kirk Strauser
RE: FreeBSD Software RAID
Why avoid ZFS on x86? -Original Message- From: owner-freebsd-questi...@freebsd.org [mailto:owner-freebsd-questi...@freebsd.org] On Behalf Of Kirk Strauser Sent: Tuesday, May 26, 2009 12:39 PM To: freebsd-questions@freebsd.org Subject: Re: FreeBSD Software RAID [Snip...]
Re: FreeBSD Software RAID
On Tue, May 26, 2009 at 01:15:41PM -0500, Gary Gatten wrote: Why avoid ZFS on x86? That's because ZFS works best with huge amounts of (Kernel-)RAM, and i386 32-bit doesn't provide enough addressing space. Btw, I've tried ZFS on two FreeBSD/amd64 test machines with 8GB and 16GB of RAM, and it looks very promising. I wouldn't put it on production servers yet, but will eventually, once FreeBSD's ZFS integration matures and stabilizes. -cpghost. -- Cordula's Web. http://www.cordula.ws/
Re: FreeBSD Software RAID
Gary Gatten wrote: Why avoid ZFS on x86? Because in order to deal most effectively with disk arrays of 100s or 1000s of GB as are typical nowadays, ZFS requires more than the 4GB of addressable RAM[*] that the i386 arch can provide. You can make ZFS work on i386, but it requires very careful tuning and is not going to work brilliantly well for particularly large or high-throughput filesystems. Cheers, Matthew [*] Technically, it requires more than the typical 2GB of kernel memory that is the default on i386. KVM under 64bit architectures can be *much* bigger than that. -- Dr Matthew J Seaman MA, D.Phil. 7 Priory Courtyard Flat 3 PGP: http://www.infracaninophile.co.uk/pgpkey Ramsgate Kent, CT11 9PW
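The "very careful tuning" mentioned above usually meant /boot/loader.conf entries along these lines on FreeBSD 7-era i386. The values here are placeholders, not recommendations; they depend entirely on installed RAM and workload, and a custom kernel with a larger KVA_PAGES may also be needed:

```
# Illustrative i386 ZFS tuning only -- adjust to your RAM:
vm.kmem_size="1024M"
vm.kmem_size_max="1024M"
vfs.zfs.arc_max="512M"
```

On amd64 the address space is large enough that this class of tuning is mostly unnecessary.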
RE: FreeBSD Software RAID
What about with PAE and/or other extension schemes? If it's just memory requirements, can I assume if I don't have a $hit load of storage and billions of files it will work ok with 4GB of RAM? I guess I'm just making sure there isn't some bug that only exists on the i386 architecture? -Original Message- From: owner-freebsd-questi...@freebsd.org [mailto:owner-freebsd-questi...@freebsd.org] On Behalf Of Matthew Seaman Sent: Tuesday, May 26, 2009 1:38 PM To: Gary Gatten Cc: freebsd-questions@freebsd.org Subject: Re: FreeBSD Software RAID [Snip...]
Re: FreeBSD Software RAID
On Tuesday 26 May 2009 01:44:51 pm Gary Gatten wrote: What about with PAE and/or other extension schemes? If it's just memory requirements, can I assume if I don't have a $hit load of storage and billions of files it will work ok with 4GB of RAM? I guess I'm just making sure there isn't some bug that only exists on the i386 architecture? My understanding is that it's much more than just the memory addressing. ZFS is thoroughly 64-bit and uses 64-bit math pervasively. That means you have to emulate all those operations with 2 32-bit values, and on the register-starved x86 platform you end up with absolutely horrible performance. Furthermore, it's just not that well tested. Sun designed ZFS for 64-bit systems and I think 32-bit support was pretty much an afterthought. -- Kirk Strauser
Re: FreeBSD Software RAID
Wojciech hates it for some reason, but I wouldn't let that deter you. I'm Same == incredibly low performance. Of course, with an overmuscled CPU not much used for anything else, it may not be a problem.
RE: FreeBSD Software RAID
10-4, thanks! -Original Message- From: owner-freebsd-questi...@freebsd.org [mailto:owner-freebsd-questi...@freebsd.org] On Behalf Of Kirk Strauser Sent: Tuesday, May 26, 2009 2:00 PM To: freebsd-questions@freebsd.org Subject: Re: FreeBSD Software RAID [Snip...]
RE: FreeBSD Software RAID
- Filesystem sizes are dynamic. They all grow and shrink inside the same pool, so you don't have to worry about making one too large or too small. There are actually almost no filesystems, just one filesystem with many upper descriptors and separate per-filesystem quotas, just to make happy those who like to have a separate filesystem for many things. I always make one filesystem for /, unless it's a multiple-disk config and I want some data to be physically on a different drive, for example a highly loaded squid cache.
Re: FreeBSD Software RAID
You can make ZFS work on i386, but it requires very careful tuning and is not going to work brilliantly well for particularly large or high-throughput filesystems. You mean high transfer, like reading/writing huge files? Anyway, not faster than properly configured UFS + maybe gstripe/gmirror. For small files it's only fast when they fit in cache; same with UFS.
RE: FreeBSD Software RAID
> ZFS is thoroughly 64-bit and uses 64-bit math pervasively. That means you have to emulate all those operations with 2 32-bit values, and on the register-starved x86 platform you end up with absolutely horrible performance.

No, the difference isn't that great. It doesn't use much less CPU on the same processor under i386 versus amd64 kernels; I checked it. No precise measurements, but there is no more than a 20% performance difference, comparable to most programs run in i386 and amd64 mode. So no horrible performance on i386; or, if you prefer, always horrible performance no matter what CPU mode.

While the x86 architecture doesn't have many registers (EAX, EBX, ECX, EDX, ESI, EDI, EBP, ESP; 8 total, plus EIP), that doesn't affect programs all that much, as all modern x86 processors execute memory-operand instructions in a single cycle (often more than one of them per cycle). Still, the extra 8 registers and PC-relative addressing of amd64 are very useful; that is roughly where the 20% performance difference comes from.

If you mean the gain from 64-bit registers when calculating block checksums in ZFS, that is surely memory-bandwidth and latency limited, not CPU-power limited.
Re: FreeBSD Software RAID
Wojciech Puchar wrote:

>> You can make ZFS work on i386, but it requires very careful tuning and is not going to work brilliantly well for particularly large or high-throughput filesystems.
>
> you mean high transfer like reading/writing huge files. anyway not faster than properly configured UFS+maybe gstripe/gmirror.

I mean high-throughput, as in bytes-per-second. Whether that consists of a very large number of small files or fewer larger ones is pretty much immaterial.

> for small files it's only fast when they will fit in cache, same with UFS

For any files, it's a lot faster when they can be served out of cache. That's true for any filesystem. It's only when you get beyond the capacity of your caches that things get interesting.

I really don't have any hard data on ZFS performance relative to UFS + geom. However my feeling is that UFS will win at small scales, but that ZFS will close the gap as the scale increases, and that ZFS is the clear winner when you consider things other than direct performance -- manageability, resilience to hardware failure or disk errors, etc. Of course, small scale (ie. about the same size as a single drive) is hundreds of GB nowadays, and growing.

Cheers,
Matthew
--
Dr Matthew J Seaman MA, D.Phil.
7 Priory Courtyard, Flat 3
Ramsgate, Kent, CT11 9PW
PGP: http://www.infracaninophile.co.uk/pgpkey
FreeBSD Software RAID
Hi, Can anyone with experience of software RAID point me in the right direction please? I've used gmirror before with no trouble, but nothing fancier. I have a set of brand new 1TB drives, a Sil3124 SATA card and a FreeBSD 7.1-p4 system. I created a RAID 5 set with gvinum:

drive d0 device /dev/ad4s1a
drive d1 device /dev/ad6s1a
drive d2 device /dev/ad8s1a
drive d3 device /dev/ad10s1a
volume jumbo
 plex org raid5 256k
  sd drive d0
  sd drive d1
  sd drive d2
  sd drive d3

and it shows as up and happy. If I reboot, all the subdisks show as stale, and so the plex is down. It then seems to be doing a rebuild, although it wasn't before, and it would newfs, mount and accept data onto the new plex before the reboot. Is there any way to avoid having to wait while gvinum apparently calculates the parity on all those zeroes? Am I missing some step to 'liven up' the plex before the first reboot? (loader.conf has the correct line to load gvinum at boot.) I tried again, with 'gvinum start jumbo' before rebooting, and that made no difference.

Also, is the configuration file format actually documented anywhere? I got that example from someone's blog, but the gvinum manpage doesn't mention the format at all! It *does* have a few pages dedicated to things that don't work, which was handy... :-) The handbook is still talking about ccd and vinum, and mostly covers the complications of booting off such a device.

On the subject of documentation, I'm also assuming that this:

S jumbo.p0.s2 State: I 1% D: d2 Size: 931 GB

means it's 1% through initialising, because the states and the output of 'list' aren't described in the manual either.

I was half-considering switching to ZFS, but the most positive thing I could find written about that (as implemented on FreeBSD) is that it doesn't crash that much, so perhaps not. That was from a while ago though. Does anyone use software RAID5 (or RAIDZ) for data they care about?
Cheers, Howie
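For reference, Howard's configuration above would be driven roughly like this. This is an untested sketch: the config file path is arbitrary, the device names follow his example, and whether `gvinum start` actually avoids the post-reboot parity rebuild is exactly the open question in his message.

```shell
# Save the volume description (same as in the message above) to a file...
cat > /tmp/jumbo.conf <<'EOF'
drive d0 device /dev/ad4s1a
drive d1 device /dev/ad6s1a
drive d2 device /dev/ad8s1a
drive d3 device /dev/ad10s1a
volume jumbo
 plex org raid5 256k
  sd drive d0
  sd drive d1
  sd drive d2
  sd drive d3
EOF

# ...then feed it to gvinum and bring the plex up.
gvinum create /tmp/jumbo.conf
gvinum start jumbo

# Watch subdisk states; 'I n%' means still initializing.
gvinum list

# Load the module at boot so the volume comes back after reboot.
echo 'geom_vinum_load="YES"' >> /boot/loader.conf
```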
Re: FreeBSD Software RAID
Hi, I remember building a RAID5 on gvinum with 3 500GB hard drives some months ago, and it took horribly long to initialize the raid5 (several hours). It seems to be a one-time job, because since the raid finished its initialization the machine starts up/reboots within normal times. The documentation is a sore point, yes ;-) I got my basic know-how about gvinum and raid-1 from a blog also, and could read on with what I needed using the man pages. But it was hard...

Regards
--- Mr. Olli

On Mon, 2009-05-25 at 14:57 +0100, Howard Jones wrote:
> Hi, Can anyone with experience of software RAID point me in the right direction please? I've used gmirror before with no trouble, but nothing fancier. [...]
RE: FreeBSD Software RAID
-Original Message-
From: Howard Jones [mailto:howard.jo...@network-i.net]
Sent: 25 May 2009 14:58
To: freebsd-questions@freebsd.org
Subject: FreeBSD Software RAID

> Hi, Can anyone with experience of software RAID point me in the right direction please? I've used gmirror before with no trouble, but nothing fancier. [...] Does anyone use software RAID5 (or RAIDZ) for data they care about? Cheers, Howie

I have been running ZFS RAIDZ for 5 months on a 7.1 amd64 install, and I have to say my experience has been mostly good. Initially I had an issue with a PCI SATA card causing drives to disconnect, but after investing in a new motherboard with 6 SATA ports everything has been smooth. I did have to replace a disk last week as it was showing checksum, read and write errors. ZFS rebuilt 2TB of data in around 5 hours and did not lose any files at all.

Regards
Graeme
Re: FreeBSD Software RAID
On Mon, May 25, 2009 at 7:30 PM, Graeme Dargie a...@tangerine-army.co.uk wrote:

> I have been running ZFS RAIDZ for 5 months on a 7.1 amd64 install, I have to say my experience has been mostly good. [...] ZFS rebuilt 2TB of data in around 5hours and did not loose any files at all. Regards Graeme

I have been using ZFS for about half a year. I just have mirroring with 2 drives. Never had a problem with it. I would go with ZFS in the future too. And yes, the server is in production and it has all sorts of important data.

a great day,
v
--
network warrior since 2005
Re: FreeBSD Software RAID
I use gmirror, but I once tried gvinum and it didn't work well. I think simply use mirroring. ZFS will introduce 100 times more problems than it solves.
Re: FreeBSD Software RAID
On Mon, May 25, 2009 at 07:37:59PM +0300, Valentin Bud wrote:

> On Mon, May 25, 2009 at 7:30 PM, Graeme Dargie a...@tangerine-army.co.uk wrote:
>> Can anyone with experience of software RAID point me in the right direction please? I've used gmirror before with no trouble, but nothing fancier.
[76 lines trimmed]
> I have been using ZFS for about half an year. I just have mirroring with 2 drives. Never had a problem with it. I would go with ZFS in the future too. And yes the server is in production and it has all sort of important data.

I have looked at ZFS recently. It appears to be a memory hog; it needs about 1 GB, especially if large file transfers may occur over gigabit ethernet to/from other machines.

--
David Kelly N4HHE, dke...@hiwaay.net
Whom computers would destroy, they must first drive mad.
Re: FreeBSD Software RAID
> I have looked at ZFS recently. Appears to be a memory hog, needs about 1 GB especially if large file transfers may occur over gigabit ethernet

While it CAN be set up on a 256MB machine with a few flags in loader.conf (it should be autotuned anyway), it generally takes as much memory as is available, and LOTS of CPU power. On similar operations ZFS takes 10-20 TIMES more CPU than UFS, and it's NOT faster than properly configured UFS. Doesn't make any sense.
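For the record, the loader.conf tuning alluded to above looked something like this on 7.x-era systems. The tunable names are the real FreeBSD ones from that period, but the values are illustrative assumptions for a small machine, not recommendations.

```shell
# /boot/loader.conf -- illustrative ZFS memory tunables for a low-RAM box.

# Cap the ARC so ZFS does not consume all available RAM.
vfs.zfs.arc_max="256M"

# Enlarge the kernel memory map, commonly needed on i386 at the time.
vm.kmem_size="512M"
vm.kmem_size_max="512M"

# Prefetch was often disabled on low-memory and i386 systems.
vfs.zfs.prefetch_disable="1"
```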
RE: FreeBSD Software RAID
-Original Message-
From: Wojciech Puchar [mailto:woj...@wojtek.tensor.gdynia.pl]
Sent: 25 May 2009 18:09
To: FreeBSD-Questions@freebsd.org
Cc: Howard Jones; Graeme Dargie; Valentin Bud
Subject: Re: FreeBSD Software RAID

> while it CAN be set up on 256MB machine with a little big flags in loader.conf (should be autotuned anyway) - it generally takes as much memory as it's available, and LOTS of CPU power. with similar operations ZFS takes 10-20 TIMES more CPU than UFS and it's NOT faster than properly configured UFS. doesn't make any sense

OK, granted, this is a server sat in my house and it is not a mission-critical server in a large business; personally I can live with ZFS taking a bit longer versus the resilience. From just looking at my system at the moment, I have 1.8GB of free RAM from a total of 4GB.

Regards
Graeme
Re: FreeBSD Software RAID
On Mon, May 25, 2009 at 07:09:15PM +0200, Wojciech Puchar wrote:

>> I have looked at ZFS recently. Appears to be a memory hog, needs about 1 GB especially if large file transfers may occur over gigabit ethernet
>
> while it CAN be set up on 256MB machine with a little big flags in loader.conf (should be autotuned anyway) - it generally takes as much memory as it's available, and LOTS of CPU power. with similar operations ZFS takes 10-20 TIMES more CPU than UFS and it's NOT faster than properly configured UFS. doesn't make any sense

It makes a certain degree of sense. Sometimes things have to be done wrong for us to realize how good we had it before. How would we know how great FreeBSD is if we didn't have Linux? I had to look at ZFS to decide not to use it when I rebuild my storage this week due to a failing drive.

--
David Kelly N4HHE, dke...@hiwaay.net
Whom computers would destroy, they must first drive mad.
RE: FreeBSD Software RAID
> Ok granted this is a server sat in my house and it is not a mission critical server in a large business, personally I have can live with ZFS taking a bit longer vs resilience.

Simply gmirror and UFS give the same: much simpler, much faster. But of course lots of people like to make their life harder.
Re: FreeBSD Software RAID
> It makes a certain degree of sense. Sometimes things have to be done wrong for us to realize how good we had it before. How would we know how great FreeBSD is if we didn't have Linux? I had to look at ZFS to decide not to use it when I rebuild my storage this week due to a failing drive.

You are right. You can't be happy with a warm house without getting really cold some time :) That's why it's excellent that ZFS (and a few other things) is included in FreeBSD but is COMPLETELY optional.
RE: FreeBSD Software RAID
-Original Message-
From: Wojciech Puchar [mailto:woj...@wojtek.tensor.gdynia.pl]
Sent: 25 May 2009 18:54
To: Graeme Dargie
Cc: FreeBSD-Questions@freebsd.org; Howard Jones; Valentin Bud
Subject: RE: FreeBSD Software RAID

> simply gmirror and UFS gives the same. much simpler, much faster. but of course lots of people like to make their life harder

No, I am not making life harder at all... I have 6x500GB hard disks that I want in a good, solid RAID 5 type configuration. So you are somewhat wide of the mark in your assumptions.
RE: FreeBSD Software RAID
>> but of course lots of people like to make their life harder
>
> No I am not making life harder at all ... I have 6x500gb hard disks I want in a good solid raid 5 type configuration. So you are somewhat wide of the mark in your assumptions.

That's a reason. Just don't forget that RAID-Z is MUCH closer to RAID 3 than to RAID 5, so you get the random-access speed of a single drive, just a higher transfer rate.
Software RAID performance? RAID-Z or vinum and RAID5?
I'm looking into moving a workstation from Ubuntu 10 to FreeBSD 7.1 (both amd64) and I'm a bit worried about storage -- specifically moving from mdadm, which performs very well for me. Currently in Linux I use an mdadm RAID5 of 5 disks. After investigating FreeBSD storage options, RAID-Z sounds optimal[1]. I'd like to avoid levels 3 and 1 due to write bottlenecks[2], and level 0 for obvious reasons. Migrating from the existing mdadm setup is not an issue. I also do not plan to boot from the software array.

Various docs/postings seem to indicate that using ZFS/RAID-Z under FreeBSD will destroy my computer, run over my cat, and bail out the investment banking industry. Will it really perform that poorly on a Phenom and 8GB RAM? Significantly more resources than mdadm in Linux? How about compared to RAID 5 under vinum?

Thanks,
~Mike Manlief

1: The ability to read the array with the Linux FUSE ZFS implementation is very appealing; I don't care about performance for such inter-op scenarios. Copy-on-write sounds awesome too.
2: ...and even level 5, now that I've learned of RAID-Z.
Software RAID options for a media server
Hi Guys, As my dream of a hardware-based SCSI RAID root disk was so soundly dashed, I have been trying to figure out the most appropriate software implementation for a media server. Which software RAID is best for streaming media? The options I have are:

RAID-Z (raidz1): the redundancy is not my concern so much as performance over a network, but if the reduction in performance is negligible I may opt for it for fun.

or

RAID0 using gvinum: a far more complex option, so I'd like to get an idea of its suitability.

Otherwise, if there are any other avenues, please fill me in. I am at a loss on which way to go...

Thanks =^_^=
Re: Software RAID and Logical Volume in Linux versus FreeBSD
Matt Proud wrote:

> Hi all, I have used FreeBSD for a long time very casually but have never explored any of its software RAID or volume management features [...] I have a four disk software RAID setup in Linux. Everything is in RAID with the exception of swap. [...] What are your thoughts on this?

It's definitely doable; see gmirror(8) for RAID 1 and gvinum(8) for RAID 5. Note that there's no separate entity that performs as LVM does: this functionality is integrated into the system's behaviour. You can use any disk device or partition with any transformation (such as RAID, encryption, iSCSI, etc.) without special preparation, partitioning or labeling.
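A minimal gmirror(8) RAID 1 setup, corresponding to the Linux RAID 1 arrays described above, runs along these lines. This is a sketch: the label name gm0 and the device names ad4/ad6 are hypothetical, and the commands should be checked against gmirror(8) for your release.

```shell
# Label two disks as one mirror; gm0 is an arbitrary name.
gmirror label -v -b round-robin gm0 /dev/ad4 /dev/ad6

# Load the module at boot so /dev/mirror/gm0 exists early enough for root.
echo 'geom_mirror_load="YES"' >> /boot/loader.conf

# Use the mirror like a plain disk: partition, newfs, mount.
bsdlabel -w /dev/mirror/gm0
newfs /dev/mirror/gm0a

# Worst-case recovery: after replacing a failed disk, drop the dead
# component and insert the new one; gmirror rebuilds in the background.
gmirror forget gm0
gmirror insert gm0 /dev/ad6
```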
Software RAID and Logical Volume in Linux versus FreeBSD
Hi all, I have used FreeBSD for a long time very casually but have never explored any of its software RAID or volume management features, at least to a degree to which I feel comfortable with them. What I would like to know with this post is 1.) whether there exists the ability to set up an analogue of this in FreeBSD; 2.) how this would be done if it is possible; 3.) whether the capabilities of this in FreeBSD are sufficiently mature to manage it; and 4.) how worst-case recovery scenarios would go on FreeBSD.

I have a four-disk software RAID setup in Linux. Everything is in RAID with the exception of swap. Here's an approximation of my setup:

/dev/sd{a,b,c}1 is in a RAID 1 array used as /boot.
/dev/sd{a,b,c}2 is in a RAID 1 array used as /root.
/dev/sd{a,b,c}3 is used as swap, with each of equal priority.
/dev/sd{a,b,c}4 is in a RAID 5 array used as LVM.
/dev/sdd houses spare partitions for the complement supra.

LVM is henceforth broken up according to proper Linux-FHS rules. What are your thoughts on this?

Cheers,
Matt
Re: software raid 1 and recovery
On Fri, 2008-01-04 at 10:56 -0500, Brian A. Seklecki wrote:

> Google: nagios + seklecki + check_raid_gmirror
> Also check out sysutils/smartmontools/

Also, I recently updated the plugin code to r270 with some patches from Scott Swanson. You can see a small screenshot of it in action here: http://people.collaborativefusion.com/~seklecki/images/check_raid_gmirror_fbsd_nagiosWeb.png

~BAS
Re: software raid 1 and recovery
On Fri, 2008-01-04 at 15:32 +0000, Robin Becker wrote:

> I set this system up using Dru Lavigne's recipe, but I don't really understand [...]

Google: nagios + seklecki + check_raid_gmirror

Also check out sysutils/smartmontools/

Cheers!
~BAS
(Dealing with a fucked up gmirror raid 1 this morning)

--
Brian A. Seklecki [EMAIL PROTECTED]
Collaborative Fusion, Inc.
software raid 1 and recovery
I'm using software RAID 1 on a FreeBSD 6.1 system. This is a so-called cold-swap system, but I wonder how much it actually improves reliability. First off, what should I be doing to detect error conditions, and secondly, what happens if the machine refuses to boot? I set this system up using Dru Lavigne's recipe, but I don't really understand what happens if one of the drives starts to fail. I think there was some discussion about HD monitoring recently, but I can't seem to locate it.

--
Robin Becker
Best software raid 5 software?
Hello, I am about to switch to software RAID 5 for my personal server. I know hardware RAID 5 is better, but being a student I'd rather not invest in a RAID adapter now; plus my CPU is sitting at about 0.0% usage 24/7, so it needs some exercise :-) I've heard of several software-based RAID 5 projects, mainly of Vinum. Has anybody tested it, or any other ones? Which would you suggest?

Thank you,
Gabriel
Re: Best software raid 5 software?
On Wednesday 21 March 2007 03:03:53 am Gabriel Rossetti wrote:

> I am about to switch to software raid 5 for my personal server. [...] I've heard of several software-based raid-5 projects, mainly of Vinum, has anybody tested it or any other ones? Which would you suggest?

As far as I know, gvinum is the only software package in FreeBSD that can do RAID 5. The initial learning curve is a bit steep, but it should work fine once you get it configured.

I would also suggest that you look at graid3 which, not surprisingly, supports RAID 3. As you may or may not know, RAID 3 is very similar to RAID 5. You get S*(N-1) usable space, where S is your disk size and N is the number of disks. You need at least three disks but can use more. Both allow you to lose any single disk and not lose any data. The difference is that RAID 5 stripes the redundant parity data across all of the disks, while RAID 3 uses a single disk for all parity writes. As a result, RAID 5 potentially offers somewhat better read performance if disk I/O is the bottleneck (and assuming each disk has its own controller/IO path). In the case of software RAID on commodity (non-server) hardware, the difference should be nominal.

Other software RAID options include gmirror (recommended for RAID1), gstripe (recommended for RAID0, can be combined w/ gmirror), ataraid (supports RAID0, RAID1, JBOD, and combinations, on ata controllers only), and ccd (supports RAID0, RAID1, and JBOD; largely deprecated by gmirror and gstripe).

JN
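As a rough illustration of the graid3 and gmirror+gstripe options mentioned above: the commands below are a sketch only, with made-up label and device names; see graid3(8), gmirror(8) and gstripe(8). Note that graid3 requires 2^n + 1 components (3, 5, 9, ...).

```shell
# Three-disk RAID 3: two data disks plus one dedicated parity disk.
graid3 label -v gr0 da0 da1 da2
echo 'geom_raid3_load="YES"' >> /boot/loader.conf
newfs /dev/raid3/gr0

# RAID 0 over two RAID 1 pairs ("RAID 10") with gmirror + gstripe:
gmirror label m0 da0 da1
gmirror label m1 da2 da3
gstripe label -v st0 /dev/mirror/m0 /dev/mirror/m1
newfs /dev/stripe/st0
```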
Re: Best software raid 5 software?
On 2007/03/21 6:33, John Nielsen seems to have typed:

> As far as I know, gvinum is the only software package in FreeBSD that can do RAID 5. The initial learning curve is a bit steep, but it should work fine once you get it configured.

There is also a geom_raid5 class, which you can find out about by searching the freebsd-geom mailing list: http://www.freebsd.org/cgi/search.cgi?words=graid5&max=250&source=freebsd-geom

I don't think it's quite ready for prime time though. Vinum did not make the transition to gvinum as cleanly as could be desired, so if you set up a gvinum array, I would recommend keeping good backups and testing it pretty harshly to make sure it will cleanly survive a drive failure. Gvinum has been getting significantly better with time, but as with anything, before putting it into a production environment, test it thoroughly (and keep good backups; did I mention that keeping good backups is important? Because good backups are important...)
Re: Software RAID guidance
Robert Fitzpatrick wrote: I have an old NT4 PIII here that has a pair of Adaptec Array1000 Family controllers with 2 pairs of identical drives on one of them (2 IBM 9GB and 2 Seagate 35GB). From what I googled, *nix does not support the controller, so I have removed the RAID arrays and loaded FreeBSD 6.0 onto the two IBM drives. Now I want to mirror the other two for data, and I am looking for guidance as to whether this is suited for software RAID and, if so, CCD or vinum. I am contemplating vinum because the handbook mentions CCD for when cost is the important factor, and for me reliability is what matters. What would someone suggest? If vinum, one thing I don't quite understand is: do I create the partitions to be used in the device? There doesn't seem to be a man page for gvinum, and the link to it in handbook section 19.6.1 is broken. Hi Robert, I use gmirror(8) to set up RAID 1 volumes. I've used it successfully with IDE, SCSI and SATA drives. It is very simple to set up and administration is easy. If you only need RAID 1, then you should try it out. Should you need RAID 5 and/or a fully fledged volume manager, then vinum is the way. I also wrote a document on gmirror(8) setup. If you're interested, I can share it with you. David FYI: man page URLs gmirror(8) http://www.freebsd.org/cgi/man.cgi?query=gmirror&apropos=0&sektion=0&manpath=FreeBSD+6.0-RELEASE+and+Ports&format=html vinum(4) http://www.freebsd.org/cgi/man.cgi?query=vinum&apropos=0&sektion=0&manpath=FreeBSD+6.0-RELEASE+and+Ports&format=html -- David Robillard UNIX systems administrator, CISSP Montreal: +1 514 966 0122
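For readers following along, a data-only gmirror(8) RAID 1 setup like David describes can be sketched roughly as follows. This is a sketch, not David's document: it assumes FreeBSD 6.x and that the two Seagate drives show up as da2 and da3 (adjust device names to your hardware), and it puts a file system directly on the mirror rather than slicing it first.

```sh
kldload geom_mirror                          # load the mirror class
gmirror label -v -b round-robin gm0 da2 da3  # create a two-disk RAID 1 named gm0
echo 'geom_mirror_load="YES"' >> /boot/loader.conf  # make it come back at boot
newfs -U /dev/mirror/gm0                     # new file system on the mirror device
mount /dev/mirror/gm0 /data
gmirror status gm0                           # both components should show ACTIVE
```

Mirroring the boot disk is more involved (the metadata lives in the last sector of each component); the rse mirror document linked elsewhere in this thread covers that case.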
Re: Software RAID guidance
Robert Fitzpatrick wrote: I have an old NT4 PIII here that has a pair of Adaptec Array1000 Family controllers with 2 pairs of identical drives on one of them (2 IBM 9GB and 2 Seagate 35GB). From what I googled, *nix does not support the controller, so I have removed the RAID arrays and loaded FreeBSD 6.0 onto the two IBM drives. Now I want to mirror the other two for data, and I am looking for guidance as to whether this is suited for software RAID and, if so, CCD or vinum. I am contemplating vinum because the handbook mentions CCD for when cost is the important factor, and for me reliability is what matters. What would someone suggest? If vinum, one thing I don't quite understand is: do I create the partitions to be used in the device? There doesn't seem to be a man page for gvinum, and the link to it in handbook section 19.6.1 is broken. Just to give you another option: you can set up RAID 1 using atacontrol to make two disks into a RAID. Plenty of posts in the archive with more info. As an outsider (i.e. without any RAID) this option always seemed the simplest. No doubt others with experience can tell you the relative merits of atacontrol vs gmirror. --Alex
Re: Software RAID guidance
On Fri, 2006-05-05 at 00:37 +0100, Alex Zbyslaw wrote: Just to give you another option: you can set up RAID 1 using atacontrol to make two disks into a RAID. Yes, I saw mention of atacontrol somewhere in the handbook, but the drives are all SCSI. It seems atacontrol only addresses IDE? Excuse my ignorance on the subject of ATA vs SCSI :/

files# atacontrol list
ATA channel 0:
    Master:  acd0 <CD-ROM 50X/10> ATA/ATAPI revision 0
    Slave:   no device present
ATA channel 1:
    Master:  no device present
    Slave:   no device present

-- Robert
Re: Software RAID guidance
I have been unable to get vinum to work under 6.0. I'm no expert though. Vinum became gvinum in 6.0 and is implemented using GEOM. Recently the gvinum man page has been updated, and it is available in 6.1-RC1. I think if you want mirroring only you should consult the geom pages. It seems as though geom is the way of the future, but it does not currently support RAID 5, which is what I was looking for. Somewhere out there is a pretty comprehensive set of iozone benchmarks comparing Linux and BSD software RAID. Ah, found it: http://www25.big.jp/~jam/filesystem/old/ This might give you some ideas. On Thu, 4 May 2006, Robert Fitzpatrick wrote: I have an old NT4 PIII here that has a pair of Adaptec Array1000 Family controllers with 2 pairs of identical drives on one of them (2 IBM 9GB and 2 Seagate 35GB). From what I googled, *nix does not support the controller, so I have removed the RAID arrays and loaded FreeBSD 6.0 onto the two IBM drives. Now I want to mirror the other two for data, and I am looking for guidance as to whether this is suited for software RAID and, if so, CCD or vinum. I am contemplating vinum because the handbook mentions CCD for when cost is the important factor, and for me reliability is what matters. What would someone suggest? If vinum, one thing I don't quite understand is: do I create the partitions to be used in the device? There doesn't seem to be a man page for gvinum, and the link to it in handbook section 19.6.1 is broken. Thanks in advance. -- Robert
Re: Software RAID guidance
Robert Fitzpatrick wrote: I have an old NT4 PIII here that has a pair of Adaptec Array1000 Family controllers with 2 pairs of identical drives on one of them (2 IBM 9GB and 2 Seagate 35GB). From what I googled, *nix does not support the controller, so I have removed the RAID arrays and loaded FreeBSD 6.0 onto the two IBM drives. Now I want to mirror the other two for data, and I am looking for guidance as to whether this is suited for software RAID and, if so, CCD or vinum. I am contemplating vinum because the handbook mentions CCD for when cost is the important factor, and for me reliability is what matters. What would someone suggest? If vinum, one thing I don't quite understand is: do I create the partitions to be used in the device? There doesn't seem to be a man page for gvinum, and the link to it in handbook section 19.6.1 is broken. Thanks in advance. Unlike what some seem to be claiming, I *have* been able to use gvinum on 6.X. The documentation for vinum was helpful; you just put a g in front of the commands. That said, a few things in vinum aren't carried over into gvinum, but it's basically the same stuff (thanks Lukas, thanks Grog, etc.). I did have some system instability during my trial, though; I've put it down to a bad IDE HDD (because it gave issues when not part of a gvinum plex as well), but I didn't give it a serious amount of testing. As for the handbook, you seem to be correct. You might file a doc PR; they'd probably appreciate having the opportunity to fix this. However, I do find gvinum(8) on my box. Kevin Kinsey
Re: Software RAID guidance
On Thu, 2006-05-04 at 19:59 -0400, Ian Jefferson wrote: I think if you want mirroring only you should consult the geom pages. Great, I believe I have this set up right. I'm not sure what the fdisk issue may be with the message 'fdisk: Geom not found', but all looks to have been set up properly. Now, just to have a clear understanding, what is the purpose of /dev/mirror/datas1c, as it does not seem to be used in creating the mirror?

files# geom mirror label -v -s 35000 data /dev/da2 /dev/da3
Metadata value stored on /dev/da2.
Metadata value stored on /dev/da3.
Done.
files# gmirror load
files# fdisk -vBI /dev/mirror/data
*** Working on device /dev/mirror/data ***
parameters extracted from in-core disklabel are:
cylinders=4462 heads=255 sectors/track=63 (16065 blks/cyl)
Figures below won't work with BIOS for partitions not in cyl 1
parameters to be used for BIOS calculations are:
cylinders=4462 heads=255 sectors/track=63 (16065 blks/cyl)
Information from DOS bootblock is:
1: sysid 165 (0xa5), (FreeBSD/NetBSD/386BSD)
   start 63, size 71681967 (35000 Meg), flag 80 (active)
   beg: cyl 0/ head 1/ sector 1; end: cyl 365/ head 254/ sector 63
2: UNUSED
3: UNUSED
4: UNUSED
fdisk: Geom not found
files# ls -l /dev/mirror/
total 0
crw-r-----  1 root  operator  0, 127 May  4 20:48 data
crw-r-----  1 root  operator  0, 110 May  4 20:43 datas1
crw-r-----  1 root  operator  0, 117 May  4 20:43 datas1a
crw-r-----  1 root  operator  0, 118 May  4 20:43 datas1c
files# bsdlabel -wB /dev/mirror/datas1
files# newfs -U /dev/mirror/datas1a
/dev/mirror/datas1a: 35001.0MB (71681948 sectors) block size 16384, fragment size 2048
        using 191 cylinder groups of 183.77MB, 11761 blks, 23552 inodes.
        with soft updates
super-block backups (for fsck -b #) at:
 160, 376512, 752864, 1129216, [...], 71130688, 71507040
files# mount /dev/mirror/datas1a /data
files# df -h
Filesystem             Size    Used   Avail Capacity  Mounted on
/dev/da0s1a            3.8G     55M    3.4G     2%    /
devfs                  1.0K    1.0K      0B   100%    /dev
/dev/da1s1d            8.3G    1.0G    6.6G    13%    /usr
/dev/da0s1d            4.0G    4.2M    3.7G     0%    /var
/dev/mirror/datas1a     33G    4.0K     30G     0%    /data

-- Robert
Re: Software RAID guidance
You can try gmirror(8) Ref: 1. http://people.freebsd.org/~rse/mirror/ 2. http://www.onlamp.com/lpt/a/6309 On Thu, May 04, 2006 at 07:24:15PM -0400, Robert Fitzpatrick wrote: I have an old NT4 PIII here that has a pair Adaptec Array1000 Family controllers with 2 pairs of identical drives on one of them (2 IBM 9GB and 2 Seagate 35GB). From what I googled, *nix does not support the controller, so I have removed the RAID arrays and loaded FreeBSD 6.0 onto the two IBM drives. Now, I wanted to mirror the other two for data and looking for guidance as to whether it is first of all suited for software RAID and if so, CCD or vinum. I am contemplating vinum because the handbook mentions CCD is when cost is the important factor and for me, is reliability. What would someone suggest? If vinum, one thing I don't quite understand is do I create the partitions to be used in the device? There doesn't seem to be a man for gvinum and the link to it in the handbook section 19.6.1 is broken. Thanks in advance. -- Cheng-Lung Sung - clsung@
Re: Software RAID guidance
IMHO, fdisk is unnecessary. I got my two brand new HDs ad[46] mirrored w/o fdisk. On Thu, May 04, 2006 at 09:15:39PM -0400, Robert Fitzpatrick wrote: On Thu, 2006-05-04 at 19:59 -0400, Ian Jefferson wrote: I think if you want mirroring only you should consult the geom pages. Great, I believe I have this setup right. I'm not sure what the fdisk issue may be with the message 'fdisk: Geom not found', but all looks to have setup properly. Now, just to have a clear understanding, what is the purpose of /dev/mirror/datas1c as it is not used in creating the mirror it seems? Have you tried to mount it? -- Cheng-Lung Sung - clsung@
Re: Software RAID guidance
On Fri, 2006-05-05 at 09:20 +0800, Cheng-Lung Sung wrote: Great, I believe I have this set up right. I'm not sure what the fdisk issue may be with the message 'fdisk: Geom not found', but all looks to have been set up properly. Now, just to have a clear understanding, what is the purpose of /dev/mirror/datas1c, as it does not seem to be used in creating the mirror? Have you tried to mount it?

files# mount /dev/mirror/datas1c
mount: /dev/mirror/datas1c: unknown special file or file system

-- Robert
Re: Software RAID guidance
Hi, newfs first? In my experiment, only one mirror/gm0s1 exists (no s1a, s1c...). On Thu, May 04, 2006 at 09:40:17PM -0400, Robert Fitzpatrick wrote: On Fri, 2006-05-05 at 09:20 +0800, Cheng-Lung Sung wrote: Great, I believe I have this setup right. I'm not sure what the fdisk issue may be with the message 'fdisk: Geom not found', but all looks to have setup properly. Now, just to have a clear understanding, what is the purpose of /dev/mirror/datas1c as it is not used in creating the mirror it seems? Have you tried to mount it? files# mount /dev/mirror/datas1c mount: /dev/mirror/datas1c: unknown special file or file system -- Cheng-Lung Sung - clsung@
Re: Software RAID guidance
On Fri, 2006-05-05 at 09:16 +0800, Cheng-Lung Sung wrote: 1. http://people.freebsd.org/~rse/mirror/ Great doc, thanks! I was able to get the first part of the 2nd approach booting from the gm0 mirror, but after booting and trying to add my da0 to the mirror, it does not recognize the device... I tried re-slicing the drive in sysinstall with no help...

files# gmirror configure -a gm0s1
No such device: gm0s1.
files# df -h
Filesystem             Size    Used   Avail Capacity  Mounted on
/dev/mirror/gm0s1a     8.3G    1.2G    6.4G    16%    /
devfs                  1.0K    1.0K      0B   100%    /dev
/dev/mirror/datas1a     33G    4.0K     30G     0%    /data
files# ls -lah /dev/mirror/
total 1
dr-xr-xr-x  2 root  wheel  512 Dec 31  1969 .
dr-xr-xr-x  5 root  wheel  512 Dec 31  1969 ..
crw-r-----  1 root  operator  0, 116 May  4 23:43 data
crw-r-----  1 root  operator  0, 118 May  4 23:43 datas1
crw-r-----  1 root  operator  0, 121 May  4 19:43 datas1a
crw-r-----  1 root  operator  0, 122 May  4 23:43 datas1c
crw-r-----  1 root  operator  0, 109 May  4 23:43 gm0
crw-r-----  1 root  operator  0, 117 May  4 23:43 gm0s1
crw-r-----  1 root  operator  0, 119 May  4 19:43 gm0s1a
crw-r-----  1 root  operator  0, 120 May  4 23:43 gm0s1c

Again, not sure where mine is getting the s1c devices... while the data mirror was set up with another doc, the gm0 mirror set up flawlessly following your 2nd approach. -- Robert
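One note for readers hitting the same error: gmirror subcommands take the mirror's name (here gm0), not a slice name like gm0s1, which may be why the command above reports "No such device". The usual disk-replacement steps from gmirror(8) look roughly like this (a sketch; it assumes da0 is the blank disk being added back):

```sh
gmirror forget gm0      # discard metadata for components that are gone
gmirror insert gm0 da0  # add the blank disk; a rebuild starts automatically
gmirror status gm0      # watch the synchronization progress
```

The inserted disk must be at least as large as the existing component, since the mirror metadata occupies the provider's last sector.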
Booting into an installed software raid system
Hello FreeBSD users, I am happily installing FreeBSD systems (remotely), however there is one thing which I would like to get solved; hopefully someone can help me out. Anyway, here the story goes: I have set up a sample FreeBSD system (software RAID 1) on the devices /dev/ad2 and /dev/ad4. I reboot the system and want to leave the FreeBSD CD in the CD-ROM drive for later use, but boot from the software-mirrored FreeBSD installation (on the HDD). How do I do this? 0) I could use the KVM and set the motherboard's BIOS boot option, but let's ignore that for a moment ;-) 1) In the bootloader menu I choose option 6, loading a command prompt. 2) I could probably use the Fixit option on the install CD. So for now, let's explore 1) a bit more. I load the necessary kernel and modules, e.g. load geom_mirror. lsdev will show me as devices cd0, disk1s1xxx, disk2s1xxx. Note that the real device names used should actually be e.g. /dev/ad2s1xxx and /dev/ad4s1xxx. How can I boot from here into e.g. /dev/gm0s1xxx? (ad2 and ad4 are defined as gm0 in the original setup.) Any suggestion welcome. Best regards Nils Valentin - End of forwarded message -
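A rough sketch of the loader-prompt approach in 1), assuming the mirror is named gm0 and the root file system lives on gm0s1a (hypothetical labels; substitute your own):

```
OK load /boot/kernel/kernel
OK load geom_mirror
OK set vfs.root.mountfrom=ufs:/dev/mirror/gm0s1a
OK boot
```

Once the geom_mirror module is loaded, the mirror tastes its components and /dev/mirror/gm0s1a appears, so the vfs.root.mountfrom variable can point the kernel's root mount at the mirror device rather than at ad2 or ad4 directly.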
Software RAID-1 FreeBSD 5.4
Hi, This is on FreeBSD 5.4, latest stable snapshot from January. I've followed the instructions at http://www.onlamp.com/pub/a/bsd/2005/11/10/FreeBSD_Basics.html for creating software RAID, which appears to have been successful: the RAID was created and synced, and all was good through a couple of reboots. So I wanted to test it out. I unplugged one of the drives and rebooted; however, I received the error:

ffs_mountroot: can't find rootvp
Root mount failed: 6
mountroot>

It doesn't matter which disk I unplug, it gives the same result. I've attempted to remount with ufs:/dev/mirror/gm0s1a, ufs:/dev/ad6s1a, and ufs:/dev/ad4s1a; no luck. So I looked over http://www.freebsd.org/doc/en_US.ISO8859-1/books/handbook/geom-mirror.html and added 'options GEOM_MIRROR' to the kernel, then recompiled, installed and restarted; the machine would hang completely just when loading the AD drives. Are the articles missing any steps? Any help is appreciated. Thx, Tamouh Hakmi
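As an aside for anyone reproducing this: the handbook page mentioned above also supports loading GEOM mirror support as a boot-time module instead of compiling 'options GEOM_MIRROR' into the kernel, which avoids the kernel rebuild entirely. A minimal fragment:

```
# /boot/loader.conf
geom_mirror_load="YES"
```

Either the module or the kernel option must be in place before the root mount, or /dev/mirror/gm0s1a will not exist when the kernel looks for its root file system.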
Software RAID 5
Hello there, I have a Dell PowerEdge 2400 system with 4 x 18 GB SCSI drives. I would like to set up software RAID 5 using FreeBSD 5.4. Any suggestions on how to go about doing it? I have read so many articles, it makes my head hurt. There are many options for mirroring, but some are better than others, and some are out of date. Could someone please tell me what is best for FreeBSD 5.4, and if you have it, a how-to would be nice :-). Thanks in advance, Michael.
Atacontrol software RAID
Hi all, I've been trying to build a 2-disk mirror with atacontrol on a standard IDE controller (testing in VMware), from an already running 4.11 system. ad0 has the system, ad3 is the new drive. I thought I should be able to mimic the gmirror trick of making a 1-disk degraded mirror on ad3, move everything over to the RAID, wipe ad0 and add it to the mirror.

# atacontrol create RAID1 ad3

fails; it needs at least 2 disks... so

# atacontrol create RAID1 ad3 ad3

did the trick ;) After a reboot it became:

ar0: WARNING - mirror lost
ar0: 6143MB <ATA RAID1 array> [783/255/63] status: DEGRADED subdisks:
  0 DOWN
  1 READY
ad3: 6143MB <VMware Virtual IDE Hard Drive> [12483/16/63] at ata1-slave UDMA33

and I could use ar0 as I'd expect (could boot off it, moved all the data from ad0 to ar0, mounted all partitions from it, etc). The problem happened when I tried to add ad0 to the mirror... I couldn't find a way to do it. atacontrol addspare is not available on 4.x systems... Any suggestions on how to get ad0 to be part of ar0? Upgrading to 5.x may be an option, but then I'd be using gmirror anyway :-) thanks in advance, Beto
Recommended Software RAID-5 on dual-amd64
Hi all, I have a dual Opteron box built with http://tyan.com/products/html/gt24b2891_spec.html , using 4 identical SATA drives. I plan to use FreeBSD 6 (installing beta2, cvsup to head). I will use gmirror to RAID-1 the boot partition, and RAID-5 for the remainder. I was wondering which is the best option for software RAID 5: gvinum? geom_raid5 (is there such a thing yet?) I'd love to use a GEOM-only solution if possible; is gvinum fully GEOM-compatible? thanks in advance, Beto
Re: Software RAID-1 - Swap partition
John Oxley wrote: Hi, I followed http://people.freebsd.org/~rse/mirror/ to create a software RAID mirror. I have two 75G drives in the machine. I allocated 74G to the filesystem on each drive and 1G to swap. When I blanked ad1 and created ad1s1, I didn't notice that it had taken up the whole of the drive. Can I shrink the mirror partition and have two swap partitions, or if that is not possible, how would I go about creating a mirrored swap partition? Your swap partition ought to be mirrored already. From a similar system:

0-11:01 [EMAIL PROTECTED] ~ swapinfo
Device               1K-blocks     Used    Avail Capacity
/dev/mirror/gm0s1b     4167488        0  4167488     0%
0-11:01 [EMAIL PROTECTED] ~ grep swap /etc/fstab
/dev/mirror/gm0s1b    none    swap    sw    0    0

-danny -- http://dannyman.toldme.com/
Software RAID-1 - Swap partition
Hi, I followed http://people.freebsd.org/~rse/mirror/ to create a software RAID mirror. I have two 75G drives in the machine. I allocated 74G to the filesystem on each drive and 1G to swap. When I blanked ad1 and created ad1s1, I didn't notice that it had taken up the whole of the drive. Can I shrink the mirror partition and have two swap partitions, or if that is not possible, how would I go about creating a mirrored swap partition?

# bsdlabel /dev/mirror/gm0s1
# /dev/mirror/gm0s1:
8 partitions:
#          size    offset    fstype   [fsize bsize bps/cpg]
  a:     524288         0    4.2BSD     2048 16384 32776
  c:  156296322         0    unused        0     0
  d:    1048576    524288    4.2BSD     2048 16384     8
  e:   20971520   1572864    4.2BSD     2048 16384 28552
  f:  132643453  22544384    4.2BSD     2048 16384 28552

# gmirror list
Geom name: gm0
State: COMPLETE
Components: 2
Balance: round-robin
Slice: 4096
Flags: NONE
GenID: 0
SyncID: 1
ID: 2442074130
Providers:
1. Name: mirror/gm0
   Mediasize: 80026361344 (75G)
   Sectorsize: 512
   Mode: r4w4e2
Consumers:
1. Name: ad0
   Mediasize: 80026361856 (75G)
   Sectorsize: 512
   Mode: r1w1e1
   State: ACTIVE
   Priority: 0
   Flags: NONE
   GenID: 0
   SyncID: 1
   ID: 707256281
2. Name: ad1
   Mediasize: 80026361856 (75G)
   Sectorsize: 512
   Mode: r1w1e1
   State: ACTIVE
   Priority: 0
   Flags: NONE
   GenID: 0
   SyncID: 1
   ID: 655505428

-- John Oxley Systems Administrator Yo!Africa E-Mail: john at yoafrica.com Tel: +263 4 858404
RE: Software RAID-1 on FreeBSD 5.4
One last comment for you on software mirroring. While I am not trying to disparage the various efforts, software mirroring provides limited redundancy unless the hard drives are on separate buses. If you do the common thing of putting 2 IDE drives as the master and slave on the primary IDE controller, then all it takes is a bus error on the IDE bus and you have scrambled the data on both hard drives. The same problem exists if you set up a ccd or vinum or gmirror or whatever on a SCSI controller where the disks are all on the same SCSI bus. The same problem also exists on SATA controllers, and on cheaper hardware RAID cards where the disks are on a single IDE bus. This is why hardware RAID controllers are quite often better, as the better ones contain multiple interfaces. Ted -----Original Message----- From: [EMAIL PROTECTED] [mailto:[EMAIL PROTECTED] Behalf Of ptitoliv Sent: Wednesday, June 29, 2005 12:56 PM To: freebsd-questions@freebsd.org Subject: Re: Software RAID-1 on FreeBSD 5.4 Hello again, Thank you for all your answers! I am going to look at gmirror and ccd. But I have a last question. My disks are different. One is a Maxtor detected with a 111 GB capacity and the other is a Seagate detected with a 114 GB capacity. Will I have problems trying to use RAID with this configuration? Best regards, ptitoliv
Software RAID-1 on FreeBSD 5.4
Hello everybody, I have two 120 GB drives installed on my FreeBSD 5.4 box. I want to create a software RAID 1 setup with these two disks. I wanted to use vinum, but lots of people say that vinum is very unstable on FreeBSD 5.4. So I am asking you: what is the best solution for RAID 1 on FreeBSD 5.4? Thank you for your answers. Best regards, ptitoliv
Re: Software RAID-1 on FreeBSD 5.4
On Wednesday, 29 June 2005 21:28, ptitoliv wrote: Hello everybody, I have two 120 GB drives installed on my FreeBSD 5.4 box. I want to create a software RAID 1 setup with these two disks. I wanted to use vinum, but lots of people say that vinum is very unstable on FreeBSD 5.4. So I am asking you what is the best solution for RAID 1 on FreeBSD 5.4. I can't confirm that, but I can recommend gmirror. -Harry
Re: Software RAID-1 on FreeBSD 5.4
On Wed, 29 Jun 2005 21:28:22 +0200, ptitoliv [EMAIL PROTECTED] wrote: I have two 120 GB drives installed on my FreeBSD 5.4 box. I want to create a software RAID 1 setup with these two disks. I wanted to use vinum, but lots of people say that vinum is very unstable on FreeBSD 5.4. So I am asking you what is the best solution for RAID 1 on FreeBSD 5.4. You might want to look at this: http://people.freebsd.org/~rse/mirror/ HTH
Re: Software RAID-1 on FreeBSD 5.4
I have had a lot of success with ccd. It's pretty simple to configure. Basically, you just add the kernel device, label the disks, and run ccdconfig ccd0 <stripe size> <flags> /dev/<drive 1> /dev/<drive 2>. Then newfs ccd0 and mount it where you want it. Casey Hello everybody, I have 2 120 Go Drives installed on my FreeBSD 5.4 Box. I want to create with these 2 disks a software RAID-1 solution. I wanted to use vinum but lots of people say that vinum is very unstable on FreeBSD 5.4. So I am asking you what is the best solution to make RAID-1 on FreeBSD 5.4. Thank you for your answers Best Regards, ptitoliv