On Tue, Mar 06, 2007 at 03:11:51AM +0000, Martin wrote:
> On Mon, 2007-03-05 at 20:24 +0100, Herbert Poetzl wrote:
> > On Mon, Mar 05, 2007 at 07:43:51AM -0500, Chuck wrote:
> > > On Monday 05 March 2007 06:15, Herbert Poetzl wrote:
> <snip>
> > > controllers. i would rather see the boss change the case to a 2u and
> > > put a real hardware raid controller in on a 2 card riser but...... it
> > > is not my call.. (and of course we find all this out after the machine
> > > has been in our production environment for 5 months)
> >
> > in most cases the hardware raid controller is not worth
> > the money, as a software raid usually gives a much better
> > performance with less latency and more control for the
> > operating system ...
> >
> > nevertheless, hw-raid can have some advantages if it is
> > done properly, e.g. auto reconstruction without affecting
> > the system performance and/or battery buffering in power
> > failure cases ...
>
> I used to like the idea of hardware RAID, but two things put me off:
>
> 1. When you pull the power on a system, apparently the memory goes
> first but the I/O subsystem keeps functioning for just a bit longer -
> often writing junk data. This is apparently one of the things the
> high-end UNIX vendors used to spend money trying to get right. In
> short, you *need* a battery-backed hardware RAID controller if you are
> serious about avoiding data corruption, and those are more expensive.
> It also makes any form of RAID device that requires drivers to run
> (i.e. the soft-RAID devices on many modern machines) a little
> questionable to my mind.
>
> 2. Data corruption is serious because none of the on-disk formats the
> hardware RAID systems use are public. I am under the impression that
> in many cases even data recovery specialists do not have access to
> them, so you are completely at the mercy of the tools the vendor gives
> you. If they are buggy, or you get into a situation (see above) that
> they can't recover from, it's game over.
>
> Thus, I would *strongly* advise that unless you /need/ the performance
> a hardware RAID controller gives (and can then afford the UPS and the
note that the 'performance' in many cases is a myth,
for several reasons, mainly because:

 - hardware raid has 2-256MB of cache, software raid has 1-4GB
 - hardware raid has a single channel to the host, while a
   properly set up soft raid can burst over N channels
   simultaneously (and will do so, e.g. for separate I/O threads)
 - the elevator in the kernel, vs. limited TCQ

best,
Herbert

> high level service contract with the vendor, etc.), use the Linux
> software RAID.
> If it all goes wrong you can always read the source and piece things
> together manually. I've had to do this. It's not fun, but it is
> possible. For me it made the difference between having to tell my boss
> that the fileserver would be down for a while and having to tell my
> boss that we would have to revert to last month's backup.
>
> HTH
>
> Cheers,
>  - Martin
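
For anyone wanting to try the Linux software RAID recommended above, here
is a minimal sketch using mdadm; the device names (/dev/md0, /dev/sdb1,
/dev/sdc1) and the RAID-1 level are illustrative assumptions, not anything
from the thread:

  # create a two-disk RAID-1 (mirror) array from two partitions
  # (placeholder device names - adjust to your hardware)
  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

  # watch the initial resync and check the array state
  cat /proc/mdstat
  mdadm --detail /dev/md0

  # the md superblock format is public; --examine dumps what is stored
  # on each member disk, which is what makes the manual recovery Martin
  # describes possible (with hardware raid you depend on vendor tools)
  mdadm --examine /dev/sdb1

  # reassemble the array after a crash or on another machine
  mdadm --assemble /dev/md0 /dev/sdb1 /dev/sdc1
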
