Re: raid-2.2.17-A0 cleanup for LVM
On Aug 2, 7:12pm, Andrea Arcangeli wrote:
} Subject: raid-2.2.17-A0 cleanup for LVM
}
} This patch cleans up the new raid code so that we have a chance that
} LVM on top of RAID will keep working.  It's untested at the moment.
}
} ftp://ftp.*.kernel.org/pub/linux/kernel/people/andrea/patches/v2.2/2.2.17pre13/raid-2.2.17-A0/raid-lvm-cleanup-1

What are people using for LVM code on 2.2.1[67]?  The only thing that I
have found reliable was a port of the 8i stuff that a gentleman created,
which he said he was submitting to Heinz for approval.  I had to couple
this with the 2/10/1999 toolset in order to get a complete system.

I have been using this in a limited production environment, but
considering the pathway to it I have been reluctant to really put the
system under stress.

The LVM code looks very promising, well done and essential to those of
us in production environments.  There doesn't appear to be a clear path
to follow for those of us working with late 2.2.x kernels.

I tried merging the LVM patches that I am using with the 2.2.16 RAID
patchset, but there is a massive collision in the ll_rw_blk.c file that
doesn't appear to be straightforward in its resolution.

} Andrea

Greg

}-- End of excerpt from Andrea Arcangeli

As always,
Dr. G.W. Wettstein, Ph.D.     Enjellic Systems Development, LLC.
4206 N. 19th Ave.             Specializing in information infra-structure
Fargo, ND 58102               development.
PH: 701-281-4950              WWW: http://www.enjellic.com
FAX: 701-281-3949             EMAIL: [EMAIL PROTECTED]
--
"If you think nobody cares if you're alive, try missing a couple of car
 payments."
                                -- Earl Wilson
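When two patch sets collide in the same file, as with ll_rw_blk.c above, the failed hunks land in .rej files that must be merged by hand.  A self-contained toy illustration of that workflow; the file contents and the diff below are fabricated stand-ins, not the real kernel conflict:

```shell
# Demonstrate how a non-applying hunk produces a .rej file.
command -v patch >/dev/null || { echo "patch utility not installed"; exit 0; }
printf 'line one\nline two\n' > ll_rw_blk.c
# A hunk generated against a different version of the file:
cat > lvm.diff <<'EOF'
--- a/ll_rw_blk.c
+++ b/ll_rw_blk.c
@@ -1,2 +1,2 @@
-line ONE
+line ONE patched
 line two
EOF
patch -p1 < lvm.diff || true    # the hunk fails: context doesn't match
find . -name '*.rej'            # each .rej must then be merged by hand
```

The same reject-hunting step applies after stacking the RAID and LVM patch sets on a real tree.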
Re: Is the raid1readbalance patch production ready?
On Jul 21, 10:57am, Malcolm Beattie wrote:
} Subject: Is the raid1readbalance patch production ready?
}
} Is the raid1readbalance-2.2.15-B2 patch (when applied against a
} 2.2.16 + linux-2.2.16-raid-B2 kernel) rock-solid and production
} quality?  Can I trust 750GB of users' email to it?  Is it guaranteed
} to behave the same during failure modes as the non-patched RAID code
} does?  Is anyone using it heavily in a production system?  (Not that I
} expect any other answer except maybe for a resounding "probably" :-)

Here's one probably :-)

We are using the read-balancing patch as part of our standard patchset
to 2.2.16.  We are currently using it on about 20 production servers
with varying degrees of busyness.  I haven't heard a twerp out of it,
including through the failure of one side of a mirror on a moderately
busy file server.

} --Malcolm

Greg

}-- End of excerpt from Malcolm Beattie

As always,
Dr. G.W. Wettstein, Ph.D.     Enjellic Systems Development, LLC.
4206 N. 19th Ave.             Specializing in information infra-structure
Fargo, ND 58102               development.
PH: 701-281-4950              WWW: http://www.enjellic.com
FAX: 701-281-3949             EMAIL: [EMAIL PROTECTED]
--
"IMPORTANT: The entire physical universe, including this message, may
 one day collapse back into an infinitesimally small space.  Should
 another universe subsequently re-emerge, the existence of this message
 in that universe cannot be guaranteed."
                                -- Ryan Tucker
Re: RAID Devices and FS labels
On Apr 1, 10:40pm, Theo Van Dinter wrote:
} Subject: RAID Devices and FS labels
}
} On my home machine today, I decided to change how the filesystems are
} listed in /etc/fstab from the standard /dev/name to FS labels:
}
} LABEL=ROOT    /       ext2    defaults        1 1
} LABEL=USR     /usr    ext2    defaults        1 2
} ... [ deleted ] ...
}
} Does anyone know how the tools would handle this situation?  I'd
} assume that given a list of devices and labels, the RAID devices
} would come up first, and then the individual partitions, but I'm not
} sure how this works WRT mount, fsck, etc.  Any ideas?  Thanks.

As you have already anticipated, there are problems associated with
using volume and filesystem label based mounting with RAID1 mirrors.  I
can attest from personal experience that you will end up mounting the
underlying physical mirrors rather than the virtual block device.

I had initiated a thread about this a couple of months ago when I ran
into problems trying to get this to work.  The issue got batted around a
little and the general consensus was that this is indeed a problem.
There were essentially no good solutions proposed at that time other
than the notion of some heuristics.

The overall consensus was that the /proc/partitions pseudo-file has to
either present the /dev/md* devices first or mount has to explicitly
look for labels on them before considering the actual physical block
devices.

Hopefully this information is helpful.

Greg

}-- End of excerpt from Theo Van Dinter

As always,
Dr. G.W. Wettstein, Ph.D.     Enjellic Systems Development, Inc.
4206 N. 19th Ave.             Specializing in information infra-structure
Fargo, ND 58102               development.
PH: 701-281-4950              WWW: http://www.enjellic.com
FAX: 701-281-3949             EMAIL: [EMAIL PROTECTED]
--
"I'd rather see my sister in a whorehouse than my brother using windows."
                                -- Sam Creasey
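The ordering problem described above can be inspected directly: /proc/partitions lists devices in the order the tools will scan them.  A minimal sketch, assuming nothing beyond a Linux /proc:

```shell
# Show where md devices fall in /proc/partitions relative to the raw
# disks; if they are not listed first, a label scan can hit the
# underlying mirror halves before the composite device.
if grep 'md' /proc/partitions 2>/dev/null; then
    MD_PRESENT=yes
else
    MD_PRESENT=no
    echo "no md devices in /proc/partitions on this machine"
fi
```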
Re: Raid with new kernel
On Dec 5, 12:01pm, Danilo Godec wrote:
} Subject: Re: Raid with new kernel

Good morning to everyone.

} On Sun, 5 Dec 1999, ACEAlex wrote:
} > the 2.2.13 kernel which is the latest stable.  But when i try to
} > start using it i get different startup screens (see below).  Do i
} > have to patch the kernel before i use raidtools?  Cause i get
} > errors when trying to execute
}
} Yes.  The RedHat kernel includes the latest RAID patches.  You should
} patch your 2.2.13 kernel too.  The patch will probably produce two
} rejects, but you can ignore them.
}
} ftp.kernel.org/pub/linux/daemons/raid/alpha/raid0145-19990824-2.2.11.bz2

Since we use the new RAID code extensively, I always keep clean patches
against our operationally validated kernels.  The following should
cleanly add new RAID support to 2.2.13:

ftp://ftp.nodak.edu/pub/linux/ESD/kernel/raid0145-19990824-2.2.13.gz

The usual caveats about 'it works for us' apply, of course; test locally
before saving all the accounting data on a RAID volume.

Hi Dave!  I thought I would hit two birds with one stone.

Greg

}-- End of excerpt from Danilo Godec

As always,
Dr. G.W. Wettstein, Ph.D.     Enjellic Systems Development, Inc.
4206 N. 19th Ave.             Specializing in information infra-structure
Fargo, ND 58102               development.
PH: 701-281-4950              WWW: http://www.enjellic.com
FAX: 701-281-3949             EMAIL: [EMAIL PROTECTED]
--
"Extensive interviews show that not one alcoholic has ever actually seen
 a pink elephant."
                                -- Yale University Center of Alcohol Studies
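Applying such a patch is a zcat-into-patch step; a hedged sketch in which the patch filename and tree location are assumptions (adjust to your own copies), with --dry-run first so nothing is modified until any rejects have been inspected:

```shell
# Dry-run the RAID patch against a kernel tree before committing to it.
# PATCH and SRC are example locations -- point them at your own copies.
PATCH=$PWD/raid0145-19990824-2.2.13.gz
SRC=/usr/src/linux-2.2.13
if [ -f "$PATCH" ] && [ -d "$SRC" ]; then
    ( cd "$SRC" && zcat "$PATCH" | patch -p1 --dry-run )
    APPLIED=tried
else
    echo "patch or source tree not found; nothing done"
    APPLIED=skipped
fi
```

Re-run without --dry-run once the output looks clean.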
Re: Hard Vs. Soft raid?
On Oct 28, 2:15am, [EMAIL PROTECTED] wrote:
} Subject: Hard Vs. Soft raid?
}
} HI, Good day.  I was wondering what would be a good hardware raid
} controller for use with RH 6.1?  I'm looking at the Mylex eXtremeRAID.
} Is a hardware raid controller worth the money?  Is it easier to set
} up, more reliable, less hassle?

I can speak very highly of the Mylex/IBM products.  We have been using
DAC960's in production servers and have been happy with them.  The Mylex
eXtremeRAID and AcceleRAID products trace their heritage back to the
DAC960's.  Leonard Zubkoff's excellent driver for this architecture is
now in the mainline kernel sources, so there is minimal hassle from this
perspective as well.

Going hardware or software is in essence a judgement call.  The software
RAID performs better, secondary to the greater resources that the PII's
with MMX can bring to the table.  The hardware RAID option is probably
more of a plug-it-in-and-forget-about-it solution at this point in time.

If the decision to go with hardware RAID is made, you will definitely
want to opt for SCA drives in some type of hot-swap enclosure.  This is
especially true if Mylex is your selected vendor.  Leonard's driver
supports control of the composite RAID volumes through the proc
filesystem.  To realize the full advantage of RAID5 for
high-availability you will want to be able to pull and replace drives if
you happen to have a failure.

If a drive does fail out of a Mylex controlled volume, you can simply
inspect the output from /proc/rd/cN/current_status and determine which
drive has failed.  With hot-swap you simply pull and replace the drive,
echo an appropriate rebuild command to /proc/rd/cN/user_command, and the
array will be rebuilt.  All this can occur while the server is on-line
and without apparent user disruption.

The hardware RAID solution is also easier from a system setup
perspective at this time.  Since the composite RAID volume looks simply
like another drive, it can be partitioned and booted from without any of
the concerns that have been discussed with the software solution.

So to re-iterate, it comes down to pretty much a decision based on your
operating environment.  I use both hardware and software solutions; the
decision pretty much comes down to the goals of the deployment.

} Any ideas on this subject will be greatly appreciated.
}
} Ralf R. Kotowski

Hopefully this information is of assistance.  Good luck with your
project.

Greg

}-- End of excerpt from [EMAIL PROTECTED]

As always,
Dr. G.W. Wettstein            Enjellic Systems Development - Specializing
4206 N. 19th Ave.             in information infra-structure solutions.
Fargo, ND 58102               WWW: http://www.enjellic.com
Phone: 701-281-1686           EMAIL: [EMAIL PROTECTED]
--
"A raccoon tangled with a 23,000 volt line today.  The results blacked
 out 1400 homes and, of course, one raccoon."
                                -- Steel City News
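The pull-and-replace cycle described above amounts to a couple of commands against the driver's proc interface.  The controller path (c0) and the channel:target pair (0:2) are examples only; the "rebuild channel:target" form follows the DAC960 driver's documented user_command interface:

```shell
# Locate a dead drive on DAC960 controller 0 and kick off a rebuild.
CTRL=/proc/rd/c0
if [ -r "$CTRL/current_status" ]; then
    grep -i 'dead' "$CTRL/current_status"     # identify the failed drive
    # after hot-swapping the drive (channel 0, target 2 as an example):
    # echo "rebuild 0:2" > "$CTRL/user_command"
    DAC960=present
else
    echo "no DAC960 controller at $CTRL on this machine"
    DAC960=absent
fi
```

The rebuild runs in the background; progress shows up in current_status.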
RE: DPT Linux RAID.
On Oct 26, 12:27pm, [EMAIL PROTECTED] wrote:
} Subject: RE: DPT Linux RAID.
}
} On Tue, 26 Oct 1999, G.W. Wettstein wrote:
} > CONFIG_SCSI_EATA_DMA
} >
} > The instability is especially profound in an SMP environment.
} > Under any kind of load there will be crashes and hangs.
}
} Red Hat defaults to using CONFIG_SCSI_EATA_DMA if you have a PM2144UW.
} I have a client with 2 servers using PM2144UW's with
} CONFIG_SCSI_EATA_DMA that have been rock stable.  Neither is SMP.  One
} is a mail/web/dns server.  The other is backup mail/dns and squid.

That is an interesting datapoint.  The PM3334 in a dual-PII simply would
not hold up under load on our main IMAP server with the EATA_DMA driver.
Our news server (dual PII-300) was also running with one of these and we
saw problems.

The problems seemed to be compounded when we had SMC EtherPower-II
(EPIC) cards in the machines.  I had a lot of respect for SMC, but these
cards have no place in a production environment from our experience.

The issue is probably pretty much moot for us at this point.  The DPT
cards, at least the 3334 we have, simply don't have the I/O performance
that we need as our load scales.  We have been pretty happy with the
DAC960 cards, although we had to turn off tagged queuing in order to
keep the drives on-line.

We are moving toward fibre-channel and outboard RAID controllers to
implement the SAN that we are deploying for our Linux server farms.
Given the excellent luck that we have had with the software RAID code
for Linux, I probably see a diminishing future for hardware RAID
controller cards in most of our servers.  We are using software RAID1 to
mirror root, var and swap and then deploying the service filesystems on
the RAID5 composite volumes provided by the fibre-channel controllers.

} Jon Lewis *[EMAIL PROTECTED]*| Spammers will be winnuked or

Thanks again for the note, have a pleasant remainder of the week.

Greg

}-- End of excerpt from [EMAIL PROTECTED]

As always,
Dr. G.W. Wettstein            Enjellic Systems Development - Specializing
4206 N. 19th Ave.             in information infra-structure solutions.
Fargo, ND 58102               WWW: http://www.enjellic.com
Phone: 701-281-1686           EMAIL: [EMAIL PROTECTED]
--
"MS can classify NT however they like.  Calling a pig a bird still
 doesn't get you flying ham, however."
                                -- Steven N. Hirsch
RE: DPT Linux RAID.
On Oct 25, 8:33pm, Kenneth Cornetet wrote:
} Subject: RE: DPT Linux RAID.
}
} Careful on this!  There are two EATA drivers.  It's been several
} months since I was trying this driver, but I believe the correct one
} is simply called EATA.  The ones *not* to use are called eata_dma and
} eata_pio.  I got this bit of info from the authors of the drivers
} themselves.  If I recall correctly, the author of eata_dma had moved
} all of his development to FreeBSD and was not currently working on the
} Linux version.
}
} Also, I definitely recall that the "bad" driver would not even boot in
} a multiprocessor system.  Again, I think the one to use is EATA or
} EATA/DMA and the one not to use was eata_dma, but I may have these
} backwards.

The following is the .config option that needs to be set for reliable
operation with the DPT cards:

	CONFIG_SCSI_EATA=y

We have not found the driver configured with the following define to be
stable:

	CONFIG_SCSI_EATA_DMA

The instability is especially profound in an SMP environment.  Under any
kind of load there will be crashes and hangs.

We have a fair amount of experience with DPT-3334UW cards in IMAP based
messaging servers.  We had a lot of problems until we discovered the
difference between the two drivers.  With the proper driver the DPT
cards have been as reliable as a bowling ball.  Not very fast, but
extremely tough and reliable.

Greg

}-- End of excerpt from Kenneth Cornetet

As always,
Dr. G.W. Wettstein            Enjellic Systems Development - Specializing
4206 N. 19th Ave.             in information infra-structure solutions.
Fargo, ND 58102               WWW: http://www.enjellic.com
Phone: 701-281-1686           EMAIL: [EMAIL PROTECTED]
--
"Meeting: An assembly of people coming together to decide what person or
 department not represented in the room must solve a problem."
                                -- Unknown
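Given how easy the two drivers are to confuse, a quick check of which one a kernel build is actually configured with can save grief; the .config path below is an assumption, so point it at your own tree:

```shell
# Report every EATA-related option in the kernel configuration.
CONFIG=/usr/src/linux/.config
if [ -f "$CONFIG" ]; then
    grep '^CONFIG_SCSI_EATA' "$CONFIG"
else
    echo "no kernel .config found at $CONFIG"
fi
```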
Re: DAC960 and swap
On Oct 13, 7:14pm, Vedad Kajtaz wrote:
} Subject: DAC960 and swap
}
} Hello,

Good morning, I hope that your day is going well.

} i've purchased a DAC960 card, and i'm successfully running it in
} raid1 mode with two scsi disks in hot swappable bays.  The doc says
} it is possible to use a partition /dev/rd/cXdXpX as root disk, but it
} doesn't say if a swap partition may be used on it.  Does anyone know
} if it is possible/safe to do so?  I could then remove ide disks from
} all servers (that would be nice :)

The partitions on the DAC960 virtual drives are as valid as any other
block device.  As such you will not have any problems using one of them
as a swap partition.  In fact, if you belong to the school of optimum
reliability/redundancy, you will want to have your swap partition on a
device which supports some type of redundancy, either RAID1 or RAID5.

Here is the partition table of one virtual drive on a production server
that we configured with a DAC960 controller:

    Device Boot     Start    End   Blocks  Id  System
/dev/rd/c0d0p1          1    129   264176  83  Linux native
/dev/rd/c0d0p2        130    194   133120  82  Linux swap
/dev/rd/c0d0p3        195    451   526336  83  Linux native
/dev/rd/c0d0p4        452   3375  5988352   5  Extended
/dev/rd/c0d0p5        452   3375  5988336  83  Linux native

Swap has been running on /dev/rd/c0d0p2 for about 6 months without a
peep from it.

} Thanx,
} -- Vedad Kajtaz

No problem.  Good luck with the DAC960.

Greg

}-- End of excerpt from Vedad Kajtaz

As always,
Dr. G.W. Wettstein            Enjellic Systems Development - Specializing
4206 N. 19th Ave.             in information infra-structure solutions.
Fargo, ND 58102               WWW: http://www.enjellic.com
Phone: 701-281-1686           EMAIL: [EMAIL PROTECTED]
--
"Extensive interviews show that not one alcoholic has ever actually seen
 a pink elephant."
                                -- Yale University Center of Alcohol Studies
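Enabling swap on such a partition is the ordinary mkswap/swapon sequence.  A hedged sketch, guarded so it only acts when the device from the table above actually exists on the machine running it:

```shell
# Initialize and enable swap on the DAC960 partition from the table above.
DEV=/dev/rd/c0d0p2
if [ -b "$DEV" ]; then
    mkswap "$DEV" && swapon "$DEV"
    SWAP_STATE=enabled
else
    echo "$DEV not present on this machine; skipping"
    SWAP_STATE=skipped
fi
# Matching /etc/fstab entry so it comes back after a reboot:
#   /dev/rd/c0d0p2   swap   swap   defaults   0 0
```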
RE: mirroring over net
On Oct 8, 10:31pm, Michael wrote:
} Subject: RE: mirroring over net
}
} Perhaps this would be true on a normal network connection, but I
} would expect on an NBD raid setup one would want to run fo
} connections or the super fast proprietary linux network connection
} between the two machines to enhance access time.  This should give
} bandwidth between machines comparable to the bandwidth to the disks
} themselves.

The network connection isn't the bottleneck; it's the NBD components.  I
have run RAID0 composites built on top of NBD block devices over
switched Gigabit ethernet with poor performance.  Actually the
performance is quite good until a certain I/O volume level is hit, and
then things get really dismal.

My understanding is that this is due to the NBD system not dealing with
out-of-order writes.  I haven't had anyone give me any hints as to how
to best attack this problem.

} Michael

Greg

}-- End of excerpt from Michael

As always,
Dr. G.W. Wettstein            Enjellic Systems Development - Specializing
4206 N. 19th Ave.             in information infra-structure solutions.
Fargo, ND 58102               WWW: http://www.enjellic.com
Phone: 701-281-1686           EMAIL: [EMAIL PROTECTED]
--
"... then the day came when the risk to remain tight in a bud was more
 painful than the risk it took to blossom."
                                -- Anais Nin
Re: Beginner's experiences
On Sep 23, 10:46pm, Dave Wreski wrote:
} Subject: Re: Beginner's experiences
}
} > - No kernel patches for 2.2.12 available (using 2.2.11 causes
} >   confusion)
} >
} > ftp://ftp.nodak.edu/pub/linux/ESD/kernel/raid0145-19990824-2.2.12.gz
}
} Who built this patch?  Is there any word on a patch being distributed
} by Ingo and crew?

I built it, based on the 0824 2.2.11 patch set.  Trivial, but I run a
lot of boxes and I don't want to putz with worrying about rejects and
the like on kernels that are headed for production boxes.  I just
offered its location in the hope that it may save other people a step
or two.

} Thanks, Dave

Greg

}-- End of excerpt from Dave Wreski

As always,
Dr. G.W. Wettstein            Enjellic Systems Development - Specializing
4206 N. 19th Ave.             in information infra-structure solutions.
Fargo, ND 58102               WWW: http://www.enjellic.com
Phone: 701-281-1686           EMAIL: [EMAIL PROTECTED]
--
"Atilla The Hun's Maxim: If you're going to rape, pillage and burn, be
 sure to do things in that order."
                                -- P.J. Plauger, Programming On Purpose
Re: Patch and raid tools for 2.2.12
On Sep 21, 8:03am, "Robert E. Lee" wrote:
} Subject: Patch and raid tools for 2.2.12
}
} The latest kernel patch and raid tool combination I found on
} ftp.us.kernel.org/pub/linux/daemons/raid/alpha was for the 2.2.11
} kernel.  Where might I find the 2.2.12 kernel patches?

The following is a compressed diff of the 0824 patchset created against
a virgin 2.2.12 kernel.  We have it running RAID1 mirroring on
production boxes:

ftp://ftp.nodak.edu/pub/linux/ESD/kernel/raid0145-19990824-2.2.12.gz

} Robert E. Lee

Good luck with it.

Greg

}-- End of excerpt from "Robert E. Lee"

As always,
Dr. G.W. Wettstein            Enjellic Systems Development - Specializing
4206 N. 19th Ave.             in information infra-structure solutions.
Fargo, ND 58102               WWW: http://www.enjellic.com
Phone: 701-281-1686           EMAIL: [EMAIL PROTECTED]
--
"Whenever you find that you are on the side of the majority, it is time
 to reform."
                                -- Mark Twain
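The recipe for producing such a clean diff is to keep a virgin tree and a patched tree side by side and run diff -urN between them.  Toy one-file trees stand in for the real kernel sources in this runnable sketch:

```shell
# Produce a compressed clean diff between a virgin and a patched tree.
mkdir -p virgin patched
echo 'int main(void) { return 0; }'  > virgin/init.c
echo 'int main(void) { return 42; }' > patched/init.c
diff -urN virgin patched | gzip > raid-clean.diff.gz
# Count the changed (+/-) lines, excluding the ---/+++ headers:
gzip -dc raid-clean.diff.gz | grep -c '^[-+][^-+]'   # prints 2
```

Against a real kernel the two directories would be the stock sources and the patched copy, and the resulting file is what gets published.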
Re: Newbie: What to do when a disk fails?
On Sep 15, 2:01pm, Chris Mauritz wrote:
} Subject: Re: Newbie: What to do when a disk fails?
}
} > This all pretty much implies that a new RAID patchset will be
} > required when 2.2.13 hits the streets.
}
} Sigh.  So what do the RAID deities suggest someone use for
} "production" if they're starting out with a clean install and can use
} any ol' kernel or raid patch they desire?  What's the most stable at
} this point?

I am far from a deity, but I have been using 2.2.12 with the 2.2.11
patches, omitting the memory leak patch, for those boxes that I need
software RAID support on.  A clean diff of the 08/24/1999 patches
against a virgin 2.2.12 kernel can be obtained from:

ftp://ftp.nodak.edu/pub/linux/ESD/kernel/raid0145-19990824-2.2.12.gz

We are running production boxes on this code base with RAID1 mirroring
for /, swap and /var without any problems.  The boxes haven't run into
memory problems either, but the leak appears to be somewhat specific to
the system load mix.

The other alternative is to use 2.2.11.  The directory above contains an
older patch set which will drop cleanly in against a 2.2.11 kernel.  In
this case you will probably want to obtain the TCP/IP memory leak fixes
and apply those as well.  We are running production boxes with that mix
and all seems to be well.

} Cheers, Chris

Good luck.

Greg

}-- End of excerpt from Chris Mauritz

As always,
Dr. G.W. Wettstein            Enjellic Systems Development - Specializing
4206 N. 19th Ave.             in information infra-structure solutions.
Fargo, ND 58102               WWW: http://www.enjellic.com
Phone: 701-281-1686           EMAIL: [EMAIL PROTECTED]
--
"It means your NE2000 clone wet itself.  Some of them do this when they
 get collisions.  The driver response is basically to bang the card on
 the head repeatedly until it talks sense."
                                -- Alan Cox, Linux-Net
Re: Problem getting raid1 devices to come up on boot
On Aug 31, 12:42pm, [EMAIL PROTECTED] wrote:
} Subject: Problem getting raid1 devices to come up on boot
}
} Hi,

Good morning.

} I have been working on getting raid1 software support to work with
} Redhat 6.0 (kernel version 2.2.5-15smp) and raidtools 0.90.  I can
} create raid1 devices now but can't get them to run on bootup.  Every
} time I reboot I have to restart the raid devices.  I set up a
} /dev/md0 and a /dev/md1 and mounted them to /newhome and /newvar.
} However, if I try to add these devices into my /etc/fstab file then I
} halt on bootup when it tries to access /dev/md0, because it doesn't
} recognize it as a valid file system.

Read the documentation in the raidtools closely, especially where it
talks about RAID autodetect.  I assume that you are using partitions as
the devices for your RAID1 array.

In a nutshell, you use fdisk or a similar utility to set the partition
ID to fd.  If you have the RAID code compiled into the kernel, the
startup code will recognize these partitions and piece together the md
configuration based on the superblocks found on these partitions.  The
kernel will then automatically start the devices running.

It all works, VERY well.  I have three big production servers running
with this setup and I don't hear a peep out of them.

There have been notes published lately about how to get LILO to boot
from these devices.  You can study these, but on all my machines I use a
separate standalone boot/root partition so that I can take a very
defensive administrative position.

} Thanks, Kevin Adams

Greg

}-- End of excerpt from [EMAIL PROTECTED]

As always,
Dr. G.W. Wettstein            Enjellic Systems Development - Specializing
4206 N. 19th Ave.             in information infra-structure solutions.
Fargo, ND 58102               WWW: http://www.enjellic.com
Phone: 701-281-1686           EMAIL: [EMAIL PROTECTED]
--
"... Doc (interviews) is closest in lookfeel to a Windows
 word-processor.  It's even slow.  Very slow.  Hard to set up fonts and
 printing in the version I have"
                                -- David Johnson
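For reference, the autodetect setup described above pairs fd-type partitions (set via fdisk's `t` command) with an /etc/raidtab along these lines; the device names here are examples only, so substitute your own partitions:

```
raiddev /dev/md0
    raid-level              1
    nr-raid-disks           2
    persistent-superblock   1
    chunk-size              4
    device                  /dev/sdb1
    raid-disk               0
    device                  /dev/sdc1
    raid-disk               1
```

The persistent-superblock directive is what writes the on-disk superblocks the kernel startup code uses to piece the array back together at boot.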
Re: Some questions.
On Aug 18, 4:52pm, Marc Mutz wrote:
} Subject: Re: Some questions.

Good day to everyone interested in Linux raid.  I hope that this note is
helpful to your day.

} You can try to patch 2.2.11 with the 2.2.10 patch.  It should apply
} more or less cleanly.  The rest can be corrected by hand.  Or use
} 2.2.12-final from Alan Cox's directory on ftp.*.kernel.org.  It has
} the new raid stuff integrated.  Probably Linus will say 'no' to that
} for the 'real' 2.2.12, but then this should be the only difference.
}
} I think it's a shame that the kernel guys don't let you integrate
} this code directly into the kernel; that would make it a lot easier
} for the users.  It makes it "difficult" for all the others that are
} using old-style raid semantics and have chosen a rather poor
} distribution (SuSE, in my case) that still comes with the old
} raidtools :-(

We are using the new raid code heavily to do root, var and swap
mirroring on our production Linux servers.  Since I wanted to get off of
2.2.10, I back-ported the 2.2.12 patches into 2.2.11 so that we would
have a clean diff against the stock 2.2.11 sources.  The patch can be
picked up at:

ftp://ftp.nodak.edu/pub/linux/ESD/kernel/raid0145-19990724-2.2.11.gz

We currently have production machines running RAID1 mirrors based on
this patch set, so I think that it is correct.  Standard caveats about
being careful with this kind of stuff do apply, though.

Hopefully others will find this useful.  Have a pleasant day.

} Marc

Greg

}-- End of excerpt from Marc Mutz

As always,
Dr. G.W. Wettstein            Enjellic Systems Development - Specializing
4206 N. 19th Ave.             in information infra-structure solutions.
Fargo, ND 58102               WWW: http://www.enjellic.com
Phone: 701-281-1686           EMAIL: [EMAIL PROTECTED]
--
"I had far rather walk, as I do, in daily terror of eternity, than feel
 that this was only a children's game in which all of the contestants
 would get equally worthless prizes in the end."
                                -- T. S. Eliot
Re: RAID1 over RAID0 on 2.2.10
On Aug 18, 4:10pm, Alan Meadows wrote:
} Subject: RAID1 over RAID0 on 2.2.10
}
} Hello,
}
} From past messages I've gotten the feeling that some people consider
} 2.2.10 unstable just as 2.2.9, with the corrupt filesystem issue.
} Does anyone here have experience with 2.2.10 enough to know it's
} stable, or is anyone firm on the idea that it's unstable?  None of
} the previous e-mails seem to be conclusive enough for me =)

We have production boxes running 2.2.10 with H.J.'s knfs patches and the
0724 release of the RAID code for mirroring.  We haven't had a speck of
trouble with it.  We also have virgin 2.2.10's in production with no
problems.  Once I started to hear rumbles about problems I held off
further deployments of 2.2.10 until we had something a bit more stable.
Hopefully 2.2.12 will be what everyone is looking for.

Incidentally, the 2.2.10 machines running the RAID mirroring code have
their mirrors built on top of two drives which are driven by separate
AIC7890 channels (440BX motherboards).  There was some talk that the
aic7xxx drivers might be implicated in the problem, but we have not
experienced any difficulties.

} Thanks,
} Alan Meadows [EMAIL PROTECTED]

Greg

}-- End of excerpt from Alan Meadows

As always,
Dr. G.W. Wettstein            Enjellic Systems Development - Specializing
4206 N. 19th Ave.             in information infra-structure solutions.
Fargo, ND 58102               WWW: http://www.enjellic.com
Phone: 701-281-1686           EMAIL: [EMAIL PROTECTED]
--
"One of the reporters asked if they could "see" the INTERNET worm.  They
 tried to explain that it wasn't something that you could actually see,
 but it was merely a program that was running in the background.  One of
 the reporters asked, 'What if you had a color monitor?'"
                                -- UNKNOWN