RE: Patch for kernel 2.2.13?
Thanks everyone for the help, stupid me for not trying it before :)

-----Original Message-----
From: Uwe Schmeling [SMTP:[EMAIL PROTECTED]]
Sent: Thursday, 4 November 1999 8:11
To: Richard Costa
Cc: [EMAIL PROTECTED]
Subject: Re: Patch for kernel 2.2.13?

On Wed, 3 Nov 1999 [EMAIL PROTECTED] wrote:
> Does anyone know when the raid 0.90 patch for kernel 2.2.13 should be
> released? I've looked at kernel.org but the latest there is 2.2.11.

Applying the 2.2.11 patch works fine for me on 2.2.13 (you may safely
ignore the rejects).

Uwe
superblock Q/clarification
My impression was that the s/w RAID code only wrote to the ends (last 4k)
of each device, so I'm trying to clarify the following paragraph from
http://ostenfeld.dk/~jakob/Software-RAID.HOWTO/Software-RAID.HOWTO-4.html#ss4.7

> The persistent superblocks solve these problems. When an array is
> initialized with the persistent-superblock option in the /etc/raidtab
> file, a special superblock is written in the *beginning* of all disks
> participating in the array. This allows the kernel to read the
> configuration of RAID devices directly from the disks involved, instead
> of reading from some configuration file that may not be available at
> all times.

Is the paragraph wrong or am I misunderstanding persistent superblocks?

Thanks,
James
-- 
Miscellaneous Engineer --- IBM Netfinity Performance Development
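For readers who haven't seen it, the persistent-superblock option the HOWTO describes is set per-array in /etc/raidtab. A minimal sketch of such a file, with made-up device names and a RAID-1 layout chosen purely for illustration:

```
raiddev /dev/md0
    raid-level            1
    nr-raid-disks         2
    persistent-superblock 1
    chunk-size            4
    device                /dev/sdb1
    raid-disk             0
    device                /dev/sdc1
    raid-disk             1
```

With persistent-superblock set to 1, mkraid writes the superblock onto each member device, so the kernel can reassemble the array without consulting /etc/raidtab at boot.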
Re: Double failure and RAID 5 array is still up...
[ Thursday, November 4, 1999 ] Marc Merlin wrote:
> I'm using 2.2.12 into which I patched in raid0145-19990724-2.2.10.
> Because of an apparent SCSI problem, I had two errors in a row on two
> different disks (2 out of 9), and yet the array didn't shut down:
>
> kernel: raid5: Disk failure on sdg1, disabling device.
>         Operation continuing on 7 devices

Yes, this is a bug in RAID 0.90 that was reported initially (as far as I
can tell) by Shane Owenby about a week ago in this message:
http://www.mail-archive.com/linux-raid@vger.rutgers.edu/msg04257.html

It looks like raid5_error doesn't handle this case well yet... (perhaps
check conf->working_disks vs. conf->raid_disks? not sure). It doesn't
look like it handles multiple spares either, but since conf->spare seems
to be taken as a single disk in other parts of the code, that looks to be
a known restriction.

James
RE: Patch for kernel 2.2.13?
On 03-Nov-99 [EMAIL PROTECTED] wrote:
> Does anyone know when the raid 0.90 patch for kernel 2.2.13 should be
> released? I've looked at kernel.org but the latest there is 2.2.11.

You can find it here:
ftp://ftp.fr.kernel.org/mirrors/ftp.kernel.org/linux/kernel/alan/2.2.13ac/

Christopher
Re: superblock Q/clarification
On Thu, Nov 04, 1999 at 08:09:31AM -0500, James Manning wrote:
> My impression was that the s/w RAID code only wrote to the ends (last
> 4k) of each device, so I'm trying to clarify the following paragraph
> from
> http://ostenfeld.dk/~jakob/Software-RAID.HOWTO/Software-RAID.HOWTO-4.html#ss4.7
>
> > The persistent superblocks solve these problems. When an array is
> > initialized with the persistent-superblock option in the /etc/raidtab
> > file, a special superblock is written in the *beginning* of all disks
> > participating in the array. This allows the

It's a bug! The superblocks are written at the end of all disks. I'll fix
this in the HOWTO ASAP.

> > kernel to read the configuration of RAID devices directly from the
> > disks involved, instead of reading from some configuration file that
> > may not be available at all times.
>
> Is the paragraph wrong or am I misunderstanding persistent superblocks?

I can't believe I actually wrote *beginning* in the HOWTO... I should
know where the superblocks are... :)

-- 
: [EMAIL PROTECTED] : And I see the elder races,        :
:                   : putrid forms of man               :
: Jakob Østergaard  : See him rise and claim the earth, :
: OZ9ABN            : his downfall is at hand.          :
:...................:....{Konkhra}.....................:
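To make "at the end" concrete: as I read the 0.90 code, MD reserves 64 KB at the end of each member device and places the superblock at the last 64 KB-aligned offset before that reserved area. A rough shell sketch of the offset arithmetic (the device size is a made-up example, and the formula is my reading of the MD_NEW_SIZE_SECTORS macro, so treat it as an assumption):

```shell
# Where the 0.90 RAID superblock should start, in 512-byte sectors.
dev_sectors=4194304           # example: a 2 GB device (assumption)
reserved=128                  # MD_RESERVED_SECTORS: 64 KB at the end

# Round the device size down to a 64 KB boundary, then step back 64 KB.
sb_offset=$(( (dev_sectors / reserved) * reserved - reserved ))
echo "superblock at sector $sb_offset ($(( (dev_sectors - sb_offset) / 2 )) KB before end)"
```

For this 2 GB example the superblock lands at sector 4194176, i.e. 64 KB before the end of the device, which is why nothing at the *beginning* of a member partition belongs to MD at all.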
Re: superblock Q/clarification
At 03:17 PM 11/4/1999 +0100, Jakob Østergaard wrote:
> It's a bug! The superblocks are written at the end of all disks. I'll
> fix this in the HOWTO ASAP.

Does this mean we no longer have to start raid partitions at block 1
instead of block zero?

===========================================
David Cooley N5XMT
Internet: [EMAIL PROTECTED]
Packet: N5XMT@KQ4LO.#INT.NC.USA.NA
T.A.P.R. Member #7068
We are Borg... Prepare to be assimilated!
===========================================
Re: superblock Q/clarification
On Thu, Nov 04, 1999 at 10:01:41AM -0500, David Cooley wrote:
> ...
> Does this mean we no longer have to start raid partitions at block 1
> instead of block zero?

I'm sorry, but I have no idea of what you're referring to... When
accessing the RAID device you can't see that there is a superblock at
all. You can't access the superblock via valid accesses to the MD device.

I might be misunderstanding your question... When did you ever have to
start anything at block 1 instead of block 0?

(Gee, I hope this isn't in the HOWTO as well ;)
Re: superblock Q/clarification
[ Thursday, November 4, 1999 ] David Cooley wrote:
> Does this mean we no longer have to start raid partitions at block 1
> instead of block zero?

Just out of curiosity, when was this the case? I've done s/w RAIDs
directly on drives (not making partitions, so I lost autorun,
unfortunately) and never had a problem...

James
Re: superblock Q/clarification
When I first set up my RAID-5, I made the partitions with fdisk and
started /dev/hdc1 at block 0; the end was the end of the disk (single
partition per drive, except /dev/hdc5 is type whole disk). It ran fine
until I rebooted, when it came up and said there was no valid superblock.
I re-fdisked the drives and re-ran mkraid, and all was well until I
rebooted again. I read somewhere (can't remember where, though) that
block 0 had to be left alone, as the superblock was written there... I
re-fdisked all my drives so partition /dev/hdx1 started at block 1
instead of zero, and haven't had a problem since. I'm running kernel
2.2.12 with raidtools 0.90 and all the patches.

At 04:17 PM 11/4/1999 +0100, Jakob Østergaard wrote:
> On Thu, Nov 04, 1999 at 10:01:41AM -0500, David Cooley wrote:
> > Does this mean we no longer have to start raid partitions at block 1
> > instead of block zero?
>
> I'm sorry, but I have no idea of what you're referring to... When
> accessing the RAID device you can't see that there is a superblock at
> all. You can't access the superblock via valid accesses to the MD
> device. I might be misunderstanding your question... When did you ever
> have to start anything at block 1 instead of block 0?
> (Gee, I hope this isn't in the HOWTO as well ;)
RE: raid on 2.2.9
> Do we need to patch Linux 2.2.9 before we can use the raidtools (like
> mkraid) to install raid?

Yes. 2.2.9 was not a good version for RAID (or generally - filesystem
corruption problems). It is recommended to upgrade to 2.2.11+ or drop
back to 2.2.7. Either way, you will still need to patch the kernel
source, although with different versions of the patch for the different
kernel versions.

Cheers,

Bruno Prior [EMAIL PROTECTED]
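For anyone unsure what "patch the kernel source" involves in practice, the usual sequence looks something like the following. The patch filename and path here are made-up examples; use the one matching your kernel version from the FTP sites mentioned elsewhere in this thread:

```shell
# Apply a RAID 0.90 patch to an already-unpacked kernel source tree.
cd /usr/src/linux

# Dry-run first, to see which hunks would be rejected:
zcat /tmp/raid0145-19990824-2.2.11.gz | patch -p1 --dry-run

# If the rejects look harmless (see the 2.2.13 discussion above),
# apply for real, then reconfigure and rebuild the kernel:
zcat /tmp/raid0145-19990824-2.2.11.gz | patch -p1
```

After patching, the RAID personalities (RAID-0/1/5 support) must be enabled in the kernel configuration before rebuilding.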
linux RAID on 2.0.38 Kernel
Is it possible to put S/W RAID on 2.0.x kernels? I am pretty happy with
my current box (2.0.38) and I want to add some more drives to it. Is the
2.0 kernel okay?

Sean
RE: superblock Q/clarification
> > Does this mean we no longer have to start raid partitions at block 1
> > instead of block zero?
>
> I'm sorry, but I have no idea of what you're referring to...

See David's messages of 17/09/99 in the "Problem with mkraid for
/dev/md0" thread. David's curious experience and resulting misconception
was left unresolved, from what I can remember. Hence the continued
confusion.

Is the original experience detailed in those messages anything to do with
the fact that David's system is Sparc-based? Or something to do with
having to specify disk geometry at boot-time? I don't know, but it's
nothing to do with a limitation of RAID.

Anyway, the simple answer is no. The new-style RAID superblocks have
always been at the end of the partitions, so this typo changes nothing in
practice.

Cheers,

Bruno Prior [EMAIL PROTECTED]
Re: linux RAID on 2.0.38 Kernel
I've got one machine running this (2.0.37) and it works fine. I've got
several Pentium I/II machines using 2.0.37 with the raid 990824 patches,
and for me it's ROCK SOLID, even on SMP boards. I consider this a most
stable Linux SW-RAID setup, but your mileage may vary. :-)

Egon Eckert
Which patch should I use WAS: mkraid aborted - device too small??
I'm using the 0.50 tools and RAID works, but the 2.0.38 kernel was never
patched. The tools did not contain any info that specifically stated that
a patch was needed, and for the most part the tools work and I can get
RAID running. However, I am starting to see some strange behaviour, such
as ckraid always finding that the array is OK when it isn't.

Which patch should be applied against a 2.0.38 kernel when using the 0.50
tools? I cannot find anything in the docs or howtos that describes this.
Conversely, if I am running a 2.0.38 kernel, which tools/patch
combination could be used instead?

Alex

On Wed, 03 Nov 1999, Bruno Prior wrote:
> > /dev/hda8: device too small (0kB)
>
> I take it from your success with /dev/md0 and /dev/md1 that you have
> raid-patched the kernel? I'm not sure about the patch. If I remember
> right, I think the older tools worked OK with the newer kernel. (I
> could be wrong on this.)
Re: superblock Q/clarification
David Cooley wrote:
> It's probably something to do with the fact that I'm on a Sparc Ultra 2
> machine running Linux. Didn't think Linux saw the drives differently
> between platforms, but I guess it does.

I'm guessing the drives were originally used under Solaris/SunOS, i.e.
they had a Sun disk label on them? I had the exact same problem (RAID-5
was fine until I rebooted... then the disks were reported as 'not a valid
partition'). These were disks I used on a PC, but I had salvaged them
from work, where they had been used on Suns.

AFAIK the problem is in the Sun label. The way to fix it was to delete
all partitions, write the label, exit fdisk, restart fdisk, and make a
new empty DOS partition table. If you're wondering, I found it didn't
work right if I didn't write the label, quit, and restart before making
the empty DOS label. I've also noticed that after you make the DOS label,
partitions then do start from cylinder 1 instead of 0... maybe that's
where that line of thought came from.

-- 
Mike Marion - Unix SysAdmin/Engineer, Qualcomm Inc.
Black holes are where God divided by zero.
--really force
What are the consequences of mkraid --really-force?

I need to test RAID on my partitions /dev/hde1 and /dev/hde2. I have
configured the /etc/raidtab file as needed. When I use mkraid /dev/md0,
it tells me to try mkraid --force (i.e. --really-force). I have no data
whatsoever on either of these two partitions, and will put data on them
after I have installed RAID. Kindly help.

Kartik Paramasivam
Re: --really force
At 04:50 PM 11/4/99 -0500, you wrote:
> What are the consequences of mkraid --really-force?
>
> I need to test RAID on my partitions /dev/hde1 and /dev/hde2. I have
> configured the /etc/raidtab file as needed. When I use mkraid /dev/md0,
> it tells me to try mkraid --force (i.e. --really-force). I have no data
> whatsoever on either of these two partitions, and will put data on them
> after I have installed RAID.

mkraid --really-force will force the partitions to be made into raid
devices, adding the raid superblocks etc. After it finishes, you must do
"mke2fs /dev/md0". Once that's finished, mount it and off it goes.

===========================================
David Cooley N5XMT
Internet: [EMAIL PROTECTED]
Packet: N5XMT@KQ4LO.#INT.NC.USA.NA
T.A.P.R. Member #7068
Sponges grow in the ocean... Wonder how deep it would be if they didn't?!
===========================================
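Putting David's steps together, a typical first-time sequence looks like the sketch below. The mount point and the ext2 filesystem choice are assumptions for illustration; note that --really-force ignores any existing superblocks, so be certain the member partitions hold nothing you want to keep:

```shell
# /etc/raidtab must already describe /dev/md0 and its member devices.
mkraid --really-force /dev/md0   # write RAID superblocks, start the array
cat /proc/mdstat                 # confirm the array is up (and resyncing)
mke2fs /dev/md0                  # create an ext2 filesystem on the array
mount /dev/md0 /mnt/raid         # mount it, and off it goes
```

The resync shown in /proc/mdstat runs in the background; the array is usable while it completes.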
Upgrading RAID
Is there a procedure for adding more drives to a RAID system and
increasing the size of the partitions? We have Mylex AcceleRAID 250s
driving the RAID, and I am a little lost as to how to do it. I mean, when
and if the MySQL server ever breaks 10-12 GB of data, I would like to
have an easy way out.

Thanks,
Sean
Re: New on list and some questions
On Tue, 2 Nov 1999, Shoggoth wrote:
> Hi,
>
> On Mon, 01 Nov 1999, Francisco Jose Montilla wrote:
> [Very good stuff snipped]
>
> > - and raid level 0 sets for disk2 and disk4, as you don't care about
> >   redundancy w/ index files (you can easily recreate them). I don't
> >   know if using raid 0 in that machine will give more performance
> >   (although you'll benefit from larger storage capacity coupling
> >   those small disks), i'd bet no... just trying to generalise for
> >   other potential readers...
>
> Yes. Indeed I was trying to make use of these disks that are otherwise
> useless (I deal with a 450MB database), demonstrating to an enterprise
> that Linux can make use of their machines better than their actual
> server under NT (a 300MHz P][ - 92MB RAM). They accepted the challenge,
> and I have been working this weekend on the server assembly and tuning.
> I thought linear RAID was the better approach, with RAID-0 as second
> choice. But as I want max performance from this limited system, I asked
> for advice.

I'd bet raid0 will be better in terms of performance, and after all, you
don't have redundancy with linear either... Take care, and do backups:
you're risking your valuable data on old IDE disks, and have doubled the
possibility of losing it due to a disk crash; no linux/raid is gonna save
you from that if it happens.

> By now, I'm winning. The system gets a result of 0.1 secs per query
> against the 0.7 that the NT box gets. After being parsed by PHP and
> served by Apache, I get 0.5s per query. I have no means to measure the
> time w/o RAID, simply because the database does not fit };-

uh-oh, a big one-table database?

greetings,

Francisco J. Montilla
System & Network administrator    [EMAIL PROTECTED]
irc: pukka    Seville, Spain
INSFLUG (LiNUX) Coordinator: www.insflug.org - ftp.insflug.org
SW-RAID Bug ?
Hi,

I got a strange message:

Nov  4 01:13:37 nb010010143 kernel: raid5: bug: stripe->bh_new[0], sector 5180268 exists
Nov  4 01:13:37 nb010010143 kernel: raid5: bh c9069a20, bh_new c9069120

What does that mean, is it possible to fix (how?), and how severe is it?

I use kernel 2.2.13 with the raid patch/tools of 19990824.

Thomas
-- 
Thomas Waldmann (com_ma, Computer nach Masz)
email: [EMAIL PROTECTED]    www: www.com-ma.de
Please be patient if sending me email, response may be slow.