Re: FAQ
Tim Walberg wrote:
> On 08/04/2000 09:54 -0400, Theo Van Dinter wrote:
> > The usual suggestion is:
> >
> >     bzip2 -dc filename.tar.bz2 | tar -xf -
> >
> > or use bzcat, which is exactly the same as bzip2 -dc...

Most versions of tar now support either I or y for (un)compression.

--
Mathieu Arnold
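The suggestions above can be tried end to end on a throwaway archive; a small sketch (the `demo` directory and file names are just examples, and the `-j` shortcut assumes a reasonably recent GNU tar):

```shell
# Build a sample .tar.bz2 to work with.
mkdir -p demo && echo "hello" > demo/file.txt
tar -cf demo.tar demo && bzip2 -f demo.tar     # leaves demo.tar.bz2

# The portable pipeline from the FAQ:
bzip2 -dc demo.tar.bz2 | tar -xf -

# bzcat is exactly equivalent to bzip2 -dc:
bzcat demo.tar.bz2 | tar -xf -

# Newer GNU tar understands -j directly (older releases used -I or -y):
tar -xjf demo.tar.bz2
```

All three commands extract the same tree; the pipeline form is the one that works everywhere, including on tars that know nothing about bzip2.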
big big problem
Hi,

A box of mine crashed this morning. The problem is this:

# raidstart -a
# cat /proc/kmsg
(read) hdb1's sb offset: 15016576 [events: 004a]
(read) hde1's sb offset: 15016576 [events: 004a]
(read) hdf1's sb offset: 15016576 [events: 004a]
(read) hdg1's sb offset: 15016576 [events: 0048]
(read) hdh1's sb offset: 15016576 [events: 0049]
autorun ...
considering hdh1 ...
  adding hdh1 ...
  adding hdg1 ...
  adding hdf1 ...
  adding hde1 ...
  adding hdb1 ...
created md0
bind<hdb1,1>
bind<hde1,2>
bind<hdf1,3>
bind<hdg1,4>
bind<hdh1,5>
running: <hdh1><hdg1><hdf1><hde1><hdb1>
now!
hdh1's event counter: 0049
hdg1's event counter: 0048
hdf1's event counter: 004a
hde1's event counter: 004a
hdb1's event counter: 004a
md: superblock update time inconsistency -- using the most recent one
freshest: hdf1
md: kicking non-fresh hdg1 from array!
unbind<hdg1,4>
export_rdev(hdg1)
md0: removing former faulty hdg1!
md0: kicking faulty hdh1!
unbind<hdh1,3>
export_rdev(hdh1)
md: md0: raid array is not clean -- starting background reconstruction
raid5 personality registered
md0: max total readahead window set to 512k
md0: 4 data-disks, max readahead per data-disk: 128k
raid5: device hdf1 operational as raid disk 2
raid5: device hde1 operational as raid disk 1
raid5: device hdb1 operational as raid disk 0
raid5: not enough operational devices for md0 (2/5 failed)
RAID5 conf printout:
 --- rd:5 wd:3 fd:2
 disk 0, s:0, o:1, n:0 rd:0 us:1 dev:hdb1
 disk 1, s:0, o:1, n:1 rd:1 us:1 dev:hde1
 disk 2, s:0, o:1, n:2 rd:2 us:1 dev:hdf1
 disk 3, s:0, o:0, n:3 rd:3 us:1 dev:[dev 00:00]
 disk 4, s:0, o:0, n:4 rd:4 us:1 dev:[dev 00:00]
 disk 5, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
 disk 6, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
 disk 7, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
 disk 8, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
 disk 9, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
 disk 10, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
 disk 11, s:0, o:0, n:0 rd:0 us:0 dev:[dev 00:00]
raid5: failed to run raid set md0
pers->run() failed ...
do_md_run() returned -22
unbind<hdf1,2>
export_rdev(hdf1)
unbind<hde1,1>
export_rdev(hde1)
unbind<hdb1,0>
export_rdev(hdb1)
md0 stopped.
... autorun DONE.

How could I tell the kernel to start the raid array and do what it can to recover the data? As far as I can tell, the problem comes from:

hdh1's event counter: 0049
hdg1's event counter: 0048
hdf1's event counter: 004a
hde1's event counter: 004a
hdb1's event counter: 004a

How could I modify the event counter of one of the disks to get it to work?

--
Mathieu Arnold
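For what it's worth, with the 0.90 raidtools the usual answer to stale event counters was not to edit the counters by hand but to rewrite the superblocks: mark the most out-of-date disk as failed in /etc/raidtab and re-run mkraid with its force option, which recreates the superblocks in place without touching the data blocks. This only works if the raidtab describes the array exactly as it was originally created (same device order, chunk size, and parity algorithm). A sketch, not verified against this setup, using the device names from the log above (hdg1, at events 0048, is the stalest; hdh1 at 0049 gets trusted so raid5 can start degraded on four members):

```shell
# In /etc/raidtab, change hdg1's entry from raid-disk to failed-disk:
#
#     device          /dev/hdg1
#     failed-disk     3
#
# Then rewrite the superblocks in place; raidtools refuses a plain
# --force here and asks for --really-force before doing anything:
mkraid --really-force /dev/md0

# If the array comes up degraded [4/5], hot-add the stale disk back
# and let the background reconstruction resync it:
raidhotadd /dev/md0 /dev/hdg1
```

Since a wrong raidtab plus --really-force will destroy the array, this is the kind of command to double-check against the Software-RAID HOWTO before running.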
Re: Performance?
Tim Niemueller wrote:
> Hi there, I want to build up an array of four IBM DJNA 15 GB hard disks
> on an Abit PE6 with ATA/66 controller. The array should be RAID-5; what
> do you know about the performance? I mean in general, not only for this
> specific constellation? Is the source stable and usable for production?

Well, I've built an array of seven of those disks on a BP6 motherboard, and I find it really good. I've got web hosting, and /var with a news server which carries news.*, comp.*, fr.*, soc.* and a few more things, with a few hundred users. It's rock solid.

--
Cordialement,
Mathieu Arnold
PGP key id : 0x2D92519F
IRC : _mat   ICQ : 1827742
http://www.mat.cc/
Re: raid problem
Luca Berra wrote:
> On Wed, Oct 27, 1999 at 11:09:46PM +0200, Mathieu ARNOLD wrote:
> > I'm using a redhat 6.1 with a 2.3.21 kernel.
> > ...
> > Well, I've gone back to the 2.2 series, I'm now using a 2.2.13 kernel,
> > and it does exactly the same thing. Does someone have a clue?
>
> Yup. Neither 2.3.x nor 2.2.x support the new raid code, but you can find
> a patch for 2.2.x (while there is no patch for 2.3.x). I'd get
> ftp://ftp.fr.kernel.org/pub/linux/kernel/people/alan/2.2.13ac/patch-2.2.13ac1.gz
> The other option is
> ftp://ftp.fr.kernel.org/pub/linux/daemons/raid/alpha/raid0145-19990824-2.2.11.gz
> and fix the rejects by hand.

Yep, I read about it just after posting my message :/ I now have:

md0 : active raid5 hdh1[4] hdg1[3] hdf1[2] hde1[1] hdb1[0]
      60066304 blocks level 5, 32k chunk, algorithm 2 [5/5] [U]
      resync=15% finish=163.4min

and

/dev/md0   56G  3.7G  53G   7%  /opt1

I'm just happy now :)

--
Cordialement,
Mathieu Arnold
PGP key id : 0x2D92519F
IRC : _mat   ICQ : 1827742
http://www.mat.cc/
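Applying a patch like raid0145 and fixing the rejects by hand is the standard patch(1) workflow: `zcat raid0145-19990824-2.2.11.gz | patch -p1` from the kernel source root, then look for `*.rej` files and merge their hunks manually. The mechanics can be seen in miniature on a throwaway file (names here are illustrative, not from the thread):

```shell
# Make an "original" file and a modified copy, and diff them.
printf 'old line\n' > config.txt
printf 'new line\n' > config.new
diff -u config.txt config.new > fix.patch || true   # diff exits 1 when files differ

# Apply the unified diff to the original; -p0 keeps paths as-is
# (kernel patches are usually applied with -p1 from the tree root).
patch -p0 config.txt < fix.patch

# Hunks that fail to apply would be left in config.txt.rej,
# to be merged by hand -- the "fix the rejects" step above.
cat config.txt
```
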
Re: Re: raid problem
Luca Berra [EMAIL PROTECTED] said:
> On Fri, Oct 15, 1999 at 12:13:51AM +0200, Mathieu Arnold wrote:
> > Hi, I'm using a redhat 6.1 with a 2.3.21 kernel.
>
> Do you have any need for a 2.3 kernel??? If not, please reinstall the
> kernel that came with your distribution.

Well, I'm using a dual-Celeron motherboard (Abit BP6), and SMP is really better with 2.3 kernels.

> Regards, Luca
>
> P.S. It seems to me (from this kind of message) that not having the
> latest raid patches in the 2.3 kernel is causing more problems than
> having them would. Could this be a suggestion to Linus?

Where can I find the latest patches?

--
Cordialement,
Mathieu Arnold
PGP key id : 0x2D92519F
IRC : _mat   ICQ : 1827742
http://www.mat.cc/
raid problem
Hi,

I'm using a redhat 6.1 with a 2.3.21 kernel. The raid/raid5 support is compiled into the kernel (i.e. not as a module). I have 5 hard drives (IBM-DJNA-351520, 14664MB w/430kB Cache, CHS=29795/16/63, (U)DMA), called hdb to hdf. Each of these disks has only one partition, hd?1, which takes all the room. Here is my /etc/raidtab:

raiddev /dev/md0
        raid-level              5
        nr-raid-disks           5
        nr-spare-disks          0
        persistent-superblock   1
        parity-algorithm        left-symmetric
        chunk-size              32
        device                  /dev/hdb1
        raid-disk               0
        device                  /dev/hdc1
        raid-disk               1
        device                  /dev/hdd1
        raid-disk               2
        device                  /dev/hde1
        raid-disk               3
        device                  /dev/hdf1
        raid-disk               4

which, according to the HOWTO, is right. And when I try:

# mkraid /dev/md0

here is what I get:

handling MD device /dev/md0
analyzing super-block
disk 0: /dev/hdb1, 15016648kB, raid superblock at 15016576kB
disk 1: /dev/hdc1, 15016648kB, raid superblock at 15016576kB
disk 2: /dev/hdd1, 15016648kB, raid superblock at 15016576kB
disk 3: /dev/hde1, 15016648kB, raid superblock at 15016576kB
disk 4: /dev/hdf1, 15016648kB, raid superblock at 15016576kB
mkraid: aborted, see the syslog and /proc/mdstat for potential clues.

And nothing appears in the syslog, and /proc/mdstat remains unchanged. Anyone have a clue?

--
Cordialement,
Mathieu Arnold
PGP key id : 0x2D92519F
IRC : _mat   ICQ : 1827742
http://www.mat.cc/