Re: Problem with 5disk RAID5 array - two drives lost

2006-04-23 Thread Arthur Britto
On Sun, 2006-04-23 at 17:17 -0700, Tim Bostrom wrote:
> I bought two extra 250GB drives - I'll try using dd_rescue as
> recommended and see if I can get a "good" copy of hdf online.

You might want to use dd_rhelp: http://www.kalysto.org/utilities/dd_rhelp/index.en.html

-Arthur
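For readers finding this thread later, a minimal sketch of the recovery step being discussed (device names are hypothetical): dd_rescue copies a dying disk block-for-block and keeps going past read errors, while dd_rhelp drives dd_rescue so the easily readable regions are recovered first.

```shell
# Clone the failing disk onto one of the fresh 250GB drives.
# /dev/hdf = failing source, /dev/hdi = new target (hypothetical names).
dd_rescue -v /dev/hdf /dev/hdi

# dd_rhelp wraps dd_rescue: it recovers the readable areas first, then
# converges on the bad spots, keeping a log so runs can be resumed.
dd_rhelp /dev/hdf /dev/hdi
```

Either way the md superblock comes along with the copy, so the clone can stand in for hdf when reassembling the degraded array.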

proactive raid5 disk replacement success (using bitmap + raid1)

2006-04-23 Thread dean gaudet
i had a disk in a raid5 which i wanted to clone onto the hot spare... without going offline and without long periods without redundancy. a few folks have discussed using bitmaps and temporary (superblockless) raid1 mappings to do this... i'm not sure anyone has tried / reported success though.
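The recipe being alluded to runs roughly as follows (a sketch with hypothetical device names, not the poster's exact commands): add a write-intent bitmap to the raid5, detach the member, wrap it in a superblock-less raid1 built with `mdadm --build` so the spare can be mirrored onto it online, while the bitmap keeps the re-add resync short.

```shell
mdadm --grow /dev/md0 --bitmap=internal               # enable write-intent bitmap
mdadm /dev/md0 --fail /dev/sdc1 --remove /dev/sdc1    # detach the member to clone
mdadm --build /dev/md9 --level=1 --raid-devices=2 /dev/sdc1 missing
mdadm /dev/md0 --re-add /dev/md9       # bitmap => only a quick catch-up resync
mdadm /dev/md9 --add /dev/sdd1         # raid1 clones sdc1 -> sdd1 while md0 runs
# ...wait for the raid1 resync to finish, then swap the clone in:
mdadm /dev/md0 --fail /dev/md9 --remove /dev/md9
mdadm --stop /dev/md9
mdadm /dev/md0 --re-add /dev/sdd1
mdadm --grow /dev/md0 --bitmap=none    # drop the bitmap if it is no longer wanted
```

The raid5 stays online throughout, and redundancy is only reduced for the short bitmap-driven resyncs rather than for a full rebuild.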

Re: Problem with 5disk RAID5 array - two drives lost

2006-04-23 Thread Tim Bostrom
First let me say - thank you for responding. I'm still trying to figure out this problem.

On Apr 21, 2006, at 8:54 PM, Molle Bestefich wrote:
> Tim Bostrom wrote:
> > It appears that /dev/hdf1 failed this past week and /dev/hdh1 failed
> > back in February.
>
> An obvious question woul

Re: Recovery speed at 1MB/s/device, unable to change

2006-04-23 Thread Anssi Hannula
(resend, prev post missed the list)

Neil Brown wrote:
> On Monday April 24, [EMAIL PROTECTED] wrote:
>
>> # mdadm --grow /dev/md_d1 --raid-devices=3 --backup-file backupfile
>> mdadm: Need to backup 128K of critical section..
>> mdadm: /dev/md_d1: Cannot get array details from sysfs
>>
>> Strace show

Re: EVMS causing problems with mdadm?

2006-04-23 Thread Luca Berra
On Mon, Apr 24, 2006 at 07:48:00AM +1000, Neil Brown wrote: On Sunday April 23, [EMAIL PROTECTED] wrote: Did my latest updates for my Kubuntu (Ubuntu KDE variant) this morning, and noticed that EVMS has now "taken control" of my RAID array. Didn't think much about it until I tried to make a RAID

Re: Recovery speed at 1MB/s/device, unable to change

2006-04-23 Thread Neil Brown
On Monday April 24, [EMAIL PROTECTED] wrote:
> # mdadm --grow /dev/md_d1 --raid-devices=3 --backup-file backupfile
> mdadm: Need to backup 128K of critical section..
> mdadm: /dev/md_d1: Cannot get array details from sysfs
>
> Strace shows that it's trying to access
> "/sys/block/md_d4/md/componen

Re: EVMS causing problems with mdadm?

2006-04-23 Thread Neil Brown
On Sunday April 23, [EMAIL PROTECTED] wrote: > Did my latest updates for my Kubuntu (Ubuntu KDE variant) this > morning, and noticed that EVMS has now "taken control" of my RAID > array. Didn't think much about it until I tried to make a RAID-1 array > with two disks I've just added to the system.

Re: to be or not to be...

2006-04-23 Thread Neil Brown
On Sunday April 23, [EMAIL PROTECTED] wrote:
> Hi all,
> to make a long story very very short:
> a) I create /dev/md1, kernel latest rc-2-git4 and mdadm-2.4.1.tgz,
> with this command:
> /root/mdadm -Cv /dev/.static/dev/.static/dev/.static/dev/md1 \
> --b

Re: Recovery speed at 1MB/s/device, unable to change

2006-04-23 Thread Anssi Hannula
Anssi Hannula wrote:
> The speed is only 2000K/sec, even after I set:
> ---
> # cat /proc/sys/dev/raid/speed_limit_min
> 1
> # cat /proc/sys/dev/raid/speed_limit_max
> 40
> ---
>
> The system is about 90% idle, so there should be more bandwidth.
>
> ---
> # cat /proc/version
> Linux versi
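For reference, these are the standard md throttling tunables, in kilobytes per second per device; a sketch of inspecting and raising them (the values below are illustrative, not from the thread):

```shell
# Current floor and ceiling for md resync/recovery throughput, KB/s per device:
cat /proc/sys/dev/raid/speed_limit_min
cat /proc/sys/dev/raid/speed_limit_max

# Raise both: the floor is enforced even under competing I/O,
# the ceiling applies when the system is otherwise idle.
echo 50000  > /proc/sys/dev/raid/speed_limit_min
echo 200000 > /proc/sys/dev/raid/speed_limit_max

# Equivalent via sysctl:
sysctl -w dev.raid.speed_limit_min=50000
sysctl -w dev.raid.speed_limit_max=200000

cat /proc/mdstat   # the speed= figure on the recovery line should react
```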

Recovery speed at 1MB/s/device, unable to change

2006-04-23 Thread Anssi Hannula
I created a raid array:
---
# mdadm --create /dev/md_d0 -ap --level=5 --raid-devices=2 \
/dev/sda1 missing
---
Then partitioned it:
---
# LANGUAGE=C fdisk -l /dev/md_d0

Disk /dev/md_d0: 250.0 GB, 250056605696 bytes
2 heads, 4 sectors/track, 61048976 cylinders
Units = cylinders of 8 * 512 = 4096 b
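Spelled out, the approach is to build the RAID5 with one slot deliberately left `missing`, so the array comes up degraded and the second disk can be added after data has been migrated onto the array (a sketch; `/dev/sdb1` is a hypothetical second member):

```shell
# Degraded 2-disk RAID5; -ap (= --auto=part) makes a partitionable md device:
mdadm --create /dev/md_d0 -ap --level=5 --raid-devices=2 \
    /dev/sda1 missing

# Later, add the second disk; md then rebuilds onto the newcomer:
mdadm /dev/md_d0 --add /dev/sdb1
cat /proc/mdstat      # recovery progress and speed
```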

Re: disks becoming slow but not explicitly failing anyone?

2006-04-23 Thread Mark Hahn
> I've seen a lot of cheap disks say (generally deep in the data sheet
> that's only available online after much searching and that nobody ever
> reads) that they are only reliable if used for a maximum of twelve hours
> a day, or 90 hours a week, or something of that nature. Even server

I haven't

Re: replace disk in raid5 without linux noticing?

2006-04-23 Thread Martin Cracauer
Carlos Carvalho wrote on Sat, Apr 22, 2006 at 02:48:23PM -0300:
> Martin Cracauer (cracauer@cons.org) wrote on 22 April 2006 11:08:
> >> stop the array
> >> dd warning disk => new one
> >> remove warning disk
> >> assemble the array again with the new disk
> >>
> >> The inconvenience is tha
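The quoted steps, written out as commands (device names are hypothetical; `conv=noerror,sync` keeps dd going past bad sectors but silently zero-fills them, so this is only safe while the rest of the array is still intact):

```shell
mdadm --stop /dev/md0                               # 1. stop the array
dd if=/dev/hdc of=/dev/hde bs=1M conv=noerror,sync  # 2. clone warning disk -> new one
# 3. power down and physically remove the old disk (/dev/hdc)
mdadm --assemble /dev/md0                           # 4. reassemble (or list the member
                                                    #    devices explicitly); the clone
                                                    #    carries the old superblock
```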

EVMS causing problems with mdadm?

2006-04-23 Thread Ewan Grantham
Did my latest updates for my Kubuntu (Ubuntu KDE variant) this morning, and noticed that EVMS has now "taken control" of my RAID array. Didn't think much about it until I tried to make a RAID-1 array with two disks I've just added to the system. Trying to do a create verbose tells me that device /d

Re: to be or not to be...

2006-04-23 Thread Molle Bestefich
gelma wrote:
> first run: lot of strange errors report about impossible i_size
> values, duplicated blocks, and so on

You mention only filesystem errors, no block device related errors. In this case, I'd say that it's more likely that dm-crypt is to blame rather than MD. I think you should try th

Re: disks becoming slow but not explicitly failing anyone?

2006-04-23 Thread Nix
On 23 Apr 2006, Mark Hahn said:
> some people claim that if you put a normal (desktop)
> drive into a 24x7 server (with real round-the-clock load), you should
> expect failures quite promptly. I'm inclined to believe that with
> MTBF's upwards of 1M hour, vendors would not clai

to be or not to be...

2006-04-23 Thread gelma
Hi all,

to make a long story very very short:

a) I create /dev/md1, kernel latest rc-2-git4 and mdadm-2.4.1.tgz, with this command:
/root/mdadm -Cv /dev/.static/dev/.static/dev/.static/dev/md1 --bitmap-chunk=1024 --chunk=256 --assume-clean --bitmap=internal -
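For context, a cleaned-up version of such a create command. The RAID level and member disks are cut off in the quoted text, so the ones below are purely illustrative assumptions; only the bitmap and chunk options come from the post.

```shell
# Hypothetical: 4-disk RAID5 with an internal write-intent bitmap.
# --bitmap-chunk is in KiB; --assume-clean skips the initial resync
# (only safe if the members really are in sync or will be overwritten).
mdadm -Cv /dev/md1 --level=5 --raid-devices=4 \
    --chunk=256 --bitmap=internal --bitmap-chunk=1024 \
    --assume-clean /dev/sd[abcd]1
```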