On Sun, 2006-04-23 at 17:17 -0700, Tim Bostrom wrote:
> I bought two extra 250GB drives - I'll try using dd_rescue as
> recommended and see if I can get a "good" copy of hdf online.
You might want to use dd_rhelp:
http://www.kalysto.org/utilities/dd_rhelp/index.en.html
-Arthur
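For the imaging step being recommended, a minimal sketch (the source device and image path are assumptions, not from the thread). dd_rescue keeps copying past read errors instead of aborting like plain dd, and dd_rhelp drives it to harvest the readable areas first and retry bad spots last. It is written as a dry run that echoes the commands; clear RUN to execute them for real:

```shell
#!/bin/sh
# Dry-run sketch: image a failing drive. Device and paths are hypothetical.
RUN="${RUN:-echo}"   # set RUN="" to actually run the commands
$RUN dd_rescue /dev/hdf /mnt/spare/hdf.img
$RUN dd_rhelp  /dev/hdf /mnt/spare/hdf.img
```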
I had a disk in a raid5 which I wanted to clone onto the hot spare...
without going offline and without long periods without redundancy. A few
folks have discussed using bitmaps and temporary (superblockless) raid1
mappings to do this... I'm not sure anyone has tried / reported success
though.
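The scheme those folks discussed might look roughly like the following. As the poster notes it is unconfirmed, so every device name here is hypothetical and the commands are shown as a dry run (they echo rather than execute); note the raid5 still has to stop briefly, but the window without redundancy stays short thanks to the bitmap:

```shell
#!/bin/sh
# Dry-run sketch of the discussed (untested!) member-clone trick.
# All device names are assumptions. Set RUN="" to execute -- at your own risk.
RUN="${RUN:-echo}"
PARENT=/dev/md0      # the raid5
MEMBER=/dev/sdc1     # member to clone
SPARE=/dev/sdd1      # hot spare, future replacement
TMP=/dev/md9         # temporary superblockless raid1

# 1. Add a write-intent bitmap so the brief stop needs only a short resync.
$RUN mdadm --grow "$PARENT" --bitmap=internal
# 2. Stop the raid5 and wrap the member in a degraded, superblockless raid1.
$RUN mdadm --stop "$PARENT"
$RUN mdadm --build "$TMP" --level=1 --raid-devices=2 "$MEMBER" missing
# 3. Reassemble the raid5 with the raid1 mapping standing in for the member.
$RUN mdadm --assemble "$PARENT" "$TMP" /dev/sda1 /dev/sdb1
# 4. Attach the spare: the raid1 resync clones MEMBER onto SPARE in the
#    background while the raid5 is back online and fully redundant.
$RUN mdadm "$TMP" --add "$SPARE"
```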
First let me say - thank you for responding. I'm still trying to
figure out this problem.
On Apr 21, 2006, at 8:54 PM, Molle Bestefich wrote:
Tim Bostrom wrote:
It appears that /dev/hdf1 failed this past week and /dev/hdh1
failed back in February.
An obvious question woul
(resend, prev post missed the list)
Neil Brown wrote:
> On Monday April 24, [EMAIL PROTECTED] wrote:
>
>># mdadm --grow /dev/md_d1 --raid-devices=3 --backup-file backupfile
>>mdadm: Need to backup 128K of critical section..
>>mdadm: /dev/md_d1: Cannot get array details from sysfs
>>
>>Strace show
On Mon, Apr 24, 2006 at 07:48:00AM +1000, Neil Brown wrote:
On Sunday April 23, [EMAIL PROTECTED] wrote:
Did my latest updates for my Kubuntu (Ubuntu KDE variant) this
morning, and noticed that EVMS has now "taken control" of my RAID
array. Didn't think much about it until I tried to make a RAID
On Monday April 24, [EMAIL PROTECTED] wrote:
> # mdadm --grow /dev/md_d1 --raid-devices=3 --backup-file backupfile
> mdadm: Need to backup 128K of critical section..
> mdadm: /dev/md_d1: Cannot get array details from sysfs
>
> Strace shows that it's trying to access
> "/sys/block/md_d4/md/componen
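Since the strace points at md_d4 while the array being grown is md_d1, a read-only look at what sysfs actually exposes can confirm the mismatch. This diagnostic sketch assumes the kernel's usual /sys/block/<dev>/md layout and is safe to run anywhere:

```shell
#!/bin/sh
# Read-only diagnostic: list the md directories sysfs really exposes and the
# attributes --grow needs to read, to compare against the path mdadm tried.
find /sys/block -maxdepth 2 -type d -name md 2>/dev/null
for f in /sys/block/md*/md/component_size /sys/block/md*/md/raid_disks; do
    if [ -e "$f" ]; then
        printf '%s: %s\n' "$f" "$(cat "$f")"
    fi
done
echo "sysfs scan done"
```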
On Sunday April 23, [EMAIL PROTECTED] wrote:
> Did my latest updates for my Kubuntu (Ubuntu KDE variant) this
> morning, and noticed that EVMS has now "taken control" of my RAID
> array. Didn't think much about it until I tried to make a RAID-1 array
> with two disks I've just added to the system.
On Sunday April 23, [EMAIL PROTECTED] wrote:
> Hi all,
> to make a long story very very short:
> a) I create /dev/md1, kernel latest rc-2-git4 and mdadm-2.4.1.tgz,
> with this command:
> /root/mdadm -Cv /dev/.static/dev/.static/dev/.static/dev/md1 \
> --b
Anssi Hannula wrote:
> The speed is only 2000K/sec, even after I set:
> ---
> # cat /proc/sys/dev/raid/speed_limit_min
> 1
> # cat /proc/sys/dev/raid/speed_limit_max
> 40
> ---
>
> The system is about 90% idle, so there should be more bandwidth.
>
> ---
> # cat /proc/version
> Linux versi
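For reference, those limits are per-device resync rates in KB/s, and raising speed_limit_min is what forces md to resync faster even when the box looks busy. A sketch of raising them follows; the numbers are illustrative assumptions, the real /proc files need root, and the sketch falls back to a scratch directory so it can be exercised without touching a live system:

```shell
#!/bin/sh
# Sketch: raise the md resync bandwidth floor and ceiling (KB/s per device).
# Values are assumptions for illustration; tune them for your disks.
PROC_RAID=/proc/sys/dev/raid
if [ ! -w "$PROC_RAID/speed_limit_min" ]; then
    PROC_RAID=$(mktemp -d)                  # dry-run location (no root)
fi
echo 10000  > "$PROC_RAID/speed_limit_min"  # guaranteed even under load
echo 100000 > "$PROC_RAID/speed_limit_max"  # cap used when the system is idle
cat "$PROC_RAID/speed_limit_min" "$PROC_RAID/speed_limit_max"
```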
I created a raid array:
---
# mdadm --create /dev/md_d0 -ap --level=5 --raid-devices=2 \
/dev/sda1 missing
---
Then partitioned it:
---
# LANGUAGE=C fdisk -l /dev/md_d0
Disk /dev/md_d0: 250.0 GB, 250056605696 bytes
2 heads, 4 sectors/track, 61048976 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
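The fdisk figures above are internally consistent, which is a quick way to check that the partitionable array (created with -ap) came up at the expected size: 2 heads * 4 sectors/track * 512 bytes gives 4096 bytes per cylinder, and multiplying by the cylinder count reproduces the reported device size.

```shell
#!/bin/sh
# Sanity-check the fdisk geometry quoted above.
HEADS=2; SECTORS=4; BYTES_PER_SECTOR=512; CYLINDERS=61048976
CYL_BYTES=$((HEADS * SECTORS * BYTES_PER_SECTOR))
TOTAL=$((CYLINDERS * CYL_BYTES))
echo "cylinder size: $CYL_BYTES bytes"    # expect 4096
echo "device size:   $TOTAL bytes"        # expect 250056605696
```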
> I've seen a lot of cheap disks say (generally deep in the data sheet
> that's only available online after much searching and that nobody ever
> reads) that they are only reliable if used for a maximum of twelve hours
> a day, or 90 hours a week, or something of that nature. Even server
I haven't
Carlos Carvalho wrote on Sat, Apr 22, 2006 at 02:48:23PM -0300:
> Martin Cracauer (cracauer@cons.org) wrote on 22 April 2006 11:08:
> >> stop the array
> >> dd warning disk => new one
> >> remove warning disk
> >> assemble the array again with the new disk
> >>
> >> The inconvenience is tha
Did my latest updates for my Kubuntu (Ubuntu KDE variant) this
morning, and noticed that EVMS has now "taken control" of my RAID
array. Didn't think much about it until I tried to make a RAID-1 array
with two disks I've just added to the system. Trying to do a create
verbose tells me that device /d
gelma wrote:
> first run: a lot of strange errors reported about impossible i_size
> values, duplicated blocks, and so on
You mention only filesystem errors, no block device related errors.
In this case, I'd say that it's more likely that dm-crypt is to blame
rather than MD.
I think you should try th
On 23 Apr 2006, Mark Hahn said:
> some people claim that if you put a normal (desktop)
> drive into a 24x7 server (with real round-the-clock load), you should
> expect failures quite promptly. I'm inclined to believe that with
> MTBF's upwards of 1M hour, vendors would not clai
Hi all,
to make a long story very very short:
a) I create /dev/md1, kernel latest rc-2-git4 and mdadm-2.4.1.tgz,
with this command:
/root/mdadm -Cv /dev/.static/dev/.static/dev/.static/dev/md1
--bitmap-chunk=1024 --chunk=256 --assume-clean --bitmap=internal -