On Jan 20, 2008, at 2:18 PM, Bill Davidsen wrote:
One partitionable RAID-10, perhaps, then partition as needed. Read
the discussion here about the performance of LVM and RAID. I personally
don't do LVM unless I know I will need great flexibility of
configuration and can give up performance.
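A minimal sketch of the partitionable-array approach (the disk names, disk count, and md_d0 device name are placeholders; mdadm of that era supported this via --auto=mdp):

  # one partitionable raid10 across four disks
  mdadm --create /dev/md_d0 --auto=mdp --level=10 --raid-devices=4 /dev/sd[abcd]1
  # then carve it up like any other disk
  fdisk /dev/md_d0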
On Jan 20, 2008, at 1:21 PM, Steve Fairbairn wrote:
So the device I was trying to add was about 22 blocks too small.
Taking Neil's suggestion and looking at /proc/partitions showed this
up incredibly quickly.
Always leave a little space at the end; it makes sure you don't run
into that particular problem.
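One way to check component sizes up front (a sketch; sdk1 is just an example partition):

  # sizes in 1K blocks, straight from the kernel
  grep 'sd' /proc/partitions
  # or per device, in 512-byte sectors
  blockdev --getsz /dev/sdk1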
On Jan 19, 2008, at 3:44 AM, Ask Bjørn Hansen wrote:
Replying to myself with an update, mostly for the sake of the archives
(I went through the linux-raid mail from the last year yesterday while
waiting for my raw-partition backups to finish).
I mentioned[1] my trouble with the multipath detection code on the
Fedora rescue mode messing up my raid.
Hi everyone,
I mentioned[1] my trouble with the multipath detection code on the
Fedora rescue mode messing up my raid yesterday.
My raid6 partitions recovered fine, but the raid10 device
(/dev/sd[abc...k]5) somehow got messed up.
When I assemble the drive it says all 9 drives and 2 spares are
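The usual first diagnostics in that situation look something like this (a sketch; the md device name is an example):

  # dump the superblock each component thinks it has
  mdadm --examine /dev/sd[a-k]5
  # then try assembling from those superblocks
  mdadm --assemble /dev/md5 /dev/sd[a-k]5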
On Jan 18, 2008, at 4:33 AM, Heinz Mauelshagen wrote:
Much later I figured out that "dmraid -b" reported two of the disks
as being the same:
Looks like the md sync duplicated the metadata and dmraid just spots
that duplication. You gotta remove one of the duplicates to clean
this up, but
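dmraid can erase the stale metadata itself, roughly like this (a sketch; double-check that /dev/sdX really is the duplicate first):

  # list the raid metadata dmraid sees on each block device
  dmraid -r
  # erase the metadata on the duplicated disk
  dmraid -r -E /dev/sdX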
Hi everyone,
One of my boxes crashed (with a hardware error, I think - CPU and
motherboard replacements are on their way). I booted it up on a
rescue disk (Fedora 8) to let the software raid sync up.
When it was running I noticed that one of the disks was listed as
"dm-5" and ... uh-oh
On Jan 18, 2008, at 3:17 AM, Ask Bjørn Hansen wrote:
[ Uh, I just realized that I forgot to update the subject line as I
figured out what was going on; it's obviously not a software raid
problem but a multipath problem ]
One of my boxes crashed (with a hardware error, I think - CPU and
motherboard replacements are on their way).
Hi,
I have a logical volume I'm (trying to) use for a Xen box. When
it's installing the boot loader I get a bunch of errors like the ones
below. The kernel is the latest FC6 kernel - 2.6.18-1.2869.fc6xen.
The md device is a raid10 device (obviously) across 4 SATA disks.
Any ideas?
[...]
Hi,
I had a drive failing today. I had a loose cable when I booted after
replacing the failing drive (doh!) so now some of my md devices had
an extra failed drive. "Oh well, it'll just rebuild" I foolishly
thought.
Of course during the rebuild another drive (sdg12) failed (with read errors).
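The recovery steps in that situation are roughly (a sketch; md3 and sdf12 are example names):

  # watch the resync progress
  cat /proc/mdstat
  # a drive that only "failed" because of a loose cable can usually be re-added
  mdadm /dev/md3 --re-add /dev/sdf12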
On Oct 5, 2006, at 3:15 AM, Jurriaan Kalkman wrote:
AFAIK, linux raid-10 is not exactly raid 1+0; it allows you to, for
example, use 3 disks.
I made a raid-10 device earlier today with 7 drives and I was
surprised to see that it reported using all of them. I thought it'd
make one of the drives a spare.
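md's raid10 with the default near layout spreads two copies of each block across however many disks you give it, odd counts included; a sketch (device names are examples):

  # 7-disk raid10, two copies of every block, "near" layout
  mdadm --create /dev/md0 --level=10 --layout=n2 --raid-devices=7 /dev/sd[b-h]1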
On Sep 26, 2006, at 23:22, Oliver Paulus wrote:
I cannot use the created raid device as the "/" partition. I get the
following errors in dmesg:
"invalid superblock checksum on sda1"
"invalid superblock checksum on sdb1"
Did you mkfs the md device?
- ask
--
http://askask.com/ - http://develo
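The point is that the filesystem belongs on the assembled array, not on the member partitions; a sketch (ext3 and the device names are examples):

  # make the filesystem on the md device itself
  mkfs.ext3 /dev/md0
  # and mount the array, never a member like /dev/sda1
  mount /dev/md0 /mnt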
On Sep 17, 2006, at 12:37 AM, Tuomas Leikola wrote:
It's recommended to use a script to scrub the raid device regularly,
to detect sleeping bad blocks early.
What's the best way to do that? dd the full md device to /dev/null?
- ask
--
http://www.askbjoernhansen.com/
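dd works, but on a mirrored array a plain read only touches one copy of each block; md has a dedicated scrub interface for exactly this (a sketch; md0 is an example):

  # ask md to read and verify every copy of every block
  echo check > /sys/block/md0/md/sync_action
  # progress shows in /proc/mdstat; inconsistencies are counted here
  cat /sys/block/md0/md/mismatch_cnt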
On Sep 15, 2006, at 2:08, Reza Naima wrote:
Linux version 2.6.12-1.1381_FC3
Not much help, but newer kernels are more aggressive about not
failing a second disk in a raid-5.
(I noticed because the change came in just around the time my old
raid-5 did the same as yours; but before I upgraded