if a better solution is available soon, I'll wait for it.
thanks,
Mike
and as such Linux (both
scsi and raid1) is completely unaware of any disconnect of the
physical device.
thanks,
Mike
-----Original Message-----
From: Mike Snitzer [mailto:[EMAIL PROTECTED]
Sent: Tuesday, January 22, 2008 7:10 PM
To: linux-raid@vger.kernel.org; NeilBrown
Cc: [EMAIL PROTECTED]; K
On Jan 22, 2008 12:29 AM, Mike Snitzer [EMAIL PROTECTED] wrote:
cc'ing Tanaka-san given his recent raid1 BUG report:
http://lkml.org/lkml/2008/1/14/515
On Jan 21, 2008 6:04 PM, Mike Snitzer [EMAIL PROTECTED] wrote:
Under 2.6.22.16, I physically pulled a SATA disk (/dev/sdac, connected
0x88b9353c raid1d+2440: callq 0x80459796
_spin_lock_irq
0x88b93541 raid1d+2445: jmp    0x88b934f4 raid1d+2368
Any insight on this would be extremely helpful.
regards,
Mike
cc'ing Tanaka-san given his recent raid1 BUG report:
http://lkml.org/lkml/2008/1/14/515
On Jan 21, 2008 6:04 PM, Mike Snitzer [EMAIL PROTECTED] wrote:
Under 2.6.22.16, I physically pulled a SATA disk (/dev/sdac, connected to
an aacraid controller) that was acting as the local raid1 member
(the same as the
bitmap update time) does not noticeably affect resync performance.
Signed-off-by: Neil Brown [EMAIL PROTECTED]
Hi Neil,
You forgot to export bitmap_cond_end_sync. Please see the attached patch.
regards,
Mike
diff --git a/drivers/md/bitmap.c b/drivers/md/bitmap.c
index f31ea4f
message, and how to fix this. I'd really like to get this problem
resolved. Does anyone out there know how to fix this, so I can get partitions
correctly flagged as Linux RAID and the array autodetected at start?
Sorry if I missed something obvious.
Thanks,
Mike
that limit
existing options.
Mike
-
To unsubscribe from this list: send the line "unsubscribe linux-raid" in
the body of a message to [EMAIL PROTECTED]
More majordomo info at http://vger.kernel.org/majordomo-info.html
lvm2's MD v1.0 superblock detection doesn't work at all (because it
doesn't use v1 sb offsets).
I've tested the attached patch to work on MDs with v0.90.0, v1.0,
v1.1, and v1.2 superblocks.
please advise, thanks.
Mike
Index: lib/device/dev-md.c
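For reference, the superblock placement rules such a detection patch has to account for can be sketched in Python. This is a simplified model of the usual mdadm placement conventions for illustration only, not the lvm2 dev-md.c code itself:

```python
def md_sb_offsets(dev_size):
    """Candidate MD superblock locations (byte offsets) for a device of
    dev_size bytes, following the usual mdadm placement rules; a
    simplified sketch, not lvm2's actual detection code."""
    sectors = dev_size // 512
    v090 = ((sectors & ~127) - 128) * 512   # 64 KiB before the last 64 KiB boundary
    v10 = ((sectors - 16) & ~7) * 512       # ~8 KiB from the end, 4 KiB aligned
    return {"0.90": v090, "1.0": v10, "1.1": 0, "1.2": 4096}

offs = md_sb_offsets(1 << 30)  # a hypothetical 1 GiB device
print(offs)
```

The point the patch hinges on: v1.1 and v1.2 superblocks sit at a fixed offset from the start, but v0.90 and v1.0 sit near the end of the device, so detection that only probes v1 start-of-device offsets misses v1.0 entirely.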
On 10/23/07, Alasdair G Kergon [EMAIL PROTECTED] wrote:
On Tue, Oct 23, 2007 at 11:32:56AM -0400, Mike Snitzer wrote:
I've tested the attached patch to work on MDs with v0.90.0, v1.0,
v1.1, and v1.2 superblocks.
I'll apply this, thanks, but need to add comments (or reference) to explain
rdev->data_offset)
/* bitmap runs in to data */
return -EINVAL;
Thanks,
Mike
on that I missed? I
included 505fa2c4a2f125a70951926dfb22b9cf273994f1 and
ab6085c795a71b6a21afe7469d30a365338add7a too.
*shrug*...
Mike
, it is fairly surprising that such a relatively small
difference in size would prevent it from working...
regards,
Mike
On 10/18/07, Goswin von Brederlow [EMAIL PROTECTED] wrote:
Mike Snitzer [EMAIL PROTECTED] writes:
All,
I have repeatedly seen that when a 2-member raid1 becomes degraded,
and IO continues to the lone good member, if the array is then
stopped and reassembled you get:
md
with the other?
regards,
Mike
On 10/19/07, Mike Snitzer [EMAIL PROTECTED] wrote:
On 10/18/07, Neil Brown [EMAIL PROTECTED] wrote:
Sorry, I wasn't paying close enough attention and missed the obvious.
.
On Thursday October 18, [EMAIL PROTECTED] wrote:
On 10/18/07, Neil Brown [EMAIL PROTECTED] wrote
mdadm 2.4.1 through 2.5.6 works. mdadm-2.6's "Improve allocation and
use of space for bitmaps in version1 metadata"
(199171a297a87d7696b6b8c07ee520363f4603c1) would seem like the
offending change. Using 1.2 metadata works.
I get the following using the tip of the mdadm git repo or any other
version
On 10/17/07, Bill Davidsen [EMAIL PROTECTED] wrote:
Mike Snitzer wrote:
mdadm 2.4.1 through 2.5.6 works. mdadm-2.6's "Improve allocation and
use of space for bitmaps in version1 metadata"
(199171a297a87d7696b6b8c07ee520363f4603c1) would seem like the
offending change. Using 1.2 metadata
On 9/19/07, Wiesner Thomas [EMAIL PROTECTED] wrote:
Has there been any progress on this? I think I saw it, or something
similar, during some testing of recent 2.6.23-rc kernels, where one
mke2fs took about 11 min longer than all the others (~2 min) and it was
not repeatable. I worry that process
the brainspace it
deserves as I am travelling this fortnight.
Looking forward to further discussion. Thank you!
--
Mike Accetta
ECI Telecom Ltd.
Transport Networking Division, US (previously Laurel Networks)
even better approach.
Mike Accetta
ECI Telecom Ltd.
Transport Networking Division, US (previously Laurel Networks)
---
diff -Naurp 2.6.20/drivers/md/md.c kernel/drivers/md/md.c
--- 2.6.20/drivers/md/md.c 2007-06-04 13:52:42.0 -0400
+++ kernel/drivers/md/md.c 2007-08-30 16:28
On 8/17/07, Mike Accetta [EMAIL PROTECTED] wrote:
Neil Brown writes:
On Wednesday August 15, [EMAIL PROTECTED] wrote:
Neil Brown writes:
On Wednesday August 15, [EMAIL PROTECTED] wrote:
...
This happens in our old friend sync_request_write()? I'm dealing with
Yes
);
} else {
--
Mike Accetta
ECI Telecom Ltd.
Transport Networking Division, US (previously Laurel Networks)
report
all errors, not just the corrected ones. Does this seem reasonable?
Are there other alternatives that might make sense here?
--
Mike Accetta
ECI Telecom Ltd.
Transport Networking Division, US (previously Laurel Networks)
understanding of how this all works, neither of these
paths would seem to apply here.
--
Mike Accetta
ECI Telecom Ltd.
Transport Networking Division, US (previously Laurel Networks)
the fresh bitmap?
Thanks, I really appreciate your insight.
Mike
to come back in using a quick
bitmap-based resync?
Mike
with the same faulty member missing. The user
later re-adds the faulty member
AFAIK both scenarios would bring about a full resync.
regards,
Mike
you mean to not even bring up the disk
as a degraded array? If all you do is fail and remove the partition, RAID is
actually still running to manage the single remaining partition, but I would
expect that overhead to be minimal.
--
Mike Accetta
ECI Telecom Ltd.
Transport Networking Division, US
the RAID'ing.
Mike
http://www.newegg.com/Product/Product.aspx?Item=N82E16816133001
In article [EMAIL PROTECTED], Justin Piszcz wrote:
On Mon, 18 Jun 2007, Mike wrote:
I'm creating a larger backup server that uses bacula (this
software works well). The way I'm going about this I need
lots of space in the filesystem where temporary files are
stored. I have been looking
On 6/14/07, Bill Davidsen [EMAIL PROTECTED] wrote:
Mike Snitzer wrote:
On 6/13/07, Mike Snitzer [EMAIL PROTECTED] wrote:
On 6/13/07, Mike Snitzer [EMAIL PROTECTED] wrote:
On 6/12/07, Neil Brown [EMAIL PROTECTED] wrote:
...
On 6/12/07, Neil Brown [EMAIL PROTECTED] wrote
the regression; I obviously need to
verify that 2.6.16 works in this situation on SMP.
Mike
On 6/14/07, Paul Clements [EMAIL PROTECTED] wrote:
Mike Snitzer wrote:
Here are the steps to reproduce reliably on SLES10 SP1:
1) establish a raid1 mirror (md0) using one local member (sdc1) and
one remote member (nbd0)
2) power off the remote machine, thereby severing nbd0's connection
3
On 6/14/07, Paul Clements [EMAIL PROTECTED] wrote:
Mike Snitzer wrote:
On 6/14/07, Paul Clements [EMAIL PROTECTED] wrote:
Mike Snitzer wrote:
Here are the steps to reproduce reliably on SLES10 SP1:
1) establish a raid1 mirror (md0) using one local member (sdc1) and
one remote member
On 6/13/07, Mike Snitzer [EMAIL PROTECTED] wrote:
On 6/12/07, Neil Brown [EMAIL PROTECTED] wrote:
...
On 6/12/07, Neil Brown [EMAIL PROTECTED] wrote:
On Tuesday June 12, [EMAIL PROTECTED] wrote:
I can provide more detailed information; please just ask.
A complete
accordingly if/when the current owner fails, etc. But this implies
that the MD is only ever active on one node at any given point in
time.
Mike
On 6/13/07, Xinwei Hu [EMAIL PROTECTED] wrote:
Hi all,
Steven Dake proposed a solution* to make MD layer and tools to be cluster
aware in early 2003
On 6/13/07, Mike Snitzer [EMAIL PROTECTED] wrote:
On 6/13/07, Mike Snitzer [EMAIL PROTECTED] wrote:
On 6/12/07, Neil Brown [EMAIL PROTECTED] wrote:
...
On 6/12/07, Neil Brown [EMAIL PROTECTED] wrote:
On Tuesday June 12, [EMAIL PROTECTED] wrote:
I can provide more detailed
] do_sys_open+0x44/0xbe
[800097e1] tracesys+0xd1/0xdc
I can provide more detailed information; please just ask.
thanks,
Mike
On 6/12/07, Neil Brown [EMAIL PROTECTED] wrote:
On Tuesday June 12, [EMAIL PROTECTED] wrote:
I can provide more detailed information; please just ask.
A complete sysrq trace (all processes) might help.
I'll send it to you off list.
thanks,
Mike
knowledgeable will comment soon.
--
Mike Accetta
ECI Telecom Ltd.
Data Networking Division (previously Laurel Networks)
;
+			}
			if (j >= 0)
				mddev->resync_mismatches += r1_bio->sectors;
			if (j < 0 || test_bit(MD_RECOVERY_CHECK,
					      &mddev->recovery)) {
--
Mike Accetta
ECI Telecom Ltd.
Data Networking Division (previously Laurel Networks)
David Greaves writes:
...
It looks like the same (?) problem as Mike (see below - Mike do you have a
patch?) but I'm on 2.6.20.7 with mdadm v2.5.6
...
We have since started assembling the array from the initrd using
--homehost and --auto-update-homehost which takes a different path through
Gabor Gombas wrote:
On Fri, Mar 02, 2007 at 09:04:40AM -0500, Mike Accetta wrote:
Thoughts or other suggestions anyone?
This is a case where a very small /boot partition is still a very good
idea... 50-100MB is a good choice (some initramfs generators require
quite a bit of space under /boot
Bill Davidsen wrote:
Gabor Gombas wrote:
On Fri, Mar 02, 2007 at 09:04:40AM -0500, Mike Accetta wrote:
Thoughts or other suggestions anyone?
This is a case where a very small /boot partition is still a very good
idea... 50-100MB is a good choice (some initramfs generators require
H. Peter Anvin wrote:
Mike Accetta wrote:
I've been considering trying something like having the re-sync algorithm
on a whole disk array defer the copy for sector 0 to the very end of the
re-sync operation. Assuming the BIOS makes at least a minimal
consistency check on sector 0 before
On Fri, 12 Jan 2007, Neil Brown might have said:
On Thursday January 11, [EMAIL PROTECTED] wrote:
Can someone tell me what this means please? I just received this in
an email from one of my servers:
A FailSpare event had been detected on md device /dev/md2.
It could be
17466 17847 3068415 fd Linux raid autodetect
I have partition 2 of drive sde as one of the raid devices for md. Does the (S)
on sde3[2](S) mean the device is a spare for md1 and the same for md0?
Mike
a bad/failing device. Is there
a way to blink the drives in linux?
Mike
google "BadBlockHowto"
Any "just google it" response sounds glib, but this is actually how to
do it :-)
If you're new to md and mdadm, don't forget to actually remove the drive
from the array before you start working on it with 'dd'
-Mike
Mike wrote:
On Fri, 12 Jan 2007, Neil Brown might have
it would be best to
solve this problem?
--
Mike Accetta
ECI Telecom Ltd.
Data Networking Division (previously Laurel Networks)
be grateful for any suggestions.
Thanks,
-Mike
that the array isn't completely usable until the assembly file
descriptor is closed, even on return from the ioctl(), and hence the
kernel add_disk() isn't having the desired partitioning side effect at
the point it is being invoked.
This is all with kernel 2.6.18 and mdadm 2.3.1
--
Mike Accetta
with no issues.
-Mike
Timo Bernack wrote:
Hi there,
I am running a 5-disk RAID5 using mdadm on a SUSE 10.1 system. As the
array is running out of space, I am considering adding three more HDDs. Before
I set up the current array, I made a small test with raidreconf:
- build a 4-disk RAID5 /dev/md0
anything about cables or controllers or
power or anything else that could and may be wrong. It's just for the
drive media and firmware.
-Mike
forget the long SMART test before running degraded for real. Could
save you some pain.
-Mike
to fail message.
They've been in service now for around 2 months, and they do have an
okay temperature, and I have not been beating the crap out of them.
More than a little disappointing.
They are fast though...
-Mike
,
but the SAS5 card can only drive 4. Lame.
-Mike
berlin % rpm -qf /usr/lib/nagios/plugins/contrib/check_linux_raid.pl
nagios-plugins-1.4.1-1.2.fc4.rf
It is built in to my nagios plugins package at least, and works great.
-Mike
Tomasz Chmielewski wrote:
I would like to have RAID status monitored by nagios.
This sounds like a simple script
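A simple check of this kind is commonly built on /proc/mdstat. Below is a minimal Python sketch; the sample input text and the function name are illustrative, not taken from the actual check_linux_raid.pl plugin:

```python
import re

SAMPLE = """\
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
      1048512 blocks [2/2] [UU]
md1 : active raid1 sdd1[2](F) sdc1[0]
      1048512 blocks [2/1] [U_]
unused devices: <none>
"""

def degraded_arrays(mdstat_text):
    """Return the names of md arrays whose status bitmap shows a missing member."""
    bad = []
    current = None
    for line in mdstat_text.splitlines():
        m = re.match(r"(md\d+) :", line)
        if m:
            current = m.group(1)
            continue
        # A status line looks like "... [2/1] [U_]"; '_' marks a failed/missing member.
        m = re.search(r"\[\d+/\d+\]\s+\[([U_]+)\]", line)
        if m and current and "_" in m.group(1):
            bad.append(current)
    return bad

print(degraded_arrays(SAMPLE))  # only md1 is degraded in the sample
```

On a real host you would read open("/proc/mdstat").read() instead of SAMPLE and map a non-empty result to a nagios CRITICAL exit code.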
insertion (or removal)
of any other SATA or SCSI (or USB storage) device
I think what you want is a DEVICE partitions line accompanied by ARRAY
lines that have the UUID attribute you've already got in there.
-Mike
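A minimal mdadm.conf along those lines might look like this; the UUID value below is a placeholder for the one already in the existing config:

```
# "DEVICE partitions" makes mdadm scan every partition for superblocks,
# so newly inserted SATA/SCSI/USB devices are covered automatically.
DEVICE partitions
# Placeholder UUID; use the value reported by `mdadm --detail /dev/md0`.
ARRAY /dev/md0 UUID=01234567:89abcdef:01234567:89abcdef
```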
by smartmontools.
Note that page is slightly out of date - they mention SMART for SATA is
supported through a patch to mainline, but it is in fact mainline now.
-Mike
into a raid set that touched on
these issues, and how to do the FS shrink so it would have room for the
raid superblock. I'd refer to that. The goal being to shrink 1MB or so
off the FS, create the raid, then grow the FS to max again (or let it
be, whatever)
-Mike
Peter Greis wrote:
Greetings,
I
On 8/5/06, Mike Snitzer [EMAIL PROTECTED] wrote:
Aside from this write-mostly sysfs support, is there a way to toggle
the write-mostly bit of an md member with mdadm? I couldn't identify
a clear way to do so.
It'd be nice if mdadm --assemble would honor --write-mostly...
I went ahead
FYI, with both mdadm ver 2.4.1 and 2.5.2 I can't mdadm --create with a
ver1 superblock and a write intent bitmap on x86_64.
running: mdadm --create /dev/md2 -e 1.0 -l 1 --bitmap=internal -n 2
/dev/sdd --write-mostly /dev/nbd2
I get: mdadm: RUN_ARRAY failed: Invalid argument
Mike
Aside from this write-mostly sysfs support, is there a way to toggle
the write-mostly bit of an md member with mdadm? I couldn't identify
a clear way to do so.
It'd be nice if mdadm --assemble would honor --write-mostly...
On 6/1/06, NeilBrown [EMAIL PROTECTED] wrote:
It appears in
On 7/26/06, Paul Clements [EMAIL PROTECTED] wrote:
Mike Snitzer wrote:
I tracked down the thread you referenced and these posts (by you)
seems to summarize things well:
http://marc.theaimsgroup.com/?l=linux-raid&m=16563016418&w=2
http://marc.theaimsgroup.com/?l=linux-raid&m
happen until the primary's dirty bits can be
collected right? Waiting for the failed server to come back to
harvest the dirty bits it has seems wrong (why failover at all?); so I
must be missing something.
please advise, thanks.
Mike
On 7/26/06, Paul Clements [EMAIL PROTECTED] wrote:
Mike Snitzer wrote:
I tracked down the thread you referenced and these posts (by you)
seems to summarize things well:
http://marc.theaimsgroup.com/?l=linux-raid&m=16563016418&w=2
http://marc.theaimsgroup.com/?l=linux-raid&m
a 3ware 9550SX-16ML, which is a 133 MHz PCI-X card. They also
have a 9590SE that does the same with PCI-E, though I don't know if the
stock 2.6.x kernel supports these yet.
Mike
/gmane.linux.kernel.device-mapper.devel/1980/
The MaxLine III's (7V300F0) with VA111630/670 firmware currently timeout
on a weekly or less basis.. I'm still testing VA111680 on a 15x300 gig
array
Mike
impressive though. I'll almost be sorry to see it
fixed :-)
-Mike
is the only way to have it be read-only in the data
regions. That'll let you make a mistake and still be able to recover
data after you find the right command line.
-Mike
.
Are you using the whole disk, or the first partition?
It appears that to some extent, you are using both.
Perhaps some confusion on that point between your boot scripts and your
manual run explains things?
-Mike
for online resizing, you can resize any ext2/3
filesystem offline, and it doesn't take very long. You just use
resize2fs instead of ext2online
-Mike
about the
first write through md not being a superblock write...
-Mike
not on
the list.
Yes, I would re-create the array with 1 missing disk. mount read-only,
verify your data. If things are ok, remount read-write and remember to
add a new disk to fix the degrade array.
With the missing keyword, no resync/recovery, thus the data on disk
will be intact.
--
Regards,
Mike
permutations of
4-disk arrays with one missing until you see your data, and you should
find it.
-Mike
Sam Hopkins wrote:
Hello,
I have a client with a failed raid5 that is in desperate need of the
data that's on the raid. The attached file holds the mdadm -E
superblocks that are hopefully the keys
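The trial-and-error search described above can be enumerated rather than typed by hand. A Python sketch (device names hypothetical) that generates the candidate member orderings for a 4-disk raid5 with one slot marked missing:

```python
from itertools import permutations

def candidate_orderings(disks):
    """All orderings of len(disks) raid slots in which exactly one slot
    is 'missing' and the rest hold distinct disks, in order."""
    n = len(disks)
    out = set()
    for hole in range(n):                        # which slot to leave out
        for perm in permutations(disks, n - 1):  # order the remaining members
            out.add(perm[:hole] + ("missing",) + perm[hole:])
    return sorted(out)

disks = ["/dev/sda1", "/dev/sdb1", "/dev/sdc1", "/dev/sdd1"]  # hypothetical names
orders = candidate_orderings(disks)
print(len(orders))  # 4 choices of missing slot x 24 orderings of the rest = 96
for slots in orders[:2]:
    print("mdadm --create /dev/md0 -l 5 -n 4", *slots)
```

Each printed command line is one candidate to try (mount read-only and check the data before accepting any of them, as suggested elsewhere in the thread).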
be extremely nice to have. Any estimate on
when you might get to this?
Mike
installing. Here's a good page for the
grub incantations:
http://gentoo-wiki.com/HOWTO_Gentoo_Install_on_Software_RAID_mirror_and_LVM2_on_top_of_RAID#Bootloader_installation_and_configuration
-Mike
to everybody that helped and offered advice,
I'm really glad that I finally managed to resolve this problem - even
with a backup, I still get very worried about messing with my data.
Mike
.
Is this a regression in mdadm 2.4, 2.3.1 and 2.3 (NOTE: mdadm 2.2's
ver1 sb works!)?
please advise, thanks.
Mike
for the report.
Yes, try 2.4.1 (just released).
Works great.. thanks for the extremely quick response and fix.
Mike
.
Regards,
Mike
On 4/5/06, Tuomas Leikola [EMAIL PROTECTED] wrote:
On 4/5/06, Mike Garey [EMAIL PROTECTED] wrote:
I tried booting from /dev/hdc1 (as /dev/md0 in grub) using a 2.6.15
kernel with md and raid1 support built in and this is what I now get:
md: autodetecting raid arrays
md: autorun ...
md
On 4/5/06, Paul Clements [EMAIL PROTECTED] wrote:
Mike Garey wrote:
I seem to be getting closer.. If I try booting from a kernel without
raid1 and md support, but using an initrd with raid1/md modules, then
I get the ALERT! /dev/md0 does not exist. Dropping to a shell!
message. I can't
?
If anybody has the time to read through my message
and give me some advice, I would very much appreciate it. Thanks,
Mike
to be prepared.
If anyone can make heads or tails of what I'm talking about, I would
greatly appreciate any information or suggestions. Thanks in advance,
Mike
P.S. output of mdadm --examine is given below (output is the same for
/dev/hda1 and /dev/hdc1)
Magic : a92b4efc
Version
had already tried to fsck the filesystem
on this thing, so you may have hashed the remaining drive. It's hard to
say. Truly bleak though...
-Mike
Technomage wrote:
mike.
given the problem, I have a request.
On Friday 31 March 2006 15:55, Mike Hardy wrote:
I can't imagine how to coax
kernels drive it fine, and FC5 (with 2.6.16+) should be fine
with it.
-Mike
Ian Thurlbeck wrote:
Dear All
I have 4x500GB Maxtor SATA drives and I want to attach
these to a 4-port SATA PCI card and RAID5 them using md
Could anybody recommend a card that will have out of
box support
I can think of two things I'd do slightly differently...
Do a smartctl -t long on each disk before you do anything, to verify
that you don't have single sector errors on other drives
Use ddrescue for better results copying a failing drive
-Mike
PFC wrote:
I have a raid5 array that contain
is that the array will most likely be clean in all
circumstances even a crash, and you simply won't need to resync
That's a good thing!
-Mike
Kasper Dupont wrote:
I have a FC4 installation (upgraded from FC3) using kernel
version 2.6.15-1.1831_FC4. I see some symptoms in the software
raid, which
that checks
(maybe via some deep check using ssh to execute remote commands, or just
a ping) the hosts status and just prints a little table of host status
that can only be avoided by passing a special --yes-i-know flag to the
wrapper
-Mike
are running - just examples of it working.
-Mike
Martin Ritchie wrote:
Sorry if these are total newbie questions.
Why can't I have more than 4 active drives in my md RAID?
Why can't I easily migrate a RAID 0 to RAID 5. As I see it RAID 0 is
just RAID 5 with a failed parity check drive
I saw this on my array, and other(s) have reported it as well.
Apparently the reconstruction speed algorithm doesn't understand that
it's not syncing all the blocks and hilarity ensues. I believe that was
it, anyway
Either that or you really have a hell of a server :-)
-Mike
jurriaan wrote
for is for md to guess that accessing
any partition component on a disk that has a partition being rebuilt
should throttle the rebuild, right?
Can that heuristic be successful at all times? I think it might.
Does md have enough information to do that? I don't know...
-Mike
group
descriptor table can grow to support a filesystem
that has max-online-resize blocks.
I have done it, and it works.
-Mike
keyword
Mount the raid5 array read-only (hope that it will mount :)
--
Mike T.
these
filesystems under the
new OS?
Please reply to [EMAIL PROTECTED] as I'm NOT on
this alias.
Thanks,
Mike
misunderstanding that, I'd love to be corrected. I was under the
impression that the silent corruption issue was mythical at this point
and if it's not I'd like to know.
-Mike
Dan Stromberg wrote:
Would it really be that much slower to have a journal of RAID 5 writes?
On Fri, 2005-11-18 at 15:05
is lost, but the FS is consistent and
all the data it can reach is consistent with what it thinks is there.
So, I continue to believe silent corruption is mythical. I'm still open
to good explanation it's not though.
-Mike