On 2013-11-30 19:48 +0100, François Patte wrote:
> To summarize the situation: the faulty disk (the one with bad blocks)
> is sdd3, yet it is the only active disk in the md1 array, and I
> can fully access its data, which is mounted normally
> at boot time; meanwhile, the disk sdc3 has no bad blocks
Good evening,
I have a problem with 2 raid arrays: I have 2 disks (sdc and sdd) in
raid1 arrays.
One disk (sdc) failed and I replaced it with a new one, copying the
partition table from the sdd disk using sfdisk:
sfdisk -d /dev/sdd | sfdisk /dev/sdc
then I "added" the 2 partitions (sdc1 and sdc3) to the arrays
On Saturday 23 June 2012 05:28:17 Camaleón wrote:
> > Are there any known problems with Marvell SATA and Wheezy 64-bit?
>
> (...)
>
> None that I'm aware of :-?
>
> Anyway, something that was working fine in Squeeze is expected to keep
> working in later kernel versions. Unless you missed somet
On Sat, 23 Jun 2012 03:04:16 -0400, Neal Murphy wrote:
> Wheezy 64-bit. Marvell PCIE SATA/Raid card with one drive (in a CRU
> DP10), non-RAID. Main board has two identical Hitachi 1TB drives on
> on-board SATA ports used in md RAID.
So you have a total of 3 hard disks, one connected to the PCI-e
Wheezy 64-bit. Marvell PCIE SATA/Raid card with one drive (in a CRU DP10),
non-RAID. Main board has two identical Hitachi 1TB drives on on-board SATA
ports used in md RAID.
Running Squeeze 32-bit, it was handling hot-plugged drives just fine. Switched
to Wheezy 64-bit and it no longer detects hot-plugged drives
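When a hot-plugged drive is not detected automatically, forcing a rescan of the SATA/SCSI host adapters usually makes it appear. This is a generic sketch, not specific to the Marvell card:

```shell
# Ask every SCSI/SATA host adapter to rescan its bus;
# "- - -" is the kernel's wildcard for all channels, targets and LUNs.
for host in /sys/class/scsi_host/host*; do
    echo "- - -" > "$host/scan"
done

# The newly detected drive should show up at the end of the kernel log.
dmesg | tail
```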
In <4d49816b.1040...@alcor.concordia.ca>, David Gaudine wrote:
>I have one more question, just out of curiosity, so bottom priority.
>Why does this work? mdadm.conf is in the initramfs, which is in /boot,
>which is on /dev/md0, but /dev/md0 doesn't exist until the arrays are
>assembled, which requires mdadm.conf
On 11-01-31 8:47 PM, Andrew Reid wrote:
The easy way out is to boot from a rescue disk, fix the mdadm.conf
file, rebuild the initramfs, and reboot.
The Real Sysadmin way is to start the array by hand from inside
the initramfs. You want "mdadm -A /dev/md0" (or possibly
"mdadm -A -u") to s
Hello,
dav...@alcor.concordia.ca wrote:
> My system went down because of a power failure, and now it won't start. I
> use RAID 1, and I don't know if that's related to the problem. The screen
> shows the following.
>
> Loading, please wait...
> Gave up waiting for root device. Common problems:
On Monday 31 January 2011 10:51:04 dav...@alcor.concordia.ca wrote:
> I posted in a panic and left out a lot of details. I'm using Squeeze, and
> set up the system about a month ago, so there have been some upgrades. I
> wonder if maybe the kernel or Grub was upgraded and I neglected to install
>
My system went down because of a power failure, and now it won't start. I
use RAID 1, and I don't know if that's related to the problem. The screen
shows the following.
Loading, please wait...
Gave up waiting for root device. Common problems:
- Boot args (cat /proc/cmdline)
- Check rootdelay=
/proc/modules does show md_mod.
David
Original Message
Subject: Can't reboot after power failure (RAID problem?)
From: dav...@alcor.concordia.ca
Date: Mon, January 31, 2011 10:18 am
To: debian-user
Hi all,
I have installed Etch on an LSI 8704ELP hardware RAID controller. The
installation went just fine, but after a reboot the boot process froze.
I would be glad of any suggestions.
Regards,
Jarek
Hi,
My Debian 3.1 (x86_64) system has suffered a very nasty mishap.
First, the IOMMU code ran out of space to map I/O to the SATA drives.
This looked to the md code like a faulty drive - so one by one, drives
were marked 'failed', until three components had failed out of the
seven-drive array, at
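When members were kicked out spuriously (a controller or IOMMU glitch rather than real media failure), a forced assembly can often bring the array back without data loss. This is a sketch; the device list is illustrative for a seven-drive array:

```shell
# Stop the degraded array, then force-assemble with all members.
# --force tells mdadm to accept members whose event counts are stale
# because they were failed out spuriously.
mdadm --stop /dev/md0
mdadm --assemble --force /dev/md0 /dev/sd[abcdefg]1

# Verify the state (and run a read-only fsck) before writing anything.
mdadm --detail /dev/md0
```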
> - have you ever wondered about other folks that try to build a SATA-based
> raid subsystem ??
> - how did their SATA pass the "failed disk" test if it
> reassigns its drive numbers upon reboot
>
> > With the second disk removed the disks names are changed,
>
> exactly that i
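Drive renumbering is harmless to md as long as arrays are identified by UUID rather than by /dev/sdX names, since the superblock travels with the disk. A minimal mdadm.conf fragment to that effect (the UUIDs below are placeholders) looks like:

```
# /etc/mdadm/mdadm.conf -- identify arrays by UUID, not device name
DEVICE partitions
ARRAY /dev/md0 UUID=c3a5f891:12ab34cd:56ef78ab:90cd12ef
ARRAY /dev/md1 UUID=0f1e2d3c:4b5a6978:8796a5b4:c3d2e1f0
```

The UUIDs for a real system come from `mdadm --detail --scan`.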
Hello,
I'm testing a server before I put it in production, and I've got a problem with
mdraid.
The config:
- Dell PowerEdge 800
- 4 x 250 GB SATA attached to the mobo
- /boot 4 x 1 GB (1 GB available) in RAID1, 3 active + 1 spare
- / 4 x 250 GB (500 GB available) in RAID5, 3 active + 1 spare
No pr
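A layout like that (3 active members plus 1 hot spare per array) can be sketched with mdadm as below; the device names are assumptions for a four-disk machine:

```shell
# RAID1 for /boot: three active mirrors plus one hot spare.
# (--metadata=0.90 may be needed if the bootloader is old.)
mdadm --create /dev/md0 --level=1 --raid-devices=3 --spare-devices=1 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1

# RAID5 for /: three active members plus one hot spare,
# giving 2 x 250 GB = 500 GB of usable space.
mdadm --create /dev/md1 --level=5 --raid-devices=3 --spare-devices=1 \
      /dev/sda2 /dev/sdb2 /dev/sdc2 /dev/sdd2
```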
I've been having an enjoyable time tinkering with software raid with
Sarge and the RC2 installer. The system boots fine with Raid 1 for
/boot and Raid 5 for /. I decided to experiment with Raid 10 for /opt
since there's nothing there to destroy :). Using mdadm to create a Raid
0 array from two Raid 1 arrays
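A nested RAID1+0 of that sort can be sketched like this (the device names are examples); note that mdadm also offers a native `--level=10` that builds the equivalent in a single array:

```shell
# Two RAID1 mirror pairs...
mdadm --create /dev/md2 --level=1 --raid-devices=2 /dev/sde1 /dev/sdf1
mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdg1 /dev/sdh1

# ...striped together into a RAID0, then formatted and mounted on /opt.
mdadm --create /dev/md4 --level=0 --raid-devices=2 /dev/md2 /dev/md3
mkfs.ext3 /dev/md4
mount /dev/md4 /opt
```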