Re: What to do with dead raid 1 partitions under mdadm

2014-11-02 Thread lee
mett m...@pmars.jp writes:

 Hi, 

 I'm running Squeeze under RAID 1 with mdadm.
 One of the RAID members failed and I replaced it with space I had
 available on the same disk.

 Today, when rebooting I got an error because the boot flag was still on
 both partitions (sdb1 and sdb3 below). I used the rescue part of the
 Debian installer CD to remove the boot flag with fdisk, and now
 everything is working.

 My question is what to do with the dead raid partitions on that disk
 (sdb1 and sdb2 below)?

Replace the failed disk.
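
For reference, a minimal sketch of one common replacement sequence with
mdadm. The device names are assumptions taken from the --detail output
further down this thread (md0 currently uses sdb3, md1 uses sdb4), and the
new disk is assumed to show up as /dev/sdb again; adjust to your layout.

  # mark the members on the failing disk as failed, then remove them
  mdadm /dev/md0 --fail /dev/sdb3 --remove /dev/sdb3
  mdadm /dev/md1 --fail /dev/sdb4 --remove /dev/sdb4

  # power down, swap in the new disk, copy the MBR partition layout
  # from the surviving disk, then add the new partitions to the arrays
  sfdisk -d /dev/sda | sfdisk /dev/sdb
  mdadm /dev/md0 --add /dev/sdb1
  mdadm /dev/md1 --add /dev/sdb2

  # watch the rebuild
  cat /proc/mdstat

  # if you boot from this array, reinstall the boot loader on the new disk
  grub-install /dev/sdb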


-- 
Again we must be afraid of speaking of daemons for fear that daemons
might swallow us.  Finally, this fear has become reasonable.





Re: What to do with dead raid 1 partitions under mdadm

2014-10-26 Thread Gary Dale

On 25/10/14 11:19 PM, mett wrote:

Hi,

I'm running Squeeze under RAID 1 with mdadm.
One of the RAID members failed and I replaced it with space I had
available on the same disk.

Today, when rebooting I got an error because the boot flag was still on
both partitions (sdb1 and sdb3 below). I used the rescue part of the
Debian installer CD to remove the boot flag with fdisk, and now
everything is working.
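
(A side note for the archive: that flag can also be toggled from a running
system, e.g. with parted or interactive fdisk; N below is a placeholder for
whichever partition carries the stray flag.)

  parted /dev/sdb set N boot off   # clear the boot flag on partition N
  # or, in interactive fdisk, the "a" command toggles the bootable flag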

My question is what to do with the dead raid partitions on that disk
(sdb1 and sdb2 below)?

Can I safely delete them and mark them unusable or similar?

Below are some details about the system.

/dev/sdb is 250G; I had an sdb1 and sdb2 failure. I
created sdb3 and sdb4 and added them to the arrays. They are the current
members of the md arrays.

root@asus:/home/mett# uname -a
Linux asus 3.2.0-0.bpo.4-686-pae #1 SMP Debian 3.2.57-3+deb7u2~bpo60+1
i686 GNU/Linux

root@asus:/home/mett#
root@asus:/home/mett# mdadm --detail /dev/md1
/dev/md1:
 Version : 1.2
   Creation Time : Mon Feb  4 22:46:04 2013
  Raid Level : raid1
  Array Size : 97654712 (93.13 GiB 100.00 GB)
   Used Dev Size : 97654712 (93.13 GiB 100.00 GB)
Raid Devices : 2
   Total Devices : 2
 Persistence : Superblock is persistent

 Update Time : Sun Oct 26 12:03:37 2014
   State : clean
  Active Devices : 2
Working Devices : 2
  Failed Devices : 0
   Spare Devices : 0

Name : asus:1  (local to host asus)
UUID : 639af1ab:8ec418b5:8254ef0d:ad9a728d
  Events : 75946

    Number   Major   Minor   RaidDevice State
       2       8        2        0      active sync   /dev/sda2
       3       8       20        1      active sync   /dev/sdb4

(/dev/md0 has the same structure as above, with sda1 and sdb3 as its raid members)


root@asus:/home/mett#
Disk /dev/sdb: 250.1 GB, 250059350016 bytes
255 heads, 63 sectors/track, 30401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00066b3e

Device Boot      Start         End      Blocks   Id  System
/dev/sdb1            1          64      514048+  fd  Linux raid autodetect
/dev/sdb2           65       12515   100012657+  fd  Linux raid autodetect
/dev/sdb3   *    12516       12581      530145   fd  Linux raid autodetect
/dev/sdb4        12582       25636   104864287+  fd  Linux raid autodetect

Command (m for help):

Thanks a lot in advance.

As I understand your issue:
- you had RAID 1 arrays md0 (sda1+sdb1) and md1 (sda2+sdb2),
- sdb1 & sdb2 showed an error, so you removed them from the arrays and 
added sdb3 & sdb4 from the same physical disk,
- you are now wondering what to do with those two partitions on device sdb 
(sdb1 & sdb2).


I'm guessing that sdb is nearly toast. Run smartctl -H /dev/sdb on it. 
If it passes, remove it from the array and repartition it, then add it 
back into the array.
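
A quick sketch of that check, in case it helps. smartctl is in the
smartmontools package; -H only prints the overall verdict, so the attribute
table and a self-test are worth a look as well.

  smartctl -H /dev/sdb            # overall health assessment
  smartctl -A /dev/sdb            # attributes: watch Reallocated_Sector_Ct,
                                  # Current_Pending_Sector, Offline_Uncorrectable
  smartctl -t long /dev/sdb       # start a long self-test in the background
  smartctl -l selftest /dev/sdb   # read the result once it finishes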


If it fails, remove it from your computer and replace it. Whatever new 
drive you get will probably be larger than your current drives, so 
partition it so that sdb1 is larger than the current sda1 and the 
rest of the space goes to sdb2. In this way, you can expand md1 when you 
eventually have to replace sda (it will happen - disks eventually fail).
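
Roughly, the later expansion step would look like this, but only once both
halves of md1 sit on the larger partitions, and assuming an ext3/ext4
filesystem directly on top of md1:

  mdadm --grow /dev/md1 --size=max   # let the array use all of the new partitions
  resize2fs /dev/md1                 # then grow the filesystem to match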


In general it is a really bad idea to keep a failing disk in your system. 
Not only will it fail sooner rather than later, it will also slow down 
your system due to i/o failures.






Re: What to do with dead raid 1 partitions under mdadm

2014-10-26 Thread mett

On Sun, 26 Oct 2014 08:05:58 -0400
Gary Dale garyd...@torfree.net wrote:

 On 25/10/14 11:19 PM, mett wrote:
 
  Hi,
(snip)
 As I understand your issue:
 - you had RAID 1 arrays md0 (sda1+sdb1) and md1 (sda2+sdb2),
 - sdb1 & sdb2 showed an error, so you removed them from the arrays
 and added sdb3 & sdb4 from the same physical disk,
 - you are now wondering what to do with those two partitions on device sdb 
 (sdb1 & sdb2).

--exactly
 
 I'm guessing that sdb is nearly toast. Run smartctl -H /dev/sdb on
 it. If it passes, remove it from the array and repartition it, then
 add it back into the array.
 
 If it fails, remove it from your computer and replace it. Whatever
 new drive you get will probably be larger than your current drives,
 so partition it so that sdb1 is larger than the current sda1 and
 the rest of the space goes to sdb2. In this way, you can expand md1
 when you eventually have to replace sda (it will happen - disks
 eventually fail).
 
 In general it is a really bad idea to keep a failing disk in your
 system. Not only will it fail sooner rather than later, it will also
 slow down your system due to i/o failures.
 
 

I'll try that and update the results.

Thanks a lot for both answers.


Re: What to do with dead raid 1 partitions under mdadm

2014-10-25 Thread David Christensen

On 10/25/2014 08:19 PM, mett wrote:

I'm running Squeeze under RAID 1 with mdadm.
One of the RAID members failed and I replaced it with space I had
available on the same disk.


I suggest that you read the SMART data, download the manufacturer's 
diagnostics utility disk, and run the manufacturer's full suite of 
diagnostics.  Then reset SMART, wipe the drive, run the diagnostics 
again, and look at SMART again.  If everything looks good, put the drive 
back into your box and rebuild the RAID.  If not, put the drive in the 
recycle pile and get another one.
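
For the generic parts of that (reading the SMART data and wiping the drive),
something along these lines works with free tools; the vendor diagnostics
themselves come as a bootable image from the manufacturer. Note that
badblocks -w is destructive, so this assumes the disk is already out of the
arrays and holds nothing you need.

  smartctl -x /dev/sdb            # full SMART/device report before testing
  badblocks -wsv /dev/sdb         # destructive write+verify pass over the whole disk
  smartctl -t long /dev/sdb       # long self-test afterwards
  smartctl -l selftest /dev/sdb   # check the self-test log
  smartctl -l error /dev/sdb      # and the error log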



HTH,

David

