No software raid and no LVM with netinstaller 10.04

2020-06-08 Thread Bernd Recktor

Hello again,

same error with netinstaller 10.04




.Hello,
.
.md/raid0: md0: cannot assemble multi-zone RAID0 with default_layout setting
.md/raid0: md0: please set raid0.default_layout to 1 or 2
.
.this is fatal if you have the rootfs on a RAID0
.
.since Debian 10.3 there has been no way to create a software RAID during
installation with the net install CD.

.
.
.best regards, Bernd
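
A workaround often mentioned for this error (a sketch only, not verified here;
whether 1 or 2 is correct depends on the kernel history of the array) is to pass
the layout on the kernel command line, e.g. in /etc/default/grub:

GRUB_CMDLINE_LINUX_DEFAULT="quiet raid0.default_layout=2"

followed by update-grub. If md is built as a module instead, the equivalent is an
"options raid0 default_layout=2" line in a file under /etc/modprobe.d/.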



Re: Debian 10 software raid

2019-07-28 Thread Lucas Castro
Is your system running under UEFI?
If yes, the ESP partition does not work on top of software RAID, so booting relies on just
one of the disks.

If no, make sure GRUB installed its MBR image on both disks:

grub-install /dev/sda && grub-install /dev/sdb

It would be more helpful if you posted the error as it is rather than just saying
"my system is not booting".

On 28 July 2019 15:17:33 BRT, Finariu Florin
wrote:
>Hi everyone,
>I have installed Debian 10 buster and I created a software RAID.
>After I finished the Debian installation the system doesn't boot. Can somebody
>tell me why?
>Or what do I have to do to be able to boot it?
>Thank you!

-- 
Sent from my Android device with K-9 Mail. Please excuse my brevity.

Re: Debian 10 software raid

2019-07-28 Thread Nicholas Geovanis
On Sun, Jul 28, 2019, 1:18 PM Finariu Florin  wrote:

> Hi everyone,
> I have installed Debian 10 buster and I created a software RAID.
> After I finished the Debian installation the system doesn't boot.
> Can somebody tell me why?
> Or what do I have to do to be able to boot it?
> Thank you!
>

Are you sure that the BIOS is set to boot from the correct volume now?


Debian 10 software raid

2019-07-28 Thread Finariu Florin
Hi everyone,
I have installed Debian 10 buster and I created a software RAID.
After I finished the Debian installation the system doesn't boot. Can somebody tell me
why?
Or what do I have to do to be able to boot it?
Thank you!

Re: Software RAID blocks

2019-01-13 Thread deloptes
Tom Bachreier wrote:

> So it is most likely that I have a problem with the software raid or the
> harddisks, isn't it? SMART is activated on all disks and does not show
> any error.

I don't know exactly, but I replaced all Seagate drives with WD - especially the WD
Red 2TB NAS (WD20EFRX). I just ordered a couple of them for a RAID5 storage
solution. The 2TB Red seems to be very good from all I have seen on the
consumer market.




Re: Software RAID blocks

2019-01-13 Thread Jens Holzhäuser
Hi!


On Sun, Jan 13, 2019 at 12:27:19PM +0100, Tom Bachreier wrote:
> Last night I got a "blocked for more than 300 seconds." message in syslog -
> see <https://paste.debian.net/1060134/>
> (link valid for 90 days).
> 
> Log summary:
> Jan 13 02:34:44 osprey kernel: [969696.242745] INFO: task md127_raid5:238 
> blocked for more than 300 seconds.
> Jan 13 02:34:44 osprey kernel: [969696.242772] Call Trace:
> Jan 13 02:34:44 osprey kernel: [969696.242789]  ? __schedule+0x2a2/0x870
> Jan 13 02:34:44 osprey kernel: [969696.242995] INFO: task dmcrypt_write:904 
> blocked for more than 300 seconds.
> Jan 13 02:34:44 osprey kernel: [969696.243223] INFO: task jbd2/dm-2-8:917 
> blocked for more than 300 seconds.
> Jan 13 02:34:44 osprey kernel: [969696.243525] INFO: task mpc:6622 blocked 
> for more than 300 seconds.
> Jan 13 02:34:44 osprey kernel: [969696.243997] INFO: task kworker/u8:0:6625 
> blocked for more than 300 seconds.

I am occasionally having very similar issues with my RAID1, task
blocking for more than 120 seconds, for no obvious reason.

I've started playing around with the vm.dirty_background_ratio and
vm.dirty_ratio kernel parameters, suspecting slow flushing of the file system
cache to be the issue. [1]

While lowering the values does seem to have helped, it has not
completely eliminated the issue. So the jury for me is still out.
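
For reference, the kind of tuning I mean looks like this (the values are just an
example, not a recommendation, and the sysctl.d file name is arbitrary):

# lower the thresholds at which background/blocking writeback kicks in
sysctl -w vm.dirty_background_ratio=5
sysctl -w vm.dirty_ratio=10
# persist by putting the same two settings into /etc/sysctl.d/90-dirty.conf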

> In this case I did a
>   $ fdisk -l /dev/sdf
> and everything worked again.

Not sure if/how this would interact with ongoing cache flushing.

Jens


[1] 
https://www.blackmoreops.com/2014/09/22/linux-kernel-panic-issue-fix-hung_task_timeout_secs-blocked-120-seconds-problem/



Re: Software RAID blocks

2019-01-13 Thread Reco
On Sun, Jan 13, 2019 at 02:22:09PM +0100, Tom Bachreier wrote:
> 
> Hi Reco!
> 
> Jan 13, 2019, 1:47 PM by recovery...@enotuniq.net:
> 
> > On Sun, Jan 13, 2019 at 01:20:50PM +0100, Tom Bachreier wrote:
> >
> >> Jan 13, 2019, 12:46 PM by recovery...@enotuniq.net:
> >>
> >> > On Sun, Jan 13, 2019 at 12:27:19PM +0100, Tom Bachreier wrote:
> >> >
> >> >> TLDR;
> >> >> My /home on dmcrypt -> software RAID5 blocks irregularly, usually without
> >> >> any error messages.
> >> >>
> >> >> I can get it going again with "fdisk -l /dev/sdx".
> >> >>
> >> >> Do you have any ideas how I can debug this issue further? Is it a dmcrypt,
> >> >> a dm-softraid or a hardware issue?
> >> >>
> >> >
> >> > Let's start with something uncommon:
> >> >
> >>
> >> Thanks for your suggestions.
> >>
> >
> > My suspicion is that either some/all HDDs' firmware or disk controller
> > puts drive(s) in sleep mode.
> >
> 
> In this case: Why don't they wake up with a write from dm-raid but do with
> a read from fdisk? I don't see the logic behind it.

RAID5 may be the reason. If you're reading a short sequence of bytes
from the array it does not mean you're utilizing all the drives.


> >> hdparm seems OK. Keep in mind only sdc and sdf are WD drives.
> >>
> >
> > Since you have Seagates, please check them with 'hdparm -Z'.
> >
> 
> I don't think that did much to my drives.

I agree. It seems that the drive's firmware rejected the request.


> $ smartctl -l scterc,70,70 /dev/sdd
> SCT Error Recovery Control set to:
>    Read: 70 (7.0 seconds)
>   Write: 70 (7.0 seconds)
> 

This setting may not survive a power cycle. I happen to have four such
drives, so I just apply it at every reboot.
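
For example, something like this in /etc/rc.local or a small boot-time unit
(a sketch; adjust the device list to the drives that accept SCT ERC):

for d in /dev/sd[a-f]; do
    smartctl -q errorsonly -l scterc,70,70 "$d"
done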

Reco



Re: Software RAID blocks

2019-01-13 Thread Tom Bachreier


Hi Reco!

Jan 13, 2019, 1:47 PM by recovery...@enotuniq.net:

> On Sun, Jan 13, 2019 at 01:20:50PM +0100, Tom Bachreier wrote:
>
>> Jan 13, 2019, 12:46 PM by recovery...@enotuniq.net:
>>
>> > On Sun, Jan 13, 2019 at 12:27:19PM +0100, Tom Bachreier wrote:
>> >
>> >> TLDR;
>> >> My /home on dmcrypt -> software RAID5 blocks irregularly, usually without
>> >> any error messages.
>> >>
>> >> I can get it going again with "fdisk -l /dev/sdx".
>> >>
>> >> Do you have any ideas how I can debug this issue further? Is it a dmcrypt,
>> >> a dm-softraid or a hardware issue?
>> >>
>> >
>> > Let's start with something uncommon:
>> >
>>
>> Thanks for your suggestions.
>>
>
> My suspicion is that either some/all HDDs' firmware or disk controller
> puts drive(s) in sleep mode.
>

In this case: Why don't they wake up with a write from dm-raid but do with
a read from fdisk? I don't see the logic behind it.


>> hdparm seems OK. Keep in mind only sdc and sdf are WD drives.
>>
>
> Since you have Seagates, please check them with 'hdparm -Z'.
>

I don't think that did much to my drives.

$ hdparm -Z /dev/sdb
/dev/sdb:
disabling Seagate auto powersaving mode
SG_IO: bad/missing sense data, sb[]:  70 00 05 00 00 00 00 0a 04 51 40 00 21 04 
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00

$ hdparm -Z /dev/sde
/dev/sde:
disabling Seagate auto powersaving mode
SG_IO: bad/missing sense data, sb[]:  70 00 05 00 00 00 00 0a 04 51 40 00 21 04 
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00


> Unsure about that Toshiba drive, though.
>
> I'd be wary of including those into the RAID (a single bad block can
> paralyze your whole RAID):
>
>> DISK: /dev/sdc
>> DISK: /dev/sde
>>

Thanks, I'll keep that in mind. I'll try to replace them in the near future.


> And I'd enable it for sdd.
>

Done.

$ smartctl -l scterc,70,70 /dev/sdd
SCT Error Recovery Control set to:
   Read: 70 (7.0 seconds)
  Write: 70 (7.0 seconds)

Tom





Re: Software RAID blocks

2019-01-13 Thread Reco
Hi.

On Sun, Jan 13, 2019 at 01:20:50PM +0100, Tom Bachreier wrote:
> 
> Hi Reco!
> 
> Jan 13, 2019, 12:46 PM by recovery...@enotuniq.net:
> 
> > On Sun, Jan 13, 2019 at 12:27:19PM +0100, Tom Bachreier wrote:
> >
> >> TLDR;
> >> My /home on dmcrypt -> software RAID5 blocks irregularly, usually without
> >> any error messages.
> >>
> >> I can get it going again with "fdisk -l /dev/sdx".
> >>
> >> Do you have any ideas how I can debug this issue further? Is it a dmcrypt,
> >> a dm-softraid or a hardware issue?
> >>
> >
> > Let's start with something uncommon:
> >
> > for x in /dev/sd{b..f}; do
> >  smartctl -l scterc $x
> >  hdparm -J $x
> > done
>
> Thanks for your suggestions.

My suspicion is that either some/all HDDs' firmware or disk controller
puts drive(s) in sleep mode.


> hdparm seems OK. Keep in mind only sdc and sdf are WD drives.

Since you have Seagates, please check them with 'hdparm -Z'.
Unsure about that Toshiba drive, though.


That looks good, though:

> /dev/sdc:
> wdidle3  = disabled
> 
> /dev/sdf:
> wdidle3  = disabled


I'd be wary of including those into the RAID (a single bad block can
paralyze your whole RAID):

> -
> And here comes the SCT state:
> 
> DISK: /dev/sdc
> SCT Error Recovery Control command not supported
> 
> 
> DISK: /dev/sde
> SCT Error Recovery Control command not supported

And I'd enable it for sdd.

Reco



Re: Software RAID blocks

2019-01-13 Thread Tom Bachreier


Hi Reco!

Jan 13, 2019, 12:46 PM by recovery...@enotuniq.net:

> On Sun, Jan 13, 2019 at 12:27:19PM +0100, Tom Bachreier wrote:
>
>> TLDR;
>> My /home on dmcrypt -> software RAID5 blocks irregularly, usually without
>> any error messages.
>>
>> I can get it going again with "fdisk -l /dev/sdx".
>>
>> Do you have any ideas how I can debug this issue further? Is it a dmcrypt,
>> a dm-softraid or a hardware issue?
>>
>
> Let's start with something uncommon:
>
> for x in /dev/sd{b..f}; do
>  smartctl -l scterc $x
>  hdparm -J $x
> done
>
Thanks for your suggestions.

hdparm seems OK. Keep in mind only sdc and sdf are WD drives.

$ hdparm -J /dev/sd[bcdef]
/dev/sdb:
SG_IO: bad/missing sense data, sb[]:  70 00 05 00 00 00 00 0a 04 51 a0 00 21 04 
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
SG_IO: bad/missing sense data, sb[]:  70 00 05 00 00 00 00 0a 04 51 a0 00 21 04 
00 00 00 be 00 00 00 00 00 00 00 00 00 00 00 00 00 00
SG_IO: bad/missing sense data, sb[]:  70 00 05 00 00 00 00 0a 04 51 a0 00 21 04 
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
wdidle3  = 1 ??

/dev/sdc:
wdidle3  = disabled

/dev/sdd:
SG_IO: bad/missing sense data, sb[]:  70 00 05 00 00 00 00 0a 04 51 00 00 21 04 
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
SG_IO: bad/missing sense data, sb[]:  70 00 05 00 00 00 00 0a 04 51 00 00 21 04 
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
wdidle3  = disabled

/dev/sde:
SG_IO: bad/missing sense data, sb[]:  70 00 05 00 00 00 00 0a 04 51 a0 00 21 04 
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
SG_IO: bad/missing sense data, sb[]:  70 00 05 00 00 00 00 0a 04 51 a0 00 21 04 
00 00 00 be 00 00 00 00 00 00 00 00 00 00 00 00 00 00
SG_IO: bad/missing sense data, sb[]:  70 00 05 00 00 00 00 0a 04 51 a0 00 21 04 
00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
wdidle3  = 1 ??

/dev/sdf:
wdidle3  = disabled

-
And here comes the SCT state:

$ for i in /dev/sd{b..f}; do echo "DISK: ${i}"; smartctl -l scterc "${i}"; done
DISK: /dev/sdb
smartctl 6.6 2017-11-05 r4594 [x86_64-linux-4.19.0-1-amd64] (local build)
Copyright (C) 2002-17, Bruce Allen, Christian Franke, www.smartmontools.org 


SCT Error Recovery Control:
   Read: 70 (7.0 seconds)
  Write: 70 (7.0 seconds)

DISK: /dev/sdc
SCT Error Recovery Control command not supported

DISK: /dev/sdd
SCT Error Recovery Control:
   Read: Disabled
  Write: Disabled

DISK: /dev/sde
SCT Error Recovery Control command not supported

DISK: /dev/sdf
SCT Error Recovery Control:
   Read: 70 (7.0 seconds)
  Write: 70 (7.0 seconds)

Hope this helps...
Tom



Re: Software RAID blocks

2019-01-13 Thread Reco
Hi.

On Sun, Jan 13, 2019 at 12:27:19PM +0100, Tom Bachreier wrote:
> TLDR;
> My /home on dmcrypt -> software RAID5 blocks irregularly, usually without
> any error messages.
> 
> I can get it going again with "fdisk -l /dev/sdx".
> 
> Do you have any ideas how I can debug this issue further? Is it a dmcrypt,
> a dm-softraid or a hardware issue?

Let's start with something uncommon:

for x in /dev/sd{b..f}; do
 smartctl -l scterc $x
 hdparm -J $x
done

Reco



Software RAID blocks

2019-01-13 Thread Tom Bachreier
Hi!

TLDR;
My /home on dmcrypt -> software RAID5 blocks irregularly, usually without
any error messages.

I can get it going again with "fdisk -l /dev/sdx".

Do you have any ideas how I can debug this issue further? Is it a dmcrypt,
a dm-softraid or a hardware issue?

---

Long version:
My /home "partition" is a dmcrypt on software RAID5 with 5 SATA disks.
See System info further down in this mail.
Once in a while user programs freeze because dmcrypt or something
else further down the chain blocks during a write(?) on /home.
If I'm lucky and have a running root shell open, I can run a
  $ fdisk -l /dev/sdx
on one of the hard disks in the RAID and the block disappears instantly.

I checked if it could be a spindown power management problem but all
disks which have a PM feature have it disabled. So I don't think this is
the problem.

Last night I got a "blocked for more than 300 seconds." message in syslog -
see <https://paste.debian.net/1060134/>
(link valid for 90 days).

Log summary:
Jan 13 02:34:44 osprey kernel: [969696.242745] INFO: task md127_raid5:238 
blocked for more than 300 seconds.
Jan 13 02:34:44 osprey kernel: [969696.242772] Call Trace:
Jan 13 02:34:44 osprey kernel: [969696.242789]  ? __schedule+0x2a2/0x870
Jan 13 02:34:44 osprey kernel: [969696.242995] INFO: task dmcrypt_write:904 
blocked for more than 300 seconds.
Jan 13 02:34:44 osprey kernel: [969696.243223] INFO: task jbd2/dm-2-8:917 
blocked for more than 300 seconds.
Jan 13 02:34:44 osprey kernel: [969696.243525] INFO: task mpc:6622 blocked for 
more than 300 seconds.
Jan 13 02:34:44 osprey kernel: [969696.243997] INFO: task kworker/u8:0:6625 
blocked for more than 300 seconds.

In this case I did a
  $ fdisk -l /dev/sdf
and everything worked again.

As I understand the log, mpc (a user program) started and maybe accessed its
config file on /home. ext4 tried to save the new access time, which went
down the chain jbd2 -> dmcrypt and blocked in the end in md127_raid5.

So it is most likely that I have a problem with the software RAID or the
hard disks, isn't it? SMART is activated on all disks and does not show
any error.

How can I debug this further to solve the problem? Thanks in advance for
your suggestions.

Tom

---
System info:

Debian testing

$ uname -a
Linux osprey 4.19.0-1-amd64 #1 SMP Debian 4.19.12-1 (2018-12-22) x86_64 
GNU/Linux

$ lsblk -i
NAME  MAJ:MIN RM  SIZE RO TYPE  MOUNTPOINT
sda 8:0    0 74.5G  0 disk
|-sda1  8:1    0    4G  0 part
| `-cswap1    253:1    0    4G  0 crypt [SWAP]
`-sda2  8:2    0 70.5G  0 part
  `-osprey_root   253:0    0 70.5G  0 crypt /
sdb 8:16   0  2.7T  0 disk
`-sdb1  8:17   0  2.7T  0 part
  `-md127   9:127  0 10.9T  0 raid5
    `-osprey_home 253:2    0 10.9T  0 crypt /home
sdc 8:32   0  2.7T  0 disk
`-sdc1  8:33   0  2.7T  0 part
  `-md127   9:127  0 10.9T  0 raid5
    `-osprey_home 253:2    0 10.9T  0 crypt /home
sdd 8:48   0  2.7T  0 disk
`-sdd1  8:49   0  2.7T  0 part
  `-md127   9:127  0 10.9T  0 raid5
    `-osprey_home 253:2    0 10.9T  0 crypt /home
sde 8:64   0  2.7T  0 disk
`-sde1  8:65   0  2.7T  0 part
  `-md127   9:127  0 10.9T  0 raid5
    `-osprey_home 253:2    0 10.9T  0 crypt /home
sdf 8:80   0  2.7T  0 disk
`-sdf1  8:81   0  2.7T  0 part
  `-md127   9:127  0 10.9T  0 raid5
    `-osprey_home 253:2    0 10.9T  0 crypt /home

$ sdparm --get STANDBY /dev/sd[bcdef]
    /dev/sdb: ATA   ST3000VN000-1H41  SC43
STANDBY not found in Power condition [po] mode page
    /dev/sdc: ATA   WDC WD30EURX-63T  0A80
STANDBY not found in Power condition [po] mode page
    /dev/sdd: ATA   TOSHIBA DT01ACA3  ABB0
STANDBY not found in Power condition [po] mode page
    /dev/sde: ATA   ST3000DM001-1CH1  CC27
STANDBY not found in Power condition [po] mode page
    /dev/sdf: ATA   WDC WD30EFRX-68E  0A80
STANDBY not found in Power condition [po] mode page

$ hdparm -B /dev/sd[bcdef]
/dev/sdb:
APM_level  = 254
/dev/sdc:
APM_level  = not supported
/dev/sdd:
APM_level  = off
/dev/sde:
APM_level  = 254
/dev/sdf:
APM_level  = not supported

$ cat /proc/mdstat
Personalities : [raid6] [raid5] [raid4] [linear] [multipath] [raid0] [raid1] 
[raid10]
md127 : active raid5 sdc1[1] sdd1[2] sdb1[0] sdf1[5] sde1[3]
  11719766016 blocks super 1.2 level 5, 512k chunk, algorithm 2 [5/5] 
[U]
  bitmap: 1/22 pages [4KB], 65536KB chunk
unused devices: 

$ for i in {b..f}; do echo "DISK: ${i}"; smartctl -a "/dev/sd${i}" |grep "SMART 
overall-health self-assessment test result"; done
DISK: b
SMART overal

Re: SSD TRIM software raid (mdadm)

2019-01-10 Thread basti
On 10.01.19 00:51, Pascal Hambourg wrote:
> Why did you flag the SSD as write-mostly ? I would have expected the
> opposite.

Oh sorry, I understood this option the wrong way.
OK, I will try TRIM on LVM.

Thanks a lot.



Re: SSD TRIM software raid (mdadm)

2019-01-09 Thread David Christensen

On 1/9/19 1:22 PM, basti wrote:

Hello, I have created a software RAID level 1 with mdadm.

One drive is a "classic" HDD. The 2nd drive is an SSD with the option
"write-mostly".


On top of the RAID I have created an LVM with all the partitions
(root, swap and qemu/KVM VMs).


As I understand mdadm, the whole space is marked as used. So my
question is how useful fstrim is on /dev/mdx and whether it would really trim
the SSD?


Best Regards,


On 1/9/19 3:51 PM, Pascal Hambourg wrote:
Why did you flag the SSD as write-mostly ? I would have expected the 
opposite.


+1  RTFM mdadm(8), --write-mostly would make more sense on the HDD.
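
A sketch of how the flag can be moved without rebuilding the array, via md's
sysfs interface (array and member names here are assumptions, not taken from
the original post):

# clear write-mostly on the SSD member, set it on the HDD member
echo -writemostly > /sys/block/md0/md/dev-sdb1/state   # sdb1 = SSD (assumed)
echo writemostly > /sys/block/md0/md/dev-sda1/state    # sda1 = HDD (assumed)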


But, one SSD and one HDD in an MD mirror seems strange.  If mirroring is 
not required, I would:


1.  Partition the SSD with boot, swap, root, and VM partitions.  Give 
each VM a small virtual drive image file for its system drive.


2.  Put one large partition on the HDD.  Give each VM a virtual drive 
image file, sized as required, for its data drive.



If you can install an additional SSD, mirror the two SSD's.  Similarly 
so for an additional HDD.



In any case:

1.  Try to characterize your I/O workload -- synchronous vs. 
asynchronous, read vs. write, sequential vs. random, small vs. large.


2.  Configure things to use asynchronous I/O, where possible.

3.  Install plenty of RAM.

4.  Try multiple configurations and benchmark each, preferably with 
realistic workloads.



David



Re: SSD TRIM software raid (mdadm)

2019-01-09 Thread Pascal Hambourg

On 09/01/2019 at 22:22, basti wrote:

I have created a software RAID level 1 with mdadm.

One drive is a "classic" HDD.
The 2nd drive is an SSD with the option "write-mostly".


Why did you flag the SSD as write-mostly ? I would have expected the 
opposite.



On top of the RAID I have created an LVM with all the partitions (root, swap
and qemu/KVM VMs).

As I understand mdadm, the whole space is marked as used.


I don't understand what you mean.


So my question is how useful fstrim is on /dev/mdx and whether it would really
trim the SSD?


RAID 1 supports TRIM since kernel 3.7. Since the HDD does not support
TRIM, it will make discarded block contents inconsistent between the SSD
and the HDD, but that does not matter.


However you wrote that the RAID device is used by LVM, so you cannot run 
fstrim on it directly. You must enable TRIM/discard in LVM (see 
lvm.conf) and run fstrim on LVs which contain mounted filesystems 
supporting TRIM/discard.
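
A minimal sketch of that setup, assuming a single LV mounted at / (names and
paths are illustrative, not from the original post):

# /etc/lvm/lvm.conf, devices section:  issue_discards = 1
fstrim -v /        # trim the mounted filesystem on the LV
# alternatively, use the "discard" mount option or a periodic fstrim job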




SSD TRIM software raid (mdadm)

2019-01-09 Thread basti
Hello,
I have created a software RAID level 1 with mdadm.

One drive is a "classic" HDD.
The 2nd drive is an SSD with the option "write-mostly".

On top of the RAID I have created an LVM with all the partitions (root, swap
and qemu/KVM VMs).

As I understand mdadm, the whole space is marked as used.
So my question is how useful fstrim is on /dev/mdx and whether it would really
trim the SSD?

Best Regards,



Re: software raid settings and ext4

2017-01-24 Thread Miguel González
On 01/24/17 8:56 PM, Pascal Hambourg wrote:
> On 24/01/2017 at 00:45, Miguel González wrote:
>>
>>  I'm running Proxmox 4.2, which is Debian Jessie, with a software RAID of 2
>> TB SATA disks (7200 RPM) in a dedicated server.
>>
>>   I have set it up using the OVH Proxmox installer with ext4. I have
>> realized that another server with just one SATA disk has writes of about
>> 140 MB/s, while this server with 2 disks gets only around 40 MB/s for reads:
>>
>>   root@myserver:~# hdparm -Tt /dev/sda
>>
>> /dev/sda:
>>  Timing cached reads:   11800 MB in  2.00 seconds = 5903.84 MB/sec
>>  Timing buffered disk reads: 136 MB in  3.17 seconds =  42.94 MB/sec
> 
> Is this a physical disk ? Then it is slow by today's standards, or
> defective, or busy doing other reads while you run the test.
> 
> There is nothing RAID or ext4 can do to make it faster.
> 
> 

Yes, it's a physical disk. I have raised a ticket with the hosting company.

Thanks!



Re: software raid settings and ext4

2017-01-24 Thread Pascal Hambourg

On 24/01/2017 at 00:45, Miguel González wrote:


 I'm running Proxmox 4.2, which is Debian Jessie, with a software RAID of 2
TB SATA disks (7200 RPM) in a dedicated server.

  I have set it up using the OVH Proxmox installer with ext4. I have
realized that another server with just one SATA disk has writes of about
140 MB/s, while this server with 2 disks gets only around 40 MB/s for reads:

  root@myserver:~# hdparm -Tt /dev/sda

/dev/sda:
 Timing cached reads:   11800 MB in  2.00 seconds = 5903.84 MB/sec
 Timing buffered disk reads: 136 MB in  3.17 seconds =  42.94 MB/sec


Is this a physical disk ? Then it is slow by today's standards, or 
defective, or busy doing other reads while you run the test.


There is nothing RAID or ext4 can do to make it faster.



Re: software raid settings and ext4

2017-01-24 Thread Felix Miata

Miguel González composed on 2017-01-24 09:33 (UTC+0100):


Felix Miata wrote:



Miguel González composed on 2017-01-24 00:45 (UTC+0100):
..

 Is there any way to improve the performance of ext4 both in the hosting
proxmox server and the virtual machines?



Who made those SATA cables, when, and are they red? Have you tried other
cables?



Subject: Re: SATA cables
https://lists.debian.org/debian-user/2016/02/msg00372.html



As I said, the server is in a hosting company (OVH), so I can't check
the color of the cables or anything similar.



Are you using software RAID and ext4?


I meant to say as much (10 md devices built from 20 disk partitions), but
apparently forgot.

--
"The wise are known for their understanding, and pleasant
words are persuasive." Proverbs 16:21 (New Living Translation)

 Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!

Felix Miata  ***  http://fm.no-ip.com/



Re: software raid settings and ext4

2017-01-24 Thread Miguel González
On 01/24/17 1:29 AM, Felix Miata wrote:
> Miguel González composed on 2017-01-24 00:45 (UTC+0100):
> ..
>>  Is there any way to improve the performance of ext4 both in the hosting
>> proxmox server and the virtual machines?
> 
> Who made those SATA cables, when, and are they red? Have you tried other
> cables?
> 
> Subject: Re: SATA cables
> https://lists.debian.org/debian-user/2016/02/msg00372.html

As I said, the server is in a hosting company (OVH), so I can't check
the color of the cables or anything similar.

Are you using software RAID and ext4?

Thanks,

Miguel



Re: software raid settings and ext4

2017-01-23 Thread Felix Miata

Miguel González composed on 2017-01-24 00:45 (UTC+0100):
...

 Is there any way to improve the performance of ext4 both in the hosting
proxmox server and the virtual machines?


Who made those SATA cables, when, and are they red? Have you tried other cables?

Subject: Re: SATA cables
https://lists.debian.org/debian-user/2016/02/msg00372.html

I'm running RAID1 on cheap 1TB Seagates here with 4.1 kernel:

# hdparm -Tt /dev/sda

/dev/sda:
 Timing cached reads:   11880 MB in  2.00 seconds = 5944.19 MB/sec
 Timing buffered disk reads: 492 MB in  3.01 seconds = 163.65 MB/sec

# hdparm -Tt /dev/sdb

/dev/sdb:
 Timing cached reads:   15168 MB in  2.00 seconds = 7590.89 MB/sec
 Timing buffered disk reads: 526 MB in  3.01 seconds = 174.82 MB/sec

My cables are not red and were shipped with my MSI motherboard around 17 months 
ago.
--
"The wise are known for their understanding, and pleasant
words are persuasive." Proverbs 16:21 (New Living Translation)

 Team OS/2 ** Reg. Linux User #211409 ** a11y rocks!

Felix Miata  ***  http://fm.no-ip.com/



software raid settings and ext4

2017-01-23 Thread Miguel González
Hi,

 I'm running Proxmox 4.2, which is Debian Jessie, with a software RAID of 2
TB SATA disks (7200 RPM) in a dedicated server.

  I have set it up using the OVH Proxmox installer with ext4. I have
realized that another server with just one SATA disk has writes of about
140 MB/s, while this server with 2 disks gets only around 40 MB/s for reads:

  root@myserver:~# hdparm -Tt /dev/sda

/dev/sda:
 Timing cached reads:   11800 MB in  2.00 seconds = 5903.84 MB/sec
 Timing buffered disk reads: 136 MB in  3.17 seconds =  42.94 MB/sec

root@myserver:~# hdparm -Tt /dev/md4

/dev/md4:
 Timing cached reads:   11356 MB in  2.00 seconds = 5682.49 MB/sec
 Timing buffered disk reads: 132 MB in  3.10 seconds =  42.52 MB/sec

 root@myserver:~# pveperf /vz/
CPU BOGOMIPS:  42669.12
REGEX/SECOND:  935261
HD SIZE:   1809.50 GB (/dev/mapper/pve-data)
BUFFERED READS:94.56 MB/sec
AVERAGE SEEK TIME: 17.88 ms
FSYNCS/SECOND: 10.15
DNS EXT:   16.93 ms
DNS INT:   23.36 ms (ibertrix-node2)


According to the manufacturer's specs, the average seek time is around 8 ms.

Writes, on the other hand, seem to be fine:

root@myserver:~# dd if=/dev/zero of=/mytempfile
^C15210305+0 records in
15210305+0 records out
7787676160 bytes (7.8 GB) copied, 52.1116 s, 149 MB/s
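
(As an aside, dd to a file without a sync flag largely measures the page cache;
for a number closer to the disks, something like the following is usually used -
a sketch, sizes arbitrary:

dd if=/dev/zero of=/mytempfile bs=1M count=4096 conv=fdatasync
# or add oflag=direct to bypass the page cache entirely)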


Performance from a VM (I have 5 VMs running on this 32 GB RAM server)
running CentOS 6.8:

root@vm [~]# dd if=/dev/zero of=/mytempfile
^C9360176+0 records in
9360176+0 records out
4792410112 bytes (4.8 GB) copied, 112.571 s, 42.6 MB/s

root@vm [~]# hdparm -Tt /dev/sda

/dev/sda:
 Timing cached reads:   7778 MB in  2.00 seconds = 3892.00 MB/sec
 Timing buffered disk reads: 122 MB in  3.59 seconds =  34.02 MB/sec



 I'm running the 4.4.15-1-pve kernel.

 Is there any way to improve the performance of ext4, both on the hosting
Proxmox server and in the virtual machines?

 Regards,



Re: Unable to install bootloader on software raid

2014-12-15 Thread Darac Marjal
On Sat, Dec 13, 2014 at 11:36:58AM +0200, Johann Spies wrote:
After two of my hard disks failed, I decided to use a software raid in
future and tried to install Debian Testing on the raid.  The configuration
of the raid and software installation went well until I tried to install
the bootloader.  Both Grub and Lilo refused to install.
 
If it is not possible to install Grub or Lilo onto a software raid, why is
the option available in Grub?
 
I have seen several efforts to solve this problem on the internet, but
none of them worked for me.
 
What is the way to install Debian onto a software raid?

I think the problem is that, depending on the RAID level, the BIOS won't
be able to read the disks in order to load GRUB. If you're running
RAID0, then you're in luck; both drives appear as normal disks and all
that RAID does is ensure they are kept identical. If you're running
RAID1, though, then each disk is a chopped-up mess of data and you NEED
the RAID to provide the hint that half the data is on another device.

Now, once grub is loaded, it CAN assemble the RAID to a sufficient point
to load all the pieces of the kernel, but if the BIOS can't make enough
sense out of the disk to load GRUB then you have a problem. In that
case, you would normally leave a portion of the disk unRAIDed and
install GRUB onto that.

 
Regards
Johann
--
Because experiencing your loyal love is better than life itself,
my lips will praise you.  (Psalm 63:3)




Re: Unable to install bootloader on software raid

2014-12-15 Thread Sven Hartge
Darac Marjal mailingl...@darac.org.uk wrote:

 I think the problem is that, depending on the RAID level, the BIOS won't
 be able to read the disks in order to load GRUB. If you're running
 RAID0, then you're in luck; both drives appear as normal disks and all
 that RAID does is ensure they are kept identical. If you're running
 RAID1, though, then each disk is a chopped-up mess of data and you NEED
 the RAID to provide the hint that half the data is on another device.

It's the other way round: RAID0 is striped, aka the chopped-up mess, and
RAID1 is mirrored disks.

S°

-- 
Sigmentation fault. Core dumped.





Re: Unable to install bootloader on software raid

2014-12-15 Thread Pascal Hambourg
Sven Hartge wrote:
 Darac Marjal mailingl...@darac.org.uk wrote:
 
 I think the problem is that, depending on the RAID level, the BIOS won't
 be able to read the disks in order to load GRUB. If you're running
 RAID0, then you're in luck; both drives appear as normal disks and all
 that RAID does is ensure they are kept identical. If you're running
 RAID1, though, then each disk is a chopped-up mess of data and you NEED
 the RAID to provide the hint that half the data is on another device.
 
 It's the other way round: RAID0 is striped, aka chopped up mess and
 RAID1 are mirrored disks.

Besides, the Linux RAID arrays usually do not reside on whole raw disks
but on RAID partitions. GRUB's boot image and core image are usually
installed on each disk outside these partitions. This way all the
sectors needed to load GRUB's core image can be found on each single
disk, regardless of the RAID level.
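
In practice that usually means installing GRUB onto every member disk, e.g.
(device names assumed):

grub-install /dev/sda
grub-install /dev/sdb
# on Debian, "dpkg-reconfigure grub-pc" records the chosen install devices persistently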





Unable to install bootloader on software raid

2014-12-13 Thread Johann Spies
After two of my hard disks failed, I decided to use a software raid in
future and tried to install Debian Testing on the raid.  The configuration
of the raid and software installation went well until I tried to install
the bootloader.  Both Grub and Lilo refused to install.

If it is not possible to install Grub or Lilo onto a software raid, why is
the option available in Grub?

I have seen several efforts to solve this problem on the internet, but none
of them worked for me.

What is the way to install Debian onto a software raid?

Regards
Johann

-- 
Because experiencing your loyal love is better than life itself,
my lips will praise you.  (Psalm 63:3)


Re: Unable to install bootloader on software raid

2014-12-13 Thread Ron
On Sat, 13 Dec 2014 11:36:58 +0200
Johann Spies johann.sp...@gmail.com wrote:

 What is the way to install Debian onto a software raid?

I have installed 7.7.0-64 from the DVD without any problem.

So it might be something in Testing ?

Did you try to install from a Live-CD/DVD ? ISTR that in that case you must add 
dmraid=true at the end of the GRUB boot line.
 
Cheers,
 
Ron.
-- 
  Distrust all those who love you extremely
   upon a very slight acquaintance
   and without any visible reason.
   -- Lord Chesterfield

   -- http://www.olgiati-in-paraguay.org --
 





Re: Unable to install bootloader on software raid

2014-12-13 Thread Johann Spies
Thanks for your reply, Ron.

In the end I configured my partitions to exclude the first 250 MB (/boot) from
the RAID and I could install GRUB, just to have a working system.

It is not the best solution.  The installer should work as expected.

Regards
Johann

-- 
Because experiencing your loyal love is better than life itself,
my lips will praise you.  (Psalm 63:3)


Re: Unable to install bootloader on software raid

2014-12-13 Thread Pascal Hambourg
Johann Spies wrote:
 
 In the end I configured my partions to exclude the first 250Mb (/boot) from
 the raid and I could install Grub just to have a working system.
 
 It is not the best solution.

Obviously it's not. /boot should be on a RAID (and the bootloader
installed on all disks, which the installer won't do automatically) in
order for the system to be able to boot after any disk failure.

How did you prepare the disks the first time ?

PS : dm-raid is needed only if you want to use software BIOS RAID (aka
fakeRAID, not recommended), not Linux native RAID.





Re: Replacing failed drive in software RAID

2013-11-07 Thread Veljko
On Tue, Nov 05, 2013 at 02:15:07PM +0100, Veljko wrote:
 On Thu, Oct 31, 2013 at 02:41:01PM -0600, Bob Proulx wrote:
  But if you are concerned about writes to sdb
  then I would simply plan to boot from the debian-installer image in
  rescue mode, assemble the raid, sync, then replace sdb, and repeat.
  You can always install grub to the boot sectors after replacing the
  suspect disks.  Hopefully this makes sense.
 
 I replaced the sdd drive and that went without problem, but after replacing sda,
 the drive with the boot partition and MBR, the system stalled at "Verifying DMI Pool
 Data". So I inserted the Debian CD and went into rescue mode. I haven't used it, so
 I have some questions.
 
 I'm offered to reassemble RAID. Is it safe to use auto reconfigure option or
 should I assemble all three manually?
 
 If I should go with manually, what to do with md0? It's RAID1 for boot
 partition and now there is only one drive.
 
 Should I recreate md1 and md2 with three drives? Would that work?
 
 After this is done successfully, I assume I should go with:
 # vgscan
 # vgchange -a y volume_group_name
 
 and mount manually all partitions (there is root and swap, so I guess I only
 need to mount root). Am I right?
 
 Then, after creating partition table and adding new drive into RAID, would
 simple:
 # grub-install /dev/sda
 do the job?
 
 Anything else to think about?

Can anyone just confirm that it is safe to use the auto-reassemble RAID feature
of the Debian installer?

Thanks!

Regards,
Veljko
 





Re: Replacing failed drive in software RAID

2013-11-07 Thread Bob Proulx
Veljko wrote:
 Veljko wrote:
  I replaced sdd drive and that went without problem, but after
  replacing sda, the drive with boot partition and MBR, system
  stalled at veryfying dmi pool data. So I inserted debian CD and
  went with rescue mode. I haven't used it so I have some questions.
  
  I'm offered to reassemble RAID. Is it safe to use auto reconfigure
  option or should I assemble all three manually?

As long as all of the disks are to be assembled then automatic mode
should be okay.  Don't use automatic mode if you have disks attached
that you do not want to be assembled.  The automated mode will scan to
see what is there and assemble anything that it can assemble.  But if
the only disks attached are the ones you want to assemble then I think
the automated mode is okay.

If the rescue tries to assemble two arrays that both have the same
minor number then it will assign a new minor number to one of the
arrays.  This really isn't bad but causes the renumbered ones to be in
the wrong place.  This can be corrected using the --update=super-minor
option.  This shouldn't be needed but I will mention it just in case.
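
For the record, a sketch of how that option is applied (only relevant for old
0.90 superblocks; array and member names here are illustrative):

  mdadm --stop /dev/md1
  mdadm --assemble /dev/md1 --update=super-minor /dev/sdb3 /dev/sdc2 /dev/sdd2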

  If I should go with manually, what to do with md0? It's RAID1 for boot
  partition and now there is only one drive.

You can only start it in degraded mode with one disk.  One disk is
enough to start the array.

  Should I recreate md1 and md2 with three drives? Would that work?

You have lost me on the context of this question.

  After this is done successfully, I assume I should go with:
  # vgscan
  # vgchange -a y volume_group_name

That should not be needed.  LVM happens at the next layer up.  LVM
shouldn't ever notice that the physical volume has gone away.  There
shouldn't be any need to make any LVM changes at all.

Unless you are doing something that you haven't explained.  :-/

In rescue mode simply assemble the raid.  Then it will ask you to
select a root file system.  If you are using lvm and have named them
appropriately then this should be easy.  I always name my root "root"
so that I can find it easily.

  and mount manually all partitions (there is root and swap, so I
  guess I only need to mount root). Am I right?

The installer will ask you for your root partition.  Then it will
offer you a shell in the target environment.  After getting a shell in
the root partition you will need to mount the other partitions.

  # mount -a

  Then, after creating partition table and adding new drive into
  RAID, would simple:
  # grub-install /dev/sda
  do the job?

Yes.

  Anything else to think about?
 
 Can anyone just confirm that it is safe to use auto reassemble RAID feature
 from Debian installer?

It has always worked for me.  As long as all of the attached disks are
to be assembled.  Don't do the automated if you have extra disks that
should not be assembled.

Bob




Re: Replacing failed drive in software RAID

2013-11-07 Thread Veljko
On Thu, Nov 07, 2013 at 02:12:02AM -0700, Bob Proulx wrote:
   I'm offered to reassemble RAID. Is it safe to use auto reconfigure
   option or should I assemble all three manually?
 
 As long as all of the disks are to be assembled then automatic mode
 should be okay.  Don't use automatic mode if you have disks attached
 that you do not want to be assembled.  The automated mode will scan to
 see what is there and assemble anything that it can assemble.  But if
 the only disks attached are the ones you want to assemble then I think
 the automated mode is okay.

Three disks are from the original RAID and the fourth disk is a new one that should
replace the failed sda. So I guess all the disks are the ones I want to assemble.

 If the rescue tries to assemble two arrays that both have the same
 minor number then it will assign a new minor number to one of the
 arrays.  This really isn't bad but causes the renumbered ones to be in
 the wrong place.  This can be corrected using the --update=super-minor
 option.  This shouldn't be needed but I will mention it just in case.

I'll keep that in mind.

   Should I recreate md1 and md2 with three drives? Would that work?
 
 You have lost me on the context of this question.

This was in case I should reassemble the RAID device with the root partition manually.
In that case I would have to recreate the other two RAID devices as well. You already
answered that they will be assembled in degraded mode, so that answers my
question. Three devices would be enough.

   After this is done successfully, I assume I should go with:
   # vgscan
   # vgchange -a y volume_group_name
 
 That should not be needed.  LVM happens at the next layer up.  LVM
 shouldn't ever notice that the physical volume has gone away.  There
 shouldn't be any need to make any LVM changes at all.
 
 Unless you are doing something that you haven't explained.  :-/

No, just wanted to know if any additional step is needed. 

 In rescue mode simply assemble the raid.  Then it will ask you to
 select a root file system.  If you are using lvm and have named them
 appropriately then this should be easy.  I always name my root root
 so that I can find it easily.
 
   and mount manually all partitions (there is root and swap, so I
   guess I only need to mount root). Am I right?
 
 The installer will ask you for your root partition.  Then it will
 offer you a shell in the target environment.  After getting a shell in
 the root partition you will need to mount the other partitions.
 
   # mount -a
 
   Then, after creating partition table and adding new drive into
   RAID, would simple:
   # grub-install /dev/sda
   do the job?
 
 Yes.
 
   Anything else to think about?
  
  Can anyone just confirm that it is safe to use auto reassemble RAID feature
  from Debian installer?
 
 It has always worked for me.  As long as all of the attached disks are
 to be assembled.  Don't do the automated if you have extra disks that
 should not be assembled.
 
 Bob

Thanks for all your help, Bob. I don't usually need this step-by-step approach,
but in this case I needed to be sure I wouldn't lose my data. Your patience and
answers are much appreciated.

Regards,
Veljko





Re: Replacing failed drive in software RAID

2013-11-07 Thread Bob Proulx
Veljko wrote:
 Thanks for all your help, Bob. I don't usually need this step by
 step approach but in this case I needed to be sure I won't loose my
 data. Your patience and answers are much appreciated.

You are very welcome for the help.  Don't worry about it at all.  It's
no trouble.  Just keep working carefully and I am sure you will get
through it.  If we work together as a team then the sum total of us
are greater than any of us individually.

If you learn something interesting about the process consider updating
a wiki.debian.org page with whatever you think would be information
you would have liked to have had before you started this process.  I
don't know where this would go but if we can improve the documentation
then everyone benefits.

Good luck!
Bob




Re: Replacing failed drive in software RAID

2013-11-05 Thread Veljko
On Thu, Oct 31, 2013 at 02:41:01PM -0600, Bob Proulx wrote:
 But if you are concerned about writes to sdb
 then I would simply plan to boot from the debian-installer image in
 rescue mode, assemble the raid, sync, then replace sdb, and repeat.
 You can always install grub to the boot sectors after replacing the
 suspect disks.  Hopefully this makes sense.

I replaced the sdd drive and that went without problem, but after replacing sda,
the drive with the boot partition and MBR, the system stalled at "Verifying DMI Pool
Data". So I inserted the Debian CD and went into rescue mode. I haven't used it, so
I have some questions.

I'm offered the option to reassemble the RAID. Is it safe to use the auto-reconfigure
option or should I assemble all three manually?

If I have to go the manual route, what do I do with md0? It's RAID1 for the boot
partition and now there is only one drive.

Should I recreate md1 and md2 with three drives? Would that work?

After this is done successfully, I assume I should go with:
# vgscan
# vgchange -a y volume_group_name

and manually mount all partitions (there are root and swap, so I guess I only
need to mount root). Am I right?

Then, after creating partition table and adding new drive into RAID, would
simple:
# grub-install /dev/sda
do the job?

Anything else to think about?

Regards,
Veljko





Re: Replacing failed drive in software RAID

2013-11-01 Thread Veljko
Hello,

On Thu, Oct 31, 2013 at 05:06:33PM -0500, Stan Hoeppner wrote:
 On 10/31/2013 3:41 PM, Bob Proulx wrote:
  Is this a BIOS boot ordering boot system booting from sda?  In which
  case replacing sda won't have an MBR to boot from.  You can probably
  use your BIOS boot to select a different disk to boot from.  And then
  after having booted install grub on the other disk.  (Sometimes the
  BIOS boot order will be quite different from the Linux kernel drive
  ordering.)

I think it is BIOS boot ordering. I don't remember how I installed the MBR. Is it
even possible to have an MBR on both sda and sdb? I think I was considering that
option but can't remember.

  I am unfamiliar with the sgdisk backup and load-backup operation.  I
  am not sure that will restore the grub boot sector.  This isn't too
  scary because you can always boot one of the other drives.  Or boot a
  debian-install rescue media.  But after setting up the replacement
  disk it will probably be necessary to install grub upon it in order
  for it to be bootable as the first BIOS boot media.

I don't know either, but in case that boot sector is not copied, could I just
copy the first 446 bytes? That is where the MBR is located, without touching
the partition table. So could something like this work:

dd if=/dev/sdb of=/dev/sda bs=446 count=1

This can be done from a live CD after the drive is replaced, in case it won't boot.

  And very often I have found that a second disk that I thought should
  have had grub installed upon it did not and when removing sda I find
  that the system won't grub boot from sdb.  Therefore I normally
  restore sda, boot, install grub on sdb, then try again.  But if you
  know ahead of time you can re-install grub on sdb and avoid the
  possible hiccup there.  But if you are concerned about writes to sdb
  then I would simply plan to boot from the debian-installer image in
  rescue mode, assemble the raid, sync, then replace sdb, and repeat.
  You can always install grub to the boot sectors after replacing the
  suspect disks.  Hopefully this makes sense.
 
 This is precisely why I use hardware RAID HBAs for boot disks (and most
 often for data disks as well).  The HBA's BIOS makes booting transparent
 after drive failure.  In addition you only have one array (hardware)
 instead of 3 (mdraid).  You have only 3 partitions to create instead of
 9, these residing on top of the one array device, not used to build
 multiple software array devices.  So you have one /boot, root fs, and
 data, and only one MBR to maintain.  The RAID controller literally turns
 your 4 drives into one, unlike soft RAID.
 
 The 4 port Adaptec is cheap, $200 USD, and a perfect fit for 4 drives:
 http://www.adaptec.com/en-us/products/series/6e/
 http://www.newegg.com/Product/Product.aspx?Item=N82E16816103229
 
 And because it has 128MB cache you get a small performance boost.

Indeed, hardware RAID makes life much simpler. I just contacted the guy from the
company we bought those drives from to check if he can find some Adaptec from the 6E
series. Not sure if the boss is willing to buy one, but it's worth a try.


  I was also thinking about inserting one drive and copying data from
  RAID to it so I have 
  right thing to do, or that would just load drives unnecessarily and
  accelerate their failure?
  
  Are you asking about the one drive inserted being large enough to do a
  full system backup?  If so then I think it is hard to argue against a
  full backup.  I think I would do the full backup even with the extra
  disk activity.  It is read, not write, and so not as bad as normal
  read-write disk activity.
 
 Agreed.

This is what I'm doing now. I inserted a 2TB drive to make a backup, but
after booting the machine I noticed that my data is 2.3TB in size. :( Since
most of the data is rsnapshot backups, I will copy just the newest one so I have
something if something happens. I can't do a full backup.

One question: since most of my data consists of hard links, what would happen if I
just used
# cp -a /data /mnt/newdrive

Would this command copy every file more than once (every hard link as a separate
file), or would the -d implied by -a copy them as hard links on the destination
file system?

Could this be done with midnight commander too?
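
For what it's worth, the tool usually used to copy rsnapshot trees while keeping
the hard-link structure is rsync with -H (a sketch; paths assumed):

rsync -aH /data/ /mnt/newdrive/
# -a = archive mode, -H = preserve hard links (not implied by -a)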

 
  In which case you might consider that instead of replacing all disks
  one by one that you could simply do a full backup, then create the new
  system with lvm and raid as desired, and then restore the backup onto
  the newly constructed partitions.  After you have the full backup then
  your original drives would be shut off and available as a backup image
  too in that case.  So that also seems a very safe operation.
 
 This is my preferred method.  Cleaner, simpler.  Still not as simple as
 moving to hardware RAID though.

The problem is that I don't have another 3TB drive to do a full backup. Also, this
method requires another 4 SATA ports, which I don't have, and maybe a PSU to
support 8 drives (this one is 

Re: Replacing failed drive in software RAID

2013-11-01 Thread Jonathan Dowland
On Thu, Oct 31, 2013 at 03:41:18PM +0100, Veljko wrote:
 # sgdisk --backup=table /dev/sdb
 # sgdisk --load-backup=table /dev/sda
 # sgdisk -G /dev/sda

I'm not familiar with sgdisk, but you may need to call partprobe
after these stages and before these ones…

 # mdadm --manage /dev/md0 --add /dev/sda2

…to ensure that /dev/sdaX have appeared.
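
i.e. something like this (a sketch, using the commands from the earlier mail):

sgdisk --load-backup=table /dev/sda
partprobe /dev/sda          # ask the kernel to re-read the new partition table
mdadm --manage /dev/md0 --add /dev/sda2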





Re: Replacing failed drive in software RAID

2013-11-01 Thread Veljko
On Fri, Nov 01, 2013 at 12:23:07PM +, Jonathan Dowland wrote:
 On Thu, Oct 31, 2013 at 03:41:18PM +0100, Veljko wrote:
  # sgdisk --backup=table /dev/sdb
  # sgdisk --load-backup=table /dev/sda
  # sgdisk -G /dev/sda
 
 I'm not familiar wiht sgdisk but you may need to call partprobe
 after these stages and before these ones…
 
  # mdadm --manage /dev/md0 --add /dev/sda2
 
 …to ensure that /dev/sdaX have appeared.

Isn't that to inform the OS of partition table changes? In this case,
partition table stays the same. 

Regards,
Veljko





Re: Replacing failed drive in software RAID

2013-11-01 Thread Pascal Hambourg
Hello,

Veljko wrote:
 
 I think it is BIOS boot ordering. I don't remember how I installed the MBR. Is it
 even possible to have an MBR on both sda and sdb?

Of course.

 I don't know either, but in case that boot sector is not copied, could I just
 copy the first 446 bytes? That is where the MBR is located, without touching
 the partition table. So could something like this work:
 
 dd if=/dev/sdb of=/dev/sda bs=446 count=1

I guess no. When installed in the MBR, grub usually installs another
part in the unallocated space between the MBR and the first partition.
Use grub-install (or install-grub, I never remember) instead.





Re: Replacing failed drive in software RAID

2013-11-01 Thread Veljko
On Fri, Nov 01, 2013 at 03:13:51PM +0100, Pascal Hambourg wrote:
  I don't know either, but in case that boot sector is not copied, could I just
  copy the first 446 bytes? That is where the MBR is located, without touching
  the partition table. So could something like this work:
  
  dd if=/dev/sdb of=/dev/sda bs=446 count=1
 
 I guess no. When installed in the MBR, grub usually installs another
 part in the unallocated space between the MBR and the first partition.
 Use grub-install (or install-grub, I never remember) instead.

Thanks Pascal, guess I'll just use grub-install.

Regards,
Veljko 





Re: Replacing failed drive in software RAID

2013-11-01 Thread Pascal Hambourg
Stan Hoeppner wrote:
 
 This is precisely why I use hardware RAID HBAs for boot disks (and most
 often for data disks as well).  The HBA's BIOS makes booting transparent
 after drive failure.  In addition you only have one array (hardware)
 instead of 3 (mdraid).

MD RAID arrays can be partitioned, or contain multiple LVM logical
volumes. So you don't have to create multiple arrays, unless they are of
different types (e.g. RAID 1 and RAID 10 as in this thread).
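
A sketch of that kind of layout, with one RAID10 array carrying LVM (sizes and
names are illustrative; the small RAID1 for /boot from this thread would still be
separate, per the caveat above):

mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sd[abcd]3
pvcreate /dev/md1
vgcreate vg0 /dev/md1
lvcreate -L 20G -n root vg0
lvcreate -L 4G -n swap vg0
lvcreate -l 100%FREE -n data vg0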





Re: Replacing failed drive in software RAID

2013-11-01 Thread Jonathan Dowland
On Fri, Nov 01, 2013 at 01:47:37PM +0100, Veljko wrote:
 Isn't that to inform the OS of partition table changes? In this case,
 partition table stays the same. 

It has changed: prior to sgdisk, sda has no partitions and sda1, sda2
etc. do not exist. After you clone the table from sdb, they do. My lack
of familiarity with sgdisk means I don't know whether it triggers a
rescan itself or not.





Re: Replacing failed drive in software RAID

2013-11-01 Thread Veljko
On Fri, Nov 01, 2013 at 03:18:12PM +, Jonathan Dowland wrote:
 On Fri, Nov 01, 2013 at 01:47:37PM +0100, Veljko wrote:
  Isn't that to inform the OS of partition table changes? In this case,
  partition table stays the same. 
 
 It has changed: prior to sgdisk, sda has no partitions and sda1, sda2
 etc. do not exist. After you clone the table from sdb, they do. My lack
 of familiarity with sgdisk means I don't know whether it triggers a
 rescan itself or not.

OK, thanks for clarification.

Regards,
Veljko





Re: Replacing failed drive in software RAID

2013-11-01 Thread Stan Hoeppner
On 11/1/2013 9:19 AM, Pascal Hambourg wrote:
 Stan Hoeppner wrote:

 This is precisely why I use hardware RAID HBAs for boot disks (and most
 often for data disks as well).  The HBA's BIOS makes booting transparent
 after drive failure.  In addition you only have one array (hardware)
 instead of 3 (mdraid).
 
 MD RAID arrays can be partitionned, or contain multiple LVM logical
 volumes. So you don't have to create multiple arrays, unless they are of
 different types (e.g. RAID 1 and RAID 10 as in this thread).

Yes, I'm well aware of md's capabilities.  I was speaking directly to
the OP's situation.

-- 
Stan





Re: Replacing failed drive in software RAID

2013-11-01 Thread Pascal Hambourg
Stan Hoeppner wrote:
 On 11/1/2013 9:19 AM, Pascal Hambourg wrote:
 Stan Hoeppner wrote:
 This is precisely why I use hardware RAID HBAs for boot disks (and most
 often for data disks as well).  The HBA's BIOS makes booting transparent
 after drive failure.  In addition you only have one array (hardware)
 instead of 3 (mdraid).

 MD RAID arrays can be partitionned, or contain multiple LVM logical
 volumes. So you don't have to create multiple arrays, unless they are of
 different types (e.g. RAID 1 and RAID 10 as in this thread).
 
 Yes, I'm well aware of md's capabilities.  I was speaking directly to
 the OP's situation.

So was I. In the OP's situation, there are arrays of different types (1
and 10) so you cannot have one array even with hardware RAID.





Re: Replacing failed drive in software RAID

2013-11-01 Thread Stan Hoeppner


On 11/1/2013 12:23 PM, Pascal Hambourg wrote:
 Stan Hoeppner wrote:
 On 11/1/2013 9:19 AM, Pascal Hambourg wrote:
 Stan Hoeppner wrote:
 This is precisely why I use hardware RAID HBAs for boot disks (and most
 often for data disks as well).  The HBA's BIOS makes booting transparent
 after drive failure.  In addition you only have one array (hardware)
 instead of 3 (mdraid).

 MD RAID arrays can be partitionned, or contain multiple LVM logical
 volumes. So you don't have to create multiple arrays, unless they are of
 different types (e.g. RAID 1 and RAID 10 as in this thread).

 Yes, I'm well aware of md's capabilities.  I was speaking directly to
 the OP's situation.
 
 So was I. In the OP's situation, there are arrays of different types (1
 and 10) so you cannot have one array even with hardware RAID.

Of course he can, and it is preferable to use a single array.  The only
reason the OP has a separate RAID1 is the fact that it is much simpler
to implement boot disk failover with md/RAID1 than with md/RAID10.  And
in fact that is why pretty much everyone who uses only md/RAID has at
least two md arrays on their disks:  a RAID1 set for dual MBR, /boot,
and rootfs, and a separate RAID5/6/10/etc for data.
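(As a rough sketch of that kind of split - the partition names below are only
assumptions, not the OP's exact layout:

# mdadm --create /dev/md0 --level=1  --raid-devices=2 /dev/sda2 /dev/sdb2    # /boot + rootfs
# mdadm --create /dev/md1 --level=10 --raid-devices=4 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3    # data
)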

With hardware based RAID there are no such limitations.  You can create
one array and throw everything on it.  No manually writing an MBR to
multiple disks, none of md's PITA requirements.  Zero downside, lots 'o
upside.

-- 
Stan





Replacing failed drive in software RAID

2013-10-31 Thread Veljko

Hi guys,

I'm using four 3TB drives, so I had to use GPT. Although I'm pretty sure 
I know what I need to do, I want to make sure so I don't lose data. 
Three drives are dying so I'm gonna replace them one by one.


This is the situation:
sda and sdb have four partitions.
sda1, sdb1 - 1MB partitions at the beginning
sda2, sdb2 - boot partition (RAID1 - md0)
sda3, sdb3 - root partition (RAID10 - md1)
sda4, sdb4 - data (RAID10 - md2)

sdc and sdd have three partitions:
sdc1, sdd1 - 1MB partitions at the beginning
sdc2, sdd2 - root partition (RAID10 - md1)
sdc3, sdd3 - data (RAID10 - md2)

There is one more unnecessary complication: I have root and swap logical 
volumes on md1 (sda3, sdb3, sdc2, sdd2). I don't know if I should take that 
into account when replacing the drives.


This is what I plan to do:

Replacing sda
1. Removing sda from all RAID devices
mdadm --manage /dev/md0 --fail /dev/sda2
mdadm --manage /dev/md0 --remove /dev/sda2

mdadm --manage /dev/md1 --fail /dev/sda3
mdadm --manage /dev/md1 --remove /dev/sda3

mdadm --manage /dev/md2 --fail /dev/sda4
mdadm --manage /dev/md2 --remove /dev/sda4

Checking what is the serial number of sda:
# hdparm -i /dev/sda

2. Replacing failed drive
halt
Replace drive with the right serial number.

3. Adding the new hard drive
Here I need to copy partition data from sdb to newly inserted sda. 
sfdisk won't work with GPT so I'm installing gdisk.

# aptitude install gdisk
# sgdisk --backup=table /dev/sdb
# sgdisk --load-backup=table /dev/sda
# sgdisk -G /dev/sda

# mdadm --manage /dev/md0 --add /dev/sda2
# mdadm --manage /dev/md1 --add /dev/sda3
# mdadm --manage /dev/md2 --add /dev/sda4

4. Check if synchronization is in progress:
# cat /proc/mdstat
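(For example, to keep an eye on the resync without retyping:
# watch cat /proc/mdstat
or
# mdadm --detail /dev/md1
)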


After sync complete I will do this for all other drives, so all of them 
will be WD from Red series.


Did I overlook something? Is this going to work? I was also thinking 
about inserting one drive and copying data from the RAID to it so I have a 
backup if something goes wrong. Would that be the right thing to do, or would 
that just load the drives unnecessarily and accelerate their failure?


Regards,
Veljko





Re: Replacing failed drive in software RAID

2013-10-31 Thread Bob Proulx
Veljko wrote:
 I'm using four 3TB drives, so I had to use GPT. Although I'm pretty
 sure I know what I need to do, I want to make sure so I don't lose
 data. Three drives are dying so I'm gonna replace them one by one.

Sounds like a good plan to me.  It is what I would do.  It is what I
have done before when upgrading sizes to larger sizes.

 This is what I plan to do:
 Replacing sda
 ...
 Did I overlook something? Is this going to work?

Very well thought out plan!  Looks okay to me.  I like it.  Some boot
issues to discuss however.

Is this a BIOS boot ordering boot system booting from sda?  In which
case replacing sda won't have an MBR to boot from.  You can probably
use your BIOS boot to select a different disk to boot from.  And then
after having booted install grub on the other disk.  (Sometimes the
BIOS boot order will be quite different from the Linux kernel drive
ordering.)

I am unfamiliar with the sgdisk backup and load-backup operation.  I
am not sure that will restore the grub boot sector.  This isn't too
scary because you can always boot one of the other drives.  Or boot a
debian-install rescue media.  But after setting up the replacement
disk it will probably be necessary to install grub upon it in order
for it to be bootable as the first BIOS boot media.

And very often I have found that a second disk that I thought should
have had grub installed upon it did not and when removing sda I find
that the system won't grub boot from sdb.  Therefore I normally
restore sda, boot, install grub on sdb, then try again.  But if you
know ahead of time you can re-install grub on sdb and avoid the
possible hiccup there.  But if you are concerned about writes to sdb
then I would simply plan to boot from the debian-installer image in
rescue mode, assemble the raid, sync, then replace sdb, and repeat.
You can always install grub to the boot sectors after replacing the
suspect disks.  Hopefully this makes sense.
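(Concretely, assuming grub2 / the grub-pc package, something along these
lines before pulling sda:

# grub-install /dev/sdb
# update-grub
)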

 I was also thinking about inserting one drive and copying data from
 the RAID to it so I have a backup if something goes wrong. Would that be
 the right thing to do, or would that just load the drives unnecessarily and
 accelerate their failure?

Are you asking about the one drive inserted being large enough to do a
full system backup?  If so then I think it is hard to argue against a
full backup.  I think I would do the full backup even with the extra
disk activity.  It is read, not write, and so not as bad as normal
read-write disk activity.

In which case you might consider that instead of replacing all disks
one by one that you could simply do a full backup, then create the new
system with lvm and raid as desired, and then restore the backup onto
the newly constructed partitions.  After you have the full backup then
your original drives would be shut off and available as a backup image
too in that case.  So that also seems a very safe operation.
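(One possible way to take such a full copy onto a spare disk - the device
name and mount point below are only examples:

# mount /dev/sde1 /mnt/backup
# rsync -aHAXx / /mnt/backup/
# rsync -aHAXx /boot /home /mnt/backup/    # -x stays on one filesystem, so copy each mount separately
)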

Or since you have four new drives go ahead and construct a new base
configuration with the four new drives with lvm+raid as desired.  And
then clone directly from the old system disks to the new system
disks.  Then boot the new system disks.  This has much more offline
time than the replace one disk at a time that you outlined above.  I
normally do the sync one disk at a time since the system is online and
running services normally during the sync.  But there are many ways to
accomplish the task.

Bob




Re: Replacing failed drive in software RAID

2013-10-31 Thread Stan Hoeppner
On 10/31/2013 3:41 PM, Bob Proulx wrote:
 Veljko wrote:
 I'm using four 3TB drives, so I had to use GPT. Although I'm pretty
 sure I know what I need to do, I want to make sure so I don't lose
 data. Three drives are dying so I'm gonna replace them one by one.
 
 Sounds like a good plan to me.  It is what I would do.  It is what I
 have done before when upgrading sizes to larger sizes.
 
 This is what I plan to do:
 Replacing sda
 ...
 Did I overlook something? Is this going to work?
 
 Very well thought out plan!  Looks okay to me.  I like it.  Some boot
 issues to discuss however.
 
 Is this a BIOS boot ordering boot system booting from sda?  In which
 case replacing sda won't have an MBR to boot from.  You can probably
 use your BIOS boot to select a different disk to boot from.  And then
 after having booted install grub on the other disk.  (Sometimes the
 BIOS boot order will be quite different from the Linux kernel drive
 ordering.)
 
 I am unfamiliar with the sgdisk backup and load-backup operation.  I
 am not sure that will restore the grub boot sector.  This isn't too
 scary because you can always boot one of the other drives.  Or boot a
 debian-install rescue media.  But after setting up the replacement
 disk it will probably be necessary to install grub upon it in order
 for it to be bootable as the first BIOS boot media.
 
 And very often I have found that a second disk that I thought should
 have had grub installed upon it did not and when removing sda I find
 that the system won't grub boot from sdb.  Therefore I normally
 restore sda, boot, install grub on sdb, then try again.  But if you
 know ahead of time you can re-install grub on sdb and avoid the
 possible hiccup there.  But if you are concerned about writes to sdb
 then I would simply plan to boot from the debian-installer image in
 rescue mode, assemble the raid, sync, then replace sdb, and repeat.
 You can always install grub to the boot sectors after replacing the
 suspect disks.  Hopefully this makes sense.

This is precisely why I use hardware RAID HBAs for boot disks (and most
often for data disks as well).  The HBA's BIOS makes booting transparent
after drive failure.  In addition you only have one array (hardware)
instead of 3 (mdraid).  You have only 3 partitions to create instead of
9, these residing on top of the one array device, not used to build
multiple software array devices.  So you have one /boot, root fs, and
data, and only one MBR to maintain.  The RAID controller literally turns
your 4 drives into one, unlike soft RAID.

The 4 port Adaptec is cheap, $200 USD, and a perfect fit for 4 drives:
http://www.adaptec.com/en-us/products/series/6e/
http://www.newegg.com/Product/Product.aspx?Item=N82E16816103229

And because it has 128MB cache you get a small performance boost.

 I was also thinking about inserting one drive and copying data from
 the RAID to it so I have a backup if something goes wrong. Would that be
 the right thing to do, or would that just load the drives unnecessarily and
 accelerate their failure?
 
 Are you asking about the one drive inserted being large enough to do a
 full system backup?  If so then I think it is hard to argue against a
 full backup.  I think I would do the full backup even with the extra
 disk activity.  It is read, not write, and so not as bad as normal
 read-write disk activity.

Agreed.

 In which case you might consider that instead of replacing all disks
 one by one that you could simply do a full backup, then create the new
 system with lvm and raid as desired, and then restore the backup onto
 the newly constructed partitions.  After you have the full backup then
 your original drives would be shut off and available as a backup image
 too in that case.  So that also seems a very safe operation.

This is my preferred method.  Cleaner, simpler.  Still not as simple as
moving to hardware RAID though.

 Or since you have four new drives go ahead and construct a new base
 configuration with the four new drives with lvm+raid as desired.  And
 then clone directly from the old system disks to the new system
 disks.  Then boot the new system disks.  This has much more offline
 time than the replace one disk at a time that you outlined above.  I
 normally do the sync one disk at a time since the system is online and
 running services normally during the sync.  But there are many ways to
 accomplish the task.

And yes there is more down time with this method.

-- 
Stan





Re: amd64 debian wheezy installation: software raid: partition misaligned by 512 bytes

2013-05-27 Thread Virgo Pärna
On Sun, 26 May 2013 15:42:26 -0700, Alexandru Cardaniuc cardan...@gmail.com 
wrote:

 I can try to move the md2p1 partition with the parted. Would that work? :)



As I said, I've never done it myself. And you probably need to shrink those 
partitions 
first, to create room for moving. 

-- 
Virgo Pärna 
virgo.pa...@mail.ee





Re: amd64 debian wheezy installation: software raid: partition misaligned by 512 bytes

2013-05-26 Thread Alexandru Cardaniuc
I can try to move the md2p1 partition with the parted. Would that work? :)


On Tue, May 21, 2013 at 11:03 PM, Virgo Pärna virgo.pa...@mail.ee wrote:

 On Tue, 21 May 2013 09:12:10 -0700, Alexandru Cardaniuc 
 cardan...@gmail.com wrote:
 
  Ok, so any way to fix that now? Without reinstalling everything? Any way
  to move the partitions?
 

 Don't really know. Possibly, by making partition shorter and then
 moving.
 Never had to do this.

  Also, I don't remember creating these md partitions. The installer does
  that by default? I should do that part manually during installation? But I
  didn't see that option in the installer...
 

 I have never set up RAID at install time. My experience is only with
 moving an existing system to RAID using an additional disk and moving
 that same system to a new mirror.
 But if it automatically sets up the system like this, then it should
 count as a bug in the installer - AFAIK.


 --
 Virgo Pärna
 virgo.pa...@mail.ee






-- 
Sincerely yours,
Alexandru Cardaniuc


Re: amd64 debian wheezy installation: software raid: partition misaligned by 512 bytes

2013-05-24 Thread Dan Ritter
On Tue, May 21, 2013 at 09:12:10AM -0700, Alexandru Cardaniuc wrote:
 Ok, so any way to fix that now? Without reinstalling everything? Any way to
 move the partitions?
 
 Also, I don't remember creating these md partitions. The installer does
 that by default? I should do that part manually during installation? But I
 didn't see that option in the installer...
 

md partitions are created when you select software RAID in the
installer, or manually created with mdadm.

-dsr-





Re: amd64 debian wheezy installation: software raid: partition misaligned by 512 bytes

2013-05-22 Thread Virgo Pärna
On Tue, 21 May 2013 09:12:10 -0700, Alexandru Cardaniuc cardan...@gmail.com 
wrote:

 Ok, so any way to fix that now? Without reinstalling everything? Any way to
 move the partitions?


Don't really know. Possibly, by making partition shorter and then moving. 
Never had to do this.

 Also, I don't remember creating these md partitions. The installer does
 that by default? I should do that part manually during installation? But I
 didn't see that option in the installer...


I have never set up RAID at install time. My experience is only with moving 
an existing system to RAID using an additional disk and moving that same 
system to a new mirror.
But if it automatically sets up the system like this, then it should count 
as a bug in the installer - AFAIK.


-- 
Virgo Pärna 
virgo.pa...@mail.ee





Re: amd64 debian wheezy installation: software raid: partition misaligned by 512 bytes

2013-05-21 Thread Virgo Pärna
On Sun, 19 May 2013 21:41:33 -0700, Alexandru Cardaniuc cardan...@gmail.com 
wrote:

 1. Should the /dev/md{0,1,2}p1 devices Start at 64 and not 63? Or some
 other cylinder number to be properly aligned? That would explain the exact
 512 bytes misalignment (1 sector)?

 2. How did that happen? The partitiions on the disk were properly aligned,
 the sda1 sda2 sda3 sdb1 sdb2 sdb3? If they were properly aligned why
 weren't md devices created with proper alignment?


/dev/md? devices themselves were aligned properly, but you also created 
partitions inside md devices. And those partitions are not aligned properly 
anymore. So they are now aligned to 1 MiB + 63 sectors instead of 1 MiB.
In your case, you should have used those md devices without any additional 
partition tables (since you only created one partition on them anyway). Or you 
should have aligned those partitions the same way the original disk partitions 
were aligned.
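(For comparison, a partition created inside the md device starting on a 1 MiB
boundary would look roughly like this, done with parted on a freshly created
array - not on one that already holds data; sizes are only examples:

# parted -a optimal /dev/md0 mklabel msdos
# parted -a optimal /dev/md0 mkpart primary 2048s 100%
# parted /dev/md0 unit s print
)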


-- 
Virgo Pärna 
virgo.pa...@mail.ee





Re: amd64 debian wheezy installation: software raid: partition misaligned by 512 bytes

2013-05-21 Thread Alexandru Cardaniuc
Ok, so any way to fix that now? Without reinstalling everything? Any way to
move the partitions?

Also, I don't remember creating these md partitions. The installer does
that by default? I should do that part manually during installation? But I
didn't see that option in the installer...


thanks,
Alexandru


On Tue, May 21, 2013 at 6:38 AM, Virgo Pärna virgo.pa...@mail.ee wrote:

 On Sun, 19 May 2013 21:41:33 -0700, Alexandru Cardaniuc 
 cardan...@gmail.com wrote:
 
  1. Should the /dev/md{0,1,2}p1 devices Start at 64 and not 63? Or some
  other cylinder number to be properly aligned? That would explain the
 exact
  512 bytes misalignment (1 sector)?
 
  2. How did that happen? The partitiions on the disk were properly
 aligned,
  the sda1 sda2 sda3 sdb1 sdb2 sdb3? If they were properly aligned why
  weren't md devices created with proper alignment?
 

 /dev/md? devices themselves were aligned properly, but you also created
 partitions inside md devices. And those partitions are not aligned properly
 anymore. So they are now aligned to 1 MiB + 63 sectors instead of 1 MiB.
 In your case, you should have used those md devices without any additional
 partition tables (since you only created one partition on them anyway).
 Or you should have aligned those partitions the same way the original
 disk partitions were aligned.


 --
 Virgo Pärna
 virgo.pa...@mail.ee






-- 
Sincerely yours,
Alexandru Cardaniuc


amd64 debian wheezy installation: software raid: partition misaligned by 512 bytes

2013-05-19 Thread Alexandru Cardaniuc
Hi,

I built my current desktop and installed debian etch a few years ago. Then
in time I successfully upgraded to lenny and squeeze when they were both
officially released at the time. A couple of weeks ago I updated to wheezy,
and that screwed up grub configuration and debian refused to boot from
mdraid devices (I am running software RAID-1). Spent some time fixing it,
but by that time my drives started failing, so I decided to upgrade.
I ordered two 1TB WD Blue WD10EZEX drives. These apparently use Advanced
Format, which I didn't pay attention to at the time.

So, I reinstalled Debian Wheezy using netinst.iso. I configured during
installation 3 partitions:

/dev/sda1 50G /
/dev/sda2 10G swap
/dev/sda3 (rest of the drive ~940G) /home

Then same with /dev/sdb

/dev/sdb1 50G /
/dev/sdb2 10G swap
/dev/sdb3 ~940G /home

then used the installer to configure software RAID-1 like that:
/dev/sda1 + /dev/sdb1 - /dev/md0
/dev/sda2 + /dev/sdb2 - /dev/md1
/dev/sda3 + /dev/sdb3 - /dev/md2

With that installation proceeded nicely and I have debian wheezy now
installed. The problem is that after I logged in Gnome I see in Disk
Utility, for all 3 RAID md devices:

WARNING: The partition is misaligned by 512 bytes. This may result in very
poor performance. Repartitioning is suggested.

Now, when I check the partitions that we created during installation on the
disk seem to actually be aligned properly with the offset of 2048 sectors
(x 512 bytes = 1M?):

root@sonata:~# fdisk -l /dev/sda /dev/sdb

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x00030b9a

   Device Boot      Start         End     Blocks   Id  System
/dev/sda1   *        2048    97656831   48827392   fd  Linux raid autodetect
/dev/sda2        97656832   117188607    9765888   fd  Linux raid autodetect
/dev/sda3       117188608  1953523711  918167552   fd  Linux raid autodetect

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x0006affd

   Device Boot      Start         End     Blocks   Id  System
/dev/sdb1   *        2048    97656831   48827392   fd  Linux raid autodetect
/dev/sdb2        97656832   117188607    9765888   fd  Linux raid autodetect
/dev/sdb3       117188608  1953523711  918167552   fd  Linux raid autodetect

On the other hand, when I check the RAID-1 devices I get the misalignment:

root@sonata:~# fdisk -l /dev/md0 /dev/md1 /dev/md2

Disk /dev/md0: 50.0 GB, 49965563904 bytes
255 heads, 63 sectors/track, 6074 cylinders, total 97588992 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x000cfa3c

    Device Boot  Start       End     Blocks   Id  System
/dev/md0p1   *      63  97578809  48789373+   83  Linux
Partition 1 does not start on physical sector boundary.

Disk /dev/md1: 9991 MB, 9991749632 bytes
255 heads, 63 sectors/track, 1214 cylinders, total 19515136 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x000e0613

    Device Boot  Start       End    Blocks   Id  System
/dev/md1p1          63  19502909  9751423+   82  Linux swap / Solaris
Partition 1 does not start on physical sector boundary.

Disk /dev/md2: 940.1 GB, 940069158912 bytes
255 heads, 63 sectors/track, 114290 cylinders, total 1836072576 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x0001ead5

Device Boot  Start End  Blocks   Id  System
/dev/md2p1  63  1836068849   918034393+  83  Linux
Partition 1 does not start on physical sector boundary.

Now, the questions :)

1. Should the /dev/md{0,1,2}p1 devices Start at 64 and not 63? Or some
other cylinder number to be properly aligned? That would explain the exact
512 bytes misalignment (1 sector)?

2. How did that happen? The partitiions on the disk were properly aligned,
the sda1 sda2 sda3 sdb1 sdb2 sdb3? If they were properly aligned why
weren't md devices created with proper alignment?

3. How significant of a performance drag are we talking in this particular
case?

4. What are my options right now? How do I fix it? Can I fix it without
reinstalling Debian Wheezy again?

5. If I have to go through the process of reinstalling again using debian
wheezy netinst.iso, what exactly (and how) am I supposed to choose to make
sure that md0

Grub Loading software raid.

2013-05-14 Thread Simon Jones
Hello folks,

I have built a system with Debian software RAID-1 and everything works great
until you pull a drive and try to boot: the system just hangs at "Grub loading".

If I pop the drive back in, the system boots normally. I then do # mdadm
/dev/md0 -a /dev/sdb, which adds the drive and begins the rebuild, as shown by
cat /proc/mdstat.

I installed the boot loader to the second drive by doing # grub-install /dev/sdb

Then checked the disks;

# fdisk -l

Disk /dev/sda: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x000a1072

   Device Boot  Start End  Blocks   Id  System
/dev/sda1   1  121602   976760832   fd  Linux raid autodetect

Disk /dev/sdb: 1000.2 GB, 1000204886016 bytes
255 heads, 63 sectors/track, 121601 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disk identifier: 0x000a4dbb

   Device Boot  Start End  Blocks   Id  System
/dev/sdb1   1  121602   976760832   fd  Linux raid autodetect

But still, when I remove the drive it just hangs at "Grub loading" and doesn't
go any further. What am I doing wrong?

Thanks.

- Simon.





Booting Wheezy Install on Software Raid 6

2013-04-15 Thread deb...@paulscrap.com
Hi Folks,

I'm in the process of assembling a storage system, and  am running into
an issue while testing the setup in a VM.

The setup has 6 three-terabyte hard drives that I'd like to put in RAID
6 (eventually more will be added, expanding the array).  I'd like
everything to be on there, with every HD capable of booting the system.
Ultimately the RAID 6 array will host an LVM partition that will be used
for the whole system (unless /boot is put on a separate array).

I've made several attempts with the current Wheezy (testing) installer,
but all have failed at installing a bootloader.  I've tried using the
whole disk as a RAID partition, setting up the disks with two partitions
(one small one as part of a RAID 1 array for boot, the rest of the drive
for the raid 6 array), and the same but with a gap at the front of the
drive (I've read that Grub sometimes needs this?).

Any suggestions or pointers?  Most of what I've found seems to assume
you have a separate boot drive.

- PaulNM







Re: Booting Wheezy Install on Software Raid 6

2013-04-15 Thread Gary Dale

On 15/04/13 07:25 AM, deb...@paulscrap.com wrote:

Hi Folks,

I'm in the process of assembling a storage system, and  am running into
an issue while testing the setup in a VM.

The setup has 6 three-terabyte hard drives that I'd like to put in RAID
6 (eventually more will be added, expanding the array).  I'd like
everything to be on there, with every HD capable of booting the system.
Ultimately the RAID 6 array will host an LVM partition that will be used
for the whole system (unless /boot is put on a separate array).

I've made several attempts with the current Wheezy (testing) installer,
but all have failed at installing a bootloader.  I've tried using the
whole disk as a RAID partition, setting up the disks with two partitions
(one small one as part of a RAID 1 array for boot, the rest of the drive
for the raid 6 array), and the same but with a gap at the front of the
drive (I've read that Grub sometimes needs this?).

Any suggestions or pointers?  Most of what I've found seems to assume
you have a separate boot drive.

- PaulNM


There are few significant differences between booting from a RAID array 
and booting from any other type of media. The major one I believe is 
that the physical boot device is actually multiple pieces of hardware.


This can mean that each physical drive must be bootable and must contain 
the bootloader. You can ensure this by installing grub on each physical 
drive (/dev/sda, /dev/sdb, etc.). Should the normal boot drive fail, 
booting will fall back onto another drive.
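(With six drives that can be scripted, device names assumed:

# for d in /dev/sd[a-f]; do grub-install $d; done
)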


Wheezy can boot from RAID arrays so I recommend partitioning each drive 
into a single large partition and use those to create a large RAID 
array. You can partition this array as you like.


If the boot still fails, try to track down why it is failing. 
/etc/mdadm/mdadm.conf should contain the UUIDs for the RAID array(s). 
/boot/grub/grub.cfg should have the UUID for the / partition. If you 
have a partitioned array, these should be different. Verify that the 
arrays and partitions are identified correctly.


It rarely hurts to update-initramfs -u and update-grub afterwards. This 
will ensure that any changes you have made are reflected in the boot 
process (Squeeze doesn't generate the correct UUIDs for partitioned RAID 
arrays but Wheezy should).








Re: Software Raid, recovery after drive broke

2011-10-29 Thread Tom H
On Sat, Oct 29, 2011 at 1:47 PM, Raf Czlonka r...@linuxstuff.pl wrote:
 On Sat, Oct 29, 2011 at 01:45:43PM BST, Tom H wrote:
 On Fri, Oct 28, 2011 at 2:28 PM, Raf Czlonka r...@linuxstuff.pl wrote:
  On Fri, Oct 28, 2011 at 06:50:05PM BST, Tom H wrote:
  Given your posts, you're clearly confused...
 
  Would you care to elaborate?

 No.

 Please refrain from posts claiming that someone's confused if you can't
 point out what makes you think that.

No comment...





Re: Software Raid, recovery after drive broke

2011-10-29 Thread Tom H
On Fri, Oct 28, 2011 at 2:28 PM, Raf Czlonka r...@linuxstuff.pl wrote:
 On Fri, Oct 28, 2011 at 06:50:05PM BST, Tom H wrote:
 Given your posts, you're clearly confused...

 Would you care to elaborate?

No.





Re: Software Raid, recovery after drive broke

2011-10-29 Thread Raf Czlonka
On Sat, Oct 29, 2011 at 01:45:43PM BST, Tom H wrote:
 On Fri, Oct 28, 2011 at 2:28 PM, Raf Czlonka r...@linuxstuff.pl wrote:
  On Fri, Oct 28, 2011 at 06:50:05PM BST, Tom H wrote:
  Given your posts, you're clearly confused...
 
  Would you care to elaborate?
 
 No.

Please refrain from posts claiming that someone's confused if you can't
point out what makes you think that.

Regards,
-- 
Raf





Re: Software Raid, recovery after drive broke

2011-10-28 Thread Raf Czlonka
On Fri, Oct 21, 2011 at 06:12:37PM BST, Tom H wrote:
 I'm sorry that I've confused you. I was just pointing out that the
 demarcation line between mdraid and dmraid isn't as straightforward as
 you made it out to be because mdadm (which is an mdraid tool) can be
 used to manage dmraid arrays.

You hadn't confused me :^)
I've never claimed it cannot. The difference is quite substantial
though:

dmraid

DESCRIPTION
dmraid discovers block and software RAID devices (eg, ATARAID)
by using multiple different metadata format handlers which
support various formats (eg, Highpoint 37x series).
It offers activating RAID sets made up by 2 or more discovered RAID
devices, display properties of devices and sets
(see option -l for supported metadata formats).
Block device access to activated RAID sets occurs via device-mapper
nodes /dev/mapper/RaidSetName.
RaidSetName starts with the format name (see -l option) which can be
used to access all RAID sets of a specific format easily with
certain options (eg, -a below).

mdadm is for kernel software RAID and can handle *any* block device,
no underlying RAID required whatsoever.
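(A quick way to see which of the two a given system is actually using:

# cat /proc/mdstat    # Linux software RAID (md) arrays, if any
# dmraid -r           # BIOS/fake-RAID metadata handled by dmraid, if any
)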

Regards,
-- 
Raf





Re: Software Raid, recovery after drive broke

2011-10-28 Thread Tom H
On Fri, Oct 28, 2011 at 9:50 AM, Raf Czlonka r...@linuxstuff.pl wrote:
 On Fri, Oct 21, 2011 at 06:12:37PM BST, Tom H wrote:

 I'm sorry that I've confused you. I was just pointing out that the
 demarcation line between mdraid and dmraid isn't as straightforward as
 you made it out to be because mdadm (which is an mdraid tool) can be
 used to manage dmraid arrays.

 You hadn't confused me :^)

Given your posts, you're clearly confused...





Re: Software Raid, recovery after drive broke

2011-10-28 Thread Raf Czlonka
On Fri, Oct 28, 2011 at 06:50:05PM BST, Tom H wrote:
 Given your posts, you're clearly confused...

Would you care to elaborate?

-- 
Raf





Re: Software Raid, recovery after drive broke

2011-10-21 Thread Raf Czlonka
On Wed, Oct 19, 2011 at 12:29:14AM BST, Tom H wrote:
 On Tue, Oct 18, 2011 at 8:55 AM, Raf Czlonka r...@linuxstuff.pl wrote:
 
  dmraid and mdraid are not the same thing.
 
 Except for the fact that you can manage a dmraid array with mdadm
 (IIRC, you have to have containers on the DEVICE line in
 mdadm.conf, but there may be more to it than that).

Device mapper managed ATARAID can be in the container - I never claimed
it cannot. Please, do not confuse users.

dmraid as is Serial ATA RAID, SATA RAID or Fake RAID and mdadm (md raid)
as in Linux Software RAID[0].

[0] http://wiki.debian.org/DebianInstaller/SataRaid

Regards,
-- 
Raf





Re: Software Raid, recovery after drive broke

2011-10-21 Thread Tom H
On Fri, Oct 21, 2011 at 8:57 AM, Raf Czlonka r...@linuxstuff.pl wrote:
 On Wed, Oct 19, 2011 at 12:29:14AM BST, Tom H wrote:
 On Tue, Oct 18, 2011 at 8:55 AM, Raf Czlonka r...@linuxstuff.pl wrote:

 dmraid and mdraid are not the same thing.

 Except for the fact that you can manage a dmraid array with mdadm
 (IIRC, you have to have containers on the DEVICE line in
 mdadm.conf, but there may be more to it than that).

 Device mapper managed ATARAID can be in the container - I never claimed
 it cannot. Please, do not confuse users.

I'm sorry that I've confused you. I was just pointing out that the
demarcation line between mdraid and dmraid isn't as straightforward as
you made it out to be because mdadm (which is an mdraid tool) can be
used to manage dmraid arrays.





Software Raid, recovery after drive broke

2011-10-18 Thread Bartek W. aka Mastier

Hi,

I got two drives in RAID1 matrix, each one got 2 partitions, first boot 
also in raid, containing grub and such


Disk /dev/sda: 1500.3 GB, 1500301910016 bytes
255 heads, 63 sectors/track, 182401 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00033dcf

   Device Boot  Start     End      Blocks   Id  System
/dev/sda1   *       1      63      498688   fd  Linux raid autodetect
Partition 1 does not end on cylinder boundary.
/dev/sda2          63  182402  1464638413   fd  Linux raid autodetect
Partition 2 does not end on cylinder boundary.

The second drive broke, so I need to replace it now.
I got two software raids, one for boot, the second for lvm2...

root@hydra:~# cat /proc/mdstat
Personalities : [raid1]
md1 : active raid1 sda2[0]
  1464637253 blocks super 1.2 [2/1] [U_]

md0 : active raid1 sda1[0]
  498676 blocks super 1.2 [2/1] [U_]

unused devices: none

My question is: what if I connect the new drive and it appears as 
/dev/sda, and the already working one becomes /dev/sdb? Will it 
overwrite the working one?


The second question: should I first create partitions on this 
new drive? What about GRUB? I was struggling last time to install it 
on both drives in the MBR (do you have any proper procedure, because 
GRUB2 can boot from dmraid, but cannot install on /dev/md0 for instance).






Re: Software Raid, recovery after drive broke

2011-10-18 Thread Raf Czlonka
On Tue, Oct 18, 2011 at 12:14:58PM BST, Bartek W. aka Mastier wrote:
 Hi,
 
 I got two drives in RAID1 matrix, each one got 2 partitions, first
 boot also in raid, containing grub and such

RAID or RAID array, not matrix.

 My question is, what if I connect the new drive and it will appear
 as /dev/sda, and the already working one will be /dev/sdb. Will it
 overwrite it ?

http://zeldor.biz/2011/09/raid1-replace-broken-hdd/
http://www.howtoforge.com/replacing_hard_disks_in_a_raid1_array
http://www.debian-administration.org/articles/238
http://wiki.yobi.be/wiki/Debian_Soft_Raid

 The second question: should I first create partitions on
 this new drive? What about GRUB? I was struggling last time to
 install it on both drives in the MBR (do you have any proper procedure,
 because GRUB2 can boot from dmraid, but cannot install on
 /dev/md0 for instance).

dmraid and mdraid are not the same thing.

http://wiki.debian.org/DebianInstaller/SataRaid
http://www.howtoforge.com/software-raid1-grub-boot-debian-etch
http://wiki.clug.org.za/wiki/RAID-1_in_a_hurry_with_grub_and_mdadm

That's after a couple of minutes of using google.
You could've found all of the above yourself.

And above all:

% man mdadm

Regards,
-- 
Raf





Re: Software Raid, recovery after drive broke

2011-10-18 Thread Bartek W. aka Mastier

W dniu 18.10.2011 14:55, Raf Czlonka pisze:

On Tue, Oct 18, 2011 at 12:14:58PM BST, Bartek W. aka Mastier wrote:

Hi,

I got two drives in RAID1 matrix, each one got 2 partitions, first
boot also in raid, containing grub and such

RAID or RAID array, not matrix.


My question is, what if I connect the new drive and it will appear
as /dev/sda, and the already working one will be /dev/sdb. Will it
overwrite it ?

http://zeldor.biz/2011/09/raid1-replace-broken-hdd/
http://www.howtoforge.com/replacing_hard_disks_in_a_raid1_array
http://www.debian-administration.org/articles/238
http://wiki.yobi.be/wiki/Debian_Soft_Raid


The second question: should I first create partitions on
this new drive? What about GRUB? I was struggling last time to
install it on both drives in the MBR (do you have any proper procedure,
because GRUB2 can boot from dmraid, but cannot install on
/dev/md0 for instance).

dmraid and mdraid are not the same thing.

http://wiki.debian.org/DebianInstaller/SataRaid
http://www.howtoforge.com/software-raid1-grub-boot-debian-etch
http://wiki.clug.org.za/wiki/RAID-1_in_a_hurry_with_grub_and_mdadm

That's after a couple of minutes of using google.
You could've found all of the above yourself.

And above all:

% man mdadm

Regards,
Wow! The first link is exactly what I need. Thank you so much; you 
rock at googling. Sorry for the inconvenience, I will not bother you 
unnecessarily next time. Thank you very much!






Re: Software Raid, recovery after drive broke

2011-10-18 Thread Tom H
On Tue, Oct 18, 2011 at 8:55 AM, Raf Czlonka r...@linuxstuff.pl wrote:

 dmraid and mdraid are not the same thing.

Except for the fact that you can manage a dmraid array with mdadm
(IIRC, you have to have containers on the DEVICE line in
mdadm.conf, but there may be more to it than that).





installer, software raid and grub

2011-04-27 Thread Milos Negovanovic
Hi all,

I've just performed a squeeze install on a server with 2 drives
configured in RAID1. I followed the installer messages closely and I don't
remember seeing GRUB being installed on /dev/sdb. For a split second
there was a message saying something about GRUB being installed on
/dev/sda ... and that's it. Basically, if / is on software raid (linux md
raid), does the installer install GRUB on all available hard drives? Do I
need to install GRUB manually on /dev/sdb? The idea is that if one of the
drives fails I can still use this system, which includes being able to
boot from either of the drives.

Regards
-- 
Milos Negovanovic
milos.negovano...@gmail.com





Re: installer, software raid and grub

2011-04-27 Thread Roger Leigh
On Wed, Apr 27, 2011 at 08:33:29PM +0100, Milos Negovanovic wrote:
 Hi all,
 
 Ive just performed squeeze install on a server with 2 drives
 configured in RAID1. I followed closely installer messages and I don't
 remember seeing GRUB being installed on /dev/sdb. For a split second
 there was a message saying something about GRUB being installed on
 /dev/sda ... and thats it. Basically if / is on software raid (linux md
 raid) does installer install GRUB on all available hard drives? Do I
 need to install GRUB manually on /dev/sdb? Idea is that if one of the
 drives fail I can still use this system, which includes being able to
 boot from either of the drives.

Just dpkg-reconfigure grub-pc and select all the drives to boot from.

Note if you used gpt partition tables, grub will need a tiny boot
partition to install itself on (bios_grub).  Example from parted:

Number  Start  End Size   File system  Name   Flags
 1  1.00MiB5.00MiB 4.00MiB grub   bios_grub
 2  5.00MiB953867MiB   953862MiB  btrfsbtrfsraid
 3  953867MiB  1907728MiB  953861MiB   mdraid raid

All the main Linux partitions are then on LVM on the mdraid RAID1
device.  The other disc is partitioned identically.

Annoyingly, the installer partitioner is not currently intelligent
enough to set up the bios_grub partition by default, so you have to do
it by hand in parted.
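(Roughly, on a fresh GPT disk - /dev/sda assumed:

# parted /dev/sda mklabel gpt
# parted /dev/sda mkpart grub 1MiB 5MiB
# parted /dev/sda set 1 bios_grub on
)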


Regards,
Roger

-- 
  .''`.  Roger Leigh
 : :' :  Debian GNU/Linux http://people.debian.org/~rleigh/
 `. `'   Printing on GNU/Linux?   http://gutenprint.sourceforge.net/
   `-GPG Public Key: 0x25BFB848   Please GPG sign your mail.




Re: installer, software raid and grub

2011-04-27 Thread Tom H
On Wed, Apr 27, 2011 at 3:33 PM, Milos Negovanovic
milos.negovano...@gmail.com wrote:

 Ive just performed squeeze install on a server with 2 drives
 configured in RAID1. I followed closely installer messages and I don't
 remember seeing GRUB being installed on /dev/sdb. For a split second
 there was a message saying something about GRUB being installed on
 /dev/sda ... and thats it. Basically if / is on software raid (linux md
 raid) does installer install GRUB on all available hard drives? Do I
 need to install GRUB manually on /dev/sdb? Idea is that if one of the
 drives fail I can still use this system, which includes being able to
 boot from either of the drives.

In my experience, it isn't installed on both by default (although I'm
sure that someone'll tell us both that you have that option with an
expert install) and you have to install it manually on sdb.





Help with Software RAID needed

2011-04-23 Thread Jo Galara
I just got my new dedicated server, running Debian Lenny 64bit.
It has 2x500GB software RAID but df -hT only shows a few GB for each
partition:

# df -hT
FilesystemTypeSize  Used Avail Use% Mounted on
/dev/md1  ext33.7G  339M  3.4G  10% /
tmpfstmpfs2.0G 0  2.0G   0% /lib/init/rw
udev tmpfs 10M  576K  9.5M   6% /dev
tmpfstmpfs2.0G 0  2.0G   0% /dev/shm
/dev/mapper/vg00-usr
   xfs4.0G  281M  3.8G   7% /usr
/dev/mapper/vg00-var
   xfs4.0G   48M  4.0G   2% /var
/dev/mapper/vg00-home
   xfs4.0G  4.2M  4.0G   1% /home
none tmpfs2.0G 0  2.0G   0% /tmp




fdisk -l shows the correct sizes:

# fdisk -l

Disk /dev/sda: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0xb6a3db46

   Device Boot  Start    End      Blocks   Id  System
/dev/sda1            1    487     3911796  fd  Linux raid autodetect
/dev/sda2          488    731     1959930  82  Linux swap / Solaris
/dev/sda3          732  60801   482512275  fd  Linux raid autodetect

Disk /dev/sdb: 500.1 GB, 500107862016 bytes
255 heads, 63 sectors/track, 60801 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x7e96aee6

   Device Boot  Start    End      Blocks   Id  System
/dev/sdb1            1    487     3911796  fd  Linux raid autodetect
/dev/sdb2          488    731     1959930  82  Linux swap / Solaris
/dev/sdb3          732  60801   482512275  fd  Linux raid autodetect

Disk /dev/md1: 4005 MB, 4005560320 bytes
2 heads, 4 sectors/track, 977920 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x

Disk /dev/md1 doesn't contain a valid partition table

Disk /dev/md3: 494.0 GB, 494092484608 bytes
2 heads, 4 sectors/track, 120628048 cylinders
Units = cylinders of 8 * 512 = 4096 bytes
Disk identifier: 0x

Disk /dev/md3 doesn't contain a valid partition table

Disk /dev/dm-0: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x

Disk /dev/dm-0 doesn't contain a valid partition table

Disk /dev/dm-1: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x

Disk /dev/dm-1 doesn't contain a valid partition table

Disk /dev/dm-2: 4294 MB, 4294967296 bytes
255 heads, 63 sectors/track, 522 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Disk identifier: 0x

Disk /dev/dm-2 doesn't contain a valid partition table






How can I increase the size of the partitions?


-- 
Regards,

Jo Galara





Re: Help with Software RAID needed

2011-04-23 Thread Jo Galara
More information:


# cat /proc/mdstat
Personalities : [raid1]
md3 : active raid1 sdb3[0] sda3[1]
  482512192 blocks [2/2] [UU]

md1 : active raid1 sda1[0] sdb1[1]
  3911680 blocks [2/2] [UU]

unused devices: none


# mount
/dev/md1 on / type ext3 (rw)
tmpfs on /lib/init/rw type tmpfs (rw,nosuid,mode=0755)
proc on /proc type proc (rw,noexec,nosuid,nodev)
sysfs on /sys type sysfs (rw,noexec,nosuid,nodev)
procbususb on /proc/bus/usb type usbfs (rw)
udev on /dev type tmpfs (rw,mode=0755)
tmpfs on /dev/shm type tmpfs (rw,nosuid,nodev)
devpts on /dev/pts type devpts (rw,noexec,nosuid,gid=5,mode=620)
/dev/mapper/vg00-usr on /usr type xfs (rw)
/dev/mapper/vg00-var on /var type xfs (rw,usrquota)
/dev/mapper/vg00-home on /home type xfs (rw,usrquota)
none on /tmp type tmpfs (rw)

-- 
Regards,

Jo Galara





Re: Help with Software RAID needed

2011-04-23 Thread Arno Schuring
Jo Galara (jogal...@gmail.com on 2011-04-23 19:55 +0200):
 I just got my new dedicated server, running Debian Lenny 64bit.
 It has 2x500gb Software RAID but df -hT only shows a few GB for each
 partition:
[...]
 /dev/mapper/vg00-usr on /usr type xfs (rw)
 /dev/mapper/vg00-var on /var type xfs (rw,usrquota)
 /dev/mapper/vg00-home on /home type xfs (rw,usrquota)
 
You're using LVM. What does
# pvdisplay

show?
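(If it shows free extents in vg00, growing one of the volumes would look
roughly like this - the size and volume name are only examples, and note
that XFS can be grown but not shrunk:

# lvextend -L +100G /dev/vg00/home
# xfs_growfs /home
)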


Regards,
Arno





Software RAID on external USB disk: boot problems after upgrade to squeeze

2011-02-23 Thread Panayiotis Karabassis

Hi!

To simplify things a little: I have an external USB disk (/dev/sdc1) 
that is part of an MD array (/dev/md0).


During boot, I experienced the following problem: /dev/md0 was assembled 
with only 1 drive out of 2 (the internal one). This was due to /dev/sdc1 
taking some time to be detected, i.e. being detected after /dev/md0 was 
assembled.


I had solved this in lenny by using the following 
/usr/share/initramfs-tools/scripts/local-top/mdadm script:


#!/bin/sh
#
# Copyright © 2006-2008 Martin F. Krafft madd...@debian.org
# based on the scripts in the initramfs-tools package.
# released under the terms of the Artistic Licence.
#
set -eu

case ${1:-} in
  prereqs) echo multipath; exit 0;;
esac

. /scripts/functions

maybe_break pre-mdadm

# - local change here !!! -
sleep 13

if [ -e /scripts/local-top/md ]; then
  log_warning_msg old md initialisation script found, getting out of 
its way...

  exit 1
fi

MDADM=/sbin/mdadm
[ -x $MDADM ] || exit 0

...

My solution is broken after the upgrade to squeeze and the problem has 
regressed. Any insight appreciated!


Regards,
Panayiotis





Re: Software RAID on external USB disk: boot problems after upgrade to squeeze

2011-02-23 Thread martin f krafft
also sprach Panayiotis Karabassis pan...@gmail.com [2011.02.23.1029 +0100]:
 I had solved this in lenny by using the following
 /usr/share/initramfs-tools/scripts/local-top/mdadm script:

Set rootdelay=30 on the kernel command line.
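(With grub2 that usually means adding it to /etc/default/grub and
regenerating the config, for example:

GRUB_CMDLINE_LINUX="rootdelay=30"

# update-grub
)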

 My solution is broken after the upgrade to squeeze and the problem
 has regressed. Any insight appreciated!

You will need to provide a lot more information.

-- 
 .''`.   martin f. krafft madduck@d.o  Related projects:
: :'  :  proud Debian developer   http://debiansystem.info
`. `'`   http://people.debian.org/~madduckhttp://vcs-pkg.org
  `-  Debian - when you have better things to do than fixing systems
 
however jewel-like the good will may be in its own right, there is
 a morally significant difference between rescuing someone from
 a burning building and dropping him from a twelfth-storey window
 while trying to rescue him.
   -- thomas nagel




Re: Software RAID on external USB disk: boot problems after upgrade to squeeze

2011-02-23 Thread Panayiotis Karabassis

On 02/23/2011 02:22 PM, martin f krafft wrote:

also sprach Panayiotis Karabassispan...@gmail.com  [2011.02.23.1029 +0100]:
   

I had solved this in lenny by using the following
/usr/share/initramfs-tools/scripts/local-top/mdadm script:
 

Set rootdelay=30 on the kernel command line.
   


Your solution seems to have worked.

Possibly I had added the line in lenny but it got lost in squeeze due to 
the upgrade from grub to grub2.


Also my local changes to 
/usr/share/initramfs-tools/scripts/local-top/mdadm seem to be unnecessary.


BTW, could someone explain the difference between 
GRUB_CMDLINE_LINUX_DEFAULT and GRUB_CMDLINE_LINUX in /etc/default/grub?


Many thanks and regards!
Panayiotis





Re: Software RAID on external USB disk: boot problems after upgrade to squeeze

2011-02-23 Thread Tom H
On Wed, Feb 23, 2011 at 8:01 AM, Panayiotis Karabassis pan...@gmail.com wrote:

 BTW, could someone explain the difference between GRUB_CMDLINE_LINUX_DEFAULT
 and GRUB_CMDLINE_LINUX in /etc/default/grub?

For every vmlinuz... in /boot, GRUB_CMDLINE_LINUX_DEFAULT applies to
the runlevel 2 entry and GRUB_CMDLINE_LINUX applies to both the
runlevel 2 and runlevel S entries.
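(So with, for example,

GRUB_CMDLINE_LINUX_DEFAULT="quiet"
GRUB_CMDLINE_LINUX="rootdelay=30"

the runlevel 2 entries boot with "quiet rootdelay=30" while the recovery
(runlevel S) entries get only "rootdelay=30".)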





Re: Software RAID on external USB disk: boot problems after upgrade to squeeze

2011-02-23 Thread Panayiotis Karabassis

On 02/23/2011 03:10 PM, Tom H wrote:

On Wed, Feb 23, 2011 at 8:01 AM, Panayiotis Karabassispan...@gmail.com  wrote:
   

BTW, could someone explain the difference between GRUB_CMDLINE_LINUX_DEFAULT
and GRUB_CMDLINE_LINUX in /etc/default/grub?
 

For every vmlinuz... in /boot, GRUB_CMDLINE_LINUX_DEFAULT applies to
the runlevel 2 entry and GRUB_CMDLINE_LINUX applies to both the
runlevel 2 and runlevel S entries.


   

Crystal clear. Thanks!





Re: Software RAID on external USB disk: boot problems after upgrade to squeeze

2011-02-23 Thread Tom H
On Wed, Feb 23, 2011 at 8:11 AM, Panayiotis Karabassis pan...@gmail.com wrote:
 On 02/23/2011 03:10 PM, Tom H wrote:
 On Wed, Feb 23, 2011 at 8:01 AM, Panayiotis Karabassispan...@gmail.com
  wrote:

 BTW, could someone explain the difference between
 GRUB_CMDLINE_LINUX_DEFAULT
 and GRUB_CMDLINE_LINUX in /etc/default/grub?

 For every vmlinuz... in /boot, GRUB_CMDLINE_LINUX_DEFAULT applies to
 the runlevel 2 entry and GRUB_CMDLINE_LINUX applies to both the
 runlevel 2 and runlevel S entries.

 Crystal clear. Thanks!

You're welcome.





replacing hardware raid with software raid

2010-05-07 Thread Andreas Sachs
Hi,
I created a RAID-5 array with an LSI MegaRAID 150-6 controller (I used 4 SATA 
drives). Is it possible to use the array without the hardware RAID controller 
with Linux software RAID / mdadm?

I think I read something about that, but I can't find it anymore. mdadm does 
not autodetect the array.
Do I have to provide a RAID configuration manually?

Another problem could be that the RAID configuration on the discs is destroyed 
(I think the data is still OK, but the RAID configuration data stored on the 
discs could be damaged). Is there a possibility to restore it?


(I have a backup of the data from the raid array. I'm just interested if it's 
possible to replace the controller and to restore the data on a test system)



Thanks Andi
-- 
GRATIS für alle GMX-Mitglieder: Die maxdome Movie-FLAT!
Jetzt freischalten unter http://portal.gmx.net/de/go/maxdome01





Re: replacing hardware raid with software raid

2010-05-07 Thread Stan Hoeppner
Andreas Sachs put forth on 5/7/2010 1:50 AM:
 Hi,
 i created a raid-5 array with a LSI MegaRaid 150-6 controller (i used 4 sata 
 drives). Is it possible to use the array without the hardware raid controller 
 with linux software raid/ mdadm?

No, this is not possible.  Did your MegaRAID card die?  If so, you should be
able to purchase a new (I noticed the 150 is past EOL) similar MegaRAID
card, drop it in, enter its BIOS, and have it scan the drives for the RAID
configuration, then save it, and reboot.  (Link to a new in box LSI 150-6 on
Ebay is below)

This is precisely why American Megatrends' RAID division (now a division of
LSI) implemented writing the RAID configuration to the disks themselves
instead of only keeping it in nvram on the card.  This facilitates replacing
a failed controller card without having to restore from tape or other backup
media.  The SCSI AMI MegaRAID cards from the late 1990s had this feature (I
still use one of them, a three channel model 428).  But IIRC back then you
had to replace the failed card with an identical model.  Today, I'm pretty
sure you can use any card in the series which has the same basic BIOS.  It
doesn't have to be the exact BIOS rev, but it has to be the same BIOS code
family IIRC.

Call LSI and tell them what's up.  They should be able to assist you with
getting the correct controller to replace yours and get your array going
again without needing to restore from tape.  If you're just wanting to go to
software RAID after a controller failure, you have no choice but to connect
all the drives to a standard SATA controller, wipe the drives, then
partition them or set up LVM groups, then use mdadm to create a new array.
Then you'll need to restore from tape or other media if you have backups.
If you have no backups, for all practical purposes, your data is lost.  If
you have a few tens of thousands of dollars, I'm sure some of the data
recovery companies could correctly reassemble the data.  But if you have
such financial resources, you'd just buy another LSI card and get back up
and running with little pain or cash outlay.
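
A minimal sketch of that last path, with hypothetical device names (this
destroys whatever is currently on the partitions):

    # mdadm --create /dev/md0 --level=5 --raid-devices=4 \
        /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
    # mkfs.ext3 /dev/md0
    # mdadm --detail --scan >> /etc/mdadm/mdadm.conf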

Here's a new in box LSI 150-6 with cables, identical to your card, on Ebay
for $100 USD Buy It Now.  Sale ends in 5 days.

http://cgi.ebay.com/LSI-MegaRAID-SATA-150-6-RAID-Card-1506064-w-cables-/220600932830?cmd=ViewItempt=LH_DefaultDomain_0hash=item335cd719de

Ships to:   United States, Europe, Canada, Australia, Mexico, Japan

I'm assuming you're in Germany.  The card is in Utah, USA.  Might take a few
days to get to you.  Once it does, plug it in, scan the disks for the
previous RAID configuration, save it, boot up, done.  It _should_ be that
simple.

I hope I've been able to help you in one way or another Andreas.

Good luck.

-- 
Stan







Software raid OK?

2009-04-20 Thread BAGI Akos

Hi List!

I installed a software raid, level1 with 3 disks, one of them is a spare.

I have 2 partitions:
md0 is for / and is made of sda1,sdb1, sdc1
md1 is for swap and made of sda2,sdb2, sdc2

- I can boot from both disks,
- the system works fine.
- mdstat says the raids are active
- mdadm --detail seems to be fine ( Superblock is persistent )

However
mdadm -E says: no md superblock detected on /dev/md0
and
fdisk -l says: no valid partition table found on md0


Is the raid OK or not?
If not, how can I fix it?

THX
Akos






Re: Software raid OK?

2009-04-20 Thread Gilles Mocellin
On Monday 20 April 2009 11:44:31, BAGI Akos wrote:
 Hi List!

 I installed a software raid, level1 with 3 disks, one of them is a spare.

 I have 2 partitions:
 md0 is for / and is made of sda1,sdb1, sdc1
 md1 is for swap and made of sda2,sdb2, sdc2

 - I can boot from both disks,
 - the system works fine.
 - mdstat says the raids are active
 - mdadm --detail seems to be fine ( Superblock is persistent )

 However
 mdadm -E says: no md superblock detected on /dev/md0

mdadm -E handles RAID components, not the resulting RAID device.
You can get information with mdadm -E /dev/sda1, for example.
To see the state of your RAID device, you can do:
$ cat /proc/mdstat
or
$ mdadm --detail /dev/md0

 and
 fdisk -l says: no valid partition table found on md0

fdisk handles disks (devices carrying a partition table), not partitions.
md0 = RAID of the sda1, sdb1, sdc1 partitions = effectively one big partition with a
filesystem directly on it, so having no partition table on md0 is expected.
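
To make the distinction concrete (an editorial sketch using the devices from this
thread): examine a member, query the array, then look at what sits on top of it:

    # mdadm -E /dev/sda1        (member partition: shows its md superblock)
    # mdadm --detail /dev/md0   (the array: state, members, spares)
    # blkid /dev/md0            (the filesystem living directly on md0)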

 Is the raid OK or not?
 If not, how can I fix it?

No problem with your raid.





Re: Software raid OK?

2009-04-20 Thread Michael Iatrou
When the date was Monday 20 April 2009, BAGI Akos wrote:

 Hi List!

 I installed a software raid, level1 with 3 disks, one of them is a spare.

 I have 2 partitions:
 md0 is for / and is made of sda1,sdb1, sdc1
 md1 is for swap and made of sda2,sdb2, sdc2

There is no particularly good reason to have the swap on RAID. You should 
define three independent swap partitions; if a disk fails, the kernel will use 
the others still available.
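
A sketch of what that looks like in /etc/fstab (hypothetical partitions). Giving
the three swap areas equal priority also lets the kernel stripe across them:

    /dev/sda2  none  swap  sw,pri=1  0  0
    /dev/sdb2  none  swap  sw,pri=1  0  0
    /dev/sdc2  none  swap  sw,pri=1  0  0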

-- 
 Michael Iatrou





Re: Software raid OK?

2009-04-20 Thread Douglas A. Tutty
On Mon, Apr 20, 2009 at 03:29:00PM +0300, Michael Iatrou wrote:
 When the date was Monday 20 April 2009, BAGI Akos wrote:
 
  Hi List!
 
  I installed a software raid, level1 with 3 disks, one of them is a spare.
 
  I have 2 partitions:
  md0 is for / and is made of sda1,sdb1, sdc1
  md1 is for swap and made of sda2,sdb2, sdc2
 
 There is no particularly good reason to have the swap on RAID. You should 
 define three independent swap partitions; if a disk fails, the kernel will use 
 the others still available.

If swap fails, what happens if something important to the running of the
system (not just a user app) is swapped-out?  I've seen advice on this
list many times that to avoid a crash, if other system stuff is on raid,
that swap should be as well.

Doug.





Re: Software raid OK?

2009-04-20 Thread Michael Iatrou
When the date was Monday 20 April 2009, Douglas A. Tutty wrote:

 On Mon, Apr 20, 2009 at 03:29:00PM +0300, Michael Iatrou wrote:
  When the date was Monday 20 April 2009, BAGI Akos wrote:
   Hi List!
  
   I installed a software raid, level1 with 3 disks, one of them is a
   spare.
  
   I have 2 partitions:
   md0 is for / and is made of sda1,sdb1, sdc1
   md1 is for swap and made of sda2,sdb2, sdc2
 
  There is no particularly good reason to have the swap on RAID. You
  should define three independent swap partitions; if a disk fails, the
  kernel will use the others still available.

 If swap fails, what happens if something important to the running of the
 system (not just a user app) is swapped-out?  I've seen advice on this
 list many times that to avoid a crash, if other system stuff is on raid,
 that swap should be as well.

I cannot confirm that; instead I am assuming a workflow like the following:

1. A disk is about to fail
2. Notification from SMART hits sysadmin's mailbox
3. # swapoff /dev/sdXY
4. Replace disk, create partitions
5. # swapon /dev/sdXY
6. # mdadm /dev/mdK -a /dev/sdXZ
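
Editorial aside, not part of the original list: before physically pulling the
disk in step 4, its RAID member partitions are normally failed and removed from
their arrays first, for example:

    # mdadm /dev/mdK --fail /dev/sdXZ --remove /dev/sdXZ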

-- 
 Michael Iatrou





Re: Software raid OK?

2009-04-20 Thread Sam Kuper
Michael,

2009/4/20 Michael Iatrou m.iat...@freemail.gr

 When the date was Monday 20 April 2009, Douglas A. Tutty wrote:

  On Mon, Apr 20, 2009 at 03:29:00PM +0300, Michael Iatrou wrote:
   There is no particularly good reason to have the swap on RAID. You
   should define three independent swap partitions; if a disk fails, the
   kernel will use the others still available.
 
  If swap fails, what happens if something important to the running of the
  system (not just a user app) is swapped-out?  I've seen advice on this
  list many times that to avoid a crash, if other system stuff is on raid,
  that swap should be as well.

 I cannot confirm that; instead I am assuming a workflow like the following:

 1. A disk is about to fail
 2. Notification from SMART hits sysadmin's mailbox
 3. # swapoff /dev/sdXY
 4. Replace disk, create partitions
 5. # swapon /dev/sdXY
 6. # mdadm /dev/mdK -a /dev/sdXZ


If the system is running unattended - for instance if it's a server being
run by a hobbyist, which doesn't have a sysadmin permanently available to
respond to problems - then step 3 may not occur before the disk fails. In
this scenario, isn't Douglas right that it's better to have the swap on
(redundant) RAID?

Many thanks,

Sam


Re: Software raid OK?

2009-04-20 Thread Mark Allums

Michael Iatrou wrote:

When the date was Monday 20 April 2009, Douglas A. Tutty wrote:


On Mon, Apr 20, 2009 at 03:29:00PM +0300, Michael Iatrou wrote:

When the date was Monday 20 April 2009, BAGI Akos wrote:

Hi List!

I installed a software raid, level1 with 3 disks, one of them is a
spare.

I have 2 partitions:
md0 is for / and is made of sda1,sdb1, sdc1
md1 is for swap and made of sda2,sdb2, sdc2

There is no particularly good reason to have the swap on RAID. You
should define three independent swap partitions; if a disk fails, the
kernel will use the others still available.

If swap fails, what happens if something important to the running of the
system (not just a user app) is swapped-out?  I've seen advice on this
list many times that to avoid a crash, if other system stuff is on raid,
that swap should be as well.


I cannot confirm that; instead I am assuming a workflow like the following:

1. A disk is about to fail
2. Notification from SMART hits sysadmin's mailbox
3. # swapoff /dev/sdXY
4. Replace disk, create partitions
5. # swapon /dev/sdXY
6. # mdadm /dev/mdK -a /dev/sdXZ



Relying on S.M.A.R.T. is playing with atomic bombs.  Put everything on 
redundant storage, even swap.
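
A sketch of doing exactly that for swap (hypothetical partitions):

    # mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2
    # mkswap /dev/md1
    # swapon /dev/md1

and in /etc/fstab:

    /dev/md1  none  swap  sw  0  0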


Mark Allums






Re: Software raid OK?

2009-04-20 Thread Michael Iatrou
When the date was Monday 20 April 2009, Sam Kuper wrote:

 Michael,

 2009/4/20 Michael Iatrou m.iat...@freemail.gr

  When the date was Monday 20 April 2009, Douglas A. Tutty wrote:
   On Mon, Apr 20, 2009 at 03:29:00PM +0300, Michael Iatrou wrote:
There is no particularly good reason to have the swap on RAID. You
should define three independent swap partitions; if a disk fails, the
kernel will use the others still available.
  
   If swap fails, what happens if something important to the running of
   the system (not just a user app) is swapped-out?  I've seen advice on
   this list many times that to avoid a crash, if other system stuff is
   on raid, that swap should be as well.
 
  I cannot confirm that; instead I am assuming a workflow like the
  following:
 
  1. A disk is about to fail
  2. Notification from SMART hits sysadmin's mailbox
  3. # swapoff /dev/sdXY
  4. Replace disk, create partitions
  5. # swapon /dev/sdXY
  6. # mdadm /dev/mdK -a /dev/sdXZ

 If the system is running unattended - for instance if it's a server being
 run by a hobbyist, which doesn't have a sysadmin permanently available to
 respond to problems - then step 3 may not occur before the disk fails. In
 this scenario, isn't Douglas right that it's better to have the swap on
 (redundant) RAID?

I don't think there is a silver bullet for this.

There is a performance penalty related to soft-RAID. Also swappiness 
configuration must be taken into account. Physical memory and memory usage 
patterns from application perspective count too. And of course the required 
availability for the application is an important factor.

All I am saying is that when thorough evaluation of parameters like the 
above is out of scope, there is probably no good reason to have swap on 
RAID.

-- 
 Michael Iatrou





Re: Software raid OK?

2009-04-20 Thread Alex Samad
On Mon, Apr 20, 2009 at 10:08:22PM +0300, Michael Iatrou wrote:
 When the date was Monday 20 April 2009, Sam Kuper wrote:
 
  Michael,
 

[snip]

 
 I don't think there is a silver bullet for this.
 
 There is a performance penalty related to soft-RAID. Also swappiness 
 configuration must be taken into account. Physical memory and memory usage 
 patterns from application perspective count too. And of course the required 
 availability for the application is an important factor.
 
 All I am saying is that when thorough evaluation of parameters like the 
 above is out of scope, there is probably no good reason to have swap on 
 RAID.

with the cost of hd's being so low, I would suggest the default should
be swap on a raid1

 

-- 
Microsoft is not the answer.
Microsoft is the question.
NO (or Linux) is the answer.
(Taken from a .signature from someone from the UK, source unknown)




Re: Software raid OK?

2009-04-20 Thread Sam Kuper
2009/4/21 Alex Samad a...@samad.com.au
 On Mon, Apr 20, 2009 at 10:08:22PM +0300, Michael Iatrou wrote:
  When the date was Monday 20 April 2009, Sam Kuper wrote:
   Michael,
 [snip]
 
  I don't think there is a silver bullet for this.
 
  There is a performance penalty related to soft-RAID. Also swappiness
  configuration must be taken into account. Physical memory and memory usage
  patterns from application perspective count too. And of course the required
  availability for the application is an important factor.
 
  All I am saying is that when thorough evaluation of parameters like the
  above is out of scope, there is probably no good reason to have swap on
  RAID.

 with the cost of hd's being so low, I would suggest the default should
 be swap on a raid1

I'm grateful to Mark, Michael and Alex for their replies.

I'm planning to go ahead with using RAID 1 for swap, as a possible
slight performance hit is more acceptable to me than a crash or data
loss would be. Indeed, that's why I'm using redundant RAID in the
first place.

Apologies for hijacking Bagi's thread!

Best,

Sam





Re: Software raid OK?

2009-04-20 Thread Sam Kuper
2009/4/21 Sam Kuper sam.ku...@uclmail.net:
 Apologies for hijacking Bagi's thread!

s/Bagi/Akos/





Installation with multiple Software RAID devices + LVM

2009-03-29 Thread Touko Korpela
I'd like to install Lenny into an existing LVM volume group built on top of
Software RAID-1 physical volumes like this:

pvs:
  PV VG   Fmt  Attr PSize  PFree
  /dev/md0   LVM_VG_2 lvm2 a-   50,13G 0
  /dev/md1   LVM_VG_2 lvm2 a-   14,80G 80,00M

/proc/mdstat:
Personalities : [raid1]
md1 : active raid1 hda9[0] hdc6[1]
  15518686 blocks super 1.2 [2/2] [UU]

md0 : active raid1 hda7[0] hdc5[1]
  52564544 blocks [2/2] [UU]

Partitions are type fd (Linux raid autodetect) and are autodetected fine
by the existing installation (a mix of etch and lenny).
I used the priority=medium boot option to get more control during the install.
But the Lenny installer/kernel doesn't auto-assemble md1, only md0. I think it
should bring md1 up too. Could it be the superblock version (md1 has a
version 1.2 superblock)? Why is version 0.90 the default?
As a workaround, how do I assemble the array manually from the installer without
erasing data?

(I'm not subscribed)
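
Editorial note: the kernel's in-kernel autodetection of type fd partitions only
understands version 0.90 superblocks, which would explain why md1 (version 1.2)
is not brought up automatically. A sketch of assembling it by hand from the
installer shell, using the devices shown above, without touching the data:

    # mdadm --assemble /dev/md1 /dev/hda9 /dev/hdc6

or, to pick up everything the superblocks describe:

    # mdadm --assemble --scan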





Re: Software RAID w/ mdadm -- Need Partitions?

2009-02-26 Thread Alex Samad
On Thu, Feb 26, 2009 at 02:26:25AM +, Kelly Harding wrote:
 
  Also, with Linux Software RAID, you can convert the metadata of RAID1
  to RAID5 to expand a 2 drive RAID1 mirrored array to a 2 drive RAID5
  degraded array to add another drive to later. I've done this multiple
  times and had no problems with it.
 
  I thought it wasn't possible to convert raid type. sure it wasn't a
  degraded raid5 you started with ?
 
 
 I'm quite sure.
 
 Read this blog post:
 http://scott.wallace.sh/2007/04/converting-raid1-to-raid5-with-no-data-loss/
 
 It does indeed work, as I've done it twice now.

So the blog states to recreate the raid as a 2 disk raid5, just don't
overwrite the information, because it is in the same state as a raid5
array with just 2 disks.

I thought when you said convert, you were using the --grow command to
convert from raid1 to raid5.
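
For clarity, a rough sketch of the recreate-then-grow approach described in that
blog post (hypothetical devices; 0.90 metadata and identical device order
assumed; it rewrites the array metadata in place, so a verified backup is
essential):

    # mdadm --stop /dev/md0
    # mdadm --create /dev/md0 --level=5 --raid-devices=2 /dev/sda1 /dev/sdb1
      (confirm when warned that the partitions already belong to an array)
    # fsck -n /dev/md0          (sanity-check, assuming the fs sits directly on md0)
    # mdadm --add /dev/md0 /dev/sdc1
    # mdadm --grow /dev/md0 --raid-devices=3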



 
 Kelly
 
 
 

-- 
I don't want nations feeling like that they can bully ourselves and our 
allies. I want to have a ballistic defense system so that we can make the world 
more peaceful, and at the same time I want to reduce our own nuclear capacities 
to the level commiserate with keeping the peace.

- George W. Bush
10/23/2000
Des Moines, IA




Re: how does GRUB read from /boot on software-RAID partition?

2009-02-25 Thread Barclay, Daniel
Jack Schneider wrote:
 On Tue, 24 Feb 2009 17:14:54 -0500
 Barclay, Daniel dan...@fgm.com wrote:
 
 Jack Schneider wrote:

 ...  I have run without incident for over a year.
 But if you haven't had any disk-failure incidents, do you know whether
 your setup will reliably work if either disk fails?  (Did you mean
 that you simulated disk failure?)

 Sorry, no failures to date.  ... I pulled a sata cable from one.

That's what I meant by simulated disk failure.


Daniel
-- 
(Plain text sometimes corrupted to HTML courtesy of Microsoft Exchange.) [F]




Re: how does GRUB read from /boot on software-RAID partition?

2009-02-25 Thread Barclay, Daniel
Douglas A. Tutty wrote:
...
 I had a box with ...
 
 I tested booting by unplugging (power and data) each drive in turn,
 booting with grub was never a problem.

Thanks.

Daniel

-- 
(Plain text sometimes corrupted to HTML courtesy of Microsoft Exchange.) [F]




Software RAID w/ mdadm -- Need Partitions?

2009-02-25 Thread Hal Vaughan
I've created RAIDs in the past where I just used the entire drive and  
ones where I created a single partition on the drives and used the  
partition.  It seems that there is no real difference in behavior.


If I'm planning on using a drive for a RAID, is there any reason I  
should create a single partition spanning the entire drive and use  
that instead of just using the drive in its entirety for the RAID?


Thanks!

Hal
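
For reference, the two variants look like this (a sketch with hypothetical
devices):

    # mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb  /dev/sdc     (whole disks)
    # mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1    (one partition per disk)

The commonly cited reasons for the partition route are that a partition table of
type fd makes the disk's purpose visible to other tools, and that a slightly
undersized partition leaves margin if a replacement "same-size" disk turns out a
few sectors smaller.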





