Re: [CentOS] Questions about software RAID, LVM.

2013-02-15 Thread SilverTip257
On Thu, Feb 14, 2013 at 11:58 PM, Ted Miller tedli...@sbcglobal.net wrote:

 On 02/04/2013 06:40 PM, Robert Heller wrote:
  I am planning to increase the disk space on my desktop system.  It is
  running CentOS 5.9 w/XEN.  I have two 160Gig 2.5" laptop SATA drives
  in two slots of a 4-slot hot swap bay configured like this:
 
  Disk /dev/sda: 160.0 GB, 160041885696 bytes
  255 heads, 63 sectors/track, 19457 cylinders
  Units = cylinders of 16065 * 512 = 8225280 bytes
 
  Device Boot  Start End  Blocks   Id  System
  /dev/sda1   *   1 125 1004031   fd  Linux raid autodetect
  /dev/sda2 126   19457   155284290   fd  Linux raid autodetect
 
  Disk /dev/sdb: 160.0 GB, 160041885696 bytes
  255 heads, 63 sectors/track, 19457 cylinders
  Units = cylinders of 16065 * 512 = 8225280 bytes
 
  Device Boot  Start End  Blocks   Id  System
  /dev/sdb1   *   1 125 1004031   fd  Linux raid autodetect
  /dev/sdb2 126   19457   155284290   fd  Linux raid autodetect
 
  sauron.deepsoft.com% cat /proc/mdstat
  Personalities : [raid1]
  md0 : active raid1 sdb1[1] sda1[0]
 1003904 blocks [2/2] [UU]
 
  md1 : active raid1 sdb2[1] sda2[0]
 155284224 blocks [2/2] [UU]
 
  unused devices: <none>
 
  That is, I have two RAID1 arrays: a small (1Gig) one mounted as /boot
  and a larger 148Gig one that is a LVM Volume Group (which contains a
  pile of file systems, some for DOM0 and some that are for other VMs).
  What I plan on doing is getting a pair of 320Gig 2.5 (laptop) SATA
  disks and fail over the existing disks to this new pair.  I believe I
  can then 'grow' the second RAID array to be like ~300Gig.  My question
  is: what happens to the LVM Volume Group?  Will it grow when the RAID
  array grows?

 Not on its own, but you can grow it.  I believe the recommended way to
 migrate the LVM volume is to:
 partition new drive as type fd


LVM is 8e

Software RAID is fd
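
For instance, setting the type on a new md member partition (a sketch --
/dev/sdc and partition number 2 are example names):

  sfdisk --change-id /dev/sdc 2 fd   # fd = Linux raid autodetect
                                     # (a partition used directly as an
                                     # LVM PV would get 8e instead)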


 install new PV on new partition (will be new, larger size)
 make new PV part of old volume group
 migrate all volumes on old PV onto new PV
 remove old PV from volume group

 You have to do this separately for each drive, but it isn't very hard.  Of
 course your boot partition will have to be handled separately.


This is what I said ;)
http://lists.centos.org/pipermail/centos/2013-February/131917.html



  Or should I leave /dev/md1 its current size and create a
  new RAID array and add this as a second PV and grow the Volume Group
  that way?

 That is a solution to a different problem.  You would end up with a VG of
 about 450 GB total.  If that is what you want to do, that works too.


In that scenario he leaves /dev/md1 at its current size ... it's a RAID1 on
the old 160Gig disks.


   The documentation is not clear as to what happens -- the VG
  is marked 'resizable'.
 
  sauron.deepsoft.com% sudo pvdisplay
 --- Physical volume ---
 PV Name               /dev/md1
 VG Name               sauron
 PV Size               148.09 GB / not usable 768.00 KB
 Allocatable           yes
 PE Size (KByte)       4096
 Total PE              37911
 Free PE               204
 Allocated PE          37707
 PV UUID               ttB15B-3eWx-4ioj-TUvm-lAPM-z9rD-Prumee
 
  sauron.deepsoft.com% sudo vgdisplay
 --- Volume group ---
 VG Name               sauron
 System ID
 Format                lvm2
 Metadata Areas        1
 Metadata Sequence No  65
 VG Access             read/write
 VG Status             resizable
 MAX LV                0
 Cur LV                17
 Open LV               12
 Max PV                0
 Cur PV                1
 Act PV                1
 VG Size               148.09 GB
 PE Size               4.00 MB
 Total PE              37911
 Alloc PE / Size       37707 / 147.29 GB
 Free  PE / Size       204 / 816.00 MB
 VG UUID               qG8gCf-3vou-7dp2-Ar0B-p8jz-eXZF-3vOONr
 
 Doesn't look like anyone answered your question, so I'll tell you that the
 answer is yes -- the VG can be grown to use the new space, just not
 automatically.

 Ted Miller





-- 
---~~.~~---
Mike
//  SilverTip257  //


Re: [CentOS] Questions about software RAID, LVM.

2013-02-14 Thread Ted Miller
On 02/04/2013 06:40 PM, Robert Heller wrote:
 I am planning to increase the disk space on my desktop system.  It is
 running CentOS 5.9 w/XEN.  I have two 160Gig 2.5" laptop SATA drives
 in two slots of a 4-slot hot swap bay configured like this:

 Disk /dev/sda: 160.0 GB, 160041885696 bytes
 255 heads, 63 sectors/track, 19457 cylinders
 Units = cylinders of 16065 * 512 = 8225280 bytes

 Device Boot  Start End  Blocks   Id  System
 /dev/sda1   *   1 125 1004031   fd  Linux raid autodetect
 /dev/sda2 126   19457   155284290   fd  Linux raid autodetect

 Disk /dev/sdb: 160.0 GB, 160041885696 bytes
 255 heads, 63 sectors/track, 19457 cylinders
 Units = cylinders of 16065 * 512 = 8225280 bytes

 Device Boot  Start End  Blocks   Id  System
 /dev/sdb1   *   1 125 1004031   fd  Linux raid autodetect
 /dev/sdb2 126   19457   155284290   fd  Linux raid autodetect

 sauron.deepsoft.com% cat /proc/mdstat
 Personalities : [raid1]
 md0 : active raid1 sdb1[1] sda1[0]
1003904 blocks [2/2] [UU]

 md1 : active raid1 sdb2[1] sda2[0]
155284224 blocks [2/2] [UU]

 unused devices: <none>

 That is, I have two RAID1 arrays: a small (1Gig) one mounted as /boot
 and a larger 148Gig one that is a LVM Volume Group (which contains a
 pile of file systems, some for DOM0 and some that are for other VMs).
 What I plan on doing is getting a pair of 320Gig 2.5 (laptop) SATA
 disks and fail over the existing disks to this new pair.  I believe I
 can then 'grow' the second RAID array to be like ~300Gig.  My question
 is: what happens to the LVM Volume Group?  Will it grow when the RAID
 array grows?

Not on its own, but you can grow it.  I believe the recommended way to
migrate the LVM volume is to:
partition new drive as type fd
install new PV on new partition (will be new, larger size)
make new PV part of old volume group
migrate all volumes on old PV onto new PV
remove old PV from volume group
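
Roughly, in commands (a sketch only -- it assumes the new PV sits on a new
md mirror, md3 as an example name, once that array exists):

  pvcreate /dev/md3            # new PV on the new array
  vgextend sauron /dev/md3     # make it part of the old VG
  pvmove /dev/md1 /dev/md3     # migrate all extents to the new PV
  vgreduce sauron /dev/md1     # remove the old PV from the VG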

You have to do this separately for each drive, but it isn't very hard.  Of 
course your boot partition will have to be handled separately.
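
For the boot half, after partitioning the new disk, something like this (a
sketch -- sdc is an example name for one of the new disks):

  mdadm /dev/md0 --add /dev/sdc1   # resync /boot onto the new disk
  grub-install /dev/sdc            # make the new disk bootable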


 Or should I leave /dev/md1 its current size and create a
 new RAID array and add this as a second PV and grow the Volume Group
 that way?

That is a solution to a different problem.  You would end up with a VG of 
about 450 GB total.  If that is what you want to do, that works too.

 The documentation is not clear as to what happens -- the VG
 is marked 'resizable'.

 sauron.deepsoft.com% sudo pvdisplay
--- Physical volume ---
PV Name               /dev/md1
VG Name               sauron
PV Size               148.09 GB / not usable 768.00 KB
Allocatable           yes
PE Size (KByte)       4096
Total PE              37911
Free PE               204
Allocated PE          37707
PV UUID               ttB15B-3eWx-4ioj-TUvm-lAPM-z9rD-Prumee

 sauron.deepsoft.com% sudo vgdisplay
--- Volume group ---
VG Name               sauron
System ID
Format                lvm2
Metadata Areas        1
Metadata Sequence No  65
VG Access             read/write
VG Status             resizable
MAX LV                0
Cur LV                17
Open LV               12
Max PV                0
Cur PV                1
Act PV                1
VG Size               148.09 GB
PE Size               4.00 MB
Total PE              37911
Alloc PE / Size       37707 / 147.29 GB
Free  PE / Size       204 / 816.00 MB
VG UUID               qG8gCf-3vou-7dp2-Ar0B-p8jz-eXZF-3vOONr

Doesn't look like anyone answered your question, so I'll tell you that the
answer is yes -- the VG can be grown to use the new space, just not
automatically.
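
If you do grow md1 in place once both new disks are in the array, it should
come down to something like this (a sketch, assuming your lvm2 has
pvresize):

  mdadm --grow /dev/md1 --size=max   # use the full size of the new partitions
  pvresize /dev/md1                  # the PV -- and the VG's free space --
                                     # picks up the new size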

Ted Miller




Re: [CentOS] Questions about software RAID, LVM.

2013-02-04 Thread SilverTip257
On Mon, Feb 4, 2013 at 6:40 PM, Robert Heller hel...@deepsoft.com wrote:

 I am planning to increase the disk space on my desktop system.  It is
 running CentOS 5.9 w/XEN.  I have two 160Gig 2.5" laptop SATA drives
 in two slots of a 4-slot hot swap bay configured like this:


I would certainly suggest testing these steps out in some form or another
as a dry run.  And verify your backups and/or create new ones ahead of
time. :)

Create your new softraid mirror with the larger disks, add that device
(we'll say md3 - since the new /boot will be md2 temporarily) to your
existing LVM volume group, then pvmove the physical extents from the
existing disk /dev/md1 to /dev/md3.  Then verify that the extents have been
moved off md1 and are on md3.  Finally remove md1 from the VG with vgreduce
(pvremove afterwards wipes the PV label if you want md1 gone entirely).
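
A rough sketch of that sequence (sdc2 and sdd2 are example names for the
new mirror's members):

  mdadm --create /dev/md3 --level=1 --raid-devices=2 /dev/sdc2 /dev/sdd2
  pvcreate /dev/md3 && vgextend sauron /dev/md3
  pvmove -i 10 /dev/md1 /dev/md3   # report progress every 10 seconds
  pvdisplay /dev/md1               # Allocated PE should read 0 when done
  vgreduce sauron /dev/md1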

I don't recall if I specifically used these notes, but the process matches
the one I used. [0]
A few more bits of info I recognize from my readings a while back. [1] [2] [3]

I expect you could easily tweak your mdadm config and rebuild your initial
ramdisk so that the next time you reboot there isn't an issue.  This all
depends on how much of this process you plan on doing online.
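
On CentOS 5 that would be along these lines (a sketch):

  mdadm --detail --scan >> /etc/mdadm.conf              # record the new array
  mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)  # rebuild the initrd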

[0] http://www.rhcedan.com/2010/10/20/migrating-physical-volumes-in-a-lvm2-volume-group/
[1] http://www.centos.org/docs/5/html/Cluster_Logical_Volume_Manager/move_new_ex4.html
[2] http://tldp.org/HOWTO/LVM-HOWTO/removeadisk.html
[3] http://www.whmcr.com/2011/06/21/moving-part-of-an-lvm-vg-from-one-pv-to-another/

 sauron.deepsoft.com% cat /proc/mdstat
 Personalities : [raid1]
 md0 : active raid1 sdb1[1] sda1[0]
   1003904 blocks [2/2] [UU]

 md1 : active raid1 sdb2[1] sda2[0]
   155284224 blocks [2/2] [UU]

 unused devices: <none>

 That is, I have two RAID1 arrays: a small (1Gig) one mounted as /boot
 and a larger 148Gig one that is a LVM Volume Group (which contains a
 pile of file systems, some for DOM0 and some that are for other VMs).
 What I plan on doing is getting a pair of 320Gig 2.5 (laptop) SATA
 disks and fail over the existing disks to this new pair.  I believe I
 can then 'grow' the second RAID array to be like ~300Gig.  My question
 is: what happens to the LVM Volume Group?  Will it grow when the RAID


In my experience the VG will grow by however many extents the new PV brings
in.
I found it quite helpful to use loopback devices to test possible softraid
and/or LVM scenarios.
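
Something like this makes a throwaway test rig (a sketch -- the file names,
loop devices, md9, and testvg are all example names):

  dd if=/dev/zero of=/tmp/d0.img bs=1M count=128
  dd if=/dev/zero of=/tmp/d1.img bs=1M count=128
  losetup /dev/loop0 /tmp/d0.img
  losetup /dev/loop1 /tmp/d1.img
  mdadm --create /dev/md9 --level=1 --raid-devices=2 /dev/loop0 /dev/loop1
  pvcreate /dev/md9 && vgcreate testvg /dev/md9   # then rehearse pvmove etc.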


 array grows?  Or should I leave /dev/md1 its current size and create a
 new RAID array and add this as a second PV and grow the Volume Group


Just add md3 to the VG and move the extents as noted above.


 that way?  The documentation is not clear as to what happens -- the VG
 is marked 'resizable'.


The biggest gotcha is with PVs.  An example, even though it doesn't apply
here: if you enlarge a disk that directly backs a PV (ex: an LVM volume
backing a VM that itself uses LVM), the guest OS doesn't see the larger
disk until after a reboot -- and by then the PV is online again, so it
can't easily be resized.  You avoid that pitfall since you will have a
separate disk (md3) that can be hot added.



 sauron.deepsoft.com% sudo pvdisplay
   --- Physical volume ---
   PV Name               /dev/md1
   VG Name               sauron
   PV Size               148.09 GB / not usable 768.00 KB
   Allocatable           yes
   PE Size (KByte)       4096
   Total PE              37911
   Free PE               204
   Allocated PE          37707
   PV UUID               ttB15B-3eWx-4ioj-TUvm-lAPM-z9rD-Prumee

 sauron.deepsoft.com% sudo vgdisplay
   --- Volume group ---
   VG Name               sauron
   System ID
   Format                lvm2
   Metadata Areas        1
   Metadata Sequence No  65
   VG Access             read/write
   VG Status             resizable
   MAX LV                0
   Cur LV                17
   Open LV               12
   Max PV                0
   Cur PV                1
   Act PV                1
   VG Size               148.09 GB
   PE Size               4.00 MB
   Total PE              37911
   Alloc PE / Size       37707 / 147.29 GB
   Free  PE / Size       204 / 816.00 MB
   VG UUID               qG8gCf-3vou-7dp2-Ar0B-p8jz-eXZF-3vOONr



 --
 Robert Heller -- 978-544-6933 / hel...@deepsoft.com
 Deepwoods Software-- http://www.deepsoft.com/
 ()  ascii ribbon campaign -- against html e-mail
 /\  www.asciiribbon.org   -- against proprietary attachments






Hope this helps.

Best Regards,
-- 
---~~.~~---
Mike
//  SilverTip257  //