Re: Buster: problem changing partition size on a RAID 5 array

2017-08-21 Thread Gary Dale

On 20/08/17 10:04 AM, Pascal Hambourg wrote:

Le 15/08/2017 à 21:47, Gary Dale a écrit :


That still sounds like a bug. If I did a dd from a smaller to a 
larger hard disk and then used gdisk, I'd expect it to see the new drive 
size and handle it correctly.


Gdisk does handle it correctly. It just does not correct it 
automatically. If you ask to write the partition table, gdisk will ask 
if you want to correct it. If you ask to verify the disk, it will 
display the discrepancy and suggest how to correct it.


I guess it is because gdisk is designed to give full control to the 
user, unlike parted and Gparted. I like it for this.




If I went to a Doctor, I wouldn't expect to have to prompt him to find 
out if he thought there was a problem.


Gparted flags that there is an error while gdisk expects you to notice 
it. Gparted offers a solution. Gdisk just sits there waiting for you to 
learn enough to find out how to fix it. Both offer the same control. The 
difference is that gparted does it in a friendlier manner.




Re: Buster: problem changing partition size on a RAID 5 array

2017-08-20 Thread Pascal Hambourg

Le 15/08/2017 à 21:47, Gary Dale a écrit :


That still sounds like a bug. If I did a dd from a smaller to a larger 
hard disk and then used gdisk, I'd expect it to see the new drive size and 
handle it correctly.


Gdisk does handle it correctly. It just does not correct it 
automatically. If you ask to write the partition table, gdisk will ask 
if you want to correct it. If you ask to verify the disk, it will 
display the discrepancy and suggest how to correct it.


I guess it is because gdisk is designed to give full control to the 
user, unlike parted and Gparted. I like it for this.
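
For what it's worth, a rough sketch of what that looks like at the gdisk 
prompt, assuming the array is /dev/md1 as in this thread (the single 
letters are the keystrokes you type; the # annotations are not):

  gdisk /dev/md1
    v   # verify: displays the size discrepancy and how to correct it
    x   # switch to the expert menu
    e   # relocate the backup GPT data structures to the end of the disk
    m   # return to the main menu
    w   # write the corrected table and exit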




Re: Buster: problem changing partition size on a RAID 5 array

2017-08-17 Thread Jimmy Johnson

On 08/13/2017 09:32 PM, Gary Dale wrote:
I just added another drive to my RAID 5 array, so it now has 6. 
Unfortunately I can't seem to use the extra space. After the array 
finished reshaping, I tried to grow the file system but resize2fs 
complained that the file system was already the maximum size. Running 
gdisk revealed the following:


Disk /dev/md1: 39068861440 sectors, 18.2 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): EFF29D11-D982-4933-9B57-B836591DEF02
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 31255089118
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048     31255089118   14.6 TiB    8300  Linux filesystem


As you can see, there is a discrepancy between the device size (18.2T) 
and the partition size (14.6T).


cat /proc/mdstat confirms that the device has all the drives running:

md1 : active raid5 sdb1[0] sdh1[6] sdc1[5] sdl1[3] sdm1[2] sdk1[1]
      19534430720 blocks super 1.2 level 5, 512k chunk, algorithm 2 [6/6] [UUUUUU]
      bitmap: 0/30 pages [0KB], 65536KB chunk


Neither fdisk nor gdisk let me create a partition larger than the 
existing one. Nor do they let me create a new partition anywhere except 
in the first 2047 sectors.


I tried commenting out the mount lines in /etc/fstab and rebooting to 
rescue mode but that didn't help.


I finally tried gparted and it immediately noticed the problem and 
offered to fix it. It "found" the free space at the end of the device 
and I was able to resize the existing partition to fill it. I'm now 
using the full array size!


I'm not sure what the problem is that gparted was able to see but fdisk 
and gdisk couldn't, and whether this is a bug in mdadm or something else, 
but I thought I should report it somewhere.


I would suggest you boot to a gparted live disc, so none of your 
partitions are mounted and it will just work better.

--
Jimmy Johnson

Ubuntu 14.04 LTS - KDE 4.13.2 - AMD A8-7600 - EXT4 at sda1
Registered Linux User #380263



Re: Buster: problem changing partition size on a RAID 5 array

2017-08-16 Thread Gary Dale

On 16/08/17 01:48 PM, Nicholas Geovanis wrote:


On Tue, Aug 15, 2017 at 2:47 PM, Gary Dale wrote:


The reason it's rare is more likely that Linux hasn't been able to
boot from mdadm partitions until recently. 



IIRC it's been available in Debian since lenny in 2009. But yes, if 
you began using linux in, say, 1994 or so,

that's "recently" ;-)


Well, I've been using Debian since Potato, but before that I was using 
other distributions, so yes, I do go back a ways. However, being able to 
boot from a RAID 5 partition isn't the same as being able to boot from a 
partition on a RAID 5 array. I believe you could do the former at least 
one version before you could do the latter.



I'm one of those people who see little value in LVM. It just adds
complexity without doing anything that a little planning could
usually avoid. Of course, I'm not running a large datacentre with
the need to frequently reallocate disk space on the fly...


It's hard to overstate just how much flexibility LVM brings to the 
table in the day job. And if you've done any AIX work, it's basically 
a port of AIX's LVM, so you already know it. Frankly I use it at home 
now too, disks are big these days. AND (pet peeve...) you don't need 
to partition beneath it anymore.


LVM to me is more like a solution looking for a problem. Back when disk 
space was at a premium, the ability to reallocate it was important. 
However, these days it's often easier just to throw more hardware at 
it. It seems to me also that BTRFS makes LVM obsolete, so that I may 
have successfully avoided learning a technology that I will never need - 
assuming BTRFS ever gets optimized enough to compete with Ext file systems.


Obviously my use cases aren't the same as yours. You find that LVM 
solves problems for you. I have yet to encounter a situation where I 
said "I wish I'd installed LVM".


Re: Buster: problem changing partition size on a RAID 5 array

2017-08-16 Thread Nicholas Geovanis
On Tue, Aug 15, 2017 at 2:47 PM, Gary Dale wrote:
>
> The reason it's rare is more likely that Linux hasn't been able to boot
> from mdadm partitions until recently.


IIRC it's been available in Debian since lenny in 2009. But yes, if you
began using linux in, say, 1994 or so,
that's "recently" ;-)


> I'm one of those people who see little value in LVM. It just adds
> complexity without doing anything that a little planning could usually
> avoid. Of course, I'm not running a large datacentre with the need to
> frequently reallocate disk space on the fly...
>

It's hard to overstate just how much flexibility LVM brings to the table in
the day job. And if you've done any AIX work, it's basically a port of
AIX's LVM, so you already know it. Frankly I use it at home now too, disks
are big these days. AND (pet peeve...) you don't need to partition beneath
it anymore.
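
A minimal sketch of that on top of an md array, just to illustrate (the 
device is /dev/md1 as elsewhere in this thread; the volume group and LV 
names are made up):

  pvcreate /dev/md1                       # the whole array becomes one physical volume
  vgcreate vg_raid /dev/md1               # one volume group on top of it
  lvcreate -n data -L 10T vg_raid         # carve out a logical volume
  mkfs.ext4 /dev/vg_raid/data

  # later, after growing the array with mdadm:
  pvresize /dev/md1                       # the PV picks up the new array size
  lvextend -r -L +4T /dev/vg_raid/data    # grow the LV and its filesystem in one step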


Re: Buster: problem changing partition size on a RAID 5 array

2017-08-15 Thread Gary Dale

On 14/08/17 01:58 PM, Pascal Hambourg wrote:

Le 14/08/2017 à 06:32, Gary Dale a écrit :


Disk /dev/md1: 39068861440 sectors, 18.2 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): EFF29D11-D982-4933-9B57-B836591DEF02
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 31255089118

 ^
You created a GPT partition table on the md array. When the table is 
created, the first and last usable sector numbers (depending on the 
device size at creation time) are recorded in the GPT header and 
define the total available space. The reason is that the two copies of 
the partition table are stored before the first usable sector and after 
the last usable sector. So changing the device size is not enough: you 
need to move the secondary partition table to the new end of the device 
and adjust the last usable sector number.


That still sounds like a bug. If I did a dd from a smaller to a larger 
hard disk and then used gdisk, I'd expect it to see the new drive size and 
handle it correctly. In fact it did notice that the md array was larger 
but didn't update its tables.


Neither fdisk nor gdisk let me create a partition larger than the 
existing one. Nor do they let me create a new partition anywhere 
except in the first 2047 sectors.


Because they rely on the GPT header to determine the available space.
With gdisk you could have used the "v" command to verify the disk and 
adjust the partition table to the new size. I don't know if fdisk can 
do it too.


Again, gdisk does appear to know that the device is larger than its 
tables indicate but doesn't update its tables. At the very least, I'd 
expect it to produce a message telling me about the issue and suggesting 
a resolution the way gparted did.




I'm not sure what the problem is that gparted was able to see but 
fdisk and gdisk couldn't, and whether this is a bug in mdadm or 
something else, but I thought I should report it somewhere.


In the first place, the bug was to create a partition table on an md 
array. Almost nobody does this and I can see no value in doing it. It 
is useless. If you want to use the whole array as a single volume, 
don't partition it. If you want to create multiple volumes, use LVM, as 
most people still do even now that md arrays can be partitioned. It is 
much more flexible than partitions.


The reason it's rare is more likely that Linux hasn't been able to boot 
from mdadm partitions until recently. I'm one of those people who see 
little value in LVM. It just adds complexity without doing anything that 
a little planning could usually avoid. Of course, I'm not running a 
large datacentre with the need to frequently reallocate disk space on 
the fly...


For me, booting from a partitioned RAID array makes more sense. It adds 
no extra programs that can be hacked and that add to the system overhead 
while still allowing me to divide up the available disk space as if it 
were a single drive.
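
As a sketch of what I mean (the single-partition layout and type code are 
just an example), it's no different from partitioning a plain disk:

  sgdisk -n 1:0:0 -t 1:8300 /dev/md1   # one GPT partition spanning the whole array
  mkfs.ext4 /dev/md1p1                 # the kernel exposes it as md1p1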


Creating multiple RAID arrays seems like the less desirable solution 
since they'd either require more drives or make resizing more 
complicated (depending on whether you were creating one array per group 
of drives or multiple arrays on each group of drives).


Being old school, I also note that the RAID controllers from the 1990s 
did pretty much the same thing. You'd create the arrays on a bunch of 
disks through the controller utility then use the OS to partition the 
arrays.




Re: Buster: problem changing partition size on a RAID 5 array

2017-08-14 Thread Pascal Hambourg

Le 14/08/2017 à 06:32, Gary Dale a écrit :


Disk /dev/md1: 39068861440 sectors, 18.2 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): EFF29D11-D982-4933-9B57-B836591DEF02
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 31255089118

 ^
You created a GPT partition table on the md array. When the table is 
created, the first and last usable sector numbers (depending on the 
device size at creation time) are recorded in the GPT header and define 
the total available space. The reason is that the two copies of the 
partition table are stored before the first usable sector and after the 
last usable sector. So changing the device size is not enough: you need 
to move the secondary partition table to the new end of the device and 
adjust the last usable sector number.
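
A minimal sketch of that adjustment, assuming the array is /dev/md1 as 
shown above (sgdisk comes from the same gptfdisk package as gdisk):

  sgdisk -e /dev/md1     # move the backup GPT to the actual end of the device,
                         # which also updates the last usable sector
  sgdisk -v /dev/md1     # verify that the discrepancy is gone
  partprobe /dev/md1     # have the kernel re-read the table if needed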


Neither fdisk nor gdisk let me create a partition larger than the 
existing one. Nor do they let me create a new partition anywhere except 
in the first 2047 sectors.


Because they rely on the GPT header to determine the available space.
With gdisk you could have used the "v" command to verify the disk and 
adjust the partition table to the new size. I don't know if fdisk can do 
it too.


I'm not sure what the problem is that gparted was able to see but fdisk 
and gdisk couldn't, and whether this is a bug in mdadm or something else, 
but I thought I should report it somewhere.


In the first place, the bug was to create a partition table on an md 
array. Almost nobody does this and I can see no value in doing it. It is 
useless. If you want to use the whole array as a single volume, don't 
partition it. If you want to create multiple volumes, use LVM, as most 
people still do even now that md arrays can be partitioned. It is much 
more flexible than partitions.




Buster: problem changing partition size on a RAID 5 array

2017-08-13 Thread Gary Dale
I just added another drive to my RAID 5 array, so it now has 6. 
Unfortunately I can't seem to use the extra space. After the array 
finished reshaping, I tried to grow the file system but resize2fs 
complained that the file system was already the maximum size. Running 
gdisk revealed the following:


Disk /dev/md1: 39068861440 sectors, 18.2 TiB
Logical sector size: 512 bytes
Disk identifier (GUID): EFF29D11-D982-4933-9B57-B836591DEF02
Partition table holds up to 128 entries
First usable sector is 34, last usable sector is 31255089118
Partitions will be aligned on 2048-sector boundaries
Total free space is 2014 sectors (1007.0 KiB)

Number  Start (sector)    End (sector)  Size       Code  Name
   1            2048     31255089118   14.6 TiB    8300  Linux filesystem


As you can see, there is a discrepancy between the device size (18.2T) 
and the partition size (14.6T).


cat /proc/mdstat confirms that the device has all the drives running:

md1 : active raid5 sdb1[0] sdh1[6] sdc1[5] sdl1[3] sdm1[2] sdk1[1]
      19534430720 blocks super 1.2 level 5, 512k chunk, algorithm 2 [6/6] [UUUUUU]
      bitmap: 0/30 pages [0KB], 65536KB chunk


Neither fdisk nor gdisk let me create a partition larger than the 
existing one. Nor do they let me create a new partition anywhere except 
in the first 2047 sectors.


I tried commenting out the mount lines in /etc/fstab and rebooting to 
rescue mode but that didn't help.


I finally tried gparted and it immediately noticed the problem and 
offered to fix it. It "found" the free space at the end of the device 
and I was able to resize the existing partition to fill it. I'm now 
using the full array size!


I'm not sure what the problem is that gparted was able to see but fdisk 
and gdisk couldn't, and whether this is a bug in mdadm or something else, 
but I thought I should report it somewhere.
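
For reference, the whole sequence for a grow like this, with the GPT fix 
included, looks roughly as follows (device names as in this thread, with 
sdh1 being the newly added member; the GPT step can also be done from 
gparted as described above):

  mdadm /dev/md1 --add /dev/sdh1              # add the new drive
  mdadm --grow /dev/md1 --raid-devices=6      # reshape onto six drives
  # wait for the reshape to finish (watch cat /proc/mdstat)
  sgdisk -e /dev/md1                          # move the backup GPT to the new end of the array
  sgdisk -d 1 -n 1:2048:0 -t 1:8300 /dev/md1  # recreate partition 1 with the same start
                                              # sector, so the data is untouched
  partprobe /dev/md1
  resize2fs /dev/md1p1                        # grow the ext4 filesystem to fill the partition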