Re: [DRBD-user] Can't shrink backing block device offline

2021-07-22 Thread Christophorus Reyhan
>For a gross device size of 1,610,612,736 kiB (1,536 GiB), your
>available net size is 1,610,366,940 kiB, and since your calculation
>yielded a somewhat smaller net size, it would have been sufficiently
>accurate to enable creation of the DRBD meta data in the empty space
>behind the file system.


>Here are the calculation results from the algorithms also used in LINSTOR:


>Calculation results
>***
>Peers: 5
>Activity log size:    32 kiB   (   64 sectors)
>Activity log stripes:  1
>***
>Net size: 1610366940 kiB   (   3220733880 sectors)
>    (~  1.50 TiB)
>Gross size:   1610612736 kiB   (   3221225472 sectors)
>    (~  1.50 TiB)
>Effective meta data size: 245796 kiB   (   491592 sectors)
>    (~    240.04 MiB)

>Unless I mistyped something, these results should be accurate.

I see. Since it's pretty close and sufficient, I think I'll use
Sgross / 32768 * Cpeer + 1 MiB in the future if needed.
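
A quick bash sanity check of that rule of thumb against the exact numbers
quoted above (the variable names are mine, values are this thread's):

gross_kib=1610612736                                # Sgross, 1.5 TiB
peers=5                                             # Cpeer
slack_kib=$(( gross_kib / 32768 * peers + 1024 ))   # Sgross/32768 * Cpeer + 1 MiB
echo $(( gross_kib - slack_kib ))                   # 1610365952 KiB

That lands slightly below the exact net size of 1,610,366,940 kiB, which is
the safe direction: a bit more room reserved for the meta data than needed.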

Thank you so much for the help, Mr. Robert and Mr. Adam.
___
Star us on GITHUB: https://github.com/LINBIT
drbd-user mailing list
drbd-user@lists.linbit.com
https://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] Can't shrink backing block device offline

2021-07-21 Thread Robert Altnoeder
On 7/16/21 10:53 AM, Christophorus Reyhan wrote:
> I apologize, I seem to have made a big miscalculation on the size-to-
> sectors conversion; I think I should have divided the size in bytes
> (instead of KiB) by 512 bytes to get the sectors.
> So the correct calculation (if my understanding is right) is this:
>
> 1.5 TiB = 1,610,612,736 KiB
> 1,610,612,736 KiB = 1,649,267,441,664 Bytes
> 1,649,267,441,664 Bytes / 512 bytes = 3,221,225,472 Sectors
> Snet= 1,610,612,736 KiB - ((3,221,225,472 Sectors/32768 * 5 peers) +
> 1024KiB)
> Snet= 1,610,612,736 KiB - (491,520 Sectors + 1024 KiB)
> Snet= 1,610,612,736 KiB - (245,760 KiB + 1024 KiB)
> Snet= 1,610,612,736 KiB - 246,784 KiB
> Snet= 1,610,365,952 KiB = 1,572,623 MiB
>
> Is this correct?

For a gross device size of 1,610,612,736 kiB (1,536 GiB), your available
net size is 1,610,366,940 kiB, and since your calculation yielded a
somewhat smaller net size, it would have been sufficiently accurate to
enable creation of the DRBD meta data in the empty space behind the file
system.

Here are the calculation results from the algorithms also used in LINSTOR:

Calculation results
***
Peers: 5
Activity log size:    32 kiB   (   64 sectors)
Activity log stripes: 1
***
Net size: 1610366940 kiB   (   3220733880 sectors)
    (~  1.50 TiB)
Gross size:   1610612736 kiB   (   3221225472 sectors)
    (~  1.50 TiB)
Effective meta data size: 245796 kiB   (   491592 sectors)
    (~    240.04 MiB)

Unless I mistyped something, these results should be accurate.
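
For anyone who wants to reproduce these numbers, this bash sketch matches
them for this particular size (my own simplification; the alignment steps
of the full formula quoted later in this thread happen to be no-ops here):

gross_kib=1610612736
peers=5
bitmap_kib=$(( gross_kib / 32768 * peers ))   # 1 bit per 4 KiB block, per peer -> 245760
md_kib=$(( bitmap_kib + 32 + 4 ))             # + activity log + superblock = 245796
echo $(( gross_kib - md_kib ))                # net size: 1610366940 KiB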

br,
Robert

___
Star us on GITHUB: https://github.com/LINBIT
drbd-user mailing list
drbd-user@lists.linbit.com
https://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] Can't shrink backing block device offline

2021-07-16 Thread Christophorus Reyhan
I apologize, I seem to have made a big miscalculation on the size-to-sectors
conversion; I think I should have divided the size in bytes (instead of KiB)
by 512 bytes to get the sectors.

So the correct calculation (if my understanding is right) is this:

1.5 TiB = 1,610,612,736 KiB
1,610,612,736 KiB = 1,649,267,441,664 Bytes
1,649,267,441,664 Bytes / 512 bytes = 3,221,225,472 Sectors
Snet= 1,610,612,736 KiB - ((3,221,225,472 Sectors/32768 * 5 peers) + 
1024KiB)

Snet= 1,610,612,736 KiB - (491,520 Sectors + 1024 KiB)
Snet= 1,610,612,736 KiB - (245,760 KiB + 1024 KiB)
Snet= 1,610,612,736 KiB - 246,784 KiB
Snet= 1,610,365,952 KiB = 1,572,623 MiB
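
Checking that arithmetic in bash, with the same numbers:

bytes=$(( 1610612736 * 1024 ))          # 1,649,267,441,664
sectors=$(( bytes / 512 ))              # 3,221,225,472
md_sectors=$(( sectors / 32768 * 5 ))   # 491,520
snet_kib=$(( 1610612736 - (md_sectors / 2 + 1024) ))
echo "${snet_kib} KiB = $(( snet_kib / 1024 )) MiB"   # 1610365952 KiB = 1572623 MiB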

Is this correct?
___
Star us on GITHUB: https://github.com/LINBIT
drbd-user mailing list
drbd-user@lists.linbit.com
https://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] Can't shrink backing block device offline

2021-07-16 Thread Christophorus Reyhan
I used the formula from http://www.exonotes.com/node/74 to calculate
md_size_sect and la-size-sect, then compared it to the md_size_sect given
by create-md after shrinking the volume, and it's different (because it
doesn't account for max-peers, I assume?).

The formula:

la-size-sect = device_size_in_sectors - md_size_sect

where md_size_sect = int((device_size_in_sectors / 32768 + 79)/8)*8
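
Plugging in the 180 MiB device from the logs below (368640 sectors, per
the blockdev --getsz output) as a bash check:

dev_sectors=368640
md_sect=$(( (dev_sectors / 32768 + 79) / 8 * 8 ))   # integer division
echo "${md_sect} sectors = $(( md_sect / 2 )) KiB"  # 88 sectors = 44 KiB

That matches the single-peer create-md output (184320 - 184276 = 44 kB),
but not the --max-peers=5 run (184320 - 184252 = 68 kB), so the old
formula indeed assumes a single peer.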



>Either resize your filesystem to 179M, resize the LV to 180M, and then
>after everything is working again, you can resize the filesystem to take
>all available space, and it should increase to 184252kB (179.9+ MB).

So without using all the formulas you just mentioned, and the formula
provided in the source above, generally using:

Filesystem size = total backing block device size (MB) - 1

should work just fine, right (I tested it and it works)?

>For practical purposes, it's normally sufficient to calculate Sgross /
>32768 * Cpeer + 1 MiB. So you're basically making the device too large
>(or the filesystem too small); that way you end up with a bit more space
>for the meta data than what you actually need. That is also what is
>documented in the DRBD User's Guide.

But if it's in GB/TB, is it sufficient to use Sgross / 32768 * Cpeer + 1
MiB?

For example:

I have a 1.5 TB LVM with a sector size of 512 bytes, so

1.5 TiB = 1,610,612,736 KiB
1,610,612,736 KiB / 512 bytes = 3145728 Sectors
Snet= 1,610,612,736 KiB - ((3145728 Sectors/32768 * 5 peers) + 1024KiB)
Snet= 1,610,612,736 KiB - (480 Sectors + 1024 KiB)
Snet= 1,610,612,736 KiB - (240 KiB + 1024 KiB)
Snet= 1,610,612,736 KiB - 1,264KiB
Snet= 1,610,611,472KiB

So I set the filesystem to 1610612256 KiB and the backing block device
to 1610612736 KiB. Am I on the right path here?
___
Star us on GITHUB: https://github.com/LINBIT
drbd-user mailing list
drbd-user@lists.linbit.com
https://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] Can't shrink backing block device offline

2021-07-15 Thread Adam Goryachev


On 15/7/21 20:40, Christophorus Reyhan wrote:

Hello,

I'm trying to shrink the LVM backing block device offline because I
use internal metadata, and I'm following this guide:
https://linbit.com/drbd-user-guide/drbd-guide-9_0-en/#s-shrinking-offline
But it failed at the create-md step.


*Here's the log for first machine (called ol1):*
[root@ol1 drbd.d]# e2fsck -f /dev/drbd0
e2fsck 1.42.9 (28-Dec-2013)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/drbd0: 12/51200 files (0.0% non-contiguous), 109774/204732 blocks
[root@ol1 drbd.d]# drbdadm status
test role:Primary
  disk:UpToDate
  ol2 role:Secondary
    peer-disk:UpToDate

[root@ol1 drbd.d]# resize2fs /dev/drbd0 180M

So the filesystem inside your DRBD device is going to be 180M

resize2fs 1.42.9 (28-Dec-2013)
Resizing the filesystem on /dev/drbd0 to 184320 (1k) blocks.
The filesystem on /dev/drbd0 is now 184320 blocks long.

[root@ol1 drbd.d]# drbdadm down test
[root@ol1 drbd.d]# drbdadm dump-md test > /tmp/metadata
Found meta data is "unclean", please apply-al first
[root@ol1 drbd.d]# drbdadm apply-al test
[root@ol1 drbd.d]# drbdadm dump-md test > /tmp/metadata
[root@ol1 drbd.d]# lvreduce -L 180M /dev/mapper/vgdrbd-vol_drbd
  WARNING: Reducing active logical volume to 180.00 MiB.
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce vgdrbd/vol_drbd? [y/n]: y
  Size of logical volume vgdrbd/vol_drbd changed from 200.00 MiB (50 
extents) to 180.00 MiB (45 extents).
Also, the size of the volume containing both your ext filesystem AND 
your DRBD data is going to be 180M

  Logical volume vgdrbd/vol_drbd successfully resized.
[root@ol1 drbd.d]# vi /tmp/metadata
[root@ol1 drbd.d]# blockdev --getsz /dev/mapper/vgdrbd-vol_drbd
368640
[root@ol1 drbd.d]# vi /tmp/metadata
[root@ol1 drbd.d]# drbdadm create-md test --max-peers=5
You want me to create a v09 style flexible-size internal meta data block.
There appears to be a v09 flexible-size internal meta data block
already in place on /dev/vgdrbd/vol_drbd at byte offset 188739584

Do you really want to overwrite the existing meta-data?
[need to type 'yes' to confirm] yes

md_offset 188739584
al_offset 188706816
bm_offset 188674048

Found ext3 filesystem
      184320 kB data area apparently used
      184252 kB left usable by current configuration


As per the messages, there isn't enough space to preserve both your
filesystem data AND add the DRBD data. Either resize your filesystem to
179M, resize the LV to 180M, and then after everything is working again,
you can resize the filesystem to take all available space, and it should
increase to 184252kB (179.9+ MB).


I'm pretty sure the DRBD manual has some details on how to calculate 
exactly how much space you will need to leave available for the DRBD 
metadata, but I usually just do the above (make the filesystem smaller 
than my target, and then after DRBD has been resized, increase the FS to 
take up any extra space).
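
Condensed into commands, using the device names from this thread (a sketch
of that approach only; note that recreating the meta data discards the old
sync state, so treat this as illustrative rather than the full official
procedure):

resize2fs /dev/drbd0 179M        # undershoot the 180M target
drbdadm down test
lvreduce -L 180M /dev/mapper/vgdrbd-vol_drbd
drbdadm create-md test --max-peers=5
drbdadm up test
# once the resource is healthy and Primary again:
resize2fs /dev/drbd0             # grow the FS into the remaining space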


Hope that helps...

Regards,
Adam

___
Star us on GITHUB: https://github.com/LINBIT
drbd-user mailing list
drbd-user@lists.linbit.com
https://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] Can't shrink backing block device offline

2021-07-15 Thread Robert Altnoeder
On 15 Jul 2021, at 13:35, Adam Goryachev wrote:
> 
> I'm pretty sure the DRBD manual has some details on how to calculate exactly 
> how much space you will need to leave available for the DRBD metadata

The formulas for exact calculation are not documented in the manual, because
they are so complicated that you would not expect users to use them, let
alone actually get them right.

Anyhow, if you *really* want to know, you get the net size (usable size for a 
filesystem) from the gross size (total available space) by calculating:

Snet = a4096(Sgross) - A4096(A1024(A8(A4096(Sgross) * 32768^-1) * Cpeer) + Sal + Ssb)

where
ax = floor align to x
Ax = ceiling align to x
Snet = usable size
Sgross = available space
Sal = size of the activity log
Ssb = size of the superblock
Cpeer = peer count

The superblock size Ssb is 4096 bytes (4 kiB). The activity log size Sal is
32 kiB by default, but may change if the al stripe size and/or al stripe
count is changed (normally when using DRBD on a striped RAID device).
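
Rendered literally in bash (my reading of the formula's units and rounding,
with byte-granular alignments; a sketch, not the LINSTOR implementation):

align_up()   { echo $(( ($1 + $2 - 1) / $2 * $2 )); }
align_down() { echo $(( $1 / $2 * $2 )); }

gross_b=$(( 1610612736 * 1024 ))          # Sgross in bytes (the 1.5 TiB example)
peers=5                                   # Cpeer
g=$(align_down "$gross_b" 4096)           # a4096(Sgross)
per_peer=$(align_up $(( g / 32768 )) 8)   # A8(A4096(Sgross) * 32768^-1), 1 bit per 4 KiB
bm_b=$(align_up $(( per_peer * peers )) 1024)        # A1024(... * Cpeer)
md_b=$(align_up $(( bm_b + 32*1024 + 4096 )) 4096)   # A4096(... + Sal + Ssb)
net_b=$(( g - md_b ))
echo $(( net_b / 1024 ))                  # 1610366940 KiB, matching the results above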

The other way around (gross size from usable size) is even more complicated,
because it involves calculating the mathematical limit of the sum of
fractions of the bitmap size; for additional fun, you have to do the whole
calculation with integer arithmetic, with a couple of alignments added in.

The only place where those accurate calculations are currently used, in both 
directions, gross to net and net to gross, is in the LINSTOR controller.

For practical purposes, it's normally sufficient to calculate Sgross / 32768 *
Cpeer + 1 MiB. So you're basically making the device too large (or the
filesystem too small); that way you end up with a bit more space for the meta
data than what you actually need. That is also what is documented in the DRBD
User's Guide.

Cheers,
Robert

___
Star us on GITHUB: https://github.com/LINBIT
drbd-user mailing list
drbd-user@lists.linbit.com
https://lists.linbit.com/mailman/listinfo/drbd-user


[DRBD-user] Can't shrink backing block device offline

2021-07-15 Thread Christophorus Reyhan
Hello,

I'm trying to shrink the LVM backing block device offline because I
use internal metadata, and I'm following this guide:
https://linbit.com/drbd-user-guide/drbd-guide-9_0-en/#s-shrinking-offline
But it failed at the create-md step.

*Here's the log for first machine (called ol1):*
[root@ol1 drbd.d]# e2fsck -f /dev/drbd0
e2fsck 1.42.9 (28-Dec-2013)
Pass 1: Checking inodes, blocks, and sizes
Pass 2: Checking directory structure
Pass 3: Checking directory connectivity
Pass 4: Checking reference counts
Pass 5: Checking group summary information
/dev/drbd0: 12/51200 files (0.0% non-contiguous), 109774/204732 blocks
[root@ol1 drbd.d]# drbdadm status
test role:Primary
  disk:UpToDate
  ol2 role:Secondary
    peer-disk:UpToDate

[root@ol1 drbd.d]# resize2fs /dev/drbd0 180M
resize2fs 1.42.9 (28-Dec-2013)
Resizing the filesystem on /dev/drbd0 to 184320 (1k) blocks.
The filesystem on /dev/drbd0 is now 184320 blocks long.

[root@ol1 drbd.d]# drbdadm down test
[root@ol1 drbd.d]# drbdadm dump-md test > /tmp/metadata
Found meta data is "unclean", please apply-al first
[root@ol1 drbd.d]# drbdadm apply-al test
[root@ol1 drbd.d]# drbdadm dump-md test > /tmp/metadata
[root@ol1 drbd.d]# lvreduce -L 180M /dev/mapper/vgdrbd-vol_drbd
  WARNING: Reducing active logical volume to 180.00 MiB.
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce vgdrbd/vol_drbd? [y/n]: y
  Size of logical volume vgdrbd/vol_drbd changed from 200.00 MiB (50
extents) to 180.00 MiB (45 extents).
  Logical volume vgdrbd/vol_drbd successfully resized.
[root@ol1 drbd.d]# vi /tmp/metadata
[root@ol1 drbd.d]# blockdev --getsz /dev/mapper/vgdrbd-vol_drbd
368640
[root@ol1 drbd.d]# vi /tmp/metadata
[root@ol1 drbd.d]# drbdadm create-md test --max-peers=5
You want me to create a v09 style flexible-size internal meta data block.
There appears to be a v09 flexible-size internal meta data block
already in place on /dev/vgdrbd/vol_drbd at byte offset 188739584

Do you really want to overwrite the existing meta-data?
[need to type 'yes' to confirm] yes

md_offset 188739584
al_offset 188706816
bm_offset 188674048

Found ext3 filesystem
  184320 kB data area apparently used
  184252 kB left usable by current configuration

Device size would be truncated, which
would corrupt data and result in
'access beyond end of device' errors.
You need to either
  * use external meta data (recommended)
  * shrink that filesystem first
  * zero out the device (destroy the filesystem)
Operation refused.

Command 'drbdmeta 0 v09 /dev/vgdrbd/vol_drbd internal create-md 5'
terminated with exit code 40
[root@ol1 drbd.d]# drbdadm create-md test
You want me to create a v09 style flexible-size internal meta data block.
There appears to be a v09 flexible-size internal meta data block
already in place on /dev/vgdrbd/vol_drbd at byte offset 188739584

Do you really want to overwrite the existing meta-data?
[need to type 'yes' to confirm] yes

md_offset 188739584
al_offset 188706816
bm_offset 188698624

Found ext3 filesystem
  184320 kB data area apparently used
  184276 kB left usable by current configuration

Device size would be truncated, which
would corrupt data and result in
'access beyond end of device' errors.
You need to either
  * use external meta data (recommended)
  * shrink that filesystem first
  * zero out the device (destroy the filesystem)
Operation refused.

Command 'drbdmeta 0 v09 /dev/vgdrbd/vol_drbd internal create-md 1'
terminated with exit code 40

*Here's the log for second machine (ol2):*
[root@ol2 ~]# drbdadm down test
[root@ol2 ~]# rm /tmp/metadata
rm: remove regular file /tmp/metadata? y
[root@ol2 ~]# drbdadm dump-md test > /tmp/metadata
Found meta data is "unclean", please apply-al first
[root@ol2 ~]# drbdadm apply-al test
[root@ol2 ~]# lvreduce -L 180M /dev/mapper/vgdrbd-vol_drbd
  WARNING: Reducing active logical volume to 180.00 MiB.
  THIS MAY DESTROY YOUR DATA (filesystem etc.)
Do you really want to reduce vgdrbd/vol_drbd? [y/n]: y
  Size of logical volume vgdrbd/vol_drbd changed from 200.00 MiB (50
extents) to 180.00 MiB (45 extents).
  Logical volume vgdrbd/vol_drbd successfully resized.
[root@ol2 ~]# vi /tmp/metadata

At this point, ol1 couldn't create-md, so I stopped, restored the size back
to 200MB, ran create-md again, and it succeeded.

Any help will be much appreciated.

-- 
Best Regards,
Christophorus Reyhan Tantio A.
___
Star us on GITHUB: https://github.com/LINBIT
drbd-user mailing list
drbd-user@lists.linbit.com
https://lists.linbit.com/mailman/listinfo/drbd-user