Re: [DRBD-user] DRBD 9: 3-node mirror error (Low.dev. smaller than requested DRBD-dev. size.)

2019-07-26 Thread Robert Altnoeder
It seems that some of the size values have changed compared to the
original problem report:

On 7/25/19 5:59 PM, Paul Clements wrote:
> [root@sios0 ~]# size_gb=71680
> [root@sios0 ~]# lvcreate -n $LV -L ${size_gb} $VG
>   Logical volume "my_lv" created.
> [root@sios0 ~]# lvs
>   LV    VG    Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
>   my_lv my_vg -wi-a----- 70.00g

So that's one 71680 MiB LVM logical volume, exactly 70 GiB.

However, your original problem report stated:

> # cat /proc/partitions | grep dm-0
>  253    0   73433088 dm-0

which is 71712 MiB, or approximately 70.03 GiB
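
A direct way to cross-check the raw device size on each node, using the device path from your transcript:

blockdev --getsize64 /dev/dm-0   # exact size in bytes
blockdev --getsz /dev/dm-0       # size in 512-byte sectors, the unit dmesg uses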

Then, in your last transcript, we see:
> [1295819.009396] drbd r0/0 drbd0: Current (diskless) capacity
> 146861616, cannot attach smaller (146791608) disk

The "current (diskless) capacity" would be 146861616 sectors, each 512
bytes, which equals 75193147392 bytes.
That's 73430808 kiB, which is approximately 71709.77 MiB, or 70.02 GiB,
and that is exactly the net size of a DRBD with a gross size of 71712
MiB and only one peer slot (2 nodes in the cluster) instead of 2 peer
slots (3 nodes in the cluster).
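
These numbers can be reproduced with the usual approximation of the DRBD 9
internal meta data size: one 4 KiB bitmap block per 2^18 sectors of device,
per peer slot, plus 72 sectors for the superblock and activity log. This is
a sketch, not the exact drbdmeta algorithm, but it matches both values here:

gross=$(( 71712 * 2048 ))                         # 71712 MiB in 512-byte sectors
md=$(( (gross + 262143) / 262144 * 8 * 1 + 72 ))  # meta data, 1 peer slot
echo $(( gross - md ))                            # 146861616, the "diskless capacity"

gross=$(( 71680 * 2048 ))                         # 71680 MiB in 512-byte sectors
md=$(( (gross + 262143) / 262144 * 8 * 2 + 72 ))  # meta data, 2 peer slots
echo $(( gross - md ))                            # 146791608, the disk that failed to attach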

My guess as to what is happening here is this:

Either the local node, or some remote node that it could still
successfully connect to while initializing the local DRBD, was not
stopped, or the DRBD kernel module on it did not stop the resource
successfully. Therefore, some node still had size information from the
original 71712 MiB volume with only one peer slot (a 2-node cluster
configuration).

Then you created a new 71680 MiB volume and initialized it with meta
data with 2 peer slots (a 3-node cluster configuration). When you
started the local DRBD, it either had not been stopped/uninitialized
completely (less likely), or it succeeded in connecting to some remote
node where DRBD had not been stopped/uninitialized completely (more
likely) before it attached its local disk. That is where it got the old
net size of the disk from, which is still the ~70.02 GiB of the 2-node
cluster setup.

When it then tried to attach the local disk, which is only 146791608
sectors = 73395804 KiB = ~71675.59 MiB (consistent with meta data for a
71680 MiB gross-size, 3-node cluster setup), the attach failed because
that disk is smaller than the ~71709.77 MiB peer disk it is supposed to
replicate.
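
If you want to confirm which node still carries the stale size before
redoing everything, a quick check (a sketch; dump-md only works while the
resource is down, so take it down first):

drbdadm down r0                            # on each node
drbdadm dump-md r0 | head                  # on each node; compare recorded size and peer slot count
blockdev --getsz /dev/mapper/my_vg-my_lv   # actual backing device size in sectors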

I suggest repeating the entire procedure: stop DRBD on all nodes,
destroy the backend storage LVs, recreate them, check the size of the
newly created LVs, zero them out, create the meta data, and then start
DRBD, all while monitoring the DRBD state of all three nodes with the
drbdmon utility.
I would also recommend unloading and reloading the DRBD kernel module on
all three nodes to make sure it has actually stopped all resources, or,
even safer, rebooting all three nodes.
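
A minimal sketch of that sequence, with the resource, VG and LV names from
your transcripts (the per-node steps have to run on all three nodes):

drbdadm down r0                        # all nodes
modprobe -r drbd_transport_tcp drbd    # all nodes; proves the module really let go (skip if not loaded)
lvremove -y my_vg/my_lv                # all nodes
lvcreate -n my_lv -L 71680M my_vg      # all nodes
lvs --units m my_vg/my_lv              # verify the new size
blkdiscard -v -z /dev/my_vg/my_lv      # zero it out (or just the first/last MiB as before)
drbdadm -v create-md r0                # all nodes
drbdadm -v up r0                       # all nodes, with drbdmon running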

Another recommendation would be to upgrade to the most recent DRBD and
drbd-utils versions.

I have tried to reproduce the problem with the exact versions and sizes
you are using, but attaching the disk worked normally in my test.

br,
Robert

___
Star us on GITHUB: https://github.com/LINBIT
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] DRBD 9: 3-node mirror error (Low.dev. smaller than requested DRBD-dev. size.)

2019-07-25 Thread Paul Clements
Let me repost that inline, as the scrubbing seems to have mangled the text:

[root@sios0 ~]# size_gb=71680
[root@sios0 ~]# lvcreate -n $LV -L ${size_gb} $VG
  Logical volume "my_lv" created.
[root@sios0 ~]# lvs
  LV    VG    Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  my_lv my_vg -wi-a----- 70.00g
[root@sios0 ~]# blkdiscard -v -z -o 0 -l 1M /dev/$VG/$LV
/dev/my_vg/my_lv: Zero-filled 1048576 bytes from the offset 0
[root@sios0 ~]# blkdiscard -v -z -o $(( ${size_gb} * 2**20 - 2**20 ))
-l 1M /dev/$VG/$LV
/dev/my_vg/my_lv: Zero-filled 1048576 bytes from the offset 75160879104
[root@sios0 ~]# dmesg -c > /dev/null
[root@sios0 ~]# drbdadm -v create-md all
open(/dev/mapper/my_vg/my_lv) failed: No such file or directory
drbdmeta 0 v09 /dev/mapper/my_vg/my_lv internal create-md 2
open(/dev/mapper/my_vg/my_lv) failed: No such file or directory
Command 'drbdmeta 0 v09 /dev/mapper/my_vg/my_lv internal create-md 2'
terminated with exit code 20
[root@sios0 ~]# ls -l /dev/mapper/
total 0
crw-------. 1 root root 10, 236 Jul 10 15:49 control
lrwxrwxrwx. 1 root root   7 Jul 25 15:40 my_vg-my_lv -> ../dm-0
[root@sios0 ~]# vi /etc/drbd.conf
[root@sios0 ~]# vi /etc/drbd.conf
[root@sios0 ~]# drbdadm -v create-md all
drbdmeta 0 v09 /dev/mapper/my_vg-my_lv internal create-md 2
initializing activity log
initializing bitmap (4480 KB) to all zero
Writing meta data...
New drbd meta data block successfully created.
success
drbdmeta 0 v09 /dev/mapper/my_vg-my_lv internal write-dev-uuid 0A8D0B293091FD30
[root@sios0 ~]# drbdadm -v up all
drbdsetup new-resource r0 0
drbdsetup new-minor r0 0 0
drbdsetup new-peer r0 1 --_name=sios1 --protocol=C
drbdsetup new-peer r0 2 --_name=sios2 --protocol=A
drbdsetup new-path r0 1 ipv4:10.0.0.4:7788 ipv4:10.0.0.7:7788
drbdsetup new-path r0 2 ipv4:10.0.0.4:7788 ipv4:10.0.0.5:7788
drbdsetup peer-device-options r0 1 0 --c-max-rate=1000M --c-min-rate=50M
drbdsetup peer-device-options r0 2 0 --c-max-rate=1000M --c-min-rate=50M
drbdmeta 0 v09 /dev/mapper/my_vg-my_lv internal apply-al
drbdsetup attach 0 /dev/mapper/my_vg-my_lv /dev/mapper/my_vg-my_lv internal
0: Failure: (111) Low.dev. smaller than requested DRBD-dev. size.
Command 'drbdsetup attach 0 /dev/mapper/my_vg-my_lv
/dev/mapper/my_vg-my_lv internal' terminated with exit code 10
[root@sios0 ~]# dmesg
[1295818.933401] drbd r0: Starting worker thread (from drbdsetup [15783])
[1295818.944922] drbd r0 sios1: Starting sender thread (from drbdsetup [15787])
[1295818.960986] drbd r0 sios2: Starting sender thread (from drbdsetup [15791])
[1295819.009396] drbd r0/0 drbd0: Current (diskless) capacity
146861616, cannot attach smaller (146791608) disk
[root@sios0 ~]# drbdsetup status -vvs
r0 node-id:0 role:Secondary suspended:no
write-ordering:none
  volume:0 minor:0 disk:Diskless client:no quorum:yes
  size:73430808 read:0 written:0 al-writes:0 bm-writes:0
upper-pending:0 lower-pending:0
  al-suspended:no blocked:no
  sios1 node-id:1 connection:StandAlone role:Unknown congested:no ap-in-flight:0
  rs-in-flight:0
volume:0 replication:Off peer-disk:DUnknown resync-suspended:no
received:0 sent:0 out-of-sync:0 pending:0 unacked:0
  sios2 node-id:2 connection:StandAlone role:Unknown congested:no ap-in-flight:0
  rs-in-flight:0
volume:0 replication:Off peer-disk:DUnknown resync-suspended:no
received:0 sent:0 out-of-sync:0 pending:0 unacked:0

[root@sios0 ~]# drbdsetup show
resource r0 {
    _this_host {
        node-id         0;
        volume 0 {
            device      minor 0;
        }
    }
    connection {
        _peer_node_id 1;
        path {
            _this_host   ipv4 10.0.0.4:7788;
            _remote_host ipv4 10.0.0.7:7788;
        }
        _is_standalone;
        net {
            _name       "sios1";
        }
        volume 0 {
            disk {
                c-max-rate  1024000k; # bytes/second
                c-min-rate  51200k; # bytes/second
            }
        }
    }
    connection {
        _peer_node_id 2;
        path {
            _this_host   ipv4 10.0.0.4:7788;
            _remote_host ipv4 10.0.0.5:7788;
        }
        _is_standalone;
        net {
            protocol    A;
            _name       "sios2";
        }
        volume 0 {
            disk {
                c-max-rate  1024000k; # bytes/second
                c-min-rate  51200k; # bytes/second
            }
        }
    }
}

[root@sios0 ~]# exit
exit
Script done, file is typescript


On Thu, Jul 25, 2019 at 11:52 AM Paul Clements wrote:
>
> Lars,
>
> Thanks for your help.
>
> > Can you post a "transcript"?
>
> Attached is the typescript of the failure.
>
> --
> Paul


Re: [DRBD-user] DRBD 9: 3-node mirror error (Low.dev. smaller than requested DRBD-dev. size.)

2019-07-25 Thread Lars Ellenberg
On Tue, Jul 23, 2019 at 11:51:50AM -0400, Paul Clements wrote:
> > One thing you could check just to be sure is whether the configuration
> > files are identical on all three systems.
> 
> Yes, it's identical. md5sums check out.
> 
> > Was DRBD stopped on all three nodes before you recreated the meta data?
> > (Did you do 'drbdsetup down resourcename' on all three nodes, then
> > recreate the meta data on all three nodes, and then try 'drbdadm up
> > resourcename'?)
> 
> I did:
> 
> # drbdadm down r0
> 
> Before I did the rest of those steps, yes.

> > With that information I may be able to recreate the situation if the
> > problem persists.
> 
> Thanks for the help.
> 
> The conf file is at bottom for reference. Does anyone have a working
> three node drbd.conf they'd be willing to post? I'm starting to think
> something in my conf file is odd.

The config file looks okay at first glance.
Though usually people don't refer directly to /dev/dm-X, but to the
"logical name" (devicemapper minors are not stable).

So I'd expect a /dev/mapper/vg-lv or /dev/vg/lv to be used
instead of /dev/dm-0.
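
For example, to map between the stable names and the dm-X nodes (LV names
as in the transcripts elsewhere in the thread):

ls -l /dev/mapper/             # shows my_vg-my_lv -> ../dm-0
readlink -f /dev/my_vg/my_lv   # resolves the logical name to the current dm node
dmsetup info -c                # lists dm names with their major:minor numbers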

Can you post a "transcript"?

Volume group scratch, LV name s0:
VG=scratch
LV=s0
size_gb=75

lvcreate -n $LV -L ${size_gb}G $VG   # note the G suffix; -L defaults to MiB
# zero out first and last megabyte;
# if you prefer, just leave off -o and -l, to zero-out all of it.
blkdiscard -v -z -o 0 -l 1M /dev/$VG/$LV
blkdiscard -v -z -o $(( ${size_gb} * 2**30 - 2**20 )) -l 1M /dev/$VG/$LV

Make your config file refer to disk /dev/$VG/$LV

dmesg -c > /dev/null     # clear dmesg before the test
drbdadm -v create-md all # on all nodes
drbdadm -v up all        # on all nodes
dmesg   # if it failed
drbdsetup status -vvs
drbdsetup show
*now* do the mkfs on drbd, once, on one node.
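
For that last step, a minimal sketch (assuming a fresh resource; the forced
promotion marks one node's zeroed disk UpToDate so the others sync from it):

drbdadm primary --force r0   # once, on one node only
mkfs.ext4 /dev/drbd0         # any filesystem; ext4 only as an example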


-- 
: Lars Ellenberg
: LINBIT | Keeping the Digital World Running
: DRBD -- Heartbeat -- Corosync -- Pacemaker

DRBD® and LINBIT® are registered trademarks of LINBIT
__
please don't Cc me, but send to list -- I'm subscribed


Re: [DRBD-user] DRBD 9: 3-node mirror error (Low.dev. smaller than requested DRBD-dev. size.)

2019-07-23 Thread Robert Altnoeder
On 7/22/19 9:55 PM, Paul Clements wrote:
> Is there something wrong with my conf file maybe?

One thing you could check just to be sure is whether the configuration
files are identical on all three systems.

Was DRBD stopped on all three nodes before you recreated the meta data?
(Did you do 'drbdsetup down resourcename' on all three nodes, then
recreate the meta data on all three nodes, and then try 'drbdadm up
resourcename'?)

You said you were using DRBD 9.0.16, correct?
Could you also check the exact output of "cat /proc/drbd" and
"drbdadm --version" for me, please?
With that information I may be able to recreate the situation if the
problem persists.

br,
Robert



Re: [DRBD-user] DRBD 9: 3-node mirror error (Low.dev. smaller than requested DRBD-dev. size.)

2019-07-22 Thread Gianni Milo
Not sure what dm-0 is, but you could try to create either a partition or
an LVM volume on top of it and tell drbd to use that instead? Just a
thought...

Gianni

On Mon, 22 Jul 2019 at 20:59, Paul Clements wrote:

> [root@sios0 ~]# cat /proc/partitions | grep dm-0
>  253        0   73433088 dm-0
>
> [root@sios1 ~]# cat /proc/partitions | grep dm-0
>  253        0   73433088 dm-0
>
> [root@sios2 ~]# cat /proc/partitions | grep dm-0
>  253        0   73433088 dm-0
>
>
> On Mon, Jul 22, 2019 at 3:57 PM Trevor Hemsley wrote:
> >
> > On 22/07/2019 20:55, Paul Clements wrote:
> > > [root@sios0 ~]# drbdadm up r0
> > > 0: Failure: (111) Low.dev. smaller than requested DRBD-dev. size.
> > > Command 'drbdsetup attach 0 /dev/dm-0 /dev/dm-0 internal' terminated
> >
> > Probably a silly question and already answered but I can't be bothered
> > to mine my deleted folder to find out: you are sure the backing devices
> > are the same (and correct) size?
> >
> > Trevor


Re: [DRBD-user] DRBD 9: 3-node mirror error (Low.dev. smaller than requested DRBD-dev. size.)

2019-07-22 Thread Trevor Hemsley
On 22/07/2019 20:55, Paul Clements wrote:
> [root@sios0 ~]# drbdadm up r0
> 0: Failure: (111) Low.dev. smaller than requested DRBD-dev. size.
> Command 'drbdsetup attach 0 /dev/dm-0 /dev/dm-0 internal' terminated

Probably a silly question and already answered but I can't be bothered
to mine my deleted folder to find out: you are sure the backing devices
are the same (and correct) size?

Trevor



Re: [DRBD-user] DRBD 9: 3-node mirror error (Low.dev. smaller than requested DRBD-dev. size.)

2019-07-22 Thread Paul Clements
[root@sios0 ~]# cat /proc/partitions | grep dm-0
 253        0   73433088 dm-0

[root@sios1 ~]# cat /proc/partitions | grep dm-0
 253        0   73433088 dm-0

[root@sios2 ~]# cat /proc/partitions | grep dm-0
 253        0   73433088 dm-0


On Mon, Jul 22, 2019 at 3:57 PM Trevor Hemsley  wrote:
>
> On 22/07/2019 20:55, Paul Clements wrote:
> > [root@sios0 ~]# drbdadm up r0
> > 0: Failure: (111) Low.dev. smaller than requested DRBD-dev. size.
> > Command 'drbdsetup attach 0 /dev/dm-0 /dev/dm-0 internal' terminated
>
> Probably a silly question and already answered but I can't be bothered
> to mine my deleted folder to find out: you are sure the backing devices
> are the same (and correct) size?
>
> Trevor


Re: [DRBD-user] DRBD 9: 3-node mirror error (Low.dev. smaller than requested DRBD-dev. size.)

2019-07-22 Thread Paul Clements
Thanks for the tip. I tried it, but no luck. Did this on all three
nodes. Same output:

[root@sios0 ~]# dd if=/dev/zero of=/dev/dm-0 bs=4k
dd: error writing '/dev/dm-0': No space left on device
18358273+0 records in
18358272+0 records out
75195482112 bytes (75 GB) copied, 165.102 s, 455 MB/s

[root@sios0 ~]# drbdadm wipe-md r0
There appears to be no drbd meta data to wipe out?

[root@sios0 ~]# drbdadm create-md --max-peers=3 r0
initializing activity log
initializing bitmap (6724 KB) to all zero
Writing meta data...
New drbd meta data block successfully created.
success

[root@sios0 ~]# drbdadm up r0
0: Failure: (111) Low.dev. smaller than requested DRBD-dev. size.
Command 'drbdsetup attach 0 /dev/dm-0 /dev/dm-0 internal' terminated
with exit code 10

[root@sios0 ~]# cat /proc/partitions | grep dm-0
 253        0   73433088 dm-0


Is there something wrong with my conf file maybe?

Thanks,
Paul

On Mon, Jul 22, 2019 at 3:01 PM Gianni Milo  wrote:
>
> You'll probably have to use the "--max-peers" parameter in the "drbdadm 
> create-md" command, to add support for additional peer nodes.
>
> See chapter 11.7.8 in the docs "Changing the meta-data"
>
> Gianni
>


Re: [DRBD-user] DRBD 9: 3-node mirror error (Low.dev. smaller than requested DRBD-dev. size.)

2019-07-22 Thread Gianni Milo
You'll probably have to use the "--max-peers" parameter in the "drbdadm
create-md" command, to add support for additional peer nodes.

See chapter 11.7.8 in the docs "Changing the meta-data"
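
For example (note that --max-peers counts peers, not nodes, so 2 peer slots
suffice for a three-node cluster):

drbdadm down r0                      # on all nodes first
drbdadm create-md --max-peers=2 r0   # recreate the meta data with 2 peer slots
drbdadm up r0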

Gianni


Re: [DRBD-user] DRBD 9: 3-node mirror error (Low.dev. smaller than requested DRBD-dev. size.)

2019-07-22 Thread Robert Altnoeder
On 7/22/19 5:38 PM, Paul Clements wrote:
> Is there some metadata somewhere else, beside the device?

No.

> Or is there some other problem?

The problem is that the new meta data for three nodes is larger than the
old meta data for two nodes, so the sum of meta data and user data would
be larger than the device.
You will have to either reduce the size of the filesystem, so that the
new meta data fits at the end of the device, or increase the size of the
device.
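
A sketch of the second option with LVM-backed storage (the +32M is
illustrative, a second peer slot on a ~70 GiB device needs only a few
extra MiB; the LV path is assumed from the later transcripts, and sizes
must stay identical on all nodes):

drbdadm down r0                      # on all nodes
lvextend -L +32M my_vg/my_lv         # on all nodes
drbdadm create-md --max-peers=2 r0   # on all nodes
drbdadm up r0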

br,
Robert



Re: [DRBD-user] DRBD 9: 3-node mirror error (Low.dev. smaller than requested DRBD-dev. size.)

2019-07-22 Thread Robert Altnoeder
On 7/19/19 9:33 PM, Paul Clements wrote:
> Any ideas? Something I'm doing wrong? I didn't specify a size
> anywhere, so not sure why this isn't working.

Apparently, you used the default values when you initially created the
DRBD meta data for this installation. Since you had only two nodes
listed in the configuration file and did not specify otherwise, drbdadm
created meta data for a two-node cluster.
Then you added another node to the configuration file, so drbdadm now
creates meta data for a three-node cluster, with 2 peer slots instead of
only 1, which requires approximately twice the space. Since you are
using internal meta data, the net size of the DRBD device becomes
correspondingly smaller.
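
Roughly, the bitmap part of internal meta data costs one bit per 4 KiB of
device, per peer slot, so it scales linearly with the peer count. A
back-of-the-envelope check that matches the "initializing bitmap (4480 KB)"
line seen elsewhere in this thread:

size_mib=71680
kib_per_peer=$(( size_mib * 1024 / 4 / 8 / 1024 ))   # one bit per 4 KiB -> 2240 KiB per peer slot
echo $(( kib_per_peer * 2 ))                         # 2 peer slots -> 4480 KiB of bitmap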

You have two problems here:
1. The net size of the storage on the third node is smaller than on the
other two nodes, so DRBD cannot resync to that node.
2. The other two nodes have only one peer slot each, so they cannot
connect to a third node.

You need to recreate the meta data with at least 2 peer slots to run a
three-node cluster.

To make space for the additional peer slots, you will have to either
- increase the size of the storage device, or
- decrease the size of the data (e.g. the filesystem) on the storage device, or
- switch to external meta data (sketched below).
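
In drbd.conf terms, the third option would look roughly like this (a
sketch; the meta data LV name is made up, and such an LV must exist on
every node, sized for the peer count):

resource r0 {
    volume 0 {
        device    /dev/drbd0;
        disk      /dev/my_vg/my_lv;
        meta-disk /dev/my_vg/my_lv_md;   # instead of 'meta-disk internal;'
    }
    # connection / node definitions unchanged
}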

br,
Robert
