Dear Roland,

I cleared the current cluster configuration with drbdmanage uninit on all
nodes, manually removed the zvol from the ZFS pool, rebooted the servers, and
started fresh.
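

(Roughly the cleanup I ran on each node; the zvol name export_00 is the one
from the earlier zfs list, so adjust it if yours differs:)

# drbdmanage uninit                  # drop the drbdmanage cluster configuration
# zfs destroy mainpool/export_00     # remove the leftover data zvol by hand
# reboot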


Once again there is a hang when I try to create an XFS filesystem on top of
the drbd device. I do see some panics in the logs (scroll all the way to the
end):


http://termbin.com/b5u3
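

(For reference, the step that hangs is essentially just this, using the
minor 100 that list-volumes reports below:)

# drbdadm primary test
# mkfs.xfs /dev/drbd100     # hangs here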


I am running this on RHEL 7.5 with kernel 3.10.0-862.el7.x86_64.


Am I hitting a bug? Or is the problem simply that I am not waiting for the
initial sync to finish? I have already upgraded the kernel module to the
latest 9.0.14 announced today.
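

(In case the sync is indeed the issue, this is the rough workaround I would
try: block until the initial sync of "test" finishes before running mkfs,
simply by polling drbdadm status, since I am not sure which wait-sync helpers
this drbd-utils version ships:)

# while drbdadm status test | grep -q Inconsistent; do sleep 10; done
# mkfs.xfs /dev/drbd100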


# rpm -qa |grep kmod-drbd

kmod-drbd-9.0.14_3.10.0_862-1.el7.x86_64


[root@ae-fs01 tmp]# drbdmanage list-nodes
+-----------------------------------------+
| Name    | Pool Size | Pool Free | State |
|-----------------------------------------|
| ae-fs01 |  13237248 |  12730090 |    ok |
| ae-fs02 |  13237248 |  12730095 |    ok |
| ae-fs03 |  13237248 |  12730089 |    ok |
+-----------------------------------------+
[root@ae-fs01 tmp]# drbdmanage list-volumes
+--------------------------------------------+
| Name | Vol ID |       Size | Minor | State |
|--------------------------------------------|
| test |      0 | 465.66 GiB |   100 |    ok |
+--------------------------------------------+
[root@ae-fs01 tmp]# drbdadm status
.drbdctrl role:Primary
 volume:0 disk:UpToDate
 volume:1 disk:UpToDate
 ae-fs02 role:Secondary
   volume:0 peer-disk:UpToDate
   volume:1 peer-disk:UpToDate
 ae-fs03 role:Secondary
   volume:0 peer-disk:UpToDate
   volume:1 peer-disk:UpToDate

test role:Primary
 disk:UpToDate
 ae-fs02 role:Secondary
   replication:SyncSource peer-disk:Inconsistent done:5.12
 ae-fs03 role:Secondary
   replication:SyncSource peer-disk:Inconsistent done:5.15


Thanks,


Diego

________________________________
From: [email protected] <[email protected]> 
on behalf of Roland Kammerer <[email protected]>
Sent: Wednesday, May 2, 2018 2:30:54 AM
To: [email protected]
Subject: Re: [DRBD-user] New 3-way drbd setup does not seem to take i/o

On Tue, May 01, 2018 at 04:14:52PM +0000, Remolina, Diego J wrote:
> Hi, was wondering if you could guide me as to what could be the issue here. I 
> configured 3 servers with drbdmanage-0.99.16-1 and drbd-9.3.1-1 and related 
> packages.
>
>
> I created a zfs pool, then used the zvol2.Zvol2 plugin and created a
> resource. All seems fine, up to the point when I want to test the
> resource and create a file system in it. At that point, if I try to
> create say an XFS filesystem, things freeze. If I create a ZFS pool on
> the drbd device, the creation succeeds, but then I cannot write or
> read from that.
>
>
> # zfs list
> NAME                 USED  AVAIL  REFER  MOUNTPOINT
> mainpool            11.6T  1.02T    24K  none
> mainpool/export_00  11.6T  12.6T  7.25G  -
>
> The plugin configuration:
> [GLOBAL]
>
> [Node:ae-fs01]
> storage-plugin = drbdmanage.storage.zvol2.Zvol2
>
> [Plugin:Zvol2]
> volume-group = mainpool
>
>
> # drbdmanage list-nodes
> +-----------------------------------------+
> | Name    | Pool Size | Pool Free | State |
> |-----------------------------------------|
> | ae-fs01 |  13237248 |   1065678 |    ok |
> | ae-fs02 |  13237248 |   1065683 |    ok |
> | ae-fs03 |  13237248 |   1065672 |    ok |
> +-----------------------------------------+
>
>
> # drbdmanage list-volumes
> +---------------------------------------------+
> | Name   | Vol ID |      Size | Minor | State |
> |---------------------------------------------|
> | export |      0 | 10.91 TiB |   106 |    ok |
> +---------------------------------------------+
>
> But after making one node primary, trying to create a file system on it,
> either a new zfs pool for data or an XFS file system, fails.
>
>
> # drbdadm primary export
> # drbdadm status
> .drbdctrl role:Secondary
>  volume:0 disk:UpToDate
>  volume:1 disk:UpToDate
>  ae-fs02 role:Primary
>    volume:0 peer-disk:UpToDate
>    volume:1 peer-disk:UpToDate
>  ae-fs03 role:Secondary
>    volume:0 peer-disk:UpToDate
>    volume:1 peer-disk:UpToDate
>
> export role:Primary
>  disk:UpToDate
>  ae-fs02 role:Secondary
>    peer-disk:UpToDate
>  ae-fs03 role:Secondary
>    peer-disk:UpToDate
>
> # zpool create export /dev/drbd106
> # zfs set compression=lz4 export
> # ls /export
> ls: reading directory /export: Not a directory
>
> If I destroy the pool and try to format /dev/drbd106 as XFS, it just
> hangs forever. Any ideas as to what is happening?

Carving out zvols which are then used by DRBD should work. Putting
another zfs/zpool on top might have its quirks, especially with
auto-promote. And maybe the failed XFS was then a follow-up problem.

So start with something easier then:
create a small (like 10M) resource with DM and then try to create the
XFS on it (without the additional zfs steps).
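
Something along these lines should do (drbdmanage syntax from memory, and the
minor number comes from list-volumes afterwards):

# drbdmanage add-volume tiny 10MB --deploy 3
# drbdadm primary tiny
# mkfs.xfs /dev/drbdXXX     # XXX = minor shown by "drbdmanage list-volumes"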

Regards, rck
_______________________________________________
drbd-user mailing list
[email protected]
http://lists.linbit.com/mailman/listinfo/drbd-user