Re: [DRBD-user] One resource per disk?

2018-05-01 Thread Paul O'Rorke
I create a large data drive out of a bunch of small SSDs using RAID and 
make that RAID drive an LVM PV.


I can then create LVM volume groups and logical volumes for each use (in my 
case virtual drives for KVM) to back specific DRBD resources. That gives me a 
DRBD resource for each VM, each backed by an LVM volume which in turn sits on 
the large LVM PV.


VM has a DRBD resource as its block device --> DRBD resource backed by an 
LVM volume --> LVM volume on a large RAID-based Physical Volume.


I can resize resources to my heart's content.  :-)
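
For anyone wanting to copy that layering, a rough sketch of the commands 
involved (device names, sizes, hostnames and IPs below are placeholders for 
illustration, not my actual setup):

# RAID device becomes the PV, one VG on top of it
pvcreate /dev/md0
vgcreate vg_vms /dev/md0

# one LV per VM, used as the DRBD backing device
lvcreate -L 50G -n vm01_disk vg_vms

# minimal /etc/drbd.d/vm01.res on both nodes
resource vm01 {
    device    /dev/drbd1;
    disk      /dev/vg_vms/vm01_disk;
    meta-disk internal;
    on node-a { address 10.0.0.1:7789; }
    on node-b { address 10.0.0.2:7789; }
}

# bring it up on both nodes, then hand /dev/drbd1 to the VM
drbdadm create-md vm01
drbdadm up vm01

Growing a resource is then just lvextend on the backing LV on both nodes 
followed by drbdadm resize vm01.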

Paul O'Rorke
Tracker Software Products (Canada) Limited
www.tracker-software.com
Tel: +1 (250) 324 1621
Fax: +1 (250) 324 1623



Support:
http://www.tracker-software.com/support
Download latest Releases
http://www.tracker-software.com/downloads/




On 2018-05-01 07:38 AM, Yannis Milios wrote:
I would prefer the 2nd option. Ideally all disks would be members of a 
RAID(10?) array, with DRBD sitting on top for the replication, and 
LVM for managing the volume.
Another option would be ZFS managing the disks and the volume, with 
DRBD sitting on top for the replication. This very same scenario would 
also apply to LVM (thin or thick).

There's no right or wrong; it depends on what your needs are.

I would avoid having single points of failure though, like single 
drives in any case...


On Tue, May 1, 2018 at 3:12 PM, Gandalf Corvotempesta wrote:


Hi to all
Let's assume 3 servers with 12 disks each
Would you create one resource per disk and then manage them with
something like LVM or a single resource from a huge volume over
all disks?

___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] New 3-way drbd setup does not seem to take i/o

2018-05-01 Thread Roland Kammerer
On Tue, May 01, 2018 at 04:14:52PM +, Remolina, Diego J wrote:
> Hi, was wondering if you could guide me as to what could be the issue here. I 
> configured 3 servers with drbdmanage-0.99.16-1 and drbd-9.3.1-1 and related 
> packages.
> 
> 
> I created a zfs pool, then used the zvol2.Zvol2 plugin and created a
> resource. All seems fine, up to the point where I want to test the
> resource and create a file system in it. At that point, if I try to
> create, say, an XFS filesystem, things freeze. If I create a ZFS pool on
> the drbd device, the creation succeeds, but then I cannot write to or
> read from it.
> 
> 
> # zfs list
> NAME USED  AVAIL  REFER  MOUNTPOINT
> mainpool11.6T  1.02T24K  none
> mainpool/export_00  11.6T  12.6T  7.25G  -
> 
> The plugin configuration:
> [GLOBAL]
> 
> [Node:ae-fs01]
> storage-plugin = drbdmanage.storage.zvol2.Zvol2
> 
> [Plugin:Zvol2]
> volume-group = mainpool
> 
> 
> # drbdmanage list-nodes
> +-----------------------------------------+
> | Name    | Pool Size | Pool Free | State |
> |-----------------------------------------|
> | ae-fs01 |  13237248 |   1065678 |    ok |
> | ae-fs02 |  13237248 |   1065683 |    ok |
> | ae-fs03 |  13237248 |   1065672 |    ok |
> +-----------------------------------------+
>
>
> # drbdmanage list-volumes
> +---------------------------------------------+
> | Name   | Vol ID |      Size | Minor | State |
> |---------------------------------------------|
> | export |      0 | 10.91 TiB |   106 |    ok |
> +---------------------------------------------+
> 
> But when I make one node primary and try to create a file system on it,
> either a new zfs pool for data or an XFS file system, it fails.
> 
> 
> # drbdadm primary export
> # drbdadm status
> .drbdctrl role:Secondary
>  volume:0 disk:UpToDate
>  volume:1 disk:UpToDate
>  ae-fs02 role:Primary
>volume:0 peer-disk:UpToDate
>volume:1 peer-disk:UpToDate
>  ae-fs03 role:Secondary
>volume:0 peer-disk:UpToDate
>volume:1 peer-disk:UpToDate
> 
> export role:Primary
>  disk:UpToDate
>  ae-fs02 role:Secondary
>peer-disk:UpToDate
>  ae-fs03 role:Secondary
>peer-disk:UpToDate
> 
> # zpool create export /dev/drbd106
> # zfs set compression=lz4 export
> # ls /export
> ls: reading directory /export: Not a directory
> 
> If I destroy the pool and try to format /dev/drbd106 as XFS, it just
> hangs forever. Any ideas as to what is happening?

Carving out zvols which are then used by DRBD should work. Putting
another zfs/zpool on top might have its quirks, especially with
auto-promote. And maybe the failed XFS was then a follow-up problem.

So start with something easier, then:
create a small (like 10M) resource with DM and then try to create the
XFS on it (without the additional zfs steps).
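
Something along these lines (drbdmanage syntax from memory, so double-check
against "drbdmanage --help" on your 0.99.x install; "test", the size and the
node count are just examples):

drbdmanage add-resource test
drbdmanage add-volume test 10MB
drbdmanage deploy-resource test 3
# look up the assigned minor
drbdmanage list-volumes
# then on one node
mkfs.xfs /dev/drbdXXX    # XXX = minor from list-volumes

If that works, the problem is somewhere in the zfs-on-top layer and not in
the DM/DRBD plumbing.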

Regards, rck
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


[DRBD-user] New 3-way drbd setup does not seem to take i/o

2018-05-01 Thread Remolina, Diego J
Hi, was wondering if you could guide me as to what could be the issue here. I 
configured 3 servers with drbdmanage-0.99.16-1 and drbd-9.3.1-1 and related 
packages.


I created a zfs pool, then used the zvol2.Zvol2 plugin and created a resource. All 
seems fine, up to the point where I want to test the resource and create a file 
system in it. At that point, if I try to create, say, an XFS filesystem, things 
freeze. If I create a ZFS pool on the drbd device, the creation succeeds, but 
then I cannot write to or read from it.


# zfs list
NAME USED  AVAIL  REFER  MOUNTPOINT
mainpool11.6T  1.02T24K  none
mainpool/export_00  11.6T  12.6T  7.25G  -

The plugin configuration:
[GLOBAL]

[Node:ae-fs01]
storage-plugin = drbdmanage.storage.zvol2.Zvol2

[Plugin:Zvol2]
volume-group = mainpool


# drbdmanage list-nodes
+-----------------------------------------+
| Name    | Pool Size | Pool Free | State |
|-----------------------------------------|
| ae-fs01 |  13237248 |   1065678 |    ok |
| ae-fs02 |  13237248 |   1065683 |    ok |
| ae-fs03 |  13237248 |   1065672 |    ok |
+-----------------------------------------+


# drbdmanage list-volumes
+---------------------------------------------+
| Name   | Vol ID |      Size | Minor | State |
|---------------------------------------------|
| export |      0 | 10.91 TiB |   106 |    ok |
+---------------------------------------------+

But when I make one node primary and try to create a file system on it, either a 
new zfs pool for data or an XFS file system, it fails.


# drbdadm primary export
# drbdadm status
.drbdctrl role:Secondary
 volume:0 disk:UpToDate
 volume:1 disk:UpToDate
 ae-fs02 role:Primary
   volume:0 peer-disk:UpToDate
   volume:1 peer-disk:UpToDate
 ae-fs03 role:Secondary
   volume:0 peer-disk:UpToDate
   volume:1 peer-disk:UpToDate

export role:Primary
 disk:UpToDate
 ae-fs02 role:Secondary
   peer-disk:UpToDate
 ae-fs03 role:Secondary
   peer-disk:UpToDate

# zpool create export /dev/drbd106
# zfs set compression=lz4 export
# ls /export
ls: reading directory /export: Not a directory

If I destroy the pool and try to format /dev/drbd106 as XFS, it just hangs 
forever. Any ideas as to what is happening?


Diego




___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] One resource per disk?

2018-05-01 Thread Yannis Milios
I would prefer the 2nd option. Ideally all disks would be members of a
RAID(10?) array, with DRBD sitting on top for the replication, and LVM for
managing the volume.
Another option would be ZFS managing the disks and the volume, with DRBD
sitting on top for the replication. This very same scenario would also
apply to LVM (thin or thick).
There's no right or wrong; it depends on what your needs are.

I would avoid having single points of failure though, like single drives in
any case...
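
A minimal sketch of the ZFS-under-DRBD variant, in case it helps (pool, zvol
and resource names below are just placeholders):

# ZFS owns the disks and the redundancy, one zvol per DRBD resource
zpool create tank mirror sda sdb mirror sdc sdd
zfs create -V 100G tank/r0
# then point the resource's backing disk at /dev/zvol/tank/r0 and bring it up
drbdadm create-md r0
drbdadm up r0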

On Tue, May 1, 2018 at 3:12 PM, Gandalf Corvotempesta <
gandalf.corvotempe...@gmail.com> wrote:

> Hi to all
> Let's assume 3 servers with 12 disks each
> Would you create one resource per disk and then manage them with something
> like LVM or a single resource from a huge volume over all disks?
>
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


[DRBD-user] One resource per disk?

2018-05-01 Thread Gandalf Corvotempesta
Hi to all
Let's assume 3 servers with 12 disks each
Would you create one resource per disk and then manage them with something
like LVM or a single resource from a huge volume over all disks?
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] format /dev/drbd0

2018-05-01 Thread Simon Ironside

On 01/05/18 08:22, Roland Kammerer wrote:

On Mon, Apr 30, 2018 at 02:59:23PM +0300, Ran Shalit wrote:

Hello,

I've seen in a tutorial that we format the /dev/drbd0 block device
(mkfs.ext4 /dev/drbd0).
Should it give the same result as if we formatted the real device (for
example /dev/sdb1), or are there any differences?


When you use a device as a backing device for DRBD, you should never
ever touch the backing device directly. Just saying. You "have to" (you
really should) format the DRBD dev, and not the backing device.

Besides that, "the user-visible behavior" should be the same. Obviously
the DRBD device might be slower because of the replication over the
network. But it should contain a normal ext4 filesystem...


The most likely difference is that, when using internal metadata, the DRBD 
device will be a smidge smaller than the backing device to account for it, 
which may cause you problems. As Roland says, don't touch the 
backing device!
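
Easy to see for yourself once the resource is up (device names assumed):

blockdev --getsize64 /dev/sdb1     # backing device
blockdev --getsize64 /dev/drbd0    # a little smaller with internal meta-data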


Simon
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user


Re: [DRBD-user] format /dev/drbd0

2018-05-01 Thread Roland Kammerer
On Mon, Apr 30, 2018 at 02:59:23PM +0300, Ran Shalit wrote:
> Hello,
> 
> I've seen in a tutorial that we format the /dev/drbd0 block device
> (mkfs.ext4 /dev/drbd0).
> Should it give the same result as if we formatted the real device (for
> example /dev/sdb1), or are there any differences?

When you use a device as a backing device for DRBD, you should never
ever touch the backing device directly. Just saying. You "have to" (you
really should) format the DRBD dev, and not the backing device.

Besides that, "the user-visible behavior" should be the same. Obviously
the DRBD device might be slower because of the replication over the
network. But it should contain a normal ext4 filesystem...
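
In other words, the workflow is roughly (resource name "r0" assumed here):

drbdadm primary r0       # on the node that should carry the filesystem
mkfs.ext4 /dev/drbd0     # format the DRBD device, never /dev/sdb1
mount /dev/drbd0 /mnt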

Regards, rck
___
drbd-user mailing list
drbd-user@lists.linbit.com
http://lists.linbit.com/mailman/listinfo/drbd-user