Re: [DRBD-user] Strange drbdtop results

2018-05-11 Thread Yannis Milios
> drbdtop's detailed status for a resource shows OutOfSync on some
> nodes. I tried to adjust all resources without any success in solving
> this problem.


That can be solved by disconnecting and then reconnecting the resource
that has the out-of-sync blocks.
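
For example, assuming the affected resource is vm-100-disk-1
(substitute your own resource name and run it on the node that shows
the out-of-sync blocks):

root@dmz-pve1:~ # drbdadm disconnect vm-100-disk-1
root@dmz-pve1:~ # drbdadm connect vm-100-disk-1

After reconnecting, the peers renegotiate and resync whatever is still
marked out of sync.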


[DRBD-user] Strange drbdtop results

2018-05-11 Thread Jean-Daniel TISSOT
Hi,

I have some strange results with drbdtop:

On pve1:
│  Name          | Role      | Disks | Peer Disks | Connections | Overall │
│  vm-102-disk-1 | Secondary | ✓     | ✓          | ✓           | ✓       │
│  .drbdctrl     | Secondary | ✓     | ✗ (1)      | ✓           | ✗ (1)   │
│  vm-101-disk-1 | Secondary | ✓     | ✗ (3)      | ✓           | ✗ (3)   │
│  vm-100-disk-1 | Secondary | ✓     | ✗ (5)      | ✓           | ✗ (5)   │
On pve2:
│  Name          | Role      | Disks | Peer Disks | Connections | Overall │
│  .drbdctrl     | Primary   | ✓     | ✓          | ✓           | ✓       │
│  vm-101-disk-1 | Secondary | ✓     | ✓          | ✓           | ✓       │
│  vm-102-disk-1 | Secondary | ✓     | ✓          | ✓           | ✓       │
│  vm-100-disk-1 | Secondary | ✓     | ✗ (3)      | ✓           | ✗ (3)   │
On pve3:
│  Name          | Role      | Disks | Peer Disks | Connections | Overall │
│  .drbdctrl     | Secondary | ✓     | ✓          | ✓           | ✓       │
│  vm-102-disk-1 | Primary   | ✓     | ✓          | ✓           | ✓       │
│  vm-101-disk-1 | Primary   | ✓     | ✗ (3)      | ✓           | ✗ (3)   │
│  vm-100-disk-1 | Primary   | ✓     | ✗ (5)      | ✓           | ✗ (5)   │

drbdadm status and drbd-overview both give me nice results; all my
resources are UpToDate:
root@dmz-pve1:~ # drbdadm status
.drbdctrl role:Secondary
  volume:0 disk:UpToDate
  volume:1 disk:UpToDate
  dmz-pve2 role:Primary
    volume:0 peer-disk:UpToDate
    volume:1 peer-disk:UpToDate
  dmz-pve3 role:Secondary
    volume:0 peer-disk:UpToDate
    volume:1 peer-disk:UpToDate

vm-100-disk-1 role:Secondary
  disk:UpToDate
  dmz-pve2 role:Secondary
    peer-disk:UpToDate
  dmz-pve3 role:Primary
    peer-disk:UpToDate

vm-101-disk-1 role:Secondary
  disk:UpToDate
  dmz-pve2 role:Secondary
    peer-disk:UpToDate
  dmz-pve3 role:Primary
    peer-disk:UpToDate

vm-102-disk-1 role:Secondary
  disk:UpToDate
  dmz-pve2 role:Secondary
    peer-disk:UpToDate
  dmz-pve3 role:Primary
    peer-disk:UpToDate

root@dmz-pve1:~ # drbd-overview
NOTE: drbd-overview will be deprecated soon.
Please consider using drbdtop.

  0:.drbdctrl/0      Connected(3*) Seco(dmz-pve3,dmz-pve1)/Prim(dmz-pve2) UpTo(dmz-pve1)/UpTo(dmz-pve2,dmz-pve3)
  1:.drbdctrl/1      Connected(3*) Seco(dmz-pve1,dmz-pve3)/Prim(dmz-pve2) UpTo(dmz-pve1)/UpTo(dmz-pve2,dmz-pve3)
100:vm-100-disk-1/0  Connected(3*) Seco(dmz-pve1,dmz-pve2)/Prim(dmz-pve3) UpTo(dmz-pve1)/UpTo(dmz-pve2,dmz-pve3)
101:vm-101-disk-1/0  Connected(3*) Seco(dmz-pve2,dmz-pve1)/Prim(dmz-pve3) UpTo(dmz-pve1)/UpTo(dmz-pve3,dmz-pve2)
102:vm-102-disk-1/0  Connected(3*) Seco(dmz-pve1,dmz-pve2)/Prim(dmz-pve3) UpTo(dmz-pve1)/UpTo(dmz-pve2,dmz-pve3)

drbdtop's detailed status for a resource shows OutOfSync on some nodes.
I tried to adjust all resources without any success in solving this
problem.
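
(For reference, "adjust" here means drbdadm adjust; the raw per-peer
out-of-sync counters that drbdtop presumably reads can also be dumped
with drbdsetup. The resource name below is just an example.)

root@dmz-pve1:~ # drbdadm adjust all
root@dmz-pve1:~ # drbdsetup status vm-100-disk-1 --verbose --statistics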

Can I have some help with this?
Thanks.

-- 
Best regards, Jean-Daniel TISSOT




Re: [DRBD-user] One resource per disk?

2018-05-11 Thread Gandalf Corvotempesta
On Fri, 11 May 2018 at 11:58, Robert Altnoeder
<robert.altnoe...@linbit.com> wrote:
> If it is supposed to become a storage system (e.g., one that is used
> by the hypervisors via NFS), then the whole thing is a different
> story, and we may be talking about an active/passive NFS storage
> cluster that the hypervisors connect to. The setup mentioned above is
> probably still the way to go with regard to the storage stack layout;
> however, there could obviously be a single large NFS volume, which
> would only be active (accessible) on one of the storage nodes at a
> time.

In my case, this would be used for Maildir hosting.
Multiple DRBD resources would be unmanageable and useless; with Maildir
I'll have one huge NFS server (maybe NFSv3, to be stateless) in
active/passive. Any short failure of a couple of seconds should go
unnoticed by everyone (users, Dovecot deliveries and so on), because
all of the software is configured to retry a few times.
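
For example, an NFSv3 hard mount on the clients along these lines
(options purely illustrative) just blocks and retries during a short
failover instead of returning errors:

mount -t nfs -o vers=3,hard,timeo=600,retrans=5 nfs-vip:/srv/mail /var/mail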


Re: [DRBD-user] One resource per disk?

2018-05-11 Thread Robert Altnoeder
On 05/10/2018 09:44 PM, Gandalf Corvotempesta wrote:
> On Wed, 2 May 2018 at 08:33, Paul O'Rorke wrote:
>
>> I create a large data drive out of a bunch of small SSDs using RAID
>> and make that RAID drive an LVM PV.
>>
>> I can then create LVM volume groups and volumes for each use (in my
>> case virtual drives for KVM) to back specific DRBD resources for
>> each VM.  It allows me to have a DRBD resource for each VM, each
>> backed by an LVM volume which is in turn on a large LVM PV.
>>
>> VM has DRBD resource as its block device --> DRBD resources backed
>> by LVM volume --> LVM volume on a large RAID-based Physical Volume.
>
> Too many DRBD resources to manage.
> I prefer a single resource, if possible.

The only sane active-active setup is one with separate DRBD resources
per VM, because write access is granted per resource: if one VM is
running on hypervisor A and another VM is running on hypervisor B, each
must be writable on a different node. That is easy with separate
resources, whereas a single resource would require a dual-primary setup
and cluster-aware volume management or filesystems on top (e.g.,
Cluster LVM), and the slightest interruption of the replication link is
guaranteed to cause a split-brain situation, which means that half of
the VMs will lose some data as soon as the split brain is resolved by
dropping one of the two split datasets.
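
For illustration, resolving such a split brain boils down to picking a
victim node and discarding its changes, roughly like this (the resource
name r0 is just an example):

  # on the node whose data is to be discarded
  drbdadm disconnect r0
  drbdadm secondary r0
  drbdadm connect --discard-my-data r0

  # on the surviving node, if it went StandAlone
  drbdadm connect r0

and that is exactly the point where the VMs that were active on the
discarded side lose their latest writes.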

So the standard setup is indeed to have each VM backed by a DRBD
resource, the DRBD resource backed by an LVM or ZFS volume, and that
backed by an LVM volume group or ZFS zpool, which in turn is backed by
RAID or single hard disks, SSDs, etc.
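
A minimal sketch of that stack for a single VM, with all names (volume
group, resource, hosts, addresses) purely illustrative:

  lvcreate -L 32G -n vm-100-disk-1 vg_drbd

  # /etc/drbd.d/vm-100-disk-1.res
  resource vm-100-disk-1 {
      device    /dev/drbd100;
      disk      /dev/vg_drbd/vm-100-disk-1;
      meta-disk internal;
      on hypervisor-a { address 10.0.0.1:7100; }
      on hypervisor-b { address 10.0.0.2:7100; }
  }

The VM then uses /dev/drbd100 as its block device, and that single
resource can be Primary on whichever hypervisor runs the VM, without
affecting any of the other resources.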

> What I would like is to create HA NFS storage, if possible with ZFS,
> but without putting ZFS on top of DRBD (I prefer the opposite: DRBD
> on top of ZFS).

If it is supposed to become a storage system (e.g., one that is used by
the hypervisors via NFS), then the whole thing is a different story, and
we may be talking about an active/passive NFS storage cluster that the
hypervisors connect to. The setup mentioned above is probably still the
way to go with regard to the storage stack layout; however, there could
obviously be a single large NFS volume, which would only be active
(accessible) on one of the storage nodes at a time.
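
As a rough sketch, the active/passive side could look like this in
Pacemaker (crm shell syntax; resource names, device, mount point and IP
are illustrative, not a tested configuration):

  primitive p_drbd_nfs ocf:linbit:drbd params drbd_resource=nfs \
      op monitor interval=30s role=Slave op monitor interval=20s role=Master
  ms ms_drbd_nfs p_drbd_nfs meta master-max=1 clone-max=2 notify=true
  primitive p_fs_nfs ocf:heartbeat:Filesystem \
      params device=/dev/drbd0 directory=/srv/nfs fstype=ext4
  primitive p_nfs ocf:heartbeat:nfsserver params nfs_shared_infodir=/srv/nfs/info
  primitive p_vip ocf:heartbeat:IPaddr2 params ip=192.168.0.100
  group g_nfs p_fs_nfs p_nfs p_vip
  colocation c_nfs_on_drbd inf: g_nfs ms_drbd_nfs:Master
  order o_drbd_before_nfs inf: ms_drbd_nfs:promote g_nfs:start

The NFS clients then mount the export via the virtual IP, and a
failover simply moves the whole group to the other storage node.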

Best regards,
-- 
Robert Altnoeder
+43 1 817 82 92 0
robert.altnoe...@linbit.com

LINBIT | Keeping The Digital World Running
DRBD - Corosync - Pacemaker

DRBD® and LINBIT® are registered trademarks of LINBIT, Austria.