Hi Ronald,
Thank you for the reply.
The reason I asked this question is that when we run
dbus-drbdmanaged on all three nodes simultaneously and one of the two
secondary nodes goes down, then when it comes back up it sees the other
secondary node as Outdated. To fix it, I have to kill
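For context, a minimal way to inspect that state on the node that just
rejoined; the reconnect step below is an assumption about a possible
workaround, not a confirmed fix:

  # show how this node currently sees the control volume and its peers
  drbdadm status .drbdctrl

  # if a peer shows peer-disk:Outdated, a disconnect/connect cycle on the
  # control volume may clear it (assumption; try on a test cluster first)
  drbdadm disconnect .drbdctrl
  drbdadm connect .drbdctrl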
Hi,
This time testers were more active than usual. All of the issues brought
up were _not_ regressions of the current cycle.
So, again, the call for all testers: Let us know if there is
anything that needs to be fixed before the 9.0.8 release.
9.0.8rc2-1 (api:genl2/proto:86-112/transport:14)
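For anyone testing the release candidate, a minimal build-and-load sketch
(the tarball name and location are assumptions; adjust to wherever you
fetched rc2 from):

  # build the module against the running kernel
  tar xzf drbd-9.0.8rc2-1.tar.gz
  cd drbd-9.0.8rc2-1
  make KDIR=/lib/modules/$(uname -r)/build
  make install

  # reload and confirm the version string matches the rc
  modprobe drbd
  cat /proc/drbd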
On Mon, Jun 12, 2017 at 5:45 PM, Lars Ellenberg wrote:
> On Fri, Jun 09, 2017 at 11:39:05PM +0800, David Lee wrote:
> > Hi,
> >
> > I am experimenting with DRBD dual-primary with OCFS2, and DRBD client as
> > well, with the hope that every node can access the
Hi,
I have a strange situation on a three-node Proxmox cluster with DRBD.
root@dmz-pve3:~ # drbd-overview
0:.drbdctrl/0 Connected(3*) Secondary(3*)
UpTo(dmz-pve3)/UpTo(dmz-pve2,dmz-pve1)
1:.drbdctrl/1 Connected(3*) Secondary(3*)
On 12/06/2017 at 10:09, Robert Altnoeder wrote:
> On 06/12/2017 09:39 AM, Julien Escario wrote:
>
>> Finally, I've been able to fully restore vm4 and vm5 (drbdsetup and
>> drbdmanage
>> working) but not vm7.
>>
>> I've done that by firewalling port 6999 (the port used by the .drbdctrl resource)
>>
On Fri, Jun 09, 2017 at 11:39:05PM +0800, David Lee wrote:
> Hi,
>
> I am experimenting with DRBD dual-primary with OCFS2, and DRBD client as
> well, with the hope that every node can access the storage in a unified way.
> But I got a kernel call trace and a huge number of ASSERTION failures
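For reference, dual-primary is enabled in the resource's net section; a
minimal sketch of such a configuration (resource name, hostnames, devices,
and addresses are all placeholders):

  resource r0 {
      net {
          protocol C;               # dual-primary requires synchronous replication
          allow-two-primaries yes;  # lets both nodes be Primary for OCFS2
      }
      # with two primaries, a fencing policy is strongly recommended
      on nodeA {
          device    /dev/drbd0;
          disk      /dev/sdb1;
          address   192.168.0.1:7789;
          meta-disk internal;
      }
      on nodeB {
          device    /dev/drbd0;
          disk      /dev/sdb1;
          address   192.168.0.2:7789;
          meta-disk internal;
      }
  }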
On 12/06/2017 at 09:57, Roland Kammerer wrote:
> Without access to that machine, I'd say that is how you have to resolve
> it (reboot). And yes, we also saw these hangs in old drbd9 versions.
Ok, good to know. Of course, I won't ask you to go further without a proper
support contract. It was
On 06/12/2017 09:39 AM, Julien Escario wrote:
> Finally, I've been able to fully restore vm4 and vm5 (drbdsetup and drbdmanage
> working) but not vm7.
>
> I've done that by firewalling port 6999 (the port used by the .drbdctrl resource) and
> issuing a down/up on drbdctrl on vm4 and vm5.
>
> [...]
>
>
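For anyone in the same situation, a sketch of that firewall-and-restart
sequence (assuming iptables and that the generated control-volume resource
file is in place; verify on your own setup before running on a live cluster):

  # temporarily block the control-volume port (6999)
  iptables -A INPUT -p tcp --dport 6999 -j DROP

  # cycle the control volume on the affected node
  drbdsetup down .drbdctrl
  drbdadm up .drbdctrl      # assumes the drbdctrl resource file exists

  # once the state looks sane again, remove the temporary rule
  iptables -D INPUT -p tcp --dport 6999 -j DROP
  drbdadm status .drbdctrl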
On Mon, Jun 12, 2017 at 09:39:08AM +0200, Julien Escario wrote:
> On 09/06/2017 at 14:24, Julien Escario wrote:
> > On 09/06/2017 at 09:59, Robert Altnoeder wrote:
> >> On 06/08/2017 04:14 PM, Julien Escario wrote:
> >>> Hello,
> >>> A drbdmanage cluster is currently stuck in this state:
> >>>
On 09/06/2017 at 14:24, Julien Escario wrote:
> On 09/06/2017 at 09:59, Robert Altnoeder wrote:
>> On 06/08/2017 04:14 PM, Julien Escario wrote:
>>> Hello,
>>> A drbdmanage cluster is currently stuck in this state:
>>> .drbdctrl role:Secondary
>>> volume:0 disk:UpToDate
>>> volume:1
On Sat, Jun 10, 2017 at 03:44:39PM +0500, Jaz Khan wrote:
> Hi,
>
> I have a very simple & important question to ask.
>
> If I have a 3 nodes setup, do I need to have "dbus-drbdmanaged"
> daemon/process to be running in the background on all 3 nodes OR just on
> the primary node?
All.
> P.S. I
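(A quick way to verify this on each node; the process-name pattern is an
assumption about how the daemon appears in the process list:)

  # check that the drbdmanage server process is running locally
  pgrep -af drbdmanaged

  # and that the cluster view includes all three nodes
  drbdmanage list-nodes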
On Sun, Jun 11, 2017 at 06:18:40AM +, Tomer Azran wrote:
> Hello all,
>
> I want to use DRBD 9.0 on CentOS 7. The ELRepo RPMs are not updated, so
> I will compile DRBD from source. I want to create a kernel-independent
> module, since I don't want to recompile the DRBD module
> every time
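For reference, a minimal out-of-tree build sketch for CentOS 7 (the tarball
name is assumed; note that the resulting module is still tied to the kernel
it was built against, so it has to be rebuilt after a kernel update):

  # build prerequisites
  yum install -y gcc make kernel-devel-$(uname -r)

  # unpack, build against the running kernel, and install (tarball name assumed)
  tar xzf drbd-9.0.7-1.tar.gz
  cd drbd-9.0.7-1
  make KDIR=/lib/modules/$(uname -r)/build
  make install
  modprobe drbd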