>>> Marc Smith wrote on 08.11.2016 at 17:37 in message:
> Hi,
>
> First, I realize ocf:lvm2:VolumeGroup comes from the LVM2 package and
> not resource-agents, but I'm hoping someone on this list is
I had a feeling it was something to do with that. It was confusing because
I could use the move command to move between my three original hosts, just
not the fourth. Then there were the "device not found" errors, which added
to the confusion.
I have the resource stickiness set because I have critical
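The move the poster mentions is typically done with pcs; a minimal sketch, assuming a resource named my_rsc and a node named node4 (both placeholders):

```shell
# Move a resource to a specific node. This creates a temporary
# "cli-prefer" location constraint pinning the resource there.
pcs resource move my_rsc node4

# Once the resource is where you want it, clear the constraint so it
# does not override normal placement (stickiness, other constraints).
pcs resource clear my_rsc
```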
On 11/08/2016 12:54 PM, Ryan Anstey wrote:
> I've been running a ceph cluster with pacemaker for a few months now.
> Everything has been working normally, but when I added a fourth node, it
> wouldn't work like the others, even though its OS is the same and the
> configs are all synced via Salt. I
On 11/07/2016 09:08 AM, Toni Tschampke wrote:
> We managed to change the validate-with option via workaround (cibadmin
> export & replace) as setting the value with cibadmin --modify doesn't
> write the changes to disk.
>
> After experimenting with various schemes (xml is correctly interpreted
>
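The export-and-replace workaround described above can be sketched as follows; the file path and target schema version are placeholders:

```shell
# Dump the full CIB to a file.
cibadmin --query > /tmp/cib.xml

# Edit the validate-with attribute on the <cib> element, e.g. with sed
# (the schema version here is only an example).
sed -i 's/validate-with="[^"]*"/validate-with="pacemaker-2.0"/' /tmp/cib.xml

# Push the edited CIB back. Unlike --modify, --replace swaps in the
# whole CIB, so the changed attribute is actually persisted.
cibadmin --replace --xml-file /tmp/cib.xml
```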
On 11/04/2016 01:57 PM, CART Andreas wrote:
> Hi
>
> I have a basic 2 node active/passive cluster with Pacemaker (1.1.14 ,
> pcs: 0.9.148) / CMAN (3.0.12.1) / Corosync (1.4.7) on RHEL 6.8.
> This cluster runs NFS on top of DRBD (8.4.4).
>
> Basically the system is working on both nodes and I
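In an NFS-on-DRBD stack like this, the usual glue is colocation and ordering against the DRBD master; a sketch in pcs 0.9 syntax, with placeholder resource IDs (g_nfs for the NFS group, ms_drbd for the DRBD master/slave resource):

```shell
# Run the NFS group only where DRBD is master (IDs are placeholders
# for this cluster's actual resource names).
pcs constraint colocation add g_nfs with master ms_drbd INFINITY

# Promote DRBD before starting NFS.
pcs constraint order promote ms_drbd then start g_nfs
```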
On 11/04/2016 05:51 AM, IT Nerb GmbH wrote:
> Quoting Klaus Wenninger:
>
>> On 11/02/2016 06:32 PM, Ken Gaillot wrote:
>>> On 10/26/2016 06:12 AM, Rainer Nerb wrote:
Hello all,
we're currently testing a 2-node cluster with 2 VMs and live migration
on
On 11/03/2016 08:49 AM, Detlef Gossrau wrote:
> Hi all,
>
> is it possible to prevent a switchover in an active/passive cluster if a
> ping node completely fails?
>
> Situation:
>
> A ping node is put into maintenance and not reachable for a certain
> time. The cluster nodes are getting the
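For background, a ping node is usually wired in as a cloned ocf:pacemaker:ping resource plus a location rule on the connectivity attribute; a minimal sketch, with placeholder names and addresses (p_app is the protected resource, pingd is the agent's default attribute name):

```shell
# Cloned ping resource updating the "pingd" node attribute
# (host list, dampen, and multiplier values are illustrative).
pcs resource create p_ping ocf:pacemaker:ping \
    host_list="192.168.1.1" dampen=30s multiplier=1000 clone

# Ban the application from nodes that lost connectivity. Whether a
# *total* ping-node outage triggers a switchover depends on how this
# rule is written, since the attribute then drops on all nodes at once.
pcs constraint location p_app rule score=-INFINITY \
    pingd lt 1 or not_defined pingd
```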
On Tue, Nov 08, 2016 at 12:54:10PM +0100, Klaus Wenninger wrote:
> On 11/08/2016 11:40 AM, Kostiantyn Ponomarenko wrote:
> > Hi,
> >
> > I need a way to do a manual fail-back on demand.
> > To be clear, I don't want it to be ON/OFF; I want it to be more like
> > "one shot".
> > So far I found that
Perfect, thanks for the quick reply.
--Marc
On Tue, Nov 8, 2016 at 12:00 PM, Ken Gaillot wrote:
> On 11/08/2016 10:37 AM, Marc Smith wrote:
>> Hi,
>>
>> First, I realize ocf:lvm2:VolumeGroup comes from the LVM2 package and
>> not resource-agents, but I'm hoping someone on
Hi,
First, I realize ocf:lvm2:VolumeGroup comes from the LVM2 package and
not resource-agents, but I'm hoping someone on this list is familiar
with this RA and can provide some insight.
In my cluster configuration, I'm using ocf:lvm2:VolumeGroup to manage
my LVM VG's, and I'm using the cluster
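For context, such a VG resource is typically defined along these lines; the resource ID, group, and VG name are placeholders, and the parameter name should be checked against the agent's own metadata:

```shell
# Hypothetical example of managing a VG with the lvm2 VolumeGroup agent.
# Verify the parameter name first with:
#   crm_resource --show-metadata ocf:lvm2:VolumeGroup
pcs resource create p_vg_data ocf:lvm2:VolumeGroup volgrpname=vg_data \
    --group g_storage
```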
On 11/8/2016 5:08 PM, Ulrich Windl wrote:
Niu Sibo wrote on 07.11.2016 at 16:59 in
message <5820a4cc.9030...@linux.vnet.ibm.com>:
Hi Ken,
Thanks for the clarification. Now I have another real problem that needs
your advice.
The cluster consists of 5 nodes
On 11/08/2016 11:40 AM, Kostiantyn Ponomarenko wrote:
> Hi,
>
> I need a way to do a manual fail-back on demand.
> To be clear, I don't want it to be ON/OFF; I want it to be more like
> "one shot".
> So far I found that the most reasonable way to do it is to set
> "resource stickiness" to a
Hi,
I need a way to do a manual fail-back on demand.
To be clear, I don't want it to be ON/OFF; I want it to be more like "one
shot".
So far I found that the most reasonable way to do it is to set "resource
stickiness" to a different value, and then set it back to what it was.
To do that I
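The round-trip described above can be sketched with crm_resource; the resource name my_rsc and the original stickiness value 100 are assumptions for illustration:

```shell
# One-shot fail-back sketch: drop the stickiness meta-attribute to 0 so
# the resource migrates back to its preferred node...
crm_resource --resource my_rsc --meta \
    --set-parameter resource-stickiness --parameter-value 0

# ...wait for the cluster to move the resource, then restore the old
# value (100 here is a placeholder for whatever was set before).
crm_resource --resource my_rsc --meta \
    --set-parameter resource-stickiness --parameter-value 100
```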
Ferenc Wágner wrote:
Jan Friesse writes:
Ferenc Wágner wrote:
Have you got any plans/timeline for 2.4.2 yet?
Yep, I'm going to release it in a few minutes/hours.
Man, that was quick. I've got a bunch of typo fixes queued... :) Please
consider announcing
>>> Niu Sibo wrote on 07.11.2016 at 16:59 in
message <5820a4cc.9030...@linux.vnet.ibm.com>:
> Hi Ken,
>
> Thanks for the clarification. Now I have another real problem that needs
> your advice.
>
> The cluster consists of 5 nodes and one of the nodes got a 1
>>> Ken Gaillot wrote on 07.11.2016 at 16:15 in message:
> On 11/07/2016 01:41 AM, Ulrich Windl wrote:
> Ken Gaillot wrote on 04.11.2016 at 22:37 in message
>>