Ken and Co,
Thanks for the useful information.
I bumped the migrate-to timeout value from 1200ms to 360s, which should be
more than enough time
to successfully migrate the resource (i.e. the KVM guest). The migration
was again interrupted with a timeout
at the 20-second mark, thus
No, it is not a typo... I have tried the backport, but the version is still 1.2.0.
I think the easiest way is to upgrade my system.
Thank you
2017-01-17 9:27 GMT+01:00 Jan Friesse :
> Hi all,
>>
>> I have a two node cluster with the following details:
>> - Ubuntu 10.04.4 LTS (I
On 01/17/2017 10:05 AM, Oscar Segarra wrote:
> Hi,
>
> * It is also possible to configure a monitor to ensure that the resource
> is not running on nodes where it's not supposed to be (a monitor with
> role="Stopped"). You don't have one of these (which is fine, and common).
>
> Can you provide
Hi..
I've been testing live guest migration (LGM) with VirtualDomain resources,
which are guests running on Linux KVM / System Z
managed by pacemaker.
I'm looking for documentation that explains how to configure my
VirtualDomain resources such that they will not timeout
prematurely when there
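For reference, a sketch of how a longer migrate_to timeout can be set on a VirtualDomain resource with pcs (the resource name vm_vdicdb01 and the 360s value are assumptions based on this thread):

```shell
# Assumed resource name; adjust to your own VirtualDomain resource.
# Replaces the migrate_to operation so live migration gets 360s
# before the cluster gives up on it.
pcs resource update vm_vdicdb01 op migrate_to interval=0s timeout=360s
```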
>>> lejeczek wrote on 17.01.2017 at 16:27 in message
:
> hi everyone
>
> asking here as I hope experts here have already done it
> dozens of times.
> I've gotten a nice answer from the Samba authors, but explaining
> the idea
Hi,
* It is also possible to configure a monitor to ensure that the resource
is not running on nodes where it's not supposed to be (a monitor with
role="Stopped"). You don't have one of these (which is fine, and common).
Can you provide more information/documentation about role="Stopped"
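As a hedged illustration of such a monitor (the resource name vdicdb01 and the intervals are assumptions): a second monitor operation with role="Stopped" makes the cluster verify the resource is really down on nodes where it is not supposed to run. It must use a different interval than the regular monitor:

```shell
# Additional monitor with role=Stopped; the interval must differ
# from the interval of the normal (role=Started) monitor op.
pcs resource op add vdicdb01 monitor interval=45s role=Stopped timeout=30s
```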
And,
Hi Ken,
Somehow I've managed to make it work.
I had to use "(crm_resource -L |grep "Resource Group:" |awk '{print
$NF}')" and then loop through the resources...
But in the case of a stopped RG, a message like this is not so clear:
"resource rgcl0111 is NOT running"
On 2017-01-16
On 01/17/2017 08:52 AM, Ulrich Windl wrote:
Oscar Segarra wrote on 17.01.2017 at 10:15 in
> message
> :
>> Hi,
>>
>> Yes, I will try to explain myself better.
>>
>> *Initially*
>> On node1
hi everyone
asking here as I hope experts here have already done it
dozens of times.
I've gotten a nice answer from the Samba authors, but it explained
the idea only in general; here I hope someone could actually
explain how this (HA cluster) should be configured and set up.
I asked the following question:
Hi,
if you have a cluster between VMs, please note that it may be problematic to
use multicast (which is the default after setting up the cluster with pcs
cluster setup); use unicast instead. That is what I ran into initially.
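For reference, the corosync.conf fragment that switches to unicast looks roughly like this (the node addresses are placeholders, and details vary by corosync version):

```
totem {
    version: 2
    # udpu = UDP unicast instead of the default multicast transport
    transport: udpu
}
nodelist {
    node {
        ring0_addr: node1.example.com
    }
    node {
        ring0_addr: node2.example.com
    }
}
```

With pcs this can also be requested at creation time, e.g. `pcs cluster setup --name mycluster node1 node2 --transport udpu`.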
Regards,
Daniel
--
Daniel Souvignier
IT Center
Gruppe:
>>> Oscar Segarra wrote on 17.01.2017 at 10:15 in
message
:
> Hi,
>
> Yes, I will try to explain myself better.
>
> *Initially*
> On node1 (vdicnode01-priv)
>>virsh list
> ==
> vdicdb01
Hi,
I attach cluster configuration:
Note that "migration_network_suffix=tcp://" is correct in my environment as
I have edited the VirtualDomain resource agent in order to build the
correct url "tcp://vdicnode01-priv"
mk_migrateuri() {
local target_node
local migrate_target
show your cluster configuration.
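For anyone following along, the configuration can be dumped like this (both are standard pcs commands):

```shell
# Human-readable summary of resources, constraints, and properties
pcs config
# Raw CIB XML, if the full detail is needed
pcs cluster cib
```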
2017-01-17 10:15 GMT+01:00 Oscar Segarra :
> Hi,
>
> Yes, I will try to explain myself better.
>
> Initially
> On node1 (vdicnode01-priv)
>>virsh list
> ==
> vdicdb01 started
>
> On node2 (vdicnode02-priv)
>>virsh list
>
Hi,
Yes, I will try to explain myself better.
*Initially*
On node1 (vdicnode01-priv)
>virsh list
==
vdicdb01 started
On node2 (vdicnode02-priv)
>virsh list
==
vdicdb02 started
--> Now, I execute the migrate command (outside the cluster <-- not using
pcs resource
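For context, a live migration issued directly through libvirt (bypassing the cluster) typically looks like the following; the guest name and destination URI are assumptions based on this thread:

```shell
# Live-migrate guest vdicdb01 to vdicnode02-priv over plain TCP,
# matching the tcp:// migration_network_suffix mentioned earlier.
virsh migrate --live vdicdb01 qemu+tcp://vdicnode02-priv/system
```

Note that migrating a cluster-managed guest this way, rather than via `pcs resource move`, will leave pacemaker unaware of the move and likely trigger recovery actions.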
Sorry,
but do you mean that you migrated the VM outside of the cluster? To a
server outside of your cluster?
2017-01-17 9:27 GMT+01:00 Oscar Segarra :
> Hi,
>
> I have configured a two-node cluster where we run 4 KVM guests.
>
> The hosts are:
> vdicnode01
>
Hi,
I have configured a two-node cluster where we run 4 KVM guests.
The hosts are:
vdicnode01
vdicnode02
And I have configured a dedicated network card for cluster management. I have
created the required entries in /etc/hosts:
vdicnode01-priv
vdicnode02-priv
The four guests have collocation rules in
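A sketch of the corresponding /etc/hosts entries (the private addresses are invented for illustration; only the -priv hostnames appear in the thread):

```
# dedicated cluster-management network (addresses assumed)
192.168.100.1   vdicnode01-priv
192.168.100.2   vdicnode02-priv
```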
Dear Pacemaker Users,
I would like to refresh the question of order constraints using resource
sets with require-all="false" option and the way they work.
The scenario is (tried with pacemaker v 1.1.11 and 1.1.14):
- create the following order constraint:
$ pcs constraint order set dummyRes1 set
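A complete form of such a constraint might look like the following (the resource names beyond dummyRes1 are assumptions, since the original command is truncated):

```shell
# dummyRes1 and dummyRes2 may start in any order (sequential=false),
# and not all of them need to be running (require-all=false)
# before dummyRes3 is started.
pcs constraint order set dummyRes1 dummyRes2 sequential=false require-all=false set dummyRes3
```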
On 16.1.2017 at 18:18, Gerhard Wiesinger wrote:
Hello Ken,
thank you for the answers.
On 16.01.2017 16:43, Ken Gaillot wrote:
On 01/16/2017 08:56 AM, Gerhard Wiesinger wrote:
Hello,
I'm new to corosync and pacemaker and I want to set up an nginx cluster
with quorum.
Requirements:
- 3