Steven Hale em...@stevenhale.co.uk writes:
There is no mention at all of a --live option on the libvirt Wiki
http://libvirt.org/migration.html
Live or offline migration is irrelevant to the topic of that page. The
documentation of this option should be (but is not) present on
Moullé Alain alain.mou...@bull.net writes:
On 28/07/2014 11:45, Dang Zhiqiang wrote:
I want to modify the op start timeout value through the command line, but
searching the Internet turned up nothing.
I tried the crm_resource command, but it can only modify params and meta.
root@host2:~# crm configure show
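To change an operation timeout from the command line, the whole operation definition has to be edited; crm_resource indeed only handles params and meta attributes. A minimal crmsh sketch (the resource name p_vm and the values are assumptions):

```
# crm configure edit p_vm
# ...then in the editor, adjust the op line, e.g.:
primitive p_vm ocf:pacemaker:Dummy \
    op start timeout=120s interval=0
```

After saving, crmsh validates and applies the changed configuration.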
Tom Parker tpar...@cbnco.com writes:
On 09/17/2013 04:18 AM, Lars Marowsky-Bree wrote:
On 2013-09-16T16:36:38, Tom Parker tpar...@cbnco.com wrote:
It definitely leads to data corruption, and I think it has to do with
locking not working properly on my LVM partitions.
Lars Marowsky-Bree l...@suse.com writes:
The RA thinks the guest is gone, the cluster reacts and schedules it
to be started (perhaps elsewhere); and then the hypervisor starts it
locally again *too*.
I think changing those libvirt settings to destroy could work - the
cluster will then
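The libvirt settings referred to live in the guest's domain XML; with destroy, the hypervisor will not restart the guest behind the cluster's back. The relevant lifecycle elements (standard libvirt, values as suggested above):

```xml
<on_poweroff>destroy</on_poweroff>
<on_reboot>destroy</on_reboot>
<on_crash>destroy</on_crash>
```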
Lars Marowsky-Bree l...@suse.com writes:
On 2013-09-17T11:38:34, Ferenc Wagner wf...@niif.hu wrote:
On the other hand, doesn't the recover action after a monitor failure
consist of a stop action on the original host before the new start, just
to make sure? Or maybe I'm confusing things
Tom Parker tpar...@cbnco.com writes:
I have attached my original crm config with 201 primitives to this e-mail.
Hi,
Sorry to sidetrack this thread, but I really wonder why you only have
order constraints for your Xen resources, without any colocation
constraints. After all, they can only
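For illustration, an order constraint alone only sequences the starts; a colocation constraint is what actually ties the VM to nodes where the clone is running. A crmsh sketch with hypothetical resource names:

```
order o-storage-before-vm1 inf: storage-clone vm1
colocation c-vm1-with-storage inf: vm1 storage-clone
```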
Ferenc Wagner wf...@niif.hu writes:
Arnold Krille arn...@arnoldarts.de writes:
If I understand you correctly, the problem only arises when adding new
bridges while the cluster is running. And your vms will (rightfully)
get restarted when you add a non-running bridge-resource to the
cloned
Andrew Beekhof and...@beekhof.net writes:
On 22/08/2013, at 10:22 PM, Ferenc Wagner wf...@niif.hu wrote:
man cibadmin says: the tagname and all attributes must match in order
for the element to be deleted,
for the element to be deleted --- not the children of the element
to be deleted
Ah
Andrew Beekhof and...@beekhof.net writes:
On 22/08/2013, at 10:08 PM, Ferenc Wagner wf...@niif.hu wrote:
Our setup uses some cluster wide pieces of meta information. Think
access control lists for resource instances used by some utilities or
some common configuration data used
Hi,
Under Pacemaker 1.1.7:
# cibadmin --query --local --scope=resources --no-children
<resources>
  <clone id="storage-clone">
    <group id="storage">
      <primitive class="ocf" id="dlm" provider="pacemaker" type="controld">
        <operations>
          <op id="dlm-monitor-120" interval="120" name="monitor"/>
Arnold Krille arn...@arnoldarts.de writes:
If I understand you correctly, the problem only arises when adding new
bridges while the cluster is running. And your vms will (rightfully)
get restarted when you add a non-running bridge-resource to the
cloned dependency-group.
Exactly.
You might
Lars Marowsky-Bree l...@suse.com writes:
Poisoned resources indeed should just fail to start and that should be
that. What instead can happen is that the resource agent notices it
can't start, reports back to the cluster, and the cluster manager goes
Oh no, I couldn't start the resource
Hi,
Our setup uses some cluster wide pieces of meta information. Think
access control lists for resource instances used by some utilities or
some common configuration data used by the resource agents. Currently
this info is stored in local files on the nodes or replicated in each
primitive as
Hi,
man cibadmin says: the tagname and all attributes must match in order
for the element to be deleted, but experience says otherwise: the
primitive is deleted even if it was created with different attributes
than those provided to the --delete call, cf. 'foo' vs 'bar' in the
example below. Do
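A sketch of the reported behaviour (hypothetical primitive; this needs a running cluster, so it is shown for illustration only):

```sh
# created with description="foo":
cibadmin --create -o resources --xml-text \
  '<primitive id="p1" class="ocf" provider="pacemaker" type="Dummy" description="foo"/>'
# a delete specifying description="bar" still removes the primitive:
cibadmin --delete -o resources --xml-text '<primitive id="p1" description="bar"/>'
```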
Hi,
I built a Pacemaker cluster to manage virtual machines (VMs). Storage
is provided by cLVM volume groups, network access is provided by
software bridges. I wanted to avoid maintaining precise VG and bridge
dependencies, so I created two cloned resource groups:
group storage dlm clvmd vg-vm
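In crmsh the two cloned groups might look like this (the storage group is from the post; the network group members and the clone meta attributes are assumptions):

```
group storage dlm clvmd vg-vm
clone storage-clone storage meta interleave=true
group network br0 br1
clone network-clone network meta interleave=true
```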
Vladislav Bogdanov bub...@hoster-ok.com writes:
22.08.2013 15:08, Ferenc Wagner wrote:
Our setup uses some cluster wide pieces of meta information.
You may use meta attributes of any primitives for that. Although crmsh
does not like that very much, it can be switched to a relaxed mode.
OK
Lars Marowsky-Bree l...@suse.com writes:
On 2013-06-05T15:36:47, Ulrich Windl ulrich.wi...@rz.uni-regensburg.de
wrote:
Also set placement-strategy=utilization then. If you put
utilization big_thing=1 into each fat primitive, your server won't
be overloaded, but groups aren't moved if
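The suggestion corresponds to roughly this crmsh fragment (node and resource names are hypothetical; a node with capacity 1 can then host only one fat resource at a time):

```
property placement-strategy=utilization
node n01 utilization big_thing=1
primitive fat1 ocf:pacemaker:Dummy utilization big_thing=1
```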
Thomas Glanzmann tho...@glanzmann.de writes:
Installing pacemaker's debug symbols would also make the stack trace
more useful.
we tried to install heartbeat-dev to see more, but there are no
debugging symbols available.
You'd probably need the pacemaker-dbg package, which is not present for
Dejan Muhamedagic deja...@fastmail.fm writes:
On Mon, Jun 03, 2013 at 06:19:06PM +0200, Ferenc Wagner wrote:
I've got a script for resource creation, which puts the new resource in
a shadow CIB together with the necessary constraints, runs a simulation
and finally offers to commit
Hi,
I've got a script for resource creation, which puts the new resource in
a shadow CIB together with the necessary constraints, runs a simulation
and finally offers to commit the shadow CIB into the live config (by
invoking an interactive crm). This works well. My concern is that if
somebody
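The workflow described maps onto crmsh's shadow CIB commands, roughly as follows (a sketch; the shadow and resource names are assumptions):

```
crm(live)# cib new test          # create a shadow CIB and switch to it
crm(test)# configure primitive p1 ocf:pacemaker:Dummy
crm(test)# configure ptest       # simulate the resulting transition
crm(test)# cib commit test       # push the shadow into the live config
```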
Andrew Beekhof and...@beekhof.net writes:
On 15/05/2013, at 10:06 PM, Ulrich Windl
ulrich.wi...@rz.uni-regensburg.de wrote:
Is it true that adding/removing a monitor operation in the resource
configuration requires a reload operation in the RA to prevent the
resource from being restarted?
Hi,
I learnt that it's possible to change non-unique parameters of a
resource without restarting it if the agent implements the reload
action. On the other hand, adding a monitor operation does not seem to
have an effect until the resource is restarted. Is there a good reason
for this, or am I
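For reference, reload support is advertised in the RA metadata's actions section; a minimal sketch (timeouts are placeholders):

```xml
<actions>
  <action name="start" timeout="20s"/>
  <action name="stop" timeout="20s"/>
  <action name="reload" timeout="20s"/>
  <action name="monitor" timeout="20s" interval="10s"/>
</actions>
```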
Hi,
If a resource fails to stop, the node hosting it is fenced immediately.
Isn't it possible to move the other resources off the node before it's
fenced? Something like on-fail=standby, but with additional fencing at
the end, so that the failed resource can be restarted elsewhere.
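Stop-failure handling is configured per operation via on-fail; a crmsh sketch with a hypothetical resource, showing the behaviour described (fence is the effective default for a failed stop when STONITH is enabled):

```
primitive p1 ocf:pacemaker:Dummy \
    op stop interval=0 timeout=60s on-fail=fence
```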
--
Thanks,
Andrew Beekhof and...@beekhof.net writes:
On 15/05/2013, at 5:23 PM, Ferenc Wagner wf...@niif.hu wrote:
If a resource fails to stop, the node hosting it is fenced immediately.
Isn't it possible to move the other resources off the node before it's
fenced?
Sometimes, but you're in trouble
Andrew Beekhof and...@beekhof.net writes:
On 15/05/2013, at 5:19 PM, Ferenc Wagner wf...@niif.hu wrote:
I learnt that it's possible to change non-unique parameters of a
resource without restarting it if the agent implements the reload
action. On the other hand, adding a monitor operation
Andrew Beekhof and...@beekhof.net writes:
On 10/05/2013, at 11:37 PM, Ferenc Wagner wf...@niif.hu wrote:
An hour ago one node (n02) of our 4-node cluster started to shutdown.
Someone, probably the init script, sent SIGTERM to pacemakerd.
Hi Andrew,
thanks for the reply! Here I actually
Hi,
An hour ago one node (n02) of our 4-node cluster started to shutdown.
No idea why. But during shutdown, it asked another node (n01) to shut
down as well:
May 10 13:59:42 n02 pacemakerd: [10851]: info: crm_signal_dispatch: Invoking
handler for signal 15: Terminated
May 10 13:59:42 n02
Dejan Muhamedagic deja...@fastmail.fm writes:
On Fri, Apr 19, 2013 at 08:01:04AM -0600, Greg Woods wrote:
Is there a recommended method for taking a cluster out of service
cleanly?
If you don't want your resources to fail over, just stop them, either
one by one (if there are not so many) or
Andreas Mock andreas.m...@web.de writes:
It seems that the IMM device does a soft shutdown despite being
documented differently.
In some cases the power down command is implemented very similarly to
the actual power switch, which must be held for a couple of seconds to
power off the machine.