Hi Andreas!
On 15.05.2013 22:55, Andreas Kurz wrote:
On 2013-05-15 15:34, Klaus Darilion wrote:
On 15.05.2013 14:51, Digimer wrote:
On 05/15/2013 08:37 AM, Klaus Darilion wrote:
primitive st-pace1 stonith:external/xen0 \
params hostlist=pace1 dom0=xentest1 \
op start start-delay=15s interval=0
Hi!
I have a 2-node cluster: a simple test setup with an
ocf:heartbeat:IPaddr2 resource, using Xen VMs and stonith:external/xen0.
Please see the complete config below.
Basically everything works fine, except in the case of broken corosync
communication between the nodes (simulated by
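A common way to simulate such a communication break (purely an illustration, not necessarily what was used here) is to drop corosync traffic with iptables, assuming the default ports:

# drop corosync traffic in both directions (5404/5405 are the defaults)
iptables -A INPUT  -p udp --dport 5404:5405 -j DROP
iptables -A OUTPUT -p udp --dport 5404:5405 -j DROP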
On 15.05.2013 14:51, Digimer wrote:
On 05/15/2013 08:37 AM, Klaus Darilion wrote:
primitive st-pace1 stonith:external/xen0 \
params hostlist=pace1 dom0=xentest1 \
op start start-delay=15s interval=0
Try;
primitive st-pace1 stonith:external/xen0 \
params hostlist
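For reference, a full stonith primitive of this shape looks as follows; the parameters are the ones from the original post, while the monitor operation is an assumed example:

primitive st-pace1 stonith:external/xen0 \
    params hostlist=pace1 dom0=xentest1 \
    op monitor interval=60m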
Just for the record: I had forgotten to set up an order constraint to
start the filesystem after the promotion of the master.
order drbd_before_grp_database inf: ms_drbd0:promote grp_database:start
regards
Klaus
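For setups like this, the order constraint is typically paired with a colocation constraint, so that the filesystem group also runs on the node where DRBD is master; a sketch using the same resource names:

colocation grp_database_on_drbd_master inf: grp_database ms_drbd0:Master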
On 09.06.2011 01:05, Anton Altaparmakov wrote:
Hi Klaus,
On 8 Jun 2011, at 22:21, Klaus Darilion wrote:
Hi!
Currently I have a 2-node cluster and I want to add a 3rd node to use
quorum to avoid split brain.
The service (DRBD+DB) should only run on either node1 or node2. Node3
can
Hi!
Currently I have a 2-node cluster and I want to add a 3rd node to use
quorum to avoid split brain.
The service (DRBD+DB) should only run on either node1 or node2. Node3
cannot provide the service; it should just help the other nodes to
find out if their network is broken or the other
Klaus Darilion wrote:
Hi!
Currently I have a 2-node cluster and I want to add a 3rd node to use
quorum to avoid split brain.
The service (DRBD+DB) should only run on either node1 or node2. Node3
cannot provide the service; it should just help the other nodes to
find out if their network
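A common way to express such a quorum-only node (a sketch; grp_database stands in for whatever resources make up the DRBD+DB service) is a -INFINITY location constraint, so node3 contributes a quorum vote but never runs the service:

location loc_no_service_on_node3 grp_database -inf: node3
property no-quorum-policy=stop

Alternatively, node3 can simply be kept in standby, which also prevents it from running resources while still counting for quorum.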
On 14.03.2011 12:49, Pavel Levshin wrote:
On 14.03.2011 12:27, Klaus Darilion wrote:
2. before adding the IP address, it will delete the IP address if the
address is already configured (on any interface, with any netmask).
Thus
the add will always work.
This particular part is not good
Hi!
For maintenance reasons (e.g. updating pacemaker) it might be necessary
to shut down pacemaker, but in such cases I want the services to
keep running.
Is it possible to shut down pacemaker but keep the current service
state, i.e. with all services continuing to run on their current node?
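The usual answer (a sketch; exact behaviour depends on the pacemaker version) is to switch the cluster into maintenance mode, which makes all resources unmanaged, and then stop pacemaker; the services themselves are left running:

crm configure property maintenance-mode=true
# resources are now unmanaged; stopping pacemaker leaves them running
/etc/init.d/pacemaker stop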
Hi!
I wonder what a proper value for dampen would be. Dampen is documented as:
# attrd_updater --help|grep dampen
-d, --delay=value The time to wait (dampening) in seconds for further
changes to occur
So I would read this as the delay before changes are forwarded, e.g. to
avoid triggering fail-over on the
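In practice the dampening is usually configured on the connectivity resource itself; a minimal sketch with assumed values (the host_list address is a placeholder):

primitive p_ping ocf:pacemaker:ping \
    params host_list=192.0.2.1 dampen=10s multiplier=1000 \
    op monitor interval=15s

With dampen=10s, attribute changes are held back for 10 seconds, so a single lost ping does not immediately trigger a fail-over.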
Hi!
Are there any Debian packages of the 1.1 branch available somewhere? If
not, are there instructions on how to build them? I tried but failed.
regards
Klaus
Hi!
Instead of adding a virtual IP address to an interface
(ocf:heartbeat:IPaddr2), how do I manage a physical interface? Are there
any special resource scripts?
Thanks
Klaus
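One option for tying cluster decisions to a physical interface, assuming a resource-agents version that ships it, is ocf:heartbeat:ethmonitor, which monitors the link and records its state in a node attribute; a sketch:

primitive p_ethmon ocf:heartbeat:ethmonitor \
    params interface=eth0 \
    op monitor interval=10s
clone cl_ethmon p_ethmon

Location rules can then reference the resulting node attribute, much like the pingd pattern.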
Hi!
I still suffer from the problem that the fail-count is not cleared after
failure-timeout. After the second failure of Kamailio the IP address is
moved to the other node, and restarted on the previous node after
failure-timeout.
But as the fail-count is not cleared, subsequent failures will cause
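For the record, the relevant knobs are the migration-threshold and failure-timeout meta attributes, and the fail-count can also be cleared by hand; a sketch with assumed names and values:

primitive failover-ip ocf:heartbeat:IPaddr2 \
    params ip=192.0.2.10 \
    op monitor interval=10s \
    meta migration-threshold=2 failure-timeout=120s

# clear the fail-count manually (resource and node names are examples):
crm resource failcount failover-ip delete node1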
Hi!
I'm using pacemaker on Lenny with packages from lenny-backports and am
missing the man pages. Is there a certain package I need to install to
get the man pages, or is it just a deficiency of the packages?
regards
Klaus
On 11.02.2011 16:13, Raoul Bhatia [IPAX] wrote:
On 02/11/2011 03:07 PM, Klaus Darilion wrote:
...
Or, how should pacemaker behave if Kamailio on the active node crashes?
Shall it just restart Kamailio, or shall it migrate the IP address to the
other node and then try to restart Kamailio
On 14.02.2011 14:45, Raoul Bhatia [IPAX] wrote:
On 02/14/2011 02:37 PM, Klaus Darilion wrote:
ps. i'd very much love to see an ocf compatible ra instead of the lsb
script ;)
But if the LSB script is conformant, will I get better results? I will
replace the lsb with an ocf resource when
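A quick way to check whether an init script is sufficiently LSB-conformant for pacemaker (the test sequence from the cluster documentation; kamailio stands in for the script in question):

/etc/init.d/kamailio start  ; echo $?   # must be 0
/etc/init.d/kamailio status ; echo $?   # must be 0 while running
/etc/init.d/kamailio start  ; echo $?   # start while started: still 0
/etc/init.d/kamailio stop   ; echo $?   # must be 0
/etc/init.d/kamailio status ; echo $?   # must be 3 while stopped
/etc/init.d/kamailio stop   ; echo $?   # stop while stopped: still 0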
On 14.02.2011 14:45, Raoul Bhatia [IPAX] wrote:
On 02/14/2011 02:37 PM, Klaus Darilion wrote:
Somehow pacemaker does not react as I would expect. My config is:
primitive failover-ip ocf:heartbeat:IPaddr \
params ip=83.136.32.161 \
op monitor interval=3s
primitive
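If the intent is to move the IP to the other node after a monitor failure, the usual approach is meta attributes on the primitive; a sketch reusing the posted config (threshold and timeout values are assumed examples):

primitive failover-ip ocf:heartbeat:IPaddr \
    params ip=83.136.32.161 \
    op monitor interval=3s \
    meta migration-threshold=1 failure-timeout=60s

With migration-threshold=1 a single failure pushes the resource away; failure-timeout controls when the node becomes eligible again.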
On 14.02.2011 16:43, ruslan usifov wrote:
I have two internet providers, one of them preferred (the main provider).
When the main provider is down, the second provider should take over so
that internet access still works. I see this roughly as follows:
define 2 primitives through ocf:pacemaker:pingd,
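A sketch of that idea, with two ping resources writing differently named attributes (addresses and identifiers are placeholders; ocf:pacemaker:ping is the newer replacement for pingd):

primitive p_ping_isp1 ocf:pacemaker:ping \
    params host_list=198.51.100.1 name=ping_isp1 multiplier=1000 \
    op monitor interval=15s
primitive p_ping_isp2 ocf:pacemaker:ping \
    params host_list=203.0.113.1 name=ping_isp2 multiplier=1000 \
    op monitor interval=15s
clone cl_ping_isp1 p_ping_isp1
clone cl_ping_isp2 p_ping_isp2

Location rules on the uplink resources can then compare ping_isp1 and ping_isp2 to prefer the main provider while it is reachable.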
Florian Haas wrote:
On 02/11/2011 07:58 PM, paul harford wrote:
Hi Florian
I had seen apache 2 in one of the pacemaker mails; it may have been a
typo but I just wanted to check. Thanks for your help
Welcome. And I noticed I left out a 2 in the dumbest of places in my
original reply, but I
On 11.02.2011 11:27, Raoul Bhatia [IPAX] wrote:
hi,
On 02/09/2011 03:04 PM, Klaus Darilion wrote:
...
server1            server2
  ip1                ip2
    ----virtual-IP----
...
Kamailio should always be running on both
Hi Raoul!
On 11.02.2011 16:13, Raoul Bhatia [IPAX] wrote:
On 02/11/2011 03:07 PM, Klaus Darilion wrote:
...
Is there some protection in pacemaker against endlessly trying to restart
such a broken service?
Or, how should pacemaker behave if Kamailio on the active node crashes.
Shall it just
3. Now server1, hosting the virtual-IP, lost connectivity to the ping
target (I inserted a firewall rule), but the virtual-IP stayed with server1.
Now I put server2 online again: # crm node online server2.
That means server2 is online and has ping connectivity, server1 is
online and doesn't
On 08.02.2011 19:17, Michael Schwartzkopff wrote:
Then I changed node server2 to standby: # crm node standby server2.
Node server2: standby
Online: [ server1 ]
failover-ip (ocf::heartbeat:IPaddr): Started server1
Clone Set: clonePing
Started: [
On 08.02.2011 18:20, Florian Haas wrote:
On 02/08/2011 06:03 PM, Klaus Darilion wrote:
Hi!
I'm a newbie and have a problem with a simple virtual-IP config. I want
the virtual-IP to be either on server1 or server2, depending on which of
the servers has network connectivity (ping
On 08.02.2011 19:17, Michael Schwartzkopff wrote:
On Tuesday 08 February 2011 18:03:31 Klaus Darilion wrote:
...
3. Now server1, hosting the virtual-IP, lost connectivity to the ping
target (I inserted a firewall rule), but the virtual-IP stayed with server1.
Now I put server2 online
On 09.02.2011 09:48, Florian Haas wrote:
On 2011-02-09 09:25, Klaus Darilion wrote:
On 08.02.2011 19:17, Michael Schwartzkopff wrote:
Then I changed node server2 to standby: # crm node standby server2.
Node server2: standby
Online: [ server1 ]
failover-ip (ocf
Forgot to mention that armani=server1 and bulgari=server2 (showing some
respect to fashion brands :-)
On 09.02.2011 10:16, Klaus Darilion wrote:
On 08.02.2011 19:17, Michael Schwartzkopff wrote:
On Tuesday 08 February 2011 18:03:31 Klaus Darilion wrote:
...
3. Now server1, hosting
On 08.02.2011 18:20, Florian Haas wrote:
On 02/08/2011 06:03 PM, Klaus Darilion wrote:
Now I put server2 online again: # crm node online server2.
That means server2 is online and has ping connectivity, server1 is
online and doesn't have ping connectivity. But the virtual-IP stayed
Hi!
I wonder if someone could give me some ideas on how to achieve automatic
failover between my redundant load balancers. The setup I want to
implement is:
server1            server2
  ip1                ip2
    ----virtual-IP----
The
Hi!
I'm a newbie and have a problem with a simple virtual-IP config. I want
the virtual-IP to be either on server1 or server2, depending on which of
the servers has network connectivity (ping) to the outside. My
config is:
node server1 \
attributes standby=off
node server2 \
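What a config like this still needs, so that the IP actually follows ping connectivity, is a cloned ping resource plus a location rule on its attribute; a minimal sketch (the ping target is a placeholder, pingd is the attribute name the agent uses by default):

primitive p_ping ocf:pacemaker:ping \
    params host_list=192.0.2.1 multiplier=1000 \
    op monitor interval=15s
clone cl_ping p_ping
location loc_ip_on_connected_node failover-ip \
    rule -inf: not_defined pingd or pingd lte 0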