Michal wrote:
Hi,
When I try to start mysql with config:
primitive drbd1 ocf:heartbeat:drbd \
params drbd_resource=db \
op monitor role=Master interval=59s timeout=30s \
op monitor role=Slave interval=60s timeout=30s
ms ms-drbd1 drbd1 \
meta clone-max=2 master-max=1 master-node-max=1
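To make mysql actually run on the DRBD master, a colocation and an order constraint are normally added as well. A minimal sketch in crm shell syntax, assuming the mysql primitive is named `mysql` (that name is not in the original post):

```
colocation mysql-on-drbd inf: mysql ms-drbd1:Master
order mysql-after-drbd inf: ms-drbd1:promote mysql:start
```

The colocation pins mysql to whichever node holds the Master role; the order makes sure the promote completes before mysql starts.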
Dominik Klein wrote:
Michal wrote:
Hi,
When I try to start mysql with config:
primitive drbd1 ocf:heartbeat:drbd \
params drbd_resource=db \
op monitor role=Master interval=59s timeout=30s \
op monitor role=Slave interval=60s timeout=30s
ms ms-drbd1 drbd1 \
meta clone-max=2 master-max=1
Hi,
I wrote a quick HOWTO on how to compile and install a cluster from
- corosync
- the remains of the heartbeat project: cluster-glue and agents
- pacemaker
on a debian lenny system. See: http://www.clusterlabs.org/wiki/DebianCorosync
I hope it helps with the further development of packages.
Greetings,
Hi,
On Mon, Aug 17, 2009 at 07:38:39AM +0530, Abhin GS wrote:
Hello,
Node1 was choked due to a big messages file. We have fixed that
problem on node1, then we ran an update for SLES11; all required
patches were installed properly (service openais stop was done before
patching). We
On Mon, Aug 17, 2009, 9:05 am, Dominik Klein wrote:
Dominik Klein wrote:
Reading more carefully (it's Monday morning ...): You already have the
IP and mysql in a group. That makes the resources run on the same node and
start in the order you specified in the group statement.
So drop that
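In other words, a group already implies both colocation and ordering, so separate constraints for its members are redundant. A hedged sketch (resource names are assumptions):

```
group mysql-group ip1 mysql
# This alone implies:
#   colocation inf: mysql ip1      (run on the same node)
#   order: ip1 then mysql          (start in listed order, stop in reverse)
```

Any extra `colocation`/`order` statements over `ip1` and `mysql` can be dropped.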
On Fri, Aug 14, 2009 at 11:18 PM, Dan Urist <dur...@ucar.edu> wrote:
I have a 2-node cluster which runs several sets of vservers. Each set of
vservers is dependent on a different drbd volume. I would like to use
the MailTo agent to notify me when a resource fails over (either by
colocating an
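One hedged way to get notified is to colocate a MailTo resource with the resource set, so the mail agent "moves" with it and fires start/stop mails on failover. A sketch in crm syntax; the resource names and address are assumptions, not from the original post:

```
primitive notify ocf:heartbeat:MailTo \
    params email="admin@example.com" subject="vserver failover"
colocation notify-with-vservers inf: notify vserver-group1
```

MailTo sends mail on its own start and stop, which coincide with the failover of whatever it is colocated with.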
I have the following definition for an LVM volume, extracted from
cibadmin -Q:
<!-- LVM volume vsvg1 -->
<primitive class="ocf" id="vsvg1" provider="heartbeat" type="LVM">
  <instance_attributes id="vsvg1-instance_attributes">
    <nvpair id="vsvg1-instance_attributes-volgrpname"
I was able to delete the resource through the crm shell, so that's at
least a workaround for me.
Dan Urist wrote:
I have the following definition for an LVM volume, extracted from
cibadmin -Q:
<!-- LVM volume vsvg1 -->
<primitive class="ocf" id="vsvg1" provider="heartbeat" type="LVM">
On Mon, Aug 17, 2009 at 7:36 PM, Dan Urist <dur...@ucar.edu> wrote:
I have the following definition for an LVM volume, extracted from
cibadmin -Q:
<!-- LVM volume vsvg1 -->
<primitive class="ocf" id="vsvg1" provider="heartbeat" type="LVM">
  <instance_attributes
Hi,
I have a very simple Master/Slave setup on RHEL 5.3 with pacemaker 1.0.4 and heartbeat
2.99. I unplugged the power cable on the master machine, and I expected the
slave to become master. But the slave stays in the slave state. Is this correct
behavior or a bug? How does Pacemaker (or heartbeat) handle this
Andrew Beekhof wrote:
You were right.
There was a Conflicts: heartbeat directive left over from the
initial submission to Fedora.
I've fixed it now, thanks for the feedback and sorry for the
inconvenience.
On Fri, Aug 14, 2009 at 6:17 PM, Eric Heydrick <eric...@hq.speakeasy.net> wrote:
Hi,
On Mon, Aug 17, 2009 at 01:46:14PM -0600, Dan Urist wrote:
Andrew Beekhof wrote:
Possibly, but not in this case.
I'd guess you had some constraints that reference the vsvg1 resource.
Deleting the resource would therefore leave dangling dependencies,
which is not allowed.
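To illustrate, such a dangling reference might look like the following in crm syntax; the constraint ids and the DRBD resource name are assumptions for the example. The constraints have to be removed before (or together with) the primitive:

```
colocation vsvg1-with-drbd inf: vsvg1 ms-drbd0:Master
order vsvg1-after-drbd inf: ms-drbd0:promote vsvg1:start
# delete the constraints first, then the primitive, e.g.:
#   crm configure delete vsvg1-with-drbd vsvg1-after-drbd
#   crm configure delete vsvg1
```

The crm shell handles this ordering for you, which is presumably why deleting through it worked where a raw cibadmin delete did not.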
Hi,
On Mon, Aug 17, 2009 at 03:08:06PM -0600, hj lee wrote:
Hi,
I have a very simple Master/Slave setup on RHEL 5.3 with pacemaker 1.0.4 and heartbeat
2.99. I unplugged the power cable on the master machine, and I expected the
slave to become master. But the slave stays in the slave state. Is this correct
Hi,
I defined stonith:ssh; it is running on both machines as a clone. How is
stonith related to promoting the standby? When Pacemaker detects that the master
node is gone from the cluster, why doesn't it promote the standby?
Thanks very much
On Mon, Aug 17, 2009 at 4:00 PM, Dejan
Hi,
On Mon, Aug 17, 2009 at 04:18:34PM -0600, hj lee wrote:
Hi,
I defined stonith:ssh; it is running on both machines as a clone. How is
How do you expect ssh to work if you pull the power plug from the
target node?
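The ssh stonith plugin needs the target to be up and reachable, so it can never confirm a fence of a node that has lost power; promotion then blocks waiting for fencing to succeed. A power-independent fencing device is needed instead. A hedged sketch using the external/ipmi plugin (all parameter values here are placeholders, not from the thread):

```
primitive st-node2 stonith:external/ipmi \
    params hostname=node2 ipaddr=10.0.0.2 userid=admin passwd=secret \
    interface=lan
location st-node2-placement st-node2 -inf: node2
```

The location rule keeps the stonith resource off the node it is meant to fence. Note that management-processor fencing (IPMI/iLO) still fails if the target loses all power, as the later message in this thread shows; a switched PDU avoids that limitation.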
the stonith related to promoting standby? When the Pacemaker detects
Good Morning Dejan,
Thank you for your input; we will increase the timeout to 60 sec. We
switched the other machine off (we did pull the plug). The fencing was
tied to the iLO, and since there was no power supply to node2, the HP
iLO2 did not function, so node1 could not reach it.
Pull