Hi
2011/6/9 Lars Ellenberg:
> On Wed, Jun 08, 2011 at 05:04:42PM +0200, Lars Ellenberg wrote:
>> On Wed, Jun 08, 2011 at 05:59:24PM +0900, Takatoshi MATSUO wrote:
>> > Hello
>> >
>> > I am writing Master/Slave resource agent.
>> > I want to use "$OCF_RESKEY_CRM_meta_notify_master_uname" in monitor
Hi Klaus,
On 8 Jun 2011, at 22:21, Klaus Darilion wrote:
> Hi!
>
> Currently I have a 2 node cluster and I want to add a 3rd node to use
> quorum to avoid split brain.
>
> The service (DRBD+DB) should only run either on node1 or node2. Node3
> can not provide the service - it should just help the other nodes to
Klaus Darilion wrote:
Hi!
Currently I have a 2 node cluster and I want to add a 3rd node to use
quorum to avoid split brain.
The service (DRBD+DB) should only run either on node1 or node2. Node3
can not provide the service - it should just help the other nodes to
find out if their network is broken or the other node's
Hi!
Currently I have a 2 node cluster and I want to add a 3rd node to use
quorum to avoid split brain.
The service (DRBD+DB) should only run either on node1 or node2. Node3
can not provide the service - it should just help the other nodes to
find out if their network is broken or the other node's
Hi,
On Mon, Jun 06, 2011 at 05:23:34AM +, Guido Schmidt wrote:
> Hi to all,
> please can someone explain to me why resource OCFS2 in my configuration
> stopped
> on node#2 by pacemakers-gui will also after a while stop upon node#1.
Can you please rephrase. I'm not sure I understand what hap
Hi,
On Wed, Jun 08, 2011 at 10:50:23AM +0200, Pawel Warowny wrote:
> Hi
>
> I need to migrate my cluster from Debian Squeeze to Redhat 6.0
> There were 2 nodes, so I migrated all resources to node_1 and on the
> second node I installed redhat and configured cluster
> The first problem - after sta
I'm not quite sure about your requirements, because it sounds like you want to
be able to perform a manual check of a host after a failure AND have automatic
failback. But perhaps you could remove the resource stickiness and look at
setting migration-threshold to 1 and failure-timeout to 3600:
http:
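A minimal crm shell sketch of that suggestion; the resource name my_db and the agent ocf:heartbeat:mysql are placeholders, not from the thread:

```shell
# Drop stickiness so the resource is free to move back.
crm configure rsc_defaults resource-stickiness=0
# Fail over after a single failure; forget the failure
# (allowing automatic failback) after 3600 seconds.
crm configure primitive my_db ocf:heartbeat:mysql \
    meta migration-threshold=1 failure-timeout=3600s
```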
Great! I will try it later, thanks!
Sent from my iPad
On 2011-6-8, at 19:40, Dan Frincu wrote:
> Hi,
>
> 2011/6/8 飞爱曦
>> How to use the crm command to modify the heartbeat / corosync the interval /
>> timeout value?
>> For example:
>> primitive Filesystem_3 ocf: heartbeat: Filesystem \
>> op monitor in
I believe it is preferable to bring failed nodes up manually. You'll want to
investigate why the node failed exactly, resolve the issue if any, then bring
the node up manually. Automatically bringing the node back up when it may be
facing some random issue is ill-advised.
--Daniel
On Jun 8, 20
On Wed, Jun 08, 2011 at 05:04:42PM +0200, Lars Ellenberg wrote:
> On Wed, Jun 08, 2011 at 05:59:24PM +0900, Takatoshi MATSUO wrote:
> > Hello
> >
> > I am writing Master/Slave resource agent.
> > I want to use "$OCF_RESKEY_CRM_meta_notify_master_uname" in monitor
> > to know where master is, but I
Hello,
I've configured heartbeat and pacemaker with 2 nodes. All resources work
well, and resources restart when one of them goes down; when a host is down,
pacemaker moves all resources to the other one. My configuration:
node $id="06d57c5a-3d47-4ef1-b518-7b8501f5ca9d" premailman1.mpt.es
node $id=
On Wed, Jun 08, 2011 at 05:59:24PM +0900, Takatoshi MATSUO wrote:
> Hello
>
> I am writing Master/Slave resource agent.
> I want to use "$OCF_RESKEY_CRM_meta_notify_master_uname" in monitor
> to know where master is, but It's empty.
> Meanwhile It's not empty in start.
>
> Is this specification ?
About two months ago, while dealing with a bug report from a paying customer,
I fixed some long-standing bugs in the heartbeat communication layer
that caused heartbeat to segfault, among other bad behaviour.
These bugs were triggered by "misbehaving" API clients,
and by massive packet loss on the
Dejan Muhamedagic writes:
> lsb:dirsrv doesn't understand master/slave. That's OK, none of
> LSB agents do. You can only try to use clones (clone ldap-clone
> ldap ...).
That worked perfectly. I was getting master/slave and basic
clone stuff mixed up. Thanks!
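A sketch of the clone setup Dejan suggested; the primitive name ldap and the meta attributes are assumptions, not from the thread:

```shell
crm configure primitive ldap lsb:dirsrv
# Run one copy of the LSB service on each of the two nodes.
crm configure clone ldap-clone ldap \
    meta clone-max=2 clone-node-max=1
```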
I want the slave role to be launched only on drbd3 (the stacked DRBD resource),
so disabling the resource on drbd3 entirely is not a good solution for me.
2011/6/8 Dominik Klein
> Try without role. If the resource must not run on the node at all, then
> the role does not matter. Maybe there's a bug with role="slave"?
Hello,
I ran pacemaker/heartbeat on many Red Hat EL5 servers, but I have a
customer with EL4, and when I try to install pacemaker/heartbeat on that
Red Hat EL4, it fails because of an rpm dependency: libnssutil3.so is
needed by corosynclib-1.2.1-1.el4.i386.
Do you know how to get the prerequisite
Hi,
2011/6/8 飞爱曦
> How to use the crm command to modify the heartbeat / corosync the interval
> / timeout value?
> For example:
> primitive Filesystem_3 ocf: heartbeat: Filesystem \
> op monitor interval = "120s" timeout = "60s" \
> params device = "-U56c48cba-c365-40fc-8895-d859167
How to use the crm command to modify the heartbeat / corosync the interval /
timeout value?
For example:
primitive Filesystem_3 ocf: heartbeat: Filesystem \
op monitor interval = "120s" timeout = "60s" \
params device = "-U56c48cba-c365-40fc-8895-d85916755f28" directory = "/
d7" fstype
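For what it's worth, the crm shell is picky about whitespace here: no spaces after "ocf:" and none around "=". A corrected sketch of the primitive above; the fstype value was cut off in the original, so "ext3" is a placeholder:

```shell
crm configure primitive Filesystem_3 ocf:heartbeat:Filesystem \
    params device="-U56c48cba-c365-40fc-8895-d85916755f28" \
        directory="/d7" fstype="ext3" \
    op monitor interval="120s" timeout="60s"
```

To change the interval/timeout of an existing resource, "crm configure edit Filesystem_3" opens the definition in an editor.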
On Wed, Jun 08, 2011 at 12:22:44PM +0400, ruslan usifov wrote:
> 2011/6/8 Dejan Muhamedagic
>
> > On Tue, Jun 07, 2011 at 11:19:25AM -0600, Serge Dubrouski wrote:
> > > > On Tue, Jun 7, 2011 at 9:55 AM, Dejan Muhamedagic wrote:
> > >
> > > > On Tue, Jun 07, 2011 at 09:47:17AM -0600, Serge Dubro
Try without role. If the resource must not run on the node at all, then
the role does not matter. Maybe there's a bug with role="slave"?
On 06/08/2011 10:56 AM, ruslan usifov wrote:
> i try follow:
>
> location ms_drbd_web-U_slave_on_drbd3 ms_drbd_web-U \
> rule role="slave" -inf: #uname
Hello
I am writing Master/Slave resource agent.
I want to use "$OCF_RESKEY_CRM_meta_notify_master_uname" in monitor
to know where the master is, but it's empty.
Meanwhile, it's not empty in start.
Is this by design?
Regards,
Takatoshi MATSUO
I tried the following:
location ms_drbd_web-U_slave_on_drbd3 ms_drbd_web-U \
rule role="slave" -inf: #uname ne drbd3
The result is identical; Pacemaker tries to launch the slave role on the other nodes :-(((
2011/6/8 Dominik Klein
> >> but when i shutdown drbd3 host Pacemaker try start slave role on
> >> other
Hi
I need to migrate my cluster from Debian Squeeze to Red Hat 6.0.
There were 2 nodes, so I migrated all resources to node_1, and on the
second node I installed Red Hat and configured the cluster.
The first problem: after starting the new Red Hat node_2, node_1 with
Debian dies with these logs:
Jun 07 14:30:5
>> but when i shutdown drbd3 host Pacemaker try start slave role on
>> other host. How can i prevent this behavior?
>
> try
> s/inf/-inf
> s/eq/neq
"ne" actually, sorry
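Putting both substitutions together, the corrected constraint would look roughly like this; note that some crm shell versions want "$role" rather than "role", and the role name capitalized as "Slave", so treat this as an unverified sketch:

```shell
crm configure location ms_drbd_web-U_slave_on_drbd3 ms_drbd_web-U \
    rule $role="Slave" -inf: #uname ne drbd3
```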
___
Pacemaker mailing list: Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.or
On 06/08/2011 10:39 AM, ruslan usifov wrote:
> Hello
>
> I have follow constraint:
>
> location ms_drbd_web-U_slave_on_drbd3 ms_drbd_web-U \
> rule role="slave" inf: #uname eq drbd3
>
>
> Which as i think it prevents slave role from launch on all hosts except
> drbd3,
nope
it says "pu
Hello
I have follow constraint:
location ms_drbd_web-U_slave_on_drbd3 ms_drbd_web-U \
rule role="slave" inf: #uname eq drbd3
This, as I think, prevents the slave role from launching on all hosts except
drbd3, but when I shut down the drbd3 host, Pacemaker tries to start the slave
role on another host. How can I prevent this behavior?
2011/6/8 Dejan Muhamedagic
> On Tue, Jun 07, 2011 at 11:19:25AM -0600, Serge Dubrouski wrote:
> > On Tue, Jun 7, 2011 at 9:55 AM, Dejan Muhamedagic wrote:
> >
> > > On Tue, Jun 07, 2011 at 09:47:17AM -0600, Serge Dubrouski wrote:
> > > > On Tue, Jun 7, 2011 at 9:39 AM, Dejan Muhamedagic <
> dej
On 06/07/2011 07:09 PM, CeR wrote:
> Hi there!
>
> I have some doubts, hope you folks can help me.
>
> In a system I have two (or more) ways to start a daemon:
> A) /etc/init.d/ script. The service could be started by the system
> (/etc/rcX) or by me manually.
> B) The daemon has an executable
Hi,
On Tue, Jun 07, 2011 at 06:51:42PM +, veghead wrote:
> I'm trying to setup a pair of LDAP servers running 389 (formerly Fedora DS)
> in
> high availability using Pacemaker with a floating IP. In addition, 389
> supports
> multi-master replication, where all changes on one node are auto
On Tue, Jun 07, 2011 at 11:19:25AM -0600, Serge Dubrouski wrote:
> On Tue, Jun 7, 2011 at 9:55 AM, Dejan Muhamedagic wrote:
>
> > On Tue, Jun 07, 2011 at 09:47:17AM -0600, Serge Dubrouski wrote:
> > > On Tue, Jun 7, 2011 at 9:39 AM, Dejan Muhamedagic wrote:
> > >
> > > > Hi,
> > > >
> > > > On