On Wed, Mar 11, 2009 at 8:27 PM, Andrew Beekhof wrote:
> On Wed, Mar 11, 2009 at 18:17, Pavel Georgiev wrote:
I've noticed that pacemaker's /usr/lib/heartbeat/cib leaks ~200kb
every time a resource is migrated. I've set up a resource to fail ~2
minutes after it is started, and the cib process quickly grows in size.
I've upgraded pacemaker to 1.0.2-11.1, which is the latest CentOS rpm,
but the problem persists.
On Mon, Mar 9, 2009 at 10:44 PM, Andreas Kurz wrote:
> Have a look at the output of "/usr/lib/heartbeat/crmd metadata" and search
> for "cluster_recheck_interval" ... and add it to your crm_config.
>
> I think Andrew already added a patch to pacemaker so the
> cluster_recheck_interval is enabled b
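As a hedged sketch, the property mentioned above could be set via the crm shell roughly like this (the 5-minute value is an arbitrary illustration, and depending on the version the option name may appear with underscores or hyphens):

```shell
# Assumed crm shell syntax; the interval value is illustrative only.
crm configure property cluster-recheck-interval="5min"
```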
I'm running heartbeat 2.99 + pacemaker 1.0. I have migration-threshold=2 and
failure-timeout=600s. When I simulate 2 resource failures per server, the
resource is migrated to the next node (as expected). When all servers
eligible for running the resource reach migration-threshold failures (2),
the
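For reference, the meta attributes described above might be configured like this in the crm shell (the resource and agent names are hypothetical; only the meta attribute values mirror the thread):

```shell
# Hypothetical resource; migration-threshold and failure-timeout
# match the values discussed in the thread.
crm configure primitive my-rsc lsb:my-service \
    meta migration-threshold="2" failure-timeout="600s" \
    op monitor interval="30s"
```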
Does anyone know a good solution here? I'm basically trying to make quorum
decisions based on only three of the nodes of a four-node cluster.
Thanks.
On Tue, Feb 17, 2009 at 9:29 AM, Andrew Beekhof wrote:
> On Tue, Feb 17, 2009 at 02:14, Pavel Georgiev wrote:
> > I have a setup with
Thanks, I'll upgrade.
Any input on issues (2) and (3)?
On Wed, Feb 18, 2009 at 9:15 PM, Serge Dubrouski wrote:
> 2.1.3 has a well-known bug that was fixed in later releases: the fail
> count doesn't get increased when a resource fails. You have to upgrade.
>
> On Wed, Feb 18, 2009 a
I'm using heartbeat 2.1.3, the default CentOS 5 rpm. I'm running 3 nodes with a
single resource (LSB RA) which has an equal score on each server. I'm having
two "issues" (which might actually be features):
1) If I stop the resource between two heartbeat "monitor" intervals, it
detects it is down and rest
I have a setup with 4 nodes; 3 of them may run a resource in active/passive
mode, and that resource should never run on the 4th node. I also have a
second resource which may run on any one of the nodes (again
active/passive).
Since I can run the first resource on only 3 of the nodes, can I make
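One common way to keep a resource off a particular node is a -INF location constraint; a hedged sketch in crm shell syntax (the resource and node names below are made up, not from the thread):

```shell
# Assumed names: rsc-A is the restricted resource, node4 the excluded node.
crm configure location rsc-A-not-on-node4 rsc-A -inf: node4
```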
Thanks, that was helpful.
On Monday 23 June 2008 13:40:15 Dejan Muhamedagic wrote:
> Hi,
>
> On Fri, Jun 20, 2008 at 08:44:16PM +0300, Pavel Georgiev wrote:
> > Hi.
> >
> > I'm trying to get some custom scripts triggered whenever an appliance
> > acquires/rele
it into a group with
your primary resources.
On Fri, Jun 20, 2008 at 11:44 AM, Pavel Georgiev
<[EMAIL PROTECTED]> wrote:
Hi.
I'm trying to get some custom scripts triggered whenever an appliance
acquires/releases a resource. What is the best practice for doing this?
I saw HA executes scripts in /etc/ha.d/resource.d/ for the specified
resources, so that looks like the way to go; I'm expecting
that /etc/ha.d/resource.
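One possible approach (a sketch, not something confirmed in the thread): wrap the real init script in an LSB-style script that runs custom hooks around start and stop. All paths and names below are assumptions for illustration.

```shell
#!/bin/sh
# Sketch of an LSB wrapper resource script; every name here is assumed.
# Heartbeat calls this with start/stop/status; custom hooks run around
# the real init script.

REAL_INIT="/etc/init.d/myservice"   # hypothetical managed service
HOOK_DIR="/etc/ha.d/hooks"          # hypothetical hook location

run_hook() {
    # Run the named hook if it exists and is executable.
    [ -x "$HOOK_DIR/$1" ] && "$HOOK_DIR/$1"
    return 0
}

case "$1" in
  start)
    "$REAL_INIT" start && run_hook on-acquire
    ;;
  stop)
    run_hook on-release
    "$REAL_INIT" stop
    ;;
  status)
    "$REAL_INIT" status
    ;;
  *)
    echo "Usage: $0 {start|stop|status}"
    exit 1
    ;;
esac
```

The wrapper keeps the real service logic untouched, so the hooks can be added or removed without editing the service's own init script.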