Hi,
I have a cluster and it works well, but sometimes the cluster stops on all
nodes and I have to start it manually. The pcsd service is running but the
cluster is stopped. I checked the pacemaker log but couldn't find any warning
or error. What is the issue?
(stonith is disabled.)
Regards, H.Yavari
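If the expectation is that the cluster comes back automatically after a node
reboot, one common cause of this symptom is that the cluster services are not
enabled at boot. A minimal check, assuming pcs on RHEL 7:

```shell
# Enable corosync/pacemaker to start at boot on all nodes; by default
# pcs leaves them disabled, so a rebooted node stays stopped until
# someone runs "pcs cluster start" manually.
pcs cluster enable --all

# Confirm the current state afterwards.
pcs cluster status
```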
On 05/20/2016 10:29 AM, Leon Botes wrote:
> I push the following config.
> The iscsi-target fails as it tries to start on iscsiA-node1.
> This is because I have no target installed on iscsiA-node1, which is by
> design. All services listed here should only start on iscsiA-san1 or
> iscsiA-san2.
> I am
On 05/20/2016 10:02 AM, Pratip Ghosh wrote:
> Hi All,
>
> I am implementing a 2-node Red Hat (RHEL 7.2) HA cluster on Amazon EC2
> instances. For the floating IP I am using a shell script provided by AWS, so
> that the virtual IP floats to another instance if one server fails its
> health check. In basic
Klaus Wenninger wrote:
> On 05/20/2016 08:39 AM, Ulrich Windl wrote:
>>> Jehan-Guillaume de Rorthais wrote on 19.05.2016 at 21:29 in
> > message <20160519212947.6cc0fd7b@firost>:
> > [...]
> >> I was thinking of a use case where a graceful
Ken Gaillot wrote:
> A recent thread discussed a proposed new feature, a new environment
> variable that would be passed to resource agents, indicating whether a
> stop action was part of a recovery.
>
> Since that thread was long and covered a lot of topics, I'm starting a
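As a rough illustration of how a resource agent might consume such a variable,
here is a sketch; note that `PCMK_STOP_IS_RECOVERY` is an invented name, since
the thread had not settled on what the real variable would be called:

```shell
#!/bin/sh
# Hypothetical sketch: PCMK_STOP_IS_RECOVERY is an invented name used
# only for illustration of the proposed feature.
ra_stop() {
    if [ "${PCMK_STOP_IS_RECOVERY:-false}" = "true" ]; then
        echo "recovery stop: skipping graceful shutdown steps"
    else
        echo "normal stop: shutting down gracefully"
    fi
}

# The cluster would export the variable before invoking the agent:
PCMK_STOP_IS_RECOVERY=true
ra_stop
```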
Ken Gaillot wrote:
> On 05/12/2016 06:21 AM, Adam Spiers wrote:
> > Ken Gaillot wrote:
> >> On 05/10/2016 02:29 AM, Ulrich Windl wrote:
> Here is what I'm testing currently:
>
> - When the cluster recovers a resource, the resource agent's
I push the following config.
The iscsi-target fails as it tries to start on iscsiA-node1.
This is because I have no target installed on iscsiA-node1, which is by
design. All services listed here should only start on iscsiA-san1 or
iscsiA-san2.
I am using iscsiA-node1 basically for quorum
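One common way to enforce that with pcs is location constraints keeping the
target resources off the quorum-only node; a sketch (the resource and node
names follow the message, but the actual pushed config was truncated):

```shell
# Keep the target stack off the quorum-only node entirely:
pcs constraint location iscsi-target avoids iscsiA-node1

# Or, equivalently, pin it to the two SAN nodes with positive scores:
pcs constraint location iscsi-target prefers iscsiA-san1=100 iscsiA-san2=100
```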
Hi All,
I am implementing a 2-node Red Hat (RHEL 7.2) HA cluster on Amazon EC2
instances. For the floating IP I am using a shell script provided by AWS, so
that the virtual IP floats to another instance if one server fails its health
check. At a basic level the cluster is working, but I have 2 issues on
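For reference, such failover scripts typically move the address with the AWS
CLI; a hedged sketch of the usual call (the ENI ID and address below are
placeholders, not values from the message):

```shell
# Move a secondary private IP to this node's network interface.
# --allow-reassignment lets the address be taken over even while it is
# still assigned to the failed instance's interface.
ENI_ID="eni-0123456789abcdef0"   # placeholder
FLOATING_IP="10.0.0.100"         # placeholder
aws ec2 assign-private-ip-addresses \
    --network-interface-id "$ENI_ID" \
    --private-ip-addresses "$FLOATING_IP" \
    --allow-reassignment
```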
On Fri, 20 May 2016 15:31:16 +0300,
Andrey Rogovsky wrote:
> Hi!
> I can't get the attribute value:
> /usr/sbin/crm_attribute -q --type nodes --node-uname $HOSTNAME --attr-name
> master-pgsqld --get-value
> Error performing operation: No such device or address
>
> This value
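That error typically just means the attribute has never been written on that
node (a master score such as master-pgsqld is normally set by the resource
agent itself, so it only appears once the agent has run). For comparison,
setting and then reading a node attribute looks like:

```shell
# Write the attribute first; querying an attribute that was never set
# yields "Error performing operation: No such device or address".
/usr/sbin/crm_attribute --type nodes --node-uname "$HOSTNAME" \
    --attr-name master-pgsqld --attr-value 1001
/usr/sbin/crm_attribute -q --type nodes --node-uname "$HOSTNAME" \
    --attr-name master-pgsqld --get-value
```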
> -Original Message-
> From: Jehan-Guillaume de Rorthais [mailto:j...@dalibo.com]
> Sent: Friday, 20 May 2016 13:52
> To: Felix Zachlod (Lists)
> Cc: users@clusterlabs.org
> Subject: Re: [ClusterLabs] Pacemaker not invoking monitor after
> $interval
>
>>> "Felix Zachlod (Lists)" wrote on 20.05.2016 at 13:33 in message
<670f732376b88843b8df7ad917cf8dd9289c0...@bulla.intern.onesty-tech.loc>:
> Hello!
>
> I am currently working on a cluster setup which includes several resources
> with "monitor interval=XXs" set. As
On Fri, 20 May 2016 11:33:39 +,
"Felix Zachlod (Lists)" wrote:
> Hello!
>
> I am currently working on a cluster setup which includes several resources
> with "monitor interval=XXs" set. As far as I understand this should run the
> monitor action on the resource
Hello!
I am currently working on a cluster setup which includes several resources with
"monitor interval=XXs" set. As far as I understand this should run the monitor
action on the resource agent every XX seconds. But it seems it doesn't.
Actually, monitor is only invoked under special conditions,
version 1.1.13-10.el7_2.2-44eb2dd
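One thing worth ruling out in this situation: Pacemaker only runs a recurring
monitor if a monitor operation is actually configured on the resource;
otherwise it just probes once at startup. A sketch with pcs (the resource name
is a placeholder):

```shell
# Add a recurring monitor; without one, Pacemaker only performs
# one-off probes and the RA's monitor is never called periodically.
pcs resource op add my-resource monitor interval=30s timeout=20s

# Multi-state resources need a distinct monitor interval per role:
pcs resource op add my-resource monitor interval=31s role=Master
```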
Hello!
I am currently developing a master/slave resource agent. So far it is working
just fine, but this resource agent implements reload() and this does not work
as expected when running as Master:
The reload action is invoked and it succeeds returning 0. The
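For context, a minimal sketch of the reload-related plumbing in an OCF agent
(hypothetical code, not the poster's agent):

```shell
#!/bin/sh
# Hypothetical sketch of an OCF agent's reload action. The agent must
# also advertise <action name="reload" .../> in its meta-data; in the
# Pacemaker of this era, parameters declared unique="0" were the ones
# treated as reloadable.
OCF_SUCCESS=0
OCF_ERR_GENERIC=1

ra_reload() {
    # Re-read configuration without a full restart; the PID file path
    # is a placeholder for illustration.
    pidfile="/var/run/myservice-example.pid"
    if [ -f "$pidfile" ]; then
        kill -HUP "$(cat "$pidfile")" && return "$OCF_SUCCESS"
    fi
    return "$OCF_ERR_GENERIC"
}

case "${1:-}" in
    reload) ra_reload ;;
esac
```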
On Fri, 20 May 2016 11:12:28 +0200,
"Ulrich Windl" wrote:
> >>> Jehan-Guillaume de Rorthais wrote on 20.05.2016 at 09:59 in
> message <20160520095934.029c1822@firost>:
> > On Fri, 20 May 2016 08:39:42 +0200,
> > "Ulrich Windl"
>>> Jehan-Guillaume de Rorthais wrote on 20.05.2016 at 09:59 in
message <20160520095934.029c1822@firost>:
> On Fri, 20 May 2016 08:39:42 +0200,
> "Ulrich Windl" wrote:
>
>> >>> Jehan-Guillaume de Rorthais wrote on
On 05/20/2016 08:39 AM, Ulrich Windl wrote:
>>> Jehan-Guillaume de Rorthais wrote on 19.05.2016 at 21:29 in
> message <20160519212947.6cc0fd7b@firost>:
> [...]
>> I was thinking of a use case where a graceful demote or stop action failed
>> multiple times and to give a
On Fri, 20 May 2016 08:39:42 +0200,
"Ulrich Windl" wrote:
> >>> Jehan-Guillaume de Rorthais wrote on 19.05.2016 at 21:29 in
> message <20160519212947.6cc0fd7b@firost>:
> [...]
> > I was thinking of a use case where a graceful
>>> Jehan-Guillaume de Rorthais wrote on 19.05.2016 at 21:29 in
message <20160519212947.6cc0fd7b@firost>:
[...]
> I was thinking of a use case where a graceful demote or stop action failed
> multiple times and to give a chance to the RA to choose another method to
>
19 matches