On 26/08/16 02:14, Jason A Ramsey wrote:
> Well, I got around the problem, but I don’t understand the solution…
>
> I edited /etc/pam.d/password-auth and commented out the following line:
>
> auth    required    pam_tally2.so onerr=fail audit silent deny=5
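An alternative to commenting the module out is to clear the accumulated
failure count for the locked account instead; a minimal sketch (the user name
below is only a placeholder):

  # show the current failure count for the account
  pam_tally2 --user hacluster

  # reset the counter so the account is no longer locked out
  pam_tally2 --user hacluster --reset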
On 05/09/16 21:26 +0200, Jan Pokorný wrote:
> On 25/08/16 17:55 +0200, Sébastien Emeriau wrote:
>> When I check my corosync.log I see this line:
>>
>> info: cib_stats: Processed 1 operations (1.00us average, 0%
>> utilization) in the last 10min
>>
>> What does it mean (CPU load or just
On 25/08/16 17:55 +0200, Sébastien Emeriau wrote:
> When I check my corosync.log I see this line:
>
> info: cib_stats: Processed 1 operations (1.00us average, 0%
> utilization) in the last 10min
>
> What does it mean (CPU load or just information)?
These are just periodically (10 minutes
Hi,
On Mon, Sep 5, 2016 at 3:46 PM, Dan Swartzendruber wrote:
> ...
> Marek, thanks. I have tested repeatedly (8 or so times with disk writes
> in progress) with 5-7 seconds and have had no corruption. My only issue
> with using power_wait here (possibly I am
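For anyone wanting to try the same delay, power_wait is set as an attribute on
the fence device; a minimal sketch with pcs, assuming an existing stonith
resource whose name here is only a placeholder:

  # add a 5 second pause after each power action (the delay tested above)
  pcs stonith update fence_node1 power_wait=5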
On Mon, 5 Sep 2016 15:04:34 +0200
Klaus Wenninger wrote:
> On 09/05/2016 01:38 PM, Stefan Schörghofer wrote:
> > Hi List,
> >
> > I am currently trying to set up the following situation in my lab:
> >
> > |--Cluster IP--|
> > | HAProxy
On 2016-09-05 03:04, Ulrich Windl wrote:
Marek Grac wrote on 03.09.2016 at 14:41 in message:
Hi,
There are two problems mentioned in the email.
1) power-wait
Power-wait is a quite advanced option and
On 09/05/2016 03:02 PM, Gabriele Bulfon wrote:
> I read the docs; it looks like sbd fencing is more about iSCSI/FC-exposed
> storage resources.
> Here I have real shared disks (seen from Solaris with the format
> utility as normal SAS disks, but on both nodes).
> They are all JBOD disks, which ZFS
Perfect! I did miss it. Thanks for the help!!
-----Original Message-----
From: Kristoffer Grönlund [mailto:kgronl...@suse.com]
Sent: Monday, September 05, 2016 3:27 PM
To: Nurit Vilosny ; users@clusterlabs.org
Subject: RE: [ClusterLabs] pacemaker doesn't failover when
On 09/03/2016 08:42 PM, Shermal Fernando wrote:
>
> Hi,
>
>
>
> Currently our system has 99.96% uptime. But our goal is to increase
> it beyond 99.999%. Now we are studying the
> reliability/performance/features of pacemaker to replace the existing
> clustering solution.
>
>
>
> While testing
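For context, the gap between those two targets is easier to see as allowed
downtime per year:

  99.96%  uptime -> 0.04%  downtime, roughly 3.5 hours per year
  99.999% uptime -> 0.001% downtime, roughly 5.3 minutes per year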
I read the docs; it looks like sbd fencing is more about iSCSI/FC-exposed
storage resources.
Here I have real shared disks (seen from Solaris with the format utility as
normal SAS disks, but on both nodes).
They are all JBOD disks, which ZFS organizes into raidz/mirror pools, so I have
5 disks on one
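For what it's worth, sbd does not require iSCSI or FC; any small shared block
device that both nodes can see will do, provided sbd itself is available on
the platform (which may not be the case on illumos). A minimal sketch, with a
purely hypothetical device path:

  # initialize the sbd message slots on a small shared device
  sbd -d /dev/rdsk/c0t5000C500DEADBEEFd0s0 create

  # verify that both nodes can read the slot table
  sbd -d /dev/rdsk/c0t5000C500DEADBEEFd0s0 list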
On 09/05/2016 01:38 PM, Stefan Schörghofer wrote:
> Hi List,
>
> I am currently trying to set up the following situation in my lab:
>
> |------------- Cluster IP --------------|
> | HAProxy instances | HAProxy instances |
> | Node 1            | Node 2            |
>
>
>
> Now
Nurit Vilosny writes:
> Here is the configuration for the httpd:
>
> # pcs resource show cluster_virtualIP
> Resource: cluster_virtualIP (class=ocf provider=heartbeat type=IPaddr2)
> Attributes: ip=10.215.53.99
> Operations: monitor interval=20s
Depends on your OS, but generally /var/log/messages. Also, please share
your full pacemaker config. Please only obfuscate passwords.
digimer
On 05/09/16 07:53 PM, Nurit Vilosny wrote:
> Hi Kristoffer,
> Thanks for the prompt answer.
> Result of kill -9 is a dead process. Restart is not being
Here is the configuration for the httpd:
# pcs resource show cluster_virtualIP
Resource: cluster_virtualIP (class=ocf provider=heartbeat type=IPaddr2)
Attributes: ip=10.215.53.99
Operations: monitor interval=20s (cluster_virtualIP-monitor-interval-20s)
start interval=0s
Hi List,
I am currently trying to set up the following situation in my lab:
|------------- Cluster IP --------------|
| HAProxy instances | HAProxy instances |
| Node 1            | Node 2            |
Now I've successfully added the Cluster IP resource to pacemaker and
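One way to tie the pieces together, sketched with pcs and placeholder names
(cluster_ip for the cluster IP resource, a systemd-managed haproxy service):

  # run an haproxy instance on every node
  pcs resource create haproxy systemd:haproxy op monitor interval=10s
  pcs resource clone haproxy

  # keep the cluster IP on a node with a healthy haproxy, started after it
  pcs constraint colocation add cluster_ip with haproxy-clone INFINITY
  pcs constraint order haproxy-clone then cluster_ip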
Hello,
I implemented the suggested change in corosync, and I realized that "service
pacemaker stop" on the master node works provided that I run crm_resource -P
from another terminal right after it. The same goes for the "failback" case,
getting back the node that failed on the
On 09/05/2016 11:20 AM, Gabriele Bulfon wrote:
> The dual machine is equipped with a syncro controller LSI 3008 MPT SAS3.
> Both nodes can see the same jbod disks (10 at the moment, up to 24).
> Systems are XStreamOS / illumos, with ZFS.
> Each system has one ZFS pool of 5 disks, with different
The dual machine is equipped with a syncro controller LSI 3008 MPT SAS3.
Both nodes can see the same jbod disks (10 at the moment, up to 24).
Systems are XStreamOS / illumos, with ZFS.
Each system has one ZFS pool of 5 disks, with different pool names (data1,
data2).
When in active / active, the
>>> Marek Grac wrote on 03.09.2016 at 14:41 in message:
> Hi,
>
> There are two problems mentioned in the email.
>
> 1) power-wait
>
> Power-wait is a quite advanced option and there are only few fence
>
Nurit Vilosny writes:
> Hi everyone,
> I tried IRC for that, but I got disconnected and could not see the reply...
> So I try again:
> I have a cluster with 3 nodes and 2 services, apache and an application
> service, grouped together.
> Debugging the cluster, I used kill -9
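A common first thing to check in the kill -9 case is whether the apache
resource has a monitor operation at all; without one, Pacemaker never notices
that the process died. A minimal sketch with pcs, the resource name being a
placeholder:

  # list the operations defined for the resource
  pcs resource show apache

  # add a monitor if none is configured (interval/timeout are example values)
  pcs resource op add apache monitor interval=20s timeout=30s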