On Thu, Oct 2, 2014 at 6:59 PM, Саша Александров wrote:
> Andrei,
>
> I suspect you are thinking along the lines of 'if there is a default
> monitor interval value of 60s, the monitor operation should occur every
> 60 seconds', correct?
>
Something along those lines, yes, but not limited to operations.
> Well, t
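For context, a recurring monitor is normally declared explicitly on the resource; a minimal crm shell sketch with assumed names, not taken from this thread:
{{{
primitive p_dummy ocf:pacemaker:Dummy \
    op monitor interval=60s timeout=20s
}}}
Pacemaker only schedules recurring monitors that are actually configured on the resource.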
On Thu, Oct 2, 2014 at 6:31 PM, Dejan Muhamedagic wrote:
> Hi,
>
> On Thu, Oct 02, 2014 at 12:22:35PM +0400, Andrei Borzenkov wrote:
>> Is it possible to display values for all resource properties,
>> including those set to default values?
>
> What do you consider a "property"? Instance attributes
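For reference, agent parameter defaults live in the resource agent metadata, while the CIB only stores explicitly configured values; a hedged sketch (the agent name is just an example):
{{{
# parameters and their defaults, as reported by the agent metadata (crmsh)
crm ra info ocf:heartbeat:IPaddr2

# only explicitly set instance/meta attributes and operations appear here
crm configure show
}}}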
On Mon, Oct 6, 2014 at 9:03 AM, Digimer wrote:
> If stonith was configured, after the time out, the first node would fence
> the second node ("unable to reach" != "off").
>
> Alternatively, you can set corosync to 'wait_for_all' and have the first
> node do nothing until it sees the peer.
>
Am I
Hi All,
When I run the following sample on RHEL6.5 (glib2-2.22.5-7.el6) and
Ubuntu14.04 (libglib2.0-0:amd64 2.40.0-2), the behaviour is different.
* Sample : test2.c
{{{
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>
#include <glib.h>
guint t1, t2, t3;
gboolean timer_func2(gpointer data){
printf("TIMER EXPIRE!2\n");
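/*
 * A separate minimal sketch (an assumption, not the rest of the original
 * test2.c): calling g_source_remove() on a one-shot timeout whose callback
 * already returned FALSE.  glib2 2.22.5 stays silent, while glib 2.40.0
 * logs "Source ID ... was not found when attempting to remove it".
 */
#include <stdio.h>
#include <glib.h>

static guint tag;

static gboolean one_shot(gpointer data)
{
    printf("TIMER EXPIRE!\n");
    return FALSE;                 /* FALSE removes the source after it fires */
}

static gboolean remove_again(gpointer data)
{
    g_source_remove(tag);         /* the source is already gone at this point */
    g_main_loop_quit((GMainLoop *) data);
    return FALSE;
}

int main(void)
{
    GMainLoop *loop = g_main_loop_new(NULL, FALSE);

    tag = g_timeout_add(100, one_shot, NULL);
    g_timeout_add(500, remove_again, loop);
    g_main_loop_run(loop);
    g_main_loop_unref(loop);
    return 0;
}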
If stonith was configured, after the time out, the first node would
fence the second node ("unable to reach" != "off").
Alternatively, you can set corosync to 'wait_for_all' and have the first
node do nothing until it sees the peer.
To do otherwise would be to risk a split-brain. Each node ne
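For reference, wait_for_all is a corosync votequorum option; a minimal sketch of the relevant corosync.conf section for a two-node cluster (assumed layout, not taken from this thread):
{{{
quorum {
    provider: corosync_votequorum
    two_node: 1       # two-node mode; enables wait_for_all by default
    wait_for_all: 1   # a freshly started node waits until it has seen its peer
}
}}}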
Hi all,
I have had this question for a while and did not understand the logic behind it.
Why do I have to start pacemaker simultaneously on both nodes of my 2-node
cluster, even though I have disabled quorum in the cluster?
It fails in the startup step of
[root@rk16 ~]# service pace
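For reference, "disabled quorum" on a two-node pacemaker cluster usually means a cluster property along these lines; a hedged sketch in crm shell syntax, not taken from the original post:
{{{
# keep running resources even when the cluster has no quorum
crm configure property no-quorum-policy=ignore
# a two-node cluster still needs fencing to avoid split-brain
crm configure property stonith-enabled=true
}}}
Note that this only tells pacemaker what to do about quorum; corosync-level settings such as wait_for_all still decide how a freshly started node behaves.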
On 3 Oct 2014, at 5:07 am, Felix Zachlod wrote:
> On 02.10.2014 at 18:02, Digimer wrote:
>> On 02/10/14 02:44 AM, Felix Zachlod wrote:
>>> I am currently running 8.4.5 on top of Debian Wheezy with Pacemaker 1.1.7
>>
>> Please upgrade to 1.1.10+!
>>
>
> Are you referring to a special bug/ code c
Hi Andrew,
>> lrmd[1632]: error: crm_abort: crm_glib_handler: Forked child 1840 to
>> record non-fatal assert at logging.c:73 : Source ID 51 was not found when
>> attempting to remove it
>> lrmd[1632]: crit: crm_glib_handler: GLib: Source ID 51 was not found
>> when attempting to remove it
>
On 3 Oct 2014, at 3:22 am, Daniel Dehennin wrote:
> emmanuel segura writes:
>
>> for guest fencing you can use something like this:
>> http://www.daemonzone.net/e/3/ ; rather than having a full cluster stack in
>> your guest, you can try to use pacemaker-remote for your virtual guest
>
> I think i
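For reference, the pacemaker-remote approach mentioned above is typically wired up by adding the remote-node meta attribute to the VM resource; a minimal sketch in crm shell syntax (resource and file names are assumptions):
{{{
primitive vm-guest1 ocf:heartbeat:VirtualDomain \
    params config="/etc/libvirt/qemu/guest1.xml" hypervisor="qemu:///system" \
    meta remote-node="guest1"
}}}
With that in place the guest runs only pacemaker_remoted instead of a full corosync/pacemaker stack.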
crm_report starting prior to the disk being removed?
On 2 Oct 2014, at 3:55 pm, Carsten Otto wrote:
> Dear Andrew,
>
> please find the time to have a look at this.
>
> Thank you,
> Carsten
> --
> andrena objects ag
> Büro Frankfurt
> Clemensstr. 8
> 60487 Frankfurt
>
> Tel: +49 (0) 69 977 86
On 2 Oct 2014, at 8:02 pm, Andrei Borzenkov wrote:
> According to documentation (Pacemaker 1.1.x explained) "when
> [Master/Slave] the resource is started, it must come up in the
> mode called Slave". But what I observe here is that in some cases pacemaker
> treats the Slave state as an error. As an example (pa
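For reference, a minimal master/slave configuration in crm shell syntax (assumed names, not from the original report); the agent is expected to start in the Slave role and only become Master after a promote:
{{{
primitive p_stateful ocf:pacemaker:Stateful \
    op monitor interval=10s role=Master \
    op monitor interval=11s role=Slave
ms ms_stateful p_stateful \
    meta master-max=1 clone-max=2
}}}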
On 3 Oct 2014, at 11:18 am, renayama19661...@ybb.ne.jp wrote:
> Hi Andrew,
>
> We confirmed a similar problem with Pacemaker 1.1.12.
> The problem occurs with glib 2.40.0 on Ubuntu 14.04.
>
> lrmd[1632]: error: crm_abort: crm_glib_handler: Forked child 1840 to
> record non-fatal assert
On Saturday, 4 October 2014 at 16:02:24, Hauke Homburg wrote:
> Hello,
>
> Can anyone suggest a good book about Pacemaker configuration? I have
> already read the books from Galileo Computing and O'Reilly.
>
> Thanks for your help
>
> Greetings
>
> Hauke
thanks for reading the German "Clust