>>> Eric Ren wrote on 16.02.2017 at 04:50 in message:
> Hi,
>
> On 11/09/2016 12:37 AM, Marc Smith wrote:
>> Hi,
>>
>> First, I realize ocf:lvm2:VolumeGroup comes from the LVM2 package and
>> not resource-agents, but I'm hoping someone on this list is familiar
>> with this RA and can provide
>>> Jan Pokorný wrote on 15.02.2017 at 18:04 in message
<20170215170435.gk18...@redhat.com>:
> On 15/02/17 15:13 +, Christine Caulfield wrote:
>> On 15/02/17 14:50, Jan Friesse wrote:
Hi all,
Corosync Cluster Engine, version '2.3.4'
Copyright (c) 2006-2009 Red Hat, Inc.
Hi,
On 11/09/2016 12:37 AM, Marc Smith wrote:
Hi,
First, I realize ocf:lvm2:VolumeGroup comes from the LVM2 package and
not resource-agents, but I'm hoping someone on this list is familiar
with this RA and can provide some insight.
In my cluster configuration, I'm using ocf:lvm2:VolumeGroup to
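The preview breaks off here, but for context, a clustered volume-group resource of this type is usually defined along the following lines. This is a hedged sketch only: the resource name "p_vg_data", the VG name "vg_data", and the parameter name "volgrpname" are illustrative assumptions, not taken from the poster's configuration; check `crm ra info ocf:lvm2:VolumeGroup` for the agent's actual parameters.

```shell
# Hypothetical crmsh sketch; names and the "volgrpname" parameter are
# assumptions -- verify against `crm ra info ocf:lvm2:VolumeGroup`.
crm configure primitive p_vg_data ocf:lvm2:VolumeGroup \
    params volgrpname="vg_data" \
    op monitor interval="30s" timeout="60s"
```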
At 2017-02-15 23:13:08, "Christine Caulfield" wrote:
>
>Yes, it seems that some corosync SEGVs trigger this obscure bug in
>libqb. I've chased a few possible causes and none have been fruitful.
>
>If you get this then corosync has crashed, and this other bug is masking
>the actual diagnostics - I
please note that everything works fine when there is only one clone resource
configured, the resource will get restarted and the vip will get moved.
anyway, I will check my ocfs again.
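The behavior described above (clone restarting and the VIP following it) is usually expressed with a colocation plus an ordering constraint. A minimal sketch in pcs syntax, with made-up resource names ("sdclient-clone", "sdclient_vip" here stand in for whatever the real clone and VIP resources are called):

```shell
# Hedged sketch: make the VIP run only where the clone instance is healthy,
# and start it only after the clone. Resource names are placeholders.
pcs constraint colocation add sdclient_vip with sdclient-clone INFINITY
pcs constraint order start sdclient-clone then start sdclient_vip
```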
Original message
From: <kgail...@redhat.com>
To: He Hailong 10164561
Cc: <users@clusterlabs.org>
Date: 2017-02-15
On 02/15/2017 12:17 PM, dur...@mgtsciences.com wrote:
> I have 2 Fedora VMs (node1, and node2) running on a Windows 10 machine
> using Virtualbox.
>
> I began with this.
> http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Clusters_from_Scratch/
>
>
> When it came to fencing, I refer
On 15/02/17 18:04 +0100, Jan Pokorný wrote:
> On 15/02/17 15:13 +, Christine Caulfield wrote:
>> On 15/02/17 14:50, Jan Friesse wrote:
Hi all,
Corosync Cluster Engine, version '2.3.4'
Copyright (c) 2006-2009 Red Hat, Inc.
Today I found corosync consuming 100% cpu
Hi,
I have configured two virtualIP resources but one of them does not start
and I'm not able to find the reason.
Nothing appear in logs:
[root@vdicnode01 ~]# crm_mon -1 -r
Stack: corosync
Current DC: vdicnode02-priv (version 1.1.15-11.el7_3.2-e174ec8) - partition with quorum
Last updated: Wed F
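When crm_mon shows a resource stopped and the logs say nothing, running the start action in the foreground usually surfaces the agent's own error. A hedged debugging sketch; "nonstarting_vip" is a placeholder for the real resource name:

```shell
# Run the stopped resource's start action in the foreground so the
# resource agent's error output is visible ("nonstarting_vip" is a
# placeholder name):
pcs resource debug-start nonstarting_vip --full
# Lower-level equivalent:
crm_resource --resource nonstarting_vip --force-start
# Check for constraints or fail counts pinning the resource down:
pcs constraint show
crm_mon -1 -r --failcounts
```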
I have 2 Fedora VMs (node1, and node2) running on a Windows 10 machine
using Virtualbox.
I began with this.
http://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html-single/Clusters_from_Scratch/
When it came to fencing, I referred to this.
http://www.linux-ha.org/wiki/SBD_Fencing
To the file /etc
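The preview cuts off at "To the file /etc". For context, the SBD fencing setup described on that wiki page typically involves initializing the shared device and pointing the sbd daemon's config at it. A hedged sketch; the device path is a placeholder and the config path is the common default, not confirmed by the message:

```shell
# Sketch of a typical SBD setup (per the linux-ha.org SBD wiki).
# /dev/disk/by-id/SHARED-DISK is a placeholder for the real shared device.
sbd -d /dev/disk/by-id/SHARED-DISK create   # write the SBD header/slots
sbd -d /dev/disk/by-id/SHARED-DISK list     # verify the slot table

# /etc/sysconfig/sbd (assumed path; some distros use /etc/default/sbd):
#   SBD_DEVICE="/dev/disk/by-id/SHARED-DISK"
#   SBD_OPTS="-W"    # use the hardware watchdog
```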
Hi folks,
I'm running some test scenarios with an ocf_heartbeat_iscsi pacemaker
resource,
using the following XIV multipath'ed configuration:
I created a single XIV iscsi host definition containing all the pacemaker
host (cluster node) 'Initiator's:
XIV 7812475>>host_list_ports host=pacemaker_i
On 15/02/17 15:13 +, Christine Caulfield wrote:
> On 15/02/17 14:50, Jan Friesse wrote:
>>> Hi all,
>>>
>>> Corosync Cluster Engine, version '2.3.4'
>>> Copyright (c) 2006-2009 Red Hat, Inc.
>>>
>>> Today I found corosync consuming 100% cpu. Strace showed following:
>>>
>>> write(7, "\v\0\0\
On Wed, Feb 15, 2017 at 09:21:50AM -0600, Ken Gaillot wrote:
> On 02/15/2017 03:57 AM, he.hailo...@zte.com.cn wrote:
> > I just tried using colocation, it doesn't work.
> >
> >
> > I failed the node paas-controller-3, but sdclient_vip didn't get moved:
>
> The colocation would work, but the prob
On 02/15/2017 03:57 AM, he.hailo...@zte.com.cn wrote:
> I just tried using colocation, it doesn't work.
>
>
> I failed the node paas-controller-3, but sdclient_vip didn't get moved:
The colocation would work, but the problem you're having with router and
apigateway is preventing it from getting
On 15/02/17 14:50, Jan Friesse wrote:
>> Hi all,
>>
>> Corosync Cluster Engine, version '2.3.4'
>> Copyright (c) 2006-2009 Red Hat, Inc.
>>
>> Today I found corosync consuming 100% cpu. Strace showed following:
>>
>> write(7, "\v\0\0\0", 4) = -1 EAGAIN (Resource temporarily unava
Hi all,
Corosync Cluster Engine, version '2.3.4'
Copyright (c) 2006-2009 Red Hat, Inc.
Today I found corosync consuming 100% cpu. Strace showed following:
write(7, "\v\0\0\0", 4) = -1 EAGAIN (Resource temporarily unavailable)
write(7, "\v\0\0\0", 4) = -1 EAGAIN
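The same 4-byte write failing with EAGAIN over and over suggests a full non-blocking pipe or socket being retried in a tight loop. A couple of diagnostic commands (a hedged sketch, assuming corosync is running and you have root) can confirm what fd 7 is and how hot the loop spins:

```shell
# What does fd 7 point at (pipe, unix socket, ...)?
pid=$(pidof corosync)
ls -l /proc/"$pid"/fd/7
# Sample the write() rate for 5 seconds; strace -c prints a per-syscall
# summary on detach instead of one line per call:
timeout 5 strace -c -e trace=write -p "$pid"
```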
Hi all,
Corosync Cluster Engine, version '2.3.4'
Copyright (c) 2006-2009 Red Hat, Inc.
Today I found corosync consuming 100% cpu. Strace showed following:
write(7, "\v\0\0\0", 4) = -1 EAGAIN (Resource temporarily unavailable)
write(7, "\v\0\0\0", 4) = -1 EAGAIN (
I switched back to "location" tonight to continue with the testing; at least
sometimes the VIP is moving.
As I mentioned earlier, with "location", only one clone resource would get
restarted and the other two would not, but just now all 3 clone resources get
restarted and the VIPs get moved as expe
I just tried using colocation, it doesn't work.
I failed the node paas-controller-3, but sdclient_vip didn't get moved:
Online: [ paas-controller-1 paas-controller-2 paas-controller-3 ]
router_vip (ocf::heartbeat:IPaddr2): Started paas-controller-1
sdclient_vip (ocf::hea
18 matches