Hi Jiaju,
Thank you for your reply.
I now understand some of the ways in which the arbitrator can be made redundant.
Sincerely,
Jiaju
On 27 March 2012 at 22:38, Jiaju Zhang wrote:
> On Tue, 2012-03-27 at 12:22 +0900, Yuichi SEINO wrote:
>> Hi Jiaju,
>>
>> Thank you for your reply. I understand the case where an arbitrator has to
>> be
Hi Andreas,
Thanks, I've updated the colocation rule to be in the correct order. I also
re-enabled the STONITH resource (it had been temporarily disabled for some
additional testing). DRBD has its own network connection over the br1 interface
(192.168.5.0/24 network), a direct crossover cabl
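For reference, a colocation/order pair for a DRBD-backed filesystem in the crm shell usually takes the form sketched below; the resource names ms_drbd and fs_data are placeholders, not taken from this thread:

```shell
# Hypothetical resource names; adjust to your own configuration.
# Keep the filesystem on the node where the DRBD master runs...
crm configure colocation fs-with-drbd-master inf: fs_data ms_drbd:Master
# ...and only mount it after the master has been promoted.
crm configure order fs-after-drbd-promote inf: ms_drbd:promote fs_data:start
```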
On 2012-03-27T17:54:18, sinchb wrote:
> crm status :
> gfs-control:0_monitor_0 (node=VSR1060, call=11, rc=5, status=complete): not
> installed
> dlm:0_monitor_0 (node=VSR1060, call=12, rc=5, status=complete): not installed
> test1_monitor_0 (node=VSR1060, call=8, rc=5, status=complete): not inst
On Tue, 2012-03-27 at 16:56 +1100, Andrew Beekhof wrote:
> On Tue, Mar 27, 2012 at 2:40 PM, Gao,Yan wrote:
> > On 03/27/12 10:33, Andrew Beekhof wrote:
> >> On Tue, Mar 27, 2012 at 2:34 AM, Jiaju Zhang wrote:
> >>> On Mon, 2012-03-26 at 11:50 +0900, Yuichi Seino wrote:
> Hi Jiaju,
>
> >
On Tue, 2012-03-27 at 12:22 +0900, Yuichi SEINO wrote:
> Hi Jiaju,
>
> Thank you for your reply. I understand the case where an arbitrator has to
> be redundant.
> And I want to ask two questions.
>
> 1. I have thought of a way to satisfy this. The idea is that we make a
> dedicated arbitrator site.
> Ca
On 2012-03-27T09:17:15, Dominik Epple wrote:
> Since I don't know what strange events may have led to this situation
> (perhaps it was a manual manipulation), I created the following testcase to
> investigate this: configure a cluster, add some failover IP address primitive
> resource, let pa
Hello,
I do have no-quorum-policy="ignore"
Any idea how to reproduce this?
Regards,
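A setup under which this class of problem is commonly reproduced is a two-node cluster with quorum ignored and fencing off; this is an assumed test configuration, not one taken from the thread:

```shell
# Assumed two-node test settings (NOT for production):
crm configure property no-quorum-policy=ignore
crm configure property stonith-enabled=false   # testing only!
# Then break the cluster interconnect (e.g. ifdown the cluster
# interface on one node) so each node believes the other is gone.
```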
On 23 February 2012 23:43, Andrew Beekhof wrote:
> On Thu, Feb 23, 2012 at 9:17 PM, Hugo Deprez
> wrote:
> > I don't think so, as I do have other similar clusters on the same network
> > and didn't have any
crm status :
gfs-control:0_monitor_0 (node=VSR1060, call=11, rc=5, status=complete): not
installed
dlm:0_monitor_0 (node=VSR1060, call=12, rc=5, status=complete): not installed
test1_monitor_0 (node=VSR1060, call=8, rc=5, status=complete): not installed
Why???
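An rc=5 in these monitor results is OCF_ERR_INSTALLED: the resource agent, or something it depends on, is missing on that node. A quick way to check (the paths below are the usual OCF locations; adjust for your distribution):

```shell
# Is the controld agent (used by dlm and gfs-control) present?
ls /usr/lib/ocf/resource.d/pacemaker/
# Are the daemons the agents need installed on this node?
which dlm_controld gfs_controld
```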
Hello,
I have a primitive resource (in my case, a IPaddr2 simple failover IP). Now I
happen to find the resource running on both of the members of the HA cluster.
Well, Pacemaker says it has started it on only one of the nodes, but the IP
address which is managed by this resource is configured
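A minimal IPaddr2 failover primitive of the kind described might look like the sketch below; the address and resource name are examples, not taken from the report:

```shell
# Example primitive (hypothetical names and address):
crm configure primitive failover_ip ocf:heartbeat:IPaddr2 \
    params ip=192.168.1.100 cidr_netmask=24 \
    op monitor interval=10s
# To diagnose a duplicate address, compare what the cluster thinks
# (crm_mon -1) with what is actually configured (ip addr show) on
# each node.
```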
Andrew Beekhof writes:
> I believe things are much improved with more recent releases.
> Hence the recommendation of 1.4.x
Yes, we switched to corosync and the problem no longer occurs!
All tests pass now.
>>But really: you need to fix your "unsteady" network,
>>and probably should impl
On 2012-03-27T09:26:53, neha chatrath wrote:
> So, I need to define the following order clauses:
>
> crm configure order fs--after-drbd inf: ms_ddrbd: promote drbd_fs:start
> crm configure order X--after-drbd_fs inf: drbd_fs X
> crm configure order Y--after-drbd_fs inf: drbd_fs Y
>
> But with abov
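For comparison, the usual shape of such constraints in the crm shell (assuming ms_ddrbd is the master/slave DRBD resource and drbd_fs the filesystem, as in the quoted snippet) orders the promote action before the filesystem start, with no space after the colon:

```shell
# Promote the DRBD master before starting the filesystem:
crm configure order fs-after-drbd inf: ms_ddrbd:promote drbd_fs:start
# Subsequent resources then order after the filesystem:
crm configure order X-after-fs inf: drbd_fs X
crm configure order Y-after-fs inf: drbd_fs Y
```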
On 2012-03-26T05:19:38, Nirmala wrote:
> I have a 4-node cluster which runs a master/slave resource with stonith
> disabled. I use one of the following methods to stop the master role
> - demote
> - pkill corosync on master node
> - shutdown -r now on master node
>
> In the first two cases,
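The three stop methods listed above can be sketched as shell commands; the resource name ms_res is a placeholder, not taken from the message:

```shell
# Three ways of removing the master role (hypothetical resource name):
crm resource demote ms_res   # clean, cluster-managed demotion
pkill corosync               # simulate messaging-layer failure
shutdown -r now              # reboot the current master node
```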