On Tue, 2010-05-11 at 07:48 +0200, Alain.Moulle wrote:
> Hi,
> FYI : me too, I have debug : on and I faced the problem on RHEL5 as well
> as on fc12.
> Alain
I have found the root cause that I believe is related to your issues.
Basically, with debug: on the internal buffers inside logsys are
overflowed,
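For readers following along, the setting under discussion lives in the logging stanza of corosync.conf. A minimal sketch of the workaround (turning debug off); option names as given in the corosync.conf man page, though placement may differ across versions:

```
logging {
        # debug: on is what overflows the internal logsys buffers
        # on a busy cluster; leave it off unless actively debugging
        debug: off
        to_syslog: yes
}
```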
Hi,
FYI: me too, I have debug: on and I faced the problem on RHEL5 as well
as on fc12.
Alain
In our active/passive cluster openais is started on the active node "cuzzonia"
and the resource "MONITOR" is started too.
During boot processing of the second node "cuzzonib" resource "MONITOR" is
started on cuzzonib too before openais is started on that node.
Later, when openais detects that the
On Mon, 2010-05-10 at 23:58 +0200, Andreas Mock wrote:
> -----Original Message-----
> From: Steven Dake
> Sent: 10.05.2010 23:38:01
> To: "Alain.Moulle"
> Subject: [Openais] plan for resolving corosync services unloading problem
> blocking shutdown on opensuse
>
> >We will begin analysis of the instrumentation results once we have a
> >trace.
On Mon, 2010-05-10 at 19:02 -0400, Vadym Chepkov wrote:
> Yes, I am
>
Try without it.
>
> On May 10, 2010, at 6:59 PM, Steven Dake wrote:
>
> > Do you have debug: on in your config file?
> >
> > Regards
> > -steve
> >
> > On Mon, 2010-05-10 at 18:24 -0400, Vadym Chepkov wrote:
> >> Hi,
> >>
> >>
Yes, I am
On May 10, 2010, at 6:59 PM, Steven Dake wrote:
> Do you have debug: on in your config file?
>
> Regards
> -steve
>
> On Mon, 2010-05-10 at 18:24 -0400, Vadym Chepkov wrote:
>> Hi,
>>
>> I experienced the same issue on Redhat 5.5 PPC.
> >> I compiled all packages myself, since there are no ppc packages available in
> >> the clusterlabs repository.
Do you have debug: on in your config file?
Regards
-steve
On Mon, 2010-05-10 at 18:24 -0400, Vadym Chepkov wrote:
> Hi,
>
> I experienced the same issue on Redhat 5.5 PPC.
> I compiled all packages myself, since there are no ppc packages available in
> the clusterlabs repository.
> If Andrew will post his SRPM somewhere or maybe instructions how to compile it,
> I would be happy to contribute.
Hi,
I experienced the same issue on Redhat 5.5 PPC.
I compiled all packages myself, since there are no ppc packages available in
the clusterlabs repository.
If Andrew will post his SRPM somewhere or maybe instructions how to compile it,
I would be happy to contribute.
Vadym
Putting the expected votes to one in both corosync and pacemaker allows the
cluster to start with one node (not what I want). Unfortunately, it also does
not allow the cluster to continue with 1 node after a failure, because
pacemaker remembers the two-node cluster and increases its expected votes.
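A minimal sketch of the votequorum configuration being discussed (syntax per the votequorum documentation; whether two_node gives exactly the desired startup behavior on the corosync version in this thread is the open question):

```
quorum {
        provider: corosync_votequorum
        expected_votes: 2
        # two_node allows the cluster to stay quorate with a single
        # surviving node; in later corosync versions it also implies
        # wait_for_all, so both nodes are required at first startup
        two_node: 1
}
```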
Bug analysis that we are undertaking can be found here:
https://bugzilla.redhat.com/show_bug.cgi?id=590898
Please feel free to add any extra data you may have beyond the
backtrace.
Thanks
-steve
On Mon, 2010-05-10 at 14:38 -0700, Steven Dake wrote:
> It seems pretty clear from the recent mailing list traffic that there is a
> critical flaw in shutdown, related in some way to Pacemaker and Corosync,
> that happens on a few people's opensuse systems.
-----Original Message-----
From: Steven Dake
Sent: 10.05.2010 23:38:01
To: "Alain.Moulle"
Subject: [Openais] plan for resolving corosync services unloading problem
blocking shutdown on opensuse
>We will begin analysis of the instrumentation results once we have a
>trace.
>
>I would re
It seems pretty clear from the recent mailing list traffic that there is a
critical flaw in shutdown, related in some way to Pacemaker and Corosync,
that happens on a few people's opensuse systems. It seems to reproduce only
on opensuse; however, we don't know if it is limited to this platform. Fi
Hey everyone. I'm somewhat new to Corosync, and over the past week I've
been trying to learn as much as I can. When I went home on Friday
everything with the cluster was fine, but when I came in this morning I
noticed a problem.
I have two nodes in my cluster, we'll call them "Pepsi" and "Co
As soon as I get it again ... because it is strange, I have not faced the
problem again since this morning! And besides, I'm sure that on Friday I was
in a case where the stop/cleanup (of a resource that failed on start) enabled
the corosync shutdown to complete, and as long as I had not cleaned up the f
On May 10, 2010, at 4:03 AM, Andrew Beekhof wrote:
> On Mon, May 10, 2010 at 8:31 AM, Alain.Moulle wrote:
>>
>> I meant "/etc/init.d/corosync stop" never returns.
>
> Ok. Can you show us the logs and "ps axf" please?
On Mon, May 10, 2010 at 8:31 AM, Alain.Moulle wrote:
>
> I meant "/etc/init.d/corosync stop" never returns.
Ok. Can you show us the logs and "ps axf" please?
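A sketch of how the requested data might be collected before replying (the log path is an assumption; corosync commonly logs to syslog, so adjust for your distribution):

```shell
# Capture the full process tree so the state of the hung
# "corosync stop" and its children is visible
ps axf > /tmp/ps-axf.txt 2>/dev/null || ps ax > /tmp/ps-axf.txt

# Grab recent syslog output (assumed path; adjust as needed)
tail -n 500 /var/log/messages > /tmp/corosync-syslog.txt 2>/dev/null || true
```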
___
Openais mailing list
Openais@lists.linux-foundation.org
https://lists.linux-foundation.org
On 08/05/10 01:02, Alan Jones wrote:
> I'd like to modify the quorum behavior to require 2 nodes to start the
> cluster but allow it to continue with only 1 node after a failure.
> It seemed that the two_node option used with the votequorum provider
> might provide what I'm looking for (corosync.co