) complete
cib[6027]: 2009/08/14_13:16:44 info: cib_stats: Processed 32 operations (312.00us average, 0% utilization) in the last 10min
The following is an extract from the ha-log file. It starts up the instance,
but then somehow detects that the instance is not active and shuts down DB2
again.
-
The "count -ge 5" test checks for a number of db2 processes greater than or
equal to 5. Set the value to 3 and you will be set :)
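The check above can be sketched as a small shell function. This is an assumption about how the agent's monitor logic works, not the agent's actual code; the real agent derives the count from db2_local_ps.

```shell
#!/bin/sh
# Sketch (assumed logic) of the db2 resource agent's process-count check:
# the instance is considered active only when at least $threshold
# db2 processes are running.
db2_active() {
  # $1 = number of db2 processes observed, e.g. from: db2_local_ps | wc -l
  threshold=3   # lowered from the agent's original 5, as suggested above
  [ "$1" -ge "$threshold" ]
}

db2_active 4 && echo "active" || echo "inactive"   # prints "active"
db2_active 2 && echo "active" || echo "inactive"   # prints "inactive"
```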
Thanks
On Fri, Aug 14, 2009 at 1:54 PM, Dejan Muhamedagic wrote:
> Hi,
>
> On Fri, Aug 14, 2009 at 01:32:22PM +0200, Timothy Carr wrote:
> > Hi All,
>
root      7075  7070  0 15:55 pts/0    00:00:00 db2ckpwd 0
That's the output from db2_local_ps.
On Fri, Aug 14, 2009 at 3:29 PM, Dejan Muhamedagic wrote:
> Hi,
>
> On Fri, Aug 14, 2009 at 02:42:32PM +0200, Timothy Carr wrote:
> > Heh,
> >
> > thanks fo
wrote:
> Hi,
>
> On Fri, Aug 14, 2009 at 03:56:07PM +0200, Timothy Carr wrote:
> > Hi
> >
> > Node 0
> > UID        PID   PPID  C STIME TTY      TIME     CMD
> > db2poc    7072  7070  6 15:55 pts/0    00:00:00 db2sysc 0
> >
r migrated resources / cleanup
resources and then it loads fine.
Another issue I am having: when a node shuts down, the resource does not
migrate automatically to another node. Any reason for this? Am I supposed
to enable something in order to achieve this functionality?
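One possible cause, assumed here rather than confirmed in the thread: a two-node cluster always loses quorum when one node goes down, and the default no-quorum-policy then blocks failover. A sketch of relaxing it, with command syntax as used in Heartbeat 2.x (check crm_attribute(8) on your version):

```shell
# Tell the CRM to keep managing resources even without quorum --
# a two-node cluster cannot retain quorum once one node is down.
crm_attribute -t crm_config -n no-quorum-policy -v ignore

# Read the setting back from the CIB to verify it was stored.
crm_attribute -t crm_config -n no-quorum-policy -G
```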
Thanks
--
Ti
Hi All,
Hope everyone is well.
I am having a problem with my test cluster: it keeps starting Linux-HA
without quorum. I've cleared all the CIB information, but the problem
persists.
Any help would be appreciated. I am using the hb_gui
Thanks
--
Timothy Carr
Technical Speci
Hi
It's a 2-node test cluster running HA 2.1.4 on SLES 10 SP2. I am not sure
what you mean by the "expected-votes attribute".
Tim
On Tue, Aug 25, 2009 at 10:42 AM, Michael Schwartzkopff
wrote:
> Am Dienstag, 25. August 2009 10:27:40 schrieb Timothy Carr:
> > Hi All,
> >
t. Working now.
Thanks
On Tue, Aug 25, 2009 at 10:49 AM, Timothy Carr
wrote:
> Hi
>
> It's a 2-node test cluster running HA 2.1.4 on SLES 10 SP2. I am not sure
> what you mean by the "expected-votes attribute".
>
> Tim
>
>
>
> On Tue, Aug 25, 2009 at 10:42 AM, Mi
B to A
>
> try to set "Default Resource Stickiness" to INFINITY.
> The application will then stay on one node until that node fails.
>
> Greetings
> Kay
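Kay's suggestion above can be applied from the command line. This is a sketch; the attribute name is assumed from Heartbeat 2.1.4 conventions, so verify it against your version's documentation:

```shell
# Set the cluster-wide default resource stickiness to INFINITY so a
# resource stays on the node where it is running until that node fails.
crm_attribute -t crm_config -n default-resource-stickiness -v INFINITY
```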
> ___
> Linux-HA mailing list
> Linux-HA@lists.linux-ha.org
> http://lists.linux-ha.org/mailman/listinfo/linux-ha
> See also: http://linux-ha.org/ReportingProblems
starts up
the resource again ..
Any ideas why this is happening to the group, or am I misconfiguring
something?
Thanks
--
Timothy Carr
Technical Specialist
University of Cape Town
Cell: +27834572568
Fax: +27865472190
Gtalk: timothy.c...@foxtrail.co.za
Skype: timothy.carr.foxtrail
It stays on the same node ..
tim
On Wed, Aug 26, 2009 at 3:21 PM, Andrew Beekhof wrote:
> On Wed, Aug 26, 2009 at 2:02 PM, Timothy
> Carr wrote:
> > Hi All,
> >
> > I have configured the resource stickiness attribute for a single resource
> > which has not been a
rce agents.
This clears the "multi-running error" I was getting, and the resource
now sticks to the failover host.
Tim
On Wed, Aug 26, 2009 at 3:29 PM, Andrew Beekhof wrote:
> On Wed, Aug 26, 2009 at 3:24 PM, Timothy
> Carr wrote:
> > It stays on the same node ..
>
>
> --
> Regards,
>
> Ahmed Munir
--
Timothy Carr
34.56.79
>
> --
> Shadus
17.328.2921| 7:
> 617.515.2491 | *: richard.marsh...@arbella.com
>
>
> -----Original Message-----
> From: linux-ha-boun...@lists.linux-ha.org
> [mailto:linux-ha-boun...@lists.linux-ha.org] On Behalf Of Timothy Carr
> Sent: Thursday, October 01, 2009 9:03 AM
> To: General Linux-HA mail
e out more
> > tomorrow. There should be a way to force the removal of the old node
> > names (ideas anyone?)
> I know a bit more now. The cluster thinks it has 4 nodes instead of 3. I
> see this in my logs:
> ccm: [5131]: debug: total_node_count=4, total_quorum_votes=400
> But there are really only 3 nod
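For the stale fourth node above, one way to force its removal on Heartbeat 2.x might look like the following. This is a sketch: "old-node" is a placeholder name, and the exact cibadmin flags should be checked against cibadmin(8) for your version before running anything.

```shell
# Remove the stale node from Heartbeat's membership layer...
hb_delnode old-node

# ...then delete its entry from the CIB's nodes section so the CRM
# stops counting it toward quorum.
cibadmin -D -o nodes -X '<node uname="old-node"/>'
```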
inal
> message.