Hi,
One of the problems I've run into, both personally and when working with
customers, is finding errors in CIBs. I see this kind of thing fairly
often on the mailing list too.
So, to help with this and maybe lighten the burden on the mailing list
and on Andrew, Dejan, and Lars, I wrote a command wh
Junko IKEDA wrote:
I found some rules like this:
When any of the following processes was killed, the system would reboot.
* ccm
* cib
* lrmd
* crmd
* pengine
* tengine
These processes would be restarted when they are killed.
* FIFO
* media (ex. write/read bcast)
* stonithd
* attrd
* mgmtd
* respawn (
Moderation was already removed.
Please see my other note.
--
Alan Robertson <[EMAIL PROTECTED]>
"Openness is the foundation and preservative of friendship... Let me
claim from you at all times your undisguised opinions." - William
Wilberforce
Because of the surprise timing of this announcement, right in the last
phases of a release, and at a time when I'm supposed to be on
vacation, I'm postponing discussion on this until at least Monday to
give me a chance to get testing back on track.
Although I did get some hints that this _mi
Hi, Andrew here, not a lovable children's character.
Apologies for the deception, but alas it has become necessary.
In addition to utterly misrepresenting me, Alan has taken away my (and
Lars') ability to respond.
Item #1 - I have not unilaterally left Linux-HA (at least not by my
own choi
> Last updated: Fri Dec 7 10:18:29 2007
> Current DC: NodeA (2a7021a1-ab44-403d-80a4-5ff9b4e24fcc)
> 2 Nodes configured.
> 1 Resources configured.
>
> Node: NodeA (2a7021a1-ab44-403d-80a4-5ff9b4e24fcc): standby
> Node: NodeB (296a344e-4ca5-4aae-be0b-7fc4473a7e05): online
>
I have a cluster of 2 nodes, NodeA and NodeB. I have the cluster IP mapped
to one of them and I want to move it to the other, as a monitored resource.
I run crm_resource -M -r rc_ip_addr -H NodeB when it runs on NodeA, and the
other way around when it runs on the other node. It only seems to work one wa
On Fri, 7 Dec 2007, Alan Robertson wrote:
Andrew's contributions to the Linux-HA community will be missed. I am sad
that he has unilaterally decided to leave Linux-HA and fork his code into
a separate project.
I have suspected that this was coming for a number of months, but as you
proba
I figured out this issue just after I composed this email.
The second node did not have the link in /etc/init.d for the LSB script.
All seems to be working now.
I am a little confused about how this works, though. When I start a
resource on one node, does it automatically run the stop of that
r
>>>have a look at the code in crm_mon, its not hugely complex.
[RH] It may not be rocket science once you understand what it is you're
looking at. I am still struggling with the concepts and terminology.
crm_mon reports this:
Last updated: Fri Dec 7 10:18:29 2007
Current DC: Node
Andrew's contributions to the Linux-HA community will be missed. I am
sad that he has unilaterally decided to leave Linux-HA and fork his code
into a separate project.
I have suspected that this was coming for a number of months, but as you
probably have guessed, Andrew won't reply to emai
Hi,
Programmatically, I am now able to retrieve the running nodes,
retrieve my resources and find on which node my resources are running.
I would also need to know which node I am currently running on. I
checked the different crm_xx utilities and I cannot find any way to do
that. The o
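A minimal sketch of one possible approach, assuming (as is the Heartbeat default) that cluster node names match the host's `uname -n`; the node list below is hypothetical and would really come from the CIB (e.g. via `cibadmin -Q`), as the poster is already doing:

```python
import os

def local_node_name():
    # Heartbeat node names default to the output of `uname -n`,
    # so the local nodename usually identifies "this" cluster node.
    return os.uname().nodename

# Hypothetical node list; in practice, parse it out of the CIB.
cluster_nodes = ["NodeA", "NodeB"]
me = local_node_name()
print("local node:", me, "- known to cluster:", me in cluster_nodes)
```

Comparing that name against the node list already retrieved programmatically identifies "this" node without any extra crm utility.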
On 2007-12-07T15:24:41, matilda matilda <[EMAIL PROTECTED]> wrote:
> Hi Andrew,
>
> can you give some explanation to us why this decision was made?
> What is the vision/idea behind that?
I'm not Andrew, but the primary motivator is that the CRM will in the
future be a dual-stacked effort, and th
Hi Andrew,
can you give some explanation to us why this decision was made?
What is the vision/idea behind that?
Best regards
Andreas Mock
>>> Andrew Beekhof <[EMAIL PROTECTED]> 07.12.2007 14:14 >>>
After much careful consideration, it is increasingly clear to some of
us that the CRM needs to
Because the server pairs are going to end up in different buildings.
Also, because of customer-enforced firewalling requirements, that's not
possible.
Which leads me to another thing I'm going to be playing with.
Quorum.
With 4 nodes, which are going to be split across different buildings I
need t
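The majority rule behind the quorum concern can be sketched in a few lines; this assumes plain one-vote-per-node counting, with no quorum server or tie-breaker in play:

```python
def has_quorum(active: int, total: int) -> bool:
    # Simple majority: a partition has quorum only if it holds
    # strictly more than half of the configured votes.
    return active > total // 2

# A 4-node cluster split 2-2 across two buildings: neither half
# holds a majority, so neither side has quorum.
print(has_quorum(2, 4))  # False for both halves of an even split
print(has_quorum(3, 4))  # True once one side holds 3 of 4 votes
```

This is why an even split across two buildings is awkward: a clean 2-2 partition leaves both sides without quorum unless some tie-breaking mechanism is added.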
Hi,
On Fri, Dec 07, 2007 at 08:34:29AM +1100, Amos Shapira wrote:
> On 06/12/2007, Dejan Muhamedagic <[EMAIL PROTECTED]> wrote:
> > Hi,
> >
> > On Thu, Dec 06, 2007 at 05:10:28PM +1100, Amos Shapira wrote:
> > > Would you be interested in the tiny diff's I had to make to the .spec
> > > file? Are
Hi,
On Fri, Dec 07, 2007 at 08:44:46AM +1100, Amos Shapira wrote:
> On 05/12/2007, Dejan Muhamedagic <[EMAIL PROTECTED]> wrote:
> > On Fri, Nov 30, 2007 at 05:16:38PM +1100, Amos Shapira wrote:
> > > On 30/11/2007, Dejan Muhamedagic <[EMAIL PROTECTED]> wrote:
> > > >
> > > > Hi,
> > > >
> > > > On
After much careful consideration, it is increasingly clear to some of
us that the CRM needs to be a separate project rather than being
bundled with Heartbeat.
As such, it has been decided that 2.1.3 will be the last release of a
combined Heartbeat + CRM.
After this point, I will extract
On Dec 7, 2007 10:41 AM, Andrew Beekhof <[EMAIL PROTECTED]> wrote:
>
> On Dec 7, 2007, at 10:03 AM, China wrote:
>
> > Ok, but I don't understand why with
> >
> > pingd: 500
> > PC_A: 100
> > resource_stickiness: 100 (3 resources make 300)
> >
> > the resource failback. The expressions that you gi
Thanks a lot. It worked very well. :)
2007/9/13, Philip Gwyn <[EMAIL PROTECTED]>:
>
>
> On 11-Sep-2007 Departamento Técnico de El Norte de Castilla wrote:
> > Yes, but I heard about some magical key combinations that restart
> systems
> > even in a kernel panic (Something like Alt + Sys Req + some
Hi,
The problem is reported in the bugzilla #1662.
Please see my comment and a patch at comment #6 and #8.
http://developerbugs.linux-foundation.org/show_bug.cgi?id=1662#c6
Thanks,
Keisuke MORI
Dejan Muhamedagic <[EMAIL PROTECTED]> writes:
> Hi,
>
> On Thu, Dec 06, 2007 at 10:54:36AM +1100,
On Dec 7, 2007, at 4:55 AM, Jeff Humes wrote:
I have created a simple heartbeat cluster:
2 Centos 4.5 nodes
HB version:
heartbeat-pils-2.1.2-3.el4.centos
heartbeat-stonith-2.1.2-3.el4.centos
heartbeat-gui-2.1.2-3.el4.centos
heartbeat-2.1.2-3.el4.centos
Here is the issue I see, and I don't k
> > I found some rules like this:
> > When any of the following processes was killed, the system would reboot.
> > * ccm
> > * cib
> > * lrmd
> > * crmd
> > * pengine
> > * tengine
> >
> > These processes would be restarted when they are killed.
> > * FIFO
> > * media (ex. write/read bcast)
> > * stonit
On Dec 7, 2007, at 10:03 AM, China wrote:
Ok, but I don't understand why with
pingd: 500
PC_A: 100
resource_stickiness: 100 (3 resources make 300)
the resource fails back. The expressions that you gave me return the
same
results as with:
pingd: 1000
PC_A: 100
resource_stickiness: 100 (3
Ok, but I don't understand why with
pingd: 500
PC_A: 100
resource_stickiness: 100 (3 resources make 300)
the resource fails back. The expressions that you gave me return the same
results as with:
pingd: 1000
PC_A: 100
resource_stickiness: 100 (3 resources make 300)
but the behavior is differen
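As a back-of-the-envelope illustration only (the CRM's policy engine combines scores per constraint, so this naive sum is not Heartbeat's actual algorithm, and the function and parameter names are made up), the thread's numbers can be added up like this:

```python
def naive_failback(pingd_score, node_pref, stickiness, n_resources):
    # Score for moving to the preferred node, vs. the accumulated
    # stickiness of the resources currently placed elsewhere.
    move_score = pingd_score + node_pref
    stay_score = stickiness * n_resources
    return move_score > stay_score

# With pingd 500: 500 + 100 = 600 > 300, and with pingd 1000:
# 1000 + 100 = 1100 > 300 -- both comparisons come out the same,
# which is why the differing observed behavior is puzzling here.
print(naive_failback(500, 100, 100, 3), naive_failback(1000, 100, 100, 3))
```

Under this naive model both settings favor failback, so whatever causes the behavior to differ must come from how the policy engine actually combines the scores, not from the raw totals.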