Aaron Bush wrote:
Here are some comments from Dejan back on November 5th, 2008, when I had a 
similar question about iLO.  They may help shed some light on this.  You can look 
in the archives around the same time and maybe find some more good info.
OK, thank you.

Now I am trying to deploy something similar and I have the following error:

WARN: start ilo_burns failed, because its hostlist is empty

The message is very clear, but I have the following line in the stonith config:
<nvpair id="id-hostlist-burns" name="hostlist" value="node1"/>

What could be the reason for this error?
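
For comparison, a complete external/riloe primitive normally looks roughly
like the sketch below (the ids, addresses and credentials here are only
illustrative); the hostlist nvpair has to live inside the resource's own
instance_attributes for the plugin to receive it:

<primitive id="ilo_burns" class="stonith" type="external/riloe">
  <instance_attributes id="ilo_burns-params">
    <!-- name of the cluster node this iLO can shoot -->
    <nvpair id="id-hostlist-burns" name="hostlist" value="node1"/>
    <!-- address and credentials of the iLO interface (placeholders) -->
    <nvpair id="ilo_burns-hostname" name="ilo_hostname" value="10.0.0.10"/>
    <nvpair id="ilo_burns-user" name="ilo_user" value="Administrator"/>
    <nvpair id="ilo_burns-password" name="ilo_password" value="secret"/>
  </instance_attributes>
  <operations>
    <op id="ilo_burns-monitor" name="monitor" interval="1800s" timeout="60s"/>
  </operations>
</primitive>

(On older CRM versions the nvpairs go inside an additional <attributes>
element under instance_attributes.)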

-ab

Note that handling of clones is done on a different level, i.e.
by the CRM which decides where to run resources. The idea of
cloned stonith resources was to have "more" assurance that one of
the nodes which run the stonith resource may shoot the offending
node. Obviously, this may make sense only for clusters with more
than two nodes. On the other hand, if your stonith devices are
reliable and regularly monitored, I don't see any need for
shooting a node from more than one node. So, with the lights-out
devices which are capable of managing only their own host (iLO, IBM
RSA, DRAC), I'd suggest having a normal (non-cloned) stonith
resource with a -INF constraint to prevent it from running on the
node it can shoot. This kind of power management setup seems to
be very popular and probably prevails today.
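
As a rough sketch (resource and node names are just placeholders), the
constraint part looks like this in the CIB; it keeps the resource that can
shoot node1 away from node1 itself:

<rsc_location id="loc-ilo-node1" rsc="ilo-node1" score="-INFINITY" node="node1"/>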

On larger clusters with stonith devices which may shoot a set of
nodes, a single cloned resource should suffice.
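
A minimal sketch of that layout, using external/rackpdu only as an example of
a plugin that can fence several nodes (substitute whatever matches your power
switch, along with its real parameters):

<clone id="fencing-clone">
  <primitive id="fencing" class="stonith" type="external/rackpdu">
    <instance_attributes id="fencing-params">
      <!-- plugin-specific parameters (device address, credentials,
           list of managed nodes, ...) go here as nvpairs -->
    </instance_attributes>
  </primitive>
</clone>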

Does this help? A bit at least?

-----Original Message-----
From: Adrian Chapela [mailto:achapela.rexist...@gmail.com]
Sent: Thursday, March 12, 2009 12:14 PM
To: pacemaker@oss.clusterlabs.org
Subject: Re: [Pacemaker] iLO2 stonith device

Dejan Muhamedagic wrote:
Hi,

On Thu, Mar 12, 2009 at 02:32:59PM +0100, Andrew Beekhof wrote:
On Wed, Mar 11, 2009 at 17:13, Adrian Chapela
<achapela.rexist...@gmail.com> wrote:
One more time.

I have decided to use external/riloe as my stonith device, but I have some
doubts. My system will be a cluster of two nodes.

First, do I need to configure the riloe stonith as a clone?
Not required, but it's an option that may (or may not) simplify the
configuration.
Not much in a two-node configuration. Instead of a clone, there
is a primitive and a location constraint.
OK, if I understood correctly, I need two primitives (or clones if I want), one for each device. Of course, I need to run the stonith device for nodeA on nodeB, is this true?
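
Something like this is what I have in mind (all ids, node names and addresses
below are just placeholders): the resource that can shoot nodeA is kept off
nodeA, and the mirror image for nodeB:

<primitive id="ilo-nodeA" class="stonith" type="external/riloe">
  <instance_attributes id="ilo-nodeA-params">
    <nvpair id="ilo-nodeA-hostlist" name="hostlist" value="nodeA"/>
    <nvpair id="ilo-nodeA-hostname" name="ilo_hostname" value="10.0.0.11"/>
    <!-- plus ilo_user / ilo_password -->
  </instance_attributes>
</primitive>
<rsc_location id="loc-ilo-nodeA" rsc="ilo-nodeA" score="-INFINITY" node="nodeA"/>

<primitive id="ilo-nodeB" class="stonith" type="external/riloe">
  <instance_attributes id="ilo-nodeB-params">
    <nvpair id="ilo-nodeB-hostlist" name="hostlist" value="nodeB"/>
    <nvpair id="ilo-nodeB-hostname" name="ilo_hostname" value="10.0.0.12"/>
    <!-- plus ilo_user / ilo_password -->
  </instance_attributes>
</primitive>
<rsc_location id="loc-ilo-nodeB" rsc="ilo-nodeB" score="-INFINITY" node="nodeB"/>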

Also, I am not sure about this kind of stonith, because we need a normal network interface (better if it is redundant, e.g. bonded) to reach the iLO interfaces, but each server has only one iLO device. If you have two switches to achieve high availability, you would need two devices... Another option could be two switches, one for each Ethernet card on each server (two cards per server), and then connect one iLO to one switch and the other iLO to the other switch. What do you think about this solution? Could it be reliable?
Second, this stonith device has some configuration parameters: hostlist,
ilo_hostname, ilo_user, ilo_password, ilo_can_reset, ilo_protocol,
ilo_powerdown_method. All of the examples I have seen have a single node in
hostlist. Is this a list or a hostname? Is ilo_hostname the hostname used to
access the iLO device? Could I have only one stonith clone for the two nodes?
You'd have to look at the code.
I don't know it personally, but one would hope that with a name like
hostlist it should support a list.
ilo_hostname is the IP address of the iLO device.
The last question is about the license of the iLO2 card. Do I need a license
to use this card as a power-off device? I think not, because riloe accesses
iLO2 over HTTPS, but I have downloaded a 60-day trial license and now I can't
test that. Does anyone have the answer?
-ENOIDEA
Didn't know one needed a license for an iLO device.
I have tested this today with another server without an iLO Advanced license, and I can shut down the server. The license is only needed for the remote console, which is like a remote KVM. It is very nice, but it isn't needed for Linux-HA.
Thanks,

Dejan

_______________________________________________
Pacemaker mailing list
Pacemaker@oss.clusterlabs.org
http://oss.clusterlabs.org/mailman/listinfo/pacemaker
