I will be out of the office starting 09/19/2008 and will not return until
09/23/2008.
I will respond to your message when I return.
If your request requires immediate attention, please contact the MVS
Technical Support Hotline
at 1-866-866-4488 x12000
I remember that some time ago someone asked for a cheap STONITH
device; I just found something like that: the "IP Power 9258T Network AC
Power Controller", in case anyone is interested.
Regards,
Ciro
___
Linux-HA mailing list
Linux-HA@lists.linux-ha.org
http:/
Hi all,
I think I have followed the steps described in
"http://www.linux-ha.org/v2/faq/forced_failover" to set the resource-stickiness
appropriately so that my resource could migrate to another node when it
failed. But it just can't migrate to another node when it fails now.
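For reference, the kind of configuration that FAQ describes looks roughly like
this in a v2 CIB (the resource name, ids and values here are illustrative, not
taken from my actual cib.xml): a positive resource_stickiness keeps the
resource where it is while healthy, and a resource_failure_stickiness of
-INFINITY pushes it off a node once it has failed there.

```xml
<!-- Hypothetical heartbeat v2 CIB fragment -->
<primitive id="my_resource" class="ocf" provider="heartbeat" type="IPaddr">
  <meta_attributes id="my_resource_meta">
    <attributes>
      <nvpair id="ms-1" name="resource_stickiness" value="100"/>
      <nvpair id="ms-2" name="resource_failure_stickiness" value="-INFINITY"/>
    </attributes>
  </meta_attributes>
</primitive>
```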
Hi,
I am on the way to setup a 2 node xen cluster using drbd.
To automatically control the role (Secondary -> Secondary, Primary->
Secondary, Secondary->Primary) of my drbd resource, it seems (though I'm not
sure) that I need to use
disk = [ 'drbd1:guest01-disk,xvda,w' ]
instead of
disk = [ "phy:/d
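For context, a sketch of the two disk stanzas side by side (Xen domU configs
are Python syntax; the drbd: form only works if DRBD's block-drbd helper
script is installed in Xen's scripts directory, and the drbdX device name
below is just a placeholder):

```python
# Plain physical device: Xen attaches the /dev/drbdX block device directly
# and knows nothing about DRBD roles, so something else must promote the
# resource to Primary before the guest starts:
# disk = [ 'phy:/dev/drbdX,xvda,w' ]

# DRBD-aware form: the block-drbd helper promotes the named DRBD resource
# to Primary when the domU starts and demotes it again on shutdown.
disk = [ 'drbd:guest01-disk,xvda,w' ]
```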
I'm running 2.1.0 in a two-node cluster. I get this warning for no
apparent reason (a false positive, I suspect). It is not reproducible but
intermittent. Once it occurs, restarting the resource does not clear
it; it shows up the next time pengine runs.
Example:
Sep 16 07:12:40 8213 node0 peng
Fortunately the DRBD guys were able to help me with this. I had a
wrong timeout set in the drbd-peer-outdater.
Thanks for the efforts you put in trying to help me.
Christoph
2008/9/18 Andrew Beekhof <[EMAIL PROTECTED]>:
> On Thu, Sep 18, 2008 at 10:34, Christoph Eßer <[EMAIL PROTECTED]> wrote:
>
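In case it helps anyone searching the archives later: the timeout in question
lives in drbd.conf's handlers section, where dopd's outdater helper is hooked
in. A sketch (resource name, path and timeout value are illustrative, not
taken from Christoph's setup):

```
resource r0 {
  handlers {
    # drbd-peer-outdater is dopd's fencing helper; -t sets how long it
    # waits for an answer from the peer. The value here is only an
    # example -- too small a value causes spurious failures.
    outdate-peer "/usr/lib/heartbeat/drbd-peer-outdater -t 5";
  }
}
```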
Hi all
I created an ocf resource agent named ResourceAgent.
I added it in the Python script, but I did not give any values such as
parameters (because I don't have any parameters to set) or time.
When I executed this Python script, it gave me an error.
I have added my ResourceAgent like this
Hi,
On Thu, Sep 18, 2008 at 06:22:49PM +0300, [EMAIL PROTECTED] wrote:
>
>
>
> Hello everyone,
> I have installed heartbeat on my two-computer cluster successfully. I have
> tested the app and it is working perfectly, but I have a problem.
>
> I want to build a back-up system. When the master computer f
[EMAIL PROTECTED] wrote:
Hello everyone,
I have installed heartbeat on my two-computer cluster successfully. I have
tested the app and it is working perfectly, but I have a problem.
I want to build a back-up system. When the master computer fails, the slave
must gain control (which heartbeat doe
Hello everyone,
I have installed heartbeat on my two-computer cluster successfully. I have
tested the app and it is working perfectly, but I have a problem.
I want to build a back-up system. When the master computer fails, the slave
must gain control (which heartbeat does). In addition to this, after som
On Thursday 18 September 2008 15:38:50 Dejan Muhamedagic wrote:
> Right, that should probably go into the FAQ, since it's typical
> for v1 configurations to try to start the resources again on
> membership change.
>
> A resource agent must be able to handle double starts (or stops),
> i.e. a start
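A minimal sketch of what handling double starts and stops means in practice
for an OCF-style agent (the pidfile path and the sleep stand-in for a real
daemon are invented for illustration): a start when the service is already
running, or a stop when it is already stopped, must report success rather
than an error.

```shell
#!/bin/sh
# Idempotent start/stop sketch for an OCF-style resource agent.
OCF_SUCCESS=0
PIDFILE="${PIDFILE:-/tmp/demo-ra.$$.pid}"

is_running() {
    # True if the pidfile exists and names a live process.
    [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null
}

ra_start() {
    if is_running; then
        # Double start: already running counts as success, not an error.
        return $OCF_SUCCESS
    fi
    sleep 300 &                 # stand-in for the real daemon
    echo $! > "$PIDFILE"
    return $OCF_SUCCESS
}

ra_stop() {
    if ! is_running; then
        # Double stop: already stopped counts as success, not an error.
        rm -f "$PIDFILE"
        return $OCF_SUCCESS
    fi
    kill "$(cat "$PIDFILE")" && rm -f "$PIDFILE"
    return $OCF_SUCCESS
}

case "$1" in
    start) ra_start ;;
    stop)  ra_stop ;;
esac
```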
On Thu, Sep 18, 2008 at 01:31:44PM +0200, Wayne Gemmell wrote:
> On Wednesday 17 September 2008 16:12:49 Dejan Muhamedagic wrote:
> > There's no good reason for that change to make any difference.
> > Otherwise, in a two-node cluster, if only one node is running
> > it should run the resources. The
On Wednesday 17 September 2008 16:12:49 Dejan Muhamedagic wrote:
> There's no good reason for that change to make any difference.
> Otherwise, in a two-node cluster, if only one node is running
> it should run the resources. The answer is hopefully in the logs.
Thanks, I've attached the relevant se
On Thu, Sep 18, 2008 at 10:34, Christoph Eßer <[EMAIL PROTECTED]> wrote:
> 2008/9/17 Andrew Beekhof <[EMAIL PROTECTED]>:
>> On Wed, Sep 17, 2008 at 16:06, Christoph Eßer <[EMAIL PROTECTED]> wrote:
>>> I don't see why it should not be. cat /proc/drbd indicates the DRBD
>>> resource is working correc
2008/9/17 Andrew Beekhof <[EMAIL PROTECTED]>:
> On Wed, Sep 17, 2008 at 16:06, Christoph Eßer <[EMAIL PROTECTED]> wrote:
>> I don't see why it should not be. cat /proc/drbd indicates the DRBD
>> resource is working correctly on both nodes.
>
> Call me crazy, but I thought it wise to ask since drbd
Hi,
On Thu, Sep 18, 2008 at 10:44:35AM +0800, heartbeat wrote:
> Hi all,
> A part of my cib.xml follows:
> provider="heartbeat">
Hi,
On Wed, Sep 17, 2008 at 02:02:56PM -0400, Jill Schaumloeffel wrote:
> Thank you for your helpful answers, Dejan.
>
> I wondered about the STONITH resource. I ran across a How-To on that.
>
> For starting the Scalix application, if I understand correctly,
> there is no resource script to c
On Thursday, 18 September 2008 07:03, lakshmipadmaja maddali wrote:
> Hi all,
> How can we monitor the status of resources that are present in a virtual
> machine?
> Will that be possible?
See the monitor_scripts parameter in the Xen RA. No such hooks exist for
other virtualization techniques. You wou
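A sketch of the kind of hook monitor_scripts expects: each listed script is
executed during the RA's monitor action and must exit 0 while the
guest-internal service is healthy (the function name and probe command below
are invented; a real check would ping or ssh into the domU):

```shell
#!/bin/sh
# Hypothetical monitor_scripts hook for the Xen RA.
# check_guest runs a probe command; exit status 0 means the service
# inside the virtual machine is considered healthy, anything else is
# reported by the RA as a monitor failure.
check_guest() {
    # $1 is the probe, e.g. "ping -c1 -w2 192.0.2.10" or an ssh test.
    if $1 >/dev/null 2>&1; then
        return 0    # healthy
    else
        return 1    # failure
    fi
}

check_guest "${1:-true}"
```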