Hi, currently with Xen 3.2 on SLES10, managed domains ('xm new' instead of
'xm create') only use the configuration file the first time you add
them; later, any 'xm block-attach' is stored in xend's own database (not
sure about the nomenclature here).
Checking the Xen agent I see that the configuration file i
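For illustration, a minimal sketch of that workflow (domain name and
device path are made up, not from this mail):

  # Import the config file once; xend stores the domain in its own database
  xm new /etc/xen/vm/mydomU
  xm start mydomU

  # Attach an extra disk at runtime; the change is recorded in xend's
  # managed configuration, not written back to /etc/xen/vm/mydomU
  xm block-attach mydomU phy:/dev/vg0/extra xvdb w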
On 2008-07-09T12:44:16, Andrew Beekhof <[EMAIL PROTECTED]> wrote:
> Assuming it comes out, 2.1.4 won't include all the
> fixes/enhancements from Pacemaker 0.6.
But 2.1.4 should still have all relevant bugfixes from Pacemaker, though
indeed not all of the enhancements.
Regards,
Lars
--
All,
For our project I have developed a number of tools that allow us to
regularly build, install, and test our applications in a fully automated
way. I am now looking at automating changes to the heartbeat XML for
adding/updating/deleting resources, constraints, etc. I was wondering if
anyone had
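One way to script this against heartbeat 2.x (a sketch; the section names
are real cibadmin object types, but the files and constraint ids are made
up for illustration):

  # Dump the current resources section of the CIB
  cibadmin -Q -o resources

  # Add a resource from an XML snippet kept under version control
  cibadmin -C -o resources -x new-resource.xml

  # Replace an existing constraint with an updated definition
  cibadmin -R -o constraints -x updated-constraint.xml

  # Delete a constraint, matched by id
  cibadmin -D -o constraints -X '<rsc_location id="loc_apache"/>'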
I used version 2.0.6 because I wanted to run hbagent without the CRM, and
a patch for that was available in that version.
Now we have finally decided to update our cluster configuration to 2.1.3
with the CRM.
Thank you very much for your help, especially for SNMP configuration.
Anne
-Original Message-
Hi,
On Fri, Jun 27, 2008 at 10:14:56PM +0200, Andreas Mock wrote:
> Hi all,
>
> > -Original Message-
> > From: "Dejan Muhamedagic" <[EMAIL PROTECTED]>
> > Sent: 27.06.08 15:10:28
> > To: General Linux-HA mailing list
> > Subject: Re: [Linux-HA] problem with stonith external/ibm
Hello again!
I am doing new tests of a master/slave cluster. I have solved my MySQL
server script problems and I think it is OK now (if you want to try it:
http://code.adrianchapela.net/heartbeat/mysql_slave_master).
Now my problems are different ones: the "Unnecessary shuffling of
master/slave reso
On Jul 11, 2008, at 10:36 AM, Thibaut Perrin wrote:
Or maybe with resource groups? :-/
resource groups are a syntactic shortcut for resources with a specific
colocation and ordering relationship.
either will work in your case :)
see "placing resources relative to other resources" in the
On Jul 11, 2008, at 9:29 AM, Junko IKEDA wrote:
Hi,
We are now trying to show a good performance report to the potential
customer.
Our customer's requirements are:
* There are more than 100 resources on one node.
* 100 resources are included in one group, so they would start/stop
sequentially
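For scale tests like this, such a group can be generated mechanically; a
sketch using the ocf:heartbeat:Dummy no-op agent (ids made up):

  # Emit a CIB group with 100 dummy resources for start/stop timing
  echo '<group id="grp_perf">'
  for i in $(seq 1 100); do
    echo "  <primitive id=\"dummy_$i\" class=\"ocf\" provider=\"heartbeat\" type=\"Dummy\"/>"
  done
  echo '</group>'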
On Fri, Jul 11, 2008 at 10:29:55AM +0200, Thibaut Perrin wrote:
> I have one more question. My guess would be to use colocation, but
> I'm not sure:
>
> I have 2 nodes on my cluster, and for the moment I have 1 IP
> address resource and 1 apache resource.
>
> I have set the stickiness scores to
Or maybe with resource groups? :-/
I'm a bit lost as you can see ^^
Thanks for any help
Thibaut
Thibaut Perrin wrote:
Hi,
Thanks again for your help :)
I have one more question. My guess would be to use colocation, but I'm
not sure:
I have 2 nodes on my cluster, and for the moment I hav
Hi,
Thanks again for your help :)
I have one more question. My guess would be to use colocation, but I'm
not sure:
I have 2 nodes on my cluster, and for the moment I have 1 IP address
resource and 1 apache resource.
I have set the stickiness scores to 100.
When I standby the default node
On Jul 10, 2008, at 5:44 PM, Raghuram Bondalapati wrote:
I tried that, but it did not work.
oh, right, sorry.
crm_failcount -D is what you need
--Raghu
On 7/9/08, Andrew Beekhof <[EMAIL PROTECTED]> wrote:
On Wed, Jul 9, 2008 at 19:47, Raghuram Bondalapati
<[EMAIL PROTECTED]> wrote:
An
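A hedged example of that failcount reset (the resource and node names
here are made up):

  # Show the current failcount for the resource on one node
  crm_failcount -G -r my_resource -U node1

  # Delete (reset) it so the resource may run there again
  crm_failcount -D -r my_resource -U node1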
> > Hi,
> >
> > We are now trying to show a good performance report to the potential
> > customer.
> > Our customer's requirements are:
> > * There are more than 100 resources on one node.
> > * 100 resources are included in one group, so they would start/stop
> > sequentially.
> > * Fail over for
On Jul 11, 2008, at 1:23 AM, Steve Wray wrote:
I am wondering if the Debian Etch nfs init scripts are LSB
compliant...
After some experimenting I was shocked to find that /etc/init.d/nfs-
kernel-server stop returned status 0 yet did not stop anything at all.
then that's a no :)
i document
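The usual quick check for this (a sketch, using the init script named in
the mail) exercises the LSB exit-code rules:

  # A compliant script must report what it actually did:
  /etc/init.d/nfs-kernel-server start;  echo "start: $?"   # expect 0
  /etc/init.d/nfs-kernel-server status; echo "status: $?"  # expect 0 while running
  /etc/init.d/nfs-kernel-server stop;   echo "stop: $?"    # expect 0, daemons really gone
  /etc/init.d/nfs-kernel-server status; echo "status: $?"  # expect 3 once stopped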
On Jul 10, 2008, at 2:38 PM, Michael Alger wrote:
On Thu, Jul 10, 2008 at 01:22:16PM +0200, Lukas Pecha wrote:
I would like to know whether I can limit Heartbeat monitor operations
to run only on some nodes in the cluster.
I think heartbeat will run the monitor script regardless, because it
wants to
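For context, monitor operations are declared on the resource itself, so
they follow it to whichever node runs it; a heartbeat 2.x sketch (id and
intervals invented):

  <primitive id="ip_1" class="ocf" provider="heartbeat" type="IPaddr2">
    <operations>
      <op id="ip_1_mon" name="monitor" interval="10s" timeout="20s"/>
    </operations>
  </primitive>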
On Jul 10, 2008, at 4:30 PM, Thibaut Perrin wrote:
Hi
Thanks for your help ! :)
I'm now using the hb_gui, and that gave me a much better
understanding of how the resources and constraints work! :)
Now I have a question :
Is it possible to turn auto_failback off using the hb_gui?
a
Here is your problem:
<nvpair id="..." name="default-resource-failure-stickiness" value="0"/>
Grab
http://clusterlabs.org/mw/Image:Configuration_Explained.pdf
And look for the section on "Migration due to failure"
On Jul 11, 2008, at 7:11 AM, Max Deputter wrote:
Hi Linux-HA guru, I'm running heartbeat 2.1.
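A hedged sketch of changing that setting from the shell (the value is an
example; choose one to match your failover policy):

  # Let resources move away from a node after failures instead of staying
  crm_attribute -t crm_config -n default-resource-failure-stickiness -v -INFINITY

  # Verify the stored value
  crm_attribute -t crm_config -n default-resource-failure-stickiness -G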