On Wed, March 11, 2009 18:19, Ethan Bannister wrote:
>
> Hello,
>
>
> I have been working on a complete fail-over SAN for some time now and
> almost have everything working the way it should. However, there have
> been some drawbacks. I am using the most up-to-date version of Heartbeat
> and Pa
On Wednesday 11 March 2009 5:17 pm, you wrote:
> You got it. Both servers serve the same site. If one dies the other
> takes over for it. To answer your question about database data, use
> the same database server from both nodes. That database server could
> be yet another failover cluster.
>
> Th
You got it. Both servers serve the same site. If one dies the other
takes over for it. To answer your question about database data, use
the same database server from both nodes. That database server could
be yet another failover cluster.
The most popular way to do failover database (mysql, postgre
On Wednesday 11 March 2009 2:50 pm, you wrote:
> Yes, you should be able to see the web page.
>
> The first thing you want to do is make sure that httpd is configured
> properly without HA. HA just handles the starting and stopping.
>
> With the cluster completely down, start httpd on one of the no
On Wed, Mar 11, 2009 at 19:39, Pavel Georgiev wrote:
>> Also yes :)
>> You can either grab the latest sources or wait for 1.0.3
>
> Any estimate of when that will be out
later this month
> (I'm guessing the CentOS RPMs will
> be available shortly after the release)?
same time as everyone else :)
Andrew Beekhof wrote:
On Wed, Mar 11, 2009 at 18:41, Jerome Yanga wrote:
I have tried DRBD 8.3.0 but the DRBD OCF Agent of Pacemaker 1.0.2-11.1 does not
seem to work well with it.
technically speaking, the agent comes from heartbeat, not pacemaker;
maybe there is an update...
Here are
On Wed, Mar 11, 2009 at 8:27 PM, Andrew Beekhof wrote:
> On Wed, Mar 11, 2009 at 18:17, Pavel Georgiev wrote:
>> I've noticed that pacemaker's /usr/lib/heartbeat/cib leaks ~200 kB
>> every time a resource is migrated. I've set up a resource to fail ~2
>> minutes after it is started and the cib pro
Yes, you should be able to see the web page.
The first thing you want to do is make sure that httpd is configured
properly without HA. HA just handles the starting and stopping.
With the cluster completely down, start httpd on one of the nodes and
make sure httpd works and that you can view the p
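The pre-HA sanity check described above can be sketched as a short command sequence; the init-script path and test URL below are assumptions for a typical RHEL/CentOS 5 node, so adjust them to your layout:

```shell
# With the cluster fully stopped, start Apache by hand on one node
/etc/init.d/httpd start

# Confirm the page is actually served before HA ever touches it
curl -sf http://localhost/ >/dev/null \
    && echo "httpd OK, safe to hand over to HA" \
    || echo "fix httpd.conf first"

# Stop it again so the cluster can own the start/stop from here on
/etc/init.d/httpd stop
```

Once this works identically on every node, HA only has to start and stop a service that is already known good.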
On Wed, Mar 11, 2009 at 18:17, Pavel Georgiev wrote:
> I've noticed that pacemaker's /usr/lib/heartbeat/cib leaks ~200 kB
> every time a resource is migrated. I've set up a resource to fail ~2
> minutes after it is started and the cib proc quickly grows in size.
> I've upgraded pacemaker to 1.0.2-1
On Wed, Mar 11, 2009 at 18:41, Jerome Yanga wrote:
> Thank you all. I have been happy with the functionality of the setup that
> you guys helped build.
>
> For reference, here are the versions that I am running.
>
> drbd-8.2.7-3
> drbd-debuginfo-8.2.7-3
> drbd-km-2.6.18_128.1.1.el5-8.2.7-3
> hea
Thank you all. I have been happy with the functionality of the setup that you
guys helped build.
For reference, here are the versions that I am running.
drbd-8.2.7-3
drbd-debuginfo-8.2.7-3
drbd-km-2.6.18_128.1.1.el5-8.2.7-3
heartbeat-2.99.2-6.1
heartbeat-common-2.99.2-6.1
heartbeat-debug-2.99.2
Hello,
I have been working on a complete fail-over SAN for some time now and almost
have everything working the way it should. However, there have been some
drawbacks. I am using the most up-to-date versions of Heartbeat and
Pacemaker. I have been modifying and testing everything through the CR
On Tuesday 10 March 2009 3:35 pm, you wrote:
> I found it useful to use hb_gui. I did the following.
>
> 1. Install heartbeat via RPMs.
> 2. Configure heartbeat ha.cf and authkeys. I set crm to yes in ha.cf,
> so I did not need an haresources file.
> 3. usermod -G haclient hacluster
> 4. passwd hac
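A minimal sketch of steps 2-4, assuming the standard /etc/ha.d layout; the node names, interface, and shared secret are placeholders:

```shell
# Step 2: crm-enabled ha.cf, so no haresources file is needed
cat > /etc/ha.d/ha.cf <<'EOF'
crm yes
node node1 node2
bcast eth0
EOF

# authkeys must be readable by root only, or heartbeat refuses to start
cat > /etc/ha.d/authkeys <<'EOF'
auth 1
1 sha1 ReplaceWithASharedSecret
EOF
chmod 600 /etc/ha.d/authkeys

# Steps 3-4: let the hacluster account log in through hb_gui
usermod -G haclient hacluster
passwd hacluster
```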
I've noticed that pacemaker's /usr/lib/heartbeat/cib leaks ~200 kB
every time a resource is migrated. I've set up a resource to fail ~2
minutes after it is started and the cib process quickly grows in size.
I've upgraded pacemaker to 1.0.2-11.1, which is the latest CentOS RPM,
but the problem persists.
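One way to put numbers on that growth is to sample the daemon's resident set size from /proc. The function below is a small Linux-only sketch; looking the PID up via pidof is an assumption, so substitute however you find the cib process on your system:

```shell
# Print the resident set size (kB) of the process with the given PID
rss_kb() {
    awk '/^VmRSS:/ {print $2}' "/proc/$1/status"
}

# Demo on the current shell; for the leak, log the cib daemon instead, e.g.:
#   while sleep 60; do date; rss_kb "$(pidof cib)"; done >> cib-rss.log
rss_kb $$
```

A steadily climbing log across migrations would make a much stronger bug report than a one-off observation.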
One more time.
I have decided to use external/riloe as my stonith device, but I have
some doubts. My system will be a two-node cluster.
First, do I need to configure the riloe stonith resource as a clone?
Second, this stonith device has some configuration parameters:
hostlist, ilo_hostname, ilo_user
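For reference, one common pattern for two nodes is a separate external/riloe primitive per peer rather than a clone, with each device kept off the node it is meant to fence. A hedged sketch in crm shell syntax; the node names, iLO address, and credentials are all placeholders:

```shell
# One stonith device for fencing node1, configured with its iLO details
crm configure primitive stonith-node1 stonith:external/riloe \
    params hostlist="node1" ilo_hostname="ilo-node1.example.com" \
           ilo_user="Administrator" ilo_password="secret"

# Never run a fencing device on the node it is supposed to shoot
crm configure location l-stonith-node1 stonith-node1 -inf: node1
```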
Hi @ll,
I have a test setup with Linux-HA here which has been causing some trouble
since yesterday. Every time I start hb_gui to check the state and switch
packets, the GUI freezes after connecting.
It only occurs on one node, so I'm pretty sure the cluster itself works fine
(crm_verify says so).
Hi:
Looking at crm_mon, I sometimes see a node listed as UNCLEAN (online) or
UNCLEAN (offline). It looks like UNCLEAN (online) means that the node
disappeared unexpectedly from the cluster. How about UNCLEAN (offline)?
Regards,
Nick