Hi Keisuke-san,
On Thu, Mar 01, 2012 at 11:47:50AM +0900, Keisuke MORI wrote:
Hi,
Any update on this?
No. I was reluctant, but I think that we can pull the patches.
Cheers,
Dejan
2012/2/1 Keisuke MORI keisuke.mori...@gmail.com:
Hi Dejan,
2012/1/31 Dejan Muhamedagic de...@suse.de:
On Thu, Mar 1, 2012 at 8:47 AM, Stefan Schloesser
sschloes...@intermediate.de wrote:
Hi Florian,
thanks for the link. I added the
COROSYNC_DEFAULT_CONFIG_IFACE=openaisserviceenableexperimental:corosync_parser
param to no avail; I also tried stonith (though I don't see why this should be
can you show me your /etc/cluster/cluster.conf?
because I think your problem is a fencing loop
On 1 March 2012 at 01:03, William Seligman selig...@nevis.columbia.edu
wrote:
On 2/28/12 7:26 PM, Lars Ellenberg wrote:
On Tue, Feb 28, 2012 at 03:51:29PM -0500, William Seligman
Hi Florian,
None, because dual-Primary and OCFS2 are utterly pointless for Apache and
MySQL. Apache and MySQL can be easily built on single-Primary DRBD with a
regular filesystem (ext3/4, XFS, you name it).
I would like load-balancing and use typo3 which writes upon access to the
filesystem
On 1 Mar 2012, at 10:25, Stefan Schloesser wrote:
I would like load-balancing and use typo3 which writes upon access to the
filesystem and db (cache etc.). Still pointless?
I do (well, I will be again when I get corosync/pacemaker working again!)
something similar using a managed IP in
On Thu, Mar 1, 2012 at 10:25 AM, Stefan Schloesser
sschloes...@intermediate.de wrote:
Hi Florian,
None, because dual-Primary and OCFS2 are utterly pointless for Apache and
MySQL. Apache and MySQL can be easily built on single-Primary DRBD with a
regular filesystem (ext3/4, XFS, you name it).
Hi Marcus,
I would like load-balancing and use typo3 which writes upon access to the
filesystem and db (cache etc.). Still pointless?
I do (well, I will be again when I get corosync/pacemaker working again!)
something similar using a managed IP in front of haproxy/stunnel/apache with
On 1 Mar 2012, at 11:25, Stefan Schloesser wrote:
My setup would involve 2 loadbalancers and 2 nodes. Are you saying that
running GlusterFS on both nodes using its replication feature is easier and
more reliable than DRBD + OCFS2 + Pacemaker?
I can't compare reliability as I've never used
On 3/1/12 4:15 AM, emmanuel segura wrote:
can you show me your /etc/cluster/cluster.conf?
because I think your problem is a fencing loop
Here it is:
/etc/cluster/cluster.conf:
<?xml version="1.0"?>
<cluster config_version="17" name="Nevis_HA">
<logging debug="off"/>
<cman expected_votes="1" two_node="1"/>
try to change the fence daemon tag like this
<fence_daemon clean_start="1" post_join_delay="30"/>
bump your cluster config version and then restart the cluster
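For context, a minimal cluster.conf along the lines of the snippets in this
thread might look like the following. This is a sketch only: the clusternode
entries and names are placeholders, not taken from the original posting, and
the config_version has been bumped as the advice above suggests.

```xml
<?xml version="1.0"?>
<cluster config_version="18" name="Nevis_HA">
  <logging debug="off"/>
  <cman expected_votes="1" two_node="1"/>
  <!-- clean_start=1 skips startup fencing; post_join_delay gives
       joining nodes 30s of grace before fencing decisions -->
  <fence_daemon clean_start="1" post_join_delay="30"/>
  <clusternodes>
    <!-- node names below are placeholders -->
    <clusternode name="node1" nodeid="1"/>
    <clusternode name="node2" nodeid="2"/>
  </clusternodes>
</cluster>
```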
On 1 March 2012 at 12:28, William Seligman
On 03/01/12 11:25, Stefan Schloesser wrote:
Hi Marcus,
I would like load-balancing and use typo3 which writes upon access to the
filesystem and db (cache etc.). Still pointless?
I do (well, I will be again when I get corosync/pacemaker working again!)
something similar using a managed
Hi,
GlusterFS is _definitely_ easier to set up than dual-Primary DRBD with OCFS2,
and also much harder to break.
I'll give it a try. Thank you both for this hint.
MySQL Cluster as in NDB? Or Galera replication? Or MySQL replication in
dual-master mode (*shudder*)? Or MySQL on DRBD?
As in
On 3/1/12 6:34 AM, emmanuel segura wrote:
try to change the fence daemon tag like this
<fence_daemon clean_start="1" post_join_delay="30"/>
bump your cluster config version and then restart the cluster
This did not
On 3/1/12 12:10 PM, William Seligman wrote:
On 3/1/12 6:34 AM, emmanuel segura wrote:
try to change the fence daemon tag like this
<fence_daemon clean_start="1" post_join_delay="30"/>
change your cluster config version and
Ok William,
if this isn't the problem, then show me your Pacemaker CIB XML:
the output of crm configure show
On 1 March 2012 at 18:10, William Seligman selig...@nevis.columbia.edu
wrote:
On 3/1/12 6:34 AM, emmanuel segura wrote:
try to change the fence daemon tag like this
Hi, I would like to know whether the following is the expected
behavior of HA (running with CRM):
- Two nodes.
- node-1 has the control and has a virtual IP and kamailio daemon
(or whichever).
- kamailio daemon has a correct OCF script, and the process is
monitored (OCF monitor action) by HA
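As background for the question above: an OCF agent's monitor action is
essentially an exit-code contract — 0 (OCF_SUCCESS) when the daemon is
running, 7 (OCF_NOT_RUNNING) when it is cleanly stopped. A minimal sketch,
assuming a pidfile-based daemon (the pidfile path is an assumption, not from
the thread):

```shell
#!/bin/sh
# Minimal sketch of an OCF resource agent "monitor" action.
# OCF exit-code contract: 0 = running, 7 = not running.
OCF_SUCCESS=0
OCF_NOT_RUNNING=7
PIDFILE="${PIDFILE:-/var/run/kamailio.pid}"   # hypothetical path

monitor() {
    # "Running" means: the pidfile exists and the process it names is alive.
    if [ -f "$PIDFILE" ] && kill -0 "$(cat "$PIDFILE")" 2>/dev/null; then
        return "$OCF_SUCCESS"
    fi
    return "$OCF_NOT_RUNNING"
}

monitor
echo "monitor exit code: $?"
```

When monitor returns 7 while the resource is supposed to be started, the CRM
records a failure and reacts according to the configured on-fail policy.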
On Thu, Mar 01, 2012 at 12:16:17PM -0500, William Seligman wrote:
On 3/1/12 12:10 PM, William Seligman wrote:
On 3/1/12 6:34 AM, emmanuel segura wrote:
try to change the fence daemon tag like this
<fence_daemon clean_start="1" post_join_delay="30"/>
On Thursday 01 March 2012 18:48:37 Iñaki Baz Castillo wrote:
Hi, I would like to know whether the following is the expected
behavior of HA (running with CRM):
- Two nodes.
- node-1 has the control and has a virtual IP and kamailio daemon
(or whichever).
- kamailio daemon has a correct
On 3/1/12 12:56 PM, Lars Ellenberg wrote:
On Thu, Mar 01, 2012 at 12:16:17PM -0500, William Seligman wrote:
On 3/1/12 12:10 PM, William Seligman wrote:
On 3/1/12 6:34 AM, emmanuel segura wrote:
try to change the fence daemon tag like this
fence_daemon
2012/3/1 Arnold Krille arn...@arnoldarts.de:
You want to read up on the failure counters, and note that a failed start of a
resource counts as one failure, effectively preventing this resource from
running on this node. Of course you as admin can (and should) correct the
problem and then reset
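The failure counters mentioned above can be inspected and reset from the
shell. A sketch using crmsh (the resource name "kamailio" and node name
"node-1" are examples, not from the thread):

```shell
# Show the accumulated failure count for a resource on a given node.
crm resource failcount kamailio show node-1

# After fixing the underlying problem, clear the recorded failures so
# Pacemaker will consider starting the resource on that node again.
crm resource cleanup kamailio
```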
After days spent debugging a fencing issue with my cluster, I know for certain
that this fencing agent works, at least for me. I'd like to contribute it to the
Linux HA community.
In my cluster, the fencing mechanism is to use NUT (Network UPS Tools;
http://www.networkupstools.org/) to turn off
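As background on the NUT-based mechanism described above: NUT's control path
is its upscmd client, so a fencing agent built on NUT would, at its core,
issue an instant command along these lines (the UPS name, host, and
credentials are placeholders):

```shell
# Ask the NUT server to have the UPS cut power to its load.
# "myups@upshost" and the admin/secret credentials are placeholder values.
upscmd -u admin -p secret myups@upshost load.off
```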
On Thu, Mar 1, 2012 at 11:37 PM, William Seligman
selig...@nevis.columbia.edu wrote:
That script doesn't work for stonith-ng. So here's a new agent, written in Perl
and tested under pacemaker-1.1.6 and nut-2.4.3.
I know there's a fence_apc_snmp agent that's already in resource-agents.
On Fri, Mar 2, 2012 at 9:37 AM, William Seligman
selig...@nevis.columbia.edu wrote:
After days spent debugging a fencing issue with my cluster, I know for certain
that this fencing agent works, at least for me. I'd like to contribute it to the
Linux HA community.
In my cluster, the fencing