Hi,
man openais.conf : rrp_mode
    This specifies the mode of redundant ring, which may be none,
    active, or passive. Active replication offers slightly lower
    latency from transmit to delivery in faulty network environments
    but with less performance.
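For reference, redundant ring in openais.conf is set up with two interface
stanzas carrying distinct ringnumbers plus the rrp_mode option; a minimal
sketch (addresses and ports are illustrative, not from any real setup):

    totem {
        rrp_mode: passive
        interface {
            ringnumber: 0
            bindnetaddr: 192.168.0.0
            mcastaddr: 226.94.1.1
            mcastport: 5405
        }
        interface {
            ringnumber: 1
            bindnetaddr: 192.168.1.0
            mcastaddr: 226.94.1.2
            mcastport: 5406
        }
    }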
Hi,
I'm working with : cman-3.0.0-15.rc1.fc11.x86_64 //
rgmanager-3.0.0-15.rc1.fc11.x86_64
I want the CS to take changes to cluster.conf into account dynamically.
Someone told me here a few weeks ago that ccs_sync was run automatically,
and that I only had to execute:
cman_tool
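(The command above was cut off. If I remember the cluster-3 tools correctly,
the usual invocation after bumping config_version in cluster.conf is:

    cman_tool version -r <new_version>

which asks cman to load the updated configuration; treat the exact flags as
an assumption and check cman_tool(8).)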
Hi,
With this release : cman-3.0.2-1.fc11.x86_64
it seems that we can't do ccs_tool update anymore :
ccs_tool update /etc/cluster/cluster.conf
Unknown command, update.
Try 'ccs_tool help' for help.
and indeed the help no longer lists an update option (nor upgrade).
Likewise:
[r...@oberon3 ~]# ccs_sync help
Unable to parse /etc/cluster/cluster.conf: No such file or directory
Does that mean that ccs_sync does not take the /etc/sysconfig/cman file
into account?
Thanks again
Alain
Fabio M. Di Nitto wrote:
On Fri, 2009-09-04 at 11:46 +0200, Alain.Moulle wrote:
Hi
Hi,
I have this cman version :
cman-3.0.0-15.rc1.fc11.x86_64
is it possible to put cluster.conf somewhere other than /etc/cluster/,
and if so, how can I tell cman about it?
Thanks
Alain
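(If the cluster-3 startup works the way I remember, the config loader can be
pointed at another file through /etc/sysconfig/cman; a sketch, where both
variable names are my assumption and should be verified against cman(8) and
cluster.conf(5):

    # /etc/sysconfig/cman
    CONFIG_LOADER=xmlconfig                            # assumed default loader
    COROSYNC_CLUSTER_CONFIG_FILE=/opt/ha/cluster.conf  # assumed override path
)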
Hi,
I'd need to put cluster.conf somewhere other than /etc/cluster/
(without any symbolic link).
Is it possible?
Thanks
Alain
Hi Fabio,
cman-3.0.0-15.rc1.fc11.x86_64
rgmanager-3.0.0-15.rc1.fc11.x86_64
Alain
On Thu, 2009-08-27 at 08:31 +0200, Alain.Moulle wrote:
Hi,
I'd need to put cluster.conf somewhere other than /etc/cluster/
(without any symbolic link).
Is it possible?
What version of the stack are you running?
Hi,
given a failover domain like this one :
<failoverdomain name="FailoverDomain-1" ordered="???" restricted="???">
    <failoverdomainnode name="node1" priority="1"/>
    <failoverdomainnode name="node2" priority="2"/>
    <failoverdomainnode name="node3" priority="3"/>
</failoverdomain>
and a service
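(The service element was cut off above. A typical rgmanager service
referencing such a domain might look like the following; the name and the
child resource are purely illustrative:

    <service name="svc1" domain="FailoverDomain-1" autostart="1">
        <ip address="192.168.0.100" monitor_link="1"/>
    </service>
)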
Hi
What is the best way, in a script, to quickly identify whether the CS is
CS4 or CS5?
(I have ccs_test -V, but perhaps there is a better solution?)
Thanks a lot
Alain
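One alternative, assuming the installed cman major version tracks the CS
release (cman 1.x shipped with CS4, cman 2.x with CS5), is to ask rpm
directly; a sketch:

    #!/bin/sh
    # Map the installed cman major version to a CS release.
    # Assumption: cman 1.x <=> CS4, cman 2.x <=> CS5.
    major=$(rpm -q --qf '%{VERSION}' cman | cut -d. -f1)
    case "$major" in
        1) echo CS4 ;;
        2) echo CS5 ;;
        *) echo "unknown (cman $major.x)" ;;
    esac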
Hi
Context: cluster with two nodes (without quorum disk).
Both nodes are running their CS5 and their services.
If we launch shutdown -h on one node, we systematically get
this error:
<debug> Event (0:1:0) Processed
<debug> 2 events processed
clurgmgrd[29262]: <err> #47: Failed changing service
Hi
I would like to force CS5 to check one specific service every 10 s and
try to restart it if the status check fails. I've tried to set
checkinterval="10" on the service line:
<service domain="testHA1" name="testHA1" autostart="0" checkinterval="10">
but it seems to have no effect; the status is done
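For what it's worth, rgmanager's documented way to change how often a
resource is status-checked is an action child on the resource rather than
a service attribute; a sketch, assuming a script resource and assuming the
override is supported by the exact rgmanager build above:

    <service domain="testHA1" name="testHA1" autostart="0">
        <script ref="testHA1-script">
            <!-- assumption: overrides the agent's default status interval -->
            <action name="status" interval="10"/>
        </script>
    </service>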
Hi
We have run some benchmarks on gfs2 and gfs, compared to ext3 and ocfs2,
and the results are (just some examples):
* Read on 1 file:
about the same performance
* Write on 1 file:
about the same performance
* Creation of many, many files and directories:
gfs2 is 10
Hi ,
it seems that CS5 supports up to 128 nodes ...
(whereas it was 8 with CS3 and CS4?)
Have any of you tested CS5 with more than 10 nodes?
Does it reveal any big problem or restriction to have big clusters
with CS5?
Thanks
Regards
Alain
Hi
Would it be possible for you to give me the Bugzilla number?
Thanks a lot.
Alain
Message: 1
Date: Wed, 24 Sep 2008 17:00:24 -0400
From: Lon Hohberger l...@redhat.com
Subject: Re: [Linux-cluster] CS5 / quorum disk configuration
On Wed, 2008-09-24 at 14:40 +0200, Alain.Moulle wrote:
Hi
Hi,
some more information about my problem (see previous email for the
complete log).
(Sorry if someone already sent me a response; I haven't received any yet,
perhaps because I'm in digest mode.)
But if I start qdiskd after cman, and in the foreground, like so:
qdiskd -ddd -f
there is no more the
Hi,
I have a more precise sequence in the syslog on the node entering the
infernal "Node is undead" loop;
I think it could give some more indication about the problem, if someone
could help me ...
Thanks a lot
Regards
Alain
[r...@node3 ~]# tail -f /var/log/syslog /var/log/daemons/*
[1] 12017
Hi,
I'm facing this problem of "Node evicted" and "Node is undead" again ...
and I really don't know what to do ... below are the traces in syslog.
My version is: RHEL5.3 / cman-2.0.98-1.el5
Feb 25 14:33:33 s_...@xn3 qdiskd[27582]: <notice> Writing eviction
notice for node 2
Feb 25 14:33:34
Alain.Moulle wrote:
Hi,
I'm facing this problem of "Node evicted" and "Node is undead" again ...
and I really don't know what to do ... below are the traces in syslog.
My version is: RHEL5.3 / cman-2.0.98-1.el5
Feb 25 14:33:33 s_...@xn3 qdiskd[27582]: <notice> Writing eviction
notice
Hi
I don't remember: does the Quorum Disk functionality work fine on CS4?
And from which CS4 U... ?
Thanks
Regards
Alain
Hi ,
About this problem, I wonder: is this a definitive behavior considered
normal, or will it work differently in a future release of cman or openais?
(In previous versions, with cman-2.0.73, we did not have this problem.)
Thanks in advance if someone can give an answer...
Regards,
Alain
Release
Hi Chrissie,
Thanks for your quick response.
But we started cman manually and then did not start any other
service in between.
There is no risk that the network goes down during the test.
We don't actually use an 'intelligent switch', nor any Cisco switches.
We made a new test with cman.
On Friday 09 January 2009 11:47:02 Alain.Moulle wrote:
Hi
Release : cman-2.0.95-1.el5
(but same problem with 2.0.98)
I face a problem when launching cman on a two-node cluster :
1. Launching cman on node 1 : OK
2. When launching cman on node 2, the log on node1 gives :
cman killed
Hi
Release : cman-2.0.95-1.el5
(but same problem with 2.0.98)
I face a problem when launching cman on a two-node cluster :
1. Launching cman on node 1 : OK
2. When launching cman on node 2, the log on node1 gives :
cman killed by node 2 because we rejoined the cluster without a full
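A side note on two-node clusters, since this kill message typically shows
up when a node rejoins after the membership was split: the standard
cluster.conf stanza for such setups is

    <cman two_node="1" expected_votes="1"/>

which lets either node keep quorum on its own (standard syntax; whether it
matches the poster's actual configuration is an assumption on my part).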
Hi
I wonder if it is possible to increase the period of hello messages
with a parameter in cluster.conf?
By the way: what is the rule for deciding on which eth interface the
heartbeat will be sent?
Is it always the name in the node record, with its IP address in /etc/hosts?
Thanks
Regards
Alain
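In case it helps: on CS5 the heartbeat is carried by openais's totem
protocol, whose timing can be tuned from cluster.conf via a totem element.
A sketch, where the value is illustrative and "token timeout" being the
right knob for the hello period is my interpretation:

    <cluster name="mycluster" config_version="2">
        <totem token="21000"/>   <!-- token timeout in milliseconds -->
        ...
    </cluster>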
Hi
Version cman : cman-2.0.95-1
I have a doubt about the behavior of cman with heartbeat + quorum disk:
if there is a problem with quorum disk access (I/O error) on only one of
the 2 nodes in the cluster, but with the heartbeat still working fine,
will cman force the failover?
Or does it keep both
Hi
I just wonder if there are a lot of changes in the CS5 delivered with
RHEL5.2 compared to the one delivered with RHEL5.1?
Thanks a lot
Regards,
Alain
Hi
Something strange about qdisk configuration with CS5:
it seems that it is necessary to reboot the system after configuring
the quorum disk (via mkqdisk and in cluster.conf) for it to work properly;
otherwise, we encounter the "Node x is undead" problem at the first
failover attempt.
Whereas
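For readers landing here, the setup being described is the usual two-step
qdisk configuration; the device and label below are illustrative, not the
poster's:

    # initialise the quorum disk partition with a label
    mkqdisk -c /dev/sdb1 -l myqdisk

    # cluster.conf: reference it by label and give it a vote
    <quorumd interval="1" tko="10" votes="1" label="myqdisk"/>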