Message: 1
Date: Wed, 8 Jul 2009 13:20:02 -0400
From: Jeff Sturm
Subject: RE: [Linux-cluster] Trying to locate the bottleneck
To: "linux clustering"
Message-ID: <64d0546c5ebbd147b75de133d798665f02fdc...@hugo.eprize.local>
Content-Type: text/plain; charset="us-ascii"
This FAQ section gives good advice. The Xen network-bridge scripts are
designed to work on hosts without any preconfigured bridge; however I
find it much more straightforward to configure the host for bridging
myself exactly as in the FAQ. As a plus you have more complete control
over all your ne
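For reference, a minimal sketch of a host-managed bridge on a RHEL/CentOS-style host. The file names, addresses, and the xenbr0/eth0 names are illustrative assumptions, not taken from the thread:

```
# /etc/sysconfig/network-scripts/ifcfg-xenbr0  -- the bridge itself
DEVICE=xenbr0
TYPE=Bridge
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth0  -- physical NIC enslaved to the bridge
DEVICE=eth0
BRIDGE=xenbr0
ONBOOT=yes
```

With the host bridge in place, you would also stop xend from running its own network-bridge script (e.g. set `(network-script /bin/true)` in xend-config.sxp) and point your DomU vifs at xenbr0.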
On Wed, Jul 8, 2009 at 11:52 PM, Aaron Benner wrote:
I have 3 xen Dom0 machines upon which I'm trying to build a cluster
for HA DomUs. At present the cluster config file simply lists the 3
nodes. No fencing, services, resources or failover domains have been
defined. I know that this is not what I will need moving to
production. I was usin
Hi all,
as previously announced here:
http://www.redhat.com/archives/cluster-devel/2009-January/msg00074.html
now that STABLE3 is in "production ready" status, the End-Of-Life date
for STABLE2 is set.
Regards
Fabio
I've just had bad experience all around with GFS2. You may want to try GFS1
and play with the tunable parameters.
On Wed, Jul 8, 2009 at 1:58 PM, Peter Schobel wrote:
> I am trying to set up a four node cluster but am getting very poor
> performance when removing large directories. A directory a
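A sketch of what "playing with the tunable parameters" can look like on a GFS1 mount. The mount point and values are illustrative, and available tunables vary by version, so list them first:

```sh
# List the current tunables for the GFS1 mount
gfs_tool gettune /mnt/gfs

# Example: shorten the glock demote time and enable glock trimming,
# which can help metadata-heavy workloads such as a large rm -rf
gfs_tool settune /mnt/gfs demote_secs 200
gfs_tool settune /mnt/gfs glock_purge 50
```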
The cluster team and its community are proud to announce the 3.0.0 final
release from the STABLE3 branch.
"And now what?"
The STABLE3 branch will continue to receive bug fixes and improvements
as feedback from our community and users flows in.
Regular update releases will be available to sync
I am trying to set up a four node cluster but am getting very poor
performance when removing large directories. A directory approximately
1.6G in size takes around 5 mins to remove from the gfs2 filesystem
but removes in around 10 seconds from the local disk.
I am using CentOS 5.3 with kernel 2.6
On Wed, Jul 8, 2009 at 8:51 PM, Murugan P wrote:
Hi,
It's openais, and a quick Google search leads you
HI Friends,
I want to get a clear understanding of the software package openasis
that is used with RHCS 5.
Kindly give me your input or suggest any URL for the same.
--Muruga
--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-clust
Hi Jeff
iptables is disabled in this setup, as this is basically being done
within a development environment. I'm still on the hunt to see where this
bottleneck is happening, though, and am trying alternative load-balancing
software to see if I get the same results.
Thanks
R.
I think this is created when you first run iptables. If you have no NAT
rules on the load balancer, the ip_conntrack_max setting won't exist,
and you'll need to look somewhere else for the problem.
-Jeff
Hello list,
I am experimenting with a cluster based on RHCM, but to avoid a single point of
failure I would like to have two storage machines, replicated via DRBD. To be
able to access the data in case of failure of one of the storage machines, there
should be multipath access to the data. So my question is:
Hi Guys
I am trying to locate ip_conntrack_max within CentOS 5.3 but it doesn't
appear to be where I expect it to be. I have googled for this and from
what I have read it should be located within
/proc/sys/net/ipv4/ip_conntrack_max
Which is where I thought it would be but unfortunately it is
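One thing worth checking (an assumption on my part, since conntrack is modular): the sysctl only appears once the ip_conntrack module is loaded, and on CentOS 5 it normally sits under a netfilter/ subdirectory rather than directly under ipv4/:

```sh
# Load the module (this happens implicitly when iptables NAT/state rules are used)
modprobe ip_conntrack

# On CentOS 5 the setting is usually here:
cat /proc/sys/net/ipv4/netfilter/ip_conntrack_max
# or equivalently:
sysctl net.ipv4.netfilter.ip_conntrack_max
```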
Thanks Juanra, you got my point.
The CMAN package is not present in any of the groups, including
Clustering, ClusterStorage, and Base.
But I want to know from which group the CMAN package is installed.
One more thing I want to share with you: if you selected the above-mentioned
group after the instal
Hi,
You can't avoid it starting. It does other things aside from quotas,
although its workload is pretty light. It might be possible in the
future to avoid having quite so many threads (I've already got rid of a
number of them), but it's not very high priority.
It may be that some of the work could b
Hi folks!
Now that the developers (many thanks!) have corrected the issue that made
gfs2_quotad stay in an uninterruptible state (so that it was taken into account
when calculating system load), I have another question: if quotas are
disabled by default, why is this kthread started? Is there any way of
av
In the fence_daemon tag. Like this:
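A hedged sketch of where the attribute goes (the cluster name and version numbers are placeholders, not from the thread; remember to bump config_version whenever you edit cluster.conf):

```xml
<cluster name="mycluster" config_version="2">
  <fence_daemon clean_start="1" post_fail_delay="0" post_join_delay="3"/>
  <!-- clusternodes, fencedevices, rm sections as before -->
</cluster>
```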
On Wed, Jul 8, 2009 at 2:50 AM, Abed-nego G. Escobal, Jr. <
abednegoy...@yahoo.com> wrote:
2009/7/8 Murugan P :
> Kindly clarify, friends: how do I know which software contains the CMAN
> packages, since I haven't seen it in Clustering/ClusterStorage.
At installation time, you can install the software selecting "Groups"
of packages. These groups can be tuned as you prefer by c
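If the group was skipped at install time, the same groups can be pulled in afterwards; the group names below are assumed from the stock CentOS 5 comps file, so verify them with grouplist first:

```sh
yum grouplist | grep -i clust          # see which cluster groups exist
yum groupinstall "Clustering"          # cman, rgmanager, and friends
yum groupinstall "Cluster Storage"     # GFS/CLVM bits
# or install just the package itself:
yum install cman
```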
I have installed the OS (CentOS 5.3) with the cluster software, and after
installation I am able to see:
[r...@testgfs ~]# rpm -qa | grep cman
cman-2.0.98-1.el5
My question is: while selecting the software at installation time, I
don't find the CMAN packages using F2 on the software
clus
On Wed, Jul 8, 2009 at 3:10 PM, Murugan P wrote:
Hi,
rgmanager
cman
openais
and if you are us
HI Friends,
I need a small clarification from you guys...
While installing CentOS 5.3, which software needs to be selected for
RHCS (Cluster Service)? Please also clarify which one contains the CMAN package.
Thanks & Regards,
P. Murugan
murugan...@gmail.com
9841705767
2009/7/8 Murugan P :
http://www.centos.org/docs/5/html/5.2/Cluster_Suite_Overview/
--
Giusepp
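For what it's worth, a package set that typically covers RHCS with GFS2 on CentOS 5.3 (names assumed from the standard repos; DLM itself ships as a kernel module in the stock 2.6.18 kernel, so no separate userspace package is needed for it):

```sh
yum install cman rgmanager lvm2-cluster gfs2-utils
# verify the dlm kernel module is available in the running kernel:
modinfo dlm
```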
On Wed, Jul 8, 2009 at 1:05 PM, victor titus wrote:
Hi Friends,
I want to install RHCS with GFS2 on CentOS 5.3.
Kindly provide the list of package names needed for my requirement,
and confirm whether DLM is built into the 5.3 kernel.
Thanks & Regards,
P. Murugan
Hi All,
Below are the messages found in the log /var/log/messages.
There seems to be some problem with the release of NVRAM memory. Due to
this, the LVM volumes in the cluster are not detected by the server;
commands like lvdisplay and pvdisplay just show no output.
I haven't tried it yet. To which part of the cluster.conf should I be inserting
clean_start=1 ?
--- On Wed, 7/8/09, Ian Hayes wrote:
> From: Ian Hayes
> Subject: Re: [Linux-cluster] Cannot make cluster after upgrade
> To: "linux clustering"
> Date: Wednesday, 8 July, 2009, 2:59 PM
> Sounds a
Hi Jeff
Many Thanks for your reply.
I have had a look to see if there is anything suspicious within
dmesg and within messages, and unfortunately there isn't anything at all
apart from one timeout.
Jul 8 10:15:51 loadbalancer-01 nanny[5427]: [inactive] shutting down
192.168.10.36:80
Hi,
I added a heuristic checking network status to help in network-failure
scenarios.
However, I still face the same problem as soon as I stop the services in an
orderly fashion on the node holding the qdisk master role, or reboot it.
If I execute in master qdisk node:
# service rgmanager stop
# service cl
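For context, the orderly shutdown on a node usually continues along these lines (the service list is assumed from a stock RHCS 5 setup; gfs and qdiskd apply only where GFS mounts and a quorum disk are in use):

```sh
service rgmanager stop   # stop managed services / release failover domains
service gfs stop         # unmount GFS filesystems, if used
service clvmd stop       # clustered LVM
service qdiskd stop      # quorum disk daemon
service cman stop        # leave the cluster last
```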