Please configure fencing. If you don't, it _will_ cause you problems.
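(For illustration only: a minimal sketch of what the fencing pieces of /etc/cluster/cluster.conf could look like, assuming IPMI-based fencing via fence_ipmilan; the device name, address, and credentials below are hypothetical.)

  <clusternode name="ustlvcmsp1954" nodeid="1">
    <fence>
      <method name="ipmi">
        <!-- hypothetical device; must match a <fencedevice> entry below -->
        <device name="ipmi1954" action="reboot"/>
      </method>
    </fence>
  </clusternode>
  ...
  <fencedevices>
    <!-- hypothetical IPMI address and credentials -->
    <fencedevice name="ipmi1954" agent="fence_ipmilan" ipaddr="10.0.0.54" login="admin" passwd="secret"/>
  </fencedevices>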
On 07/01/15 09:48 PM, Cao, Vinh wrote:
Hi Digimer,
No, we're not supporting multicast. I'm trying to use broadcast, but Red Hat
support says it's better to use transport=udpu. I did set that, and it is
complaining about a timeout.
I did try to set broadcast, but somehow it didn't work either.
Let me give broadcast a try again.
Thanks,
Vinh
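(For reference, both transports are selected on the <cman> tag in cluster.conf on RHEL 6; a minimal sketch, assuming the stock cman/corosync stack:)

  <!-- UDP unicast, as Red Hat support suggested: -->
  <cman transport="udpu"/>

  <!-- or broadcast instead: -->
  <cman broadcast="yes"/>

Either way, the attribute has to be in place on all nodes before cman starts.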
It looks like a network problem... Does your (virtual) switch support
multicast properly and have you opened up the proper ports in the firewall?
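(Two quick checks, assuming a stock RHEL 6 setup: omping, from the omping package, tests multicast and unicast connectivity between nodes, and the iptables rules below open the standard cluster ports; the node names are the ones from this thread.)

  # run simultaneously on each node to test multicast between them:
  omping ustlvcmsp1954 ustlvcmsp1955

  # open the usual cluster ports: corosync 5404-5405/udp, dlm 21064/tcp, ricci 11111/tcp
  iptables -I INPUT -p udp --dport 5404:5405 -j ACCEPT
  iptables -I INPUT -p tcp --dport 21064 -j ACCEPT
  iptables -I INPUT -p tcp --dport 11111 -j ACCEPT
  service iptables save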
Did you configure fencing properly?
On 07/01/15 05:32 PM, Cao, Vinh wrote:
Hi Digimer,
Yes, I just did. Looks like they are failing. I'm not sure why that is.
Please see the attachment for the logs from all of the servers.
By the way, I do appreciate all the help I can get.
Vinh
Quorum is enabled by default. I need to see the entire logs from all
five nodes, as I mentioned in the first email. Please disable cman from
starting on boot, configure fencing properly and then reboot all nodes
cleanly. Start the 'tail -f -n 0 /var/log/messages' on all five nodes,
then in anot
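(As concrete commands, that procedure would be roughly the following, run on each of the five nodes:)

  chkconfig cman off                 # stop cman from starting on boot
  reboot                             # reboot the node cleanly
  tail -f -n 0 /var/log/messages     # leave running in one terminal per node
  service cman start                 # then start cman by hand and watch the logs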
Hi Digimer,
Here is from the logs:
[root@ustlvcmsp1954 ~]# tail -f /var/log/messages
Jan 7 16:14:01 ustlvcmsp1954 corosync[8182]: [SERV ] Service engine loaded:
corosync profile loading service
Jan 7 16:14:01 ustlvcmsp1954 corosync[8182]: [QUORUM] Using quorum provider
quorum_cman
Jan 7
On 07/01/15 03:39 PM, Cao, Vinh wrote:
Hello Digimer,
Yes, I would agree with you that RHEL 6.4 is old. We patch monthly, but I'm not
sure why these servers are still at 6.4. Most of our systems are 6.6.
Here is my cluster config. All I want is to use the cluster to have GFS2 mounted
via /etc/fstab.
[root@ustlvcmsp1955 ~]# cat /etc/cluster/clus
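(Since the goal is a GFS2 mount from /etc/fstab: a sketch of the usual entry, assuming a clustered LVM volume; the device path and mount point are hypothetical. On RHEL 6 the gfs2 init script mounts gfs2 fstab entries once the cluster stack is up, so it should be enabled with chkconfig as well.)

  /dev/vg_cluster/lv_shared  /mnt/shared  gfs2  defaults,noatime  0 0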
My first thought would be to set <cman transport="udpu" /> in cluster.conf.
If that doesn't work, please share your configuration file. Then, with
all nodes offline, open a terminal to each node and run 'tail -f -n 0
/var/log/messages'. With that running, start all the nodes and wait for
things to settle down, then pa
Hello Cluster guru,
I'm trying to set up a Red Hat 6.4 OS cluster with five nodes. With two nodes I
don't have any issue.
But with five nodes, when I run clustat I get three nodes online and the other
two offline.
When I start the ones that are offline with 'service cman start', I get:
[root@ustlvcmspxxx ~]
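(To narrow down why two of the five stay offline, the stock cman tools are a reasonable first look; run these on a working node and on one of the failing ones and compare:)

  cman_tool status    # node votes, expected votes, quorum, multicast address in use
  cman_tool nodes     # which nodes have actually joined
  clustat             # member and service view, as above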
Welcome to the fence-agents 4.0.14 release
This release includes some new features and several bugfixes:
* fence_zvmip for IBM z/VM is rewritten to Python
* new fence agent for Emerson devices
* fix invalid default ports for fence_eps and fence_amt
* properly escape XML in other fields of metad