2009/3/19
>
>
> ""Also, if I shut down both nodes and start just one of them, the starting
> node still waits in the "starting fencing" part many minutes even though the
> cluster should be quorate (there's a quorum disk)!
> ""
>
> I had a similar situation and the reason why the first node couldn
Dan, a quick google turns up the patch this log message originates from;
it appears to have replaced a message that read "Services locked". If
you are seeing these only after bootup, I suspect that it's just a
consequence of the cluster booting up and rgmanager not allowing things
to flip from node to
Nothing to say about the first part, but this:
"Also, if I shut down both nodes and start just one of them, the starting node
still waits at the "starting fencing" step for many minutes, even though the
cluster should be quorate (there's a quorum disk)!"
I had a similar situation and the reason wh
Hunt, Gary wrote:
Is there a way to get a cluster node to recognize that the number of
votes a quorum disk gets has changed? I added a new node to the cluster
and updated the cluster.conf to reflect the changes and propagated it.
In this case I went from 3 total votes and a quorum disk vote
Is there a way to get a cluster node to recognize that the number of votes a
quorum disk gets has changed? I added a new node to the cluster and updated
the cluster.conf to reflect the changes and propagated it. In this case I went
from 3 total votes and a quorum disk vote of 1 to 5 total vote
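(For the archives, a minimal sketch of the pieces involved; the vote counts,
label and config_version below are placeholders, not Gary's real values:

    <cluster name="mycluster" config_version="8">
      <cman expected_votes="7"/>
      <quorumd interval="1" tko="10" votes="3" label="myqdisk"/>
      ...
    </cluster>

After editing, the usual propagation steps are:

    ccs_tool update /etc/cluster/cluster.conf
    cman_tool version -r 8      # tell cman about the new config_version
    cman_tool status            # check expected/total votes

If I remember right, qdiskd only reads its vote count at startup, so a changed
"votes" value may not be advertised until qdiskd is restarted on each node.)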
I was fighting a very similar issue today. I am not familiar with the fencing
you are using, but I would guess your fence device is not working properly. If
a node fails and fencing doesn't succeed, the cluster will halt all gfs
activity. If clustat shows both nodes and the quorum disk online, bu
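(A quick way to verify that, assuming "node2" stands in for the name of the
failed node, is to drive the fence agent by hand:

    fence_node node2            # runs the configured fence agent against node2
    tail -f /var/log/messages   # watch for fenced reporting success or failure

If fence_node fails here, gfs will stay blocked until fencing is fixed.)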
I have a 4-node cluster, each node running one of four services. Each
service is an ip/fs combination. I'm trying to test service failover.
After disconnecting the network to one of the nodes (ip link set eth0
down), its running service is not migrated to another node until the
node gets successfull
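(For context, the kind of service stanza I am assuming here; the names, device
and address are placeholders, not the poster's actual configuration:

    <service name="svc1" autostart="1" recovery="relocate">
      <ip address="192.168.1.101" monitor_link="1"/>
      <fs name="fs1" device="/dev/vg0/lv1" mountpoint="/srv/svc1" fstype="ext3"/>
    </service>

With the cluster interface down the node also loses heartbeat, so rgmanager
cannot relocate its service until the dead node has been fenced; that matches
the behaviour described above.)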
Bob Peterson wrote:
- "vu pham" wrote:
| Although my gfs partition has a lot of free space to add two more
| journals, gfs_jadd complains of not enough space.
|
| Do I have to run any extra command to make it work?
Hi,
This is a common complaint about gfs. Frankly, I'm surprised
th
All,
I currently use the fence_mcdata script (with a slight mod) to provide
fencing for my DS-4700M switch.
I have two questions:
1. The username and password are stored in plain text within the
cluster.conf file. Is there a way to make this more secure?
(password script?)
2. fence_mcdata works by
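(On question 1: many of the stock fence agents accept a passwd_script
attribute in place of passwd, so the password can live in a root-only script
instead of cluster.conf. I have not checked whether fence_mcdata honours it,
so treat this as a sketch with made-up names and paths:

    <fencedevice agent="fence_mcdata" name="ds4700m" ipaddr="10.0.0.5"
                 login="admin" passwd_script="/etc/cluster/mcdata_pw.sh"/>

where /etc/cluster/mcdata_pw.sh is owned by root, mode 0700, and simply echoes
the password to stdout.)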
- "vu pham" wrote:
| Although my gfs partition has a lot of free space to add two more
| journals, gfs_jadd complains of not enough space.
|
| Do I have to run any extra command to make it work?
Hi,
This is a common complaint about gfs. Frankly, I'm surprised
that it's not in the FAQ
Oh, just ignore this stupid question. I didn't read the man page
carefully. This sentence from the man page solves the problem:
"gfs_jadd will not use space that has been formatted for filesystem data
even if that space has never been populated with files."
I extended the lvm and then grew t
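(For the archives, the order that works, with placeholder sizes and paths:
extend the LV, run gfs_jadd while the new space is still unformatted, and only
then gfs_grow, because gfs_grow turns the added space into data blocks that
gfs_jadd can no longer use:

    lvextend -L +1G /dev/vg0/gfslv     # space for the extra journals
    gfs_jadd -j 2 /mnt/gfsdata         # add two journals into that space
    lvextend -L +20G /dev/vg0/gfslv    # more space for data, if needed
    gfs_grow /mnt/gfsdata              # grow the filesystem into the rest
)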
Although my gfs partition has a lot of free space to add two more
journals, gfs_jadd complains of not enough space.
Do I have to run any extra command to make it work?
[r...@vm1 gfsdata]# gfs_tool df /mnt/gfsdata
/mnt/gfsdata:
SB lock proto = "lock_dlm"
SB lock table = "cluster1:gfs2"
Hi all,
I have set up a cluster with two nodes and I need to change when the service's
status is checked. To do this I have put this in cluster.conf:
But the service status is ch
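(For what it's worth, the mechanism I am assuming here is an <action> override
on the resource itself; the names and the 30-second interval are placeholders,
and whether it is honoured depends on the rgmanager version:

    <service name="svc1">
      <fs name="fs1" device="/dev/vg0/lv1" mountpoint="/srv/svc1" fstype="ext3">
        <action name="status" depth="*" interval="30"/>
      </fs>
    </service>
)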
Hello all
I have a two-node cluster with a quorum disk.
When I pull the power cord from one node, the other node freezes the
shared gfs volumes and all activity stops, even though the cluster maintains
quorum. When the powered-off node boots back up, I can see that "starting
fencing" takes many minutes
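(When that happens, the fence domain is usually stuck waiting for the dead
node to be fenced. Assuming "node1" is the node that lost power, the state can
be inspected with:

    cman_tool nodes           # membership as cman sees it
    group_tool ls             # the fence group shows a wait/recovery state
    tail /var/log/messages    # look for fenced retrying the fence agent

gfs and rgmanager stay frozen until the fence operation actually succeeds.)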