Re: [Linux-cluster] problems with clvmd and lvms on rhel6.1

2012-08-10 Thread Poós Krisztián
Yeah, thanks. I checked your thread... if you meant "clvmd hangs", it looks unfinished, though: I see only 3 entries for that thread and unfortunately no solution at the end. Did I miss something? My scenario is a bit different, however: I don't need GFS, only clvmd with a failover LVM, as th
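
For a failover LVM without GFS, the usual approach on RHEL is HA-LVM rather than a clustered VG: rgmanager's lvm resource activates the volume group on one node at a time. A rough sketch of that setup, where the VG name "appvg" and the volume_list entries are hypothetical:

    # HA-LVM (failover) sketch; the VG name "appvg" is a placeholder.
    # Drop the clustered flag so rgmanager, not clvmd, controls activation:
    vgchange -cn appvg
    # In /etc/lvm/lvm.conf, restrict automatic activation to local VGs, e.g.:
    #   volume_list = [ "rootvg", "@rhel2" ]   # root VG plus this node's name/tag
    # Rebuild the initramfs so the new lvm.conf takes effect at boot:
    dracut -f /boot/initramfs-$(uname -r).img $(uname -r)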

[Linux-cluster] Fwd: Re: problems with clvmd and lvms on rhel6.1

2012-08-10 Thread Poós Krisztián
So forget the test environment in this case. Here is the normal environment, which is not fully productive yet, so I can run tests on it... Fencing (SCSI-3 persistent reservation) works and is tested. I configured the cluster to use it, and the LVMs are still down... the cluster is not able to mount th
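
To verify that the SCSI-3 persistent reservations are really in place on the shared LUN, sg_persist from sg3_utils can list the registered keys and the reservation holder; the device path below is a placeholder:

    # Each cluster node should have a key registered; one node holds the reservation:
    sg_persist --in --read-keys --device=/dev/sdb
    sg_persist --in --read-reservation --device=/dev/sdb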

Re: [Linux-cluster] problems with clvmd and lvms on rhel6.1

2012-08-10 Thread Digimer
Could well be. As I mentioned, no fencing == things break. On 08/10/2012 01:00 PM, Chip Burke wrote: See my thread earlier, as I am having similar issues. I am testing this soon, but I "think" the issue in my case is that I set up SCSI fencing before GFS2. So essentially it has nothing to fence off

Re: [Linux-cluster] problems with clvmd and lvms on rhel6.1

2012-08-10 Thread Chip Burke
See my thread earlier, as I am having similar issues. I am testing this soon, but I "think" the issue in my case is that I set up SCSI fencing before GFS2. So essentially it has nothing to fence off of, sees it as a fault, and never recovers. I "think" my fix will be to establish the LVMs, GFS2, etc., then p
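
For reference, the usual start order on a RHEL 6 cman cluster, where each layer depends on the one before it (so fencing is up before clustered LVM and GFS2):

    service cman start       # membership, qdiskd, fenced; fencing must work first
    service clvmd start      # clustered LVM: activates clustered VGs/LVs
    service gfs2 start       # mounts GFS2 filesystems from /etc/fstab
    service rgmanager start  # starts the HA services/resource groups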

Re: [Linux-cluster] problems with clvmd and lvms on rhel6.1

2012-08-10 Thread Digimer
Not sure if it relates, but I can say that without fencing, things will break in strange ways. The reason is that if anything triggers a fault, the cluster blocks by design and stays blocked until a fence call succeeds (which is impossible without fencing configured in the first place). Can y
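
A quick way to see whether the cluster is blocked waiting on a fence call, using the standard RHEL 6 cluster tools (output details vary by version):

    cman_tool status   # quorum state and vote counts
    fence_tool ls      # fence domain state; a pending fence shows as a wait state
    clustat            # member and service status as rgmanager sees it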

Re: [Linux-cluster] problems with clvmd and lvms on rhel6.1

2012-08-10 Thread Poós Krisztián
This is the cluster conf, which is a clone of the problematic system in a test environment (without the Oracle and SAP instances, focusing only on this LVM issue, with an LVM resource): [root@rhel2 ~]# cat /etc/cluster/cluster.conf
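
The XML itself did not survive the archive preview; a minimal two-node sketch with SCSI fencing (including unfencing) and an LVM failover service might look like the following (node, device, VG/LV, and service names are all hypothetical):

    <?xml version="1.0"?>
    <cluster name="testcl" config_version="1">
      <cman two_node="1" expected_votes="1"/>
      <clusternodes>
        <clusternode name="rhel1" nodeid="1">
          <fence><method name="scsi"><device name="scsifence"/></method></fence>
          <unfence><device name="scsifence" action="on"/></unfence>
        </clusternode>
        <clusternode name="rhel2" nodeid="2">
          <fence><method name="scsi"><device name="scsifence"/></method></fence>
          <unfence><device name="scsifence" action="on"/></unfence>
        </clusternode>
      </clusternodes>
      <fencedevices>
        <fencedevice name="scsifence" agent="fence_scsi"/>
      </fencedevices>
      <rm>
        <service name="appsvc" autostart="1" recovery="relocate">
          <lvm name="applvm" vg_name="appvg" lv_name="applv"/>
          <fs name="appfs" device="/dev/appvg/applv" mountpoint="/app" fstype="ext4"/>
        </service>
      </rm>
    </cluster>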

Re: [Linux-cluster] problems with clvmd and lvms on rhel6.1

2012-08-10 Thread Digimer
On 08/10/2012 11:07 AM, Poós Krisztián wrote: Dear all, I hope someone has run into this problem in the past and can help me resolve it. There is a 2-node RHEL cluster with quorum as well. There are clustered LVMs, with the -c- flag on. If I start clvmd, all the clustered LVMs b

[Linux-cluster] problems with clvmd and lvms on rhel6.1

2012-08-10 Thread Poós Krisztián
Dear all, I hope someone has run into this problem in the past and can help me resolve it. There is a 2-node RHEL cluster with quorum as well. There are clustered LVMs, with the -c- flag on. If I start clvmd, all the clustered LVMs become online. After this, if I start rgmanager,
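
To check which VGs carry the clustered flag and whether their LVs come up once clvmd runs, something like this (the VG name "appvg" is a placeholder):

    vgs -o vg_name,vg_attr   # the 6th attribute character is "c" for clustered VGs
    service clvmd start
    vgchange -aly appvg      # activate the clustered VG's LVs on the local node
    lvs appvg                # active LVs show "a" in the 5th lv_attr position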

[Linux-cluster] How to see what node is the master for quorum disk?

2012-08-10 Thread Gianluca Cecchi
Hello, in qdiskd.log I get at cluster startup the node that becomes master for the quorum disk. The config is in fact something like and in syslog.conf: # qdisk logging local4.* /var/log/qdiskd.log The file is rotated, so after some time I have only empty qd
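
Two possible ways to find the current master, both assumptions rather than verified recipes: grep qdiskd's election message out of the log before it rotates away, or dump the quorum disk's status blocks with mkqdisk:

    grep -i master /var/log/qdiskd.log   # qdiskd logs a line when a node assumes the master role
    mkqdisk -L -d                        # assumption: -d also dumps per-node status, including the master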