To follow up: I couldn't manually leave the lockspace with dlm_tool either:
[root@test2 log]# dlm_tool leave clvmd
Leaving lockspace "clvmd"
dlm_open_lockspace clvmd error (nil) 2

[root@test2 log]# dlm_tool ls
dlm lockspaces
name          clvmd
id            0x4104eefa
flags         0x00000002 leave
change        member 2 joined 1 remove 0 failed 0 seq 1,1
members       1 2
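
In case it helps, these read-only checks should show more about the stuck lockspace before forcing anything (a sketch; the debugfs path is my assumption and may vary by kernel):

dlm_tool status                  # dlm_controld daemon state
dlm_tool dump | tail -n 50       # recent dlm_controld debug messages
mount -t debugfs none /sys/kernel/debug 2>/dev/null
ls /sys/kernel/debug/dlm/        # per-lockspace debug files, if the dlm module exposes them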

Thanks.
Shi

On Fri, Mar 11, 2011 at 9:28 AM, Shi Jin <jinzish...@gmail.com> wrote:

> Thank you all.
> The problem I have is that I don't seem to be able to leave the
> cluster gracefully, even when I stop the services manually in the right order.
> For example, I joined the cluster manually by starting cman, clvmd and gfs2
> in that order and everything worked just fine.
>
> Then I wanted to reboot. This time I wanted to do it manually, so I stopped
> the services in order:
> [root@test2 ~]# service gfs2 stop
> Unmounting GFS2 filesystem (/vrstorm):                     [  OK  ]
> [root@test2 ~]# service clvmd stop
> Signaling clvmd to exit                                    [  OK  ]
> Waiting for clvmd to exit:                                 [FAILED]
> clvmd failed to exit                                       [FAILED]
>
> Somehow clvmd cannot be stopped; the clvmd process is still running:
> root      2646  0.0  0.5 194476 45016 ?        SLsl 02:18   0:00 clvmd -T30
>
> How do I stop clvmd gracefully? I am running RHEL-6.
> [root@test2 ~]# uname -a
> Linux test2 2.6.32-71.18.2.el6.x86_64 #1 SMP Wed Mar 2 14:17:40 EST 2011
> x86_64 x86_64 x86_64 GNU/Linux
> [root@test2 ~]# cat /etc/redhat-release
> Red Hat Enterprise Linux Server release 6.0 (Santiago)
>
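> For reference, the full stop order I would expect to work on RHEL 6 is roughly
> the following sketch (the vgchange step to deactivate the clustered VG before
> stopping clvmd is my guess, not something taken from the init scripts;
> vg_cluster is a placeholder name):
>
> service gfs2 stop          # unmount the GFS2 filesystems
> vgchange -aln vg_cluster   # deactivate the clustered VG locally
> service clvmd stop         # stop the cluster LVM daemon
> service cman stop          # leave the cluster last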
>
> Thank you very much.
>
> Shi
>
>
>
> On Thu, Mar 10, 2011 at 1:41 PM, Alvaro Jose Fernandez <
> alvaro.fernan...@sivsa.com> wrote:
>
>>  Hi,
>>
>>
>>
>> Given that fencing is properly configured, I think the default RHCS
>> boot/shutdown scripts should work. I too use two_node (but no clvmd) on RHEL 5.5 with
>> the latest updates to cman and rgmanager, and a shutdown -r works well (and a
>> shutdown -h too). The other node's cluster daemon should log this as a node
>> shutdown in /var/log/messages, adjust quorum, and not trigger
>> a fencing action against the departing node.
>>
>>
>>
>> If one halts and powers off one of the two nodes via shutdown -h, and then
>> reboots the surviving node (via shutdown -r), the surviving node will fence
>> the other. We have power-switch fencing, and it should simply succeed (powering
>> the other node's outlets off and then back on). Once the fencing
>> succeeds, the boot sequence continues and the node assumes quorum.
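>>
>> For illustration, power-switch fencing in cluster.conf looks roughly like the
>> following (fence_apc, the address, credentials and port here are placeholders,
>> not our actual configuration):
>>
>> <fencedevices>
>>   <fencedevice agent="fence_apc" name="apc1" ipaddr="10.0.0.50" login="apc" passwd="secret"/>
>> </fencedevices>
>> <!-- inside <clusternodes>, each node references the power switch -->
>> <clusternode name="node1" nodeid="1">
>>   <fence>
>>     <method name="power">
>>       <device name="apc1" port="1"/>
>>     </method>
>>   </fence>
>> </clusternode>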
>>
>>
>>
>> If the other node is later powered on, it should rejoin the cluster without
>> problems.
>>
>>
>>
>> alvaro,
>>
>>
>>
>> Hi there,
>>
>>
>>
>> I've set up a two-node cluster with cman, clvmd and gfs2. I don't use qdisk
>> but have
>>
>> <cman expected_votes="1" two_node="1"/>
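>>
>> For context, in the full cluster.conf that line sits roughly like this (the
>> cluster and node names are placeholders):
>>
>> <?xml version="1.0"?>
>> <cluster name="mycluster" config_version="1">
>>   <cman expected_votes="1" two_node="1"/>
>>   <clusternodes>
>>     <clusternode name="node1" nodeid="1"/>
>>     <clusternode name="node2" nodeid="2"/>
>>   </clusternodes>
>> </cluster>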
>>
>>
>>
>> I would like to know the proper procedure to reboot a node in a
>> two-node cluster (maybe this applies to clusters of any size?) when both nodes are
>> functioning fine but I just want to reboot one for some reason (for example,
>> a kernel upgrade). Is there a preferred/better way to reboot the machine
>> rather than just running the "reboot" command as root? I have been running
>> "reboot" so far and it sometimes creates problems for us, including
>> causing the other node to fail.
>>
>>
>>
>> Thank you very much.
>> Shi
>> --
>> Shi Jin, Ph.D.
>>
>
>
>
> --
> Shi Jin, Ph.D.
>
>


-- 
Shi Jin, Ph.D.
--
Linux-cluster mailing list
Linux-cluster@redhat.com
https://www.redhat.com/mailman/listinfo/linux-cluster
