Re: [ClusterLabs] [pacemaker+ clvm] Cluster lvm must be active exclusively to create snapshot

2016-12-05 Thread su liu
hello digimer,

I am happy to tell you that I found the reason why I could not access the LVs
on the compute1 node.

I had made a mistake in /etc/lvm/lvm.conf on the compute1 node.
Now it works.
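
For anyone hitting the same symptom: I will not claim this is exactly the
line I had wrong, but for clvmd both nodes generally need the clustered
locking settings in /etc/lvm/lvm.conf, roughly like this sketch:

# /etc/lvm/lvm.conf (relevant parts only, typical values for a clvmd setup)
global {
    locking_type = 3               # built-in clustered locking through clvmd
    use_lvmetad = 0                # lvmetad is not supported with clustered locking
    fallback_to_local_locking = 0  # fail rather than silently fall back to local locking
}

(On RHEL/CentOS 7, "lvmconf --enable-cluster" sets most of this for you.)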


Next I will study how to snapshot an LV.

Thank you!

2016-12-06 14:24 GMT+08:00 su liu :

> This is the resource configuration within my pacemaker cluster:
>
> [root@controller ~]# cibadmin --query --scope resources
> [The CIB XML was stripped by the list archive's HTML rendering. It contained
> the definitions of the cloned dlm (ocf:pacemaker:controld) and clvmd
> (ocf:heartbeat:clvm) resources, including the clvm instance attributes
> allow_stonith_disabled="true" and activate_vgs="true", plus start/stop
> operations with 90-100 second timeouts and a monitor operation.]
> [root@controller ~]#
>
>
>
> 2016-12-06 14:16 GMT+08:00 su liu :
>
>> Thank you very much.
>>
>> Because I am new to pacemaker, and I have checked the docs that
>> additional devices are needed when configing stonith, but now I does not
>> have it in my environment.
>>
>> I will see how to config it afterward.
>>
>> Now I want to know how the cluster LVM works. Thank you for your patience
>> explanation.
>>
>> The scene is:
>>
>> controller node + compute1 node
>>
>> I mount a SAN to both controller and compute1 node. Then I run a
>> pacemaker + corosync + clvmd cluster:
>>
>> [root@controller ~]# pcs status --full
>> Cluster name: mycluster
>> Last updated: Tue Dec  6 14:09:59 2016 Last change: Mon Dec  5 21:26:02
>> 2016 by root via cibadmin on controller
>> Stack: corosync
>> Current DC: compute1 (2) (version 1.1.13-10.el7_2.4-44eb2dd) - partition
>> with quorum
>> 2 nodes and 4 resources configured
>>
>> Online: [ compute1 (2) controller (1) ]
>>
>> Full list of resources:
>>
>>  Clone Set: dlm-clone [dlm]
>>  dlm (ocf::pacemaker:controld): Started compute1
>>  dlm (ocf::pacemaker:controld): Started controller
>>  Started: [ compute1 controller ]
>>  Clone Set: clvmd-clone [clvmd]
>>  clvmd (ocf::heartbeat:clvm): Started compute1
>>  clvmd (ocf::heartbeat:clvm): Started controller
>>  Started: [ compute1 controller ]
>>
>> Node Attributes:
>> * Node compute1 (2):
>> * Node controller (1):
>>
>> Migration Summary:
>> * Node compute1 (2):
>> * Node controller (1):
>>
>> PCSD Status:
>>   controller: Online
>>   compute1: Online
>>
>> Daemon Status:
>>   corosync: active/disabled
>>   pacemaker: active/disabled
>>   pcsd: active/enabled
>>
>>
>>
>> step 2:
>>
>> I create a cluster VG:cinder-volumes:
>>
>> [root@controller ~]# vgdisplay
>>   --- Volume group ---
>>   VG Name   cinder-volumes
>>   System ID
>>   Formatlvm2
>>   Metadata Areas1
>>   Metadata Sequence No  44
>>   VG Access read/write
>>   VG Status resizable
>>   Clustered yes
>>   Sharedno
>>   MAX LV0
>>   Cur LV0
>>   Open LV   0
>>   Max PV0
>>   Cur PV1
>>   Act PV1
>>   VG Size   1000.00 GiB
>>   PE Size   4.00 MiB
>>   Total PE  255999
>>   Alloc PE / Size   0 / 0
>>   Free  PE / Size   255999 / 1000.00 GiB
>>   VG UUID   aLamHi-mMcI-2NsC-Spjm-QWZr-MzHx-pPYSTt
>>
>> [root@controller ~]#
>>
>>
>> Step 3 :
>>
>> I create a LV and I want it can be seen and accessed on the compute1 node
>> but it is failed:
>>
>> [root@controller ~]# lvcreate --name test001 --size 1024m cinder-volumes
>>   Logical volume "test001" created.
>> [root@controller ~]#
>> [root@controller ~]#
>> [root@controller ~]# lvs
>>   LV  VG Attr   LSize Pool Origin Data%  Meta%  Move
>> Log Cpy%Sync Convert
>>   test001 cinder-volumes -wi-a- 1.00g
>>
>> [root@controller ~]#
>> [root@controller ~]#
>> [root@controller ~]# ll /dev/cinder-volumes/test001
>> lrwxrwxrwx 1 root root 7 Dec  6 14:13 /dev/cinder-volumes/test001 ->
>> ../dm-0
>>
>>
>>
>> I can access it on the contrller node, but on the comput1 node, I can see
>> it with lvs command .but cant access it with ls command, because it is not
>> exists on the /dev/cinder-volumes directory:
>>
>>
>> [root@compute1 ~]# lvs
>>   LV  VG Attr   LSize Pool Origin Data%  Meta%  Move
>> Log Cpy%Sync Convert
>>   test001 cinder-volumes -wi--- 1.00g
>>
>> [root@compute1 ~]#
>> [root@compute1 ~]#
>> [root@compute1 ~]# ll /dev/cinder-volumes
>> ls: cannot access /dev/cinder-volumes: No such file or directory
>> [root@compute1 ~]#
>> [root@compute1 ~]#
>> [root@compute1 ~]# lvscan
>>   inactive  '/dev/cinder-volumes/test001' [1.00 GiB] inherit
>> [root@compute1 ~]#
>>
>>
>>
>> Is something error with my configuration besides stonith?  Could you help
>> me?  thank you very much.
>>
>>
>>
>>
>>
>>
>>
>>
>>
>>
>> 2016-12-06 11:37 

Re: [ClusterLabs] [pacemaker+ clvm] Cluster lvm must be active exclusively to create snapshot

2016-12-05 Thread su liu
This is the resource configuration within my pacemaker cluster:

[root@controller ~]# cibadmin --query --scope resources
[The CIB XML was stripped by the list archive's HTML rendering. It contained
the definitions of the cloned dlm (ocf:pacemaker:controld) and clvmd
(ocf:heartbeat:clvm) resources, including clvm instance attributes such as
allow_stonith_disabled="true" and activate_vgs="true", plus their start,
stop and monitor operations.]
[root@controller ~]#
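
For reference, resources like these are normally created with pcs along the
following lines. This is only a sketch of the usual dlm + clvmd setup, not
the exact commands I ran; my CIB uses slightly different operation timeouts
and also sets the clvm parameters allow_stonith_disabled and activate_vgs:

pcs property set no-quorum-policy=freeze
pcs resource create dlm ocf:pacemaker:controld \
    op monitor interval=30s on-fail=fence clone interleave=true ordered=true
pcs resource create clvmd ocf:heartbeat:clvm \
    op monitor interval=30s on-fail=fence clone interleave=true ordered=true
pcs constraint order start dlm-clone then clvmd-clone
pcs constraint colocation add clvmd-clone with dlm-clone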



2016-12-06 14:16 GMT+08:00 su liu :

> Thank you very much.
>
> Because I am new to pacemaker, and I have checked the docs that additional
> devices are needed when configing stonith, but now I does not have it in my
> environment.
>
> I will see how to config it afterward.
>
> Now I want to know how the cluster LVM works. Thank you for your patience
> explanation.
>
> The scene is:
>
> controller node + compute1 node
>
> I mount a SAN to both controller and compute1 node. Then I run a pacemaker
> + corosync + clvmd cluster:
>
> [root@controller ~]# pcs status --full
> Cluster name: mycluster
> Last updated: Tue Dec  6 14:09:59 2016 Last change: Mon Dec  5 21:26:02
> 2016 by root via cibadmin on controller
> Stack: corosync
> Current DC: compute1 (2) (version 1.1.13-10.el7_2.4-44eb2dd) - partition
> with quorum
> 2 nodes and 4 resources configured
>
> Online: [ compute1 (2) controller (1) ]
>
> Full list of resources:
>
>  Clone Set: dlm-clone [dlm]
>  dlm (ocf::pacemaker:controld): Started compute1
>  dlm (ocf::pacemaker:controld): Started controller
>  Started: [ compute1 controller ]
>  Clone Set: clvmd-clone [clvmd]
>  clvmd (ocf::heartbeat:clvm): Started compute1
>  clvmd (ocf::heartbeat:clvm): Started controller
>  Started: [ compute1 controller ]
>
> Node Attributes:
> * Node compute1 (2):
> * Node controller (1):
>
> Migration Summary:
> * Node compute1 (2):
> * Node controller (1):
>
> PCSD Status:
>   controller: Online
>   compute1: Online
>
> Daemon Status:
>   corosync: active/disabled
>   pacemaker: active/disabled
>   pcsd: active/enabled
>
>
>
> step 2:
>
> I create a cluster VG:cinder-volumes:
>
> [root@controller ~]# vgdisplay
>   --- Volume group ---
>   VG Name   cinder-volumes
>   System ID
>   Formatlvm2
>   Metadata Areas1
>   Metadata Sequence No  44
>   VG Access read/write
>   VG Status resizable
>   Clustered yes
>   Sharedno
>   MAX LV0
>   Cur LV0
>   Open LV   0
>   Max PV0
>   Cur PV1
>   Act PV1
>   VG Size   1000.00 GiB
>   PE Size   4.00 MiB
>   Total PE  255999
>   Alloc PE / Size   0 / 0
>   Free  PE / Size   255999 / 1000.00 GiB
>   VG UUID   aLamHi-mMcI-2NsC-Spjm-QWZr-MzHx-pPYSTt
>
> [root@controller ~]#
>
>
> Step 3 :
>
> I create a LV and I want it can be seen and accessed on the compute1 node
> but it is failed:
>
> [root@controller ~]# lvcreate --name test001 --size 1024m cinder-volumes
>   Logical volume "test001" created.
> [root@controller ~]#
> [root@controller ~]#
> [root@controller ~]# lvs
>   LV  VG Attr   LSize Pool Origin Data%  Meta%  Move
> Log Cpy%Sync Convert
>   test001 cinder-volumes -wi-a- 1.00g
>
> [root@controller ~]#
> [root@controller ~]#
> [root@controller ~]# ll /dev/cinder-volumes/test001
> lrwxrwxrwx 1 root root 7 Dec  6 14:13 /dev/cinder-volumes/test001 ->
> ../dm-0
>
>
>
> I can access it on the contrller node, but on the comput1 node, I can see
> it with lvs command .but cant access it with ls command, because it is not
> exists on the /dev/cinder-volumes directory:
>
>
> [root@compute1 ~]# lvs
>   LV  VG Attr   LSize Pool Origin Data%  Meta%  Move
> Log Cpy%Sync Convert
>   test001 cinder-volumes -wi--- 1.00g
>
> [root@compute1 ~]#
> [root@compute1 ~]#
> [root@compute1 ~]# ll /dev/cinder-volumes
> ls: cannot access /dev/cinder-volumes: No such file or directory
> [root@compute1 ~]#
> [root@compute1 ~]#
> [root@compute1 ~]# lvscan
>   inactive  '/dev/cinder-volumes/test001' [1.00 GiB] inherit
> [root@compute1 ~]#
>
>
>
> Is something error with my configuration besides stonith?  Could you help
> me?  thank you very much.
>
>
>
>
>
>
>
>
>
>
> 2016-12-06 11:37 GMT+08:00 Digimer :
>
>> On 05/12/16 10:32 PM, su liu wrote:
>> > Digimer, thank you very much!
>> >
>> > I do not need to have the data accessible on both nodes at once. I want
>> > to use the clvm+pacemaker+corosync in OpenStack Cinder.
>>
>> I'm not sure what "cinder" is, so I don't know what it needs to work.
>>
>> > then only a VM need access the LV at once. But the Cinder service which
>> > runs on the controller node is  responsible for snapshotting the LVs
>> > which are attaching on the VMs runs on other Compute nodes(such as
>> > compute1 node).
>>
>> If you don't need to access an LV on more than one node at a time, then
>> don't add clustered LVM and keep things simple. 

Re: [ClusterLabs] [pacemaker+ clvm] Cluster lvm must be active exclusively to create snapshot

2016-12-05 Thread su liu
Thank you very much.

Because I am new to pacemaker: I have checked the docs, and additional
devices are needed when configuring stonith, but I do not have one in my
environment at the moment.

I will look into configuring it afterwards.

Now I want to know how clustered LVM works. Thank you for your patient
explanation.

The setup is:

controller node + compute1 node

I attach a SAN LUN to both the controller and the compute1 node, then run a
pacemaker + corosync + clvmd cluster:

[root@controller ~]# pcs status --full
Cluster name: mycluster
Last updated: Tue Dec  6 14:09:59 2016 Last change: Mon Dec  5 21:26:02
2016 by root via cibadmin on controller
Stack: corosync
Current DC: compute1 (2) (version 1.1.13-10.el7_2.4-44eb2dd) - partition
with quorum
2 nodes and 4 resources configured

Online: [ compute1 (2) controller (1) ]

Full list of resources:

 Clone Set: dlm-clone [dlm]
 dlm (ocf::pacemaker:controld): Started compute1
 dlm (ocf::pacemaker:controld): Started controller
 Started: [ compute1 controller ]
 Clone Set: clvmd-clone [clvmd]
 clvmd (ocf::heartbeat:clvm): Started compute1
 clvmd (ocf::heartbeat:clvm): Started controller
 Started: [ compute1 controller ]

Node Attributes:
* Node compute1 (2):
* Node controller (1):

Migration Summary:
* Node compute1 (2):
* Node controller (1):

PCSD Status:
  controller: Online
  compute1: Online

Daemon Status:
  corosync: active/disabled
  pacemaker: active/disabled
  pcsd: active/enabled



Step 2:

I create a clustered VG, cinder-volumes:

[root@controller ~]# vgdisplay
  --- Volume group ---
  VG Name               cinder-volumes
  System ID
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  44
  VG Access             read/write
  VG Status             resizable
  Clustered             yes
  Shared                no
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               1000.00 GiB
  PE Size               4.00 MiB
  Total PE              255999
  Alloc PE / Size       0 / 0
  Free  PE / Size       255999 / 1000.00 GiB
  VG UUID               aLamHi-mMcI-2NsC-Spjm-QWZr-MzHx-pPYSTt

[root@controller ~]#
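
The clustered VG itself was created roughly as follows (/dev/sdb is only a
placeholder for the shared SAN LUN):

pvcreate /dev/sdb
vgcreate --clustered y cinder-volumes /dev/sdb
# an existing VG can be converted with: vgchange --clustered y cinder-volumes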


Step 3:

I create an LV and want it to be visible and accessible on the compute1
node, but that fails:

[root@controller ~]# lvcreate --name test001 --size 1024m cinder-volumes
  Logical volume "test001" created.
[root@controller ~]#
[root@controller ~]#
[root@controller ~]# lvs
  LV  VG Attr   LSize Pool Origin Data%  Meta%  Move
Log Cpy%Sync Convert
  test001 cinder-volumes -wi-a- 1.00g

[root@controller ~]#
[root@controller ~]#
[root@controller ~]# ll /dev/cinder-volumes/test001
lrwxrwxrwx 1 root root 7 Dec  6 14:13 /dev/cinder-volumes/test001 -> ../dm-0



I can access it on the controller node, but on the compute1 node I can see
it with the lvs command, yet I cannot access it with ls, because it does
not exist in the /dev/cinder-volumes directory:


[root@compute1 ~]# lvs
  LV  VG Attr   LSize Pool Origin Data%  Meta%  Move
Log Cpy%Sync Convert
  test001 cinder-volumes -wi--- 1.00g

[root@compute1 ~]#
[root@compute1 ~]#
[root@compute1 ~]# ll /dev/cinder-volumes
ls: cannot access /dev/cinder-volumes: No such file or directory
[root@compute1 ~]#
[root@compute1 ~]#
[root@compute1 ~]# lvscan
  inactive  '/dev/cinder-volumes/test001' [1.00 GiB] inherit
[root@compute1 ~]#
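
(Side note for anyone reading this in the archive: when clvmd is healthy on
the second node, the device node normally appears once the LV is activated
there, e.g.:

[root@compute1 ~]# lvchange -ay cinder-volumes/test001    # or: vgchange -ay cinder-volumes

after which lvscan should report it as ACTIVE and /dev/cinder-volumes/test001
should exist. In my case this did not work until the lvm.conf problem
mentioned elsewhere in this thread was fixed.)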



Is something wrong with my configuration besides stonith? Could you help
me? Thank you very much.










2016-12-06 11:37 GMT+08:00 Digimer :

> On 05/12/16 10:32 PM, su liu wrote:
> > Digimer, thank you very much!
> >
> > I do not need to have the data accessible on both nodes at once. I want
> > to use the clvm+pacemaker+corosync in OpenStack Cinder.
>
> I'm not sure what "cinder" is, so I don't know what it needs to work.
>
> > then only a VM need access the LV at once. But the Cinder service which
> > runs on the controller node is  responsible for snapshotting the LVs
> > which are attaching on the VMs runs on other Compute nodes(such as
> > compute1 node).
>
> If you don't need to access an LV on more than one node at a time, then
> don't add clustered LVM and keep things simple. If you are using DRBD,
> keep the backup secondary. If you are using LUNs, only connect the LUN
> to the host that needs it at a given time.
>
> In HA, you always want to keep things as simple as possible.
>
> > Need I active the LVs in /exclusively mode all the time? to supoort
> > snapping it while attaching on the VM./
>
> If you use clustered LVM, yes, but then you can't access the LV on any
> other nodes... If you don't need clustered LVM, then no, you continue to
> use it as simple LVM.
>
> Note; Snapshoting VMs is NOT SAFE unless you have a way to be certain
> that the guest VM has flushed it's caches and is made crash safe before
> the snapshot is made. Otherwise, 

Re: [ClusterLabs] [pacemaker+ clvm] Cluster lvm must be active exclusively to create snapshot

2016-12-05 Thread Digimer
On 05/12/16 10:32 PM, su liu wrote:
> Digimer, thank you very much!
> 
> I do not need to have the data accessible on both nodes at once. I want
> to use the clvm+pacemaker+corosync in OpenStack Cinder.

I'm not sure what "cinder" is, so I don't know what it needs to work.

> then only a VM need access the LV at once. But the Cinder service which
> runs on the controller node is  responsible for snapshotting the LVs
> which are attaching on the VMs runs on other Compute nodes(such as
> compute1 node). 

If you don't need to access an LV on more than one node at a time, then
don't add clustered LVM and keep things simple. If you are using DRBD,
keep the backup secondary. If you are using LUNs, only connect the LUN
to the host that needs it at a given time.

In HA, you always want to keep things as simple as possible.

> Need I active the LVs in /exclusively mode all the time? to supoort
> snapping it while attaching on the VM./

If you use clustered LVM, yes, but then you can't access the LV on any
other nodes... If you don't need clustered LVM, then no, you continue to
use it as simple LVM.

Note: snapshotting VMs is NOT SAFE unless you have a way to be certain
that the guest VM has flushed its caches and is made crash-safe before
the snapshot is made. Otherwise, your snapshot might be corrupted.
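
One way to get that guarantee is to freeze the guest's filesystems around
the snapshot. A rough sketch, assuming libvirt with qemu-guest-agent running
inside the guest ("myvm" and the LV/snapshot names are placeholders):

virsh domfsfreeze myvm                                       # flush and freeze guest filesystems
lvcreate -s -n myvolume-snap -L 1G cinder-volumes/myvolume   # take the LVM snapshot
virsh domfsthaw myvm                                         # thaw the guest again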

> The following is the result of executing the lvs command on the compute1
> node:
>
> [root@compute1 ~]# lvs
>   LV                                          VG             Attr   LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
>   volume-1b0ea468-37c8-4b47-a6fa-6cce65b068b5 cinder-volumes -wi--- 1.00g
>
> and on the controller node:
>
> [root@controller ~]# lvscan
>   ACTIVE            '/dev/cinder-volumes/volume-1b0ea468-37c8-4b47-a6fa-6cce65b068b5' [1.00 GiB] inherit
>
> thank you very much!

Did you set up stonith? If not, things will go bad. Not "if", only
"when". Even in a test environment, you _must_ set up stonith.

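For IPMI-based servers, a minimal setup can look roughly like this (purely
illustrative agent, addresses and credentials; pick the fence agent that
matches your hardware and then actually test it):

pcs stonith create fence_controller fence_ipmilan pcmk_host_list="controller" \
    ipaddr="192.0.2.10" login="admin" passwd="secret" lanplus=1 op monitor interval=60s
pcs stonith create fence_compute1 fence_ipmilan pcmk_host_list="compute1" \
    ipaddr="192.0.2.11" login="admin" passwd="secret" lanplus=1 op monitor interval=60s
pcs property set stonith-enabled=true
stonith_admin --reboot compute1        # verify that fencing really works
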
-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?

___
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org


Re: [ClusterLabs] [pacemaker+ clvm] Cluster lvm must be active exclusively to create snapshot

2016-12-05 Thread su liu
lvscan result on compute1 node:

[root@compute1 ~]# lvscan
  inactive          '/dev/cinder-volumes/volume-1b0ea468-37c8-4b47-a6fa-6cce65b068b5' [1.00 GiB] inherit

2016-12-06 11:32 GMT+08:00 su liu :

> Digimer, thank you very much!
>
> I do not need to have the data accessible on both nodes at once. I want to
> use the clvm+pacemaker+corosync in OpenStack Cinder.
>
> then only a VM need access the LV at once. But the Cinder service which
> runs on the controller node is  responsible for snapshotting the LVs which
> are attaching on the VMs runs on other Compute nodes(such as compute1
> node).
>
> Need I active the LVs in *exclusively mode all the time? to supoort
> snapping it while attaching on the VM.*
>
> *The following is the result when execute lvscan command on compute1 node:*
>
>
>
>
>
>
>
>
>
> *[root@compute1 ~]# lvs  LV  VG
>   Attr   LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync
> Convert  volume-1b0ea468-37c8-4b47-a6fa-6cce65b068b5 cinder-volumes
> -wi--- 1.00gand on the controller node:[root@controller ~]# lvscan
> ACTIVE '/dev/cinder-volumes/volume-1b0ea468-37c8-4b47-a6fa-6cce65b068b5'
> [1.00 GiB] inheritthank you very much!*
>
>
> 2016-12-06 11:15 GMT+08:00 Digimer :
>
>> On 05/12/16 09:10 PM, su liu wrote:
>> > Thanks for your replay,  This snapshot factor will seriously affect my
>> > application.
>>
>> Do you really need to have the data accessible on both nodes at once? To
>> do this requires a cluster file system as well, like gfs2. These all
>> require cluster locking (DLM) which is slow compared to normal file
>> systems. It also adds a lot of complexity.
>>
>> In my experience, most people who start thinking they want concurrent
>> access don't really need it, and that makes things a lot simpler.
>>
>> > then, because now I have not a stonith device and I want to verify the
>> > basic process of snapshot a clustered LV.
>>
>> Working stonith *is* part of basic process. It is integral to testing
>> failure and recovery. So it should be a high priority, even in a proof
>> of concept/test environment.
>>
>> > I have a more question:
>> >
>> > After I create a VG: cinder-volumes on controller node, I can see it
>> > throuth vgs command on both controller and compute
>> > 1 nodes. then i create a
>> > LV:volume-1b0ea468-37c8-4b47-a6fa-6cce65b068b5. Then I execute the lvs
>> > command on both nodes:
>> >
>> > [root@controller ~]# lvs
>> >   LV  VG Attr
>> >   LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
>> >   volume-1b0ea468-37c8-4b47-a6fa-6cce65b068b5 cinder-volumes
>> > -wi-a- 1.00g
>> > [root@controller ~]#
>> > [root@controller ~]#
>> > [root@controller ~]#
>> > [root@controller ~]# ll /dev/cinder-volumes/
>> > total 0
>> > lrwxrwxrwx 1 root root 7 Dec  5 21:29
>> > volume-1b0ea468-37c8-4b47-a6fa-6cce65b068b5 -> ../dm-0
>> >
>> >
>> >
>> > [root@compute1 ~]# lvs
>> >   LV  VG Attr
>> >   LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
>> >   volume-1b0ea468-37c8-4b47-a6fa-6cce65b068b5 cinder-volumes
>> > -wi--- 1.00g
>> > [root@compute1 ~]#
>> > [root@compute1 ~]#
>> > [root@compute1 ~]# ll /dev/cinder-volumes
>> > ls: cannot access /dev/cinder-volumes: No such file or directory
>> > [root@compute1 ~]#
>> >
>> >
>> >
>> > But it seems that the LV can't be exist on the compute1 node. My
>> > question is that how to access the LV on the compute1 node?
>> >
>> > thanks very much!
>>
>> Do you see it after 'lvscan'? You should see it on both nodes at the
>> same time as soon as it is created, *if* things are working properly. It
>> is possible, without stonith, that they are not.
>>
>> Please configure and test stonith, and see if the problem remains. If it
>> does, tail the system logs on both nodes, create the LV on the
>> controller and report back what log messages show up.
>>
>> digimer
>>
>> >
>> > 2016-12-06 9:26 GMT+08:00 Digimer > > >:
>> >
>> > On 05/12/16 08:16 PM, su liu wrote:
>> > > *Hi all,
>> > >
>> > > *
>> > > *I am new to pacemaker and I have some questions about the clvmd +
>> > > pacemaker + corosync. I wish you could explain it for me if you
>> are
>> > > free. thank you very much!
>> > >
>> > > *
>> > > *I have 2 nodes and the pacemaker's status is as follows:*
>> > >
>> > > [root@controller ~]# pcs status --full
>> > > Cluster name: mycluster
>> > > Last updated: Mon Dec  5 18:15:12 2016Last change: Fri
>> > Dec  2
>> > > 15:01:03 2016 by root via cibadmin on compute1
>> > > Stack: corosync
>> > > Current DC: compute1 (2) (version 1.1.13-10.el7_2.4-44eb2dd) -
>> > partition
>> > > with quorum
>> > > 2 nodes and 4 resources configured
>> > >
>> > > Online: [ compute1 (2) controller (1) ]
>> > >

Re: [ClusterLabs] [pacemaker+ clvm] Cluster lvm must be active exclusively to create snapshot

2016-12-05 Thread su liu
Digimer, thank you very much!

I do not need to have the data accessible on both nodes at once. I want to
use clvm + pacemaker + corosync with OpenStack Cinder.

Only one VM needs to access an LV at a time, but the Cinder service, which
runs on the controller node, is responsible for snapshotting the LVs that
are attached to VMs running on other compute nodes (such as the compute1
node).
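
For context, the Cinder LVM backend on the controller points at this VG.
The relevant cinder.conf section looks roughly like this (a sketch; the
section and backend names are just examples):

[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
volume_backend_name = LVM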

Do I need to activate the LVs in exclusive mode all the time, in order to
support snapshotting them while they are attached to a VM?

The following is the result of executing the lvs command on the compute1
node:

[root@compute1 ~]# lvs
  LV                                          VG             Attr   LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  volume-1b0ea468-37c8-4b47-a6fa-6cce65b068b5 cinder-volumes -wi--- 1.00g

and on the controller node:

[root@controller ~]# lvscan
  ACTIVE            '/dev/cinder-volumes/volume-1b0ea468-37c8-4b47-a6fa-6cce65b068b5' [1.00 GiB] inherit

thank you very much!


2016-12-06 11:15 GMT+08:00 Digimer :

> On 05/12/16 09:10 PM, su liu wrote:
> > Thanks for your replay,  This snapshot factor will seriously affect my
> > application.
>
> Do you really need to have the data accessible on both nodes at once? To
> do this requires a cluster file system as well, like gfs2. These all
> require cluster locking (DLM) which is slow compared to normal file
> systems. It also adds a lot of complexity.
>
> In my experience, most people who start thinking they want concurrent
> access don't really need it, and that makes things a lot simpler.
>
> > then, because now I have not a stonith device and I want to verify the
> > basic process of snapshot a clustered LV.
>
> Working stonith *is* part of basic process. It is integral to testing
> failure and recovery. So it should be a high priority, even in a proof
> of concept/test environment.
>
> > I have a more question:
> >
> > After I create a VG: cinder-volumes on controller node, I can see it
> > throuth vgs command on both controller and compute
> > 1 nodes. then i create a
> > LV:volume-1b0ea468-37c8-4b47-a6fa-6cce65b068b5. Then I execute the lvs
> > command on both nodes:
> >
> > [root@controller ~]# lvs
> >   LV  VG Attr
> >   LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
> >   volume-1b0ea468-37c8-4b47-a6fa-6cce65b068b5 cinder-volumes
> > -wi-a- 1.00g
> > [root@controller ~]#
> > [root@controller ~]#
> > [root@controller ~]#
> > [root@controller ~]# ll /dev/cinder-volumes/
> > total 0
> > lrwxrwxrwx 1 root root 7 Dec  5 21:29
> > volume-1b0ea468-37c8-4b47-a6fa-6cce65b068b5 -> ../dm-0
> >
> >
> >
> > [root@compute1 ~]# lvs
> >   LV  VG Attr
> >   LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
> >   volume-1b0ea468-37c8-4b47-a6fa-6cce65b068b5 cinder-volumes
> > -wi--- 1.00g
> > [root@compute1 ~]#
> > [root@compute1 ~]#
> > [root@compute1 ~]# ll /dev/cinder-volumes
> > ls: cannot access /dev/cinder-volumes: No such file or directory
> > [root@compute1 ~]#
> >
> >
> >
> > But it seems that the LV can't be exist on the compute1 node. My
> > question is that how to access the LV on the compute1 node?
> >
> > thanks very much!
>
> Do you see it after 'lvscan'? You should see it on both nodes at the
> same time as soon as it is created, *if* things are working properly. It
> is possible, without stonith, that they are not.
>
> Please configure and test stonith, and see if the problem remains. If it
> does, tail the system logs on both nodes, create the LV on the
> controller and report back what log messages show up.
>
> digimer
>
> >
> > 2016-12-06 9:26 GMT+08:00 Digimer  > >:
> >
> > On 05/12/16 08:16 PM, su liu wrote:
> > > *Hi all,
> > >
> > > *
> > > *I am new to pacemaker and I have some questions about the clvmd +
> > > pacemaker + corosync. I wish you could explain it for me if you are
> > > free. thank you very much!
> > >
> > > *
> > > *I have 2 nodes and the pacemaker's status is as follows:*
> > >
> > > [root@controller ~]# pcs status --full
> > > Cluster name: mycluster
> > > Last updated: Mon Dec  5 18:15:12 2016Last change: Fri
> > Dec  2
> > > 15:01:03 2016 by root via cibadmin on compute1
> > > Stack: corosync
> > > Current DC: compute1 (2) (version 1.1.13-10.el7_2.4-44eb2dd) -
> > partition
> > > with quorum
> > > 2 nodes and 4 resources configured
> > >
> > > Online: [ compute1 (2) controller (1) ]
> > >
> > > Full list of resources:
> > >
> > >  Clone Set: dlm-clone [dlm]
> > >  dlm(ocf::pacemaker:controld):Started compute1
> > >  dlm(ocf::pacemaker:controld):Started controller
> > >  Started: [ compute1 controller ]
> > >  Clone Set: clvmd-clone [clvmd]
> > >  clvmd(ocf::heartbeat:clvm):

Re: [ClusterLabs] [pacemaker+ clvm] Cluster lvm must be active exclusively to create snapshot

2016-12-05 Thread Digimer
On 05/12/16 09:10 PM, su liu wrote:
> Thanks for your replay,  This snapshot factor will seriously affect my
> application.

Do you really need to have the data accessible on both nodes at once? To
do this requires a cluster file system as well, like gfs2. These all
require cluster locking (DLM) which is slow compared to normal file
systems. It also adds a lot of complexity.

In my experience, most people who start thinking they want concurrent
access don't really need it, and that makes things a lot simpler.
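
To give a sense of the extra moving parts, putting a shared filesystem on
top of a clustered LV would look roughly like this (LV and mount point are
placeholders; the -t value must be <cluster_name>:<fs_name> and -j is one
journal per node):

mkfs.gfs2 -p lock_dlm -t mycluster:shared0 -j 2 /dev/cinder-volumes/some_lv
mount -t gfs2 /dev/cinder-volumes/some_lv /mnt/shared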

> then, because now I have not a stonith device and I want to verify the
> basic process of snapshot a clustered LV.

Working stonith *is* part of the basic process. It is integral to testing
failure and recovery. So it should be a high priority, even in a proof
of concept/test environment.

> I have a more question:
>
> After I create a VG: cinder-volumes on controller node, I can see it
> throuth vgs command on both controller and compute 
> 1 nodes. then i create a
> LV:volume-1b0ea468-37c8-4b47-a6fa-6cce65b068b5. Then I execute the lvs
> command on both nodes:
>
> [root@controller ~]# lvs
>   LV  VG Attr
>   LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
>   volume-1b0ea468-37c8-4b47-a6fa-6cce65b068b5 cinder-volumes
> -wi-a- 1.00g
> [root@controller ~]# 
> [root@controller ~]# 
> [root@controller ~]# 
> [root@controller ~]# ll /dev/cinder-volumes/
> total 0
> lrwxrwxrwx 1 root root 7 Dec  5 21:29
> volume-1b0ea468-37c8-4b47-a6fa-6cce65b068b5 -> ../dm-0
>
>
>
> [root@compute1 ~]# lvs
>   LV  VG Attr
>   LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
>   volume-1b0ea468-37c8-4b47-a6fa-6cce65b068b5 cinder-volumes
> -wi--- 1.00g
> [root@compute1 ~]# 
> [root@compute1 ~]# 
> [root@compute1 ~]# ll /dev/cinder-volumes
> ls: cannot access /dev/cinder-volumes: No such file or directory
> [root@compute1 ~]# 
>
>
>
> But it seems that the LV can't be exist on the compute1 node. My
> question is that how to access the LV on the compute1 node?
>
> thanks very much!

Do you see it after 'lvscan'? You should see it on both nodes at the
same time as soon as it is created, *if* things are working properly. It
is possible, without stonith, that they are not.

Please configure and test stonith, and see if the problem remains. If it
does, tail the system logs on both nodes, create the LV on the
controller and report back what log messages show up.
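
Concretely, something like this (a sketch; "test002" is just an example LV
name):

journalctl -f -u corosync -u pacemaker      # leave this running on both nodes
# then, on the controller:
lvcreate --name test002 --size 1G cinder-volumes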

digimer

>
> 2016-12-06 9:26 GMT+08:00 Digimer  >:
>
> On 05/12/16 08:16 PM, su liu wrote:
> > *Hi all,
> >
> > *
> > *I am new to pacemaker and I have some questions about the clvmd +
> > pacemaker + corosync. I wish you could explain it for me if you are
> > free. thank you very much!
> >
> > *
> > *I have 2 nodes and the pacemaker's status is as follows:*
> >
> > [root@controller ~]# pcs status --full
> > Cluster name: mycluster
> > Last updated: Mon Dec  5 18:15:12 2016Last change: Fri
> Dec  2
> > 15:01:03 2016 by root via cibadmin on compute1
> > Stack: corosync
> > Current DC: compute1 (2) (version 1.1.13-10.el7_2.4-44eb2dd) -
> partition
> > with quorum
> > 2 nodes and 4 resources configured
> >
> > Online: [ compute1 (2) controller (1) ]
> >
> > Full list of resources:
> >
> >  Clone Set: dlm-clone [dlm]
> >  dlm(ocf::pacemaker:controld):Started compute1
> >  dlm(ocf::pacemaker:controld):Started controller
> >  Started: [ compute1 controller ]
> >  Clone Set: clvmd-clone [clvmd]
> >  clvmd(ocf::heartbeat:clvm):Started compute1
> >  clvmd(ocf::heartbeat:clvm):Started controller
> >  Started: [ compute1 controller ]
> >
> > Node Attributes:
> > * Node compute1 (2):
> > * Node controller (1):
> >
> > Migration Summary:
> > * Node compute1 (2):
> > * Node controller (1):
> >
> > PCSD Status:
> >   controller: Online
> >   compute1: Online
> >
> > Daemon Status:
> >   corosync: active/disabled
> >   pacemaker: active/disabled
> >   pcsd: active/enabled
> > *
> > *
>
> You need to configure and enable (and test!) stonith. This is
> doubly-so
> with clustered LVM/shared storage.
>
> > *I create a lvm on controller node and it can be seen on the
> compute1
> > node immediately with 'lvs' command. but the lvm it not activate on
> > compute1.
> >
> > *
> > *then i want to create a snapshot of the lvm, but failed with
> the error
> > message:*
> >
> > /### volume-4fad87bb-3d4c-4a96-bef1-8799980050d1 must be active
> > exclusively to create snapshot ###
> >
>  

Re: [ClusterLabs] [pacemaker+ clvm] Cluster lvm must be active exclusively to create snapshot

2016-12-05 Thread su liu
Thanks for your reply. This snapshot limitation will seriously affect my
application.

For now I do not have a stonith device, and I first want to verify the
basic process of snapshotting a clustered LV.

I have one more question:

After I create the VG cinder-volumes on the controller node, I can see it
through the vgs command on both the controller and compute1 nodes. Then I
create an LV, volume-1b0ea468-37c8-4b47-a6fa-6cce65b068b5, and execute the
lvs command on both nodes:

[root@controller ~]# lvs
  LV  VG Attr
LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  volume-1b0ea468-37c8-4b47-a6fa-6cce65b068b5 cinder-volumes -wi-a-
1.00g
[root@controller ~]#
[root@controller ~]#
[root@controller ~]#
[root@controller ~]# ll /dev/cinder-volumes/
total 0
lrwxrwxrwx 1 root root 7 Dec  5 21:29
volume-1b0ea468-37c8-4b47-a6fa-6cce65b068b5 -> ../dm-0



[root@compute1 ~]# lvs
  LV  VG Attr
LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  volume-1b0ea468-37c8-4b47-a6fa-6cce65b068b5 cinder-volumes -wi---
1.00g
[root@compute1 ~]#
[root@compute1 ~]#
[root@compute1 ~]# ll /dev/cinder-volumes
ls: cannot access /dev/cinder-volumes: No such file or directory
[root@compute1 ~]#



But it seems that the LV does not exist on the compute1 node. My question
is: how do I access the LV on the compute1 node?

thanks very much!

2016-12-06 9:26 GMT+08:00 Digimer :

> On 05/12/16 08:16 PM, su liu wrote:
> > *Hi all,
> >
> > *
> > *I am new to pacemaker and I have some questions about the clvmd +
> > pacemaker + corosync. I wish you could explain it for me if you are
> > free. thank you very much!
> >
> > *
> > *I have 2 nodes and the pacemaker's status is as follows:*
> >
> > [root@controller ~]# pcs status --full
> > Cluster name: mycluster
> > Last updated: Mon Dec  5 18:15:12 2016Last change: Fri Dec  2
> > 15:01:03 2016 by root via cibadmin on compute1
> > Stack: corosync
> > Current DC: compute1 (2) (version 1.1.13-10.el7_2.4-44eb2dd) - partition
> > with quorum
> > 2 nodes and 4 resources configured
> >
> > Online: [ compute1 (2) controller (1) ]
> >
> > Full list of resources:
> >
> >  Clone Set: dlm-clone [dlm]
> >  dlm(ocf::pacemaker:controld):Started compute1
> >  dlm(ocf::pacemaker:controld):Started controller
> >  Started: [ compute1 controller ]
> >  Clone Set: clvmd-clone [clvmd]
> >  clvmd(ocf::heartbeat:clvm):Started compute1
> >  clvmd(ocf::heartbeat:clvm):Started controller
> >  Started: [ compute1 controller ]
> >
> > Node Attributes:
> > * Node compute1 (2):
> > * Node controller (1):
> >
> > Migration Summary:
> > * Node compute1 (2):
> > * Node controller (1):
> >
> > PCSD Status:
> >   controller: Online
> >   compute1: Online
> >
> > Daemon Status:
> >   corosync: active/disabled
> >   pacemaker: active/disabled
> >   pcsd: active/enabled
> > *
> > *
>
> You need to configure and enable (and test!) stonith. This is doubly-so
> with clustered LVM/shared storage.
>
> > *I create a lvm on controller node and it can be seen on the compute1
> > node immediately with 'lvs' command. but the lvm it not activate on
> > compute1.
> >
> > *
> > *then i want to create a snapshot of the lvm, but failed with the error
> > message:*
> >
> > /### volume-4fad87bb-3d4c-4a96-bef1-8799980050d1 must be active
> > exclusively to create snapshot ###
> >
> > /
> > *Can someone tell me how to snapshot a lvm in the cluster lvm
> > environment? thank you very much。*
>
> This is how it works. You can't snapshot a clustered LV, as the error
> indicates. The process is ACTIVE -> deactivate on all node -> set
> exclusive on one node -> set it back to ACTIVE, then you can snapshot.
>
> It's not very practical, unfortunately.
>
> > Additional information:
> >
> > [root@controller ~]# vgdisplay
> >   --- Volume group ---
> >   VG Name   cinder-volumes
> >   System ID
> >   Formatlvm2
> >   Metadata Areas1
> >   Metadata Sequence No  19
> >   VG Access read/write
> >   VG Status resizable
> >   Clustered yes
> >   Sharedno
> >   MAX LV0
> >   Cur LV1
> >   Open LV   0
> >   Max PV0
> >   Cur PV1
> >   Act PV1
> >   VG Size   1000.00 GiB
> >   PE Size   4.00 MiB
> >   Total PE  255999
> >   Alloc PE / Size   256 / 1.00 GiB
> >   Free  PE / Size   255743 / 999.00 GiB
> >   VG UUID   aLamHi-mMcI-2NsC-Spjm-QWZr-MzHx-pPYSTt
> >
> > [root@controller ~]# rpm -qa |grep pacem
> > pacemaker-cli-1.1.13-10.el7_2.4.x86_64
> > pacemaker-libs-1.1.13-10.el7_2.4.x86_64
> > pacemaker-1.1.13-10.el7_2.4.x86_64
> > pacemaker-cluster-libs-1.1.13-10.el7_2.4.x86_64
> >
> >
> > [root@controller ~]# lvs
> >   LV 

Re: [ClusterLabs] [pacemaker+ clvm] Cluster lvm must be active exclusively to create snapshot

2016-12-05 Thread Digimer
On 05/12/16 08:16 PM, su liu wrote:
> Hi all,
>
> I am new to pacemaker and I have some questions about clvmd +
> pacemaker + corosync. I hope you can explain it for me when you are
> free. Thank you very much!
>
> I have 2 nodes and the pacemaker status is as follows:
> 
> [root@controller ~]# pcs status --full
> Cluster name: mycluster
> Last updated: Mon Dec  5 18:15:12 2016    Last change: Fri Dec  2
> 15:01:03 2016 by root via cibadmin on compute1
> Stack: corosync
> Current DC: compute1 (2) (version 1.1.13-10.el7_2.4-44eb2dd) - partition
> with quorum
> 2 nodes and 4 resources configured
> 
> Online: [ compute1 (2) controller (1) ]
> 
> Full list of resources:
> 
>  Clone Set: dlm-clone [dlm]
>  dlm (ocf::pacemaker:controld): Started compute1
>  dlm (ocf::pacemaker:controld): Started controller
>  Started: [ compute1 controller ]
>  Clone Set: clvmd-clone [clvmd]
>  clvmd (ocf::heartbeat:clvm): Started compute1
>  clvmd (ocf::heartbeat:clvm): Started controller
>  Started: [ compute1 controller ]
> 
> Node Attributes:
> * Node compute1 (2):
> * Node controller (1):
> 
> Migration Summary:
> * Node compute1 (2):
> * Node controller (1):
> 
> PCSD Status:
>   controller: Online
>   compute1: Online
> 
> Daemon Status:
>   corosync: active/disabled
>   pacemaker: active/disabled
>   pcsd: active/enabled

You need to configure and enable (and test!) stonith. This is doubly so
with clustered LVM/shared storage.

> I create an LV on the controller node and it can be seen on the compute1
> node immediately with the 'lvs' command, but the LV is not activated on
> compute1.
>
> Then I want to create a snapshot of the LV, but it fails with the error
> message:
>
> ### volume-4fad87bb-3d4c-4a96-bef1-8799980050d1 must be active
> exclusively to create snapshot ###
>
> Can someone tell me how to snapshot an LV in a clustered LVM
> environment? Thank you very much.

This is how it works. You can't snapshot a clustered LV, as the error
indicates. The process is ACTIVE -> deactivate on all nodes -> set
exclusive on one node -> set it back to ACTIVE, then you can snapshot.

It's not very practical, unfortunately.
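
In commands, that sequence is roughly the following (a sketch only; the
snapshot name and size are placeholders, and whatever is using the LV has
to be stopped or paused around the deactivation):

# on every node:
lvchange -an cinder-volumes/volume-4fad87bb-3d4c-4a96-bef1-8799980050d1
# on the one node that will take the snapshot (exclusive activation):
lvchange -aey cinder-volumes/volume-4fad87bb-3d4c-4a96-bef1-8799980050d1
lvcreate -s -n volume-4fad87bb-snap -L 1G \
    cinder-volumes/volume-4fad87bb-3d4c-4a96-bef1-8799980050d1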

> Additional information:
> 
> [root@controller ~]# vgdisplay 
>   --- Volume group ---
>   VG Name               cinder-volumes
>   System ID
>   Format                lvm2
>   Metadata Areas        1
>   Metadata Sequence No  19
>   VG Access             read/write
>   VG Status             resizable
>   Clustered             yes
>   Shared                no
>   MAX LV                0
>   Cur LV                1
>   Open LV               0
>   Max PV                0
>   Cur PV                1
>   Act PV                1
>   VG Size               1000.00 GiB
>   PE Size               4.00 MiB
>   Total PE              255999
>   Alloc PE / Size       256 / 1.00 GiB
>   Free  PE / Size       255743 / 999.00 GiB
>   VG UUID               aLamHi-mMcI-2NsC-Spjm-QWZr-MzHx-pPYSTt
> 
> [root@controller ~]# rpm -qa |grep pacem
> pacemaker-cli-1.1.13-10.el7_2.4.x86_64
> pacemaker-libs-1.1.13-10.el7_2.4.x86_64
> pacemaker-1.1.13-10.el7_2.4.x86_64
> pacemaker-cluster-libs-1.1.13-10.el7_2.4.x86_64
> 
> 
> [root@controller ~]# lvs
>   LV  VG Attr  
> LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
>   volume-1b0ea468-37c8-4b47-a6fa-6cce65b068b5 cinder-volumes -wi-a-
> 1.00g
> 
> 
> [root@compute1 ~]# lvs
>   LV  VG Attr  
> LSize Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
>   volume-1b0ea468-37c8-4b47-a6fa-6cce65b068b5 cinder-volumes -wi---
> 1.00g
> 
> 
> thank you very much!
> 
> 
> 
> 
> 
> ___
> Users mailing list: Users@clusterlabs.org
> http://lists.clusterlabs.org/mailman/listinfo/users
> 
> Project Home: http://www.clusterlabs.org
> Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
> Bugs: http://bugs.clusterlabs.org
> 


-- 
Digimer
Papers and Projects: https://alteeve.ca/w/
What if the cure for cancer is trapped in the mind of a person without
access to education?

___
Users mailing list: Users@clusterlabs.org
http://lists.clusterlabs.org/mailman/listinfo/users

Project Home: http://www.clusterlabs.org
Getting started: http://www.clusterlabs.org/doc/Cluster_from_Scratch.pdf
Bugs: http://bugs.clusterlabs.org