Hi!

I think the problem is this: when cluster software is active on a node and you 
add a VG with YaST, the VG defaults to "clustered". Setting the VG to 
non-clustered might fix the problem. If a clustered VG is intended, you must 
start the cLVM framework first.
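For example, something along these lines (a sketch using the VG name from 
your mail; the --config override is only needed while clvmd is not running):

    # a trailing 'c' in the Attr column marks a clustered VG
    vgs -o vg_name,vg_attr vgtest01

    # clear the clustered flag, disabling locking for this one command
    vgchange -cn vgtest01 --config 'global { locking_type = 0 }'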

Regards,
Ulrich

>>> Craig Lesle <craig.le...@bruden.com> wrote on 24.07.2013 at 21:19 in
message <51f028bc.8030...@bruden.com>:
> Thought I would start here to report a clvm/dlm issue. If this is the 
> wrong place for this inquiry, please let me know - thank you.
> 
> I upgraded a two-node Pacemaker HA configuration from openSUSE 12.2 to 
> 12.3. The 12.2 configuration had previously been running fine for several 
> months.
> 
> I am having trouble activating any cluster-enabled logical volumes. 
> Activating an existing logical volume and creating a new one both fail 
> the same way, with "invalid argument".
> 
> admin01:~ # lvcreate --name test1lv -L 3G vgtest01
>    Error locking on node c01010a: Invalid argument
>    Error locking on node d01010a: Invalid argument
>    Failed to activate new LV.
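A first check here (a sketch; it assumes the dlm_tool utility from the DLM 
userland is installed) would be whether both nodes have joined a clvmd 
lockspace at all:

    # expect a 'clvmd' lockspace on each node once clvmd is running
    dlm_tool ls

    # recent dlm_controld debug output often shows the rejected request
    dlm_tool dump | tail -20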
> 
> 
> To see whether this was an upgrade issue, I stood up a temporary Pacemaker 
> cluster with a clean install of openSUSE 12.3.
> 
> Unfortunately I am seeing the same result from this clean installation.
> 
> ============
> Last updated: Wed Jul 24 15:02:52 2013
> Last change: Wed Jul 24 11:57:58 2013 by root via cibadmin on wilma
> Stack: openais
> Current DC: fred - partition with quorum
> Version: 1.1.7-61a079313275f3e9d0e85671f62c721d32ce3563
> 2 Nodes configured, 2 expected votes
> 8 Resources configured.
> ============
> 
> Online: [ wilma fred ]
> 
> Full list of resources:
> 
>   st-fred        (stonith:external/esxi_free):   Started wilma
>   st-wilma       (stonith:external/esxi_free):   Started fred
>   Clone Set: base-clone [base-group]
>       Started: [ fred wilma ]
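What base-group contains is not visible from this listing; assuming it is 
meant to carry the DLM controld and clvmd resources, that could be confirmed 
with:

    crm configure show base-group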
> 
> 
> wilma:~ # pvs
>    PV         VG       Fmt  Attr PSize  PFree
>    /dev/sda2  vg00     lvm2 a--  11.80g 1.30g
>    /dev/sdb   vgtest01 lvm2 a--   4.00g 4.00g
> 
> wilma:~ # vgs
>    VG       #PV #LV #SN Attr   VSize  VFree
>    vg00       1   2   0 wz--n- 11.80g 1.30g
>    vgtest01   1   0   0 wz--nc  4.00g 4.00g
> 
> wilma:~ # lvs
>    LV     VG   Attr      LSize Pool Origin Data%  Move Log Copy% Convert
>    rootlv vg00 -wi-ao--- 9.00g
>    swaplv vg00 -wi-ao--- 1.50g
> 
> 
> 
> wilma:~ # lvcreate --name apachelv -L 3G vgtest01
>    Error locking on node 2d01010a: Invalid argument
>    Error locking on node 2e01010a: Invalid argument
>    Failed to activate new LV.
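One way to narrow this down (a sketch; 'testls' is an arbitrary, hypothetical 
lockspace name) is to bypass clvmd and test whether a plain DLM lockspace can 
be joined and left at all:

    dlm_tool join testls
    dlm_tool ls
    dlm_tool leave testls

If that fails as well, the problem sits below clvmd, in dlm_controld or the 
kernel DLM.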
> 
> 
> The actual error seems to be passed back to LVM from the DLM:
> 
> 2013-07-24T14:51:24.974031-04:00 wilma lvm[2652]: LVM thread waiting for 
> work
> 2013-07-24T14:51:24.974800-04:00 wilma lvm[2652]: 771817738 got message 
> from nodeid 771817738 for 755040522. len 18
> 2013-07-24T14:51:24.984424-04:00 wilma lvm[2652]: 771817738 got message 
> from nodeid 755040522 for 0. len 84
> 2013-07-24T14:51:24.984745-04:00 wilma lvm[2652]: add_to_lvmqueue: 
> cmd=0x10dd620. client=0x69eb60, msg=0x7fda07acedfc, len=84, 
> csid=0x7fffed3359bc, xid=0
> 2013-07-24T14:51:24.985161-04:00 wilma lvm[2652]: process_work_item: remote
> 2013-07-24T14:51:24.985501-04:00 wilma lvm[2652]: process_remote_command 
> LOCK_LV (0x32) for clientid 0x5000000 XID 26 on node 2d01010a
> 2013-07-24T14:51:24.985804-04:00 wilma lvm[2652]: do_lock_lv: resource 
> 'cf7Jq5RfXQNx1RjSvxelxjDv4SVOdtT35ae3knXL7L1oyU2KfrwdWLnpKrDiG8GC', cmd 
> = 0x99 LCK_LV_ACTIVATE (READ|LV|NONBLOCK|CLUSTER_VG), flags = 0x4 
> ( DMEVENTD_MONITOR ), critical_section = 0
> 2013-07-24T14:51:24.986191-04:00 wilma lvm[2652]: lock_resource 
> 'cf7Jq5RfXQNx1RjSvxelxjDv4SVOdtT35ae3knXL7L1oyU2KfrwdWLnpKrDiG8GC', 
> flags=1, mode=1
> 2013-07-24T14:51:24.986528-04:00 wilma lvm[2652]: dlm_ls_lock returned 22
> 2013-07-24T14:51:24.986905-04:00 wilma lvm[2652]: hold_lock. lock at 1 
> failed: Invalid argument
> 2013-07-24T14:51:24.987288-04:00 wilma lvm[2652]: Command return is 
> 22, critical_section is 0
> 2013-07-24T14:51:24.987592-04:00 wilma lvm[2652]: LVM thread waiting for 
> work
> 2013-07-24T14:51:24.987908-04:00 wilma lvm[2652]: 771817738 got message 
> from nodeid 771817738 for 755040522. len 35
> 2013-07-24T14:51:25.002361-04:00 wilma lvm[2652]: 771817738 got message 
> from nodeid 755040522 for 0. len 84
> 2013-07-24T14:51:25.002672-04:00 wilma lvm[2652]: add_to_lvmqueue: 
> cmd=0x10dd620. client=0x69eb60, msg=0x7fda07acefec, len=84, 
> csid=0x7fffed3359bc, xid=0
> 2013-07-24T14:51:25.003013-04:00 wilma lvm[2652]: process_work_item: remote
> 2013-07-24T14:51:25.003337-04:00 wilma lvm[2652]: process_remote_command 
> LOCK_LV (0x32) for clientid 0x5000000 XID 27 on node 2d01010a
> 2013-07-24T14:51:25.003651-04:00 wilma lvm[2652]: do_lock_lv: resource 
> 'cf7Jq5RfXQNx1RjSvxelxjDv4SVOdtT35ae3knXL7L1oyU2KfrwdWLnpKrDiG8GC', cmd 
> = 0x98 LCK_LV_DEACTIVATE
> (NULL|LV|NONBLOCK|CLUSTER_VG), flags = 0x4 ( DMEVENTD_MONITOR ), 
> critical_section = 0
> 2013-07-24T14:51:25.003959-04:00 wilma lvm[2652]: do_deactivate_lock, 
> lock not already held
> 2013-07-24T14:51:25.004379-04:00 wilma lvm[2652]: Command return is 0, 
> critical_section is 0
> 2013-07-24T14:51:25.004750-04:00 wilma lvm[2652]: LVM thread waiting for 
> work
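dlm_ls_lock() returning 22 is EINVAL from the kernel DLM, i.e. the lock 
request itself was rejected rather than merely denied. Two quick checks 
(assuming the usual configfs mount and the dlm_tool utility):

    # lockspaces the kernel side has actually joined
    ls /sys/kernel/config/dlm/cluster/spaces/

    # kernel lock state for the clvmd lockspace
    dlm_tool lockdebug clvmd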
> 
> 
> 
> Versions installed:
> 
> wilma:~ # rpm -qa | egrep 
> 'clvm|cluster-glue|corosync|crmsh|dlm|libglue2|openais|pacemaker|resource-agent' 
> | sort
> cluster-glue-1.0.11-2.1.1.x86_64
> corosync-1.4.3-4.1.1.x86_64
> crmsh-1.2.4-3.1.1.x86_64
> libcorosync4-1.4.3-4.1.1.x86_64
> libdlm-3.00.01-25.5.1.x86_64
> libdlm-devel-3.00.01-25.5.1.x86_64
> libdlm3-3.00.01-25.5.1.x86_64
> libglue2-1.0.11-2.1.1.x86_64
> libopenais3-1.1.4-15.1.1.x86_64
> libpacemaker3-1.1.7-3.1.1.x86_64
> lvm2-clvm-2.02.98-20.2.1.x86_64
> openais-1.1.4-15.1.1.x86_64
> pacemaker-1.1.7-3.1.1.x86_64
> resource-agents-3.9.5-2.4.1.x86_64
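Since the userland here is libdlm-3.00 while 12.3 ships a considerably newer 
kernel, a mismatch between the two halves of the DLM would be worth ruling 
out (a sketch; the exact module output will vary):

    uname -r
    modinfo dlm | head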
> 


_______________________________________________
Linux-HA mailing list
Linux-HA@lists.linux-ha.org
http://lists.linux-ha.org/mailman/listinfo/linux-ha
See also: http://linux-ha.org/ReportingProblems
