Hello,

If I understand the code correctly, this warning has at least two meanings:

1. From the code comments, it is just noise; you can ignore it entirely:

```
    # We recommend to activate one LV at a time so that this specific volume
    # binds to a proper filesystem to protect the data
    # TODO:
    # Will this warn message be too noisy?  <==== this line
    if [ -z "$LV" ]; then
        ocf_log warn "You are recommended to activate one LV at a time or use exclusive activation mode."
    fi
```

2. Also from the comments: it is difficult to check the status of all LVs in a
   VG, so if you set up a complex LV layout, the check may not work as expected.

```
# TODO:
# How can we accurately check if LVs in the given VG are all active?
#
# David:
# If we wanted to check that all LVs in the VG are active, then we would
# probably need to use the lvs/lv_live_table command here since dmsetup
# won't know about inactive LVs that should be active.
#
# Eric:
# But, lvs/lv_live_table command doesn't work well now. I tried the following
# method:
#
# lv_count=$(vgs --foreign -o lv_count --noheadings ${VG} 2>/dev/null | tr -d '[:blank:]')
# dm_count=$(dmsetup --noheadings info -c -S "vg_name=${VG}" 2>/dev/null | grep -c "${VG}-")
# test $lv_count -eq $dm_count
#
# It works, but we cannot afford to use LVM commands in lvm_status. An LVM
# command is expensive because it may potentially scan all disks on the system
# and update the metadata, even when using lvs/vgs, if the metadata is somehow
# inconsistent.
#
# So, we have to make the compromise that the VG is assumed active if any LV of
# the VG is active.
#
# Paul:
# VGs + LVs with "-" in their name get mangled with double dashes in dmsetup.
# Switching to wc and just counting lines, while depending on the vgname + lvname
# filter in dmsetup, gets around the issue with dmsetup reporting correctly but
# grep failing.
#
# Logic for both test cases and dmsetup calls changed so they match too.
#
# This is AllBad but there isn't a better way that I'm aware of yet.
```
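To illustrate Paul's point about the double dashes: device-mapper escapes each "-" inside a VG or LV name as "--" and then joins the two names with a single "-", so a pattern like `grep -c "${VG}-"` can miscount when names contain dashes. A minimal sketch of the mangling rule (the `dm_name` helper is mine for illustration, not part of the RA):

```
#!/bin/sh
# Sketch: how LVM derives the dm device name from a VG name and an LV name.
# Each "-" inside either name is doubled, then the parts are joined by "-".
dm_name() {
    vg=$(printf '%s' "$1" | sed 's/-/--/g')
    lv=$(printf '%s' "$2" | sed 's/-/--/g')
    printf '%s-%s\n' "$vg" "$lv"
}

dm_name testVG home   # -> testVG-home
dm_name my-vg my-lv   # -> my--vg-my--lv
```

With a dash-free VG name like testVG the "${VG}-" grep still matches as intended, which may be why the miscount only shows up with dashed names.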

Thanks

On 12/1/20 2:23 AM, Andrei Borzenkov wrote:
30.11.2020 15:36, Ulrich Windl wrote:
Hi!

I configured a shared LVM activation as per instructions (I hope) in SLES15 
SP2. However I get this warning:
LVM-activate(prm_testVG_activate)[57281]: WARNING: You are recommended to 
activate one LV at a time or use exclusive activation mode.

The configuration is:
primitive prm_testVG_activate LVM-activate \
         params vgname=testVG vg_access_mode=lvmlockd activation_mode=shared ...

And I cloned the primitive. So where is the problem?


The comments in RA say

         # We recommend to activate one LV at a time so that this specific volume
         # binds to a proper filesystem to protect the data

I have no idea what "binds to a proper filesystem" is supposed to mean.
My best guess is that it's trying to describe dependency between LVM
resource and filesystem resource, but in this case I also do not see how
shared and exclusive activation differ here.

GIT history does not help, resource was added initially with this warning.

Does it mean I have to use parameter "lvname"? If so, does it mean that step 3 of 
"PROCEDURE 22.4: CREATING AN LVM-ACTIVATE RESOURCE" is missing the parameter, too?

Example given is:
root # crm configure primitive vg1 ocf:heartbeat:LVM-activate \
params vgname=vg1 vg_access_mode=lvmlockd activation_mode=shared \
op start timeout=90s interval=0 \
op stop timeout=90s interval=0 \
op monitor interval=30s timeout=90s

Regards,
Ulrich



_______________________________________________
Manage your subscription:
https://lists.clusterlabs.org/mailman/listinfo/users

ClusterLabs home: https://www.clusterlabs.org/

