Hello community,

here is the log from the commit of package lvm2 for openSUSE:Factory checked in at 2019-09-23 13:15:40
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Comparing /work/SRC/openSUSE:Factory/lvm2 (Old)
 and      /work/SRC/openSUSE:Factory/.lvm2.new.7948 (New)
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Package is "lvm2"

Mon Sep 23 13:15:40 2019 rev:126 rq:730344 version:2.03.05

Changes:
--------
--- /work/SRC/openSUSE:Factory/lvm2/lvm2.changes        2019-07-15 22:42:36.923922250 +0200
+++ /work/SRC/openSUSE:Factory/.lvm2.new.7948/lvm2.changes      2019-09-23 13:15:46.801122590 +0200
@@ -1,0 +2,46 @@
+Mon Sep  9 12:00:00 UTC 2019 - heming.z...@suse.com
+
+- Update lvm2.spec: make baselibs.conf a common source.
+
+-------------------------------------------------------------------
+Mon Sep  9 11:00:25 UTC 2019 - g...@suse.com
+
+- Avoid creating PVs with mixed block sizes in LVM volume groups (bsc#1149408)
+  + bug-1149408_Fix-rounding-writes-up-to-sector-size.patch
+  + bug-1149408_vgcreate-vgextend-restrict-PVs-with-mixed-block-size.patch
+- Update lvm.conf files
+  - add devices/allow_mixed_block_sizes item
+
+-------------------------------------------------------------------
+Mon Sep 02 11:21:03 UTC 2019 - heming.z...@suse.com
+
+- Update to LVM2.2.03.05
+  - Drop the lvm2-clvm and lvm2-cmirrord rpms (jsc#PM-1324)
+  - Fix out-of-date package (bsc#1111734)
+  - Fix occasional slow shutdowns with kernel 5.0.0 and up (bsc#1137648)
+  - Remove clvmd
+  - Remove lvmlib (api)
+  - Remove lvmetad
+- Drop patches that have been merged into upstream
+  - bug-1114113_metadata-prevent-writing-beyond-metadata-area.patch
+  - bug-1137296_pvremove-vgextend-fix-using-device-aliases-with-lvmetad.patch
+  - bug-1135984_cache-support-no_discard_passdown.patch
+- Drop patches that no longer exist or are unsupported upstream
+  - bsc1080299-detect-clvm-properly.patch
+  - bug-998893_make_pvscan_service_after_multipathd.patch
+  - bug-978055_clvmd-try-to-refresh-device-cache-on-the-first-failu.patch
+  - bug-950089_test-fix-lvm2-testsuite-build-error.patch
+  - bug-1072624_test-lvmetad_dump-always-timed-out-when-using-nc.patch
+  - tests-specify-python3-as-the-script-interpreter.patch
+- Update spec files
+  - merge device-mapper, lvm2-lockd, lvm2 into one spec file
+  - clvmd/lvmlib (api)/lvmetad have been removed, so delete the related content in the spec file
+- Update lvm.conf files
+  - remove all lvmetad lines/keywords
+  - add event_activation
+  - remove fallback_to_lvm1 & related items
+  - remove locking_type/fallback_to_clustered_locking/fallback_to_local_locking items
+  - remove locking_library item
+  - remove all special filter rules
+
+-------------------------------------------------------------------

Old:
----
  LVM2.2.02.180.tgz
  LVM2.2.02.180.tgz.asc
  bsc1080299-detect-clvm-properly.patch
  bug-1072624_test-lvmetad_dump-always-timed-out-when-using-nc.patch
  bug-1114113_metadata-prevent-writing-beyond-metadata-area.patch
  bug-1135984_cache-support-no_discard_passdown.patch
  bug-1137296_pvremove-vgextend-fix-using-device-aliases-with-lvmetad.patch
  bug-950089_test-fix-lvm2-testsuite-build-error.patch
  bug-978055_clvmd-try-to-refresh-device-cache-on-the-first-failu.patch
  bug-998893_make_pvscan_service_after_multipathd.patch
  device-mapper.changes
  device-mapper.spec
  lvm2-clvm.changes
  lvm2-clvm.spec
  pre_checkin.sh
  tests-specify-python3-as-the-script-interpreter.patch

New:
----
  LVM2.2.03.05.tgz
  LVM2.2.03.05.tgz.asc
  _multibuild
  bug-1149408_Fix-rounding-writes-up-to-sector-size.patch
  bug-1149408_vgcreate-vgextend-restrict-PVs-with-mixed-block-size.patch

++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Other differences:
------------------
++++++ lvm2.spec ++++++
++++ 744 lines (skipped)
++++ between /work/SRC/openSUSE:Factory/lvm2/lvm2.spec
++++ and /work/SRC/openSUSE:Factory/.lvm2.new.7948/lvm2.spec

++++++ LVM2.2.02.180.tgz -> LVM2.2.03.05.tgz ++++++
++++ 143808 lines of diff (skipped)

++++++ _multibuild ++++++
<multibuild>
  <package>devicemapper</package>
  <package>lockd</package>
</multibuild>
++++++ bug-1149408_Fix-rounding-writes-up-to-sector-size.patch ++++++
From 7f347698e3d09b15b4f9aed9c61239fda7b9e8c8 Mon Sep 17 00:00:00 2001
From: David Teigland <teigl...@redhat.com>
Date: Fri, 26 Jul 2019 14:21:08 -0500
Subject: [PATCH] Fix rounding writes up to sector size

Do this at two levels, although one would be enough to
fix the problem seen recently:

- Ignore any reported sector size other than 512 or 4096.
  If either sector size (physical or logical) is reported
  as 512, then use 512.  If neither are reported as 512,
  and one or the other is reported as 4096, then use 4096.
  If neither is reported as either 512 or 4096, then use 512.

- When rounding up a limited write in bcache to be a multiple
  of the sector size, check that the resulting write size is
  not larger than the bcache block itself.  (This shouldn't
  happen if the sector size is 512 or 4096.)
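
The selection rule in the first bullet reduces to a few comparisons. A
minimal C sketch of it, with a hypothetical helper name (an editorial
illustration, not code from the patch):

    /* Pick a sector size per the rule above: prefer 512, then 4096. */
    static unsigned int choose_sector_size(unsigned int physical_block_size,
                                           unsigned int logical_block_size)
    {
            if (physical_block_size == 512 || logical_block_size == 512)
                    return 512;   /* either size reported as 512 */
            if (physical_block_size == 4096 || logical_block_size == 4096)
                    return 4096;  /* neither is 512, one is 4096 */
            return 512;           /* neither 512 nor 4096: fall back */
    }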
---
 lib/device/bcache.c | 89 +++++++++++++++++++++++++++++++++++++++++++++++++++--
 lib/device/dev-io.c | 52 +++++++++++++++++++++++++++++++
 lib/device/device.h |  8 +++--
 lib/label/label.c   | 30 ++++++++++++++----
 4 files changed, 169 insertions(+), 10 deletions(-)

diff --git a/lib/device/bcache.c b/lib/device/bcache.c
index 7b0935352..04fbf3521 100644
--- a/lib/device/bcache.c
+++ b/lib/device/bcache.c
@@ -169,6 +169,7 @@ static bool _async_issue(struct io_engine *ioe, enum dir d, int fd,
        sector_t offset;
        sector_t nbytes;
        sector_t limit_nbytes;
+       sector_t orig_nbytes;
        sector_t extra_nbytes = 0;
 
        if (((uintptr_t) data) & e->page_mask) {
@@ -191,11 +192,41 @@ static bool _async_issue(struct io_engine *ioe, enum dir d, int fd,
                        return false;
                }
 
+               /*
+                * If the bcache block offset+len goes beyond where lvm is
+                * intending to write, then reduce the len being written
+                * (which is the bcache block size) so we don't write past
+                * the limit set by lvm.  If after applying the limit, the
+                * resulting size is not a multiple of the sector size (512
+                * or 4096) then extend the reduced size to be a multiple of
+                * the sector size (we don't want to write partial sectors.)
+                */
                if (offset + nbytes > _last_byte_offset) {
                        limit_nbytes = _last_byte_offset - offset;
-                       if (limit_nbytes % _last_byte_sector_size)
+
+                       if (limit_nbytes % _last_byte_sector_size) {
                                extra_nbytes = _last_byte_sector_size - (limit_nbytes % _last_byte_sector_size);
 
+                               /*
+                                * adding extra_nbytes to the reduced nbytes (limit_nbytes)
+                                * should make the final write size a multiple of the
+                                * sector size.  This should never result in a final size
+                                * larger than the bcache block size (as long as the bcache
+                                * block size is a multiple of the sector size).
+                                */
+                               if (limit_nbytes + extra_nbytes > nbytes) {
+                                       log_warn("Skip extending write at %llu len %llu limit %llu extra %llu sector_size %llu",
+                                                (unsigned long long)offset,
+                                                (unsigned long long)nbytes,
+                                                (unsigned long long)limit_nbytes,
+                                                (unsigned long long)extra_nbytes,
+                                                (unsigned long long)_last_byte_sector_size);
+                                       extra_nbytes = 0;
+                               }
+                       }
+
+                       orig_nbytes = nbytes;
+
                        if (extra_nbytes) {
                                log_debug("Limit write at %llu len %llu to len 
%llu rounded to %llu",
                                          (unsigned long long)offset,
@@ -210,6 +241,22 @@ static bool _async_issue(struct io_engine *ioe, enum dir d, int fd,
                                          (unsigned long long)limit_nbytes);
                                nbytes = limit_nbytes;
                        }
+
+                       /*
+                        * This shouldn't happen, the reduced+extended
+                        * nbytes value should never be larger than the
+                        * bcache block size.
+                        */
+                       if (nbytes > orig_nbytes) {
+                               log_error("Invalid adjusted write at %llu len %llu adjusted %llu limit %llu extra %llu sector_size %llu",
+                                         (unsigned long long)offset,
+                                         (unsigned long long)orig_nbytes,
+                                         (unsigned long long)nbytes,
+                                         (unsigned long long)limit_nbytes,
+                                         (unsigned long long)extra_nbytes,
+                                         (unsigned long long)_last_byte_sector_size);
+                               return false;
+                       }
                }
        }
 
@@ -403,6 +450,7 @@ static bool _sync_issue(struct io_engine *ioe, enum dir d, int fd,
                uint64_t nbytes = len;
                sector_t limit_nbytes = 0;
                sector_t extra_nbytes = 0;
+               sector_t orig_nbytes = 0;
 
                if (offset > _last_byte_offset) {
                        log_error("Limit write at %llu len %llu beyond last 
byte %llu",
@@ -415,9 +463,30 @@ static bool _sync_issue(struct io_engine *ioe, enum dir d, 
int fd,
 
                if (offset + nbytes > _last_byte_offset) {
                        limit_nbytes = _last_byte_offset - offset;
-                       if (limit_nbytes % _last_byte_sector_size)
+
+                       if (limit_nbytes % _last_byte_sector_size) {
                                extra_nbytes = _last_byte_sector_size - (limit_nbytes % _last_byte_sector_size);
 
+                               /*
+                                * adding extra_nbytes to the reduced nbytes (limit_nbytes)
+                                * should make the final write size a multiple of the
+                                * sector size.  This should never result in a final size
+                                * larger than the bcache block size (as long as the bcache
+                                * block size is a multiple of the sector size).
+                                */
+                               if (limit_nbytes + extra_nbytes > nbytes) {
+                                       log_warn("Skip extending write at %llu len %llu limit %llu extra %llu sector_size %llu",
+                                                (unsigned long long)offset,
+                                                (unsigned long long)nbytes,
+                                                (unsigned long long)limit_nbytes,
+                                                (unsigned long long)extra_nbytes,
+                                                (unsigned long long)_last_byte_sector_size);
+                                       extra_nbytes = 0;
+                               }
+                       }
+
+                       orig_nbytes = nbytes;
+
                        if (extra_nbytes) {
                                log_debug("Limit write at %llu len %llu to len 
%llu rounded to %llu",
                                          (unsigned long long)offset,
@@ -432,6 +501,22 @@ static bool _sync_issue(struct io_engine *ioe, enum dir d, int fd,
                                          (unsigned long long)limit_nbytes);
                                nbytes = limit_nbytes;
                        }
+
+                       /*
+                        * This shouldn't happen, the reduced+extended
+                        * nbytes value should never be larger than the
+                        * bcache block size.
+                        */
+                       if (nbytes > orig_nbytes) {
+                               log_error("Invalid adjusted write at %llu len %llu adjusted %llu limit %llu extra %llu sector_size %llu",
+                                         (unsigned long long)offset,
+                                         (unsigned long long)orig_nbytes,
+                                         (unsigned long long)nbytes,
+                                         (unsigned long long)limit_nbytes,
+                                         (unsigned long long)extra_nbytes,
+                                         (unsigned long long)_last_byte_sector_size);
+                               return false;
+                       }
                }
 
                where = offset;
diff --git a/lib/device/dev-io.c b/lib/device/dev-io.c
index 3fe264755..5fa0b7a9e 100644
--- a/lib/device/dev-io.c
+++ b/lib/device/dev-io.c
@@ -250,6 +250,58 @@ static int _dev_discard_blocks(struct device *dev, uint64_t offset_bytes, uint64
        return 1;
 }
 
+int dev_get_direct_block_sizes(struct device *dev, unsigned int *physical_block_size,
+                               unsigned int *logical_block_size)
+{
+       int fd = dev->bcache_fd;
+       int do_close = 0;
+       unsigned int pbs = 0;
+       unsigned int lbs = 0;
+
+       if (dev->physical_block_size || dev->logical_block_size) {
+               *physical_block_size = dev->physical_block_size;
+               *logical_block_size = dev->logical_block_size;
+               return 1;
+       }
+
+       if (fd <= 0) {
+               if (!dev_open_readonly(dev))
+                       return 0;
+               fd = dev_fd(dev);
+               do_close = 1;
+       }
+
+       /*
+        * BLKPBSZGET from kernel comment for blk_queue_physical_block_size:
+        * "the lowest possible sector size that the hardware can operate on
+        * without reverting to read-modify-write operations"
+        */
+       if (ioctl(fd, BLKPBSZGET, &pbs)) {
+               stack;
+               pbs = 0;
+       }
+
+       /*
+        * BLKSSZGET from kernel comment for blk_queue_logical_block_size:
+        * "the lowest possible block size that the storage device can address."
+        */
+       if (ioctl(fd, BLKSSZGET, &lbs)) {
+               stack;
+               lbs = 0;
+       }
+
+       dev->physical_block_size = pbs;
+       dev->logical_block_size = lbs;
+
+       *physical_block_size = pbs;
+       *logical_block_size = lbs;
+
+       if (do_close && !dev_close_immediate(dev))
+               stack;
+
+       return 1;
+}
+
 /*-----------------------------------------------------------------
  * Public functions
  *---------------------------------------------------------------*/
diff --git a/lib/device/device.h b/lib/device/device.h
index 30e1e79b3..bb65f841d 100644
--- a/lib/device/device.h
+++ b/lib/device/device.h
@@ -67,8 +67,10 @@ struct device {
        /* private */
        int fd;
        int open_count;
-       int phys_block_size;
-       int block_size;
+       int phys_block_size;     /* From either BLKPBSZGET or BLKSSZGET, don't use */
+       int block_size;          /* From BLKBSZGET, returns bdev->bd_block_size, likely set by fs, probably don't use */
+       int physical_block_size; /* From BLKPBSZGET: lowest possible sector size that the hardware can operate on without reverting to read-modify-write operations */
+       int logical_block_size;  /* From BLKSSZGET: lowest possible block size that the storage device can address */
        int read_ahead;
        int bcache_fd;
        uint32_t flags;
@@ -132,6 +134,8 @@ void dev_size_seqno_inc(void);
  * All io should use these routines.
  */
 int dev_get_block_size(struct device *dev, unsigned int *phys_block_size, unsigned int *block_size);
+int dev_get_direct_block_sizes(struct device *dev, unsigned int *physical_block_size,
+                               unsigned int *logical_block_size);
 int dev_get_size(struct device *dev, uint64_t *size);
 int dev_get_read_ahead(struct device *dev, uint32_t *read_ahead);
 int dev_discard_blocks(struct device *dev, uint64_t offset_bytes, uint64_t size_bytes);
diff --git a/lib/label/label.c b/lib/label/label.c
index fb7ad1d56..5d8a0f51b 100644
--- a/lib/label/label.c
+++ b/lib/label/label.c
@@ -1495,16 +1495,34 @@ bool dev_set_bytes(struct device *dev, uint64_t start, size_t len, uint8_t val)
 
 void dev_set_last_byte(struct device *dev, uint64_t offset)
 {
-       unsigned int phys_block_size = 0;
-       unsigned int block_size = 0;
+       unsigned int physical_block_size = 0;
+       unsigned int logical_block_size = 0;
+       unsigned int bs;
 
-       if (!dev_get_block_size(dev, &phys_block_size, &block_size)) {
+       if (!dev_get_direct_block_sizes(dev, &physical_block_size, &logical_block_size)) {
                stack;
-               /* FIXME  ASSERT or regular error testing is missing */
-               return;
+               return; /* FIXME: error path ? */
+       }
+
+       if ((physical_block_size == 512) && (logical_block_size == 512))
+               bs = 512;
+       else if ((physical_block_size == 4096) && (logical_block_size == 4096))
+               bs = 4096;
+       else if ((physical_block_size == 512) || (logical_block_size == 512)) {
+               log_debug("Set last byte mixed block sizes physical %u logical %u using 512",
+                         physical_block_size, logical_block_size);
+               bs = 512;
+       } else if ((physical_block_size == 4096) || (logical_block_size == 4096)) {
+               log_debug("Set last byte mixed block sizes physical %u logical %u using 4096",
+                         physical_block_size, logical_block_size);
+               bs = 4096;
+       } else {
+               log_debug("Set last byte mixed block sizes physical %u logical %u using 512",
+                         physical_block_size, logical_block_size);
+               bs = 512;
        }
 
-       bcache_set_last_byte(scan_bcache, dev->bcache_fd, offset, phys_block_size);
+       bcache_set_last_byte(scan_bcache, dev->bcache_fd, offset, bs);
 }
 
 void dev_unset_last_byte(struct device *dev)
-- 
2.12.3

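The dev_get_direct_block_sizes() helper added above wraps two standard
Linux block ioctls, BLKPBSZGET and BLKSSZGET. As a self-contained
illustration (editorial, not part of the patch), the same values can be
read from any block device like this:

    #include <stdio.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <sys/ioctl.h>
    #include <linux/fs.h>   /* BLKPBSZGET, BLKSSZGET */

    int main(int argc, char **argv)
    {
            unsigned int pbs = 0, lbs = 0;
            int fd;

            if (argc != 2 || (fd = open(argv[1], O_RDONLY)) < 0)
                    return 1;

            /* physical: smallest I/O without read-modify-write */
            if (ioctl(fd, BLKPBSZGET, &pbs))
                    pbs = 0;

            /* logical: smallest addressable block size */
            if (ioctl(fd, BLKSSZGET, &lbs))
                    lbs = 0;

            printf("physical %u logical %u\n", pbs, lbs);
            close(fd);
            return 0;
    }

Run against e.g. /dev/sda: a 4Kn drive reports physical 4096 logical 4096,
while a 512e drive reports physical 4096 logical 512.
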
++++++ bug-1149408_vgcreate-vgextend-restrict-PVs-with-mixed-block-size.patch ++++++
From 0404539edb25e4a9d3456bb3e6b402aa2767af6b Mon Sep 17 00:00:00 2001
From: David Teigland <teigl...@redhat.com>
Date: Thu, 1 Aug 2019 10:06:47 -0500
Subject: [PATCH] vgcreate/vgextend: restrict PVs with mixed block sizes

Avoid having PVs with different logical block sizes in the same VG.
This prevents LVs from having mixed block sizes, which can produce
file system errors.

The new config setting devices/allow_mixed_block_sizes (default 0)
can be changed to 1 to return to the unrestricted mode.
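
In outline, the check remembers the first known logical block size among
the devices and rejects any later device that differs, unless the override
is enabled. A condensed editorial sketch (the helper below is hypothetical;
the real checks live in vg_extend_each_pv() and pvcreate_each_device()):

    /* Return 0 if the logical block sizes (lbs[]) are mixed and mixing
     * is not allowed; unknown sizes (0) are skipped (the real code
     * warns about them). */
    static int check_consistent_lbs(const unsigned int *lbs, int count,
                                    int allow_mixed_block_sizes)
    {
            unsigned int prev_lbs = 0;
            int i;

            for (i = 0; i < count; i++) {
                    if (!lbs[i])
                            continue;          /* unknown: skip */
                    if (!prev_lbs) {
                            prev_lbs = lbs[i]; /* first known size */
                            continue;
                    }
                    if (lbs[i] != prev_lbs && !allow_mixed_block_sizes)
                            return 0;          /* mixed sizes: reject */
            }
            return 1;
    }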
---
 lib/commands/toolcontext.h       |  1 +
 lib/config/config_settings.h     |  5 ++++
 lib/metadata/metadata-exported.h |  1 +
 lib/metadata/metadata.c          | 44 ++++++++++++++++++++++++++++++
 tools/lvmcmdline.c               |  2 ++
 tools/toollib.c                  | 47 ++++++++++++++++++++++++++++++++
 tools/vgcreate.c                 |  2 ++
 7 files changed, 102 insertions(+)

diff --git a/lib/commands/toolcontext.h b/lib/commands/toolcontext.h
index 488752c8f..655d9f297 100644
--- a/lib/commands/toolcontext.h
+++ b/lib/commands/toolcontext.h
@@ -153,6 +153,7 @@ struct cmd_context {
        unsigned include_shared_vgs:1;          /* report/display cmds can reveal lockd VGs */
        unsigned include_active_foreign_vgs:1;  /* cmd should process foreign VGs with active LVs */
        unsigned vg_read_print_access_error:1;  /* print access errors from vg_read */
+       unsigned allow_mixed_block_sizes:1;
        unsigned force_access_clustered:1;
        unsigned lockd_gl_disable:1;
        unsigned lockd_vg_disable:1;
diff --git a/lib/config/config_settings.h b/lib/config/config_settings.h
index 527d5bd07..edfe4a31a 100644
--- a/lib/config/config_settings.h
+++ b/lib/config/config_settings.h
@@ -502,6 +502,11 @@ cfg(devices_allow_changes_with_duplicate_pvs_CFG, "allow_changes_with_duplicate_
        "Enabling this setting allows the VG to be used as usual even with\n"
        "uncertain devices.\n")
 
+cfg(devices_allow_mixed_block_sizes_CFG, "allow_mixed_block_sizes", devices_CFG_SECTION, 0, CFG_TYPE_BOOL, 0, vsn(2, 3, 6), NULL, 0, NULL,
+       "Allow PVs in the same VG with different logical block sizes.\n"
+       "When allowed, the user is responsible to ensure that an LV is\n"
+       "using PVs with matching block sizes when necessary.\n")
+
 cfg_array(allocation_cling_tag_list_CFG, "cling_tag_list", allocation_CFG_SECTION, CFG_DEFAULT_UNDEFINED, CFG_TYPE_STRING, NULL, vsn(2, 2, 77), NULL, 0, NULL,
        "Advise LVM which PVs to use when searching for new space.\n"
        "When searching for free space to extend an LV, the 'cling' allocation\n"
diff --git a/lib/metadata/metadata-exported.h b/lib/metadata/metadata-exported.h
index f18587a73..e1767b78d 100644
--- a/lib/metadata/metadata-exported.h
+++ b/lib/metadata/metadata-exported.h
@@ -623,6 +623,7 @@ struct pvcreate_params {
        unsigned is_remove : 1;         /* is removing PVs, not creating */
        unsigned preserve_existing : 1;
        unsigned check_failed : 1;
+       unsigned check_consistent_block_size : 1;
 };
 
 struct lvresize_params {
diff --git a/lib/metadata/metadata.c b/lib/metadata/metadata.c
index f19df3d1d..e55adc212 100644
--- a/lib/metadata/metadata.c
+++ b/lib/metadata/metadata.c
@@ -751,12 +751,40 @@ int vg_extend_each_pv(struct volume_group *vg, struct pvcreate_params *pp)
 {
        struct pv_list *pvl;
        unsigned int max_phys_block_size = 0;
+       unsigned int physical_block_size, logical_block_size;
+       unsigned int prev_lbs = 0;
+       int inconsistent_existing_lbs = 0;
 
        log_debug_metadata("Adding PVs to VG %s.", vg->name);
 
        if (vg_bad_status_bits(vg, RESIZEABLE_VG))
                return_0;
 
+       /*
+        * Check if existing PVs have inconsistent block sizes.
+        * If so, do not enforce new devices to be consistent.
+        */
+       dm_list_iterate_items(pvl, &vg->pvs) {
+               logical_block_size = 0;
+               physical_block_size = 0;
+
+               if (!dev_get_direct_block_sizes(pvl->pv->dev, &physical_block_size, &logical_block_size))
+                       continue;
+
+               if (!logical_block_size)
+                       continue;
+
+               if (!prev_lbs) {
+                       prev_lbs = logical_block_size;
+                       continue;
+               }
+               
+               if (prev_lbs != logical_block_size) {
+                       inconsistent_existing_lbs = 1;
+                       break;
+               }
+       }
+
        dm_list_iterate_items(pvl, &pp->pvs) {
                log_debug_metadata("Adding PV %s to VG %s.", pv_dev_name(pvl->pv), vg->name);
 
@@ -767,6 +795,22 @@ int vg_extend_each_pv(struct volume_group *vg, struct pvcreate_params *pp)
                        return 0;
                }
 
+               logical_block_size = 0;
+               physical_block_size = 0;
+
+               if (!dev_get_direct_block_sizes(pvl->pv->dev, &physical_block_size, &logical_block_size))
+                       log_warn("WARNING: PV %s has unknown block size.", pv_dev_name(pvl->pv));
+
+               else if (prev_lbs && logical_block_size && (logical_block_size != prev_lbs)) {
+                       if (vg->cmd->allow_mixed_block_sizes || inconsistent_existing_lbs)
+                               log_debug("Devices have inconsistent block sizes (%u and %u)", prev_lbs, logical_block_size);
+                       else {
+                               log_error("Devices have inconsistent logical block sizes (%u and %u).",
+                                         prev_lbs, logical_block_size);
+                               return 0;
+                       }
+               }
+
                if (!add_pv_to_vg(vg, pv_dev_name(pvl->pv), pvl->pv, 0)) {
                        log_error("PV %s cannot be added to VG %s.",
                                  pv_dev_name(pvl->pv), vg->name);
diff --git a/tools/lvmcmdline.c b/tools/lvmcmdline.c
index 30f9d8133..7d29b6fab 100644
--- a/tools/lvmcmdline.c
+++ b/tools/lvmcmdline.c
@@ -2319,6 +2319,8 @@ static int _get_current_settings(struct cmd_context *cmd)
 
        cmd->scan_lvs = find_config_tree_bool(cmd, devices_scan_lvs_CFG, NULL);
 
+       cmd->allow_mixed_block_sizes = find_config_tree_bool(cmd, devices_allow_mixed_block_sizes_CFG, NULL);
+
        /*
         * enable_hints is set to 1 if any commands are using hints.
         * use_hints is set to 1 if this command doesn't use the hints.
diff --git a/tools/toollib.c b/tools/toollib.c
index b2313f8ff..155528c4e 100644
--- a/tools/toollib.c
+++ b/tools/toollib.c
@@ -5355,6 +5355,8 @@ int pvcreate_each_device(struct cmd_context *cmd,
        struct pv_list *vgpvl;
        struct device_list *devl;
        const char *pv_name;
+       unsigned int physical_block_size, logical_block_size;
+       unsigned int prev_pbs = 0, prev_lbs = 0;
        int must_use_all = (cmd->cname->flags & MUST_USE_ALL_ARGS);
        int found;
        unsigned i;
@@ -5394,6 +5396,51 @@ int pvcreate_each_device(struct cmd_context *cmd,
        dm_list_iterate_items(pd, &pp->arg_devices)
                pd->dev = dev_cache_get(cmd, pd->name, cmd->filter);
 
+       /*
+        * Check for consistent block sizes.
+        */
+       if (pp->check_consistent_block_size) {
+               dm_list_iterate_items(pd, &pp->arg_devices) {
+                       if (!pd->dev)
+                               continue;
+
+                       logical_block_size = 0;
+                       physical_block_size = 0;
+
+                       if (!dev_get_direct_block_sizes(pd->dev, &physical_block_size, &logical_block_size)) {
+                               log_warn("WARNING: Unknown block size for device %s.", dev_name(pd->dev));
+                               continue;
+                       }
+
+                       if (!logical_block_size) {
+                               log_warn("WARNING: Unknown logical_block_size for device %s.", dev_name(pd->dev));
+                               continue;
+                       }
+
+                       if (!prev_lbs) {
+                               prev_lbs = logical_block_size;
+                               prev_pbs = physical_block_size;
+                               continue;
+                       }
+
+                       if (prev_lbs == logical_block_size) {
+                               /* Require lbs to match, just warn about unmatching pbs. */
+                               if (!cmd->allow_mixed_block_sizes && prev_pbs && physical_block_size &&
+                                   (prev_pbs != physical_block_size))
+                                       log_warn("WARNING: Devices have inconsistent physical block sizes (%u and %u).",
+                                                 prev_pbs, physical_block_size);
+                               continue;
+                       }
+
+                       if (!cmd->allow_mixed_block_sizes) {
+                               log_error("Devices have inconsistent logical block sizes (%u and %u).",
+                                         prev_lbs, logical_block_size);
+                               log_print("See lvm.conf allow_mixed_block_sizes.");
+                               return 0;
+                       }
+               }
+       }
+
        /*
         * Use process_each_pv to search all existing PVs and devices.
         *
diff --git a/tools/vgcreate.c b/tools/vgcreate.c
index d594ec110..09b6a6c95 100644
--- a/tools/vgcreate.c
+++ b/tools/vgcreate.c
@@ -47,6 +47,8 @@ int vgcreate(struct cmd_context *cmd, int argc, char **argv)
        /* Don't create a new PV on top of an existing PV like pvcreate does. */
        pp.preserve_existing = 1;
 
+       pp.check_consistent_block_size = 1;
+
        if (!vgcreate_params_set_defaults(cmd, &vp_def, NULL))
                return EINVALID_CMD_LINE;
        vp_def.vg_name = vg_name;
-- 
2.21.0

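For administrators who hit the new restriction but genuinely need mixed
devices in one VG, the override described above can be set persistently in
lvm.conf or passed per command with the standard --config option (device
and VG names below are examples):

    # /etc/lvm/lvm.conf
    devices {
            allow_mixed_block_sizes = 1
    }

    # one-off override on the command line:
    vgextend --config 'devices { allow_mixed_block_sizes = 1 }' vg0 /dev/sdb
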
++++++ lvm.conf ++++++
--- /var/tmp/diff_new_pack.JU5a73/_old  2019-09-23 13:15:48.261122328 +0200
+++ /var/tmp/diff_new_pack.JU5a73/_new  2019-09-23 13:15:48.261122328 +0200
@@ -21,7 +21,6 @@
 # N.B. Take care that each setting only appears once if uncommenting
 # example settings in this file.
 
-#hello
 
 # Configuration section config.
 # How LVM configuration settings are handled.
@@ -124,7 +123,6 @@
        # then the device is accepted. Be careful mixing 'a' and 'r' patterns,
        # as the combination might produce unexpected results (test changes.)
        # Run vgscan after changing the filter to regenerate the cache.
-       # See the use_lvmetad comment for a special case regarding filters.
        # 
        # Example
        # Accept every block device:
@@ -140,36 +138,20 @@
        # 
        # This configuration option has an automatic default value.
        # filter = [ "a|.*/|" ]
-       filter = [ "r|/dev/.*/by-path/.*|", "r|/dev/.*/by-id/.*|", "r|/dev/fd.*|", "r|/dev/cdrom|", "a/.*/" ]
+       # The filter below was used in SUSE/openSUSE before lvm2-2.03. It conflicts
+       # with lvm2-2.02.180+, so it is commented out in the lvm2-2.03 release.
+       # filter = [ "r|/dev/.*/by-path/.*|", "r|/dev/.*/by-id/.*|", "r|/dev/fd.*|", "r|/dev/cdrom|", "a/.*/" ]
 
        # Configuration option devices/global_filter.
        # Limit the block devices that are used by LVM system components.
        # Because devices/filter may be overridden from the command line, it is
-       # not suitable for system-wide device filtering, e.g. udev and lvmetad.
+       # not suitable for system-wide device filtering, e.g. udev.
        # Use global_filter to hide devices from these LVM system components.
        # The syntax is the same as devices/filter. Devices rejected by
        # global_filter are not opened by LVM.
        # This configuration option has an automatic default value.
        # global_filter = [ "a|.*/|" ]
 
-       # Configuration option devices/cache_dir.
-       # Directory in which to store the device cache file.
-       # The results of filtering are cached on disk to avoid rescanning dud
-       # devices (which can take a very long time). By default this cache is
-       # stored in a file named .cache. It is safe to delete this file; the
-       # tools regenerate it. If obtain_device_list_from_udev is enabled, the
-       # list of devices is obtained from udev and any existing .cache file
-       # is removed.
-       cache_dir = "/etc/lvm/cache"
-
-       # Configuration option devices/cache_file_prefix.
-       # A prefix used before the .cache file name. See devices/cache_dir.
-       cache_file_prefix = ""
-
-       # Configuration option devices/write_cache_state.
-       # Enable/disable writing the cache file. See devices/cache_dir.
-       write_cache_state = 1
-
        # Configuration option devices/types.
        # List of additional acceptable block device types.
        # These are of device type names from /proc/devices, followed by the
@@ -187,6 +169,10 @@
        # present on the system. sysfs must be part of the kernel and mounted.)
        sysfs_scan = 1
 
+       # Configuration option devices/scan_lvs.
+       # Scan LVM LVs for layered PVs.
+       scan_lvs = 0
+
        # Configuration option devices/multipath_component_detection.
        # Ignore devices that are components of DM multipath devices.
        multipath_component_detection = 1
@@ -270,14 +256,6 @@
        # different way, making them a better choice for VG stacking.
        ignore_lvm_mirrors = 1
 
-       # Configuration option devices/disable_after_error_count.
-       # Number of I/O errors after which a device is skipped.
-       # During each LVM operation, errors received from each device are
-       # counted. If the counter of a device exceeds the limit set here,
-       # no further I/O is sent to that device for the remainder of the
-       # operation. Setting this to 0 disables the counters altogether.
-       disable_after_error_count = 0
-
        # Configuration option devices/require_restorefile_with_uuid.
        # Allow use of pvcreate --uuid without requiring --restorefile.
        require_restorefile_with_uuid = 1
@@ -314,6 +292,12 @@
        # Enabling this setting allows the VG to be used as usual even with
        # uncertain devices.
        allow_changes_with_duplicate_pvs = 0
+
+       # Configuration option devices/allow_mixed_block_sizes.
+       # Allow PVs in the same VG with different logical block sizes.
+       # When allowed, the user is responsible to ensure that an LV is
+       # using PVs with matching block sizes when necessary.
+       allow_mixed_block_sizes = 0
 }
 
 # Configuration section allocation.
@@ -348,7 +332,7 @@
        maximise_cling = 1
 
        # Configuration option allocation/use_blkid_wiping.
-       # Use blkid to detect existing signatures on new PVs and LVs.
+       # Use blkid to detect and erase existing signatures on new PVs and LVs.
        # The blkid library can detect more signatures than the native LVM
        # detection code, but may take longer. LVM needs to be compiled with
        # blkid wiping support for this setting to apply. LVM native detection
@@ -500,6 +484,154 @@
        # Default physical extent size in KiB to use for new VGs.
        # This configuration option has an automatic default value.
        # physical_extent_size = 4096
+
+       # Configuration option allocation/vdo_use_compression.
+       # Enables or disables compression when creating a VDO volume.
+       # Compression may be disabled if necessary to maximize performance
+       # or to speed processing of data that is unlikely to compress.
+       # This configuration option has an automatic default value.
+       # vdo_use_compression = 1
+
+       # Configuration option allocation/vdo_use_deduplication.
+       # Enables or disables deduplication when creating a VDO volume.
+       # Deduplication may be disabled in instances where data is not expected
+       # to have good deduplication rates but compression is still desired.
+       # This configuration option has an automatic default value.
+       # vdo_use_deduplication = 1
+
+       # Configuration option allocation/vdo_use_metadata_hints.
+       # Enables or disables whether VDO volume should tag its latency-critical
+       # writes with the REQ_SYNC flag. Some device mapper targets such as dm-raid5
+       # process writes with this flag at a higher priority.
+       # Default is enabled.
+       # This configuration option has an automatic default value.
+       # vdo_use_metadata_hints = 1
+
+       # Configuration option allocation/vdo_minimum_io_size.
+       # The minimum IO size for VDO volume to accept, in bytes.
+       # Valid values are 512 or 4096. The recommended and default value is 4096.
+       # This configuration option has an automatic default value.
+       # vdo_minimum_io_size = 4096
+
+       # Configuration option allocation/vdo_block_map_cache_size_mb.
+       # Specifies the amount of memory in MiB allocated for caching block map
+       # pages for VDO volume. The value must be a multiple of 4096 and must be
+       # at least 128MiB and less than 16TiB. The cache must be at least 16MiB
+       # per logical thread. Note that there is a memory overhead of 15%.
+       # This configuration option has an automatic default value.
+       # vdo_block_map_cache_size_mb = 128
+
+       # Configuration option allocation/vdo_block_map_period.
+       # The speed with which the block map cache writes out modified block map pages.
+       # A smaller era length is likely to reduce the amount of time spent rebuilding,
+       # at the cost of increased block map writes during normal operation.
+       # The maximum and recommended value is 16380; the minimum value is 1.
+       # This configuration option has an automatic default value.
+       # vdo_block_map_period = 16380
+
+       # Configuration option allocation/vdo_check_point_frequency.
+       # The default check point frequency for VDO volume.
+       # This configuration option has an automatic default value.
+       # vdo_check_point_frequency = 0
+
+       # Configuration option allocation/vdo_use_sparse_index.
+       # Enables sparse indexing for VDO volume.
+       # This configuration option has an automatic default value.
+       # vdo_use_sparse_index = 0
+
+       # Configuration option allocation/vdo_index_memory_size_mb.
+       # Specifies the amount of index memory in MiB for VDO volume.
+       # The value must be at least 256MiB and at most 1TiB.
+       # This configuration option has an automatic default value.
+       # vdo_index_memory_size_mb = 256
+
+       # Configuration option allocation/vdo_slab_size_mb.
+       # Specifies the size in MiB of the increment by which a VDO is grown.
+       # Using a smaller size constrains the total maximum physical size
+       # that can be accommodated. Must be a power of two between 128MiB and 32GiB.
+       # This configuration option has an automatic default value.
+       # vdo_slab_size_mb = 2048
+
+       # Configuration option allocation/vdo_ack_threads.
+       # Specifies the number of threads to use for acknowledging
+       # completion of requested VDO I/O operations.
+       # The value must be in range [0..100].
+       # This configuration option has an automatic default value.
+       # vdo_ack_threads = 1
+
+       # Configuration option allocation/vdo_bio_threads.
+       # Specifies the number of threads to use for submitting I/O
+       # operations to the storage device of VDO volume.
+       # The value must be in range [1..100].
+       # Each additional thread after the first will use an additional 18MiB of RAM,
+       # plus 1.12 MiB of RAM per megabyte of configured read cache size.
+       # This configuration option has an automatic default value.
+       # vdo_bio_threads = 1
+
+       # Configuration option allocation/vdo_bio_rotation.
+       # Specifies the number of I/O operations to enqueue for each bio-submission
+       # thread before directing work to the next. The value must be in range [1..1024].
+       # This configuration option has an automatic default value.
+       # vdo_bio_rotation = 64
+
+       # Configuration option allocation/vdo_cpu_threads.
+       # Specifies the number of threads to use for CPU-intensive work such as
+       # hashing or compression for VDO volume. The value must be in range [1..100].
+       # This configuration option has an automatic default value.
+       # vdo_cpu_threads = 2
+
+       # Configuration option allocation/vdo_hash_zone_threads.
+       # Specifies the number of threads across which to subdivide parts of the VDO
+       # processing based on the hash value computed from the block data.
+       # The value must be in range [0..100].
+       # vdo_hash_zone_threads, vdo_logical_threads and vdo_physical_threads must be
+       # either all zero or all non-zero.
+       # This configuration option has an automatic default value.
+       # vdo_hash_zone_threads = 1
+
+       # Configuration option allocation/vdo_logical_threads.
+       # Specifies the number of threads across which to subdivide parts of the VDO
+       # processing based on the hash value computed from the block data.
+       # A logical thread count of 9 or more will require explicitly specifying
+       # a sufficiently large block map cache size, as well.
+       # The value must be in range [0..100].
+       # vdo_hash_zone_threads, vdo_logical_threads and vdo_physical_threads must be
+       # either all zero or all non-zero.
+       # This configuration option has an automatic default value.
+       # vdo_logical_threads = 1
+
+       # Configuration option allocation/vdo_physical_threads.
+       # Specifies the number of threads across which to subdivide parts of the VDO
+       # processing based on physical block addresses.
+       # Each additional thread after the first will use an additional 10MiB of RAM.
+       # The value must be in range [0..16].
+       # vdo_hash_zone_threads, vdo_logical_threads and vdo_physical_threads must be
+       # either all zero or all non-zero.
+       # This configuration option has an automatic default value.
+       # vdo_physical_threads = 1
+
+       # Configuration option allocation/vdo_write_policy.
+       # Specifies the write policy:
+       # auto  - VDO will check the storage device and determine whether it supports flushes.
+       #         If it does, VDO will run in async mode, otherwise it will run in sync mode.
+       # sync  - Writes are acknowledged only after data is stably written.
+       #         This policy is not supported if the underlying storage is not also synchronous.
+       # async - Writes are acknowledged after data has been cached for writing to stable storage.
+       #         Data which has not been flushed is not guaranteed to persist in this mode.
+       # This configuration option has an automatic default value.
+       # vdo_write_policy = "auto"
+
+       # Configuration option allocation/vdo_max_discard.
+       # Specifies the maximum size of discard bio accepted, in 4096 byte blocks.
+       # I/O requests to a VDO volume are normally split into 4096-byte blocks,
+       # and processed up to 2048 at a time. However, discard requests to a VDO volume
+       # can be automatically split to a larger size, up to <max discard> 4096-byte blocks
+       # in a single bio, and are limited to 1500 at a time.
+       # Increasing this value may provide better overall performance, at the cost of
+       # increased latency for the individual discard requests.
+       # The default and minimum is 1. The maximum is UINT_MAX / 4096.
+       # This configuration option has an automatic default value.
+       # vdo_max_discard = 1
 }
 
 # Configuration section log.
@@ -614,9 +746,9 @@
        # Select log messages by class.
        # Some debugging messages are assigned to a class and only appear in
        # debug output if the class is listed here. Classes currently
-       # available: memory, devices, activation, allocation, lvmetad,
+       # available: memory, devices, io, activation, allocation,
        # metadata, cache, locking, lvmpolld. Use "all" to see everything.
-       debug_classes = [ "memory", "devices", "activation", "allocation", "lvmetad", "metadata", "cache", "locking", "lvmpolld", "dbus" ]
+       debug_classes = [ "memory", "devices", "io", "activation", "allocation", "metadata", "cache", "locking", "lvmpolld", "dbus" ]
 }
 
 # Configuration section backup.
@@ -704,32 +836,6 @@
        # the error messages.
        activation = 1
 
-       # Configuration option global/fallback_to_lvm1.
-       # Try running LVM1 tools if LVM cannot communicate with DM.
-       # This option only applies to 2.4 kernels and is provided to help
-       # switch between device-mapper kernels and LVM1 kernels. The LVM1
-       # tools need to be installed with .lvm1 suffices, e.g. vgscan.lvm1.
-       # They will stop working once the lvm2 on-disk metadata format is used.
-       # This configuration option has an automatic default value.
-       # fallback_to_lvm1 = 1
-
-       # Configuration option global/format.
-       # The default metadata format that commands should use.
-       # The -M 1|2 option overrides this setting.
-       # 
-       # Accepted values:
-       #   lvm1
-       #   lvm2
-       # 
-       # This configuration option has an automatic default value.
-       # format = "lvm2"
-
-       # Configuration option global/format_libraries.
-       # Shared libraries that process different metadata formats.
-       # If support for LVM1 metadata was compiled as a shared library use
-       # format_libraries = "liblvm2format1.so"
-       # This configuration option does not have a default value defined.
-
        # Configuration option global/segment_libraries.
        # This configuration option does not have a default value defined.
 
@@ -742,57 +848,10 @@
        # Location of /etc system configuration directory.
        etc = "/etc"
 
-       # Configuration option global/locking_type.
-       # Type of locking to use.
-       # 
-       # Accepted values:
-       #   0
-       #     Turns off locking. Warning: this risks metadata corruption if
-       #     commands run concurrently.
-       #   1
-       #     LVM uses local file-based locking, the standard mode.
-       #   2
-       #     LVM uses the external shared library locking_library.
-       #   3
-       #     LVM uses built-in clustered locking with clvmd.
-       #     This is incompatible with lvmetad. If use_lvmetad is enabled,
-       #     LVM prints a warning and disables lvmetad use.
-       #   4
-       #     LVM uses read-only locking which forbids any operations that
-       #     might change metadata.
-       #   5
-       #     Offers dummy locking for tools that do not need any locks.
-       #     You should not need to set this directly; the tools will select
-       #     when to use it instead of the configured locking_type.
-       #     Do not use lvmetad or the kernel device-mapper driver with this
-       #     locking type. It is used by the --readonly option that offers
-       #     read-only access to Volume Group metadata that cannot be locked
-       #     safely because it belongs to an inaccessible domain and might be
-       #     in use, for example a virtual machine image or a disk that is
-       #     shared by a clustered machine.
-       # 
-       locking_type = 1
-
        # Configuration option global/wait_for_locks.
        # When disabled, fail if a lock request would block.
        wait_for_locks = 1
 
-       # Configuration option global/fallback_to_clustered_locking.
-       # Attempt to use built-in cluster locking if locking_type 2 fails.
-       # If using external locking (type 2) and initialisation fails, with
-       # this enabled, an attempt will be made to use the built-in clustered
-       # locking. Disable this if using a customised locking_library.
-       fallback_to_clustered_locking = 1
-
-       # Configuration option global/fallback_to_local_locking.
-       # Use locking_type 1 (local) if locking_type 2 or 3 fail.
-       # If an attempt to initialise type 2 or type 3 locking failed, perhaps
-       # because cluster components such as clvmd are not running, with this
-       # enabled, an attempt will be made to use local file-based locking
-       # (type 1). If this succeeds, only commands against local VGs will
-       # proceed. VGs marked as clustered will be ignored.
-       fallback_to_local_locking = 1
-
        # Configuration option global/locking_dir.
        # Directory to use for LVM command file locks.
        # Local non-LV directory that holds file-based locks while commands are
@@ -813,24 +872,12 @@
        # Search this directory first for shared libraries.
        # This configuration option does not have a default value defined.
 
-       # Configuration option global/locking_library.
-       # The external locking library to use for locking_type 2.
-       # This configuration option has an automatic default value.
-       # locking_library = "liblvm2clusterlock.so"
-
        # Configuration option global/abort_on_internal_errors.
        # Abort a command that encounters an internal error.
        # Treat any internal errors as fatal errors, aborting the process that
        # encountered the internal error. Please only enable for debugging.
        abort_on_internal_errors = 0
 
-       # Configuration option global/detect_internal_vg_cache_corruption.
-       # Internal verification of VG structures.
-       # Check if CRC matches when a parsed VG is used multiple times. This
-       # is useful to catch unexpected changes to cached VG structures.
-       # Please only enable for debugging.
-       detect_internal_vg_cache_corruption = 0
-
        # Configuration option global/metadata_read_only.
        # No operations that change on-disk metadata are permitted.
        # Additionally, read-only commands that encounter metadata in need of
@@ -865,6 +912,17 @@
        # 
        mirror_segtype_default = "raid1"
 
+       # Configuration option global/support_mirrored_mirror_log.
+       # Enable mirrored 'mirror' log type for testing.
+       # 
+       # This type is deprecated to create or convert to but can
+       # be enabled to test that activation of existing mirrored
+       # logs and conversion to disk/core works.
+       # 
+       # Not supported for regular operation!
+       support_mirrored_mirror_log = 0
+
        # Configuration option global/raid10_segtype_default.
        # The segment type used by the -i -m combination.
        # The --type raid10|mirror option overrides this setting.
@@ -913,41 +971,20 @@
        # This configuration option has an automatic default value.
        # lvdisplay_shows_full_device_path = 0
 
-       # Configuration option global/use_lvmetad.
-       # Use lvmetad to cache metadata and reduce disk scanning.
-       # When enabled (and running), lvmetad provides LVM commands with VG
-       # metadata and PV state. LVM commands then avoid reading this
-       # information from disks which can be slow. When disabled (or not
-       # running), LVM commands fall back to scanning disks to obtain VG
-       # metadata. lvmetad is kept updated via udev rules which must be set
-       # up for LVM to work correctly. (The udev rules should be installed
-       # by default.) Without a proper udev setup, changes in the system's
-       # block device configuration will be unknown to LVM, and ignored
-       # until a manual 'pvscan --cache' is run. If lvmetad was running
-       # while use_lvmetad was disabled, it must be stopped, use_lvmetad
-       # enabled, and then started. When using lvmetad, LV activation is
-       # switched to an automatic, event-based mode. In this mode, LVs are
-       # activated based on incoming udev events that inform lvmetad when
-       # PVs appear on the system. When a VG is complete (all PVs present),
-       # it is auto-activated. The auto_activation_volume_list setting
-       # controls which LVs are auto-activated (all by default.)
-       # When lvmetad is updated (automatically by udev events, or directly
-       # by pvscan --cache), devices/filter is ignored and all devices are
-       # scanned by default. lvmetad always keeps unfiltered information
-       # which is provided to LVM commands. Each LVM command then filters
-       # based on devices/filter. This does not apply to other, non-regexp,
-       # filtering settings: component filters such as multipath and MD
-       # are checked during pvscan --cache. To filter a device and prevent
-       # scanning from the LVM system entirely, including lvmetad, use
-       # devices/global_filter.
-       use_lvmetad = 1
-
-       # Configuration option global/lvmetad_update_wait_time.
-       # Number of seconds a command will wait for lvmetad update to finish.
-       # After waiting for this period, a command will not use lvmetad, and
-       # will revert to disk scanning.
+       # Configuration option global/event_activation.
+       # Activate LVs based on system-generated device events.
+       # When a device appears on the system, a system-generated event runs
+       # the pvscan command to activate LVs if the new PV completes the VG.
+       # Use auto_activation_volume_list to select which LVs should be
+       # activated from these events (the default is all.)
+       # When event_activation is disabled, the system will generally run
+       # a direct activation command to activate LVs in complete VGs.
+       event_activation = 1
+
+       # Configuration option global/use_aio.
+       # Use async I/O when reading and writing devices.
        # This configuration option has an automatic default value.
-       # lvmetad_update_wait_time = 10
+       # use_aio = 1
 
        # Configuration option global/use_lvmlockd.
        # Use lvmlockd for locking among hosts using LVM on shared storage.
@@ -1073,6 +1110,17 @@
        # This configuration option has an automatic default value.
        # cache_repair_options = [ "" ]
 
+       # Configuration option global/vdo_format_executable.
+       # The full path to the vdoformat command.
+       # LVM uses this command to initialize the data volume for VDO type logical volumes.
+       # This configuration option has an automatic default value.
+       # vdo_format_executable = "/usr/bin/vdoformat"
+
+       # Configuration option global/vdo_format_options.
+       # List of options added to the standard vdoformat command.
+       # This configuration option has an automatic default value.
+       # vdo_format_options = [ "" ]
+
        # Configuration option global/fsadm_executable.
        # The full path to the fsadm command.
        # LVM uses this command to help with lvresize -r operations.
@@ -1446,6 +1494,33 @@
        # 
        thin_pool_autoextend_percent = 20
 
+       # Configuration option activation/vdo_pool_autoextend_threshold.
+       # Auto-extend a VDO pool when its usage exceeds this percent.
+       # Setting this to 100 disables automatic extension.
+       # The minimum value is 50 (a smaller value is treated as 50.)
+       # Also see vdo_pool_autoextend_percent.
+       # Automatic extension requires dmeventd to be monitoring the LV.
+       # 
+       # Example
+       # Using 70% autoextend threshold and 20% autoextend size, when a 10G
+       # VDO pool exceeds 7G, it is extended to 12G, and when it exceeds
+       # 8.4G, it is extended to 14.4G:
+       # vdo_pool_autoextend_threshold = 70
+       # 
+       vdo_pool_autoextend_threshold = 100
+
+       # Configuration option activation/vdo_pool_autoextend_percent.
+       # Auto-extending a VDO pool adds this percent extra space.
+       # The amount of additional space added to a VDO pool is this
+       # percent of its current size.
+       # 
+       # Example
+       # Using 70% autoextend threshold and 20% autoextend size, when a 10G
+       # VDO pool exceeds 7G, it is extended to 12G, and when it exceeds
+       # 8.4G, it is extended to 14.4G:
+       # This configuration option has an automatic default value.
+       # vdo_pool_autoextend_percent = 20
+
        # Configuration option activation/mlock_filter.
        # Do not mlock these memory areas.
        # While activating devices, I/O to devices being (re)configured is
@@ -1612,24 +1687,6 @@
        # This configuration option is advanced.
        # This configuration option has an automatic default value.
        # stripesize = 64
-
-       # Configuration option metadata/dirs.
-       # Directories holding live copies of text format metadata.
-       # These directories must not be on logical volumes!
-       # It's possible to use LVM with a couple of directories here,
-       # preferably on different (non-LV) filesystems, and with no other
-       # on-disk metadata (pvmetadatacopies = 0). Or this can be in addition
-       # to on-disk metadata areas. The feature was originally added to
-       # simplify testing and is not supported under low memory situations -
-       # the machine could lock up. Never edit any files in these directories
-       # by hand unless you are absolutely sure you know what you are doing!
-       # Use the supplied toolset to make changes (e.g. vgcfgrestore).
-       # 
-       # Example
-       # dirs = [ "/etc/lvm/metadata", "/mnt/disk2/lvm/metadata2" ]
-       # 
-       # This configuration option is advanced.
-       # This configuration option does not have a default value defined.
 # }
 
 # Configuration section report.
@@ -2080,6 +2137,23 @@
        # This configuration option has an automatic default value.
        # thin_command = "lvm lvextend --use-policies"
 
+       # Configuration option dmeventd/vdo_library.
+       # The library dmeventd uses when monitoring a VDO pool device.
+       # libdevmapper-event-lvm2vdo.so monitors the filling of a pool
+       # and emits a warning through syslog when the usage exceeds 80%. The
+       # warning is repeated when 85%, 90% and 95% of the pool is filled.
+       # This configuration option has an automatic default value.
+       # vdo_library = "libdevmapper-event-lvm2vdo.so"
+
+       # Configuration option dmeventd/vdo_command.
+       # The plugin runs command with each 5% increment when VDO pool volume
+       # gets above 50%.
+       # Command which starts with 'lvm ' prefix is internal lvm command.
+       # You can write your own handler to customise behaviour in more details.
+       # User handler is specified with the full path starting with '/'.
+       # This configuration option has an automatic default value.
+       # vdo_command = "lvm lvextend --use-policies"
+
        # Configuration option dmeventd/executable.
        # The full path to the dmeventd binary.
        # This configuration option has an automatic default value.

