Re: [pve-devel] [PATCH zfsonlinux v2 2/2] fix #5288: cherry-pick fix for udev-partition links > 16

2024-03-06 Thread Fabian Grünbichler
> Stoiko Ivanov wrote on 06.03.2024 14:24 CET:
> 
>  
> see:
> https://github.com/openzfs/zfs/pull/15970
> https://github.com/openzfs/zfs/issues/15904
> 
> for some additional background.

FWIW, this one got two approvals upstream in the meantime ;)
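
For illustration only, the gist of the fix quoted below, sketched in Perl rather
than the actual C change: the partition number is parsed out of the block device
name instead of being derived from the minor number, which stops encoding it
once a zvol has more than 15 partitions.

    # illustrative sketch, not the real udev helper: derive the "-partN"
    # suffix from a zvol device name such as /dev/zd16p17
    sub zvol_part_suffix {
        my ($dev_name) = @_;
        if ($dev_name =~ m!^/dev/zd\d+p(\d+)$!) {
            return "-part$1";
        }
        return '';    # whole zvol, no partition suffix
    }

    print zvol_part_suffix('/dev/zd16p17'), "\n";    # -part17
    print zvol_part_suffix('/dev/zd16'), "\n";       # (empty, no partition)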

> Signed-off-by: Stoiko Ivanov 
> ---
>  ...rectly-handle-partition-16-and-later.patch | 52 +++
>  debian/patches/series |  1 +
>  2 files changed, 53 insertions(+)
>  create mode 100644 
> debian/patches/0012-udev-correctly-handle-partition-16-and-later.patch
> 
> diff --git 
> a/debian/patches/0012-udev-correctly-handle-partition-16-and-later.patch 
> b/debian/patches/0012-udev-correctly-handle-partition-16-and-later.patch
> new file mode 100644
> index ..578b74bd
> --- /dev/null
> +++ b/debian/patches/0012-udev-correctly-handle-partition-16-and-later.patch
> @@ -0,0 +1,52 @@
> +From  Mon Sep 17 00:00:00 2001
> +From: =?UTF-8?q?Fabian=20Gr=C3=BCnbichler?= 
> +Date: Wed, 6 Mar 2024 10:39:06 +0100
> +Subject: [PATCH] udev: correctly handle partition #16 and later
> +MIME-Version: 1.0
> +Content-Type: text/plain; charset=UTF-8
> +Content-Transfer-Encoding: 8bit
> +
> +If a zvol has more than 15 partitions, the minor device number exhausts
> +the slot count reserved for partitions next to the zvol itself. As a
> +result, the minor number cannot be used to determine the partition
> +number for the higher partition, and doing so results in wrong named
> +symlinks being generated by udev.
> +
> +Since the partition number is encoded in the block device name anyway,
> +let's just extract it from there instead.
> +
> +Fixes: #15904
> +
> +Signed-off-by: Fabian Grünbichler 
> +Signed-off-by: Stoiko Ivanov 
> +---
> + udev/zvol_id.c | 9 +
> + 1 file changed, 5 insertions(+), 4 deletions(-)
> +
> +diff --git a/udev/zvol_id.c b/udev/zvol_id.c
> +index 5960b9787..609349594 100644
> +--- a/udev/zvol_id.c
> ++++ b/udev/zvol_id.c
> +@@ -51,7 +51,7 @@ const char *__asan_default_options(void) {
> + int
> + main(int argc, const char *const *argv)
> + {
> +-if (argc != 2) {
> ++if (argc != 2 || strncmp(argv[1], "/dev/zd", 7) != 0) {
> + fprintf(stderr, "usage: %s /dev/zdX\n", argv[0]);
> + return (1);
> + }
> +@@ -72,9 +72,10 @@ main(int argc, const char *const *argv)
> + return (1);
> + }
> + 
> +-unsigned int dev_part = minor(sb.st_rdev) % ZVOL_MINORS;
> +-if (dev_part != 0)
> +-sprintf(zvol_name + strlen(zvol_name), "-part%u", dev_part);
> ++const char *dev_part = strrchr(dev_name, 'p');
> ++if (dev_part != NULL) {
> ++sprintf(zvol_name + strlen(zvol_name), "-part%s", dev_part + 1);
> ++}
> + 
> + for (size_t i = 0; i < strlen(zvol_name); ++i)
> + if (isblank(zvol_name[i]))
> diff --git a/debian/patches/series b/debian/patches/series
> index 35f81d13..9eedf857 100644
> --- a/debian/patches/series
> +++ b/debian/patches/series
> @@ -9,3 +9,4 @@
>  0009-arc-stat-summary-guard-access-to-l2arc-MFU-MRU-stats.patch
>  0010-Fix-nfs_truncate_shares-without-etc-exports.d.patch
>  0011-zpool-status-tighten-bounds-for-noalloc-stat-availab.patch
> +0012-udev-correctly-handle-partition-16-and-later.patch
> -- 
> 2.39.2
> 
> 
> 
> ___
> pve-devel mailing list
> pve-devel@lists.proxmox.com
> https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] applied: [PATCH docs 3/9] local-btrfs: unify casing for btrfs

2024-03-06 Thread Thomas Lamprecht
On 04/03/2024 at 14:22, Christoph Heiss wrote:
> It's mostly spelled BTRFS anyway in our documentation (and also the
> official casing AFAICS), so align a few instances where it is spelled
> lowercase.

The official spelling is really not consistent and includes at least btrfs,
BTRFS and Btrfs – but having it use one casing here makes it slightly
better, so fine by me.


> 
> Signed-off-by: Christoph Heiss 
> ---
>  local-btrfs.adoc | 8 
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
>

applied, thanks!


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] applied: [PATCH docs 4/9] getting-help: forum: align wording with pmg-docs

2024-03-06 Thread Thomas Lamprecht
On 04/03/2024 at 14:22, Christoph Heiss wrote:
> This paragraph as phrased in pmg-docs sounds better & reads easier, so
> apply it here too.
> 
> Signed-off-by: Christoph Heiss 
> ---
>  getting-help.adoc | 8 
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
>

applied, thanks!


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] applied: [PATCH docs 2/9] asciidoc: introduce `pricing-url` variable, much like pmg-docs

2024-03-06 Thread Thomas Lamprecht
On 04/03/2024 at 14:22, Christoph Heiss wrote:
> Seems like a pretty sensible thing to do here too.
> 
> Signed-off-by: Christoph Heiss 
> ---
>  asciidoc/asciidoc-pve.conf | 1 +
>  getting-help.adoc  | 2 +-
>  pve-package-repos.adoc | 2 +-
>  3 files changed, 3 insertions(+), 2 deletions(-)
> 
>

applied, thanks!


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] applied: [PATCH docs 1/9] gitignore: add package build outputs

2024-03-06 Thread Thomas Lamprecht
On 04/03/2024 at 14:22, Christoph Heiss wrote:
> .. much like in many other repos.
> 
> Signed-off-by: Christoph Heiss 
> ---
>  .gitignore | 3 +++
>  1 file changed, 3 insertions(+)
> 
>

applied, thanks!


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] applied: [PATCH docs] storage: fix zfs over iscsi links

2024-03-06 Thread Thomas Lamprecht
On 01/03/2024 at 09:21, Dominik Csapak wrote:
> The `_ZFS_over_iSCSI` wiki page is redirected to the legacy page
> (for historical reasons), but we want to link to the reference docs
> instead.
> 
> For the wiki, add the legacy link in a `see also` section, so users can
> still reach that page easily should they need to.
> 
> Signed-off-by: Dominik Csapak 
> ---
>  pve-storage-zfs.adoc | 9 +
>  pvesm.adoc   | 2 +-
>  2 files changed, 10 insertions(+), 1 deletion(-)
> 
>

applied, thanks!


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH manager] ceph: pool edit: set target ratio to 0 when the value is unset

2024-03-06 Thread Fiona Ebner
On 06.03.24 at 16:17, Maximiliano Sandoval wrote:
> Fiona Ebner  writes:
> 
>> It might be cleaner to just use
>> emptyValue: 0,
>> in the field declaration like is already done for the "Target Size"
>> field. And the same issue is also present for the "Min. # of PGs" field,
>> right?
> 
> Thanks for the emptyValue tip, I didn't know about it. Unfortunately, I
> tested this and it didn't work.
> 

Yes, sorry. Unfortunately, that is a feature specific to
PVE.form.SizeField, or rather Proxmox.form.SizeField.

For the "Min. # of PGs" field, we could add a similar feature to
proxmoxintegerfield. And the "Target Ratio" field is just a numberfield,
so it would need yet another instance of the feature. So maybe not worth it.


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH manager] ceph: pool edit: set target ratio to 0 when the value is unset

2024-03-06 Thread Maximiliano Sandoval
Fiona Ebner  writes:

> It might be cleaner to just use
> emptyValue: 0,
> in the field declaration like is already done for the "Target Size"
> field. And the same issue is also present for the "Min. # of PGs" field,
> right?

Thanks for the emptyValue tip, I didn't know about it. Unfortunately, I
tested this and it didn't work.

--
Maximiliano


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH manager] ceph: pool edit: set target ratio to 0 when the value is unset

2024-03-06 Thread Fiona Ebner
On 06.03.24 at 15:14, Maximiliano Sandoval wrote:
> If the pool has a target_size_ratio set it might be desirable to unset
> its value, e.g. if set by mistake on .mgr.
> 
> Currently unsetting the value won't do anything in the web UI. With this
> patch it is set to zero, which the API correctly understands and unsets
> it.
> 
> one can verify the value set using
> 
> $ ceph osd pool get  target_size_ratio
> 
> after setting the value to 0 through the API it will output
> 
> Error ENOENT: option 'target_size_ratio' is not set on pool 
> 'cephfs-test_data'
> 
> Signed-off-by: Maximiliano Sandoval 
> ---
>  www/manager6/ceph/Pool.js | 6 +-
>  1 file changed, 5 insertions(+), 1 deletion(-)
> 
> diff --git a/www/manager6/ceph/Pool.js b/www/manager6/ceph/Pool.js
> index c61d4f71..224f3cea 100644
> --- a/www/manager6/ceph/Pool.js
> +++ b/www/manager6/ceph/Pool.js
> @@ -226,7 +226,11 @@ Ext.define('PVE.CephPoolInputPanel', {
>  onGetValues: function(values) {
>   Object.keys(values || {}).forEach(function(name) {
>   if (values[name] === '') {
> - delete values[name];
> + if (name === 'target_size_ratio') {
> + values[name] = 0;
> + } else {
> + delete values[name];
> + }
>   }
>   });
>  

It might be cleaner to just use
emptyValue: 0,
in the field declaration like is already done for the "Target Size"
field. And the same issue is also present for the "Min. # of PGs" field,
right?


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH qemu-server 1/1] fix 1734: clone VM: if deactivation fails demote error to warning

2024-03-06 Thread Hannes Dürr



On 3/6/24 15:04, Fiona Ebner wrote:

> Yes, but the question is what is worse: Needing to re-do the clone or
> having the VM config on the wrong node?
> 
>> To avoid that, I'd lean towards keeping the behavior of failing the task
>> if deactivating $newvollist fails. After all, at least in case of LVM
>> not being able to deactivate because the device is in use, we just
>> created $newvollist so hopefully nobody else should be accessing it.
> 
> Fine by me. Yes, it's unlikely to fail. And we can still adapt later if
> users ever complain about it.


I've sent a v2 
https://lists.proxmox.com/pipermail/pve-devel/2024-March/062115.html


Also, I agree with Friedrich, but yes, it should be very unlikely to fail.



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH manager] ceph: pool edit: set target ratio to 0 when the value is unset

2024-03-06 Thread Maximiliano Sandoval
If the pool has a target_size_ratio set it might be desirable to unset
its value, e.g. if set by mistake on .mgr.

Currently unsetting the value won't do anything in the web UI. With this
patch it is set to zero, which the API correctly understands and unsets
it.

one can verify the value set using

$ ceph osd pool get  target_size_ratio

after setting the value to 0 through the API it will output

Error ENOENT: option 'target_size_ratio' is not set on pool 
'cephfs-test_data'

Signed-off-by: Maximiliano Sandoval 
---
 www/manager6/ceph/Pool.js | 6 +-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/www/manager6/ceph/Pool.js b/www/manager6/ceph/Pool.js
index c61d4f71..224f3cea 100644
--- a/www/manager6/ceph/Pool.js
+++ b/www/manager6/ceph/Pool.js
@@ -226,7 +226,11 @@ Ext.define('PVE.CephPoolInputPanel', {
 onGetValues: function(values) {
Object.keys(values || {}).forEach(function(name) {
if (values[name] === '') {
-   delete values[name];
+   if (name === 'target_size_ratio') {
+   values[name] = 0;
+   } else {
+   delete values[name];
+   }
}
});
 
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH qemu-server v2 1/1] fix 1734: clone VM: if deactivation fails demote error to warning

2024-03-06 Thread Hannes Duerr
When a template with disks on LVM is cloned to another node, the volumes
are first activated, then cloned and deactivated again after cloning.

However, if clones of this template are now created in parallel to other
nodes, it can happen that one of the tasks can no longer deactivate the
logical volume because it is still in use.  The reason for this is that
we use a shared lock.
Since the failed deactivation does not necessarily have consequences, we
downgrade the error to a warning, which means that the clone tasks will
continue to be completed successfully.

Signed-off-by: Hannes Duerr 
---
changes since v1:
- fix nits and spelling

 PVE/API2/Qemu.pm | 7 ++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 69c5896..1ff5abe 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -48,6 +48,7 @@ use PVE::DataCenterConfig;
 use PVE::SSHInfo;
 use PVE::Replication;
 use PVE::StorageTunnel;
+use PVE::RESTEnvironment qw(log_warn);
 
 BEGIN {
 if (!$ENV{PVE_GENERATING_DOCS}) {
@@ -3820,7 +3821,11 @@ __PACKAGE__->register_method({
 
if ($target) {
# always deactivate volumes - avoid lvm LVs to be active on 
several nodes
-   PVE::Storage::deactivate_volumes($storecfg, $vollist, 
$snapname) if !$running;
+   eval {
+   PVE::Storage::deactivate_volumes($storecfg, $vollist, 
$snapname) if !$running;
+   };
+   log_warn($@) if ($@);
+
PVE::Storage::deactivate_volumes($storecfg, $newvollist);
 
my $newconffile = PVE::QemuConfig->config_file($newid, 
$target);
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH qemu-server 1/1] fix 1734: clone VM: if deactivation fails demote error to warning

2024-03-06 Thread Fiona Ebner
On 06.03.24 at 14:14, Friedrich Weber wrote:
> On 06/03/2024 13:40, Fiona Ebner wrote:
>> On 06.03.24 at 11:47, Hannes Duerr wrote:
>>> @@ -3820,7 +3821,13 @@ __PACKAGE__->register_method({
>>>  
>>> if ($target) {
>>> # always deactivate volumes - avoid lvm LVs to be active on 
>>> several nodes
>>> -   PVE::Storage::deactivate_volumes($storecfg, $vollist, 
>>> $snapname) if !$running;
>>> +   eval {
>>> +   PVE::Storage::deactivate_volumes($storecfg, $vollist, 
>>> $snapname) if !$running;
>>> +   };
>>> +   my $err = $@;
>>> +   if ($err) {
>>> +   log_warn("$err\n");
>>> +   }
>>> PVE::Storage::deactivate_volumes($storecfg, $newvollist);
>>
>> We might also want to catch errors here. Otherwise, the whole clone
>> operation (which might've taken hours) can still fail just because of a
>> deactivation error. But when failing here, we shouldn't move the config
>> file (or the LV can get active on multiple nodes more easily).
> 
> I think succeeding but not moving the config file when deactivating
> $newvollist fails sounds like it could lead to unexpected behavior.
> Right now, when running `qm clone 101 [...] --target node2` on node1
> succeeds, one can be sure there will be an VM 101 on node2. But if we
> cannot deactivate $newvollist and thus don't move the config file, the
> command succeeds but VM 101 instead exists on node1 (correct me if I'm
> wrong), which may be confusing e.g. if the clone is automated.
> 

Yes, but the question is what is worse: Needing to re-do the clone or
having the VM config on the wrong node?

> To avoid that, I'd lean towards keeping the behavior of failing the task
> if deactivating $newvollist fails. After all, at least in case of LVM
> not being able to deactivate because the device is in use, we just
> created $newvollist so hopefully nobody else should be accessing it.

Fine by me. Yes, it's unlikely to fail. And we can still adapt later if
users ever complain about it.


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH zfsonlinux v2 0/2] update ZFS to 2.2.3 and include a fix for udev-links for partitions

2024-03-06 Thread Stoiko Ivanov
changes from v1:
* add a fix for #5288 after Fiona managed to reproduce it and we saw it
  was a known issue addressed by Fabian with a pull-request upstream
* add a bit more detail to the submodule-update commit-message

minimally tested in my virtual setup, additionally Fiona tested that the
fix for 5288 indeed works with her reproducer

Stoiko Ivanov (2):
  update zfs submodule to 2.2.3 and refresh patches
  fix #5288: cherry-pick fix for udev-partition links > 16

 debian/patches/0005-Enable-zed-emails.patch   |   2 +-
 ...hten-bounds-for-noalloc-stat-availab.patch |   4 +-
 ...do-not-truncate-shares-not-zfs-mount.patch | 131 --
 ...rectly-handle-partition-16-and-later.patch |  52 +++
 debian/patches/series |   2 +-
 upstream  |   2 +-
 6 files changed, 57 insertions(+), 136 deletions(-)
 delete mode 100644 
debian/patches/0012-fix-mount-do-not-truncate-shares-not-zfs-mount.patch
 create mode 100644 
debian/patches/0012-udev-correctly-handle-partition-16-and-later.patch

-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH zfsonlinux v2 2/2] fix #5288: cherry-pick fix for udev-partition links > 16

2024-03-06 Thread Stoiko Ivanov
see:
https://github.com/openzfs/zfs/pull/15970
https://github.com/openzfs/zfs/issues/15904

for some additional background.

Signed-off-by: Stoiko Ivanov 
---
 ...rectly-handle-partition-16-and-later.patch | 52 +++
 debian/patches/series |  1 +
 2 files changed, 53 insertions(+)
 create mode 100644 
debian/patches/0012-udev-correctly-handle-partition-16-and-later.patch

diff --git 
a/debian/patches/0012-udev-correctly-handle-partition-16-and-later.patch 
b/debian/patches/0012-udev-correctly-handle-partition-16-and-later.patch
new file mode 100644
index ..578b74bd
--- /dev/null
+++ b/debian/patches/0012-udev-correctly-handle-partition-16-and-later.patch
@@ -0,0 +1,52 @@
+From  Mon Sep 17 00:00:00 2001
+From: =?UTF-8?q?Fabian=20Gr=C3=BCnbichler?= 
+Date: Wed, 6 Mar 2024 10:39:06 +0100
+Subject: [PATCH] udev: correctly handle partition #16 and later
+MIME-Version: 1.0
+Content-Type: text/plain; charset=UTF-8
+Content-Transfer-Encoding: 8bit
+
+If a zvol has more than 15 partitions, the minor device number exhausts
+the slot count reserved for partitions next to the zvol itself. As a
+result, the minor number cannot be used to determine the partition
+number for the higher partition, and doing so results in wrong named
+symlinks being generated by udev.
+
+Since the partition number is encoded in the block device name anyway,
+let's just extract it from there instead.
+
+Fixes: #15904
+
+Signed-off-by: Fabian Grünbichler 
+Signed-off-by: Stoiko Ivanov 
+---
+ udev/zvol_id.c | 9 +
+ 1 file changed, 5 insertions(+), 4 deletions(-)
+
+diff --git a/udev/zvol_id.c b/udev/zvol_id.c
+index 5960b9787..609349594 100644
+--- a/udev/zvol_id.c
++++ b/udev/zvol_id.c
+@@ -51,7 +51,7 @@ const char *__asan_default_options(void) {
+ int
+ main(int argc, const char *const *argv)
+ {
+-  if (argc != 2) {
++  if (argc != 2 || strncmp(argv[1], "/dev/zd", 7) != 0) {
+   fprintf(stderr, "usage: %s /dev/zdX\n", argv[0]);
+   return (1);
+   }
+@@ -72,9 +72,10 @@ main(int argc, const char *const *argv)
+   return (1);
+   }
+ 
+-  unsigned int dev_part = minor(sb.st_rdev) % ZVOL_MINORS;
+-  if (dev_part != 0)
+-  sprintf(zvol_name + strlen(zvol_name), "-part%u", dev_part);
++  const char *dev_part = strrchr(dev_name, 'p');
++  if (dev_part != NULL) {
++  sprintf(zvol_name + strlen(zvol_name), "-part%s", dev_part + 1);
++  }
+ 
+   for (size_t i = 0; i < strlen(zvol_name); ++i)
+   if (isblank(zvol_name[i]))
diff --git a/debian/patches/series b/debian/patches/series
index 35f81d13..9eedf857 100644
--- a/debian/patches/series
+++ b/debian/patches/series
@@ -9,3 +9,4 @@
 0009-arc-stat-summary-guard-access-to-l2arc-MFU-MRU-stats.patch
 0010-Fix-nfs_truncate_shares-without-etc-exports.d.patch
 0011-zpool-status-tighten-bounds-for-noalloc-stat-availab.patch
+0012-udev-correctly-handle-partition-16-and-later.patch
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel


[pve-devel] [PATCH zfsonlinux v2 1/2] update zfs submodule to 2.2.3 and refresh patches

2024-03-06 Thread Stoiko Ivanov
mostly support for newer kernel-versions, and fixes for the BRT bugs
discovered with 2.2.0 (BRT remains disabled by default).

The update contains a fix for CVE-2020-24370 in lua (which is present
in ZFS for channel-programs, which we do not use) - see:
https://github.com/openzfs/zfs/pull/15847 for more details.

One patch from Stefan Lendl was backported and is now in the ZFS 2.2
branch.

Signed-off-by: Stoiko Ivanov 
---
 debian/patches/0005-Enable-zed-emails.patch   |   2 +-
 ...hten-bounds-for-noalloc-stat-availab.patch |   4 +-
 ...do-not-truncate-shares-not-zfs-mount.patch | 131 --
 debian/patches/series |   1 -
 upstream  |   2 +-
 5 files changed, 4 insertions(+), 136 deletions(-)
 delete mode 100644 
debian/patches/0012-fix-mount-do-not-truncate-shares-not-zfs-mount.patch

diff --git a/debian/patches/0005-Enable-zed-emails.patch 
b/debian/patches/0005-Enable-zed-emails.patch
index 646d529c..af38f84e 100644
--- a/debian/patches/0005-Enable-zed-emails.patch
+++ b/debian/patches/0005-Enable-zed-emails.patch
@@ -13,7 +13,7 @@ Signed-off-by: Thomas Lamprecht 
  1 file changed, 1 insertion(+), 1 deletion(-)
 
 diff --git a/cmd/zed/zed.d/zed.rc b/cmd/zed/zed.d/zed.rc
-index 78dc1afc7..41d5539ea 100644
+index bc269b155..e6d4b1703 100644
 --- a/cmd/zed/zed.d/zed.rc
 +++ b/cmd/zed/zed.d/zed.rc
 @@ -41,7 +41,7 @@ ZED_EMAIL_ADDR="root"
diff --git 
a/debian/patches/0011-zpool-status-tighten-bounds-for-noalloc-stat-availab.patch
 
b/debian/patches/0011-zpool-status-tighten-bounds-for-noalloc-stat-availab.patch
index f58c58e8..3c87b0cb 100644
--- 
a/debian/patches/0011-zpool-status-tighten-bounds-for-noalloc-stat-availab.patch
+++ 
b/debian/patches/0011-zpool-status-tighten-bounds-for-noalloc-stat-availab.patch
@@ -51,10 +51,10 @@ Signed-off-by: Thomas Lamprecht 
  1 file changed, 2 insertions(+), 1 deletion(-)
 
 diff --git a/cmd/zpool/zpool_main.c b/cmd/zpool/zpool_main.c
-index 5507f9d3f..98970abfe 100644
+index 69bf9649a..fd42ce7c1 100644
 --- a/cmd/zpool/zpool_main.c
 +++ b/cmd/zpool/zpool_main.c
-@@ -2478,7 +2478,8 @@ print_status_config(zpool_handle_t *zhp, status_cbdata_t 
*cb, const char *name,
+@@ -2616,7 +2616,8 @@ print_status_config(zpool_handle_t *zhp, status_cbdata_t 
*cb, const char *name,
  
if (vs->vs_scan_removing != 0) {
(void) printf(gettext("  (removing)"));
diff --git 
a/debian/patches/0012-fix-mount-do-not-truncate-shares-not-zfs-mount.patch 
b/debian/patches/0012-fix-mount-do-not-truncate-shares-not-zfs-mount.patch
deleted file mode 100644
index ab01e623..
--- a/debian/patches/0012-fix-mount-do-not-truncate-shares-not-zfs-mount.patch
+++ /dev/null
@@ -1,131 +0,0 @@
-From  Mon Sep 17 00:00:00 2001
-From: Stefan Lendl <1321542+s...@users.noreply.github.com>
-Date: Fri, 12 Jan 2024 21:05:11 +0100
-Subject: [PATCH] fix(mount): do not truncate shares not zfs mount
-
-When running zfs share -a resetting the exports.d/zfs.exports makes
-sense the get a clean state.
-Truncating was also called with zfs mount which would not populate the
-file again.
-Add test to verify shares persist after mount -a.
-
-Reviewed-by: Brian Behlendorf 
-Signed-off-by: Stefan Lendl 
-Closes #15607
-Closes #15660

- cmd/zfs/zfs_main.c|  3 +-
- tests/runfiles/common.run |  3 +-
- tests/zfs-tests/tests/Makefile.am |  1 +
- .../zfs_share/zfs_share_after_mount.ksh   | 62 +++
- 4 files changed, 67 insertions(+), 2 deletions(-)
- create mode 100755 
tests/zfs-tests/tests/functional/cli_root/zfs_share/zfs_share_after_mount.ksh
-
-diff --git a/cmd/zfs/zfs_main.c b/cmd/zfs/zfs_main.c
-index 9939f206a..f67f6114d 100644
---- a/cmd/zfs/zfs_main.c
-+++ b/cmd/zfs/zfs_main.c
-@@ -7234,7 +7234,8 @@ share_mount(int op, int argc, char **argv)
-   pthread_mutex_init(&share_mount_state.sm_lock, NULL);
- 
-   /* For a 'zfs share -a' operation start with a clean slate. */
--  zfs_truncate_shares(NULL);
-+  if (op == OP_SHARE)
-+  zfs_truncate_shares(NULL);
- 
-   /*
-* libshare isn't mt-safe, so only do the operation in parallel
-diff --git a/tests/runfiles/common.run b/tests/runfiles/common.run
-index f6e5367f5..a600140ea 100644
---- a/tests/runfiles/common.run
-+++ b/tests/runfiles/common.run
-@@ -316,7 +316,8 @@ tags = ['functional', 'cli_root', 'zfs_set']
- [tests/functional/cli_root/zfs_share]
- tests = ['zfs_share_001_pos', 'zfs_share_002_pos', 'zfs_share_003_pos',
- 'zfs_share_004_pos', 'zfs_share_006_pos', 'zfs_share_008_neg',
--'zfs_share_010_neg', 'zfs_share_011_pos', 'zfs_share_concurrent_shares']
-+'zfs_share_010_neg', 'zfs_share_011_pos', 'zfs_share_concurrent_shares',
-+'zfs_share_after_mount']
- tags = ['functional', 'cli_root', 'zfs_share']
- 
- [tests/functional/cli_root/zfs_snapshot]

Re: [pve-devel] [PATCH qemu-server 1/1] fix 1734: clone VM: if deactivation fails demote error to warning

2024-03-06 Thread Friedrich Weber
On 06/03/2024 13:40, Fiona Ebner wrote:
> On 06.03.24 at 11:47, Hannes Duerr wrote:
>> @@ -3820,7 +3821,13 @@ __PACKAGE__->register_method({
>>  
>>  if ($target) {
>>  # always deactivate volumes - avoid lvm LVs to be active on 
>> several nodes
>> -PVE::Storage::deactivate_volumes($storecfg, $vollist, 
>> $snapname) if !$running;
>> +eval {
>> +PVE::Storage::deactivate_volumes($storecfg, $vollist, 
>> $snapname) if !$running;
>> +};
>> +my $err = $@;
>> +if ($err) {
>> +log_warn("$err\n");
>> +}
>>  PVE::Storage::deactivate_volumes($storecfg, $newvollist);
> 
> We might also want to catch errors here. Otherwise, the whole clone
> operation (which might've taken hours) can still fail just because of a
> deactivation error. But when failing here, we shouldn't move the config
> file (or the LV can get active on multiple nodes more easily).

I think succeeding but not moving the config file when deactivating
$newvollist fails sounds like it could lead to unexpected behavior.
Right now, when running `qm clone 101 [...] --target node2` on node1
succeeds, one can be sure there will be a VM 101 on node2. But if we
cannot deactivate $newvollist and thus don't move the config file, the
command succeeds but VM 101 instead exists on node1 (correct me if I'm
wrong), which may be confusing e.g. if the clone is automated.

To avoid that, I'd lean towards keeping the behavior of failing the task
if deactivating $newvollist fails. After all, at least in case of LVM
not being able to deactivate because the device is in use, we just
created $newvollist so hopefully nobody else should be accessing it.


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH qemu-server 1/1] fix 1734: clone VM: if deactivation fails demote error to warning

2024-03-06 Thread Fiona Ebner
On 06.03.24 at 11:47, Hannes Duerr wrote:
> @@ -3820,7 +3821,13 @@ __PACKAGE__->register_method({
>  
>   if ($target) {
>   # always deactivate volumes - avoid lvm LVs to be active on 
> several nodes
> - PVE::Storage::deactivate_volumes($storecfg, $vollist, 
> $snapname) if !$running;
> + eval {
> + PVE::Storage::deactivate_volumes($storecfg, $vollist, 
> $snapname) if !$running;
> + };
> + my $err = $@;
> + if ($err) {
> + log_warn("$err\n");
> + }
>   PVE::Storage::deactivate_volumes($storecfg, $newvollist);

We might also want to catch errors here. Otherwise, the whole clone
operation (which might've taken hours) can still fail just because of a
deactivation error. But when failing here, we shouldn't move the config
file (or the LV can get active on multiple nodes more easily). Can be a
separate patch or the same (since it's still about demoting deactivation
failure, would still fit logically).

>  
>   my $newconffile = PVE::QemuConfig->config_file($newid, 
> $target);


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH qemu-server 1/1] fix 1734: clone VM: if deactivation fails demote error to warning

2024-03-06 Thread Fiona Ebner
On 06.03.24 at 12:37, Friedrich Weber wrote:
> Thanks for tackling this! Can confirm this patch demotes the error to a
> warning and lets the qmclone task succeed (with a warning). GUI shows
> "Warnings: 1" and task log contains:
> 
> can't deactivate LV '/dev/foobar/vm-100-disk-0':   Logical volume
> foobar/vm-100-disk-0 in use.
> WARN: volume deactivation failed: foobar:vm-100-disk-0 at
> /usr/share/perl5/PVE/Storage.pm line 1246.
> 

Sounds like there is a missing newline after the error message in
PVE/Storage.pm. That's why Perl prints the file/line info.
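
A quick plain-Perl illustration of that behaviour (no PVE code involved):
die() only appends the " at FILE line N." suffix when the message does not
already end in a newline.

    eval { die "can't deactivate LV" };      # no trailing newline
    print $@;    # can't deactivate LV at <script> line <N>.

    eval { die "can't deactivate LV\n" };    # trailing newline
    print $@;    # can't deactivate LV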


>> @@ -3820,7 +3821,13 @@ __PACKAGE__->register_method({
>>  
>>  if ($target) {
>>  # always deactivate volumes - avoid lvm LVs to be active on 
>> several nodes
>> -PVE::Storage::deactivate_volumes($storecfg, $vollist, 
>> $snapname) if !$running;
>> +eval {
>> +PVE::Storage::deactivate_volumes($storecfg, $vollist, 
>> $snapname) if !$running;
>> +};
>> +my $err = $@;
>> +if ($err) {
>> +log_warn("$err\n");
>> +}
> 
> I think the extra \n adds an unnecessary newline here, which looks a bit
> weird in the task log (though I'm not sure why the `chomp` in `log_warn`
> doesn't remove the newline).
> 
> While at it, I think the four lines can be shortened to
> 
>> log_warn($@) if $@;
> 
> Though that might be too terse -- someone with more Perl experience than
> me should judge that :)
> 

It's fine if the error is only used for printing and this comes
immediately after the eval.

In cases where you do something else with the error, it can still be
if (my $err = $@) {
}
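
As a minimal, self-contained sketch of the two patterns (using plain warn()
here; in the qemu-server patch, log_warn from PVE::RESTEnvironment plays that
role):

    use strict;
    use warnings;

    sub deactivate { die "deactivation failed\n" }    # stand-in for the real call

    # error only used for printing, directly after the eval:
    eval { deactivate() };
    warn $@ if $@;

    # error needed for more than just printing:
    eval { deactivate() };
    if (my $err = $@) {
        warn "cleaning up after error: $err";
        # ... further handling would go here ...
    }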


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH qemu-server 1/1] fix 1734: clone VM: if deactivation fails demote error to warning

2024-03-06 Thread Friedrich Weber
Thanks for tackling this! Can confirm this patch demotes the error to a
warning and lets the qmclone task succeed (with a warning). GUI shows
"Warnings: 1" and task log contains:

can't deactivate LV '/dev/foobar/vm-100-disk-0':   Logical volume
foobar/vm-100-disk-0 in use.
WARN: volume deactivation failed: foobar:vm-100-disk-0 at
/usr/share/perl5/PVE/Storage.pm line 1246.

Small nits in-line:

On 06/03/2024 11:47, Hannes Duerr wrote:
> When a template with disks on LVM is cloned to another node, the storage
> is first activated, then cloned and deactivated again after cloning.

s/storage is/volumes are/g

> 
> However, if clones of this template are now created in parellel to other

typo: parallel

> nodes, it can happen that one of the tasks can no longer deactivate the
> logical volume because it is still in use.  The reason for this is that
> we use a shared lock.
> Since the failed deactivation does not necessarily have consequences, we
> downgrade the error to a warning, which means that the clone tasks will
> continue to be completed successfully.
> 
> Signed-off-by: Hannes Duerr 
> ---
>  PVE/API2/Qemu.pm | 9 -
>  1 file changed, 8 insertions(+), 1 deletion(-)
> 
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index 69c5896..f1e88b8 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -48,6 +48,7 @@ use PVE::DataCenterConfig;
>  use PVE::SSHInfo;
>  use PVE::Replication;
>  use PVE::StorageTunnel;
> +use PVE::RESTEnvironment qw(log_warn);
>  
>  BEGIN {
>  if (!$ENV{PVE_GENERATING_DOCS}) {
> @@ -3820,7 +3821,13 @@ __PACKAGE__->register_method({
>  
>   if ($target) {
>   # always deactivate volumes - avoid lvm LVs to be active on 
> several nodes
> - PVE::Storage::deactivate_volumes($storecfg, $vollist, 
> $snapname) if !$running;
> + eval {
> + PVE::Storage::deactivate_volumes($storecfg, $vollist, 
> $snapname) if !$running;
> + };
> + my $err = $@;
> + if ($err) {
> + log_warn("$err\n");
> + }

I think the extra \n adds an unnecessary newline here, which looks a bit
weird in the task log (though I'm not sure why the `chomp` in `log_warn`
doesn't remove the newline).

While at it, I think the four lines can be shortened to

> log_warn($@) if $@;

Though that might be too terse -- someone with more Perl experience than
me should judge that :)

>   PVE::Storage::deactivate_volumes($storecfg, $newvollist);
>  
>   my $newconffile = PVE::QemuConfig->config_file($newid, 
> $target);


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] applied: [PATCH pve-common] ticket: remove fallback for SHA1-base64 CSRF prevention tokens

2024-03-06 Thread Wolfgang Bumiller
applied, thanks


___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH qemu-server 1/1] fix 1734: clone VM: if deactivation fails demote error to warning

2024-03-06 Thread Hannes Duerr
When a template with disks on LVM is cloned to another node, the storage
is first activated, then cloned and deactivated again after cloning.

However, if clones of this template are now created in parellel to other
nodes, it can happen that one of the tasks can no longer deactivate the
logical volume because it is still in use.  The reason for this is that
we use a shared lock.
Since the failed deactivation does not necessarily have consequences, we
downgrade the error to a warning, which means that the clone tasks will
continue to be completed successfully.

Signed-off-by: Hannes Duerr 
---
 PVE/API2/Qemu.pm | 9 -
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 69c5896..f1e88b8 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -48,6 +48,7 @@ use PVE::DataCenterConfig;
 use PVE::SSHInfo;
 use PVE::Replication;
 use PVE::StorageTunnel;
+use PVE::RESTEnvironment qw(log_warn);
 
 BEGIN {
 if (!$ENV{PVE_GENERATING_DOCS}) {
@@ -3820,7 +3821,13 @@ __PACKAGE__->register_method({
 
if ($target) {
# always deactivate volumes - avoid lvm LVs to be active on 
several nodes
-   PVE::Storage::deactivate_volumes($storecfg, $vollist, 
$snapname) if !$running;
+   eval {
+   PVE::Storage::deactivate_volumes($storecfg, $vollist, 
$snapname) if !$running;
+   };
+   my $err = $@;
+   if ($err) {
+   log_warn("$err\n");
+   }
PVE::Storage::deactivate_volumes($storecfg, $newvollist);
 
my $newconffile = PVE::QemuConfig->config_file($newid, 
$target);
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



Re: [pve-devel] [PATCH storage 1/1] storage/plugins: pass scfg to parse_volname

2024-03-06 Thread Roland Kammerer via pve-devel
--- Begin Message ---
On Tue, Mar 05, 2024 at 01:13:08PM +0100, Fabian Grünbichler wrote:
> I wonder whether the following wouldn't also work?
> 
> - keep the volume name on the PVE side like the other storage plugins
> (which means - encode vital information about the volume there so that
> it's possible to query *without* talking to the storage[0])
> - use a separate, unknown to PVE identifier on the DRBD side
> - store the association (either from volname to internal, or both
> ways) in LINSTOR and potentially cache it on the PVE side if needed

Hi Fabian,

thanks for your feedback, highly appreciated. I'm pretty sure this is
exactly what I hinted at in the cover letter with "completely virtualizing
the names", but I did not want to go into too much detail about alternative
implementations as the text was already very long.

I did a draft of that idea after running into the problem with
parse_volname initially. One can argue about which is the worse UX, but giving
the users virtual vm-100-disk-1 names that have absolutely nothing to do
with the actual LINSTOR/DRBD resources did not sound too convincing
either. If things go sideways, users switch to the CLI and, for example,
check the state of the DRBD resources. That is easier if they already see the
actual pm-$UUID name in the PVE GUI than having to step through another
layer of virtual names.

If I go that route I have to store the mapping on a "global level" in
the LINSTOR controller, but if I only store the VM ID, I can store it in
the "object" associated with the LINSTOR resource/disk. If the disk gets
deleted, that is trivial, the "disk object" is deleted and so its VM ID
mapping. On the other hand if I need to store the mapping on the "global
level", then I also need some extra code to make sure that the entries
are actually consistent if things fail. Minor, but still.

What killed the idea for me was that it surprisingly complicated the
code dramatically. One needs to resolve these mappings way more often
than I expected. Everything that in the end talks to LINSTOR needs to
resolve that mapping. Also the parts that deal with snapshots became a
lot more tricky. With what I propose, users (and in this case also the
plugin and LINSTOR) get what it is, pm-UUID names, and the rest (except
parse_volname) then just works.
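
For illustration, the sort of extra indirection described above, as a rough
sketch (the helper names, the cache and the dummy lookup are hypothetical, not
actual plugin or LINSTOR code):

    use strict;
    use warnings;

    # stand-in for querying the volname => internal-name association in LINSTOR
    sub linstor_lookup_mapping {
        my ($scfg, $volname) = @_;
        return 'pm-123e4567-e89b-12d3-a456-426614174000';    # dummy value
    }

    my %resolve_cache;    # per-process cache: PVE volname => internal DRBD name

    # every code path that ends up talking to LINSTOR needs this lookup first
    sub resolve_internal_name {
        my ($scfg, $volname) = @_;
        return $resolve_cache{$volname} //= linstor_lookup_mapping($scfg, $volname);
    }

    my $scfg = {};    # placeholder storage config
    print resolve_internal_name($scfg, 'vm-100-disk-1'), "\n";    # pm-123e4567-...

(and the reverse direction is needed as well, e.g. when listing existing
resources and reporting them back as PVE volnames)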

I perfectly understand that my suggested way is not perfect, it is a
trade off, but looking at both implementations this resulted in much
more reliable/maintainable code on the plugin's side, and not too much
hassle on the PVE side. Making parse_volname slightly worse vs. making
everything else in the plugin worse...

I'll give it one more round of thought, but most likely this will be a v2
with the changes Thomas requested, if this is okay.

Regards, rck

--- End Message ---
___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel