[pve-devel] applied: [PATCH zfsonlinux] update zfs submodule to 2.1.13 and refresh patches

2023-09-28 Thread Thomas Lamprecht
Am 28/09/2023 um 12:37 schrieb Stoiko Ivanov:
> Suggested-by: Thomas Lamprecht 
> Signed-off-by: Stoiko Ivanov 
> ---
> did some minimal testing (ztest for a while, containers with replication
> and a migration between 2 nodes) - looked ok
> The changelog also seems harmless from a quick glance.
> 
>  debian/patches/0005-Enable-zed-emails.patch| 2 +-
>  debian/patches/0006-dont-symlink-zed-scripts.patch | 4 ++--
>  upstream   | 2 +-
>  3 files changed, 4 insertions(+), 4 deletions(-)
> 
>

applied, thanks!





Re: [pve-devel] [PATCH v2 qemu-server 2/2] remote-migration: add target-cpu param

2023-09-28 Thread DERUMIER, Alexandre
Le mercredi 26 avril 2023 à 15:14 +0200, Fabian Grünbichler a écrit :
> On April 25, 2023 6:52 pm, Alexandre Derumier wrote:
> > This patch adds support for remote migration when the target
> > cpu model is different.
> > 
> > The target VM is restarted after the migration
> 
> so this effectively introduces a new "hybrid" migration mode ;) the
> changes are a bit smaller than I expected (in part thanks to patch #1),
> which is good.
> 
> there are semi-frequent requests for another variant (also applicable
> to containers) in the form of a two-phase migration
> - storage migrate
> - stop guest
> - incremental storage migrate
> - start guest on target
> 
> given that it might make sense to safeguard this implementation here,
> and maybe switch to a new "mode" parameter?
> 

I have implemented a working switch to remote NBD in v3.

So, after the disk migration, we do a block-job-complete; the source VM
is still running and is now running over NBD through the target VM.
Then the source VM is shut down, flushing the last pending writes
through NBD.
Then the target VM is restarted.
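
For clarity, the source-side sequence is roughly the following (a sketch
only, using QMP commands and helpers that already appear in qemu-server;
$job_id is a placeholder, this is not the literal patch code):

    # storage mirror to the target's NBD export has converged at this point
    mon_cmd($vmid, 'block-job-complete', device => $job_id);  # pivot: source now writes through the target's NBD export
    PVE::QemuServer::vm_stop(undef, $vmid, 1, 1);             # clean shutdown flushes the last writes over NBD
    # finally the target side runs the new 'restart' tunnel command (vm_stop + vm_start_nolock on the target VM)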



> online => switching CPU not allowed
> offline or however-we-call-this-new-mode (or in the future,
> two-phase-restart) => switching CPU allowed
> 
> > 
Still unsure about it; I have added an extra "target-reboot" flag in v3.

- online: check if the source VM is online
- target-cpu: change the target CPU (only changes the value on the target VM)
- target-reboot: skip live migration, shut down the source VM and
  restart the target VM.



> > Signed-off-by: Alexandre Derumier 
> > ---
> >  PVE/API2/Qemu.pm   | 18 ++
> >  PVE/CLI/qm.pm  |  6 ++
> >  PVE/QemuMigrate.pm | 25 +
> >  3 files changed, 49 insertions(+)
> > 
> > diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> > index 587bb22..6703c87 100644
> > --- a/PVE/API2/Qemu.pm
> > +++ b/PVE/API2/Qemu.pm
> > @@ -4460,6 +4460,12 @@ __PACKAGE__->register_method({
> > optional => 1,
> > default => 0,
> >     },
> > +   'target-cpu' => {
> > +   optional => 1,
> > +   description => "Target Emulated CPU model. For online migration, the storage is live-migrated, but the memory migration is skipped and the target VM is restarted.",
> > +   type => 'string',
> > +   format => 'pve-vm-cpu-conf',
> > +   },
> >     'target-storage' => get_standard_option('pve-targetstorage', {
> > completion => \&PVE::QemuServer::complete_migration_storage,
> > optional => 0,
> > @@ -4557,11 +4563,14 @@ __PACKAGE__->register_method({
> > raise_param_exc({ 'target-bridge' => "failed to parse
> > bridge map: $@" })
> >     if $@;
> >  
> > +   my $target_cpu = extract_param($param, 'target-cpu');
> 
> this is okay
> 
> > +
> > die "remote migration requires explicit storage mapping!\n"
> >     if $storagemap->{identity};
> >  
> > $param->{storagemap} = $storagemap;
> > $param->{bridgemap} = $bridgemap;
> > +   $param->{targetcpu} = $target_cpu;
> 
> but this is a bit confusing with the variable/hash key naming ;)
> 
Fixed in v4.

...
> >  
> > +    $remote_conf->{cpu} = $self->{opts}->{targetcpu};
> 
> do we need permission checks here (or better, somewhere early on, for
> doing this here)
> 
> 
> 
Fixed in v4: do not override the CPU config if target-cpu is empty.

About permissions, I'm not sure, but we don't have a specific permission
for the CPU. Do we need to check a permission on vm.config?
Anyway, we should already have permission to create a VM on the
target cluster.
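
If we do want an explicit check, it could presumably look something like
this early in the API handler (sketch only; whether VM.Config.CPU is the
right privilege here is exactly the open question):

    my $rpcenv = PVE::RPCEnvironment::get();
    my $authuser = $rpcenv->get_user();
    # only relevant when the caller actually wants to override the CPU model
    $rpcenv->check($authuser, "/vms/$source_vmid", ['VM.Config.CPU'])
        if $param->{'target-cpu'};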



[pve-devel] [PATCH v4 qemu-server 1/2] migration: move livemigration code in a dedicated sub

2023-09-28 Thread Alexandre Derumier
---
 PVE/QemuMigrate.pm | 420 +++--
 1 file changed, 214 insertions(+), 206 deletions(-)

diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index f41c61f..5ea78a7 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -726,6 +726,219 @@ sub cleanup_bitmaps {
 }
 }
 
+sub live_migration {
+my ($self, $vmid, $migrate_uri, $spice_port) = @_;
+
+my $conf = $self->{vmconf};
+
+$self->log('info', "starting online/live migration on $migrate_uri");
+$self->{livemigration} = 1;
+
+# load_defaults
+my $defaults = PVE::QemuServer::load_defaults();
+
+$self->log('info', "set migration capabilities");
+eval { PVE::QemuServer::set_migration_caps($vmid) };
+warn $@ if $@;
+
+my $qemu_migrate_params = {};
+
+# migrate speed can be set via bwlimit (datacenter.cfg and API) and via the
+# migrate_speed parameter in qm.conf - take the lower of the two.
+my $bwlimit = $self->get_bwlimit();
+
+my $migrate_speed = $conf->{migrate_speed} // 0;
+$migrate_speed *= 1024; # migrate_speed is in MB/s, bwlimit in KB/s
+
+if ($bwlimit && $migrate_speed) {
+   $migrate_speed = ($bwlimit < $migrate_speed) ? $bwlimit : $migrate_speed;
+} else {
+   $migrate_speed ||= $bwlimit;
+}
+$migrate_speed ||= ($defaults->{migrate_speed} || 0) * 1024;
+
+if ($migrate_speed) {
+   $migrate_speed *= 1024; # qmp takes migrate_speed in B/s.
+   $self->log('info', "migration speed limit: ". render_bytes($migrate_speed, 1) ."/s");
+} else {
+   # always set migrate speed as QEMU default to 128 MiBps == 1 Gbps, use 16 GiBps == 128 Gbps
+   $migrate_speed = (16 << 30);
+}
+$qemu_migrate_params->{'max-bandwidth'} = int($migrate_speed);
+
+my $migrate_downtime = $defaults->{migrate_downtime};
+$migrate_downtime = $conf->{migrate_downtime} if defined($conf->{migrate_downtime});
+# migrate-set-parameters expects limit in ms
+$migrate_downtime *= 1000;
+$self->log('info', "migration downtime limit: $migrate_downtime ms");
+$qemu_migrate_params->{'downtime-limit'} = int($migrate_downtime);
+
+# set cachesize to 10% of the total memory
+my $memory = get_current_memory($conf->{memory});
+my $cachesize = int($memory * 1048576 / 10);
+$cachesize = round_powerof2($cachesize);
+
+$self->log('info', "migration cachesize: " . render_bytes($cachesize, 1));
+$qemu_migrate_params->{'xbzrle-cache-size'} = int($cachesize);
+
+$self->log('info', "set migration parameters");
+eval {
+   mon_cmd($vmid, "migrate-set-parameters", %{$qemu_migrate_params});
+};
+$self->log('info', "migrate-set-parameters error: $@") if $@;
+
+if (PVE::QemuServer::vga_conf_has_spice($conf->{vga}) && !$self->{opts}->{remote}) {
+   my $rpcenv = PVE::RPCEnvironment::get();
+   my $authuser = $rpcenv->get_user();
+
+   my (undef, $proxyticket) = PVE::AccessControl::assemble_spice_ticket($authuser, $vmid, $self->{node});
+
+   my $filename = "/etc/pve/nodes/$self->{node}/pve-ssl.pem";
+   my $subject =  PVE::AccessControl::read_x509_subject_spice($filename);
+
+   $self->log('info', "spice client_migrate_info");
+
+   eval {
+   mon_cmd($vmid, "client_migrate_info", protocol => 'spice',
+   hostname => $proxyticket, 'port' => 0, 'tls-port' => $spice_port,
+   'cert-subject' => $subject);
+   };
+   $self->log('info', "client_migrate_info error: $@") if $@;
+
+}
+
+my $start = time();
+
+$self->log('info', "start migrate command to $migrate_uri");
+eval {
+   mon_cmd($vmid, "migrate", uri => $migrate_uri);
+};
+my $merr = $@;
+$self->log('info', "migrate uri => $migrate_uri failed: $merr") if $merr;
+
+my $last_mem_transferred = 0;
+my $usleep = 100;
+my $i = 0;
+my $err_count = 0;
+my $lastrem = undef;
+my $downtimecounter = 0;
+while (1) {
+   $i++;
+   my $avglstat = $last_mem_transferred ? $last_mem_transferred / $i : 0;
+
+   usleep($usleep);
+
+   my $stat = eval { mon_cmd($vmid, "query-migrate") };
+   if (my $err = $@) {
+   $err_count++;
+   warn "query migrate failed: $err\n";
+   $self->log('info', "query migrate failed: $err");
+   if ($err_count <= 5) {
+   usleep(1_000_000);
+   next;
+   }
+   die "too many query migrate failures - aborting\n";
+   }
+
+   my $status = $stat->{status};
+   if (defined($status) && $status =~ m/^(setup)$/im) {
+   sleep(1);
+   next;
+   }
+
+   if (!defined($status) || $status !~ m/^(active|completed|failed|cancelled)$/im) {
+   die $merr if $merr;
+   die "unable to parse migration status '$status' - aborting\n";
+   }
+   $merr = undef;
+   $err_count = 0;
+
+   my 

[pve-devel] [PATCH v4 qemu-server 0/2] remote-migration: migration with different cpu

2023-09-28 Thread Alexandre Derumier
This patch series allows remote migration between clusters with different CPU
models.

2 new params are introduced: "target-cpu" && "target-reboot"

If target-cpu is defined, it will replace the CPU model of the target VM.

If the VM is online/running, an extra "target-reboot" safeguard option is needed.
Indeed, as the target CPU is different, the live migration with memory transfer
is skipped (the target would die with a different CPU anyway).

Then, after the storage copy, we switch the source VM disk to the target VM's
NBD export, then shut down the source VM and restart the target VM
(like a virtual reboot between source/target).



Changelog v2:

The first version was simply shutting down the target VM,
without doing the block-job-complete.

After doing production migrations with around 400 VMs, I had
some fs corruption, as if some data was still in the buffer.

This v2 has been tested with another batch of 400 VMs, without
any corruption.


Changelog v3:

v2 was not perfect, there were still 1 or 2 fs corruptions with VMs doing
a lot of writes.

This v3 retakes the idea of v1, but in a cleaner way:

- we migrate the disks to the target VM
- the source VM switches its disks to the NBD exports of the target VM
  (with a block-job-complete, and not a block-job-cancel as with standard
  disk migration).
  We are then 100% sure that no write is still pending in the migration job.
- the source VM is shut down
- the target VM is restarted


Changelog v4:
 - bugfix: do not override the CPU with an empty config if target-cpu is not defined
 - small cleanups of the params



We have redone a lot of migrations this summer (maybe another 4000 VMs),
with 0 corruptions, for both Windows and Linux guest VMs.






Alexandre Derumier (2):
  migration: move livemigration code in a dedicated sub
  remote-migration: add target-cpu && target-reboot params

 PVE/API2/Qemu.pm   |  23 ++-
 PVE/CLI/qm.pm  |  11 ++
 PVE/QemuMigrate.pm | 451 -
 3 files changed, 276 insertions(+), 209 deletions(-)

-- 
2.39.2





[pve-devel] [PATCH v4 qemu-server 2/2] remote-migration: add target-cpu && target-reboot params

2023-09-28 Thread Alexandre Derumier
This patch adds support for remote migration when the target
CPU model is different.

The target-reboot param needs to be defined to allow migration
when the source VM is online.

When defined, only the live storage migration is done, and instead of
transferring the memory, we cleanly shut down the source VM
and restart the target VM (like a virtual reboot between source/dest).
---
 PVE/API2/Qemu.pm   | 23 ++-
 PVE/CLI/qm.pm  | 11 +++
 PVE/QemuMigrate.pm | 31 +--
 3 files changed, 62 insertions(+), 3 deletions(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 774b0c7..38991e9 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -4586,6 +4586,17 @@ __PACKAGE__->register_method({
optional => 1,
default => 0,
},
+   'target-cpu' => {
+   optional => 1,
+   description => "Target Emulated CPU model. For online migration, this requires the target-reboot option.",
+   type => 'string',
+   format => 'pve-vm-cpu-conf',
+   },
+   'target-reboot' => {
+   type => 'boolean',
+   description => "For online migration, don't migrate memory, only storage. Then the source VM is shut down and the target VM is restarted.",
+   optional => 1,
+   },
'target-storage' => get_standard_option('pve-targetstorage', {
completion => \&PVE::QemuServer::complete_migration_storage,
optional => 0,
@@ -4666,7 +4677,7 @@ __PACKAGE__->register_method({
 
if (PVE::QemuServer::check_running($source_vmid)) {
die "can't migrate running VM without --online\n" if !$param->{online};
-
+   die "can't migrate running VM without --target-reboot when target cpu is different" if $param->{'target-cpu'} && !$param->{'target-reboot'};
} else {
warn "VM isn't running. Doing offline migration instead.\n" if $param->{online};
$param->{online} = 0;
@@ -4683,6 +4694,7 @@ __PACKAGE__->register_method({
raise_param_exc({ 'target-bridge' => "failed to parse bridge map: $@" })
if $@;
 
+
die "remote migration requires explicit storage mapping!\n"
if $storagemap->{identity};
 
@@ -5732,6 +5744,15 @@ __PACKAGE__->register_method({
PVE::QemuServer::nbd_stop($state->{vmid});
return;
},
+   'restart' => sub {
+   PVE::QemuServer::vm_stop(undef, $state->{vmid}, 1, 1);
+   my $info = PVE::QemuServer::vm_start_nolock(
+   $state->{storecfg},
+   $state->{vmid},
+   $state->{conf},
+   );
+   return;
+   },
'resume' => sub {
if (PVE::QemuServer::Helpers::vm_running_locally($state->{vmid})) {
PVE::QemuServer::vm_resume($state->{vmid}, 1, 1);
diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
index b17b4fe..9d89cfe 100755
--- a/PVE/CLI/qm.pm
+++ b/PVE/CLI/qm.pm
@@ -189,6 +189,17 @@ __PACKAGE__->register_method({
optional => 1,
default => 0,
},
+   'target-cpu' => {
+   optional => 1,
+   description => "Target Emulated CPU model. For online migration, this requires the target-reboot option.",
+   type => 'string',
+   format => 'pve-vm-cpu-conf',
+   },
+   'target-reboot' => {
+   type => 'boolean',
+   description => "For online migration, don't migrate memory, only storage. Then the source VM is shut down and the target VM is restarted.",
+   optional => 1,
+   },
'target-storage' => get_standard_option('pve-targetstorage', {
completion => \&PVE::QemuServer::complete_migration_storage,
optional => 0,
diff --git a/PVE/QemuMigrate.pm b/PVE/QemuMigrate.pm
index 5ea78a7..8131b0b 100644
--- a/PVE/QemuMigrate.pm
+++ b/PVE/QemuMigrate.pm
@@ -729,6 +729,11 @@ sub cleanup_bitmaps {
 sub live_migration {
 my ($self, $vmid, $migrate_uri, $spice_port) = @_;
 
+if($self->{opts}->{'target-reboot'}){
+   $self->log('info', "target reboot - skip live migration.");
+   return;
+}
+
 my $conf = $self->{vmconf};
 
 $self->log('info', "starting online/live migration on $migrate_uri");
@@ -993,6 +998,7 @@ sub phase1_remote {
 my $remote_conf = PVE::QemuConfig->load_config($vmid);
 PVE::QemuConfig->update_volume_ids($remote_conf, $self->{volume_map});
 
+$remote_conf->{cpu} = $self->{opts}->{'target-cpu'} if $self->{opts}->{'target-cpu'};
 my $bridges = map_bridges($remote_conf, $self->{opts}->{bridgemap});
 for my $target (keys $bridges->%*) {
for my $nic (keys $bridges->{$target}->%*) {
@@ -1356,7 +1362,14 @@ sub phase2 {
# finish 

Re: [pve-devel] [PATCH installer] install: install correct grub metapackage for the current boot-mode

2023-09-28 Thread Thomas Lamprecht
Am 28/09/2023 um 16:29 schrieb Stoiko Ivanov:
> just realized while talking with Friedrich off-list - if this gets applied
> it probably would make sense to include it in the pve7to8 (same for pbs
> and pmg) checks (and also in the upgrade guides)
> (mostly meant as a note to myself) 

Potentially better suited in one of our kernel or (probably even better)
boot-tool related package's postinst? As that would then cover existing
8.x systems too.





Re: [pve-devel] [PATCH installer] install: install correct grub metapackage for the current boot-mode

2023-09-28 Thread Stoiko Ivanov
just realized while talking with Friedrich off-list - if this gets applied
it probably would make sense to include it in the pve7to8 (same for pbs
and pmg) checks (and also in the upgrade guides)
(mostly meant as a note to myself) 

On Thu, 28 Sep 2023 16:05:33 +0200
Stoiko Ivanov  wrote:

> grub packages in Debian are split between:
> * meta-packages, which handle (among other things) reinstalling
>   grub to the actual device/ESP in case of a version upgrade (grub-pc,
>   grub-efi-amd64)
> * bin-packages, which contain the actual boot-loaders
> The bin-packages can coexist on a system, but the meta-packages
> conflict with each other (didn't check why, but I don't see a hard
> conflict on a quick glance)
> 
> Currently our ISO installs grub-pc unconditionally (and both bin
> packages, since we install the legacy bootloader also on uefi-booted
> systems). This results in uefi-systems not getting a new grub
> installed automatically upon upgrade.
> 
> Reported in our community-forum from users who upgraded to PVE 8.0,
> and still run into an issue fixed in grub for bookworm:
> https://forum.proxmox.com/threads/.123512/
> 
> Reproduced and analyzed by Friedrich.
> 
> This patch changes the installer, to install the meta-package fitting
> for the boot-mode.
> 
> We do not set the debconf variable install_devices, because in my
> tests a plain debian installed in uefi mode has this set, and a
> `grep -ri install_devices /var/lib/dpkg/info` yields only results with
> grub-pc.
> 
> Reported-by: Friedrich Weber 
> Signed-off-by: Stoiko Ivanov 
> ---
> quickly tested by building an ISO (with the necessary modifications to
> ship both packages as .deb) and installing in legacy mode and uefi mode
> once.
>  Proxmox/Install.pm | 6 ++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/Proxmox/Install.pm b/Proxmox/Install.pm
> index 1117fc4..d775ac0 100644
> --- a/Proxmox/Install.pm
> +++ b/Proxmox/Install.pm
> @@ -1057,6 +1057,12 @@ _EOD
>   chomp;
>   my $path = $_;
>   my ($deb) = $path =~ m/${proxmox_pkgdir}\/(.*\.deb)/;
> +
> + # the grub-pc/grub-efi-amd64 packages (w/o -bin) are the ones 
> actually updating grub
> + # upon upgrade - and conflict with each other - install the fitting 
> one only
> + next if ($deb =~ /grub-pc_/ && $run_env->{boot_type} ne 'bios');
> + next if ($deb =~ /grub-efi-amd64_/ && $run_env->{boot_type} ne 
> 'efi');
> +
>   update_progress($count/$pkg_count, 0.5, 0.75, "extracting $deb");
>   print STDERR "extracting: $deb\n";
>   syscmd("chroot $targetdir dpkg $dpkg_opts --force-depends 
> --no-triggers --unpack /tmp/pkg/$deb") == 0






[pve-devel] [PATCH installer] install: install correct grub metapackage for the current boot-mode

2023-09-28 Thread Stoiko Ivanov
grub packages in Debian are split between:
* meta-packages, which handle (among other things) reinstalling
  grub to the actual device/ESP in case of a version upgrade (grub-pc,
  grub-efi-amd64)
* bin-packages, which contain the actual boot-loaders
The bin-packages can coexist on a system, but the meta-packages
conflict with each other (didn't check why, but I don't see a hard
conflict on a quick glance)

Currently our ISO installs grub-pc unconditionally (and both bin
packages, since we install the legacy bootloader also on uefi-booted
systems). This results in uefi-systems not getting a new grub
installed automatically upon upgrade.

Reported in our community-forum from users who upgraded to PVE 8.0,
and still run into an issue fixed in grub for bookworm:
https://forum.proxmox.com/threads/.123512/

Reproduced and analyzed by Friedrich.

This patch changes the installer, to install the meta-package fitting
for the boot-mode.

We do not set the debconf variable install_devices, because in my
tests a plain debian installed in uefi mode has this set, and a
`grep -ri install_devices /var/lib/dpkg/info` yields only results with
grub-pc.

Reported-by: Friedrich Weber 
Signed-off-by: Stoiko Ivanov 
---
quickly tested by building an ISO (with the necessary modifications to
ship both packages as .deb) and installing in legacy mode and uefi mode
once.
 Proxmox/Install.pm | 6 ++
 1 file changed, 6 insertions(+)

diff --git a/Proxmox/Install.pm b/Proxmox/Install.pm
index 1117fc4..d775ac0 100644
--- a/Proxmox/Install.pm
+++ b/Proxmox/Install.pm
@@ -1057,6 +1057,12 @@ _EOD
chomp;
my $path = $_;
my ($deb) = $path =~ m/${proxmox_pkgdir}\/(.*\.deb)/;
+
+   # the grub-pc/grub-efi-amd64 packages (w/o -bin) are the ones actually updating grub
+   # upon upgrade - and conflict with each other - install the fitting one only
+   next if ($deb =~ /grub-pc_/ && $run_env->{boot_type} ne 'bios');
+   next if ($deb =~ /grub-efi-amd64_/ && $run_env->{boot_type} ne 'efi');
+
update_progress($count/$pkg_count, 0.5, 0.75, "extracting $deb");
print STDERR "extracting: $deb\n";
syscmd("chroot $targetdir dpkg $dpkg_opts --force-depends 
--no-triggers --unpack /tmp/pkg/$deb") == 0
-- 
2.39.2






[pve-devel] [PATCH proxmox-backup] ui: fix #4260: add dynamic notes in backup group comment

2023-09-28 Thread Philipp Hufnagl
When there is no comment for a backup group, the comment of the last
snapshot in this group will be shown slightly grayed out as long as
the group is collapsed.

Signed-off-by: Philipp Hufnagl 
---
 www/css/ext6-pbs.css |  3 +++
 www/datastore/Content.js | 17 ++---
 2 files changed, 17 insertions(+), 3 deletions(-)

diff --git a/www/css/ext6-pbs.css b/www/css/ext6-pbs.css
index 5fd65d25..324f73b8 100644
--- a/www/css/ext6-pbs.css
+++ b/www/css/ext6-pbs.css
@@ -226,6 +226,9 @@ span.snapshot-comment-column {
 display: inline-block;
 width: calc(100% - 18px);
 }
+span.snapshot-comment-derived {
+opacity: 0.7;
+}
 
 .x-action-col-icon.good:before {
 color: #21BF4B;
diff --git a/www/datastore/Content.js b/www/datastore/Content.js
index 9fc07d49..926aab89 100644
--- a/www/datastore/Content.js
+++ b/www/datastore/Content.js
@@ -146,6 +146,7 @@ Ext.define('PBS.DataStoreContent', {
leaf: false,
iconCls: "fa " + cls,
expanded: false,
+   comment: item.data.comment,
backup_type: item.data["backup-type"],
backup_id: item.data["backup-id"],
children: [],
@@ -287,6 +288,7 @@ Ext.define('PBS.DataStoreContent', {
if (item["backup-time"] > last_backup && item.size !== 
null) {
last_backup = item["backup-time"];
group["backup-time"] = last_backup;
+   group["last-comment"] = item.comment;
group.files = item.files;
group.size = item.size;
group.owner = item.owner;
@@ -900,16 +902,25 @@ Ext.define('PBS.DataStoreContent', {
flex: 1,
renderer: (v, meta, record) => {
let data = record.data;
+   let additional_classes = "";
if (!data || data.leaf || data.root) {
return '';
}
-   if (v === undefined || v === null) {
-   v = '';
+
+   // when there is no group comment and the section is collapsed,
+   // display the most recent snapshot comment
+   if (v === undefined || v === null|| v === '') {
+   if (data.expanded === false) {
+   v = data['last-comment'];
+   additional_classes = "snapshot-comment-derived";
+   } else {
+   v = '';
+   }
}
v = Ext.String.htmlEncode(v);
let icon = 'x-action-col-icon fa fa-fw fa-pencil pointer';
 
-   return `${v}
+   return `${v}
`;
},
listeners: {
-- 
2.39.2






Re: [pve-devel] [PATCH storage/widget-toolkit 0/2] List all OSDs on disk

2023-09-28 Thread Aaron Lauterer

ping? patches still apply cleanly

On 8/22/23 11:04, Aaron Lauterer wrote:

It is possible to have multiple OSD daemons on a single disk. This is
useful if fast NVME drives are used to utilize their full potential.

For these situations we want to list all OSD daemons that are located on
the disk in the disk panel of the node.

I added a new 'osdid-list' parameter to the Disks GET API endpoint for
this.
We can probably deprecate/remove the 'osdid' the next time we can
introduce breaking API changes.
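
For illustration, a disk entry returned by the endpoint would then carry
both fields, roughly like this (field names from this series, values
made up):

    osdid => 1,               # existing field: a single OSD id
    'osdid-list' => [1, 2],   # new field: all OSDs located on this disk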

storage: Aaron Lauterer (1):
   disks: get: add osdid-list return parameter

  src/PVE/API2/Disks.pm |  6 +++-
  src/PVE/Diskmanage.pm | 10 ++
  .../disk_tests/cciss/disklist_expected.json   |  1 +
  .../hdd_smart/disklist_expected.json  |  2 ++
  .../nvme_smart/disklist_expected.json |  1 +
  .../disk_tests/sas/disklist_expected.json |  1 +
  .../disk_tests/sas_ssd/disklist_expected.json |  1 +
  .../ssd_smart/disklist_expected.json  |  5 +++
  .../disk_tests/usages/disklist_expected.json  | 32 +--
  9 files changed, 49 insertions(+), 10 deletions(-)


widget-toolkit: Aaron Lauterer (1):
   DiskList: render osdid-list if present

  src/panel/DiskList.js | 6 +-
  1 file changed, 5 insertions(+), 1 deletion(-)







Re: [pve-devel] [PATCH manager v3] fix #4631: ceph: osd: create: add osds-per-device

2023-09-28 Thread Aaron Lauterer

ping? Patch still applies

previous patch versions with discussion are:
https://lists.proxmox.com/pipermail/pve-devel/2023-August/058794.html
https://lists.proxmox.com/pipermail/pve-devel/2023-August/058803.html

On 8/23/23 11:44, Aaron Lauterer wrote:

Allows to automatically create multiple OSDs per physical device. The
main use case are fast NVME drives that would be bottlenecked by a
single OSD service.

By using the 'ceph-volume lvm batch' command instead of the 'ceph-volume
lvm create' for multiple OSDs / device, we don't have to deal with the
split of the drive ourselves.

But this means that the parameters to specify a DB or WAL device won't
work as the 'batch' command doesn't use them. Dedicated DB and WAL
devices don't make much sense anyway if we place the OSDs on fast NVME
drives.

Some other changes to how the command is built were needed as well, as
the 'batch' command needs the path to the disk as a positional argument,
not as '--data /dev/sdX'.
We drop the '--cluster-fsid' parameter because the 'batch' command
doesn't accept it. The 'create' will fall back to reading it from the
ceph.conf file.

Removal of OSDs works as expected without any code changes. As long as
there are other OSDs on a disk, the VG & PV won't be removed, even if
'cleanup' is enabled.

The '--no-auto' parameter is used to avoid the following deprecation
warning:
```
--> DEPRECATION NOTICE
--> You are using the legacy automatic disk sorting behavior
--> The Pacific release will change the default to --no-auto
--> passed data devices: 1 physical, 0 LVM
--> relative data size: 0.
```
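
To make the resulting invocation concrete: for, say, two OSDs on a fast
NVME device, the 'batch' branch roughly builds an argument list like the
following (illustrative sketch with a made-up device path and count, not
the patch code itself):

```
my $cmd = ['ceph-volume', 'lvm', 'batch'];
push @$cmd, '--crush-device-class', 'nvme';  # only if a device class was requested
push @$cmd, '--osds-per-device', '2';        # the new parameter
push @$cmd, '--no-auto';                     # silences the deprecation notice above
push @$cmd, '--', '/dev/nvme0n1';            # device path as positional argument after '--'
```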

Signed-off-by: Aaron Lauterer 
---

changes since v2:
* removed check for fsid
* rework ceph-volume call to place the positional devpath parameter
   after '--'

  PVE/API2/Ceph/OSD.pm | 35 +--
  1 file changed, 29 insertions(+), 6 deletions(-)

diff --git a/PVE/API2/Ceph/OSD.pm b/PVE/API2/Ceph/OSD.pm
index ded35990..a1d92ca7 100644
--- a/PVE/API2/Ceph/OSD.pm
+++ b/PVE/API2/Ceph/OSD.pm
@@ -275,6 +275,13 @@ __PACKAGE__->register_method ({
type => 'string',
description => "Set the device class of the OSD in crush."
},
+   'osds-per-device' => {
+   optional => 1,
+   type => 'integer',
+   minimum => '1',
+   description => "OSD services per physical device. Only useful for fast "
+   ."NVME devices to utilize their performance better.",
+   },
},
  },
  returns => { type => 'string' },
@@ -294,6 +301,15 @@ __PACKAGE__->register_method ({
# extract parameter info and fail if a device is set more than once
my $devs = {};
  
+	# allow 'osds-per-device' only without dedicated db and/or wal devs. We cannot specify them with
+   # 'ceph-volume lvm batch' and they don't make a lot of sense on fast NVMEs anyway.
+   if ($param->{'osds-per-device'}) {
+   for my $type ( qw(db_dev wal_dev) ) {
+   raise_param_exc({ $type => "cannot use 'osds-per-device' parameter with '${type}'" })
+   if $param->{$type};
+   }
+   }
+
my $ceph_conf = cfs_read_file('ceph.conf');
  
  	my $osd_network = $ceph_conf->{global}->{cluster_network};

@@ -363,10 +379,6 @@ __PACKAGE__->register_method ({
my $rados = PVE::RADOS->new();
my $monstat = $rados->mon_command({ prefix => 'quorum_status' });
  
-	die "unable to get fsid\n" if !$monstat->{monmap} || !$monstat->{monmap}->{fsid};

-   my $fsid = $monstat->{monmap}->{fsid};
-$fsid = $1 if $fsid =~ m/^([0-9a-f\-]+)$/;
-
my $ceph_bootstrap_osd_keyring = 
PVE::Ceph::Tools::get_config('ceph_bootstrap_osd_keyring');
  
  	if (! -f $ceph_bootstrap_osd_keyring && $ceph_conf->{global}->{auth_client_required} eq 'cephx') {

@@ -470,7 +482,10 @@ __PACKAGE__->register_method ({
$test_disk_requirements->($disklist);
  
  		my $dev_class = $param->{'crush-device-class'};

-   my $cmd = ['ceph-volume', 'lvm', 'create', '--cluster-fsid', $fsid ];
+   # create allows for detailed configuration of DB and WAL devices
+   # batch for easy creation of multiple OSDs (per device)
+   my $create_mode = $param->{'osds-per-device'} ? 'batch' : 'create';
+   my $cmd = ['ceph-volume', 'lvm', $create_mode ];
push @$cmd, '--crush-device-class', $dev_class if $dev_class;
  
  		my $devname = $devs->{dev}->{name};

@@ -504,9 +519,17 @@ __PACKAGE__->register_method ({
push @$cmd, "--block.$type", $part_or_lv;
}
  
-		push @$cmd, '--data', $devpath;

+   push @$cmd, '--data', $devpath if $create_mode eq 'create';
push @$cmd, '--dmcrypt' if $param->{encrypted};
  
+		if ($create_mode eq 'batch') {

+   push @$cmd,
+   '--osds-per-device', 

[pve-devel] [PATCH qemu 2/7] buildsys: fixup submodule target

2023-09-28 Thread Fiona Ebner
It's not enough to initialize the submodules anymore, as some got
replaced by wrap files, see QEMU commit 2019cabfee ("meson:
subprojects: replace submodules with wrap files").

Download the subprojects during initialization of the QEMU submodule,
so building (without the automagical --enable-download) can succeed
afterwards.

Signed-off-by: Fiona Ebner 
---
 Makefile | 5 -
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/Makefile b/Makefile
index 6c62c78..e389a9c 100644
--- a/Makefile
+++ b/Makefile
@@ -17,7 +17,10 @@ all: $(DEBS)
 
 .PHONY: submodule
 submodule:
-   test -f "$(SRCDIR)/configure" || git submodule update --init --recursive
+ifeq ($(shell test -f "$(SRCDIR)/configure" && echo 1 || echo 0), 0)
+   git submodule update --init --recursive
+   cd $(SRCDIR); meson subprojects download
+endif
 
 PC_BIOS_FW_PURGE_LIST_IN = \
hppa-firmware.img \
-- 
2.39.2






[pve-devel] [PATCH-SERIES qemu] update to QEMU 8.1.1

2023-09-28 Thread Fiona Ebner
Patch changes:

For backup, opening the backup dump block driver needed to be adapted,
because of coroutine context changes.

Block graph locking was disabled, because of deadlocks.

Snapshot code has a huge performance regression which required a
workaround.


Meta-changes:

Use the --disable-download option to avoid automatic downloads during
the build, but require the user to do so once themselves. This is also
done when initializing the submodule in the Makefile.

Switch back to using QEMU's keycodemapdb instead of splitting it out
into our build-dir. It hasn't been updated since QEMU 6.0 and the reason
for the split is not known. If anything pops up, we can re-do it and
document the reason this time.

A versioned Breaks for qemu-server is required, because an upstream
change would prevent VM boot with most configurations (including the
default one).


Fiona Ebner (7):
  d/rules: use disable-download option instead of git-submodules=ignore
  buildsys: fixup submodule target
  buildsys: use QEMU's keycodemapdb again
  update submodule and patches to QEMU 8.1.1
  add patch to disable graph locking
  add patch to avoid huge snapshot performance regression
  d/control: add versioned Breaks for qemu-server <= 8.0.6

 Makefile  |   20 +-
 debian/control|1 +
 ...d-support-for-sync-bitmap-mode-never.patch |  124 +-
 ...-support-for-conditional-and-always-.patch |   10 +-
 ...check-for-bitmap-mode-without-bitmap.patch |4 +-
 ...-to-bdrv_dirty_bitmap_merge_internal.patch |6 +-
 .../0006-mirror-move-some-checks-to-qmp.patch |8 +-
 ...race-with-clients-disconnecting-earl.patch |   22 +-
 ...as-Internal-cdbs-have-16-byte-length.patch |   10 +-
 ...ial-deadlock-when-draining-during-tr.patch |   10 +-
 ...irty-bitmap-fix-loading-bitmap-when.patch} |2 +-
 ...hen-getting-cursor-without-a-console.patch |   36 -
 ...el-async-DMA-operation-before-reset.patch} |4 +-
 ...-memory-prevent-dma-reentracy-issues.patch |  130 -
 ...t-graph-lock-Disable-locking-for-now.patch |  152 ++
 ...le-reentrancy-detection-for-script-R.patch |   39 -
 ...-disable-reentrancy-detection-for-io.patch |   37 -
 ...-workaround-snapshot-performance-reg.patch |   57 +
 ...sable-reentrancy-detection-for-iomem.patch |   35 -
 ...le-reentrancy-detection-for-apic-msi.patch |   36 -
 .../extra/0011-vhost-fix-the-fd-leak.patch|   29 -
 ...k-file-change-locking-default-to-off.patch |6 +-
 ...he-CPU-model-to-kvm64-32-instead-of-.patch |4 +-
 ...erfs-no-default-logfile-if-daemonize.patch |4 +-
 ...PVE-Up-glusterfs-allow-partial-reads.patch |2 +-
 ...return-success-on-info-without-snaps.patch |4 +-
 ...dd-add-osize-and-read-from-to-stdin-.patch |   12 +-
 ...E-Up-qemu-img-dd-add-isize-parameter.patch |   14 +-
 ...PVE-Up-qemu-img-dd-add-n-skip_create.patch |   10 +-
 ...-add-l-option-for-loading-a-snapshot.patch |   14 +-
 ...virtio-balloon-improve-query-balloon.patch |   12 +-
 .../0014-PVE-qapi-modify-query-machines.patch |   12 +-
 .../0015-PVE-qapi-modify-spice-query.patch|4 +-
 ...nnel-implementation-for-savevm-async.patch |8 +-
 ...async-for-background-state-snapshots.patch |   58 +-
 ...add-optional-buffer-size-to-QEMUFile.patch |   44 +-
 ...add-the-zeroinit-block-driver-filter.patch |   10 +-
 ...-Add-dummy-id-command-line-parameter.patch |   10 +-
 ...le-posix-make-locking-optiono-on-cre.patch |   18 +-
 ...3-PVE-monitor-disable-oob-capability.patch |4 +-
 ...sed-balloon-qemu-4-0-config-size-fal.patch |4 +-
 ...E-Allow-version-code-in-machine-type.patch |   22 +-
 ...VE-Backup-add-vma-backup-format-code.patch |   22 +-
 ...-Backup-add-backup-dump-block-driver.patch |4 +-
 ...ckup-Proxmox-backup-patches-for-QEMU.patch |  127 +-
 ...estore-new-command-to-restore-from-p.patch |4 +-
 ...k-driver-to-map-backup-archives-into.patch |   54 +-
 ...ct-stderr-to-journal-when-daemonized.patch |   12 +-
 ...igrate-dirty-bitmap-state-via-savevm.patch |   23 +-
 ...dirty-bitmap-migrate-other-bitmaps-e.patch |4 +-
 ...all-back-to-open-iscsi-initiatorname.patch |4 +-
 ...PVE-block-stream-increase-chunk-size.patch |2 +-
 ...accept-NULL-qiov-in-bdrv_pad_request.patch |   14 +-
 .../0039-block-add-alloc-track-driver.patch   |2 +-
 ...apshots-hold-the-BQL-during-setup-ca.patch |   24 +-
 ...vm-async-don-t-hold-BQL-during-setup.patch |4 +-
 debian/patches/series |   13 +-
 debian/rules  |2 +-
 keycodemapdb/LICENSE.BSD  |   27 -
 keycodemapdb/LICENSE.GPL2 |  339 ---
 keycodemapdb/README   |  114 -
 keycodemapdb/data/README  |   89 -
 keycodemapdb/data/keymaps.csv |  539 
 keycodemapdb/meson.build  |1 -
 keycodemapdb/tests/.gitignore |   11 -
 keycodemapdb/tests/Makefile   |  150 --
 keycodemapdb/tests/javascript  

[pve-devel] [PATCH qemu 7/7] d/control: add versioned Breaks for qemu-server <= 8.0.6

2023-09-28 Thread Fiona Ebner
Upstream QEMU commit 4271f40383 ("virtio-net: correctly report maximum
tx_queue_size value") made setting an invalid tx_queue_size for a
non-vDPA/vhost-user net device a hard error. Now, qemu-server before
commit 089aed81 ("cfg2cmd: netdev: fix value for tx_queue_size") did
just that, so the newer QEMU version would break start-up for most VMs
(a default vNIC configuration would be affected).

Signed-off-by: Fiona Ebner 
---
 debian/control | 1 +
 1 file changed, 1 insertion(+)

diff --git a/debian/control b/debian/control
index 8328cd4..7e6cd6b 100644
--- a/debian/control
+++ b/debian/control
@@ -79,6 +79,7 @@ Replaces: pve-kvm,
   qemu-system-arm,
   qemu-system-x86,
   qemu-utils,
+Breaks: qemu-server (<= 8.0.6)
 Description: Full virtualization on x86 hardware
  Using KVM, one can run multiple virtual PCs, each running unmodified Linux or
  Windows images. Each virtual machine has private virtualized hardware: a
-- 
2.39.2






[pve-devel] [PATCH qemu 6/7] add patch to avoid huge snapshot performance regression

2023-09-28 Thread Fiona Ebner
Taking a snapshot became prohibitively slow because of the
migration_transferred_bytes() call in migration_rate_exceeded() [0].

This also applied to the async snapshot taking in Proxmox VE, so
work around the issue until it is fixed upstream.

[0]: https://gitlab.com/qemu-project/qemu/-/issues/1821

Signed-off-by: Fiona Ebner 
---
 ...-workaround-snapshot-performance-reg.patch | 57 +++
 debian/patches/series |  1 +
 2 files changed, 58 insertions(+)
 create mode 100644 
debian/patches/extra/0007-migration-states-workaround-snapshot-performance-reg.patch

diff --git 
a/debian/patches/extra/0007-migration-states-workaround-snapshot-performance-reg.patch
 
b/debian/patches/extra/0007-migration-states-workaround-snapshot-performance-reg.patch
new file mode 100644
index 000..8031837
--- /dev/null
+++ 
b/debian/patches/extra/0007-migration-states-workaround-snapshot-performance-reg.patch
@@ -0,0 +1,57 @@
+From  Mon Sep 17 00:00:00 2001
+From: Fiona Ebner 
+Date: Thu, 28 Sep 2023 11:19:14 +0200
+Subject: [PATCH] migration states: workaround snapshot performance regression
+
+Commit 813cd616 ("migration: Use migration_transferred_bytes() to
+calculate rate_limit") introduced a prohibitive performance regression
+when taking a snapshot [0]. The reason turns out to be the flushing
+done by migration_transferred_bytes()
+
+Just use a _noflush version of the relevant function as a workaround
+until upstream fixes the issue. This is inspired by a not-applied
+upstream series [1], but doing the very minimum to avoid the
+regression.
+
+[0]: https://gitlab.com/qemu-project/qemu/-/issues/1821
+[1]: https://lists.nongnu.org/archive/html/qemu-devel/2023-05/msg07708.html
+
+Signed-off-by: Fiona Ebner 
+---
+ migration/migration-stats.c | 16 +++-
+ 1 file changed, 15 insertions(+), 1 deletion(-)
+
+diff --git a/migration/migration-stats.c b/migration/migration-stats.c
+index 095d6d75bb..8073c8ebaa 100644
+--- a/migration/migration-stats.c
++++ b/migration/migration-stats.c
+@@ -18,6 +18,20 @@
+ 
+ MigrationAtomicStats mig_stats;
+ 
++/*
++ * Same as migration_transferred_bytes below, but using the _noflush
++ * variant of qemu_file_transferred() to avoid a performance
++ * regression in migration_rate_exceeded().
++ */
++static uint64_t migration_transferred_bytes_noflush(QEMUFile *f)
++{
++uint64_t multifd = stat64_get(&mig_stats.multifd_bytes);
++uint64_t qemu_file = qemu_file_transferred_noflush(f);
++
++trace_migration_transferred_bytes(qemu_file, multifd);
++return qemu_file + multifd;
++}
++
+ bool migration_rate_exceeded(QEMUFile *f)
+ {
+ if (qemu_file_get_error(f)) {
+@@ -25,7 +39,7 @@ bool migration_rate_exceeded(QEMUFile *f)
+ }
+ 
+ uint64_t rate_limit_start = stat64_get(&mig_stats.rate_limit_start);
+-uint64_t rate_limit_current = migration_transferred_bytes(f);
++uint64_t rate_limit_current = migration_transferred_bytes_noflush(f);
+ uint64_t rate_limit_used = rate_limit_current - rate_limit_start;
+ uint64_t rate_limit_max = stat64_get(&mig_stats.rate_limit_max);
+ 
diff --git a/debian/patches/series b/debian/patches/series
index 6d681da..c27c245 100644
--- a/debian/patches/series
+++ b/debian/patches/series
@@ -4,6 +4,7 @@ 
extra/0003-ide-avoid-potential-deadlock-when-draining-during-tr.patch
 extra/0004-migration-block-dirty-bitmap-fix-loading-bitmap-when.patch
 extra/0005-hw-ide-reset-cancel-async-DMA-operation-before-reset.patch
 extra/0006-Revert-Revert-graph-lock-Disable-locking-for-now.patch
+extra/0007-migration-states-workaround-snapshot-performance-reg.patch
 bitmap-mirror/0001-drive-mirror-add-support-for-sync-bitmap-mode-never.patch
 bitmap-mirror/0002-drive-mirror-add-support-for-conditional-and-always-.patch
 bitmap-mirror/0003-mirror-add-check-for-bitmap-mode-without-bitmap.patch
-- 
2.39.2






[pve-devel] [PATCH qemu 5/7] add patch to disable graph locking

2023-09-28 Thread Fiona Ebner
There are still some issues with graph locking, e.g. deadlocks during
backup canceling [0] and initial attempts to fix it didn't work [1].
Because the AioContext locks still exist, it should still be safe to
disable graph locking.

[0]: https://lists.nongnu.org/archive/html/qemu-devel/2023-09/msg00729.html
[1]: https://lists.nongnu.org/archive/html/qemu-devel/2023-09/msg06905.html

Signed-off-by: Fiona Ebner 
---
 ...t-graph-lock-Disable-locking-for-now.patch | 152 ++
 debian/patches/series |   1 +
 2 files changed, 153 insertions(+)
 create mode 100644 
debian/patches/extra/0006-Revert-Revert-graph-lock-Disable-locking-for-now.patch

diff --git 
a/debian/patches/extra/0006-Revert-Revert-graph-lock-Disable-locking-for-now.patch
 
b/debian/patches/extra/0006-Revert-Revert-graph-lock-Disable-locking-for-now.patch
new file mode 100644
index 000..0a5baec
--- /dev/null
+++ 
b/debian/patches/extra/0006-Revert-Revert-graph-lock-Disable-locking-for-now.patch
@@ -0,0 +1,152 @@
+From  Mon Sep 17 00:00:00 2001
+From: Fiona Ebner 
+Date: Thu, 28 Sep 2023 10:07:03 +0200
+Subject: [PATCH] Revert "Revert "graph-lock: Disable locking for now""
+
+This reverts commit 3cce22defb4b0e47cf135444e30cc673cff5ebad.
+
+There are still some issues with graph locking, e.g. deadlocks during
+backup canceling [0]. Because the AioContext locks still exist, it
+should be safe to disable locking again.
+
+From the original 80fc5d2600 ("graph-lock: Disable locking for now"):
+
+> We don't currently rely on graph locking yet. It is supposed to replace
+> the AioContext lock eventually to enable multiqueue support, but as long
+> as we still have the AioContext lock, it is sufficient without the graph
+> lock. Once the AioContext lock goes away, the deadlock doesn't exist any
+> more either and this commit can be reverted. (Of course, it can also be
+> reverted while the AioContext lock still exists if the callers have been
+> fixed.)
+
+[0]: https://lists.nongnu.org/archive/html/qemu-devel/2023-09/msg00729.html
+
+Signed-off-by: Fiona Ebner 
+---
+ block/graph-lock.c | 26 ++
+ 1 file changed, 26 insertions(+)
+
+diff --git a/block/graph-lock.c b/block/graph-lock.c
+index 5e66f01ae8..3bf2591dc4 100644
+--- a/block/graph-lock.c
++++ b/block/graph-lock.c
+@@ -30,8 +30,10 @@ BdrvGraphLock graph_lock;
+ /* Protects the list of aiocontext and orphaned_reader_count */
+ static QemuMutex aio_context_list_lock;
+ 
++#if 0
+ /* Written and read with atomic operations. */
+ static int has_writer;
++#endif
+ 
+ /*
+  * A reader coroutine could move from an AioContext to another.
+@@ -88,6 +90,7 @@ void unregister_aiocontext(AioContext *ctx)
+ g_free(ctx->bdrv_graph);
+ }
+ 
++#if 0
+ static uint32_t reader_count(void)
+ {
+ BdrvGraphRWlock *brdv_graph;
+@@ -105,13 +108,21 @@ static uint32_t reader_count(void)
+ assert((int32_t)rd >= 0);
+ return rd;
+ }
++#endif
+ 
+ void bdrv_graph_wrlock(BlockDriverState *bs)
+ {
+ AioContext *ctx = NULL;
+ 
+ GLOBAL_STATE_CODE();
++/*
++ * TODO Some callers hold an AioContext lock when this is called, which
++ * causes deadlocks. Reenable once the AioContext locking is cleaned up (or
++ * AioContext locks are gone).
++ */
++#if 0
+ assert(!qatomic_read(&has_writer));
++#endif
+ 
+ /*
+  * Release only non-mainloop AioContext. The mainloop often relies on the
+@@ -126,6 +137,7 @@ void bdrv_graph_wrlock(BlockDriverState *bs)
+ }
+ }
+ 
++#if 0
+ /* Make sure that constantly arriving new I/O doesn't cause starvation */
+ bdrv_drain_all_begin_nopoll();
+ 
+@@ -154,6 +166,7 @@ void bdrv_graph_wrlock(BlockDriverState *bs)
+ } while (reader_count() >= 1);
+ 
+ bdrv_drain_all_end();
++#endif
+ 
+ if (ctx) {
+ aio_context_acquire(bdrv_get_aio_context(bs));
+@@ -163,6 +176,7 @@ void bdrv_graph_wrlock(BlockDriverState *bs)
+ void bdrv_graph_wrunlock(void)
+ {
+ GLOBAL_STATE_CODE();
++#if 0
++QEMU_LOCK_GUARD(&aio_context_list_lock);
+ assert(qatomic_read(&has_writer));
+ 
+@@ -174,10 +188,13 @@ void bdrv_graph_wrunlock(void)
+ 
+ /* Wake up all coroutine that are waiting to read the graph */
+ qemu_co_enter_all(&reader_queue, &aio_context_list_lock);
++#endif
+ }
+ 
+ void coroutine_fn bdrv_graph_co_rdlock(void)
+ {
++/* TODO Reenable when wrlock is reenabled */
++#if 0
+ BdrvGraphRWlock *bdrv_graph;
+ bdrv_graph = qemu_get_current_aio_context()->bdrv_graph;
+ 
+@@ -237,10 +254,12 @@ void coroutine_fn bdrv_graph_co_rdlock(void)
+ qemu_co_queue_wait(&reader_queue, &aio_context_list_lock);
+ }
+ }
++#endif
+ }
+ 
+ void coroutine_fn bdrv_graph_co_rdunlock(void)
+ {
++#if 0
+ BdrvGraphRWlock *bdrv_graph;
+ bdrv_graph = qemu_get_current_aio_context()->bdrv_graph;
+ 
+@@ -258,6 +277,7 @@ void coroutine_fn bdrv_graph_co_rdunlock(void)
+ if (qatomic_read(&has_writer)) {
+ aio_wait_kick();
+ 

[pve-devel] [PATCH qemu 1/7] d/rules: use disable-download option instead of git-submodules=ignore

2023-09-28 Thread Fiona Ebner
See the following QEMU commits for reference:
0c5f3dcbb2 ("configure: add --enable-pypi and --disable-pypi")
ac4ccac740 ("configure: rename --enable-pypi to --enable-download, control subprojects too")
6f3ae23b29 ("configure: remove --with-git-submodules=") removed

The last one removed the option, and the closest thing to
git-submodules=ignore is using disable-download, which will then just
verify that the submodules are present.

Building now will require either
* Running 'meson subprojects download' in the qemu submodule first.
* Using --enable-download, but then the submodules would be downloaded
  for each build (if not already downloaded in the submodule first)
  and it's just a bit too surprising if downloads happen during build.

The disable-download option will also disable automatic downloading of
missing Python modules from PyPI. Hopefully, it's enough to add them
as Debian build dependencies when required.

Signed-off-by: Fiona Ebner 
---
 debian/rules | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/debian/rules b/debian/rules
index 4d06697..51f56c5 100755
--- a/debian/rules
+++ b/debian/rules
@@ -38,7 +38,7 @@ endif
 
# guest-agent is only required for guest systems
./configure \
-   --with-git-submodules=ignore \
+   --disable-download \
--docdir=/usr/share/doc/pve-qemu-kvm \
--localstatedir=/var \
--prefix=/usr \
-- 
2.39.2






[pve-devel] [PATCH v2 proxmox 5/7] cache: add new crate 'proxmox-shared-cache'

2023-09-28 Thread Lukas Wagner
This crate contains a file-backed cache with expiration logic.
The cache should be safe to be accessed from multiple processes at
once.

The cache stores values in a directory, based on the key.
E.g. key "foo" results in a file 'foo.json' in the given base
directory. If a new value is set, the file is atomically replaced.
The JSON file also contains some metadata, namely 'added_at' and
'expire_in' - they are used for cache expiration.

Note: This cache is not suited to applications that
 - Might want to cache huge amounts of data, and/or access the cache
   very frequently (due to the overhead of JSON de/serialization)
 - Require arbitrary keys - right now, keys are limited by
   SAFE_ID_REGEX

The cache was developed for the use in pvestatd, in order to cache
e.g. storage plugin status. There, these limitations do not really
play any role.

Signed-off-by: Lukas Wagner 
---
 Cargo.toml   |   1 +
 proxmox-shared-cache/Cargo.toml  |  18 +
 proxmox-shared-cache/debian/changelog|   5 +
 proxmox-shared-cache/debian/control  |  53 ++
 proxmox-shared-cache/debian/copyright|  18 +
 proxmox-shared-cache/debian/debcargo.toml|   7 +
 proxmox-shared-cache/examples/performance.rs | 113 +
 proxmox-shared-cache/src/lib.rs  | 485 +++
 8 files changed, 700 insertions(+)
 create mode 100644 proxmox-shared-cache/Cargo.toml
 create mode 100644 proxmox-shared-cache/debian/changelog
 create mode 100644 proxmox-shared-cache/debian/control
 create mode 100644 proxmox-shared-cache/debian/copyright
 create mode 100644 proxmox-shared-cache/debian/debcargo.toml
 create mode 100644 proxmox-shared-cache/examples/performance.rs
 create mode 100644 proxmox-shared-cache/src/lib.rs

diff --git a/Cargo.toml b/Cargo.toml
index e334ac1..7b1e8e3 100644
--- a/Cargo.toml
+++ b/Cargo.toml
@@ -22,6 +22,7 @@ members = [
 "proxmox-schema",
 "proxmox-section-config",
 "proxmox-serde",
+"proxmox-shared-cache",
 "proxmox-shared-memory",
 "proxmox-sortable-macro",
 "proxmox-subscription",
diff --git a/proxmox-shared-cache/Cargo.toml b/proxmox-shared-cache/Cargo.toml
new file mode 100644
index 000..ada1a12
--- /dev/null
+++ b/proxmox-shared-cache/Cargo.toml
@@ -0,0 +1,18 @@
+[package]
+name = "proxmox-shared-cache"
+version = "0.1.0"
+authors.workspace = true
+edition.workspace = true
+license.workspace = true
+repository.workspace = true
+exclude.workspace = true
+description = "A cache that can be used from multiple processes simultaneously"
+
+[dependencies]
+anyhow.workspace = true
+proxmox-sys = { workspace = true, features = ["timer"] }
+proxmox-time.workspace = true
+proxmox-schema = { workspace = true, features = ["api-types"]}
+serde_json = { workspace = true, features = ["raw_value"] }
+serde = { workspace = true, features = ["derive"]}
+nix.workspace = true
diff --git a/proxmox-shared-cache/debian/changelog 
b/proxmox-shared-cache/debian/changelog
new file mode 100644
index 000..54d39f5
--- /dev/null
+++ b/proxmox-shared-cache/debian/changelog
@@ -0,0 +1,5 @@
+rust-proxmox-shared-cache (0.1.0-1) unstable; urgency=medium
+
+  * initial Debian package
+
+ -- Proxmox Support Team   Thu, 04 May 2023 08:40:38 +0200
diff --git a/proxmox-shared-cache/debian/control 
b/proxmox-shared-cache/debian/control
new file mode 100644
index 000..c8f0d8e
--- /dev/null
+++ b/proxmox-shared-cache/debian/control
@@ -0,0 +1,53 @@
+Source: rust-proxmox-shared-cache
+Section: rust
+Priority: optional
+Build-Depends: debhelper (>= 12),
+ dh-cargo (>= 25),
+ cargo:native ,
+ rustc:native ,
+ libstd-rust-dev ,
+ librust-anyhow-1+default-dev ,
+ librust-nix-0.26+default-dev (>= 0.26.1-~~) ,
+ librust-proxmox-schema-2+api-types-dev ,
+ librust-proxmox-schema-2+default-dev ,
+ librust-proxmox-sys-0.5+default-dev ,
+ librust-proxmox-sys-0.5+timer-dev ,
+ librust-proxmox-time-1+default-dev (>= 1.1.4-~~) ,
+ librust-serde-1+default-dev ,
+ librust-serde-1+derive-dev ,
+ librust-serde-json-1+default-dev ,
+ librust-serde-json-1+raw-value-dev 
+Maintainer: Proxmox Support Team 
+Standards-Version: 4.6.1
+Vcs-Git: https://salsa.debian.org/rust-team/debcargo-conf.git 
[src/proxmox-shared-cache]
+Vcs-Browser: 
https://salsa.debian.org/rust-team/debcargo-conf/tree/master/src/proxmox-shared-cache
+X-Cargo-Crate: proxmox-shared-cache
+Rules-Requires-Root: no
+
+Package: librust-proxmox-shared-cache-dev
+Architecture: any
+Multi-Arch: same
+Depends:
+ ${misc:Depends},
+ librust-anyhow-1+default-dev,
+ librust-nix-0.26+default-dev (>= 0.26.1-~~),
+ librust-proxmox-schema-2+api-types-dev,
+ librust-proxmox-schema-2+default-dev,
+ librust-proxmox-sys-0.5+default-dev,
+ librust-proxmox-sys-0.5+timer-dev,
+ librust-proxmox-time-1+default-dev (>= 1.1.4-~~),
+ librust-serde-1+default-dev,
+ librust-serde-1+derive-dev,
+ librust-serde-json-1+default-dev,
+ librust-serde-json-1+raw-value-dev
+Provides:
+ librust-proxmox-shared-cache+default-dev (= 

[pve-devel] [PATCH v2 proxmox-perl-rs 6/7] cache: add bindings for `SharedCache`

2023-09-28 Thread Lukas Wagner
These bindings are contained in the `SharedCacheBase` class, which is
subclassed by `SharedCache` in Perl. The subclass was needed to
implement the `get_or_update` method, since that requires calling a
closure passed as a parameter.
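
For reference, usage from the Perl side then looks roughly like this (a
sketch only; the constructor argument follows the `new()` binding below,
and the base directory and timeout are made up):

    use Proxmox::RS::SharedCache;

    my $cache = Proxmox::RS::SharedCache->new("/run/some-daemon/cache");

    # recompute only if the entry is missing or expired; concurrent updaters
    # are serialized via the lock/unlock bindings inside get_or_update
    my $value = $cache->get_or_update('expensive_key', sub {
        return { computed_at => time() };  # any JSON-serializable value
    }, 60);  # expire after 60 seconds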

Signed-off-by: Lukas Wagner 
---

Notes:
Changes v1 -> v2:
  - Added lock/unlock
  - Added get_or_update in perl subclass
  - added *_with_lock methods

 common/pkg/Makefile  |  1 +
 common/pkg/Proxmox/RS/SharedCache.pm | 46 ++
 common/src/mod.rs|  1 +
 common/src/shared_cache.rs   | 89 
 pve-rs/Cargo.toml|  1 +
 5 files changed, 138 insertions(+)
 create mode 100644 common/pkg/Proxmox/RS/SharedCache.pm
 create mode 100644 common/src/shared_cache.rs

diff --git a/common/pkg/Makefile b/common/pkg/Makefile
index 7bf669f..a99c30d 100644
--- a/common/pkg/Makefile
+++ b/common/pkg/Makefile
@@ -26,6 +26,7 @@ Proxmox/RS/CalendarEvent.pm:
  Proxmox::RS::APT::Repositories \
  Proxmox::RS::CalendarEvent \
  Proxmox::RS::Notify \
+ Proxmox::RS::SharedCacheBase \
  Proxmox::RS::Subscription
 
 all: Proxmox/RS/CalendarEvent.pm
diff --git a/common/pkg/Proxmox/RS/SharedCache.pm 
b/common/pkg/Proxmox/RS/SharedCache.pm
new file mode 100644
index 000..a35e0c5
--- /dev/null
+++ b/common/pkg/Proxmox/RS/SharedCache.pm
@@ -0,0 +1,46 @@
+package Proxmox::RS::SharedCache;
+
+use base 'Proxmox::RS::SharedCacheBase';
+
+use strict;
+use warnings;
+
+# This part has to be implemented in perl, since we calculate the new value on
+# demand from a passed closure.
+sub get_or_update {
+my ($self, $key, $value_func, $timeout) = @_;
+
+#Lookup value
+my $val = $self->get($key);
+
+if (!$val) {
+   my $lock = undef;
+   eval {
+   # If expired, lock cache entry. This makes sure that other processes
+   # cannot update it at the same time.
+   $lock = $self->lock($key, 1);
+
+   # Check again, maybe somebody else has already updated the value
+   $val = $self->get_with_lock($key, $lock);
+
+   # If still expired, update it
+   if (!$val) {
+   $val = $value_func->();
+   $self->set_with_lock($key, $val, $timeout, $lock);
+   }
+   };
+
+   my $err = $@;
+
+   # If the file has been locked, we *must* unlock it, no matter what
+   if (defined($lock)) {
+   $self->unlock($lock)
+   }
+
+   die $err if $err;
+}
+
+return $val;
+}
+
+1;
diff --git a/common/src/mod.rs b/common/src/mod.rs
index c3574f4..badfc98 100644
--- a/common/src/mod.rs
+++ b/common/src/mod.rs
@@ -2,4 +2,5 @@ pub mod apt;
 mod calendar_event;
 pub mod logger;
 pub mod notify;
+pub mod shared_cache;
 mod subscription;
diff --git a/common/src/shared_cache.rs b/common/src/shared_cache.rs
new file mode 100644
index 000..0e2b561
--- /dev/null
+++ b/common/src/shared_cache.rs
@@ -0,0 +1,89 @@
+#[perlmod::package(name = "Proxmox::RS::SharedCacheBase")]
+mod export {
+use std::os::fd::{FromRawFd, IntoRawFd, RawFd};
+
+use anyhow::Error;
+use nix::sys::stat::Mode;
+use perlmod::Value;
+use serde_json::Value as JSONValue;
+
+use proxmox_shared_cache::{SharedCache, CacheLockGuard};
+use proxmox_sys::fs::CreateOptions;
+
+pub struct CacheWrapper(SharedCache);
+
+perlmod::declare_magic!(Box :  as 
"Proxmox::RS::SharedCacheBase");
+
+#[export(raw_return)]
+fn new(#[raw] class: Value, base_dir: ) -> Result {
+// TODO: Make this configurable once we need to cache values that 
should be
+// accessible by other users.
+let options = CreateOptions::new()
+.root_only()
+.perm(Mode::from_bits_truncate(0o700));
+
+Ok(perlmod::instantiate_magic!(, MAGIC => Box::new(
+CacheWrapper (
+SharedCache::new(base_dir, options)?
+)
+)))
+}
+
+#[export]
+fn set(
+#[try_from_ref] this: ,
+key: ,
+value: JSONValue,
+expires_in: Option,
+) -> Result<(), Error> {
+this.0.set(key, , expires_in)
+}
+
+#[export]
+fn set_with_lock(
+#[try_from_ref] this: ,
+key: ,
+value: JSONValue,
+expires_in: Option,
+lock: RawFd,
+) -> Result<(), Error> {
+let lock = unsafe { CacheLockGuard::from_raw_fd(lock) };
+this.0.set_with_lock(key, , expires_in, )
+}
+
+#[export]
+fn get(#[try_from_ref] this: , key: ) -> 
Result, Error> {
+this.0.get(key)
+}
+
+#[export]
+fn get_with_lock(#[try_from_ref] this: , key: , lock: 
RawFd) -> Result, Error> {
+let lock = unsafe { CacheLockGuard::from_raw_fd(lock) };
+this.0.get_with_lock(key, )
+}
+
+#[export]
+fn delete(#[try_from_ref] this: , key: ) -> Result<(), 
Error> {
+this.0.delete(key)
+}
+
+#[export]

[pve-devel] [PATCH v2 pve-storage 7/7] stats: api: cache storage plugin status

2023-09-28 Thread Lukas Wagner
Cache storage plugin status so that pvestatd and API calls can use the
cached results, without having to query all storage plugins again.

Introduces the `ignore-cache` parameter on some storage status API
calls. By default it is 0, but when set to 1, the values from the cache
will be ignored.

Signed-off-by: Lukas Wagner 
---

Notes:
Changes v1 -> v2:
- Add `ignore-cache` parameter
- Use `get_or_update` method from Proxmox::RS::SharedCache
- Invalidate the cache in other cases (e.g. when allocating volumes,
  as this might change the amount of free space)

 src/PVE/API2/Storage/Config.pm  | 18 +++
 src/PVE/API2/Storage/Content.pm | 15 ++
 src/PVE/API2/Storage/Status.pm  | 38 ++-
 src/PVE/Storage.pm  | 84 -
 4 files changed, 131 insertions(+), 24 deletions(-)

diff --git a/src/PVE/API2/Storage/Config.pm b/src/PVE/API2/Storage/Config.pm
index e04b6ab..e0ee0d8 100755
--- a/src/PVE/API2/Storage/Config.pm
+++ b/src/PVE/API2/Storage/Config.pm
@@ -274,6 +274,12 @@ __PACKAGE__->register_method ({
die $err;
}
 
+   eval {
+   # Invalidate cached plugin status on configuration changes.
+   PVE::Storage::status_cache()->delete("storage_plugin_status");
+   };
+   warn "could not invalidate storage plugin status cache: $@\n" if $@;
+
PVE::Storage::write_config($cfg);
 
}, "create storage failed");
@@ -373,6 +379,12 @@ __PACKAGE__->register_method ({
." in Proxmox VE 9. Use 'create-base-path' or 
'create-subdirs' instead.\n"
}
 
+   eval {
+   # Invalidate cached plugin status on configuration changes.
+   PVE::Storage::status_cache()->delete("storage_plugin_status");
+   };
+   warn "could not invalidate storage plugin status cache: $@\n" if $@;
+
PVE::Storage::write_config($cfg);
 
}, "update storage failed");
@@ -422,6 +434,12 @@ __PACKAGE__->register_method ({
 
delete $cfg->{ids}->{$storeid};
 
+   eval {
+   # Invalidate cached plugin status on configuration changes.
+   PVE::Storage::status_cache()->delete("storage_plugin_status");
+   };
+   warn "could not invalidate storage plugin status cache: $@\n" if $@;
+
PVE::Storage::write_config($cfg);
 
}, "delete storage failed");
diff --git a/src/PVE/API2/Storage/Content.pm b/src/PVE/API2/Storage/Content.pm
index fe0ad4a..cc4f5cd 100644
--- a/src/PVE/API2/Storage/Content.pm
+++ b/src/PVE/API2/Storage/Content.pm
@@ -224,6 +224,11 @@ __PACKAGE__->register_method ({
   $param->{format},
   $name, $size);
 
+   eval {
+   # Invalidate cached plugin status, as allocating a volume can change the amount of free space.
+   PVE::Storage::status_cache()->delete("storage_plugin_status");
+   };
+   warn "could not invalidate storage plugin status cache: $@\n" if $@;
return $volid;
 }});
 
@@ -459,6 +464,11 @@ __PACKAGE__->register_method ({
# Remove log file #318 and notes file #3972 if they still exist
PVE::Storage::archive_auxiliaries_remove($path);
}
+   eval {
+   # Invalidate cached plugin status, as removing a volume can change the amount of free space.
+   PVE::Storage::status_cache()->delete("storage_plugin_status");
+   };
+   print "could not invalidate storage plugin status cache: $@\n" if $@;
};
 
my $id = (defined $ownervm ? "$ownervm@" : '') . $storeid;
@@ -549,6 +559,11 @@ __PACKAGE__->register_method ({
# ssh to connect to local host (which is not needed
my $sshinfo = PVE::SSHInfo::get_ssh_info($target_node);
PVE::Storage::storage_migrate($cfg, $src_volid, $sshinfo, 
$target_sid, {'target_volname' => $target_volname});
+   eval {
+   # Invalidate cached plugin status, as migrating a volume can change the amount of free space on the involved storages.
+   PVE::Storage::status_cache()->delete("storage_plugin_status");
+   };
+   print "could not invalidate storage plugin status cache: $@\n" if $@;
 
print "DEBUG: end worker $upid\n";
 
diff --git a/src/PVE/API2/Storage/Status.pm b/src/PVE/API2/Storage/Status.pm
index b2336e6..e048975 100644
--- a/src/PVE/API2/Storage/Status.pm
+++ b/src/PVE/API2/Storage/Status.pm
@@ -85,6 +85,12 @@ __PACKAGE__->register_method ({
optional => 1,
default => 0,
},
+   'ignore-cache' => {
+   description => "Ignore cached storage plugin status",
+   type => 'boolean',
+   optional => 1,
+   default => 0,
+   },
},
 },
 returns => {
@@ -158,7 +164,12 @@ __PACKAGE__->register_method ({
 
my $cfg = PVE::Storage::config();
 
-   my $info = 

[pve-devel] [PATCH v2 proxmox 3/7] sys: fs: use inline formatting for bail! macro

2023-09-28 Thread Lukas Wagner
Signed-off-by: Lukas Wagner 
---
 proxmox-sys/src/fs/mod.rs | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/proxmox-sys/src/fs/mod.rs b/proxmox-sys/src/fs/mod.rs
index 8d790a4..f54aaf6 100644
--- a/proxmox-sys/src/fs/mod.rs
+++ b/proxmox-sys/src/fs/mod.rs
@@ -71,12 +71,12 @@ impl CreateOptions {
 let mode: stat::Mode = self.perm.unwrap_or(stat::Mode::from_bits_truncate(0o644));
 
 if let Err(err) = stat::fchmod(fd, mode) {
-bail!("fchmod {:?} failed: {}", path, err);
+bail!("fchmod {path:?} failed: {err}");
 }
 
 if self.owner.is_some() || self.group.is_some() {
 if let Err(err) = fchown(fd, self.owner, self.group) {
-bail!("fchown {:?} failed: {}", path, err);
+bail!("fchown {path:?} failed: {err}");
 }
 }
 Ok(())
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH v2 proxmox 4/7] sys: add make_tmp_dir

2023-09-28 Thread Lukas Wagner
Under the hood, this function calls `mkdtemp` from libc. Unfortunately,
the nix crate does not provide bindings for this function, so we have
to call into libc directly.

Signed-off-by: Lukas Wagner 
---

Notes:
Changes from v1 -> v2:
  - Use remove_dir instead of unlink
  - Log error if cleaning up dir did not work
  - Change how the tmp dir path is passed to mkdtemp, retaining
ownership at all time.
  - Check for os_error immediately after calling mkdtemp

 proxmox-sys/src/fs/dir.rs | 72 +--
 1 file changed, 70 insertions(+), 2 deletions(-)

diff --git a/proxmox-sys/src/fs/dir.rs b/proxmox-sys/src/fs/dir.rs
index 0b409d7..2ef5e2e 100644
--- a/proxmox-sys/src/fs/dir.rs
+++ b/proxmox-sys/src/fs/dir.rs
@@ -1,6 +1,7 @@
-use std::ffi::CStr;
+use std::ffi::{CStr, OsStr};
+use std::os::unix::ffi::OsStrExt;
 use std::os::unix::io::{AsRawFd, OwnedFd};
-use std::path::Path;
+use std::path::{Path, PathBuf};
 
 use anyhow::{bail, Error};
 use nix::errno::Errno;
@@ -8,6 +9,8 @@ use nix::fcntl::OFlag;
 use nix::sys::stat;
 use nix::unistd;
 
+use proxmox_lang::try_block;
+
 use crate::fs::{fchown, CreateOptions};
 
 /// Creates directory at the provided path with specified ownership.
@@ -148,6 +151,54 @@ fn create_path_at_do(
 }
 }
 
+/// Create a temporary directory.
+///
+/// `prefix` determines where the temporary directory will be created. For instance, if
+/// `prefix` is `/tmp`, on success the function will return a path in the style of
+/// `/tmp/tmp_XXXXXX`, where X stands for a random string, ensuring that the path is unique.
+///
+/// By default, the created directory has `0o700` permissions. If this is not desired, custom
+/// [`CreateOptions`] can be passed via the `options` parameter.
+pub fn make_tmp_dir<P: AsRef<Path>>(
+prefix: P,
+options: Option<CreateOptions>,
+) -> Result<PathBuf, Error> {
+let mut template = prefix.as_ref().to_owned();
+template = template.join("tmp_XXXXXX");
+
+let mut template = template.into_os_string().as_bytes().to_owned();
+// Push NULL byte so that we have a proper NULL-terminated string
+template.push(0);
+
+let returned_buffer = unsafe {
+let raw_buffer: *mut i8 = std::mem::transmute(template.as_mut_ptr());
+libc::mkdtemp(raw_buffer)
+};
+
+// Check errno immediately, so that nothing else can overwrite it.
+let err = std::io::Error::last_os_error();
+
+if returned_buffer.is_null() {
+return Err(err.into());
+}
+let path = PathBuf::from(OsStr::from_bytes(&template[..template.len() - 1]));
+
+if let Some(options) = options {
+if let Err(err) = try_block!({
+let fd = crate::fd::open(&path, OFlag::O_DIRECTORY, stat::Mode::empty())?;
+options.apply_to(fd.as_raw_fd(), &path)?;
+Ok::<(), Error>(())
+}) {
+if let Err(err) = std::fs::remove_dir(&path) {
+log::error!("could not clean up temporary directory at {path:?}: {err}")
+}
+bail!("could not apply create options to new temporary directory: {err}");
+}
+}
+
+Ok(path)
+}
+
 #[cfg(test)]
 mod tests {
 use super::*;
@@ -165,4 +216,21 @@ mod tests {
 )
 .expect("expected create_path to work");
 }
+
+#[test]
+fn test_make_tmp_dir() -> Result<(), Error> {
+let options = CreateOptions::new()
+.owner(unistd::Uid::effective())
+.group(unistd::Gid::effective())
+.perm(stat::Mode::from_bits_truncate(0o755));
+
+let path = make_tmp_dir("/tmp", Some(options))?;
+
+assert!(path.exists());
+assert!(path.is_dir());
+
+std::fs::remove_dir_all(&path)?;
+
+Ok(())
+}
 }
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH v2 storage/proxmox{, -perl-rs} 0/7] cache storage plugin status for pvestatd/API status update calls

2023-09-28 Thread Lukas Wagner
This patch series introduces a caching mechanism for expensive status
update calls made from pvestatd.

As a first step, I introduced the cache to the arguably
most expensive call, namely `storage_info` from pve-storage. Instead
of caching the results of the `storage_info` function as a whole, we
only cache the results from status updates from individual storage
plugins.

Regarding the cache implementation: I really tried to keep it as simple
as possible. Cached values are not queried terribly often and are
rather small, so I ended up simply creating a JSON file per
cache key in a directory in tmpfs.

Changes from v1 -> v2:
  - Incorporated changes from the first review (thx @Wolfgang, thx @Max)
  - Made sure to invalidate the cache for a storage plugin if necessary:
 - When changing storage config in some way
 - When allocating volumes
 - When uploading ISOs/Images (might change the amount of free space)
  - This allows increasing the expiration time - I've now set it to 60s
  - Added a flag to storage status API/CLI that allows one to ignore cached 
values,
in case somebody needs up-to-date values at all cost (--ignore-cache)
  - Added functions that allow the user of proxmox-shared-cache to lock cache entries
(locking is accomplished using lock files and `flock`)
  - Added get_or_update helper - this one tries to get the cached value, and if
it does not exist or is expired, computes the new value from a closure/function
passed in as a parameter, all while holding the lock on the cache entry.
Locking while updating the value eliminates possible race conditions if
multiple processes want to update the value at the same time (a short usage
sketch follows below).
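
As a sketch of what the locking primitives allow from the Perl side (this is
roughly what the get_or_update helper does internally; compute_value() is a
placeholder and error handling is omitted):

    my $lock = $cache->lock($key, 1);   # 1 = exclusive lock
    my $val = $cache->get_with_lock($key, $lock);
    if (!$val) {
        $val = compute_value();
        $cache->set_with_lock($key, $val, $timeout, $lock);
    }
    $cache->unlock($lock);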

Versions of this patch series:
  - v1: https://lists.proxmox.com/pipermail/pve-devel/2023-August/058806.html



proxmox:

Lukas Wagner (5):
  sys: fs: remove unnecessary clippy allow directive
  sys: fs: let CreateOptions::apply_to take RawFd instead of File
  sys: fs: use inline formatting for bail! macro
  sys: add make_tmp_dir
  cache: add new crate 'proxmox-shared-cache'

 Cargo.toml   |   1 +
 proxmox-shared-cache/Cargo.toml  |  18 +
 proxmox-shared-cache/debian/changelog|   5 +
 proxmox-shared-cache/debian/control  |  53 ++
 proxmox-shared-cache/debian/copyright|  18 +
 proxmox-shared-cache/debian/debcargo.toml|   7 +
 proxmox-shared-cache/examples/performance.rs | 113 +
 proxmox-shared-cache/src/lib.rs  | 485 +++
 proxmox-shared-memory/src/lib.rs |   4 +-
 proxmox-sys/src/fs/dir.rs|  76 ++-
 proxmox-sys/src/fs/file.rs   |   4 +-
 proxmox-sys/src/fs/mod.rs|  15 +-
 12 files changed, 780 insertions(+), 19 deletions(-)
 create mode 100644 proxmox-shared-cache/Cargo.toml
 create mode 100644 proxmox-shared-cache/debian/changelog
 create mode 100644 proxmox-shared-cache/debian/control
 create mode 100644 proxmox-shared-cache/debian/copyright
 create mode 100644 proxmox-shared-cache/debian/debcargo.toml
 create mode 100644 proxmox-shared-cache/examples/performance.rs
 create mode 100644 proxmox-shared-cache/src/lib.rs


proxmox-perl-rs:

Lukas Wagner (1):
  cache: add bindings for `SharedCache`

 common/pkg/Makefile  |  1 +
 common/pkg/Proxmox/RS/SharedCache.pm | 46 ++
 common/src/mod.rs|  1 +
 common/src/shared_cache.rs   | 89 
 pve-rs/Cargo.toml|  1 +
 5 files changed, 138 insertions(+)
 create mode 100644 common/pkg/Proxmox/RS/SharedCache.pm
 create mode 100644 common/src/shared_cache.rs


pve-storage:

Lukas Wagner (1):
  stats: api: cache storage plugin status

 src/PVE/API2/Storage/Config.pm  | 18 +++
 src/PVE/API2/Storage/Content.pm | 15 ++
 src/PVE/API2/Storage/Status.pm  | 38 ++-
 src/PVE/Storage.pm  | 84 -
 4 files changed, 131 insertions(+), 24 deletions(-)


Summary over all repositories:
  21 files changed, 1049 insertions(+), 43 deletions(-)

-- 
murpp v0.4.0



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH v2 proxmox 2/7] sys: fs: let CreateOptions::apply_to take RawFd instead of File

2023-09-28 Thread Lukas Wagner
Suggested-by: Wolfgang Bumiller 
Signed-off-by: Lukas Wagner 
---
 proxmox-shared-memory/src/lib.rs | 4 ++--
 proxmox-sys/src/fs/file.rs   | 4 ++--
 proxmox-sys/src/fs/mod.rs| 9 -
 3 files changed, 8 insertions(+), 9 deletions(-)

diff --git a/proxmox-shared-memory/src/lib.rs b/proxmox-shared-memory/src/lib.rs
index 4809bd0..3aa04cd 100644
--- a/proxmox-shared-memory/src/lib.rs
+++ b/proxmox-shared-memory/src/lib.rs
@@ -130,8 +130,8 @@ impl SharedMemory {
 // create temporary file using O_TMPFILE
 let mut file = match nix::fcntl::open(_name, oflag | 
OFlag::O_TMPFILE, Mode::empty()) {
 Ok(fd) => {
-let mut file = unsafe { File::from_raw_fd(fd) };
-options.apply_to( file, _name)?;
+let file = unsafe { File::from_raw_fd(fd) };
+options.apply_to(fd, _name)?;
 file
 }
 Err(err) => {
diff --git a/proxmox-sys/src/fs/file.rs b/proxmox-sys/src/fs/file.rs
index ac51389..5d74f31 100644
--- a/proxmox-sys/src/fs/file.rs
+++ b/proxmox-sys/src/fs/file.rs
@@ -135,12 +135,12 @@ pub fn make_tmp_file<P: AsRef<Path>>(
 // use mkstemp here, because it works with different processes, threads, even tokio tasks
 let mut template = path.to_owned();
 template.set_extension("tmp_XXXXXX");
-let (mut file, tmp_path) = match unistd::mkstemp(&template) {
+let (file, tmp_path) = match unistd::mkstemp(&template) {
 Ok((fd, path)) => (unsafe { File::from_raw_fd(fd) }, path),
 Err(err) => bail!("mkstemp {:?} failed: {}", template, err),
 };
 
-match options.apply_to(&mut file, &tmp_path) {
+match options.apply_to(file.as_raw_fd(), &tmp_path) {
 Ok(()) => Ok((file, tmp_path)),
 Err(err) => {
 let _ = unistd::unlink(&tmp_path);
diff --git a/proxmox-sys/src/fs/mod.rs b/proxmox-sys/src/fs/mod.rs
index ae54d78..8d790a4 100644
--- a/proxmox-sys/src/fs/mod.rs
+++ b/proxmox-sys/src/fs/mod.rs
@@ -1,12 +1,11 @@
 //! File system related utilities
-use std::fs::File;
 use std::path::Path;
 
 use anyhow::{bail, Error};
 
 use nix::sys::stat;
 use nix::unistd::{Gid, Uid};
-use std::os::unix::io::{AsRawFd, RawFd};
+use std::os::unix::io::RawFd;
 
 #[cfg(feature = "acl")]
 pub mod acl;
@@ -68,15 +67,15 @@ impl CreateOptions {
 self.owner(nix::unistd::ROOT)
 }
 
-pub fn apply_to(&self, file: &mut File, path: &Path) -> Result<(), Error> {
+pub fn apply_to(&self, fd: RawFd, path: &Path) -> Result<(), Error> {
 let mode: stat::Mode = self.perm.unwrap_or(stat::Mode::from_bits_truncate(0o644));
 
-if let Err(err) = stat::fchmod(file.as_raw_fd(), mode) {
+if let Err(err) = stat::fchmod(fd, mode) {
 bail!("fchmod {:?} failed: {}", path, err);
 }
 
 if self.owner.is_some() || self.group.is_some() {
-if let Err(err) = fchown(file.as_raw_fd(), self.owner, self.group) {
+if let Err(err) = fchown(fd, self.owner, self.group) {
 bail!("fchown {:?} failed: {}", path, err);
 }
 }
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH v2 proxmox 1/7] sys: fs: remove unnecessary clippy allow directive

2023-09-28 Thread Lukas Wagner
It seems like the mentioned clippy bug has since been fixed.

Signed-off-by: Lukas Wagner 
---
 proxmox-sys/src/fs/dir.rs | 4 
 proxmox-sys/src/fs/mod.rs | 2 --
 2 files changed, 6 deletions(-)

diff --git a/proxmox-sys/src/fs/dir.rs b/proxmox-sys/src/fs/dir.rs
index 6aee316..0b409d7 100644
--- a/proxmox-sys/src/fs/dir.rs
+++ b/proxmox-sys/src/fs/dir.rs
@@ -14,8 +14,6 @@ use crate::fs::{fchown, CreateOptions};
 ///
 /// Errors if the directory already exists.
 pub fn create_dir<P: AsRef<Path>>(path: P, options: CreateOptions) -> Result<(), nix::Error> {
-// clippy bug?: from_bits_truncate is actually a const fn...
-#[allow(clippy::or_fun_call)]
 let mode: stat::Mode = options
 .perm
 .unwrap_or(stat::Mode::from_bits_truncate(0o770));
@@ -126,8 +124,6 @@ fn create_path_at_do(
 final_opts.as_ref()
 };
 
-// clippy bug?: from_bits_truncate is actually a const fn...
-#[allow(clippy::or_fun_call)]
 let mode = opts
 .and_then(|o| o.perm)
 .unwrap_or(stat::Mode::from_bits_truncate(0o755));
diff --git a/proxmox-sys/src/fs/mod.rs b/proxmox-sys/src/fs/mod.rs
index 8fb677c..ae54d78 100644
--- a/proxmox-sys/src/fs/mod.rs
+++ b/proxmox-sys/src/fs/mod.rs
@@ -69,8 +69,6 @@ impl CreateOptions {
 }
 
 pub fn apply_to(&self, file: &mut File, path: &Path) -> Result<(), Error> {
-// clippy bug?: from_bits_truncate is actually a const fn...
-#[allow(clippy::or_fun_call)]
 let mode: stat::Mode = self.perm.unwrap_or(stat::Mode::from_bits_truncate(0o644));
 
 if let Err(err) = stat::fchmod(file.as_raw_fd(), mode) {
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel



[pve-devel] [PATCH zfsonlinux] update zfs submodule to 2.1.13 and refresh patches

2023-09-28 Thread Stoiko Ivanov
Suggested-by: Thomas Lamprecht 
Signed-off-by: Stoiko Ivanov 
---
did some minimal testing (ztest for a while, containers with replication
and a migration between 2 nodes) - looked ok
The changelog also seems harmless from a quick glance.

 debian/patches/0005-Enable-zed-emails.patch| 2 +-
 debian/patches/0006-dont-symlink-zed-scripts.patch | 4 ++--
 upstream   | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/debian/patches/0005-Enable-zed-emails.patch b/debian/patches/0005-Enable-zed-emails.patch
index d87df009..ef260eba 100644
--- a/debian/patches/0005-Enable-zed-emails.patch
+++ b/debian/patches/0005-Enable-zed-emails.patch
@@ -13,7 +13,7 @@ Signed-off-by: Thomas Lamprecht 
  1 file changed, 1 insertion(+), 1 deletion(-)
 
 diff --git a/cmd/zed/zed.d/zed.rc b/cmd/zed/zed.d/zed.rc
-index 227b26c26..240d0dbfa 100644
+index 1dfd43454..0180dd827 100644
 --- a/cmd/zed/zed.d/zed.rc
 +++ b/cmd/zed/zed.d/zed.rc
 @@ -42,7 +42,7 @@ ZED_EMAIL_ADDR="root"
diff --git a/debian/patches/0006-dont-symlink-zed-scripts.patch b/debian/patches/0006-dont-symlink-zed-scripts.patch
index 33c066bd..82e761ca 100644
--- a/debian/patches/0006-dont-symlink-zed-scripts.patch
+++ b/debian/patches/0006-dont-symlink-zed-scripts.patch
@@ -17,10 +17,10 @@ Signed-off-by: Thomas Lamprecht 
  1 file changed, 1 insertion(+), 1 deletion(-)
 
 diff --git a/cmd/zed/zed.d/Makefile.am b/cmd/zed/zed.d/Makefile.am
-index 2c8173b3e..ad39292e4 100644
+index 1905a9207..6dc06252a 100644
 --- a/cmd/zed/zed.d/Makefile.am
 +++ b/cmd/zed/zed.d/Makefile.am
-@@ -49,7 +49,7 @@ install-data-hook:
+@@ -51,7 +51,7 @@ install-data-hook:
for f in $(zedconfdefaults); do \
  test -f "$(DESTDIR)$(zedconfdir)/$${f}" -o \
   -L "$(DESTDIR)$(zedconfdir)/$${f}" || \
diff --git a/upstream b/upstream
index 86783d7d..eb62221f 160000
--- a/upstream
+++ b/upstream
@@ -1 +1 @@
-Subproject commit 86783d7d92cf7a859464719a917fdff845b9a9e1
+Subproject commit eb62221ff0f9efbc2ab826ec6f1388c5f05fb664
-- 
2.39.2



___
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel