Re: [pve-devel] [PATCH manager v4 2/9] ui: guest import: add ova-needs-extracting warning text

2024-06-12 Thread Max Carrara
On Fri May 24, 2024 at 3:22 PM CEST, Dominik Csapak wrote:
> Signed-off-by: Dominik Csapak 
> ---
>  www/manager6/window/GuestImport.js | 1 +
>  1 file changed, 1 insertion(+)
>
> diff --git a/www/manager6/window/GuestImport.js 
> b/www/manager6/window/GuestImport.js
> index 4bedc211..76ba6dc8 100644
> --- a/www/manager6/window/GuestImport.js
> +++ b/www/manager6/window/GuestImport.js
> @@ -937,6 +937,7 @@ Ext.define('PVE.window.GuestImport', {
>	    gettext('EFI state cannot be imported, you may need to reconfigure the boot order (see {0})'),
>	    '<a target="_blank" href="https://pve.proxmox.com/wiki/OVMF/UEFI_Boot_Entries">OVMF/UEFI Boot Entries</a>',
>	),
> +	'ova-needs-extracting': gettext('Importing from an OVA requires extra space while extracting the contained disks into the import or selected storage.'),
> + 'ova-needs-extracting': gettext('Importing from an OVA requires 
> extra space while extracting the contained disks into the import or selected 
> storage.'),

I'm assuming the string here needs to be in one line because of
`gettext`, right? If not, I'd prefer to break it up ;)

>   };
>  let message = warningsCatalogue[w.type];
>   if (!w.type || !message) {



Re: [pve-devel] [PATCH qemu-server v4 4/4] api: create: add 'import-extraction-storage' parameter

2024-06-12 Thread Max Carrara
On Fri May 24, 2024 at 3:22 PM CEST, Dominik Csapak wrote:
> this is to override the target extraction storage for the optional disk
> extraction for 'import-from'. This way, if the storage does not
> support the content type 'images', one can give an alternative one.
>
> Signed-off-by: Dominik Csapak 
> ---
>  PVE/API2/Qemu.pm | 46 +-
>  1 file changed, 37 insertions(+), 9 deletions(-)
>
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index 8c335ac4..80ea52c5 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -128,7 +128,7 @@ my $check_drive_param = sub {
>  };
>  
>  my $check_storage_access = sub {
> -   my ($rpcenv, $authuser, $storecfg, $vmid, $settings, $default_storage) = 
> @_;
> +   my ($rpcenv, $authuser, $storecfg, $vmid, $settings, $default_storage, 
> $extraction_storage) = @_;
>  
> $foreach_volume_with_alloc->($settings, sub {
>   my ($ds, $drive) = @_;
> @@ -169,9 +169,18 @@ my $check_storage_access = sub {
>   if $vtype ne 'images' && $vtype ne 'import';
>  
>   if (PVE::QemuServer::Helpers::needs_extraction($vtype, $fmt)) {
> - raise_param_exc({ $ds => "$src_image is not on an storage 
> with 'images' content type."})
> - if !$scfg->{content}->{images};
> - $rpcenv->check($authuser, "/storage/$storeid", 
> ['Datastore.AllocateSpace']);
> + if (defined($extraction_storage)) {
> + my $extraction_scfg = 
> PVE::Storage::storage_config($storecfg, $extraction_storage);
> + raise_param_exc({ 'import-extraction-storage' => 
> "$extraction_storage does not support"
> + ." 'images' content type or is not file 
> based."})

Is there perhaps a way to display "Disk image" like in the storage
config drop-down menu? Also, this is where the error handling could be
more fine-grained so the user immediately knows what's wrong, as
mentioned in my response to your cover letter.
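
A rough sketch of what finer-grained checks could look like (the split
into two messages is hypothetical, the names are taken from the hunk
above):

    # hypothetical: one raise_param_exc per failure mode instead of one
    # combined message
    my $extraction_scfg = PVE::Storage::storage_config($storecfg, $extraction_storage);

    raise_param_exc({ 'import-extraction-storage' =>
	    "$extraction_storage does not support the 'images' content type" })
	if !$extraction_scfg->{content}->{images};

    raise_param_exc({ 'import-extraction-storage' =>
	    "$extraction_storage is not file based" })
	if !$extraction_scfg->{path};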

> + if !$extraction_scfg->{content}->{images} || 
> !$extraction_scfg->{path};

Style: The `raise_param_exc` plus `if` here should IMO be indented like
you did ...

> + $rpcenv->check($authuser, 
> "/storage/$extraction_storage", ['Datastore.AllocateSpace']);
> + } else {
> + raise_param_exc({ $ds => "$src_image is not on an 
> storage with 'images'"

s/an storage/a storage

> + ." content type and no 'import-extraction-storage' 
> was given."})
> + if !$scfg->{content}->{images};

... here.

> + $rpcenv->check($authuser, "/storage/$storeid", 
> ['Datastore.AllocateSpace']);
> + }
>   }
>   }
>  
> @@ -326,7 +335,7 @@ my $import_from_volid = sub {
>  
>  # Note: $pool is only needed when creating a VM, because pool permissions
>  # are automatically inherited if VM already exists inside a pool.
> -my sub create_disks : prototype($$) {
> +my sub create_disks : prototype($$$) {
>  my (
>   $rpcenv,
>   $authuser,
> @@ -338,6 +347,7 @@ my sub create_disks : prototype($$) {
>   $settings,
>   $default_storage,
>   $is_live_import,
> + $extraction_storage,
>  ) = @_;
>  
>  my $vollist = [];
> @@ -407,8 +417,8 @@ my sub create_disks : prototype($$) {
>   my $needs_extraction = 
> PVE::QemuServer::Helpers::needs_extraction($vtype, $fmt);
>   if ($needs_extraction) {
>   print "extracting $source\n";
> - my $extracted_volid
> -  = 
> PVE::GuestImport::extract_disk_from_import_file($source, $vmid);
> + my $extracted_volid = 
> PVE::GuestImport::extract_disk_from_import_file(
> + $source, $vmid, $extraction_storage);
>   print "finished extracting to $extracted_volid\n";
>   push @$vollist, $extracted_volid;
>   $source = $extracted_volid;
> @@ -941,6 +951,12 @@ __PACKAGE__->register_method({
>   default => 0,
>   description => "Start VM after it was created 
> successfully.",
>   },
> + 'import-extraction-storage' => 
> get_standard_option('pve-storage-id', {
> + description => "Storage for temporarily extracted images 
> 'import-from' image"
> + ." files (default: import source storage)",
> + optional => 1,
> + completion => \&PVE::QemuServer::complete_storage,
> + }),
>   },
>   1, # with_disk_alloc
>   ),
> @@ -967,6 +983,7 @@ __PACKAGE__->register_method({
>   my $storage = extract_param($param, 'storage');
>   my $unique = extract_param($param, 'unique');
>   my $live_restore = extract_param($param, 

Re: [pve-devel] [PATCH qemu-server v4 3/4] api: create: implement extracting disks when needed for import-from

2024-06-12 Thread Max Carrara
On Fri May 24, 2024 at 3:21 PM CEST, Dominik Csapak wrote:
> when 'import-from' contains a disk image that needs extraction
> (currently only from an 'ova' archive), do that in 'create_disks'
> and overwrite the '$source' volid.
>
> Collect the names into a 'delete_sources' list, which we use later
> to clean them up again (either when we're finished with importing or in an
> error case).
>
> Signed-off-by: Dominik Csapak 
> ---
>  PVE/API2/Qemu.pm  | 55 +++
>  PVE/QemuServer.pm | 12 +
>  PVE/QemuServer/Helpers.pm |  5 
>  3 files changed, 61 insertions(+), 11 deletions(-)
>
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index 2a1d4d79..8c335ac4 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -24,6 +24,7 @@ use PVE::JSONSchema qw(get_standard_option);
>  use PVE::RESTHandler;
>  use PVE::ReplicationConfig;
>  use PVE::GuestHelpers qw(assert_tag_permissions);
> +use PVE::GuestImport;
>  use PVE::QemuConfig;
>  use PVE::QemuServer;
>  use PVE::QemuServer::Cloudinit;
> @@ -159,10 +160,19 @@ my $check_storage_access = sub {
>  
>   if (my $src_image = $drive->{'import-from'}) {
>   my $src_vmid;
> - if (PVE::Storage::parse_volume_id($src_image, 1)) { # PVE-managed 
> volume
> - (my $vtype, undef, $src_vmid) = 
> PVE::Storage::parse_volname($storecfg, $src_image);
> - raise_param_exc({ $ds => "$src_image has wrong type '$vtype' - 
> not an image" })
> - if $vtype ne 'images';
> + if (my ($storeid, $volname) = 
> PVE::Storage::parse_volume_id($src_image, 1)) { # PVE-managed volume
> + my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
> + my $plugin = PVE::Storage::Plugin->lookup($scfg->{type});
> + (my $vtype, undef, $src_vmid, undef, undef, undef, my $fmt) = 
> $plugin->parse_volname($volname);
> +
> + raise_param_exc({ $ds => "$src_image has wrong type '$vtype' - 
> needs to be 'images' or 'import'" })
> + if $vtype ne 'images' && $vtype ne 'import';

... Shouldn't this here be `||` instead of `&&`? According to the error
message at least.
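
For reference, a quick sketch of how the two spellings relate via De
Morgan's law; both variants raise only when $vtype is neither of the
allowed values:

    # as written in the hunk above
    raise_param_exc({ $ds => "$src_image has wrong type '$vtype'" })
        if $vtype ne 'images' && $vtype ne 'import';

    # equivalent spelling using || on the negated condition
    raise_param_exc({ $ds => "$src_image has wrong type '$vtype'" })
        unless $vtype eq 'images' || $vtype eq 'import';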

> +
> + if (PVE::QemuServer::Helpers::needs_extraction($vtype, $fmt)) {
> + raise_param_exc({ $ds => "$src_image is not on an storage 
> with 'images' content type."})

s/an storage/a storage

Also, as I mentioned in another comment, is it maybe possible to display
the same name that's used in the UI instead of 'images' or 'import'?

> + if !$scfg->{content}->{images};
> + $rpcenv->check($authuser, "/storage/$storeid", 
> ['Datastore.AllocateSpace']);
> + }
>   }
>  
>   if ($src_vmid) { # might be actively used by VM and will be copied 
> via clone_disk()
> @@ -392,16 +402,34 @@ my sub create_disks : prototype($$) {
>   $needs_creation = $live_import;
>  
>   if (PVE::Storage::parse_volume_id($source, 1)) { # PVE-managed 
> volume
> + my ($vtype, undef, undef, undef, undef, undef, $fmt)
> + = PVE::Storage::parse_volname($storecfg, $source);
> + my $needs_extraction = 
> PVE::QemuServer::Helpers::needs_extraction($vtype, $fmt);
> + if ($needs_extraction) {
> + print "extracting $source\n";
> + my $extracted_volid
> +  = 
> PVE::GuestImport::extract_disk_from_import_file($source, $vmid);
> + print "finished extracting to $extracted_volid\n";
> + push @$vollist, $extracted_volid;
> + $source = $extracted_volid;
> +
> + my (undef, undef, undef, $parent)
> + = PVE::Storage::volume_size_info($storecfg, 
> $source);
> + die "importing from extracted images with backing file 
> ($parent) not allowed\n"
> + if $parent;
> + }
> +
>   if ($live_import && $ds ne 'efidisk0') {
>   my $path = PVE::Storage::path($storecfg, $source)
>   or die "failed to get a path for '$source'\n";
> - $source = $path;
> - ($size, my $source_format) = 
> PVE::Storage::file_size_info($source);
> - die "could not get file size of $source\n" if !$size;
> + ($size, my $source_format) = 
> PVE::Storage::file_size_info($path);
> + die "could not get file size of $path\n" if !$size;
>   $live_import_mapping->{$ds} = {
> - path => $source,
> + path => $path,
>   format => $source_format,
>   };
> + $live_import_mapping->{$ds}->{'delete-after-finish'} = 
> $source
> + 

Re: [pve-devel] [PATCH storage v4 10/12] plugin: enable import for nfs/btrfs/cifs/cephfs/glusterfs

2024-06-12 Thread Max Carrara
On Fri May 24, 2024 at 3:21 PM CEST, Dominik Csapak wrote:
> and reuse the DirPlugin implementation
>
> Signed-off-by: Dominik Csapak 
> ---
>  src/PVE/Storage/BTRFSPlugin.pm | 5 +
>  src/PVE/Storage/CIFSPlugin.pm  | 6 +-
>  src/PVE/Storage/CephFSPlugin.pm| 6 +-
>  src/PVE/Storage/GlusterfsPlugin.pm | 6 +-
>  src/PVE/Storage/NFSPlugin.pm   | 6 +-
>  5 files changed, 25 insertions(+), 4 deletions(-)
>
> diff --git a/src/PVE/Storage/BTRFSPlugin.pm b/src/PVE/Storage/BTRFSPlugin.pm
> index 42815cb..b7e3f82 100644
> --- a/src/PVE/Storage/BTRFSPlugin.pm
> +++ b/src/PVE/Storage/BTRFSPlugin.pm
> @@ -40,6 +40,7 @@ sub plugindata {
>   backup => 1,
>   snippets => 1,
>   none => 1,
> + import => 1,
>   },
>   { images => 1, rootdir => 1 },
>   ],
> @@ -930,4 +931,8 @@ sub volume_import {
>  return "$storeid:$volname";
>  }
>  
> +sub get_import_metadata {
> +return PVE::Storage::DirPlugin::get_import_metadata(@_);

Not _really_ a fan of one plugin having to rely on another one, as
that introduces (another) interdependency (-> higher coupling). But I
would leave this as it is, because I'm working on a series that aims to
resolve stuff like that (and it's IMO better to just tackle this all in
one series at once anyway). So this is IMO fine and gets an OK from me.
Just wanted to note.
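
Just to illustrate the direction I mean (purely hypothetical, nothing
for this series): a single default in the common base class that all
path-based plugins inherit, instead of five identical delegations:

    package PVE::Storage::Plugin;

    # hypothetical: shared fallback, inherited by the file-based plugins;
    # as a transitional shim it could still call into DirPlugin once here
    sub get_import_metadata {
        my ($class, $scfg) = @_; # parameter list abbreviated, illustrative only

        die "storage type '$scfg->{type}' cannot provide import metadata\n"
            if !$scfg->{path};

        return PVE::Storage::DirPlugin::get_import_metadata(@_);
    }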

> +}
> +
>  1
> diff --git a/src/PVE/Storage/CIFSPlugin.pm b/src/PVE/Storage/CIFSPlugin.pm
> index 2184471..475065a 100644
> --- a/src/PVE/Storage/CIFSPlugin.pm
> +++ b/src/PVE/Storage/CIFSPlugin.pm
> @@ -99,7 +99,7 @@ sub type {
>  sub plugindata {
>  return {
>   content => [ { images => 1, rootdir => 1, vztmpl => 1, iso => 1,
> -backup => 1, snippets => 1}, { images => 1 }],
> +backup => 1, snippets => 1, import => 1}, { images => 1 }],
>   format => [ { raw => 1, qcow2 => 1, vmdk => 1 } , 'raw' ],
>  };
>  }
> @@ -314,4 +314,8 @@ sub update_volume_attribute {
>  return PVE::Storage::DirPlugin::update_volume_attribute(@_);
>  }
>  
> +sub get_import_metadata {
> +return PVE::Storage::DirPlugin::get_import_metadata(@_);
> +}
> +
>  1;
> diff --git a/src/PVE/Storage/CephFSPlugin.pm b/src/PVE/Storage/CephFSPlugin.pm
> index 8aad518..36c64ea 100644
> --- a/src/PVE/Storage/CephFSPlugin.pm
> +++ b/src/PVE/Storage/CephFSPlugin.pm
> @@ -116,7 +116,7 @@ sub type {
>  
>  sub plugindata {
>  return {
> - content => [ { vztmpl => 1, iso => 1, backup => 1, snippets => 1},
> + content => [ { vztmpl => 1, iso => 1, backup => 1, snippets => 1, 
> import => 1 },
>{ backup => 1 }],
>  };
>  }
> @@ -261,4 +261,8 @@ sub update_volume_attribute {
>  return PVE::Storage::DirPlugin::update_volume_attribute(@_);
>  }
>  
> +sub get_import_metadata {
> +return PVE::Storage::DirPlugin::get_import_metadata(@_);
> +}
> +
>  1;
> diff --git a/src/PVE/Storage/GlusterfsPlugin.pm 
> b/src/PVE/Storage/GlusterfsPlugin.pm
> index 2b7f9e1..9d17180 100644
> --- a/src/PVE/Storage/GlusterfsPlugin.pm
> +++ b/src/PVE/Storage/GlusterfsPlugin.pm
> @@ -97,7 +97,7 @@ sub type {
>  
>  sub plugindata {
>  return {
> - content => [ { images => 1, vztmpl => 1, iso => 1, backup => 1, 
> snippets => 1},
> + content => [ { images => 1, vztmpl => 1, iso => 1, backup => 1, 
> snippets => 1, import => 1},
>{ images => 1 }],
>   format => [ { raw => 1, qcow2 => 1, vmdk => 1 } , 'raw' ],
>  };
> @@ -352,4 +352,8 @@ sub check_connection {
>  return defined($server) ? 1 : 0;
>  }
>  
> +sub get_import_metadata {
> +return PVE::Storage::DirPlugin::get_import_metadata(@_);
> +}
> +
>  1;
> diff --git a/src/PVE/Storage/NFSPlugin.pm b/src/PVE/Storage/NFSPlugin.pm
> index f2e4c0d..72e9c6d 100644
> --- a/src/PVE/Storage/NFSPlugin.pm
> +++ b/src/PVE/Storage/NFSPlugin.pm
> @@ -53,7 +53,7 @@ sub type {
>  
>  sub plugindata {
>  return {
> - content => [ { images => 1, rootdir => 1, vztmpl => 1, iso => 1, backup 
> => 1, snippets => 1 },
> + content => [ { images => 1, rootdir => 1, vztmpl => 1, iso => 1, backup 
> => 1, snippets => 1, import => 1 },

This hash exceeds our line length limit, but I assume that's because it's
easier to read in diffs ... Can IMO be also left like that and changed
later or be adapted in a follow-up patch if so desired.

>{ images => 1 }],
>   format => [ { raw => 1, qcow2 => 1, vmdk => 1 } , 'raw' ],
>  };
> @@ -223,4 +223,8 @@ sub update_volume_attribute {
>  return PVE::Storage::DirPlugin::update_volume_attribute(@_);
>  }
>  
> +sub get_import_metadata {
> +return PVE::Storage::DirPlugin::get_import_metadata(@_);
> +}
> +
>  1;



Re: [pve-devel] [PATCH storage v4 03/12] plugin: dir: handle ova files for import

2024-06-12 Thread Max Carrara
On Fri May 24, 2024 at 3:21 PM CEST, Dominik Csapak wrote:
> since we want to handle ova files (which are only ovf+images bundled in
> a tar file) for import, add code that handles that.
>
> we introduce a valid volname for files contained in ovas like this:
>
>  storage:import/archive.ova/disk-1.vmdk
>
> by basically treating the last part of the path as the name for the
> contained disk we want.
>
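
A tiny sketch of that volname scheme, for readers following along (the
regex is simplified compared to the patch below):

    # split "import/archive.ova/disk-1.vmdk" into archive and inner disk
    my $volname = 'import/archive.ova/disk-1.vmdk';
    if ($volname =~ m!^import/(.+\.ova)/([^/]+)$!) {
        my ($archive, $inner_disk) = ($1, $2);
        # $archive = 'archive.ova', $inner_disk = 'disk-1.vmdk'
    }
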
> in that case we return 'import' as type with 'vmdk/qcow2/raw' as format
> (we cannot use something like 'ova+vmdk' without extending the 'format'
> parsing for all storages/formats. This is because it runs
> through a verify-format check at least once)
>
> we then provide 3 functions to use for that:
>
> * copy_needs_extraction: determines from the given volid (like above) if
>   that needs extraction to copy it, currently only 'import' vtype + a
>   volid with the above format returns true
>
> * extract_disk_from_import_file: this actually extracts the file from
>   the archive. Currently only ova is supported, so the extraction with
>   'tar' is hardcoded, but again we can easily extend/modify that should
>   we need to.
>
>   we currently extract into either the import storage or a given
>   target storage, in the images directory, so if the cleanup does not
>   happen, the user can still see and interact with the image via
>   api/cli/gui
>
> * cleanup_extracted_image: intended to cleanup the extracted images from
>   above
>
> we have to modify the `parse_ovf` a bit to handle the missing disk
> images, and we parse the size out of the ovf part (since this is
> informational only, it should be no problem if we cannot parse it sometimes)
>
> Signed-off-by: Dominik Csapak 
> ---
>  src/PVE/API2/Storage/Status.pm |  1 +
>  src/PVE/GuestImport.pm | 77 ++
>  src/PVE/GuestImport/OVF.pm | 52 +---
>  src/PVE/Makefile   |  1 +
>  src/PVE/Storage.pm |  4 +-
>  src/PVE/Storage/DirPlugin.pm   | 15 +-
>  src/PVE/Storage/Plugin.pm  |  4 ++
>  src/test/parse_volname_test.pm | 20 
>  src/test/path_to_volume_id_test.pm |  8 
>  9 files changed, 173 insertions(+), 9 deletions(-)
>  create mode 100644 src/PVE/GuestImport.pm
>
> diff --git a/src/PVE/API2/Storage/Status.pm b/src/PVE/API2/Storage/Status.pm
> index dc6cc69..acde730 100644
> --- a/src/PVE/API2/Storage/Status.pm
> +++ b/src/PVE/API2/Storage/Status.pm
> @@ -749,6 +749,7 @@ __PACKAGE__->register_method({
>   'efi-state-lost',
>   'guest-is-running',
>   'nvme-unsupported',
> + 'ova-needs-extracting',
>   'ovmf-with-lsi-unsupported',
>   'serial-port-socket-only',
>   ],
> diff --git a/src/PVE/GuestImport.pm b/src/PVE/GuestImport.pm
> new file mode 100644
> index 000..988d1f6
> --- /dev/null
> +++ b/src/PVE/GuestImport.pm
> @@ -0,0 +1,77 @@
> +package PVE::GuestImport;
> +
> +use strict;
> +use warnings;
> +
> +use File::Path;
> +
> +use PVE::Storage;
> +use PVE::Tools qw(run_command);
> +
> +sub extract_disk_from_import_file {
> +my ($volid, $vmid, $target_storeid) = @_;
> +
> +my ($source_storeid, $volname) = PVE::Storage::parse_volume_id($volid);
> +$target_storeid //= $source_storeid;
> +my $cfg = PVE::Storage::config();
> +
> +my ($vtype, $name, undef, undef, undef, undef, $fmt) =
> + PVE::Storage::parse_volname($cfg, $volid);
> +
> +die "only files with content type 'import' can be extracted\n"
> + if $vtype ne 'import' || $fmt !~ m/^ova\+/;
> +
> +# extract the inner file from the name
> +my $archive_volid;
> +my $inner_file;
> +my $inner_fmt;
> +if ($name =~ m!^(.*\.ova)/(${PVE::Storage::SAFE_CHAR_CLASS_RE}+)$!) {
> + $archive_volid = "$source_storeid:import/$1";
> + $inner_file = $2;
> + ($inner_fmt) = $fmt =~ /^ova\+(.*)$/;
> +} else {
> + die "cannot extract $volid - invalid volname $volname\n";
> +}
> +
> +my $ova_path = PVE::Storage::path($cfg, $archive_volid);
> +
> +my $tmpdir = PVE::Storage::get_image_dir($cfg, $target_storeid, $vmid);
> +my $pid = $$;
> +$tmpdir .= "/tmp_${pid}_${vmid}";
> +mkpath $tmpdir;
> +
> +($ova_path) = $ova_path =~ m|^(.*)$|; # untaint
> +
> +my $source_path = "$tmpdir/$inner_file";
> +my $target_path;
> +my $target_volid;
> +eval {
> + run_command(['tar', '-x', '--force-local', '-C', $tmpdir, '-f', 
> $ova_path, $inner_file]);
> +
> + # check for symlinks and other non regular files
> + if (-l $source_path || ! -f $source_path) {
> + die "only regular files are allowed\n";
> + }
> +
> + # TODO check for base images in file
> +
> + # create temporary 1M image that will get overwritten by the rename
> + # to reserve the filename and take care of 

Re: [pve-devel] [PATCH storage/qemu-server/manager v4] implement ova/ovf import for file based storages

2024-06-12 Thread Max Carrara
On Fri May 24, 2024 at 3:21 PM CEST, Dominik Csapak wrote:
> This series enables importing ova/ovf from directory based storages,
> inclusive upload/download via the webui (ova only).
>
> It also improves the ovf importer by parsing the ostype, nics, bootorder
> (and firmware from vmware exported files).

Really great work! Built the packages and ran some tests. There are also
a couple comments, so see below.

Building
---

* Patches applied cleanly
* Had to build the `qemu-server` package on a development VM with the
  patched `pve-storage` built and installed there first, because the new
  `PVE::GuestImport` module couldn't be found otherwise (obviously).
  Worked fine though.

Testing
---

* Installed all packages on a development VM, encountered no issues

* Checked if new `import` content type shows up only for directory-esque 
  storage types
  --> sure does

* Added `import` as content type to my "local" storage
* Checked if "local" storage has new "import" tab
  --> it does

* Exported a Debian VM from ESXi and tarballed the .ovf, .mf and .vmdk
  to an .ova file
* Uploaded the file to the storage
  --> succeeded

* Tried to import the VM onto my local lvm thin-pool storage
  --> fails, because my "local" storage doesn't have the "Disk Image"
  content type

I hadn't expected this, to be honest, because I'm not trying to import it
to "local" here?

* Created a new storage with type "directory" and set its content types
  to "import" and "disk image"
* Tried to import the VM again, but this time selected my LVM thin pool
  as "Extraction Storage" in the dialogue window
  --> also fails as expected, because an LVM thin pool is not file-based

A note on the above test: I'd prefer if the error message here were more
fine-grained - the popup said:
> import-extraction-storage: local-lvm does not support 'images' content
> type or is not file based.

IMO it would probably be best to not display these storages at all if
they're not supported as an extraction target, but I'm not sure if that
can actually be done (conveniently) atm. Not a blocker or anything
though.

FYI, even though my development VM has multiple storage types, only the
following are shown in that drop-down list:
  - zfs
  - lvm-thin
  - directory (the new one, but not "local")

The only storages with the new "import" content type are the local one
and the new one I made for testing above, so I'm not sure why the zfs
pool and lvm thin pool are showing up there.

* Attempted another import, this time just keeping the "Extraction
  Storage" as is
  --> Worked quite fast; the VM was online really quickly because I had
  live import enabled, and it worked without any issues

* Attempted another import, but this time from the "local" storage again
  as I did earlier, but I set the "Extraction Storage" to the new
  directory storage
  --> This time importing from "local" worked, somewhat surprisingly

* On the new directory storage, set the content type to "import" only
  --> The icon changed to the little cloud with the arrow, cute

* Attempted yet another import, this time from "local" again and setting
  the new directory storage as "Extraction Storage"
  --> fails unexpectedly with:
  > scsi0: test-dir-storage:import/debian-esxi.ova/debian-esxi-0.vmdk is
  > not on an storage with 'images' content type and no
  > 'import-extraction-storage' was given."

So, it's now pretty clear to me that the 'images' content type plays a
role in terms of whether something can actually be extracted or not
here. Is there perhaps any way the extraction can be "decoupled" from
the content type or be done in a different manner? If this is currently
a limitation (e.g. because of the storage API) then I'd say it's an
*okay* restriction to have at the moment, but it still feels a little
unintuitive from a user's perspective.
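
Just to illustrate what I mean (purely hypothetical): the extraction
check could key off whether the storage is path-based at all instead of
off its configured content types, e.g.:

    # hypothetical: decide extractability from the plugin's path
    # capability rather than from the 'images' content type
    my $scfg = PVE::Storage::storage_config($storecfg, $storeid);
    die "storage '$storeid' is not file based, cannot extract an OVA here\n"
        if !$scfg->{path};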

Code Review
---

The patches are rather easy to follow and logically structured, so
that's very nice. There are more comments inline.

The only other thing I'm not sure about is whether we'd actually want
the `PVE::GuestImport` module (or namespace) -- is there anything we'd
like to import in the future? Maybe something like `PVE::Import::Guest`
would be better, even if there's nothing in `PVE::Import` for the time
being. Just so that we don't have to touch the module structure again
soon.

Conclusion
--

Very solid work, this is pretty great! This will be a lovely feature for
our users. Apart from the couple of little things I encountered, there's
nothing to complain about.

To sum all of the above up, the only suggestions I have are as follows:
* I think the error handling regarding the "Extraction Storage" should
  be more fine-grained
* The `images` content type shouldn't determine whether an .ova file is
  able to be extracted or not; I think there should be some other
  mechanism for that
* Maybe use `PVE::Import::Guest` instead of `PVE::GuestImport`

>
> I opted to move the OVF.pm to pve-storage, since there is no
> real 

Re: [pve-devel] [PATCH storage] add API method to move a volume between storages

2024-06-12 Thread Filip Schauer

I forgot to mention that this fixes #5191

[pve-devel] [PATCH storage] add API method to move a volume between storages

2024-06-12 Thread Filip Schauer
Add the ability to move a backup, ISO, container template or snippet
between storages of a node via an API method. Moving a VMA backup to a
Proxmox Backup Server requires the proxmox-vma-to-pbs package to be
installed. Currently only VMA backups can be moved to a Proxmox Backup
Server and moving backups from a Proxmox Backup Server is not yet
supported.

The method can be called from the PVE shell with `pvesm move`:

# pvesm move  
pvesm move local:backup/vzdump-lxc-102-2024_05_29-17_05_27.tar.zst pbs

Or use curl to call the API method:

curl 
https://$APINODE:8006/api2/json/nodes/$TARGETNODE/storage/$TARGETSTORAGE/move \
--insecure --cookie "$(
---
This patch depends on
[PATCH backup-qemu/vma-to-pbs 0/2] add support for notes and logs
https://lists.proxmox.com/pipermail/pbs-devel/2024-May/009445.html

 src/PVE/API2/Storage/Makefile  |   2 +-
 src/PVE/API2/Storage/MoveVolume.pm |  61 +
 src/PVE/API2/Storage/Status.pm |   7 ++
 src/PVE/CLI/pvesm.pm   |  33 +
 src/PVE/Storage.pm | 195 +
 5 files changed, 271 insertions(+), 27 deletions(-)
 create mode 100644 src/PVE/API2/Storage/MoveVolume.pm

diff --git a/src/PVE/API2/Storage/Makefile b/src/PVE/API2/Storage/Makefile
index 1705080..11f3c95 100644
--- a/src/PVE/API2/Storage/Makefile
+++ b/src/PVE/API2/Storage/Makefile
@@ -1,5 +1,5 @@
 
-SOURCES= Content.pm Status.pm Config.pm PruneBackups.pm Scan.pm FileRestore.pm
+SOURCES= Content.pm Status.pm Config.pm PruneBackups.pm Scan.pm FileRestore.pm 
MoveVolume.pm
 
 .PHONY: install
 install:
diff --git a/src/PVE/API2/Storage/MoveVolume.pm 
b/src/PVE/API2/Storage/MoveVolume.pm
new file mode 100644
index 000..52447a4
--- /dev/null
+++ b/src/PVE/API2/Storage/MoveVolume.pm
@@ -0,0 +1,61 @@
+package PVE::API2::Storage::MoveVolume;
+
+use strict;
+use warnings;
+
+use PVE::JSONSchema qw(get_standard_option);
+use PVE::RESTHandler;
+use PVE::RPCEnvironment;
+use PVE::Storage;
+
+use base qw(PVE::RESTHandler);
+
+__PACKAGE__->register_method ({
+name => 'move',
+path => '',
+method => 'POST',
+description => "Move a volume from one storage to another",
+permissions => {
+   description => "You need the 'Datastore.Allocate' privilege on the 
storages.",
+   user => 'all',
+},
+protected => 1,
+proxyto => 'node',
+parameters => {
+   additionalProperties => 0,
+   properties => {
+   node => get_standard_option('pve-node'),
+   'source-volume' => {
+   description => "Source volume",
+   type => 'string',
+   },
+   'storage' => get_standard_option('pve-storage-id', {
+   completion => \&PVE::Storage::complete_storage_enabled,
+   description => 'Target storage',
+   }),
+   },
+},
+returns => { type => 'string' },
+code => sub {
+   my ($param) = @_;
+
+   my $cfg = PVE::Storage::config();
+   my $source_volume = $param->{'source-volume'};
+   my $target_storeid = $param->{'storage'};
+
+   my ($source_storeid, undef) = 
PVE::Storage::parse_volume_id($source_volume, 0);
+
+   my $rpcenv = PVE::RPCEnvironment::get();
+   my $authuser = $rpcenv->get_user();
+
+   $rpcenv->check($authuser, "/storage/$source_storeid", 
["Datastore.Allocate"]);
+   $rpcenv->check($authuser, "/storage/$target_storeid", 
["Datastore.Allocate"]);
+
+   my $worker = sub {
+   PVE::Storage::volume_move($cfg, $source_volume, $target_storeid);
+   };
+
+   return $rpcenv->fork_worker('move_volume', '', $authuser, $worker);
+}});
+
+1;
diff --git a/src/PVE/API2/Storage/Status.pm b/src/PVE/API2/Storage/Status.pm
index dc6cc69..6c816b7 100644
--- a/src/PVE/API2/Storage/Status.pm
+++ b/src/PVE/API2/Storage/Status.pm
@@ -18,6 +18,7 @@ use PVE::Tools qw(run_command);
 
 use PVE::API2::Storage::Content;
 use PVE::API2::Storage::FileRestore;
+use PVE::API2::Storage::MoveVolume;
 use PVE::API2::Storage::PruneBackups;
 use PVE::Storage;
 
@@ -28,6 +29,11 @@ __PACKAGE__->register_method ({
 path => '{storage}/prunebackups',
 });
 
+__PACKAGE__->register_method ({
+subclass => "PVE::API2::Storage::MoveVolume",
+path => '{storage}/move',
+});
+
 __PACKAGE__->register_method ({
 subclass => "PVE::API2::Storage::Content",
 # set fragment delimiter (no subdirs) - we need that, because volume
@@ -233,6 +239,7 @@ __PACKAGE__->register_method ({
{ subdir => 'rrddata' },
{ subdir => 'status' },
{ subdir => 'upload' },
+   { subdir => 'move' },
];
 
return $res;
diff --git a/src/PVE/CLI/pvesm.pm b/src/PVE/CLI/pvesm.pm
index 9b9676b..4c042aa 100755
--- a/src/PVE/CLI/pvesm.pm
+++ b/src/PVE/CLI/pvesm.pm
@@ -20,6 +20,7 @@ use PVE::Storage;
 use PVE::Tools qw(extract_param);
 use PVE::API2::Storage::Config;
 use PVE::API2::Storage::Content;
+use PVE::API2::Storage::MoveVolume;
 use 

[pve-devel] applied: [PATCH manager] ui: fix align mode of two column container

2024-06-12 Thread Fiona Ebner
On 24.04.24 at 13:39, Dominik Csapak wrote:
> 'stretch' is most often the wrong value, as that will stretch
> everything to the height of the whole container, including fields.
> That is not desirable, since fields do not look good when stretched this
> way (e.g. the controls are not correctly aligned).
> 
> To fix it, simply set it to 'begin'.
> 
> Signed-off-by: Dominik Csapak 

applied, thanks!


Re: [pve-devel] [PATCH manager] ui: fix pbs storage edit reset behavior

2024-06-12 Thread Fiona Ebner
On 24.04.24 at 13:03, Dominik Csapak wrote:
> two similar things to fix here:
> * the 'crypt-allow-edit' field was not submitted, but its value was
>   only ever set with a bind, so a reset always set it to its
>   default 'false' value (disabling the radio buttons, even when
>   it was not visible)
> 
> * the initial value of the 'keep' variant of the radiofield was decided
>   only from 'isCreate' (via the 'checked' cbind), but should have been
>   decided by whether there was an encryption key or not.
> 
> both are fixed by setting the values in the 'setValue' method
> explicitly
> 
> Signed-off-by: Dominik Csapak 

Reviewed-by: Fiona Ebner 
Tested-by: Fiona Ebner 


Re: [pve-devel] [PATCH manager 4/4] ui: backup job: simplify translatable string

2024-06-12 Thread Fiona Ebner
On 23.04.24 at 10:27, Maximiliano Sandoval wrote:
> Signed-off-by: Maximiliano Sandoval 
> ---
>  www/manager6/panel/BackupAdvancedOptions.js | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/www/manager6/panel/BackupAdvancedOptions.js 
> b/www/manager6/panel/BackupAdvancedOptions.js
> index b9e5304e..4b31f403 100644
> --- a/www/manager6/panel/BackupAdvancedOptions.js
> +++ b/www/manager6/panel/BackupAdvancedOptions.js
> @@ -231,7 +231,7 @@ Ext.define('PVE.panel.BackupAdvancedOptions', {
>   xtype: 'component',
>   padding: '5 1',
>   html: `${gettext('Note')}: ${
> - gettext("The node-specific 'vzdump.conf' or, if this is not 
> set, the default from the config schema is used to determine fallback 
> values.")}`,
> + gettext("The node-specific 'vzdump.conf'. If not set, the 
> defaults from the config schema are used to determine fallback values.")}`,
>   },
>  ],
>  });

"The node-specific 'vzdump.conf'." is not a proper sentence.


Re: [pve-devel] [PATCH manager 1/4] ui: homogenize uses of Zstd and SCSI

2024-06-12 Thread Fiona Ebner
On 23.04.24 at 10:27, Maximiliano Sandoval wrote:
> diff --git a/www/manager6/window/GuestImport.js 
> b/www/manager6/window/GuestImport.js
> index 944d275b..69e7c9bb 100644
> --- a/www/manager6/window/GuestImport.js
> +++ b/www/manager6/window/GuestImport.js
> @@ -924,6 +924,7 @@ Ext.define('PVE.window.GuestImport', {
>   'cdrom-image-ignored': gettext("CD-ROM images cannot get 
> imported, if required you can reconfigure the '{0}' drive in the 'Advanced' 
> tab."),
>   'nvme-unsupported': gettext("NVMe disks are currently not 
> supported, '{0}' will get attaced as SCSI"),
>   'ovmf-with-lsi-unsupported': gettext("OVMF is built without LSI 
> drivers, scsi hardware was set to '{1}'"),
> + 'ovmf-with-lsi-unsupported': gettext("OVMF is built without LSI 
> drivers, SCSI hardware was set to '{1}'"),

This adds a duplicate (which is fixed by the next patch).

>   'serial-port-socket-only': gettext("Serial socket '{0}' will be 
> mapped to a socket"),
>   'guest-is-running': gettext('Virtual guest seems to be running 
> on source host. Import might fail or have inconsistent state!'),
>   'efi-state-lost': Ext.String.format(


[pve-devel] [PATCH qemu-server 2/2] api: fix typo reported by perlcritic

2024-06-12 Thread Fiona Ebner
Signed-off-by: Fiona Ebner 
---
 PVE/API2/Qemu.pm | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index cf53e820..86527bd5 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -1572,7 +1572,7 @@ __PACKAGE__->register_method({
$item->{value} = $pending;
$item->{pending} = $conf->{$opt};
} else {
-   $item->{value} = $conf->{$opt},
+   $item->{value} = $conf->{$opt};
}
 
push @$res, $item;
-- 
2.39.2



[pve-devel] [PATCH qemu-server 1/2] api: add missing use statements

2024-06-12 Thread Fiona Ebner
---
 PVE/API2/Qemu.pm | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 0c3f451a..cf53e820 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -29,6 +29,7 @@ use PVE::QemuServer;
 use PVE::QemuServer::Cloudinit;
 use PVE::QemuServer::CPUConfig;
 use PVE::QemuServer::Drive;
+use PVE::QemuServer::Helpers;
 use PVE::QemuServer::ImportDisk;
 use PVE::QemuServer::Monitor qw(mon_cmd);
 use PVE::QemuServer::Machine;
@@ -45,8 +46,10 @@ use PVE::API2::Firewall::VM;
 use PVE::API2::Qemu::Agent;
 use PVE::VZDump::Plugin;
 use PVE::DataCenterConfig;
+use PVE::ProcFSTools;
 use PVE::SSHInfo;
 use PVE::Replication;
+use PVE::ReplicationState;
 use PVE::StorageTunnel;
 use PVE::RESTEnvironment qw(log_warn);
 
-- 
2.39.2



Re: [pve-devel] [PATCH qemu-server v10 5/11] fix #1027: virtio-fs support

2024-06-12 Thread Fiona Ebner
On 14.05.24 at 12:54, Markus Frank wrote:
> add support for sharing directories with a guest vm
> 
> virtio-fs needs virtiofsd to be started.
> In order to start virtiofsd as a process (despite being a daemon, it does
> not run in the background), a double-fork is used.
> 
> virtiofsd should close itself together with qemu.

Nit: s/qemu/QEMU/

> 
> There are the parameters dirid and the optional parameters direct-io, cache 
> and
> writeback. Additionally, the xattr & acl parameters override the directory
> mapping settings for xattr & acl.
> 
> The dirid gets mapped to the path on the current node and is also used as a
> mount tag (name used to mount the device on the guest).
> 
> example config:
> ```
> virtiofs0: foo,direct-io=1,cache=always,acl=1
> virtiofs1: dirid=bar,cache=never,xattr=1,writeback=1
> ```
> 
> For information on the optional parameters see the coherent doc patch
> and the official gitlab README:
> https://gitlab.com/virtio-fs/virtiofsd/-/blob/main/README.md
> 
> Also add a permission check for virtiofs directory access.
> 
> Signed-off-by: Markus Frank 
> ---
>  PVE/API2/Qemu.pm   |  39 ++-
>  PVE/QemuServer.pm  |  19 +++-
>  PVE/QemuServer/Makefile|   3 +-
>  PVE/QemuServer/Memory.pm   |  34 --
>  PVE/QemuServer/Virtiofs.pm | 212 +
>  5 files changed, 296 insertions(+), 11 deletions(-)
>  create mode 100644 PVE/QemuServer/Virtiofs.pm
> 
> diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
> index 2a349c8..5d97896 100644
> --- a/PVE/API2/Qemu.pm
> +++ b/PVE/API2/Qemu.pm
> @@ -695,6 +695,32 @@ my sub check_vm_create_hostpci_perm {
>  return 1;
>  };
>  
> +my sub check_dir_perm {
> +my ($rpcenv, $authuser, $vmid, $pool, $opt, $value) = @_;
> +
> +return 1 if $authuser eq 'root@pam';
> +
> +$rpcenv->check_vm_perm($authuser, $vmid, $pool, ['VM.Config.Disk']);
> +
> +my $virtiofs = PVE::JSONSchema::parse_property_string('pve-qm-virtiofs', 
> $value);

The "pve-qm-virtiofs" format is registered by PVE::Qemu::Virtiofs so
there should be a use statement for that module at the top.

> +$rpcenv->check_full($authuser, "/mapping/dir/$virtiofs->{dirid}", 
> ['Mapping.Use']);
> +
> +return 1;
> +};
> +

---snip---

> diff --git a/PVE/QemuServer/Memory.pm b/PVE/QemuServer/Memory.pm
> index f365f2d..691f9b3 100644
> --- a/PVE/QemuServer/Memory.pm
> +++ b/PVE/QemuServer/Memory.pm
> @@ -10,6 +10,7 @@ use PVE::Exception qw(raise raise_param_exc);
>  use PVE::QemuServer::Helpers qw(parse_number_sets);
>  use PVE::QemuServer::Monitor qw(mon_cmd);
>  use PVE::QemuServer::QMPHelpers qw(qemu_devicedel qemu_objectdel);
> +use PVE::QemuServer::Virtiofs;

Can't we avoid this use statement? You could check on the call site and
pass in $virtiofs_enabled as a parameter to sub config(). Introducing
that parameter and the necessary changes in config() could also be a
preparatory patch. I.e. initially, there would be no caller passing in
1, but a later patch would change that.
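
A rough sketch of what I mean (variable and parameter names made up):

    # call site, e.g. in PVE/QemuServer.pm, where the Virtiofs module is
    # already in scope anyway
    my $virtiofs_enabled =
        grep { $conf->{"virtiofs$_"} } (0 .. PVE::QemuServer::Virtiofs::max_virtiofs() - 1);
    PVE::QemuServer::Memory::config(
        $conf, $vmid, $sockets, $cores, $hotplug, $cmd, $machine_flags, $virtiofs_enabled);

    # and in PVE/QemuServer/Memory.pm, so the use statement can be dropped
    sub config {
        my ($conf, $vmid, $sockets, $cores, $hotplug, $cmd, $machine_flags, $virtiofs_enabled) = @_;
        ...
    }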

>  
>  use base qw(Exporter);
>  
> @@ -336,7 +337,7 @@ sub qemu_memdevices_list {
>  }
>  
>  sub config {
> -my ($conf, $vmid, $sockets, $cores, $hotplug, $cmd) = @_;
> +my ($conf, $vmid, $sockets, $cores, $hotplug, $cmd, $machine_flags) = @_;
>  
>  my $memory = get_current_memory($conf->{memory});
>  my $static_memory = 0;
> @@ -367,6 +368,16 @@ sub config {
>  
>  die "numa needs to be enabled to use hugepages" if $conf->{hugepages} && 
> !$conf->{numa};
>  
> +my $virtiofs_enabled = 0;
> +for (my $i = 0; $i < PVE::QemuServer::Virtiofs::max_virtiofs(); $i++) {
> + my $opt = "virtiofs$i";
> + next if !$conf->{$opt};
> + my $virtiofs = 
> PVE::JSONSchema::parse_property_string('pve-qm-virtiofs', $conf->{$opt});
> + if ($virtiofs) {
> + $virtiofs_enabled = 1;
> + }
> +}
> +
>  if ($conf->{numa}) {
>  
>   my $numa_totalmemory = undef;
> @@ -379,7 +390,8 @@ sub config {
>   my $numa_memory = $numa->{memory};
>   $numa_totalmemory += $numa_memory;
>  
> - my $mem_object = print_mem_object($conf, "ram-node$i", 
> $numa_memory);
> + my $memdev = $virtiofs_enabled ? "virtiofs-mem$i" : "ram-node$i";
> + my $mem_object = print_mem_object($conf, $memdev, $numa_memory);
>  
>   # cpus
>   my $cpulists = $numa->{cpus};
> @@ -404,7 +416,7 @@ sub config {
>   }
>  
>   push @$cmd, '-object', $mem_object;
> - push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=ram-node$i";
> + push @$cmd, '-numa', "node,nodeid=$i,cpus=$cpus,memdev=$memdev";
>   }
>  
>   die "total memory for NUMA nodes must be equal to vm static memory\n"
> @@ -418,15 +430,21 @@ sub config {
>   die "host NUMA node$i doesn't exist\n"
>   if !host_numanode_exists($i) && $conf->{hugepages};
>  
> - my $mem_object = print_mem_object($conf, "ram-node$i", 
> $numa_memory);
> - push 

Re: [pve-devel] [PATCH qemu-server v10 4/11] add virtiofsd as runtime dependency for qemu-server

2024-06-12 Thread Fiona Ebner
Commit title should be prefixed with "d/control:"

On 14.05.24 at 12:54, Markus Frank wrote:
> Signed-off-by: Markus Frank 
> ---
>  debian/control | 1 +
>  1 file changed, 1 insertion(+)
> 
> diff --git a/debian/control b/debian/control
> index 1301a36..8e4ca7f 100644
> --- a/debian/control
> +++ b/debian/control
> @@ -55,6 +55,7 @@ Depends: dbus,
>   socat,
>   swtpm,
>   swtpm-tools,
> + virtiofsd,
>   ${misc:Depends},
>   ${perl:Depends},
>   ${shlibs:Depends},


Re: [pve-devel] [PATCH docs v10 3/11] add doc section for the shared filesystem virtio-fs

2024-06-12 Thread Fiona Ebner
On 14.05.24 at 12:54, Markus Frank wrote:
> Signed-off-by: Markus Frank 
> ---
>  qm.adoc | 94 +++--
>  1 file changed, 92 insertions(+), 2 deletions(-)
> 
> diff --git a/qm.adoc b/qm.adoc
> index 42c26db..755e20e 100644
> --- a/qm.adoc
> +++ b/qm.adoc
> @@ -1081,6 +1081,95 @@ recommended to always use a limiter to avoid guests 
> using too many host
>  resources. If desired, a value of '0' for `max_bytes` can be used to disable
>  all limits.
>  
> +[[qm_virtiofs]]
> +Virtio-fs
> +~
> +
> +Virtio-fs is a shared file system that enables sharing a directory between 
> host
> +and guest VM. It takes advantage of the locality of virtual machines and the
> +hypervisor to get a higher throughput than the 9p remote file system 
> protocol.

A bit more general would be:
"higher throughput than network-based protocols, like the 9p remote file
system protocol."

> +
> +To use virtio-fs, the https://gitlab.com/virtio-fs/virtiofsd[virtiofsd] daemon
> +needs to run in the background. In {pve}, this process starts immediately before
> +the start of QEMU.
> +
> +Linux VMs with kernel >=5.4 support this feature by default.
> +
> +There is a guide available on how to utilize virtio-fs in Windows VMs.
> +https://github.com/virtio-win/kvm-guest-drivers-windows/wiki/Virtiofs:-Shared-file-system
> +
> +Known Limitations
> +^
> +
> +* Virtiofsd crashing means no recovery until VM is fully stopped and 
> restarted.
> +* Virtiofsd not responding may result in NFS-like hanging access in the VM.
> +* Memory hotplug does not work in combination with virtio-fs (also results in
> +hanging access).
> +* Live migration does not work.
> +* Windows cannot understand ACLs. Therefore, disable it for Windows VMs,
> +otherwise the virtio-fs device will not be visible within the VMs.
> +
> +Add Mapping for Shared Directories
> +^^
> +
> +To add a mapping for a shared directory, either use the API directly with
> +`pvesh` as described in the xref:resource_mapping[Resource Mapping] section:

There is an "either" in the sentence above, but then there is no "or".

> +
> +
> +pvesh create /cluster/mapping/dir --id dir1 \
> +--map node=node1,path=/path/to/share1 \
> +--map node=node2,path=/path/to/share2,submounts=1 \
> +--xattr 1 \
> +--acl 1
> +
> +
> +The `acl` parameter automatically implies `xattr`, that is, it makes no
> +difference whether you set `xattr` to `0` if `acl` is set to `1`.
> +
> +Set `submounts` to `1` when multiple file systems are mounted in a shared
> +directory to prevent the guest from creating duplicates because of file 
> system
> +specific inode IDs that get passed through.
> +
> +
> +Add virtio-fs to a VM
> +^
> +
> +To share a directory using virtio-fs, add the parameter `virtiofs` (N can 
> be
> +anything between 0 and 9) to the VM config and use a directory ID (dirid) 
> that
> +has been configured in the resource mapping. Additionally, you can set the
> +`cache` option to either `always`, `never`, or `auto` (default: `auto`),
> +depending on your requirements. How the different caching modes behave can be
> +read at https://lwn.net/Articles/774495/ under the title "Caching Modes". To

s/title/section/ ?

> +enable writeback cache set `writeback` to `1`.
> +


Re: [pve-devel] [PATCH guest-common v10 2/11] add dir mapping section config

2024-06-12 Thread Fiona Ebner
On 14.05.24 at 12:54, Markus Frank wrote:
> diff --git a/src/PVE/Mapping/Dir.pm b/src/PVE/Mapping/Dir.pm
> new file mode 100644
> index 000..8f131c2
> --- /dev/null
> +++ b/src/PVE/Mapping/Dir.pm
> @@ -0,0 +1,205 @@
> +package PVE::Mapping::Dir;
> +
> +use strict;
> +use warnings;
> +
> +use PVE::Cluster qw(cfs_register_file cfs_read_file cfs_lock_file 
> cfs_write_file);
> +use PVE::INotify;
> +use PVE::JSONSchema qw(get_standard_option parse_property_string);
> +use PVE::SectionConfig;
> +use PVE::Storage::Plugin;

Storage::Plugin isn't used anywhere?

> +
> +use base qw(PVE::SectionConfig);
> +
> +my $FILENAME = 'mapping/dir.cfg';
> +
> +cfs_register_file($FILENAME,
> +  sub { __PACKAGE__->parse_config(@_); },
> +  sub { __PACKAGE__->write_config(@_); });

Style nit: uses only spaces, non-standard indentation

---snip---

> +my $map_fmt = {
> +node => get_standard_option('pve-node'),
> +path => {
> + description => "Absolute directory path that should be shared with the 
> guest.",
> + type => 'string',
> + format => 'pve-storage-path',
> +},
> +submounts => {
> + type => 'boolean',
> + description => "Announce that the directory contains other mounted"
> + ." file systems. If this is not set and multiple file systems are"
> + ." mounted, the guest may encounter duplicates due to file system"
> + ." specific inode IDs.",
> + optional => 1,
> + default => 0,

Hmm, so even if this option is not set, the files from submounts are
accessible? Should this rather be turned on by default or always? Or
what are the downsides when setting this?

---snip---

> +sub assert_valid {
> +my ($dir_cfg) = @_;
> +
> +my $path = $dir_cfg->{path};
> +
> +if (! -e $path) {
> + die "Path $path does not exist\n";
> +} elsif (! -d $path) {
> + die "Path $path exists but is not a directory\n"

Nit, missing comma before 'but', missing semicolon at the end of the line.

> +}
> +
> +return 1;
> +};
> +
> +sub check_duplicate {

Better called e.g. assert_no_duplicate_node, because it dies. "check" is
mostly used for functions returning a "boolean" in our code base.

> +my ($map_list) = @_;
> +
> +my %count;
> +for my $map (@$map_list){

Style nit: missing space after closing parenthesis

> + my $entry = parse_property_string($map_fmt, $map);
> + $count{$entry->{node}}++;
> +}
> +for my $node (keys %count) {
> + if ($count{$node} > 1) {
> + die "Node '$node' is specified $count{$node} times.\n";
> + }
> +}
> +}
> +
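
Taking the rename and the style nits together, the helper could then
look something like this (just a sketch):

    sub assert_no_duplicate_node {
        my ($map_list) = @_;

        my %count;
        for my $map (@$map_list) {
            my $entry = parse_property_string($map_fmt, $map);
            $count{$entry->{node}}++;
        }
        for my $node (keys %count) {
            die "Node '$node' is specified $count{$node} times.\n" if $count{$node} > 1;
        }
    }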


[pve-devel] applied: [PATCH v2 manager] vzdump config: add fleecing property string

2024-06-12 Thread Fabian Grünbichler
thanks!

On June 12, 2024 10:16 am, Fiona Ebner wrote:
> This makes it clear(er) that fleecing can be configured as a node-wide
> default too.
> 
> Signed-off-by: Fiona Ebner 


[pve-devel] applied: [PATCH proxmox-backup-qemu] tree-wide: fix typos in comments

2024-06-12 Thread Fabian Grünbichler
thanks!

On January 19, 2024 10:57 am, Fiona Ebner wrote:
> Signed-off-by: Fiona Ebner 
> ---
>  header-preamble.c |  4 ++--
>  src/backup.rs |  2 +-
>  src/commands.rs   |  6 +++---
>  src/lib.rs| 24 
>  4 files changed, 18 insertions(+), 18 deletions(-)
> 
> diff --git a/header-preamble.c b/header-preamble.c
> index b44ab32..e6a6068 100644
> --- a/header-preamble.c
> +++ b/header-preamble.c
> @@ -19,6 +19,6 @@
>   * result: *mut c_int,
>   * error: *mut *mut c_char,
>   *
> - * The callback function is called when the the async function is
> - * ready. Possible errors are returned in 'error'.
> + * The callback function is called when the async function is ready.
> + * Possible errors are returned in 'error'.
>   */
> diff --git a/src/backup.rs b/src/backup.rs
> index bbe4f00..d662520 100644
> --- a/src/backup.rs
> +++ b/src/backup.rs
> @@ -34,7 +34,7 @@ pub(crate) struct BackupTask {
>  registry: Arc>>,
>  known_chunks: Arc>>,
>  abort: tokio::sync::broadcast::Sender<()>,
> -aborted: OnceCell, // set on abort, conatins abort reason
> +aborted: OnceCell, // set on abort, contains abort reason
>  }
>  
>  impl BackupTask {
> diff --git a/src/commands.rs b/src/commands.rs
> index 37d653c..ba1832f 100644
> --- a/src/commands.rs
> +++ b/src/commands.rs
> @@ -58,7 +58,7 @@ pub(crate) fn deserialize_state(data: &[u8]) -> Result<(), 
> Error> {
>  Ok(())
>  }
>  
> -// Note: We alway register/upload a chunk containing zeros
> +// Note: We always register/upload a chunk containing zeros
>  async fn register_zero_chunk(
>  client: Arc,
>  crypt_config: Option>,
> @@ -439,7 +439,7 @@ pub(crate) async fn write_data(
>  
>  match upload_queue {
>  Some(ref mut upload_queue) => {
> -// Phase 2: send reponse future to other task
> +// Phase 2: send response future to other task
>  if upload_queue.send(upload_future).await.is_err() {
>  let upload_result = {
>  let mut guard = registry.lock().unwrap();
> @@ -463,7 +463,7 @@ pub(crate) async fn write_data(
>  }
>  }
>  
> -//println!("upload chunk sucessful");
> +//println!("upload chunk successful");
>  
>  Ok(if reused { 0 } else { size as c_int })
>  }
> diff --git a/src/lib.rs b/src/lib.rs
> index 02e74f7..3b0c1fa 100644
> --- a/src/lib.rs
> +++ b/src/lib.rs
> @@ -98,7 +98,7 @@ macro_rules! param_not_null {
>  
>  /// Returns the text presentation (relative path) for a backup snapshot
>  ///
> -/// The resturned value is allocated with strdup(), and can be freed
> +/// The returned value is allocated with strdup(), and can be freed
>  /// with free().
>  #[no_mangle]
>  #[allow(clippy::not_unsafe_ptr_arg_deref)]
> @@ -142,7 +142,7 @@ pub(crate) struct BackupSetup {
>  pub fingerprint: Option,
>  }
>  
> -// helper class to implement synchrounous interface
> +// helper class to implement synchronous interface
>  struct GotResultCondition {
>  lock: Mutex,
>  cond: Condvar,
> @@ -327,7 +327,7 @@ fn backup_handle_to_task(handle: *mut 
> ProxmoxBackupHandle) -> Arc {
>  /// Open connection to the backup server (sync)
>  ///
>  /// Returns:
> -///  0 ... Sucecss (no prevbious backup)
> +///  0 ... Success (no previous backup)
>  ///  1 ... Success (found previous backup)
>  /// -1 ... Error
>  #[no_mangle]
> @@ -358,7 +358,7 @@ pub extern "C" fn proxmox_backup_connect(
>  /// Open connection to the backup server
>  ///
>  /// Returns:
> -///  0 ... Sucecss (no prevbious backup)
> +///  0 ... Success (no previous backup)
>  ///  1 ... Success (found previous backup)
>  /// -1 ... Error
>  #[no_mangle]
> @@ -565,7 +565,7 @@ pub extern "C" fn proxmox_backup_add_config_async(
>  ///
>  /// Returns:
>  /// -1: on error
> -///  0: successful, chunk already exists on server, so it was resued
> +///  0: successful, chunk already exists on server, so it was reused
>  ///  size: successful, chunk uploaded
>  #[no_mangle]
>  #[allow(clippy::not_unsafe_ptr_arg_deref)]
> @@ -608,11 +608,11 @@ pub extern "C" fn proxmox_backup_write_data(
>  /// (only allowed if size == chunk_size)
>  ///
>  /// Note: The data pointer needs to be valid until the async
> -/// opteration is finished.
> +/// operation is finished.
>  ///
>  /// Returns:
>  /// -1: on error
> -///  0: successful, chunk already exists on server, so it was resued
> +///  0: successful, chunk already exists on server, so it was reused
>  ///  size: successful, chunk uploaded
>  #[no_mangle]
>  #[allow(clippy::not_unsafe_ptr_arg_deref)]
> @@ -772,7 +772,7 @@ fn restore_handle_to_task(handle: *mut 
> ProxmoxRestoreHandle) -> Arc
>  Arc::clone(restore_task)
>  }
>  
> -/// DEPRECATED: Connect the the backup server for restore (sync)
> +/// DEPRECATED: Connect to the backup server for restore (sync)
>  ///
>  /// Deprecated in favor of `proxmox_restore_new_ns` which includes a 
> namespace parameter.
>  /// Also, it used "lossy" utf8 

Re: [pve-devel] [PATCH v2 common] REST environment: warn helpers: also log messages to syslog

2024-06-12 Thread Fiona Ebner
Ping

On 09.02.24 at 15:15, Fiona Ebner wrote:
> for better visibility. When not in a task, warnings from these helpers
> are only logged to STDERR, which is particularly unhelpful in case of
> daemons. This is the main motivation behind this change.
> 
> For tasks, warnings from these helpers are already more visible on the
> UI side, but when looking at the syslog, one can only see the warning
> count from the task right now, not the actual messages. This is
> another reason in favor of the change.
> 
> Reported-by: Friedrich Weber 
> Suggested-by: Thomas Lamprecht 
> Signed-off-by: Fiona Ebner 
> ---
> 
> Changes in v2:
> * Go with the approach suggested by Thomas.
> 
>  src/PVE/RESTEnvironment.pm | 2 ++
>  1 file changed, 2 insertions(+)
> 
> diff --git a/src/PVE/RESTEnvironment.pm b/src/PVE/RESTEnvironment.pm
> index 191c6eb..bfde7e6 100644
> --- a/src/PVE/RESTEnvironment.pm
> +++ b/src/PVE/RESTEnvironment.pm
> @@ -723,6 +723,7 @@ sub log_warn {
>   $rest_env->warn($message);
>  } else {
>   chomp($message);
> + syslog('warning', "%s", $message);
>   print STDERR "WARN: $message\n";
>  }
>  }
> @@ -732,6 +733,7 @@ sub warn {
>  
>  chomp($message);
>  
> +syslog('warning', "%s", $message);
>  print STDERR "WARN: $message\n";
>  
>  $self->{warning_count}++;


Re: [pve-devel] [PATCH proxmox-backup-qemu] tree-wide: fix typos in comments

2024-06-12 Thread Fiona Ebner
Ping, still applies


[pve-devel] [PATCH v2 manager] vzdump config: add fleecing property string

2024-06-12 Thread Fiona Ebner
This makes it clear(er) that fleecing can be configured as a node-wide
default too.

Signed-off-by: Fiona Ebner 
---

Changes in v2:
* rebase

Since the new pbs-change-detection-mode got added, would be nice to
add fleecing soon too to avoid pestering admins twice with the
==> Package distributor has shipped an updated version.
dialogue during update.

 configs/vzdump.conf | 1 +
 1 file changed, 1 insertion(+)

diff --git a/configs/vzdump.conf b/configs/vzdump.conf
index 54913073..1835e208 100644
--- a/configs/vzdump.conf
+++ b/configs/vzdump.conf
@@ -17,3 +17,4 @@
 #pigz: N
 #notes-template: {{guestname}}
 #pbs-change-detection-mode: legacy|data|metadata
+#fleecing: enabled=BOOLEAN,storage=STORAGE_ID
-- 
2.39.2


