On 3/31/21 11:23 AM, Fabian Grünbichler wrote:
sorry for the long pause - haven't checked the GUI part, but that should
be unaffected by my comments in-line.

thanks and yes, the GUI should be oblivious to any changes here


On December 15, 2020 1:48 pm, Aaron Lauterer wrote:
The goal of this new API endpoint is to provide an easy way to move a
disk between VMs, as until now this was only possible with manual
intervention: either by renaming the VM disk or by manually adding the
disk's volid to the config of the other VM.

The latter can easily cause unexpected behavior, such as a disk attached
to VM B being deleted because it used to be a disk of VM A. This happens
because PVE assumes that the VMID in the volname always matches the VM
the disk is attached to, and would thus remove any disk with VMID A
when VM A was deleted.

The term `reassign` was chosen as it is not yet used
for VM disks.

Signed-off-by: Aaron Lauterer <a.laute...@proxmox.com>
---
v4 -> v5:
* implemented suggestions from Fabian [1]
     * logging before action
     * improving description
     * improving error messages
     * using Replication::prepare to remove replication snapshots
     * check if disk is physical disk using /dev/...

v3 -> v4: nothing

v2 -> v3:
* reordered the locking as discussed with Fabian [0] to:
run checks
     fork worker
        lock source config
            lock target config
                run checks
                ...

* added more checks
     * will not reassign to or from templates
     * will not reassign if VM has snapshots present
* cleanup if disk used to be replicated
* made task log slightly more verbose
* integrated general recommendations regarding code
* renamed `disk` to `drive_key`
* prepended some vars with `source_` for easier distinction
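A rough sketch of that check/lock ordering (illustrative Python with stand-in stubs; the real code uses PVE::QemuConfig->lock_config with Perl closures and fork_worker for the task):

```python
from contextlib import contextmanager

calls = []  # records the order of operations, for illustration only

@contextmanager
def lock_config(vmid):
    # stand-in for PVE::QemuConfig->lock_config($vmid, sub { ... })
    calls.append(f"lock {vmid}")
    try:
        yield
    finally:
        calls.append(f"unlock {vmid}")

def load_and_check(src, tgt):
    # stand-in for the $load_and_check_configs closure
    calls.append("checks")

def reassign(src, tgt):
    load_and_check(src, tgt)              # cheap checks before forking the worker
    def worker():
        with lock_config(src):            # lock source config first
            with lock_config(tgt):        # then target config
                load_and_check(src, tgt)  # re-run checks under both locks, since
                                          # either config may have changed meanwhile
                calls.append("reassign")
    worker()                              # the real code runs this via fork_worker

reassign(100, 101)
```

The point of the re-run is that the pre-fork checks only give an early, user-friendly failure; the checks under both locks are the authoritative ones.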

v1 -> v2: print config key and volid info at the end of the job so it
shows up on the CLI and task log

rfc -> v1:
* add support to reassign unused disks
* add support to provide a config digest for the target vm
* add additional check if disk key is present in config
* reorder checks a bit

In order to support unused disks I had to extend
PVE::QemuServer::Drive::valid_drive_names for the API parameter
validation.

Checks are ordered so that cheap tests run first, giving the earliest
chance to fail.

The check that both VMs are present on the node is slightly redundant,
because locking the config files will fail anyway if a VM is not present.
But with the additional check we can give the user a useful error message
instead of a "Configuration file xyz does not exist" error.

[0] https://lists.proxmox.com/pipermail/pve-devel/2020-September/044930.html
[1] https://lists.proxmox.com/pipermail/pve-devel/2020-November/046030.html

  PVE/API2/Qemu.pm        | 151 ++++++++++++++++++++++++++++++++++++++++
  PVE/QemuServer/Drive.pm |   4 ++
  2 files changed, 155 insertions(+)

diff --git a/PVE/API2/Qemu.pm b/PVE/API2/Qemu.pm
index 00d98ab..614f068 100644
--- a/PVE/API2/Qemu.pm
+++ b/PVE/API2/Qemu.pm
@@ -35,6 +35,7 @@ use PVE::API2::Qemu::Agent;
  use PVE::VZDump::Plugin;
  use PVE::DataCenterConfig;
  use PVE::SSHInfo;
+use PVE::Replication;
BEGIN {
      if (!$ENV{PVE_GENERATING_DOCS}) {
@@ -4319,4 +4320,154 @@ __PACKAGE__->register_method({
      return PVE::QemuServer::Cloudinit::dump_cloudinit_config($conf, $param->{vmid}, $param->{type});
      }});
+__PACKAGE__->register_method({
+    name => 'reassign_vm_disk',
+    path => '{vmid}/reassign_disk',
+    method => 'POST',
+    protected => 1,
+    proxyto => 'node',
+    description => "Reassign a disk to another VM",
+    permissions => {
+       description => "You need 'VM.Config.Disk' permissions on /vms/{vmid} and /vms/{target vmid}, and 'Datastore.Allocate' permissions on the storage.",
+       check => [ 'and',
+                  ['perm', '/vms/{vmid}', [ 'VM.Config.Disk' ]],
+                  ['perm', '/storage/{storage}', [ 'Datastore.Allocate' ]],
+           ],

did you actually check this works? you can check the source vmid here,
but the rest as described in the schema needs to be manually checked
down below..

Thanks for catching this.

+    },
+    parameters => {
+        additionalProperties => 0,
+       properties => {
+           node => get_standard_option('pve-node'),
+           vmid => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),
+           target_vmid => get_standard_option('pve-vmid', { completion => \&PVE::QemuServer::complete_vmid }),
+           drive_name => {
+               type => 'string',
+               description => "The config key of the disk to reassign (for example, ide0 or scsi1).",
+               enum => [PVE::QemuServer::Drive::valid_drive_names_with_unused()],
+           },
+           digest => {
+               type => 'string',
+               description => 'Prevent changes if the current configuration file of the source VM has a different SHA1 digest. This can be used to prevent concurrent modifications.',
+               maxLength => 40,
+               optional => 1,
+           },
+           target_digest => {
+               type => 'string',
+               description => 'Prevent changes if the current configuration file of the target VM has a different SHA1 digest. This can be used to prevent concurrent modifications.',
+               maxLength => 40,
+               optional => 1,
+           },
+       },
+    },
+    returns => {
+       type => 'string',
+       description => "the task ID.",
+    },
+    code => sub {
+       my ($param) = @_;
+
+       my $rpcenv = PVE::RPCEnvironment::get();
+       my $authuser = $rpcenv->get_user();
+
+       my $node = extract_param($param, 'node');
+       my $source_vmid = extract_param($param, 'vmid');
+       my $target_vmid = extract_param($param, 'target_vmid');
+       my $source_digest = extract_param($param, 'digest');
+       my $target_digest = extract_param($param, 'target_digest');
+       my $drive_name = extract_param($param, 'drive_name');
+
+       my $storecfg = PVE::Storage::config();
+       my $vmlist;
+       my $drive;
+       my $source_volid;
+
+       die "Reassigning a disk to the same VM is not possible. Did you mean to move the disk?\n"
+           if $source_vmid eq $target_vmid;
+
+       my $load_and_check_configs = sub {
+           $vmlist = PVE::Cluster::get_vmlist()->{ids};
+           die "Both VMs need to be on the same node ($vmlist->{$source_vmid}->{node}) but target VM is on $vmlist->{$target_vmid}->{node}.\n"
+               if $vmlist->{$source_vmid}->{node} ne $vmlist->{$target_vmid}->{node};
+
+           my $source_conf = PVE::QemuConfig->load_config($source_vmid);
+           PVE::QemuConfig->check_lock($source_conf);
+           my $target_conf = PVE::QemuConfig->load_config($target_vmid);
+           PVE::QemuConfig->check_lock($target_conf);
+
+           die "Can't reassign disks from or to templates\n"
+               if ($source_conf->{template} || $target_conf->{template});
+
+           if ($source_digest) {
+               eval { PVE::Tools::assert_if_modified($source_digest, $source_conf->{digest}) };
+               if (my $err = $@) {
+                   die "VM ${source_vmid}: ${err}";
+               }
+           }
+
+           if ($target_digest) {
+               eval { PVE::Tools::assert_if_modified($target_digest, $target_conf->{digest}) };
+               if (my $err = $@) {
+                   die "VM ${target_vmid}: ${err}";
+               }
+           }
+
+           die "Disk '${drive_name}' does not exist\n"
+               if !defined($source_conf->{$drive_name});
+
+           $drive = PVE::QemuServer::parse_drive($drive_name, $source_conf->{$drive_name});
+           $source_volid = $drive->{file};
+           die "disk '${drive_name}' has no associated volume\n" if !$source_volid;
+           die "CD drive contents can't be reassigned\n" if PVE::QemuServer::drive_is_cdrom($drive, 1);
+           die "Can't reassign physical disk\n" if $drive->{file} =~ m|^/dev/|;
+           die "Can't reassign disk used by a snapshot\n"
+               if PVE::QemuServer::Drive::is_volume_in_use($storecfg, $source_conf, $drive_name, $source_volid);
+
+           die "Storage does not support the reassignment of this disk\n"
+               if !PVE::Storage::volume_has_feature($storecfg, 'reassign', $source_volid);
+
+           die "Cannot reassign disk while the source VM is running\n"
+               if PVE::QemuServer::check_running($source_vmid) && $drive_name !~ m/unused[0-9]/;

we use /^unused\d+$/ in other places for this, which is a bit less
error-prone in case we ever add other "unused" keys..

Good to know :)
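To illustrate the difference (a Python sketch with a made-up key name; an unanchored Perl `m/.../` behaves like a substring search in the same way):

```python
import re

loose = re.compile(r"unused[0-9]")    # unanchored, as in the patch above
strict = re.compile(r"^unused\d+$")   # anchored, as suggested in the review

# A hypothetical config key merely containing "unused" slips past the loose check:
print(bool(loose.search("unused0")))         # True
print(bool(loose.search("someunused0key")))  # True  -- false positive
print(bool(strict.match("unused0")))         # True
print(bool(strict.match("someunused0key")))  # False
```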


+
+           return ($source_conf, $target_conf);
+       };
+
+       my $logfunc = sub {
+           my ($msg) = @_;
+           print STDERR "$msg\n";
+       };
+
+       my $reassign_func = sub {
+           return PVE::QemuConfig->lock_config($source_vmid, sub {
+               return PVE::QemuConfig->lock_config($target_vmid, sub {
+                   my ($source_conf, $target_conf) = &$load_and_check_configs();
+
+                   PVE::Cluster::log_msg('info', $authuser, "reassign disk VM $source_vmid: reassign --disk ${drive_name} --target_vmid $target_vmid");
+
+                   my $new_volid = PVE::Storage::reassign_volume($storecfg, $source_volid, $target_vmid);
+
+                   delete $source_conf->{$drive_name};
+                   print "removing disk '${drive_name}' from VM '${source_vmid}'\n";
+                   PVE::QemuConfig->write_config($source_vmid, $source_conf);
+
+                   # remove possible replication snapshots
+                   PVE::Replication::prepare($storecfg, [$new_volid], undef, undef, undef, $logfunc);

might make sense to wrap this in an eval as well, and print an
appropriate warning that manual cleanup might be required here as well..
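Such a wrapper could look roughly like this (sketched in Python with hypothetical helper names; the Perl version would use `eval {}` and check `$@`):

```python
warnings = []  # collected so the failure is visible in the task log

def remove_replication_snapshots(volid):
    # stand-in for PVE::Replication::prepare(...) on the reassigned volume;
    # assume it can fail, e.g. if the storage is briefly unreachable
    raise RuntimeError("storage temporarily unavailable")

def cleanup_replication_state(volid):
    # Wrap the cleanup so a failure aborts neither the already-written config
    # changes nor the task, and instead warns that manual cleanup may be needed.
    try:
        remove_replication_snapshots(volid)
    except Exception as err:
        msg = (f"WARN: failed to remove replication snapshots of '{volid}', "
               f"manual cleanup might be required: {err}")
        warnings.append(msg)
        print(msg)

cleanup_replication_state("local-zfs:vm-101-disk-0")
```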

+
+                   my $key;
+                   eval { $key = PVE::QemuConfig->add_unused_volume($target_conf, $new_volid) };
+                   if (my $err = $@) {
+                       print "failed to add reassigned disk '${new_volid}' to VM '${target_vmid}'. Try to free an 'unused' disk slot and run 'qm rescan ${target_vmid}'.\n";
+                       return 0;
+                   }
+
+                   print "adding disk to VM '${target_vmid}' as '${key}: ${new_volid}'\n";
+                   PVE::QemuConfig->write_config($target_vmid, $target_conf);
+               });
+           });
+       };
+
+       &$load_and_check_configs();
+
+       return $rpcenv->fork_worker('qmreassign', $source_vmid, $authuser, $reassign_func);
+    }});
+
  1;
diff --git a/PVE/QemuServer/Drive.pm b/PVE/QemuServer/Drive.pm
index d560937..95c0538 100644
--- a/PVE/QemuServer/Drive.pm
+++ b/PVE/QemuServer/Drive.pm
@@ -383,6 +383,10 @@ sub valid_drive_names {
              'efidisk0');
  }
+sub valid_drive_names_with_unused {
+    return (valid_drive_names(), map { "unused$_" } (0 .. ($MAX_UNUSED_DISKS - 1)));
+}
+
  sub is_valid_drivename {
      my $dev = shift;
--
2.20.1



_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel




