On 3/7/24 10:33, Dominik Csapak wrote:
> when triggering a vm reboot from the host (via cli/api), the reboot
> code runs under a guest lock: it creates a reboot request, shuts down
> the vm, and calls the regular cleanup code (which includes the mdev
> cleanup).
> 
> in parallel, qmeventd sees the vanished vm and starts 'qm cleanup',
> which is (among other tasks) used to restart a vm with a pending reboot
> request. It also runs under a guest lock, with a default lock timeout
> of 10 seconds.
> 
> Since we currently wait 10 seconds for the nvidia driver to clean up
> the mdev, this creates a race for the cleanup lock. In other words,
> when the call to `qm cleanup` starts before we begin the 10-second
> sleep, it will not be able to acquire its lock in time and will not
> start the vm again.
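
For context, the race boils down to two code paths contending for the
same per-vm config lock. A schematic sketch with placeholder bodies
(not the actual call sites):

    # reboot path: holds the guest lock across shutdown and cleanup,
    # including the up-to-10-seconds-per-mdev wait in cleanup_pci_devices()
    PVE::QemuConfig->lock_config($vmid, sub {
        # ... create reboot request, shut down vm, run regular cleanup ...
    });

    # qmeventd path ('qm cleanup'): takes the same lock with the default
    # 10 second timeout, so it can give up while the sleep above is still
    # running, and the pending reboot never restarts the vm
    PVE::QemuConfig->lock_config($vmid, sub {
        # ... restart the vm if a reboot request is pending ...
    });
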
> 
> To fix it, do two things:
> * increase the timeout in `qm cleanup` to 60 seconds
>   (technically this still might run into a timeout, as we can configure
>   up to 16 mediated devices with up to 10 seconds of sleep each, i.e.
>   160 seconds in the worst case, but realistically most users won't
>   configure more than two or three of them, if even that)
> 
> * change the `sleep 10` to a loop that sleeps for 1 second per
>   iteration before checking the state again, as sketched below. This
>   shortens the wait whenever the driver cleans the mdev up earlier.
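
The second change, schematically (the real hunk is in the diff below):

    # before: always block the full 10 seconds
    sleep 10;

    # after: poll once per second, stopping early once the driver has
    # removed the mdev itself (or after at most 10 seconds)
    my $count = 0;
    while (-e $dev_sysfs_dir && $count < 10) {
        sleep 1;
        $count++;
    }
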
> 
> Further, add a bit of logging, so we can properly see in the (task) log
> what is happening when.
> 
> Fixes: 49c51a60 (pci: workaround nvidia driver issue on mdev cleanup)
> Signed-off-by: Dominik Csapak <d.csa...@proxmox.com>
> ---
>  PVE/CLI/qm.pm     |  3 ++-
>  PVE/QemuServer.pm | 16 ++++++++++++----
>  2 files changed, 14 insertions(+), 5 deletions(-)
> 
> diff --git a/PVE/CLI/qm.pm b/PVE/CLI/qm.pm
> index b17b4fe2..dce6c7a1 100755
> --- a/PVE/CLI/qm.pm
> +++ b/PVE/CLI/qm.pm
> @@ -915,7 +915,8 @@ __PACKAGE__->register_method({
>       my $storecfg = PVE::Storage::config();
>       warn "Starting cleanup for $vmid\n";
>  
> -     PVE::QemuConfig->lock_config($vmid, sub {
> +     # mdev cleanup can take a while, so wait up to 60 seconds
> +     PVE::QemuConfig->lock_config_full($vmid, 60, sub {
>           my $conf = PVE::QemuConfig->load_config ($vmid);
>           my $pid = PVE::QemuServer::check_running ($vmid);
>           die "vm still running\n" if $pid;
> diff --git a/PVE/QemuServer.pm b/PVE/QemuServer.pm
> index b45507aa..efe93c62 100644
> --- a/PVE/QemuServer.pm
> +++ b/PVE/QemuServer.pm
> @@ -6133,12 +6133,20 @@ sub cleanup_pci_devices {
>           my $dev_sysfs_dir = "/sys/bus/mdev/devices/$uuid";
>  
>           # some nvidia vgpu driver versions want to clean the mdevs up themselves, and error
> -         # out when we do it first. so wait for 10 seconds and then try it
> -         if ($d->{ids}->[0]->[0]->{vendor} =~ m/^(0x)?10de$/) {
> -             sleep 10;
> +         # out when we do it first. so wait for up to 10 seconds and then try it manually
> +         if ($d->{ids}->[0]->[0]->{vendor} =~ m/^(0x)?10de$/ && -e $dev_sysfs_dir) {
> +             my $count = 0;
> +             while (-e $dev_sysfs_dir && $count < 10) {
> +                 sleep 1;
> +                 $count++;
> +             }
> +             warn "waited $count seconds for mdev cleanup\n";
>           }
>  
> -         PVE::SysFSTools::file_write("$dev_sysfs_dir/remove", "1") if -e $dev_sysfs_dir;
> +         if (-e $dev_sysfs_dir) {
> +             warn "cleaning up mediated device $uuid\n";
> +             PVE::SysFSTools::file_write("$dev_sysfs_dir/remove", "1");
> +         }
>       }
>      }
>      PVE::QemuServer::PCI::remove_pci_reservation($vmid);

lgtm

Reviewed-by: Mira Limbeck <m.limb...@proxmox.com>


Sadly I can't test it myself since I don't have a vGPU-capable nvidia
card. But the customer who reported the issue has already tested the
patch, and so far it seems to fix it without introducing new issues.


_______________________________________________
pve-devel mailing list
pve-devel@lists.proxmox.com
https://lists.proxmox.com/cgi-bin/mailman/listinfo/pve-devel
