genegr commented on PR #13061:
URL: https://github.com/apache/cloudstack/pull/13061#issuecomment-4299974292

   Heads-up: I've pushed an additional commit `c0cdfa41da — kvm: implement 
copyPhysicalDisk on MultipathNVMeOFAdapterBase` to this PR. The original 
description noted this as "future work, not in this PR", but after validating 
the rest of the NVMe-TCP path end-to-end, I wanted VMs to be fully deployable 
on an NVMe-TCP pool (root + data), which requires `copyPhysicalDisk` to land 
the template as a raw image on the provisioned namespace.
   
   The implementation mirrors `MultipathSCSIAdapterBase.copyPhysicalDisk`: 
resolve the destination device path via the existing `getPhysicalDisk` plumbing 
(which triggers `nvme ns-rescan` and waits for the `by-id/nvme-eui.<NGUID>` 
symlink), then `qemu-img convert` the source image onto the raw block device. 
User-space encrypted source or destination volumes are rejected by design: the 
FlashArray already encrypts at rest, and stacking qemu-img LUKS on top of a 
hostgroup-scoped namespace would be redundant and would break across live 
migration.
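
   To make that concrete for reviewers, here's a minimal sketch of the flow, 
patterned on `MultipathSCSIAdapterBase.copyPhysicalDisk` and CloudStack's 
`QemuImg` wrapper. The exact method signature, exception handling, and helper 
wiring below are illustrative assumptions, not a copy of the committed code:

   ```java
   import org.apache.cloudstack.utils.qemu.QemuImg;
   import org.apache.cloudstack.utils.qemu.QemuImg.PhysicalDiskFormat;
   import org.apache.cloudstack.utils.qemu.QemuImgException;
   import org.apache.cloudstack.utils.qemu.QemuImgFile;
   import org.libvirt.LibvirtException;

   import com.cloud.storage.Storage.ProvisioningType;
   import com.cloud.utils.exception.CloudRuntimeException;

   // Sketch only: mirrors the MultipathSCSIAdapterBase.copyPhysicalDisk flow
   // described above, not the exact code in c0cdfa41da.
   @Override
   public KVMPhysicalDisk copyPhysicalDisk(KVMPhysicalDisk disk, String name,
           KVMStoragePool destPool, int timeout,
           byte[] srcPassphrase, byte[] dstPassphrase,
           ProvisioningType provisioningType) {
       // Rejected by design: the FlashArray encrypts at rest, and qemu-img
       // LUKS on a hostgroup-scoped namespace would break live migration.
       if ((srcPassphrase != null && srcPassphrase.length > 0)
               || (dstPassphrase != null && dstPassphrase.length > 0)) {
           throw new CloudRuntimeException(
                   "Encrypted volumes are not supported on NVMe-TCP pools");
       }

       // Resolve the destination device via the existing plumbing; this
       // triggers `nvme ns-rescan` and waits for the
       // /dev/disk/by-id/nvme-eui.<NGUID> symlink to appear.
       KVMPhysicalDisk destDisk = destPool.getPhysicalDisk(name);

       // Land the source image on the provisioned namespace as raw.
       QemuImgFile srcFile = new QemuImgFile(disk.getPath(), disk.getFormat());
       QemuImgFile destFile = new QemuImgFile(destDisk.getPath(),
               PhysicalDiskFormat.RAW);
       try {
           new QemuImg(timeout).convert(srcFile, destFile);
       } catch (QemuImgException | LibvirtException e) {
           throw new CloudRuntimeException("Failed to copy " + disk.getPath()
                   + " to " + destDisk.getPath(), e);
       }

       destDisk.setFormat(PhysicalDiskFormat.RAW);
       destDisk.setVirtualSize(disk.getVirtualSize());
       return destDisk;
   }
   ```

   Rejecting passphrases up front, rather than failing partway through the 
convert, keeps the failure mode predictable and matches the "rejected by 
design" behaviour described above.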
   
   With this commit I was able to:
   
   - Deploy a Rocky 9 VM with `pooltype: NVMeTCP` on the root volume 
(previously the deploy failed as soon as the root disk tried to land on the 
NVMe-TCP pool).
   - Attach an additional `tags=nvme` data disk, so both `vda` and `vdb` are 
NVMe-backed.
   - `createVMSnapshot` with `quiescevm=true, snapshotmemory=false` → 
array-side snapshots on both volumes, CloudStack `state: Ready`, `type: Disk`.
   - `revertToVMSnapshot` → both volumes came back with SHA-256 checksums 
identical to their pre-snapshot content.
   
   I've also updated the PR description to reflect the 7-commit set and to 
include the full-NVMe test evidence. Happy to split this commit into a separate 
follow-up PR if reviewers prefer; let me know.
   

