From: Dexuan Cui <[email protected]> Sent: Tuesday, April 28, 2026 8:13 PM
> > From: Michael Kelley <[email protected]> Sent: Thursday, April 23, 2026 
> > 10:40 AM

[snip]

> > > + /* Hyper-V CoCo guests do not have a framebuffer device. */
> > > + if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
> > > +         return;
> >
> > This test is testing feature "A" (mem encryption) in order to determine
> > the presence of feature "B" (no framebuffer), because current
> > configurations happen to always have "A" and "B" at the same time. But
> > the linkage between the features is tenuous, and if configurations should
> > change in the future, testing this way could be bogus. It works now, but I'm
> > leery of depending on the linkage between "A" and "B".
> >
> > You could set up a "can_have_framebuffer" flag in ms_hyperv_init_platform()
> > if running in a CVM, and test that flag here. But I'd suggest just dropping
> > this optimization. CVMs are always Gen2 (and that's not going to change),
> > so they have plenty of low mmio space.
> 
> This is not true on a lab host; e.g., I have a TDX VM created by these
> two commands (without the second command, Hyper-V won't allow the
> TDX VM to start):
> 
>     New-VM -Generation 2 -GuestStateIsolationType Tdx -Name $vmName
>     Disable-VMConsoleSupport -VMName $vmName
> 
> The low_mmio_base is still 4GB-128MB. In this case, it's not a good idea
> to try to reserve the 128MB:
> 
> 1) the available low MMIO size is smaller than 128MB due to the vTPM
> MMIO range.
> 
> 2) even if we can reserve the 109.25 MB low MMIO range
> [0xf8000000-0xfed3ffff], we may not want to do that, just in case
> some assigned PCI device has 32-bit BARs.
> 
> So, IMO we need to keep the check:
>  +    if (cc_platform_has(CC_ATTR_GUEST_MEM_ENCRYPT))
>  +            return;
> 
> BTW, I think this may be a slightly better check here:
> +        if (hv_is_isolation_supported())
> +                return;

Agreed. Using hv_is_isolation_supported() seems better than
cc_platform_has() for this purpose.

> 
> A CVM on Hyper-V won't start without the command line
>     Disable-VMConsoleSupport -VMName $vmName

Unfortunately, with Hyper-V on my laptop, a VM with VBS Isolation
appears to *not* require Disable-VMConsoleSupport. I can start the VM,
and the VM is offered the VMBus synthvid, mouse, and keyboard devices.

But what's weird in this case is that vmbus_reserve_fb() sees lfb_base
and lfb_size as 0. Furthermore, as a test, I changed the "allowed_in_isolated"
flag to true for the synthvid device, and the Hyper-V DRM driver loads and
initializes. In doing so, the vmconnect.exe window is resized larger, as is
done in a normal VM. /proc/iomem shows that the DRM driver claimed
the expected MMIO range at the start of low MMIO space. I can run a user
space program that mmaps /dev/fb0 and writes pixels to the mmap'ed
memory, and that succeeds as it would in a normal VM, but the
vmconnect.exe window doesn't show anything. It appears that the Hyper-V
host has allocated memory for the frame buffer, but is ignoring anything
that is written to it.

Running Disable-VMConsoleSupport works as expected -- the synthvid,
mouse, and keyboard devices are no longer offered to the VM.

> 
> IMO this is very unlikely to change in the future, because the Hyper-V
> synthetic framebuffer VMBus device is not a trusted device for a CVM,
> so there is no reason for Hyper-V to offer such a device to CVMs; even
> if the host offers it, currently the guest hv_vmbus driver ignores it.
> 

In the case of VBS Isolation, if such a VM also had a PCI pass-thru device,
the core problem could recur. I.e., not reserving space for the framebuffer
could allow the PCI device to try to use MMIO space that Hyper-V has
set up for the frame buffer, causing the PCI device to fail. And that's a
worse problem than just having the graphics console not function. I
can't actually try the failure case because I don't have an assignable PCI
device on my laptop, but it seems likely based on the evidence that
Hyper-V is setting up a framebuffer device.

So instead of not reserving any MMIO space for the framebuffer on
CVMs, the code you already have limits the reservation to half of the
MMIO space below 4 GB. Won't that work to avoid exhausting the low
MMIO space in a CVM that's running on a local Hyper-V with only 128
MiB of low MMIO space?

> When we assign a physical PCI GPU device to a CVM, I'm not sure if there
> is any framebuffer from the GPU or not. Even if there is, that's a completely
> different scenario and not reserving some low MMIO for "framebuffer"
> is unrelated: I think hyperv_drm (or the deprecated hyperv_fb) is the only
> driver that sets the fb_overlap_ok parameter of vmbus_allocate_mmio().
> 
> > And at the moment, CVMs don't
> > support PCI devices,
> 
> This is not true: recently I created a "Standard DC16eds v6" TDX CVM
> on Azure, and I did see two NVMe local temporary disks in "nvme list"
>  (here TDISP is not used). In 2023, we added the commit
> 2c6ba4216844 ("PCI: hv: Enable PCI pass-thru devices in Confidential VMs")
> and I believe some users are running CVMs with GPUs.

Interesting! I worked on commit 2c6ba4216844, but had not noticed
that Azure now has offerings that make use of it. I'll take a look at
that TDX VM size.

Thanks,

Michael
