> From: Michael Kelley <[email protected]>
> Sent: Wednesday, April 8, 2026 6:54 AM
> > > ...
> > > A slightly different approach to the whole problem is to change
> > > vmbus_reserve_fb(). If it is unable to get a non-zero "start" value, then
> > > it should use the same assumption as above, and reserve a frame buffer
> > > area starting at the lowest address in low MMIO space. The reserved size
The framebuffer of Gen1 VMs always starts at 4GB - 128MB, even if
the low mmio base is 1GB.
> > > could be the max possible frame buffer size, which I think is 64 MiB (?).
> >
> > It can be 128MB with the highest resolution 7680*4320 (I hope the
> > highest resolution won't become bigger in the future).
>
> Indeed!
>
> >
> > > This still leaves low MMIO space for subsequent PCI devices, and allows
> > > 32-bit BARs to continue to work. This approach requires one further
> > > assumption, which is that the host, plus any movement by hyperv_drm,
> > > has kept the frame buffer at the low end of the low MMIO space. From
> > > what I've seen, that assumption is reality -- the frame buffer always
> > > starts at the beginning of low MMIO space.
> > >
> > > This approach could be taken one step further, where vmbus_reserve_fb()
> > > *always* reserves 64 MiB starting at the low end of low MMIO space,
> > > regardless of the value of "start". The messy code for getting "start"
> > > could be dropped entirely, and the dependency on CONFIG_SYSFB goes
> > > away. Or maybe still get the value of "start" and "size", and if non-zero
> > > just do a sanity check that they are within the fixed 64 MiB reserved
> > > area.
> > >
> > > Thoughts? To me tweaking vmbus_reserve_fb() is a more
> > > straightforward and explicit way to do the reserving, vs. modifying
> > > the requested range in the Hyper-V PCI driver.
> >
> > Agreed. Let me try to make a new patch for review.
Please refer to my testing results and my thoughts below:
On x86-64 lab hosts, I tested Gen1 and Gen2 VMs on the latest
Hyper-V build, and on Windows Server 2019
(Hyper-V: Hypervisor Build 10.0.17763.8510-8-0), and I saw the
same host behavior on both hosts:
1) The max required framebuffer size is determined by Set-VMVideo,
and is reported to the guest hyperv_drm driver via
hdev->channel->offermsg.offer.mmio_megabytes.
1.1) For Gen1 VMs, the framebuffer's base is reported via the
legacy PCI graphics device's BAR: the PCI BAR's base is
hardcoded to 4G-128MB, and the size is hardcoded to 64MB,
but the hyperv_drm driver can use a framebuffer size bigger
than 64MB when Set-VMVideo specifies a big framebuffer.
1.2) For Gen2 VMs, the framebuffer's base is reported via the
UEFI firmware, and the size is hardcoded to 3MB, but the
hyperv_drm driver can use a framebuffer size bigger than
64MB when Set-VMVideo specifies a big framebuffer.
2) The low mmio range is affected by the PowerShell command
"Set-VM -LowMemoryMappedIoSpace". Note: the command only accepts
a value between 128MB and 3.5GB.
3) For Gen2 VMs, the low mmio range is also affected by another
command "Set-VMVideo", and the framebuffer always starts at the
beginning of the low mmio range.
3.1) By default, both the low mmio range and the framebuffer
start at the fixed location 4G-128MB. If the max
framebuffer size is X MB bigger than 64MB, the
low_mmio_base decreases by 2*X MB.
3.2) With "Set-VM -LowMemoryMappedIoSpace 1GB", the
low_mmio_base is 3GB, the low_mmio_size=1GB. The
fb_mmio_base is also 3GB; if the max framebuffer size is
X MB bigger than 64MB, the low_mmio_base decreases by
2*X MB.
4) For Gen1 VMs, the framebuffer always starts at the fixed
location 4G-128MB.
4.1) By default, the low mmio range also starts at 4G-128MB,
and the size is 127.75 MB, i.e. if
hdev->channel->offermsg.offer.mmio_megabytes needs 128MB,
the guest hyperv_drm driver can't find enough available
mmio in the low mmio range, and has to use the high mmio
range.
4.2) With "Set-VM -LowMemoryMappedIoSpace 1GB", the
low_mmio_base is 3GB, the low_mmio_size=1023.75 MB. The
fb_mmio_base is still 4G-128MB, i.e. if hyperv_drm needs
128 MB of mmio, it still has to use the high mmio range.
5) Note: the mmio range [VTPM_BASE_ADDRESS, 4GB), whose size is
18.75MB, cannot be used by the framebuffer.
To recap, according to my testing, the pseudo code of the
host/guest firmware logic that determines the low mmio range and
the framebuffer range should be:
  max_fb_size = round_up_to_2MB(HorizontalResolution *
                                VerticalResolution * 4);

  if (is_gen1_VM) {
      low_mmio_base = 4GB - 128MB;
      fb_mmio_base = 4GB - 128MB;
      low_mmio_size = 128MB - 0.25MB;
  } else { /* Gen2 VMs */
      excess_fb_size = (max_fb_size > 64MB) ?
                       (max_fb_size - 64MB) : 0;
      low_mmio_base = 4GB - 128MB - excess_fb_size * 2;
      low_mmio_size = 4GB - low_mmio_base;
      fb_mmio_base = low_mmio_base;
  }

  if ("Set-VM -LowMemoryMappedIoSpace" sets a target_low_mmio_size) {
      target_low_mmio_size = round_up_to_2MB(target_low_mmio_size);
      if (4GB - target_low_mmio_size < low_mmio_base) {
          low_mmio_base = 4GB - target_low_mmio_size;
          if (is_gen1_VM) {
              low_mmio_size = target_low_mmio_size - 0.25MB;
              // fb_mmio_base is still 4GB - 128MB
          } else {
              low_mmio_size = target_low_mmio_size;
              fb_mmio_base = low_mmio_base;
          }
      }
  }
e.g. for a Gen2 VM with the below commands:
Set-VM -LowMemoryMappedIoSpace 128MB \
-VMName decui-u2204-gen2-fb
// i.e. the default setting on a lab host
Set-VMVideo -VMName decui-u2204-gen2-fb \
-HorizontalResolution 4834 \
-VerticalResolution 3622 \
-ResolutionType Single
we have:
max_fb_size = round_up_to_2MB(4834*3622*4) = 68 MB
excess_fb_size = 4MB
low_mmio_base = 4GB - 128MB - 4MB * 2
= 4GB - 136 MB = 0xf7800000
fb_mmio_base = low_mmio_base
low_mmio_size = 4GB - low_mmio_base = 136MB
In this case, we'd like to reserve low_mmio_size/2 = 68MB
(rather than a fixed 128MB) for the framebuffer mmio: we
can't reserve 128MB from the low mmio range anyway, because
the range [VTPM_BASE_ADDRESS, 4GB), whose size is 18.75MB, is
reserved for vTPM and other system devices like the I/O APIC,
so the available low mmio size is only
136MB - 18.75MB = 117.25MB.
If we further run
"Set-VM -LowMemoryMappedIoSpace 150MB \
-VMName decui-u2204-gen2-fb", we have
max_fb_size = round_up_to_2MB(4834*3622*4) = 68 MB
excess_fb_size = 4MB
low_mmio_base = 4GB - 128MB - 4MB * 2
= 4GB - 136 MB = 0xf7800000
but 4GB - target_low_mmio_size = 4GB - 150MB, which is
smaller than low_mmio_base, so low_mmio_base and
fb_mmio_base are both set to 4GB - 150MB = 0xf6a00000,
and low_mmio_size = 150MB. In this case, we'd like to
reserve low_mmio_size/2 = 75MB for the framebuffer mmio,
since we don't know the exact framebuffer size in
vmbus_reserve_fb().
With the same PowerShell commands, if the VM is a Gen1 VM,
low_mmio_base is 0xf6a00000 and low_mmio_size is 149.75MB,
but fb_mmio_base is still 4GB - 128MB = 0xf8000000.
Another example is: for a Gen2 VM with the below commands:
Set-VM -LowMemoryMappedIoSpace 1GB \
-VMName decui-u2204-gen2-fb
// i.e. the default setting on Azure. Let's ignore CVMs here.
Set-VMVideo -VMName decui-u2204-gen2-fb \
-HorizontalResolution 4834 \
-VerticalResolution 3622 \
-ResolutionType Single
we have:
max_fb_size = round_up_to_2MB(4834*3622*4) = 68 MB
excess_fb_size = 4MB
low_mmio_base = 4GB - 128MB - 4MB * 2
= 4GB - 136 MB = 0xf7800000
but 4GB - target_low_mmio_size = 4GB - 1GB, which is
smaller than low_mmio_base, so low_mmio_base and
fb_mmio_base are both set to 4GB - 1GB = 0xc0000000,
and low_mmio_size = 1GB.
In this case, we'd like to reserve
min(low_mmio_size/2, 128MB) = 128MB for the framebuffer
mmio, since the max possible framebuffer so far is 128MB.
************************************
On an ARM64 lab host, I also tested Gen2 VMs (there are no Gen1
VMs on ARM64):
By default:
low_mmio_base = 4GB - 512MB, i.e. 0xe0000000
low_mmio_size = 512MB
fb_mmio_base = low_mmio_base
The default framebuffer size is 3MB
(i.e. screen.lfb_size = 3MB), but for hyperv_drm,
mmio_megabytes = 8MB, which supports up to 1920 * 1080.
With the below commands:
Set-VM -LowMemoryMappedIoSpace 512MB \
-VMName decui-u2204-gen2-fb
// the command only accepts a value between 512MB and 3.5GB.
Set-VMVideo -VMName decui-u2204-gen2-fb \
-HorizontalResolution 4834 \
-VerticalResolution 3622 \
-ResolutionType Single
I thought we would have:
max_fb_size = round_up_to_2MB(4834*3622*4) = 68 MB
excess_fb_size = 4MB
low_mmio_base = 4GB - 512MB - 4MB * 2
= 4GB - 520MB
fb_mmio_base = low_mmio_base
low_mmio_size = 4GB - low_mmio_base = 520MB
Since 4GB - target_low_mmio_size = 4GB - 512MB, which is
not smaller than low_mmio_base, low_mmio_base and
fb_mmio_base would both stay at 4GB - 520MB, and
low_mmio_size would be 520MB.
However, the actual result is:
max_fb_size is indeed 68MB,
but fb_mmio_base = low_mmio_base = 4GB - 512MB, and
low_mmio_size = 512MB, i.e. the 'excess_fb_size' is not
considered on ARM64!
In this case, we'd like to reserve
min(low_mmio_size/2, 128MB) = 128MB for the framebuffer
mmio, since the max possible framebuffer so far is 128MB.
With the below command:
Set-VM -LowMemoryMappedIoSpace 3GB \
-VMName decui-u2204-gen2-fb
// i.e. the default setting on Azure. Unlike x86-64, an ARM64
// VM on Azure has 3GB of mmio below 4GB.
Set-VMVideo -VMName decui-u2204-gen2-fb \
-HorizontalResolution 4834 \
-VerticalResolution 3622 \
-ResolutionType Single
we have:
max_fb_size = round_up_to_2MB(4834*3622*4) = 68 MB
low_mmio_base = 4GB - 3GB = 1GB = 0x40000000
low_mmio_size = 3GB
fb_mmio_base = low_mmio_base = 1GB
In this case, we'd like to reserve
min(low_mmio_size/2, 128MB) = 128MB for the framebuffer
mmio, since the max possible framebuffer so far is 128MB.
************************************
To recap, I think the bottom line is:
a) For Gen2 VMs, we can safely reserve a mmio range starting at
sysfb_primary_display.screen.lfb_base with a size of
min(low_mmio_size/2, 128MB).
If sysfb_primary_display.screen.lfb_base is 0, i.e. in the case
of the kdump kernel, we use low_mmio_base instead.
This should fix the mmio conflict in the kdump kernel.
b) For Gen1 VMs, let's still only reserve a mmio range starting at
4GB - 128MB with a size of 64MB, because when we are in
vmbus_reserve_fb(), we still don't know the exact size of the
max_fb_size, and we don't want to reserve too much as we would
want to reserve some low mmio space for PCI devices with 32-bit
BARs (if any).
If the user runs Set-VMVideo and needs a framebuffer size
bigger than 64MB (IMO this is not a typical scenario in
practice), we have to use high mmio for hyperv_drm in the first
kernel, and the kdump kernel still suffers from the mmio
conflict between hyperv_drm and hv_pci. We encourage Gen1 VM
users to upgrade to Gen2 VMs to resolve the issue. Anyway, the
mmio conflict is inevitable for Gen1 VMs if the max required
framebuffer size is bigger than 108MB (Note:
128MB - 18.75MB = 109.25MB, and the required framebuffer size
is always rounded up to a multiple of 2MB).
c) CVMs don't have the framebuffer device, so we don't need to reserve
any mmio in vmbus_reserve_fb() for them.
Thanks for reading through this long email!
I'm making a patch right now...
Thanks,
Dexuan