> From: Michael Kelley <[email protected]>
> Sent: Thursday, April 23, 2026 10:40 AM
> > ...
> > Another example is: for a Gen2 VM with the below commands:
> >    Set-VM -LowMemoryMappedIoSpace 1GB \
> >           -VMName decui-u2204-gen2-fb
> >    // i.e. the default setting on Azure. Let's ignore CVMs here.

Sorry for the incorrect statement: this is not the default setting
on Azure. The default for regular VMs on Azure should be
"-LowMemoryMappedIoSpace 3GB".  Not sure how I made the
incorrect statement -- I guess I might have confused my local VM
with my Azure VM, and at some moment, I might have mistaken
the meaning of the "-LowMemoryMappedIoSpace" parameter:
for that local VM, I might somehow incorrectly though that the
param means low_mmio_base rather than low_mmio_size.

> FWIW, I'm seeing that in Gen2 VMs in Azure, the low_mmio_size
> is 3 GiB. I'm looking at a D16ds_v5, and a D16lds_v6. The v5 VM
> is newly created, while the v6 has been around for a few months.

This is also my observation, after I double-checked my Azure VM.

> In a CVM, the low_mmio_size should be 1 GiB. This overall example
> is still correct -- it's just the comment that I have doubts about. Or
> maybe you are looking at a different VM size that has a different
> default?

For CVMs, yes, the low_mmio_size is 1GB.

> 
> Some years back, I had gotten into a discussion with Azure about
> this size because the swiotlb memory wants to be allocated below
> the 4 GiB line, and reserving 3 GiB for low mmio limited the size
> of the swiotlb. CVMs were changed to have only 1 GiB for low
> mmio because they need a larger swiotlb.

Right, I also remember the story. :-)

> > With the below command:
> >    Set-VM -LowMemoryMappedIoSpace 3GB \
> >           -VMName decui-u2204-gen2-fb
> >    // i.e. the default setting on Azure. Unlike x86-64, an ARM64
> >    // VM on Azure has 3GB of mmio below 4GB.
> 
> See my previous comment on the same topic. I think arm64
> and x86/x64 are the same.

Agreed.

> Question about Gen 1 VMs: If the Linux frame buffer driver moves
> the frame buffer somewhere other than the default location, and
> then the VM does a kexec/kdump, what does the legacy PCI graphic
> device BAR report as the frame buffer location? Does it *always*
> report 4G-128MB, or does it report the new location? I can run

It always reports 4G-128MB.
BTW, I suspect a Gen2 VM may have the same issue: currently we only
reserve 8MB below 4GB, and if hyperv_drm uses high MMIO, the UEFI
firmware would likely still report the original low MMIO framebuffer
base/size to the kdump kernel, but there is no easy way to verify
this for Gen2 VMs...

> an experiment to find out, but maybe you've already done so and
> not reported that detail here.
> 
> Michael

I have a Gen1 Ubuntu 22.04 VM, and I ran the below commands:
Set-VM -LowMemoryMappedIoSpace 128MB -VMName decui-u2204-gen1-fb
Set-VMVideo -VMName decui-u2204-gen1-fb -HorizontalResolution 7680 `
            -VerticalResolution 4320 -ResolutionType Single

When the VM boots up, we reserve 64MB at 4G-128MB:
[   11.492075] hv_vmbus: hv_mmio=[mem 0xf8000000-0xfed3ffff],[mem 
0xfe0000000-0xfffffffff] fb=[mem 0xf8000000-0xfbffffff]

Since the required MMIO size in the hyperv_drm driver is 128MB:
[   28.631923] hyperv_connect_vsp: hyperv_drm: mmio_megabytes=128 MB
the driver has to allocate MMIO from the high MMIO space, because
we only reserve 64MB below 4GB, and the available low_mmio_size is
smaller than 128MB due to the vTPM MMIO range:

# cat /proc/iomem
00000000-00000fff : Reserved
00001000-0009fbff : System RAM
0009fc00-0009ffff : Reserved
000a0000-000bffff : PCI Bus 0000:00
000c0000-000c7fff : Video ROM
000e0000-000fffff : Reserved
  000f0000-000fffff : System ROM
00100000-f7feffff : System RAM
  d7000000-f6ffffff : Crash kernel
f7ff0000-f7ffefff : ACPI Tables
f7fff000-f7ffffff : ACPI Non-volatile Storage
f8000000-fffbffff : PCI Bus 0000:00
  f8000000-fbffffff : 0000:00:08.0
  fec00000-fec003ff : IOAPIC 0
  fee00000-fee00fff : PNP0C02:01
fffc0000-ffffffff : PNP0C01:00
100000000-507ffffff : System RAM
  281600000-28295449f : Kernel code
  282a00000-283746fff : Kernel rodata
  283800000-283c5287f : Kernel data
  28411a000-2845fffff : Kernel bss
fe0000000-fffffffff : PCI Bus 0000:00
  fe0000000-fe7ffffff : 5620e0c7-8062-4dce-aeb7-520c7ef76171

However, when the kdump kernel starts to run and I print
pci_resource_start(pdev, 0) and pci_resource_len(pdev, 0)
from vmbus_reserve_fb(), I still see 4G-128MB:
[   12.506159] Gen1 VM: start=0xf8000000, size=0x4000000

In this case, we can't really fix the MMIO conflict: e.g. if both
hv_pci and hyperv_drm are built as modules, the order of loading
them can be nondeterministic. If the order in the first kernel is
different from the order in the kdump kernel, we run into trouble.

If the order is deterministic (e.g. hv_pci is built-in, and
hyperv_drm is built as a module), we should be good, since both
allocate MMIO from the high MMIO range in a deterministic way.

Thanks,
Dexuan
