On 09/01/16 09:44, Ard Biesheuvel wrote:
> On 31 August 2016 at 21:43, Jordan Justen <jordan.l.jus...@intel.com> wrote:
>> On 2016-08-19 07:25:54, Laszlo Ersek wrote:
>>> On 08/19/16 15:06, Ard Biesheuvel wrote:
>>>> On 19 August 2016 at 14:49, Laszlo Ersek <ler...@redhat.com> wrote:
>>>>> This series solves
>>>>> <https://bugzilla.tianocore.org/show_bug.cgi?id=66>. In particular,
>>>>> it gives AARCH64 guests running on KVM a clean, uncorrupted graphical
>>>>> console.
>>>>>
>>>>
>>>> Impressive! I suppose this means no direct frame buffer access for the
>>>> OS using the GOP?
>>>
>>> That's correct.
>>>
>>>> That is fine with me, btw, after finding out that
>>>> VGA is really the only problematic QEMU/KVM device (unlike the
>>>> reported USB issues, which were solved by making the PCI RC
>>>> dma-coherent in the DT),
>>>
>>> Good to know, thanks!
>>>
>>>> I think this approach is the best solution,
>>>> since OS accessing the GOP is a hack anyway
>>
>> How is it a hack? It seems to be pretty standard for graphics devices
>> to provide a simple framebuffer mode. True, it is not required by the
>> GOP protocol, but many devices and GOP drivers enable it. Thus, it
>> seems reasonable for a UEFI OS to take advantage of it while loading
>> the native driver.
>>
> 
> Because ExitBootServices() tears down the whole driver stack, protocol
> database, etc but leaves a single struct in place which describes a
> framebuffer whose methods are now inoperable but which can be driven
> in the mode that the firmware happened to leave it in. Furthermore,
> there is no context anymore that describes which device owns the
> framebuffer, and so it is not generally possible to decide if it is
> safe to reconfigure any part of the PCI layer without interfering with
> the framebuffer mapping.
> 
> So yes, it is a hack. A useful one, but still a hack.

I can't disagree with this argument either! ;)

The real deal-breaker for me is naturally the fact that the framebuffer
address inherited from the GOP almost universally points into some PCI
device's MMIO BAR. If the guest runtime OS decides to re-enumerate PCI
resources, the LFB address from the GOP will point into outer space.

> 
>> If an OS can't load or find the native driver, the framebuffer also
>> provides a way to communicate with the user.
>>
> 
> Of course.
> 
>>>> (and breaks with the PCI
>>>> reconfiguration that occurs under ARM/Linux, even in the ACPI case,
>>>> which I expected would leave the firmware PCI setup alone. /me makes
>>>> mental note to revert the 'pci-probe-only' patch)
>>>
>>> The expectation is that the AARCH64 installer media of all guest OSes
>>> should come with a native virtio-gpu-pci driver included.
>>
>> As mentioned above, there are potential cases where the OS may want
>> to update the screen before loading the native drivers, or if
>> loading the native driver fails.
>>
> 
> Yes, but this is fundamentally problematic on ARM under
> virtualization. Emulated framebuffers are backed by host memory, which
> is mapped cacheable. Typical framebuffer mapping guest code uses
> uncached or write-combining mappings, which are incoherent with the
> host mapping, which means the host does not get to see what the guest
> puts into the framebuffer without major surgery. This means the
> standard VGA QEMU device is unusable on ARM with KVM acceleration.

Thanks for spelling out the details, and apologies for re-describing
what you had already covered -- superfluously *and* less precisely :)

>> Regarding VirtIo GPU: Shouldn't we wait until it makes it into the
>> actual specs?
>>
> 
> As I explained above, virtio-gpu support without the framebuffer is
> indispensable for supporting graphics under QEMU/KVM. So I would
> rather have this in sooner rather than later, either under OvmfPkg or
> ArmVirtPkg.

I'd prefer to keep it under OvmfPkg, because it does work well for
x86_64 KVM guests too (if you specify "-device virtio-gpu-pci").

However, I do agree that keeping the driver under ArmVirtPkg would make
*perfect* sense:

- for x86_64 KVM guests, we recommend QXL or virtio-vga anyway, for
  better compatibility with Windows 8 / Windows 10 (for Windows 7, QXL
  or stdvga), which are all bound by QemuVideoDxe,

- while for aarch64 KVM guests, virtio-gpu-pci (bound by VirtioGpuDxe)
  is the only choice.

In practice, the separation between QemuVideoDxe and VirtioGpuDxe is a
very clear one: do you need (and can have) a linear framebuffer, based
on your guest architecture?
- If so, use QemuVideoDxe, with Cirrus / stdvga / QXL / virtio-vga (from
  these, pick depending on other factors, like guest OS driver support,
  S3 support, etc).
- Otherwise, use VirtioGpuDxe, with virtio-gpu-pci.

So, an argument can certainly be made that VirtioGpuDxe be included in
the ArmVirtQemu DSC / FDF files *only*, and that VirtioGpuDxe actually
*replace* QemuVideoDxe in the ArmVirtQemu DSC / FDF files. (The current
series doesn't remove QemuVideoDxe from ArmVirtQemu, because with TCG
*emulation*, the QXL / stdvga framebuffer happens to function. But, for
"production use", QemuVideoDxe is certainly useless in ArmVirtQemu.)

> Also, I do believe that it is generally useful to make
> implementations such as this one widely available especially when the
> spec is not finalized yet, so the additional exposure may help in
> validation before it is set in stone.

I agree absolutely. I can honestly claim that I made the same argument
in my previous email independently, not having read yours yet.

>> Is there any chance to update the spec to provide a simple (directly
>> scanned out) framebuffer mode?
>>
> 
> Lack of a framebuffer is a deliberate choice. If you need a
> framebuffer, you can use the virtio-vga flavor, which exposes a VGA
> compatible framebuffer + registers in addition to the standard virtio
> GPU.

Glad we agree on this one too! :)

Thanks!
Laszlo
_______________________________________________
edk2-devel mailing list
edk2-devel@lists.01.org
https://lists.01.org/mailman/listinfo/edk2-devel
