Gerd Hoffmann <kra...@redhat.com> writes:

> On 05/29/13 01:53, Kevin O'Connor wrote:
>> On Thu, May 23, 2013 at 03:41:32PM +0300, Michael S. Tsirkin wrote:
>>> Juan is not available now, and Anthony asked for
>>> agenda to be sent early.
>>> So here comes:
>>>
>>> Agenda for the meeting Tue, May 28:
>>>
>>> - Generating acpi tables
>>
>> I didn't see any meeting notes, but I thought it would be worthwhile
>> to summarize the call. This is from memory, so correct me if I got
>> anything wrong.
>>
>> Anthony believes that the generation of ACPI tables is the task of the
>> firmware. Reasons cited include security implications of running more
>> code in qemu vs the guest context,
>
> I fail to see the security issues here. It's not like the acpi table
> generation code operates on untrusted input from the guest ...
But possibly untrusted input from a malicious user. You can imagine
something like an IaaS provider that lets a user input arbitrary values
for memory, number of NICs, etc.

It's a bit of a stretch as an example, I agree, but I think the general
principle is sound: we should push as much work as possible to the
least privileged part of the stack. In this case, firmware has far
fewer privileges than QEMU.

>> complexities in running iasl on
>> big-endian machines,
>
> We already have a bunch of prebuilt blobs in the qemu repo for similar
> reasons; we can do that with iasl output too.
>
>> possible complexity of having to regenerate
>> tables on a vm reboot,
>
> Why should tables be regenerated at reboot? I remember hotplug being
> mentioned in the call. Hmm? Which hotplugged component needs acpi
> table updates to work properly? And what is the point of hotplugging if
> you must reboot the guest anyway to get the acpi updates needed?
> Details please.

See my response to Michael.

> Also mentioned in the call: "architectural reasons", which I understand
> as "real hardware works that way". Correct. But qemu's virtual
> hardware is configurable in more ways than real hardware, so we have
> different needs. For example: pci slots can or can't be hotpluggable.
> On real hardware this is fixed. IIRC this is one of the reasons why we
> have to patch acpi tables.

It's not really fixed. Hardware supports PCI expansion chassis.
Multi-node NUMA systems also affect the ACPI tables.

>> overall sloppiness of doing it in QEMU.
>
> /me gets the feeling that this is the *main* reason, given that the
> other ones don't look very convincing to me.
>
>> Raised
>> that QOM interface should be sufficient.
>
> Agree on this one. Ideally the acpi table generation code should be
> able to gather all information it needs from the qom tree, so it can be
> a standalone C file instead of being scattered over all qemu.

Ack. So my basic argument is: why not expose the QOM interfaces to
firmware and move the generation code there? It seems like it would be
more or less a copy/paste once we had a proper implementation in QEMU.

>> There were discussions on potentially introducing a middle component
>> to generate the tables. Coreboot was raised as a possibility, and
>> David thought it would be okay to use coreboot for both OVMF and
>> SeaBIOS.
>
> Certainly an option, but that is a long-term project.

Out of curiosity, are there other benefits to using coreboot as a core
firmware in QEMU? Is there a payload we would ever plausibly use
besides OVMF and SeaBIOS?

Regards,

Anthony Liguori

_______________________________________________
SeaBIOS mailing list
SeaBIOS@seabios.org
http://www.seabios.org/mailman/listinfo/seabios
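To make Gerd's "gather everything from the qom tree" idea above a bit more
concrete, here is a minimal sketch of what such a standalone gatherer could
look like. The QOM calls used (object_resolve_path(), object_child_foreach(),
object_dynamic_cast(), object_property_get_bool(), the "realized" property
and the "pci-device" type name) are QEMU's existing API; the AcpiBuildInfo
struct and gather_acpi_build_info() are invented names for illustration, and
a real generator would of course need to collect far more than a device
count.

/*
 * Illustrative sketch only: collect ACPI-relevant facts by walking the
 * QOM composition tree instead of reaching into device code scattered
 * across QEMU. AcpiBuildInfo and gather_acpi_build_info() are made-up
 * names; the QOM calls are the real API.
 */
#include <string.h>
#include "qom/object.h"

typedef struct AcpiBuildInfo {
    int pci_device_count;   /* number of realized PCI devices found */
} AcpiBuildInfo;

/* object_child_foreach() callback: count realized PCI devices and
 * recurse into each object's own QOM children. */
static int count_pci_devices(Object *obj, void *opaque)
{
    AcpiBuildInfo *info = opaque;

    if (object_dynamic_cast(obj, "pci-device") &&
        object_property_get_bool(obj, "realized", NULL)) {
        info->pci_device_count++;
    }
    object_child_foreach(obj, count_pci_devices, opaque);
    return 0;   /* 0 = keep iterating over siblings */
}

/* Hypothetical entry point a standalone table builder could call. */
static void gather_acpi_build_info(AcpiBuildInfo *info)
{
    Object *machine = object_resolve_path("/machine", NULL);

    memset(info, 0, sizeof(*info));
    if (machine) {
        object_child_foreach(machine, count_pci_devices, info);
    }
}

The same kind of walk is what would have to be exported to firmware (or to
a middle component such as coreboot) if the generation code moved out of
QEMU, which is the trade-off the thread is debating.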