On 09/20/19 11:28, Laszlo Ersek wrote:
> On 09/20/19 10:28, Igor Mammedov wrote:
>> On Thu, 19 Sep 2019 19:02:07 +0200
>> "Laszlo Ersek" <ler...@redhat.com> wrote:
>>
>>> Hi Igor,
>>>
>>> (+Brijesh)
>>>
>>> long-ish pondering ahead, with a question at the end.
>> [...]
>>
>>> Finally: can you please remind me why we lock down 128KB (32 pages) at
>>> 0x3_0000, and not just half of that? What do we need the range at
>>> [0x4_0000..0x4_FFFF] for?
>>
>> If I recall correctly, the CPU consumes 64K for the save/restore area.
>> The remaining 64K are temporary RAM for use in the SMI relocation
>> handler; if it's possible to get away without it, then we can drop it
>> and lock only the 64K required for the CPU state. It won't help with
>> the SEV conflict though, as that is in the first 64K.
>
> OK. Let's go with 128KB for now. Shrinking the area is always easier
> than growing it.
>
>> On the QEMU side, we can drop the black-hole approach and allocate a
>> dedicated SMRAM region, which explicitly gets mapped into the RAM
>> address space and, after SMI handler initialization, gets unmapped
>> (locked). That way SMRAM would be accessible only from SMM context,
>> and RAM at 0x30000 could be used as normal while SMRAM is unmapped.
>
> I prefer the black-hole approach, introduced in your current patch
> series, if it can work. Way less opportunity for confusion.
>
> I've started work on the counterpart OVMF patches; I'll report back.
I've got good results. For this (1/2) QEMU patch:

Tested-by: Laszlo Ersek <ler...@redhat.com>

I tested the following scenarios. In every case, I verified the OVMF
log, and also the "info mtree" monitor command's result (i.e. whether
"smbase-blackhole" / "smbase-window" were disabled or enabled). Mostly,
I diffed these text files between the test scenarios (looking for
desired / undesired differences). In the Linux guests, I checked /
compared the dmesg too (wrt. the UEFI memmap).

- unpatched OVMF (regression test), Fedora guest, normal boot and S3

- patched OVMF, but feature disabled with "-global mch.smbase-smram=off"
  (another regression test), Fedora guest, normal boot and S3

- patched OVMF, feature enabled, Fedora and various Windows guests
  (win7, win8, win10 families, client/server), normal boot and S3

- a subset of the above guests, with S3 disabled
  (-global ICH9-LPC.disable_s3=1), and obviously S3 resume not tested

SEV: used a 5.2-ish Linux guest, with S3 disabled (no support under SEV
for that now):

- unpatched OVMF (regression test), normal boot

- patched OVMF, but feature disabled on the QEMU cmdline (another
  regression test), normal boot

- patched OVMF, feature enabled, normal boot

I plan to post the OVMF patches tomorrow, for discussion. (It's likely
too early to push these QEMU / edk2 patches right now -- we don't know
yet if this path will take us to the destination. For now, it certainly
looks great.)

Thanks
Laszlo
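For reference, the "feature disabled" regression-test scenario above could be reproduced with an invocation along these lines. This is a sketch: the machine type, firmware paths, and other options besides the two -global properties quoted in the message are my assumptions, not taken from the test setup.

```shell
# Hypothetical command line; only the -global properties are from the
# message above. Firmware paths and machine type are placeholders.
qemu-system-x86_64 \
  -machine q35,smm=on \
  -global mch.smbase-smram=off \
  -global ICH9-LPC.disable_s3=1 \
  -drive if=pflash,format=raw,readonly=on,file=OVMF_CODE.fd \
  -drive if=pflash,format=raw,file=OVMF_VARS.fd \
  -monitor stdio

# At the (qemu) monitor prompt, dump the memory tree and check whether
# the "smbase-blackhole" / "smbase-window" regions are listed as
# disabled or enabled:
#   (qemu) info mtree
```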