On 1/31/22 9:26 AM, Daniel P. Berrangé wrote:

> 
> Ok, so the usage scenario is that the platform owner is deciding
> which OVMF build is in use, not the guest owner. The guest owner just
> knows that it is an OVMF build from a set of builds published by the
> platform owner. Good enough if you trust the cloud owner in general,
> but want confidence that their compute host isn't compromised. Would
> need an independently reproducible build if you don't trust the
> cloud owner at all.
> 
> 
> Assuming we've got 5 possible OVMF builds, currently we would need
> to calculate 5 HMACs over the input data.
> 
> With this extra piece of info, we only need to calculate 1 HMAC.
> 
> So this is enabling a performance optimization, that could indeed
> be used in a production deployment.  The HMAC ops are not exactly
> performance intensive though until we get to the point of choosing
> between a huge number of possible OVMFs.
> 
> If we can't get the VMSA info included, then the guest owner still
> needs a local copy of every possible OVMF binary that is valid. IOW
> this digest is essentially no more than a filename to identify which
> OVMF binary to calc the HMAC over.
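For concreteness, the check being optimized here can be sketched as
below. The measurement data layout follows the SEV API's LAUNCH_MEASURE
description; the helper names and sample values are made up.

```python
import hashlib
import hmac

def expected_measurement(tik, api_major, api_minor, build, policy,
                         launch_digest, mnonce):
    """Recompute the SEV LAUNCH_MEASURE value for one candidate OVMF build.

    Per the SEV API, the measurement is HMAC-SHA-256, keyed with the
    Transport Integrity Key (TIK), over 0x04 || API_MAJOR || API_MINOR ||
    BUILD || GCTX.POLICY || GCTX.LD || MNONCE.
    """
    data = (bytes([0x04, api_major, api_minor, build])
            + policy.to_bytes(4, "little")   # 4-byte guest policy
            + launch_digest                  # GCTX.LD for this OVMF build
            + mnonce)                        # nonce from the PSP
    return hmac.new(tik, data, hashlib.sha256).digest()

def match_build(measurement, candidates, tik, api, mnonce):
    """Without a digest hint, try the HMAC for every published build."""
    api_major, api_minor, build, policy = api
    for name, ld in candidates.items():  # ld = launch digest per build
        if hmac.compare_digest(
                measurement,
                expected_measurement(tik, api_major, api_minor, build,
                                     policy, ld, mnonce)):
            return name
    return None
```

With the extra digest field, `match_build` collapses to a single
`expected_measurement` call for the one build the digest names.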

For us the guest owner isn't monolithic. The guest owner will rely on a
Key Broker Service (KBS) running in some trusted domain to process
individual measurements and secret requests. The guest owner must
provision the KBS at the start of the day. I mention this because while
the guest owner will definitely need to build their own version of OVMF
and the components accounted for in the hashes table, ideally the
binaries won't need to be uploaded to the KBS.

If we forget about SEV-ES, the flow is pretty easy. At the start of the
day the guest owner builds the firmware, kernel, etc that they expect
the VM to be started with. They use a script to hash all the components,
formulate the hashes table, and ultimately produce the launch digest.
The guest owner uploads the launch digest to the KBS. If multiple
configurations are supported, they upload multiple digests.
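Schematically, that start-of-day script boils down to something like the
following. This is a sketch only: the real hashes table is a
GUID-structured blob and the real launch digest covers exactly the pages
measured by LAUNCH_UPDATE_DATA, but the shape of the computation is the
same.

```python
import hashlib

def build_hashes_table(kernel, initrd, cmdline):
    """Schematic stand-in for the kernel-hashes table: conceptually it
    carries the SHA-256 of each boot component (the real format is a
    GUID-structured blob)."""
    return b"".join(hashlib.sha256(blob).digest()
                    for blob in (kernel, initrd, cmdline))

def launch_digest(firmware, hashes_table):
    """Schematic GCTX.LD for plain SEV: SHA-256 over the measured
    content, i.e. the firmware image with the hashes table filled in."""
    return hashlib.sha256(firmware + hashes_table).digest()

def digests_for_configs(firmware, configs):
    """One launch digest per supported (kernel, initrd, cmdline) config,
    all of which get uploaded to the KBS."""
    return [launch_digest(firmware, build_hashes_table(*c))
            for c in configs]
```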

We could do something very similar for SEV-ES, but there are some drawbacks.
> 
> IOW, I think there's only two scenarios that make sense
> 
> 1. The combined launch digest over firmware, kernel hashes
>    and VMSA state.
> 
> 2. Individual hashes for each of firmware, kernel hashes table and
>    VMSA state
> 
> I think we should assume that anyone who has access to SEV-ES hardware
> is likely to run in SEV-ES policy, not SEV policy. So without VMSA
> state I think that (1) is of quite limited usefulness. (2) would
> be nicer to allow optimization of picking which OVMF blob to use,
>    as you wouldn't need to figure out the cross-product of every valid
> OVMF and every valid kernel hashes table - the latter probably isn't
> even a finite bounded set.
> 
I see something very similar. We could support -ES by doing the same
thing described above for SEV. The catch is that the guest owner would
have to calculate a firmware digest for every possible VMSA. That said,
if we assume that the VMSAs aren't going to change much, this just comes
down to a different VMSA and launch digest for each allowed cpu count.
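Sketched out, and again only schematically (the real -ES digest covers
the firmware pages plus one VMSA image per vCPU via LAUNCH_UPDATE_VMSA),
the per-cpu-count generation would look like:

```python
import hashlib

def es_launch_digest(firmware, vmsa_blob, cpu_count):
    """Schematic SEV-ES launch digest: firmware content plus one
    (identical, in this sketch) VMSA image per vCPU."""
    h = hashlib.sha256(firmware)
    for _ in range(cpu_count):
        h.update(vmsa_blob)
    return h.digest()

def digests_per_cpu_count(firmware, vmsa_blob, allowed_counts):
    """One launch digest for each cpu count the guest owner allows."""
    return {n: es_launch_digest(firmware, vmsa_blob, n)
            for n in allowed_counts}
```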

Generating these digests could probably be automatically handled by the
same script that generates the secret table and calculates the launch
digest. No matter what we do, the guest owner will need to come up with
an expected VMSA. We would probably have to extend QEMU to include the
VMSA in the debug hash field for this approach to work.

>> More generally: within space/network constraints, give the Guest Owner
>> all the information it needs to compute the launch measurement.  There's
>> a problem with OVMF there (we don't want to send the whole 4 MB over the
>> QMP response, but given its hash we can't "extend" it to include the
>> kernel-hashes struct).
> 
> Yeah its a shame we aren't just measuring the digest of each piece
> of information in GCTX.LD, instead of measuring the raw information
> directly.
Exactly. This is annoying because it complicates the other option, which
is to have the guest owner upload hashes of each individual component to
the KBS, which would generate the final hash when checking a launch
measurement. Unfortunately the launch digest is computed over the
firmware binary itself, not over a hash of the firmware, so the KBS
cannot reconstruct it from a hash alone. This means that the guest
owner would have to upload the whole firmware binary to the KBS. If
we took this approach the guest owner wouldn't have to generate a bunch
of launch digests, although they might need to provide the KBS with
complex instructions about which components can be used together.
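A rough sketch of that KBS-side check, with the "which components go
together" instructions reduced to an allow-list (all names and the
digest computation here are hypothetical, matching the schematic digest
above rather than the real page-by-page measurement):

```python
import hashlib

def kbs_verify(reported_ld, firmware_blobs, component_hashes, allowed,
               fw_name, kernel_name):
    """Hypothetical KBS check: the guest owner has uploaded full
    firmware binaries (hashes of them would not suffice, since GCTX.LD
    is computed over the raw firmware), per-component hashes, and an
    allow-list of valid (firmware, kernel) combinations."""
    if (fw_name, kernel_name) not in allowed:
        return False
    expected = hashlib.sha256(firmware_blobs[fw_name]
                              + component_hashes[kernel_name]).digest()
    return expected == reported_ld
```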

Anyway, I think either of these options is fine. The first is more work
for the guest owner at configuration time and the second is more work
for the KBS at runtime. Like I said, the guest owner will need to know
an expected VMSA either way so maybe we should think of that as a
separate issue.
> 
> 
> I wonder if we're thinking of this at the wrong level though. Does
> it actually need to be QEMU providing this info to the guest owner ?
> 
The CSP could generate the hash that it expects to boot without the help
of QEMU, although it might be more complicated for SEV-ES. Even so, it
would be convenient if the CSP could ask QEMU/libvirt for the expected
hashes via the same interface that it gets the measurement. The CSP will
have to report the real launch measurement to the KBS. It would be handy
if the debug measurement were available at the same time with no extra
bookkeeping.

-Tobin
