Hi Erik and Daniel,
Earlier we were considering a different method of accessing the Linux kernel 
key ring services to get a key-handle, i.e. through a daemon process in 
OpenStack. Later we decided to do it through libvirt, as a daemon process is 
not feasible in OpenStack. Hence we had separate calls: one for getting the 
key-handle, and one for launching the VM with that key through libvirt.

However, we are open to the idea of getting the key-handle during startup of 
libvirt. Getting the key-handle in QEMU does not seem possible because of some 
limitations in the code (we will give you more information on this once we 
have accurate details from our QEMU team).

We want to understand the following things about virtualization.

1. What do you mean by "letting QEMU or libvirt allocate the key-handle during 
startup"? Does this mean getting the key-handle in either libvirt or QEMU 
before launching a VM? If so, see the next point.
2. If not in QEMU, can we make Linux syscalls to get the key-handle before 
issuing the launch command to QEMU during VM launch? In libvirt? For example:

        - Nova compute, during launch of a VM, sends the MKTME policy in the 
same XML file to libvirt.
        - Libvirt makes a kernel syscall to get the key-handle and then sends 
the QEMU command to launch the VM with the additional key-handle parameter.


Here is a brief description of how we plan to execute MKTME end to end in 
OpenStack, so that we are on the same page:
1. The cloud service provider (CSP) launches a VM instance using an MKTME 
policy in the image metadata:
        Mktme-policy {
                        Mktme-key-id   = "Mname1" (string),
                        Mktme-key-type = cpu or user (cpu = hardware-generated 
key, user = user-given key),
                        Mktme-key-url  = https://xx.xx.xx (URL to fetch the 
key)
                        }
2. After all checks, Nova compute will get a command to launch a VM if an 
mktme-policy is found. If mktme-key-type is user, it fetches the key from the 
URL specified in mktme-key-url; that key is called mktme-key. If 
mktme-key-type is cpu, no action is taken and mktme-key will be null.

3. Assuming key-handle allocation and VM launch are separate calls: Nova 
compute executes a new libvirt command to get the key-handle, given the 
arguments { Mktme-key-id, Mktme-key-type, Mktme-key (if a user-type key) }.
        a. Libvirt makes a Linux kernel key ring syscall, request_key(), with 
the above parameters. request_key() returns a key-handle if a key with 
mktme-key-id already exists, otherwise it creates a new key-handle and returns 
it. This is true for both user and cpu type keys. (This command could also be 
extended to/executed in QEMU.)
        b. Nova gets the key-handle in return and launches the VM instance by 
passing this additional key-handle argument to libvirt again.

4. Assuming key-handle allocation is done in libvirt (if I'm not mistaken, 
this is what you were proposing): Nova compute executes the VM launch libvirt 
call with the additional MKTME parameters { Mktme-key-id, Mktme-key-type, 
Mktme-key (if a user-type key) }.
        a. Libvirt, upon receiving this call, executes the request_key() 
kernel syscall with the above parameters, gets the key-handle, and then 
launches the VM using the usual QEMU command with the additional MKTME 
key-handle parameter. (Again, this whole process could also be executed in 
QEMU, but I guess we have some limitation there at this point.)
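To compare with how SEV is modelled today, here is a purely hypothetical 
sketch of what option 4 could look like in the libvirt domain XML. None of 
these element or attribute names exist anywhere yet; they are invented for 
illustration, modelled on the existing <launchSecurity type='sev'> element, 
with the values taken from the Mktme-policy example in step 1.

```xml
<!-- Hypothetical only: element and attribute names are invented for
     illustration and are not defined in libvirt.  Modelled on the
     existing SEV launchSecurity element. -->
<domain type='kvm'>
  <name>mktme-guest</name>
  ...
  <launchSecurity type='mktme'>
    <keyID>Mname1</keyID>              <!-- Mktme-key-id -->
    <keyType>user</keyType>            <!-- cpu or user -->
    <keyURL>https://xx.xx.xx</keyURL>  <!-- only for user-type keys -->
  </launchSecurity>
</domain>
```

With a shape like this, Nova would only pass the policy through; libvirt would 
make the request_key() call internally during launch, and no key-handle would 
ever need to travel back to Nova, which avoids the key-leakage window Daniel 
mentioned.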

AMD SEV and MKTME are not quite similar, but we are open to executing the 
key-handle get/set operations in libvirt at this point. We see a lot of pros 
in doing things the way you were proposing: no window for key leakage, fewer 
calls to libvirt, and less Nova code and modification.

We really appreciate your suggestions and advice; we hope the above 
explanation helps you understand our design. We are open to any changes that 
give us a good design in libvirt.

Here is the link to the MKTME spec: 
https://software.intel.com/sites/default/files/managed/a5/16/Multi-Key-Total-Memory-Encryption-Spec.pdf

Thanks again
Karim.
-----Original Message-----
From: Erik Skultety [mailto:eskul...@redhat.com] 
Sent: Wednesday, March 6, 2019 12:08 AM
To: Daniel P. Berrangé <berra...@redhat.com>
Cc: Mohammed, Karimullah <karimullah.moham...@intel.com>; 
libvir-list@redhat.com; Carvalho, Larkins L <larkins.l.carva...@intel.com>
Subject: Re: [libvirt] New Feature: Intel MKTME Support

On Tue, Mar 05, 2019 at 05:35:09PM +0000, Daniel P. Berrangé wrote:
> On Tue, Mar 05, 2019 at 05:23:04PM +0000, Mohammed, Karimullah wrote:
> > Hi Daniel,
> > MKTME supports encryption of memory(NVRAM) for Virtual 
> > Machines (hardware-based encryption). This feature uses Linux kernel key 
> > ring services, i.e.
> > Operations like, allocation and clearing of secret/keys. These keys 
> > are used in encryption of memory in Virtual machines. So MKTME 
> > provided encryption of entire RAM of a VM, allocated to it, thereby 
> > supporting VM isolation feature.
> >
> > So to implement this functionality in openstack
> >
> > 1. Nova executes host capability command, to identify if the hardware
> >     support for MKTME (openstack xml host_capabilities command request
> >     -->> libvirt ->> QEMU)-- qemu monitoring commands
> > 2. Once the hardware is identified and if user configures mktme policy
> >    to launch a VM in openstack,  Nova
> >     a. Sends a new xml command request to libvirt, then libvirt makes
> >          a syscall to Linux kernel key ring services to get/retrieve a
> >          key/key-handle for this VM ( we are not sure at this point
> >          whether to make this syscall directly in libvirt or through QEMU)


>
> What will openstack do with the key / key-handle  it gets back from 
> libvirt ?

Same question here.

>
> Why does it need to allocate one before starting the VMs, as opposed 
> to letting QEMU or libvirt allocate it during startup ?
>
> By allocating it separately from the VM start request it opens the 
> possibility for leaking keys, if VM startup fails and the mgmt app 
> doesn't release the now unused key.

I would expect this key/handle to work similarly to how it does with SEV: we 
(libvirt) treat everything as a blob, since the session key is encrypted by a 
transport key shipped along with an integrity key, both derived from a master 
secret that both parties know.

My question is whether you have a draft of this MKTME spec that we could have a 
look at to give us more technical insight and therefore help us to make better 
design decisions.

>
> >     b. Once the key is retrieved , Nova compute executes a VM launch
> >          xml command request to libvirt with a new argument called
> >          mktme- keyhandle , which will send a command request to QEMU
> >          to launch the VM( We are in process of supporting  this
> >          functionality in  QEMU  for VM launch operation, with new
> >          mktme-key argument)

As Dan asked above, this really depends on why openstack needs to interact 
with the key and whether the key handle can be computed during the launch 
phase. For example, in SEV's case we pass the VM owner's certificate to the 
SEV firmware as part of the VM configuration; the handshake and the 
measurement verification both happen after we have initialized QEMU, and if 
necessary (for measurement purposes) we start the VM in a paused state so that 
commands can be passed to QEMU, which handles all the interactions with SEV in 
the kernel instead of us.

Thanks,
Erik

--
libvir-list mailing list
libvir-list@redhat.com
https://www.redhat.com/mailman/listinfo/libvir-list