Hi Daniel,
After adding code for this feature, running "make check" fails two test cases, as
expected (domaincapstest and qemucapabilitiestest).
To test with the right data, we figured out that we need to add new .xml files with
MKTME QEMU info in the qemucapabilitiesdata/ and domaincapsschemadata/ directories,
plus a .replies file.
We have an internal QEMU binary with the MKTME queries enabled, and we figured out
how to generate a new .replies file from it using the qemucapsprobe executable.
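For reference, the invocation we used looks roughly like this (the QEMU path is just
an example for our setup, and the target file name is illustrative; qemucapsprobe is
built in the tests/ directory and writes the QMP replies to stdout):

```shell
# Probe the internal QEMU binary and capture its QMP replies into a
# .replies file under the test data directory (paths are examples).
./tests/qemucapsprobe /path/to/internal/qemu-system-x86_64 \
    > tests/qemucapabilitiesdata/caps_4.0.0.x86_64.replies
```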
Could you please help us understand the QEMU and caps XML files in the libvirt test
directories?
We have the following questions.
1. How do we generate the .xml files with the new MKTME data in the
qemucapabilitiesdata/ and domaincapsschemadata/ directories? Or, for the time being,
can we add the MKTME info to existing .xml files, say caps_4.0.0.x86-64.xml and
qemu_4.0.0.x86-64.xml?
2. Do we have to pass all these test cases ("make check") before posting a patch
just for review? We want to get preliminary review feedback from the libvirt
community on our feature.
3. We are planning to generate the .replies file using our internal QEMU binary, so
that we pass the unit and functional test cases with "make check", because, as
mentioned earlier, QEMU support for this feature will not be available until the
middle of next month. Is this OK?
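Regarding question 1, our current understanding (please correct us if this is wrong)
is that the test suite can regenerate its expected output files itself when
VIR_TEST_REGENERATE_OUTPUT is set, so after adding the new .replies data something
like the following might produce the matching .xml files:

```shell
# Assumes a built libvirt tree; regenerate the expected .xml output
# for the two failing tests from the (new) .replies data.
VIR_TEST_REGENERATE_OUTPUT=1 ./tests/qemucapabilitiestest
VIR_TEST_REGENERATE_OUTPUT=1 ./tests/domaincapstest
```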
Thanks
karim
-----Original Message-----
From: Daniel P. Berrangé [mailto:berrange@redhat.com]
Sent: Tuesday, March 5, 2019 9:35 AM
To: Mohammed, Karimullah <karimullah.mohammed(a)intel.com>
Cc: Carvalho, Larkins L <larkins.l.carvalho(a)intel.com>; libvir-list(a)redhat.com
Subject: Re: [libvirt] New Feature: Intel MKTME Support
On Tue, Mar 05, 2019 at 05:23:04PM +0000, Mohammed, Karimullah wrote:
Hi Daniel,
MKTME supports hardware-based encryption of memory (NVRAM) for virtual
machines. This feature uses the Linux kernel keyring services for
operations like allocation and clearing of secrets/keys. These keys
are used to encrypt virtual machine memory. MKTME thus encrypts the
entire RAM allocated to a VM, thereby supporting VM isolation.
So, to implement this functionality in OpenStack:
1. Nova executes a host capability command to identify whether the hardware
   supports MKTME (OpenStack host_capabilities XML command request
   ->> libvirt ->> QEMU, via QEMU monitor commands).
2. Once the hardware is identified, and if the user configures an MKTME
   policy to launch a VM in OpenStack, Nova
   a. Sends a new XML command request to libvirt; libvirt then makes
      a syscall to the Linux kernel keyring services to retrieve a
      key/key-handle for this VM (we are not sure at this point
      whether to make this syscall directly in libvirt or through
      QEMU)
What will openstack do with the key / key-handle it gets back from libvirt ?
Why does it need to allocate one before starting the VMs, as opposed to letting QEMU or
libvirt allocate it during startup ?
By allocating it separately from the VM start request it opens the possibility for leaking
keys, if VM startup fails and the mgmt app doesn't release the now unused key.
   b. Once the key is retrieved, Nova compute executes a VM launch
      XML command request to libvirt with a new argument called
      mktme-keyhandle, which sends a command request to QEMU to
      launch the VM. (We are in the process of adding this
      functionality to QEMU for the VM launch operation, with a new
      mktme-key argument.)
We are not sure where to make these (2a) kernel system calls at
present and are looking for suggestions.
Regards,
Daniel
--
|:
https://berrange.com -o-
https://www.flickr.com/photos/dberrange :|
|:
https://libvirt.org -o-
https://fstop138.berrange.com :|
|:
https://entangle-photo.org -o-
https://www.instagram.com/dberrange :|