REST service for libvirt to simplify SEV(ES) launch measurement


+cc Tobin, James
On 23/02/2022 19:28, Daniel P. Berrangé wrote:
Extending management apps using libvirt to support measured launch of QEMU guests with SEV/SEV-ES is unreasonably complicated today, both for the guest owner and for the cloud management apps. We have APIs for exposing info about the SEV host, the SEV guest, guest measurements and secret injections. This is a "bags of bits" solution. We expect apps to then turn this into a user-facing solution. It is possible, but we're heading to a place where every cloud mgmt app essentially needs to reinvent the same wheel and the guest owner will need to learn custom APIs for dealing with SEV/SEV-ES for each cloud mgmt app. This is pretty awful. We need to do a better job at providing a solution that is more general purpose, IMHO.
Consider a cloud mgmt app: right now the flow to use the bag of bits libvirt exposes looks something like this:
* Guest owner tells mgmt app they want to launch a VM
* Mgmt app decides what host the VM will be launched on
* Guest owner requests cert chain for the virt host from mgmt app
* Guest owner validates cert chain for the virt host
* Guest owner generates launch blob for the VM
* Guest owner provides launch blob to the mgmt app
* Management app tells libvirt to launch VM with blob, with CPUs in a paused state
* Libvirt launches QEMU with CPUs stopped
* Guest owner requests launch measurement from mgmt app
* Guest owner validates measurement
* Guest owner generates secret blob
* Guest owner sends secret blob to management app
* Management app tells libvirt to inject secrets
* Libvirt injects secrets to QEMU
* Management app tells libvirt to start QEMU CPUs
* Libvirt tells QEMU to start CPUs
Compare to a non-confidential VM:
* Guest owner tells mgmt app they want to launch a VM
* Mgmt app decides what host the VM will be launched on
* Mgmt app tells libvirt to launch VM with CPUs in running state
* Libvirt launches QEMU with CPUs running
Now, of course the guest owner wouldn't be manually performing the earlier steps; they would want some kind of software to take care of this. No matter what, it still involves a large number of back and forth operations between the guest owner & mgmt app, and between the mgmt app and libvirt.
One of libvirt's key jobs is to isolate mgmt apps from differences in behaviour of underlying hypervisor technologies, and we're failing at that job with SEV/SEV-ES, because the mgmt app needs to go through a multi-stage dance on every VM start, that is different from what they do with non-confidential VMs.
It is especially unpleasant because there needs to be a "wait state" between when the app selects a host to deploy a VM on, and when it can actually start a VM. In essence the app needs to reserve capacity on a host ahead of time for a VM that will be created some arbitrary time later. This can have significant implications for the mgmt app's architectural design that are not necessarily easy to address, when it expects to just call virDomainCreate and have the VM running in one step.
It also harms interoperability between libvirt tools. For example, if a mgmt tool like virt-manager/OpenStack created a VM using SEV, and you want to start it manually using a different tool like 'virsh', you enter a world of complexity and pain, due to the multi-step dance required.
AFAICT, in all of this, the mgmt app is really acting as a conduit and is not implementing any interesting logic. The clever stuff is all the responsibility of the guest owner, and/or whatever software for attestation they are using remotely.
I think there is scope for enhancing libvirt, such that usage of SEV/SEV-ES has little-to-no burden for the management apps, and much less burden for guest owners. The key to achieving this is to define a protocol for libvirt to connect to a remote service to handle the launch measurements & secret acquisition. The guest owner can provide the address of a service they control (or trust), and libvirt can take care of all the interactions with it.
This frees both the user and mgmt app from having to know much about SEV/SEV-ES, with VM startup process being essentially the same as it has always been.
The sequence would look like
* Guest owner tells attestation service they intend to create a VM with a given UUID, policy, and any other criteria such as cert of the cloud owner, valid OVMF firmware hashes, and providing any needed LUKS keys.
* Guest owner tells mgmt app they want to launch a VM, using attestation service at https://somehost/and/url
* Mgmt app decides what host the VM will be launched on
* Mgmt app tells libvirt to launch VM with CPUs in running state
The next steps involve solely libvirt & the attestation service. The mgmt app and guest owner have done their work.
* Libvirt contacts the service providing certificate chain for the host to be used, the UUID of the guest, and any other required info about the host.
* Attestation service validates the cert chain to ensure it belongs to the cloud owner that was identified previously
* Attestation service generates a launch blob and puts it in the response back to libvirt
* Libvirt launches QEMU with CPUs paused
* Libvirt gets the launch measurement and sends it to the attestation server, with any other required info about the VM instance
* Attestation service validates the measurement
* Attestation service builds the secret table with LUKS keys and puts it in the response back to libvirt
* Libvirt injects the secret table to QEMU
* Libvirt tells QEMU to start CPUs
All the same exchanges of information are present, but the management app doesn't have to get involved. The guest owner also doesn't have to get involved except for a one-time setup step. The software the guest owner uses for attestation also doesn't have to be written to cope with talking to OpenStack, CNV and whatever other vendor-specific cloud mgmt apps exist today. This will significantly reduce the burden of supporting SEV/SEV-ES launch measurement in libvirt-based apps, and make SEV/SEV-ES guests more "normal" from a mgmt POV.
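To make the "attestation service validates the measurement" step above concrete, here's a rough sketch of what the service side could do for a plain SEV launch, following the LAUNCH_MEASURE definition in the AMD SEV API spec (an HMAC-SHA256 keyed with the guest owner's TIK over the platform info and the launch digest). The function names and the assumption that only the firmware image contributes to the launch digest are illustrative, not a proposed interface:

import hashlib
import hmac

def expected_launch_measurement(tik, api_major, api_minor, build_id,
                                policy, firmware, mnonce):
    # AMD SEV API, LAUNCH_MEASURE:
    #   HMAC-SHA256(0x04 || API_MAJOR || API_MINOR || BUILD || GCTX.POLICY ||
    #               GCTX.LD || MNONCE; GCTX.TIK)
    # GCTX.LD is assumed here to be just the SHA-256 of the firmware image,
    # which holds for a plain SEV launch of OVMF with no extra blobs.
    msg = bytes([0x04, api_major, api_minor, build_id])
    msg += policy.to_bytes(4, "little")
    msg += hashlib.sha256(firmware).digest()
    msg += mnonce
    return hmac.new(tik, msg, hashlib.sha256).digest()

def measurement_ok(tik, reported, **params):
    # The measurement blob is MEASURE (32 bytes) || MNONCE (16 bytes);
    # recompute the expected HMAC and compare in constant time.
    measure, mnonce = reported[:32], reported[32:48]
    expected = expected_launch_measurement(tik, mnonce=mnonce, **params)
    return hmac.compare_digest(measure, expected)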
What could this look like from POV of an attestation server API, if we assume HTTPS REST service with a simple JSON payload ...
* Guest Owner: Register a new VM to be booted:
POST /vm/<UUID>
Request body:
{ "scheme": "amd-sev", "cloud-cert": "certificate of the cloud owner that signs the PEK", "policy": 0x3, "cpu-count": 3, "firmware-hashes": [ "xxxx", "yyyy", ], "kernel-hash": "aaaa", "initrd-hash": "bbbb", "cmdline-hash": "cccc", "secrets": [ { "type": "luks-passphrase", "passphrase": "<blah>" } ] }
* Libvirt: Request permission to launch a VM on a host
POST /vm/<UUID>/launch
Request body:
{ "pdh": "<blah>", "cert-chain": "<blah>", "cpu-id": "<CPU ID>", ...other relevant bits... }
Service decides if the proposed host is acceptable
Response body (on success)
{ "session": "<blah>", "owner-cert": "<blah>", "policy": 3, }
* Libvirt: Request secrets to inject to launched VM
POST /vm/<UUID>/validate
Request body:
{ "api-minor": 1, "api-major": 2, "build-id": 241, "policy": 3, "measurement": "<blah>", "firmware-hash": "xxxx", "cpu-count": 3, ....other relevant stuff.... }
Service validates the measurement...
Response body (on success):
{ "secret-header": "<blah>", "secret-table": "<blah>", }
So we can see there are only a couple of REST API calls we need to be able to define. If we could do that, then creating a SEV/SEV-ES enabled guest with libvirt would not involve anything more complicated for the mgmt app than providing the URI of the guest owner's attestation service and an identifier for the VM. i.e. the XML config could be merely:
<launchSecurity type="sev">
  <attestation vmid="57f669c2-c427-4132-bc7a-26f56b6a718c"
               service="http://somehost/some/url"/>
</launchSecurity>
And then invoke virDomainCreate as normal, as with any other libvirt / QEMU guest. No special workflow is required by the mgmt app. There is a small extra task for the guest owner to register the existence of their VM with the attestation service. Aside from that, the only change to the way they interact with the cloud mgmt app is to provide the VM ID and URI for the attestation service. No need to learn custom APIs for each different cloud vendor, for dealing with fetching launch measurements or injecting secrets.
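From the mgmt app (or virsh user) side, nothing SEV-specific would remain beyond including that element in the domain XML. A minimal sketch with the libvirt Python bindings; the <attestation> element is of course hypothetical until something like this is implemented, and the rest of the XML is deliberately stripped down:

import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>sev-demo</name>
  <memory unit='GiB'>2</memory>
  <vcpu>2</vcpu>
  <os firmware='efi'><type arch='x86_64' machine='q35'>hvm</type></os>
  <launchSecurity type='sev'>
    <attestation vmid='57f669c2-c427-4132-bc7a-26f56b6a718c'
                 service='https://somehost/some/url'/>
  </launchSecurity>
  <devices/>
</domain>
"""

conn = libvirt.open("qemu:///system")
# One step, exactly as for a non-confidential guest; libvirt would drive
# the attestation dance behind the scenes before starting the vCPUs.
dom = conn.createXML(DOMAIN_XML, 0)
print(dom.name(), "is running")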
Finally this attestation service REST protocol doesn't have to be something controlled or defined by libvirt. I feel like it could be a protocol that is defined anywhere and libvirt merely be one consumer of it. Other apps that directly use QEMU may also wish to avail themselves of it.
All that really matters from libvirt POV is:
- The protocol definition exists to enable the above workflow, with a long-term API stability guarantee that it isn't going to change in incompatible ways
- There exists a fully open source reference implementation of sufficient quality to deploy in the real world
I know https://github.com/slp/sev-attestation-server exists, but its current design has assumptions about it being used with libkrun AFAICT. I have heard of others interested in writing similar servers, but I've not seen code.
Tobin has just released kbs-rs, which has similar properties to what you're proposing above, aiming to solve similar issues. Better to talk with him before rushing into building yet another attestation server. -Dov
We are at a crucial stage where mgmt apps are looking to support measured boot with SEV/SEV-ES and if we delay they'll all go off and do their own thing, and it'll be too late, leading to https://xkcd.com/927/.
Especially for apps using libvirt to manage QEMU, I feel we have got a few months' window of opportunity to get such a service available, before they all end up building out APIs for the tedious manual workflow, reinventing the wheel.
Regards, Daniel

On 2/23/22 1:38 PM, Dov Murik wrote:
+cc Tobin, James
On 23/02/2022 19:28, Daniel P. Berrangé wrote:
Extending management apps using libvirt to support measured launch of QEMU guests with SEV/SEV-ES is unreasonably complicated today, both for the guest owner and for the cloud management apps. We have APIs for exposing info about the SEV host, the SEV guest, guest measurements and secret injections. This is a "bags of bits" solution. We expect apps to then turn this into a user-facing solution. It is possible, but we're heading to a place where every cloud mgmt app essentially needs to reinvent the same wheel and the guest owner will need to learn custom APIs for dealing with SEV/SEV-ES for each cloud mgmt app. This is pretty awful. We need to do a better job at providing a solution that is more general purpose, IMHO.
A few general thoughts.
We've been working on our own attestation server over the last few weeks. We're in the process of making it available publicly <https://github.com/confidential-containers/kbs-rs/pull/1>. In working on this, we've come up against many of the things that you are talking about here.
Note in particular that we provide a client script called LaunchVM.py that uses libvirt to start an SEV VM in conjunction with the attestation server. This is basically a stand in for a management app or cloud control plane. The modifications needed to launch an SEV VM are not particularly extensive. I agree with some of your comments though. In some ways it might be nice to have libvirt take care of things and hide the complexity from the management app.
When we started working on our attestation server, our initial plan was to make PRs to libvirt that would add one end of the attestation API to libvirt, which would directly query the KBS. This is basically what you are proposing. We decided against this for a couple of reasons.
First, we were concerned that libvirt might not have network connectivity to an arbitrary attestation server in a cloud environment. We had envisioned that the guest owner would provide a URI for their attestation server as part of the XML. This assumes that the node where the VM is going to run can connect to an attestation server living somewhere on the internet. I think that this might be challenging in some cloud environments. By having the management app connect to libvirt and the attestation server, we add some flexibility.
Second, we were worried that it would be difficult to settle on and maintain a standard. Fortunately this discussion is only relevant for SEV(-ES), given that SNP measurements are reported from inside the guest, but nonetheless there are already a number of approaches for handling things. By using a management app, each CSP can easily adapt the standard libvirt API into whatever attestation API they want. This does put a burden on the management apps, but it might sidestep a tricky problem for libvirt, and like I said, we found it pretty easy to write our LaunchVM script (except for the CEK issue mentioned elsewhere).
Finally, we didn't think that there would be any interest in the libvirt community. It seems like we might have been wrong about this. Like I said, our first instinct was to extend libvirt, and if there is interest in doing this, we could dust off those ideas. I certainly have a lot of ideas about how to design an API for attestation. Of course we now have an API for attestation that we think is pretty good. It is gRPC, but we are thinking about also supporting a REST interface. If an attestation API is added to libvirt, I will definitely try to be involved, although honestly I think it's fine, and in some ways maybe better, to have the management app take care of things.
I may comment separately on some of the details that you have provided.
-Tobin
Regards, Daniel

On Wed, Feb 23, 2022 at 03:33:22PM -0500, Tobin Feldman-Fitzthum wrote:
On 2/23/22 1:38 PM, Dov Murik wrote:
+cc Tobin, James
On 23/02/2022 19:28, Daniel P. Berrangé wrote:
Extending management apps using libvirt to support measured launch of QEMU guests with SEV/SEV-ES is unreasonably complicated today, both for the guest owner and for the cloud management apps. We have APIs for exposing info about the SEV host, the SEV guest, guest measurements and secret injections. This is a "bags of bits" solution. We expect apps to then turn this into a user-facing solution. It is possible, but we're heading to a place where every cloud mgmt app essentially needs to reinvent the same wheel and the guest owner will need to learn custom APIs for dealing with SEV/SEV-ES for each cloud mgmt app. This is pretty awful. We need to do a better job at providing a solution that is more general purpose, IMHO.
Note in particular that we provide a client script called LaunchVM.py that uses libvirt to start an SEV VM in conjunction with the attestation server. This is basically a stand in for a management app or cloud control plane. The modifications needed to launch an SEV VM are not particularly extensive. I agree with some of your comments though. In some ways it might be nice to have libvirt take care of things and hide the complexity from the management app.
LaunchVM.py nicely illustrates my concerns. Every application that uses libvirt and knows how to start VMs now needs to be changed to support the series of operations shown in LaunchVM.py. The guest owner probably can't use LaunchVM.py except for demoware, as they'll need an equivalent that talks to the API of their cloud mgmt app, of which there are many.
When we started working on our attestation server, our initial plan was to make PRs to libvirt that would add one end of the attestation API to libvirt, which would directly query the KBS. This is basically what you are proposing. We decided against this for a couple of reasons.
First, we were concerned that libvirt might not have network connectivity to an arbitrary attestation server in a cloud environment. We had envisioned that the guest owner would provide a URI for their attestation server as part of the XML. This assumes that the node where the VM is going to run can connect to an attestation server living somewhere on the internet. I think that this might be challenging in some cloud environments. By having the management app connect to libvirt and the attestation server, we add some flexibility.
Agreed, we can't assume that libvirt will always have the ability to connect to an arbitrary service on the internet.
That said, it does not necessarily need this ability. If the user gives a URL of 'https://myhost.com/attest', the cloud doesn't have to give that straight to libvirt. The cloud software could have an attestation proxy server. So they could tell libvirt to use the URI https://10.0.0.5/attest, and when libvirt connects to that, it will proxy the calls through to the guest owner's real attestation server.
If even that isn't possible, there's still the fallback option of ignoring libvirt's native support for talking to an attestation server, and doing it manually as per the LaunchVM.py illustration.
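To show how little such a proxy needs to do, here's a sketch of a dumb pass-through the cloud could run behind https://10.0.0.5/attest, forwarding every request verbatim to the guest owner's real service. TLS termination, authentication and error handling are all omitted, and the use of Python's stdlib http.server is just for brevity:

from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

UPSTREAM = "https://myhost.com/attest"   # the guest owner's real attestation service

class AttestProxy(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        body = self.rfile.read(length)
        # Forward the request untouched; the proxy neither parses nor stores anything.
        req = Request(UPSTREAM + self.path, data=body,
                      headers={"Content-Type":
                               self.headers.get("Content-Type", "application/json")})
        with urlopen(req) as upstream:
            data = upstream.read()
            self.send_response(upstream.status)
            self.send_header("Content-Type",
                             upstream.headers.get("Content-Type", "application/json"))
            self.send_header("Content-Length", str(len(data)))
            self.end_headers()
            self.wfile.write(data)

# Plain HTTP on 8443 purely for the sketch; a real deployment would terminate TLS here.
HTTPServer(("0.0.0.0", 8443), AttestProxy).serve_forever()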
Second, we were worried that it would be difficult to settle on and maintain a standard. Fortunately this discussion is only relevant for SEV(-ES), given that SNP measurements are reported from inside the guest, but nonetheless there are already a number of approaches for handling things. By using a management app, each CSP can easily adapt the standard libvirt api into whatever attestation API they want. This does put a burden on the management apps, but it might sidestep a tricky problem for libvirt and like I said, we found it pretty easy to write our LaunchVM script (except for the CEK issue mentioned elsewhere).
The attestation server is ultimately something that the guest owner needs to control / use. Whether the cloud mgmt app connects to it, or if libvirt connects to it, it feels like we would benefit from having a standard that can be used with either approach.
I don't think we want to end up with IBM's cloud requiring one attestation server design and OpenStack requiring another and KubeVirt requiring yet another, etc. Guest owners shouldn't be given the burden of using different services depending on which cloud they deploy on each time, as that would effectively become a form of vendor lock-in.
The needs of all the apps look similar enough, because they are all ultimately constrained by the functionality made available by SEV(ES). Thus at least wrt apps doing traditional VM management using QEMU, it feels like it should be practical to come up with a common solution.
I can understand if it is harder to achieve commonality with tech like libkrun though, since that's consuming virt in quite a different way at the userspace level.
Finally, we didn't think that there would be any interest in the libvirt community. It seems like we might have been wrong about this. Like I said, our first instinct was to extend libvirt, and if there is interest in doing this, we could dust off those ideas. I certainly have a lot of ideas about how to design an API for attestation. Of course we now have an API for attestation that we think is pretty good. It is gRPC, but we are thinking about also supporting a REST interface. If an attestation api is added to libvirt, I will definitely try to be involved, although honestly I think it's fine, and in some ways maybe better, to have the management app take care of things.
Thanks for the feedback so far.
Regards, Daniel

On 2/24/22 7:26 AM, Daniel P. Berrangé wrote:
On Wed, Feb 23, 2022 at 03:33:22PM -0500, Tobin Feldman-Fitzthum wrote:
On 2/23/22 1:38 PM, Dov Murik wrote:
+cc Tobin, James
On 23/02/2022 19:28, Daniel P. Berrangé wrote:
Extending management apps using libvirt to support measured launch of QEMU guests with SEV/SEV-ES is unreasonably complicated today, both for the guest owner and for the cloud management apps. We have APIs for exposing info about the SEV host, the SEV guest, guest measurements and secret injections. This is a "bags of bits" solution. We expect apps to then turn this into a user-facing solution. It is possible, but we're heading to a place where every cloud mgmt app essentially needs to reinvent the same wheel and the guest owner will need to learn custom APIs for dealing with SEV/SEV-ES for each cloud mgmt app. This is pretty awful. We need to do a better job at providing a solution that is more general purpose, IMHO.
Note in particular that we provide a client script called LaunchVM.py that uses libvirt to start an SEV VM in conjunction with the attestation server. This is basically a stand in for a management app or cloud control plane. The modifications needed to launch an SEV VM are not particularly extensive. I agree with some of your comments though. In some ways it might be nice to have libvirt take care of things and hide the complexity from the management app.
LaunchVM.py nicely illustrates my concerns. Every application that uses libvirt and knows how to start VMs now needs to be changed to support the series of operations shown in LaunchVM.py. The guest owner probably can't use LaunchVM.py except for demoware, as they'll need an equivalent that talks to the API of their cloud mgmt app, of which there are many.
When we started working on our attestation server, our initial plan was to make PRs to libvirt that would add one end of the attestation API to libvirt, which would directly query the KBS. This is basically what you are proposing. We decided against this for a couple of reasons.
First, we were concerned that libvirt might not have network connectivity to an arbitrary attestation server in a cloud environment. We had envisioned that the guest owner would provide a URI for their attestation server as part of the XML. This assumes that the node where the VM is going to run can connect to an attestation server living somewhere on the internet. I think that this might be challenging in some cloud environments. By having the management app connect to libvirt and the attestation server, we add some flexibility.
Agreed, we can't assume that libvirt will always have the ability to connect to an arbitrary service on the internet.
That said, it does not necessarily need this ability. If the user gives a URL of 'https://myhost.com/attest', the cloud doesn't have to give that straight to libvirt. The cloud software could have an attestation proxy server. So they could tell libvirt to use the URI https://10.0.0.5/attest, and when libvirt connects to that, it will proxy the calls through to the guest owner's real attestation server.
This might slightly contradict the idea of the management app being out of the loop, but I guess setting up a proxy isn't very difficult. I think CSPs already do this kind of thing to enable other features.
If even that isn't possible though, there's still the fallback option of ignoring libvirt's native support for talking to an attestation server, and doing it manually as per LaunchVM.py illustration.
Second, we were worried that it would be difficult to settle on and maintain a standard. Fortunately this discussion is only relevant for SEV(-ES), given that SNP measurements are reported from inside the guest, but nonetheless there are already a number of approaches for handling things. By using a management app, each CSP can easily adapt the standard libvirt api into whatever attestation API they want. This does put a burden on the management apps, but it might sidestep a tricky problem for libvirt and like I said, we found it pretty easy to write our LaunchVM script (except for the CEK issue mentioned elsewhere).
The attestation server is ultimately something that the guest owner needs to control / use. Whether the cloud mgmt app connects to it, or if libvirt connects to it, it feels like we would benefit from having a standard that can be used with either approach.
I don't think we want to end up with IBM's cloud requiring one attestation server design and OpenStack requiring another and KubeVirt requiring yet another, etc. Guest owners shouldn't be given the burden of using different services depending on which cloud they deploy on each time, as that would effectively become a form of vendor lock-in.
Yes. We're probably going to end up with a bunch of attestation servers no matter what, but it would be great if they were interoperable.
The needs of all the apps look similar enough, because they are all ultimately constrained by the functionality made available by SEV(ES). Thus at least wrt apps doing traditional VM management using QEMU, it feels like it should be practical to come up with a common solution.
I am all for common ground here. I had basically given up on it, but maybe libvirt has enough influence to set a standard. In your first email you said that there is a relatively short window to come up with something and I think that is probably correct.
I can understand if it is harder to achieve commonality with tech like libkrun though, since that's consuming virt in quite a different way at the userspace level.
Yeah, extending the focus beyond SEV(-ES) with QEMU might make things more difficult. There is some discussion right now about trying to find common ground between SEV-SNP and TDX attestation, but I assume that is all out of scope since libvirt isn't really involved.
-Tobin
Finally, we didn't think that there would be any interest in the libvirt community. It seems like we might have been wrong about this. Like I said, our first instinct was to extend libvirt, and if there is interest in doing this, we could dust off those ideas. I certainly have a lot of ideas about how to design an API for attestation. Of course we now have an API for attestation that we think is pretty good. It is gRPC, but we are thinking about also supporting a REST interface. If an attestation api is added to libvirt, I will definitely try to be involved, although honestly I think it's fine, and in some ways maybe better, to have the management app take care of things.
Thanks for the feedback so far.
Regards, Daniel

On Fri, Feb 25, 2022 at 03:10:35PM -0500, Tobin Feldman-Fitzthum wrote:
On 2/24/22 7:26 AM, Daniel P. Berrangé wrote:
On Wed, Feb 23, 2022 at 03:33:22PM -0500, Tobin Feldman-Fitzthum wrote:
On 2/23/22 1:38 PM, Dov Murik wrote:
+cc Tobin, James
On 23/02/2022 19:28, Daniel P. Berrangé wrote:
Extending management apps using libvirt to support measured launch of QEMU guests with SEV/SEV-ES is unreasonably complicated today, both for the guest owner and for the cloud management apps. We have APIs for exposing info about the SEV host, the SEV guest, guest measurements and secret injections. This is a "bags of bits" solution. We expect apps to then turn this into a user-facing solution. It is possible, but we're heading to a place where every cloud mgmt app essentially needs to reinvent the same wheel and the guest owner will need to learn custom APIs for dealing with SEV/SEV-ES for each cloud mgmt app. This is pretty awful. We need to do a better job at providing a solution that is more general purpose, IMHO.
Note in particular that we provide a client script called LaunchVM.py that uses libvirt to start an SEV VM in conjunction with the attestation server. This is basically a stand in for a management app or cloud control plane. The modifications needed to launch an SEV VM are not particularly extensive. I agree with some of your comments though. In some ways it might be nice to have libvirt take care of things and hide the complexity from the management app.
LaunchVM.py nicely illustrates my concerns. Every application that uses libvirt and knows how to start VMs now needs to be changed to support the series of operations shown in LaunchVM.py. The guest owner probably can't use LaunchVM.py except for demoware, as they'll need an equivalent that talks to the API of their cloud mgmt app, of which there are many.
When we started working on our attestation server, our initial plan was to make PRs to libvirt that would add one end of the attestation API to libvirt, which would directly query the KBS. This is basically what you are proposing. We decided against this for a couple of reasons.
First, we were concerned that libvirt might not have network connectivity to an arbitrary attestation server in a cloud environment. We had envisioned that the guest owner would provide a URI for their attestation server as part of the XML. This assumes that the node where the VM is going to run can connect to an attestation server living somewhere on the internet. I think that this might be challenging in some cloud environments. By having the management app connect to libvirt and the attestation server, we add some flexibility.
Agreed, we can't assume that libvirt will always have the ability to connect to an arbitrary service on the internet.
That said, it does not necessarily need this ability. If the user gives a URL of 'https://myhost.com/attest', the cloud doesn't have to give that straight to libvirt. The cloud software could have an attestation proxy server. So they could tell libvirt to use the URI https://10.0.0.5/attest, and when libvirt connects to that, it will proxy the calls through to the guest owner's real attestation server.
This might slightly contradict the idea of the management app being out of the loop, but I guess setting up a proxy isn't very difficult. I think CSPs already do this kind of thing to enable other features.
The difference I see with a proxy approach is that it ought to end up being a dumb transport. It won't have to define any protocol or interpret the data, just blindly pass data back & forth.
This is already something often done with VNC, where the user connects to a public endpoint exposed by the cloud on its internet boundary, which then forwards the data onto QEMU's real VNC server on the compute host. In the VNC case, the public-facing side often does websockets encapsulation, while the private side is pure VNC, but it still doesn't need to understand the VNC protocol, so it is fairly easy to set up such a proxy.
A proxy could also address the other problem I've realized. At least the first VM to be booted on a given cloud might be harder to attest, because the attestation service would need to exist outside the cloud being used, because the guest owner won't trust anything initially.
This could imply that the attestation service is on a local machine controlled by the guest owner, even their local laptop. The implication is that the attestation service could easily be stuck behind NAT and be unable to accept incoming connections from the cloud.
To address this the proxy might need to sit in the middle and accept incoming connections from both the guest owner's attestation service and from libvirt on the compute host.
IOW, with VNC you get a unidirectional flow of connection establishment
  guest owner -> cloud proxy -> compute host
but with the attestation service you might need a flow with the second arrow reversed, ie
  guest owner -> cloud proxy <- compute host
Once the first VM is bootstrapped, it could be used to run an attestation service that is local to the cloud, avoiding the need for the traffic to traverse the internet when booting future VMs, and removing the dependency on the guest owner having a local machine to run an attestation service on.
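As a toy sketch of that reversed flow: both sides dial out to the rendezvous proxy, which just splices the two byte streams together and never has to understand what flows through them. The port numbers are invented, and TLS, authentication and multiplexing of multiple VMs are all hand-waved away here:

import selectors
import socket

def splice(a, b):
    # Copy bytes in both directions until either side closes; the proxy never
    # needs to interpret the attestation protocol flowing through it.
    peers = {a: b, b: a}
    sel = selectors.DefaultSelector()
    for s in peers:
        sel.register(s, selectors.EVENT_READ)
    while True:
        for key, _ in sel.select():
            data = key.fileobj.recv(4096)
            if not data:
                return
            peers[key.fileobj].sendall(data)

# Both ends dial out, so neither the guest owner's laptop nor the compute host
# needs to accept inbound connections from the internet.
owner_side = socket.create_server(("0.0.0.0", 8441))  # guest owner's service connects here
host_side = socket.create_server(("0.0.0.0", 8442))   # libvirt on the compute host connects here
owner_conn, _ = owner_side.accept()
host_conn, _ = host_side.accept()
with owner_conn, host_conn:
    splice(owner_conn, host_conn)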
The needs of all the apps look similar enough, because they are all ultimately constrained by the functionality made available by SEV(ES). Thus at least wrt apps doing traditional VM management using QEMU, it feels like it should be practical to come up with a common solution.
I am all for common ground here. I had basically given up on it, but maybe libvirt has enough influence to set a standard. In your first email you said that there is a relatively short window to come up with something and I think that is probably correct.
Yes, there are a handful of major cloud software projects that use libvirt, and once they've all built their own solution, it won't be so easy to convince them to change into a shared solution, as it would create back compat problems for them.
I can understand if it is harder to achieve commonality with tech like libkrun though, since that's consuming virt in quite a different way at the userspace level.
Yeah, extending the focus beyond SEV(-ES) with QEMU might make things more difficult. There is some discussion right now about trying to find common ground between SEV-SNP and TDX attestation, but I assume that is all out of scope since libvirt isn't really involved.
I admit I don't know much about TDX, but from what I've understood talking to other people, SEV-SNP might not end up looking all that different. IIUC the attestation has to be initiated from inside the SNP guest after CPUs are running. It is going to need to be run as early as possible, and while you might be able to do it in the initrd, it feels likely that it could be put into the firmware (OVMF) instead, such that it does the validation before even loading the kernel. This would facilitate supporting it with arbitrary guest OS, as the firmware is common to all.
We can't assume the firmware will have direct network connectivity to any attestation service needed to verify the boot. This implies the firmware might need to talk to the host via something like virtio-serial / virtio-vsock, from where libvirt or QEMU can proxy the traffic onto the real attestation service.
Such an architecture might end up aligning quite well with SEV/SEV-ES, possibly allowing the same protocol to be used in both cases, just with different ultimate end points (libvirt for SEV(-ES) vs guest firmware for SEV-SNP).
Regards, Daniel

On 3/3/22 12:20 PM, Daniel P. Berrangé wrote:
On Fri, Feb 25, 2022 at 03:10:35PM -0500, Tobin Feldman-Fitzthum wrote:
On 2/24/22 7:26 AM, Daniel P. Berrangé wrote:
On Wed, Feb 23, 2022 at 03:33:22PM -0500, Tobin Feldman-Fitzthum wrote:
On 2/23/22 1:38 PM, Dov Murik wrote:
+cc Tobin, James
On 23/02/2022 19:28, Daniel P. Berrangé wrote:
Extending management apps using libvirt to support measured launch of QEMU guests with SEV/SEV-ES is unreasonably complicated today, both for the guest owner and for the cloud management apps. We have APIs for exposing info about the SEV host, the SEV guest, guest measurements and secret injections. This is a "bags of bits" solution. We expect apps to then turn this into a user-facing solution. It is possible, but we're heading to a place where every cloud mgmt app essentially needs to reinvent the same wheel and the guest owner will need to learn custom APIs for dealing with SEV/SEV-ES for each cloud mgmt app. This is pretty awful. We need to do a better job at providing a solution that is more general purpose, IMHO.
Note in particular that we provide a client script called LaunchVM.py that uses libvirt to start an SEV VM in conjunction with the attestation server. This is basically a stand in for a management app or cloud control plane. The modifications needed to launch an SEV VM are not particularly extensive. I agree with some of your comments though. In some ways it might be nice to have libvirt take care of things and hide the complexity from the management app.
LaunchVM.py nicely illustrates my concerns. Every application that uses libvirt and knows how to start VMs now needs to be changed to support the series of operations shown in LaunchVM.py. The guest owner probably can't use LaunchVM.py except for demoware, as they'll need an equivalent that talks to the API of their cloud mgmt app, of which there are many.
When we started working on our attestation server, our initial plan was to make PRs to libvirt that would add one end of the attestation API to libvirt, which would directly query the KBS. This is basically what you are proposing. We decided against this for a couple of reasons.
First, we were concerned that libvirt might not have network connectivity to an arbitrary attestation server in a cloud environment. We had envisioned that the guest owner would provide a URI for their attestation server as part of the XML. This assumes that the node where the VM is going to run can connect to an attestation server living somewhere on the internet. I think that this might be challenging in some cloud environments. By having the management app connect to libvirt and the attestation server, we add some flexibility.
Agreed, we can't assume that libvirt will always have the ability to connect to an arbitrary service on the internet.
That said, it does not necessarily need this ability. If the user gives a URL of 'https://myhost.com/attest', the cloud doesn't have to give that straight to libvirt. The cloud software could have an attestation proxy server. So they could tell libvirt to use the URI https://10.0.0.5/attest, and when libvirt connects to that, it will proxy the calls through to the guest owner's real attestation server.
This might slightly contradict the idea of the management app being out of the loop, but I guess setting up a proxy isn't very difficult. I think CSPs already do this kind of thing to enable other features.
The difference I see with a proxy approach is that it ought to end up being a dumb transport. It won't have to define any protocol or interpret the data, just blindly pass data back & forth.
This is already something often done with VNC, where the user connects to a public endpoint exposed by the cloud on its internet boundary, which then forwards the data onto QEMU's real VNC server on the compute host. In the VNC case, the public-facing side often does websockets encapsulation, while the private side is pure VNC, but it still doesn't need to understand the VNC protocol, so it is fairly easy to set up such a proxy.
A proxy could also address the other problem I've realized. At least the first VM to be booted on a given cloud might be harder to attest because the attestation service would need to exist outside the cloud being used because the guest owner won't trust anything initially.
This could imply that the attestation service is on a local machine controlled by the guest owner, even their local laptop. The implication is that the attestation service could easily be stuck behind NAT and be unable to accept incoming connections from the cloud.
To address this the proxy might need to sit in the middle and accept incoming connections from both the guest owner's attestation service and from libvirt on the compute host.
IOW, with VNC you get a unidirectional flow of connection establishment
guest owner -> cloud proxy -> compute host
but with the attestation service you might need a flow with the second arrow reversed ie
guest owner -> cloud proxy <- compute host
Yeah I think a proxy is a reasonable solution. I am still not sure where people are going to run their attestation servers in practice. We have been assuming that they could be anywhere. I don't know if people will insist on running them locally or if they will decide to trust some managed attestation solution. I guess we will see.
Once the first VM is bootstrapped, it could be used to run an attestation service that is local to the cloud, avoiding the need for the traffic to traverse the internet when booting future VMs, and removing the dependency on the guest owner having a local machine to run an attestation service on.
The needs of all the apps look similar enough, because they are all ultimately constrained by the functionality made available by SEV(ES). Thus at least wrt apps doing traditional VM management using QEMU, it feels like it should be practical to come up with a common solution.
I am all for common ground here. I had basically given up on it, but maybe libvirt has enough influence to set a standard. In your first email you said that there is a relatively short window to come up with something and I think that is probably correct.
Yes, there are a handful of major cloud software projects that use libvirt, and once they've all built their own solution, it won't be so easy to convince them to change into a shared solution, as it would create back compat problems for them.
I can understand if it is harder to achieve commonality with tech like libkrun though, since that's consuming virt in quite a different way at the userspace level.
Yeah, extending the focus beyond SEV(-ES) with QEMU might make things more difficult. There is some discussion right now about trying to find common ground between SEV-SNP and TDX attestation, but I assume that is all out of scope since libvirt isn't really involved.
I admit I don't know much about TDX, but from what I've understood talking to other people, SEV-SNP might not end up looking all that different. IIUC the attestation has to be initiated from inside the SNP guest after CPUs are running. It is going to need to be run as early as possible, and while you might be able to do it in the initrd, it feels likely that it could be put into the firmware (OVMF) instead, such that it does the validation before even loading the kernel. This would facilitate supporting it with arbitrary guest OS, as the firmware is common to all.
We can't assume the firmware will have direct network connectivity to any attestation service needed to verify the boot. This implies the firmware might need to talk to the host via something like virtio-serial / virtio-vsock, from where libvirt or QEMU can proxy the traffic onto the real attestation service.
Such an architecture might end up aligning quite well with SEV/SEV-ES, possibly allowing the same protocol to be used in both cases, just with different ultimate end points (libvirt for SEV(-ES) vs guest firmware for SEV-SNP).
Yeah that is an interesting point. Most SNP approaches that I have seen so far use the kernel/initrd to handle decryption. There is potentially a gap if the kernel/initrd are not themselves part of the measurement that is provided in the attestation report. We have been using this measured direct boot thing for SEV(-ES) and I think it can be extended to SEV-SNP as well. This would close that gap and make it feasible to do the decryption in the kernel. There might be reasons to do the measurement earlier, however. For instance, it is easier to keep track of the hashes of fewer things (just the firmware vs the firmware + initrd + kernel + cmdline).
As you say, networking becomes a bit of an issue if you do the attestation in firmware. Using a local device that is handled by libvirt could be a good solution.
I'm sure we could come up with a protocol that is general enough to handle both pre-attestation and runtime attestation, but there definitely are some differences. For instance, with pre-attestation we know that there will basically be two calls from management app to attestation server. This is defined by the way that we inject secrets. With SNP things are much more flexible. We can set up a persistent connection between the guest and the attestation server and ask for secrets throughout the runtime of the guest. It might take some thinking to reconcile these approaches and could put some dependencies on how the guest is supposed to behave, which isn't really our business (although a firmware-based solution could be reasonable).
-Tobin
Regards, Daniel

* Tobin Feldman-Fitzthum (tobin@linux.ibm.com) wrote:
On 3/3/22 12:20 PM, Daniel P. Berrangé wrote:
On Fri, Feb 25, 2022 at 03:10:35PM -0500, Tobin Feldman-Fitzthum wrote:
On 2/24/22 7:26 AM, Daniel P. Berrangé wrote:
On Wed, Feb 23, 2022 at 03:33:22PM -0500, Tobin Feldman-Fitzthum wrote:
On 2/23/22 1:38 PM, Dov Murik wrote:
+cc Tobin, James
On 23/02/2022 19:28, Daniel P. Berrangé wrote:
Extending management apps using libvirt to support measured launch of QEMU guests with SEV/SEV-ES is unreasonably complicated today, both for the guest owner and for the cloud management apps. We have APIs for exposing info about the SEV host, the SEV guest, guest measurements and secret injections. This is a "bags of bits" solution. We expect apps to then turn this into a user-facing solution. It is possible, but we're heading to a place where every cloud mgmt app essentially needs to reinvent the same wheel and the guest owner will need to learn custom APIs for dealing with SEV/SEV-ES for each cloud mgmt app. This is pretty awful. We need to do a better job at providing a solution that is more general purpose, IMHO.
Note in particular that we provide a client script called LaunchVM.py that uses libvirt to start an SEV VM in conjunction with the attestation server. This is basically a stand in for a management app or cloud control plane. The modifications needed to launch an SEV VM are not particularly extensive. I agree with some of your comments though. In some ways it might be nice to have libvirt take care of things and hide the complexity from the management app.
LaunchVM.py nicely illustrates my concerns. Every application that uses libvirt and knows how to start VMs now needs to be changed to support the series of operations shown in LaunchVM.py. The guest owner probably can't use LaunchVM.py except for demoware, as they'll need an equivalent that talks to the API of their cloud mgmt app, of which there are many.
When we started working on our attestation server, our initial plan was to make PRs to libvirt that would add one end of the attestation API to libvirt, which would directly query the KBS. This is basically what you are proposing. We decided against this for a couple of reasons.
First, we were concerned that libvirt might not have network connectivity to an arbitrary attestation server in a cloud environment. We had envisioned that the guest owner would provide a URI for their attestation server as part of the XML. This assumes that the node where the VM is going to run can connect to an attestation server living somewhere on the internet. I think that this might be challenging in some cloud environments. By having the management app connect to libvirt and the attestation server, we add some flexibility.
Agreed, we can't assume that libvirt will always have the ability to connect to an arbitrary service on the internet.
That said, it does not necessarily need this ability. If the user gives a URL of 'https://myhost.com/attest', the cloud doesn't have to give that straight to libvirt. The cloud software could have an attestation proxy server. So they could tell libvirt to use the URI https://10.0.0.5/attest, and when libvirt connects to that, it will proxy the calls through to the guest owner's real attestation server.
This might slightly contradict the idea of the management app being out of the loop, but I guess setting up a proxy isn't very difficult. I think CSPs already do this kind of thing to enable other features.
The difference I see with a proxy approach is that it ought to end up being a dumb transport. It won't have to define any protocol or interpret the data, just blindly pass data back & forth.
This is already something often done with VNC, where the user connects to a public endpoint exposed by the cloud on its internet boundary, which then forwards the data onto QEMU's real VNC server on the compute host. In the VNC case, the public-facing side often does websockets encapsulation, while the private side is pure VNC, but it still doesn't need to understand the VNC protocol, so it is fairly easy to set up such a proxy.
A proxy could also address the other problem I've realized: at least the first VM to be booted on a given cloud might be harder to attest, because the attestation service would need to exist outside the cloud being used, since the guest owner won't trust anything inside it initially.
This could imply that the attestation service is on a local machine controlled by the guest owner, even their local laptop. The implication is that the attestation service could easily be stuck behind NAT and be unable to accept incoming connections from the cloud.
To address this the proxy might need to sit in the middle and accept incoming connections from both the guest owner's attestation service and from libvirt on the compute host.
IOW, with VNC you get a unidirectional flow of connection establishment
guest owner -> cloud proxy -> compute host
but with the attestation service you might need a flow with the second arrow reversed ie
guest owner -> cloud proxy <- compute host
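To make the "dumb transport" idea concrete, here is a rough sketch (not part of any proposal; the port numbers are invented) of a rendezvous proxy that accepts outbound connections from both sides and just copies bytes between them, never interpreting the attestation protocol. A real deployment would obviously need TLS and some way to pair a compute host with the right guest owner connection.

import asyncio

OWNER_PORT = 8441   # the guest owner's attestation service dials out to this port (invented)
HOST_PORT = 8442    # libvirt on the compute host dials out to this port (invented)

async def pipe(reader, writer):
    # Blind one-directional byte copy; the proxy has no knowledge of the payload.
    try:
        while data := await reader.read(65536):
            writer.write(data)
            await writer.drain()
    finally:
        writer.close()

async def main():
    waiting_owner = asyncio.Queue()

    async def owner_connected(reader, writer):
        # The attestation service connects out to us, so it can live behind NAT.
        await waiting_owner.put((reader, writer))

    async def host_connected(reader, writer):
        # Pair the compute host connection with a waiting guest owner connection.
        o_reader, o_writer = await waiting_owner.get()
        await asyncio.gather(pipe(reader, o_writer), pipe(o_reader, writer))

    owner_srv = await asyncio.start_server(owner_connected, "0.0.0.0", OWNER_PORT)
    host_srv = await asyncio.start_server(host_connected, "0.0.0.0", HOST_PORT)
    async with owner_srv, host_srv:
        await asyncio.gather(owner_srv.serve_forever(), host_srv.serve_forever())

asyncio.run(main())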
Yeah I think a proxy is a reasonable solution. I am still not sure where people are going to run their attestation servers in practice. We have been assuming that they could be anywhere. I don't know if people will insist on running them locally or if they will decide to trust some managed attestation solution. I guess we will see.
Once the first VM is bootstrapped, it could be used to run an attestation service that is local to the cloud, avoiding the need for the traffic to traverse the internet when booting future VMs, and removing the dependency on the guest owner having a local machine to run an attestation service on.
The needs of all the apps look similar enough, because they are all ultimately constrained by the functionality made available by SEV(ES). Thus at least wrt apps doing traditional VM management using QEMU, it feels like it should be practical to come up with a common solution. I am all for common ground here. I had basically given up on it, but maybe libvirt has enough influence to set a standard. In your first email you said that there is a relatively short window to come up with something and I think that is probably correct.
Yes, there are a handful of major cloud software projects that use libvirt, and once they've all built their own solution, it won't be so easy to convince them to change into a shared solution, as it would create back compat problems for them.
I can understand if it is harder to achieve commonality with tech like libkrun though, since that's consuming virt in quite a different way at the userspace level. Yeah, extending the focus beyond SEV(-ES) with QEMU might make things more difficult. There is some discussion right now about trying to find common ground between SEV-SNP and TDX attestation, but I assume that is all out of scope since libvirt isn't really involved.
I admit I don't know much about TDX, but from what I've understood talking to other people, SEV-SNP might not end up looking all that different. IIUC the attestation has to be initiated from inside the SNP guest after CPUs are running. It is going to need to be run as early as possible, and while you might be able to do it in the initrd, it feels likely that it could be put into the firmware (OVMF) instead, such that it does the validation before even loading the kernel. This would facilitate supporting it with arbitrary guest OS, as the firmware is common to all. We can't assume the firmware will have direct network connectivity to any attestation service needed to verify the boot. This implies the firmware might need to talk to the host via something like virtio-serial / virtio-vsock, from where libvirt or QEMU can proxy the traffic onto the real attestation service. Such an architecture might end up aligning quite well with SEV/SEV-ES, possibly allowing the same protocol to be used in both cases, just with different ultimate end points (libvirt for SEV(-ES) vs guest firmware for SEV-SNP).
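Purely as an illustration of the "proxy via the host" idea (nothing like this exists today; the vsock port and attestation endpoint below are invented), the host-side helper could be little more than a blind relay between a vsock port the guest firmware connects to and whatever attestation service is registered for that VM:

import socket
import threading

VSOCK_PORT = 4444                          # invented port the guest firmware would connect to
AS_HOST, AS_PORT = "as.example.org", 443   # attestation service (or cloud proxy) for this VM

def relay(src, dst):
    # One-directional blind copy; the host never looks inside the traffic.
    while data := src.recv(65536):
        dst.sendall(data)
    dst.shutdown(socket.SHUT_WR)

def handle(guest_sock):
    upstream = socket.create_connection((AS_HOST, AS_PORT))
    threading.Thread(target=relay, args=(guest_sock, upstream), daemon=True).start()
    relay(upstream, guest_sock)

srv = socket.socket(socket.AF_VSOCK, socket.SOCK_STREAM)
srv.bind((socket.VMADDR_CID_ANY, VSOCK_PORT))
srv.listen()
while True:
    conn, _ = srv.accept()
    threading.Thread(target=handle, args=(conn,), daemon=True).start()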
Yeah that is an interesting point. Most SNP approaches that I have seen so far use the kernel/initrd to handle decryption. There is potentially a gap if the kernel/initrd are not themselves part of the measurement that is provided in the attestation report. We have been using this measured direct boot thing for SEV(-ES) and I think it can be extended to SEV-SNP as well. This would close that gap and make it feasible to do the decryption in the kernel.
With the direct boot setup, it feels like using 'clevis' in the initrd would be the right way to wire things to disk decryption. [ https://github.com/latchset/clevis ] It would need a 'pin' writing for SNP that then did whatever communication mechanism we settled on. (A clevis pin might also be the way to wire the simple disk key from your EFI/SEV mechanism up to LUKS? ) Dave
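To sketch what the decrypt half of such a pin could look like (heavily simplified and hypothetical: a real pin needs a matching encrypt half that emits a proper JWE, and the securityfs path below assumes something like the efi_secret mechanism exposing the injected secret inside the guest):

#!/usr/bin/python3
# clevis-decrypt-sev (sketch): emit a LUKS passphrase that was injected as an SEV launch secret.
import base64
import json
import sys

SECRET_DIR = "/sys/kernel/security/secret/coco"   # assumed mount point for injected secrets

jwe = sys.stdin.read().strip()
header_b64 = jwe.split(".")[0]
header_b64 += "=" * (-len(header_b64) % 4)        # restore base64url padding
header = json.loads(base64.urlsafe_b64decode(header_b64))

# The (hypothetical) encrypt half stores only the GUID of the secret to look up.
guid = header["clevis"]["sev"]["guid"]

with open(f"{SECRET_DIR}/{guid}", "rb") as f:
    sys.stdout.buffer.write(f.read())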
There might be reasons to do the measurement earlier, however. For instance, it is easier to keep track of the hashes of fewer things (just the firmware vs the firmware + initrd + kernel + cmdline). As you say networking becomes a bit of an issue if you do the attestation in firmware. Using a local device that is handled by libvirt could be a good solution.
I'm sure we could come up with a protocol that is general enough to handle both pre-attestation and runtime attestation, but there definitely are some differences. For instance with pre-attestation we know that there will basically be two calls from management app to attestation server. This is defined by the way that we inject secrets. With SNP things are much more flexible. We can setup a persistent connection between the guest and the attestation server and ask for secrets throughout the runtime of the guest. It might take some thinking to reconcile these approaches and could put some dependencies on how the guest is supposed to behave, which isn't really our business (although a firmware-based solution could be reasonable).
-Tobin
Regards, Daniel
-- Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

On Wed, 2022-03-09 at 16:42 +0000, Dr. David Alan Gilbert wrote:
* Tobin Feldman-Fitzthum (tobin@linux.ibm.com) wrote:
On 3/3/22 12:20 PM, Daniel P. Berrangé wrote:
On Fri, Feb 25, 2022 at 03:10:35PM -0500, Tobin Feldman-Fitzthum wrote:
On 2/24/22 7:26 AM, Daniel P. Berrangé wrote:
[...]
I can understand if it is harder to achieve commonality with tech like libkrun though, since that's consuming virt in quite a different way at the userspace level. Yeah, extending the focus beyond SEV(-ES) with QEMU might make things more difficult. There is some discussion right now about trying to find common ground between SEV-SNP and TDX attestation, but I assume that is all out of scope since libvirt isn't really involved.
I admit I don't know much about TDX, but from what I've understood talking to other people, SEV-SNP might not end up looking all that different. IIUC the attestation has to be initiated from inside the SNP guest after CPUs are running. It is going to need to be run as early as possible, and while you might be able to do it in the initrd, it feels likely that it could be put into the firmware (OVMF) instead, such that it does the validation before even loading the kernel. This would facilitate supporting it with arbitrary guest OS, as the firmware is common to all. We can't assume the firmware will have direct network connectivity to any attestation service needed to verify the boot. This implies the firmware might need to talk to the host via something like virtio-serial / virtio-vsock, from where libvirt or QEMU can proxy the traffic onto the real attestation service. Such an architecture might end up aligning quite well with SEV/SEV-ES, possibly allowing the same protocol to be used in both cases, just with different ultimate end points (libvirt for SEV(-ES) vs guest firmware for SEV-SNP).
Yeah that is an interesting point. Most SNP approaches that I have seen so far use the kernel/initrd to handle decryption. There is potentially a gap if the kernel/initrd are not themselves part of the measurement that is provided in the attestation report. We have been using this measured direct boot thing for SEV(-ES) and I think it can be extended to SEV-SNP as well. This would close that gap and make it feasible to do the decryption in the kernel.
With the direct boot setup, it feels like using 'clevis' in the initrd would be the right way to wire things to disk decryption. [ https://github.com/latchset/clevis ] It would need a 'pin' writing for SNP that then did whatever communication mechanism we settled on.
(A clevis pin might also be the way to wire the simple disk key from your EFI/SEV mechanism up to LUKS? )
We did a write-up about this a while ago on the virt list: https://listman.redhat.com/mailman/private/ibm-virt-security/2021-December/0... Dimitri Pal is on the reply suggesting effectively the above and we had quite a discussion about it, the upshot of which was that we might get it to work for -SNP and TDX, but it couldn't work for plain SEV and -ES. What we were looking at above is a mechanism for unifying all the flavours of boot. James

* James Bottomley (jejb@linux.ibm.com) wrote:
On Wed, 2022-03-09 at 16:42 +0000, Dr. David Alan Gilbert wrote:
* Tobin Feldman-Fitzthum (tobin@linux.ibm.com) wrote:
On 3/3/22 12:20 PM, Daniel P. Berrangé wrote:
On Fri, Feb 25, 2022 at 03:10:35PM -0500, Tobin Feldman-Fitzthum wrote:
On 2/24/22 7:26 AM, Daniel P. Berrangé wrote:
[...]
I can understand if it is harder to achieve commonality with tech like libkrun though, since that's consuming virt in quite a different way at the userspace level. Yeah, extending the focus beyond SEV(-ES) with QEMU might make things more difficult. There is some discussion right now about trying to find common ground between SEV-SNP and TDX attestation, but I assume that is all out of scope since libvirt isn't really involved.
I admit I don't know much about TDX, but from what I've understood talking to other people, SEV-SNP might not end up looking all that different. IIUC the attestation has to be initiated from inside the SNP guest after CPUs are running. It is going to need to be run as early as possible, and while you might be able to do it in the initrd, it feels likely that it could be put into the firmware (OVMF) instead, such that it does the validation before even loading the kernel. This would facilitate supporting it with arbitrary guest OS, as the firmware is common to all. We can't assume the firmware will have direct network connectivity to any attestation service needed to verify the boot. This implies the firmware might need to talk to the host via something like virtio-serial / virtio-vsock, from where libvirt or QEMU can proxy the traffic onto the real attestation service. Such an architecture might end up aligning quite well with SEV/SEV-ES, possibly allowing the same protocol to be used in both cases, just with different ultimate end points (libvirt for SEV(-ES) vs guest firmware for SEV-SNP).
Yeah that is an interesting point. Most SNP approaches that I have seen so far use the kernel/initrd to handle decryption. There is potentially a gap if the kernel/initrd are not themselves part of the measurement that is provided in the attestation report. We have been using this measured direct boot thing for SEV(-ES) and I think it can be extended to SEV-SNP as well. This would close that gap and make it feasible to do the decryption in the kernel.
With the direct boot setup, it feels like using 'clevis' in the initrd would be the right way to wire things to disk decryption. [ https://github.com/latchset/clevis ] It would need a 'pin' writing for SNP that then did whatever communication mechanism we settled on.
(A clevis pin might also be the way to wire the simple disk key from your EFI/SEV mechanism up to LUKS? )
We did a write-up about this a while ago on the virt list:
https://listman.redhat.com/mailman/private/ibm-virt-security/2021-December/0...
(Note that's a private list, while libvir-list cc'd above is public - hello all!)
Dimitri Pal is on the reply suggesting effectively the above and we had quite a discussion about it, the upshot of which was that we might get it to work for -SNP and TDX, but it couldn't work for plain SEV and -ES. What we were looking at above is a mechanism for unifying all the flavours of boot.
Hmm yes for SNP; for the simple non-SNP one, it actually becomes easier with Clevis; you ignore Tang altogether and just add a Clevis pin that wires the secret through - it looks like a few lines of shell but fits into Clevis which we already have, and Clevis has the smarts to fall back to letting you put a password in from what I can tell. Although Christophe did just point me to: https://github.com/confidential-containers/attestation-agent which seems to have some wiring for basic SEV and Alibaba's online protocol which I've yet to look at. Dave
James
-- Dr. David Alan Gilbert / dgilbert@redhat.com / Manchester, UK

Some comments on the example protocol stuff
On 2/23/22 1:38 PM, Dov Murik wrote:
+cc Tobin, James
On 23/02/2022 19:28, Daniel P. Berrangé wrote:
What could this look like from POV of an attestation server API, if we assume HTTPS REST service with a simple JSON payload?
* Guest Owner: Register a new VM to be booted:
We're trying to set the API between libvirt and the AS. I would assume that the API between the AS and the guest owner is out of scope, although maybe this is just an example.
POST /vm/<UUID>
Note that this is a privileged endpoint (unlike the ones below).
Request body:
{ "scheme": "amd-sev", "cloud-cert": "certificate of the cloud owner that signs the PEK", "policy": 0x3, "cpu-count": 3, "firmware-hashes": [ "xxxx", "yyyy", ],
I think we'd need to provide the full firmware binary rather than just the hash if we plan to calculate the launch digest in the AS. Alternatively the guest owner can calculate the launch digest themself and pass it to the AS. This is what kbs-rs does. There are pros and cons to both and we should probably support both (which should be easy).
"kernel-hash": "aaaa", "initrd-hash": "bbbb", "cmdline-hash": "cccc", "secrets": [ { "type": "luks-passphrase", "passphrase": "<blah>" } ] }
Registering an individual VM is kind of an interesting perspective. With kbs-rs, rather than registering an individual VM, the guest owner registers secrets and can set a policy (which specifies launch parameters like the SEV policy) for each secret. Then secrets are released to VMs that meet the policy requirements. There isn't really any tracking of an individual VM (besides the secure channel briefly used for secret injection). In SEV(-ES) individual VMs don't really have an identity separate from their launch parameters and launch measurement. I guess we're not trying to design an AS here, so we can leave for another time.
* Libvirt: Request permission to launch a VM on a host
POST /vm/<UUID>/launch
Since I've been thinking about VM identity a little differently, our setup for the UUID is a bit different as well. We use a UUID to track a connection (as in TIK, TEK), but this is not known at the time of the launch request (aka GetBundle request). Instead, the UUID is returned in the launch response so that it can be used for the secret request.
If we have a UUID in the launch request, it creates an interesting coordination requirement between the CSP and the AS. Imagine a situation where we spin up a bunch of identical VMs dynamically. Here the guest owner would have to register a new VM with a UUID for each instance and then get all of that information over to libvirt. This seems difficult. Shifting focus from VMs to secrets and policies and automatically provisioning the UUID sidesteps this issue. This is especially important for something like Confidential Containers (of course CC doesn't use libvirt but we'd like to share the AS API).
Request body:
{ "pdh": "<blah>", "cert-chain": "<blah>", "cpu-id": "<CPU ID>",
There's an interesting question as to whether the CEK should be signed by libvirt or by the AS.
...other relevant bits... }
Service decides if the proposed host is acceptable
Response body (on success)
{ "session": "<blah>", "owner-cert": "<blah>", "policy": 3,
I've assumed that the policy would be part of the request, having been set in the libvirt XML.
}
* Libvirt: Request secrets to inject to launched VM
POST /vm/<UUID>/validate
Request body:
{ "api-minor": 1, "api-major": 2, "build-id": 241, "policy": 3, "measurement": "<blah>", "firmware-hash": "xxxx", "cpu-count": 3, ....other relevant stuff.... }
Service validates the measurement...
Response body (on success):
{ "secret-header": "<blah>", "secret-table": "<blah>",
Referring to secret payload format as OVMF secret table?
}
Looks pretty good overall. I am a bit worried about the UUID stuff. -Tobin
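Expanding on the "Service validates the measurement..." step above: for SEV(-ES) the check the AS has to perform is essentially the HMAC construction from the AMD SEV API's LAUNCH_MEASURE command, roughly as sketched below (byte packing simplified; the TIK is the key material the guest owner generated when building the launch session):

import hashlib
import hmac
import struct

def expected_measurement(api_major, api_minor, build_id, policy, launch_digest, mnonce, tik):
    # MEASURE = HMAC-SHA256(0x04 || API_MAJOR || API_MINOR || BUILD ||
    #                       GCTX.POLICY || GCTX.LD || MNONCE, key = TIK)
    msg = (bytes([0x04, api_major, api_minor, build_id])
           + struct.pack("<I", policy)
           + launch_digest      # digest over the launched firmware (plus hashed kernel table, if used)
           + mnonce)
    return hmac.new(tik, msg, hashlib.sha256).digest()

def measurement_ok(measurement_blob, tik, **launch_params):
    # QEMU reports MEASURE (32 bytes) followed by MNONCE (16 bytes).
    measure, mnonce = measurement_blob[:32], measurement_blob[32:48]
    return hmac.compare_digest(measure, expected_measurement(mnonce=mnonce, tik=tik, **launch_params))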
So we can see there are only a couple of REST API calls we need to be able to define. If we could do that then creating a SEV/SEV-ES enabled guest with libvirt would not involve anything more complicated for the mgmt app than providing the URI of the guest owner's attestation service and an identifier for the VM. i.e. the XML config could be merely:
<launchSecurity type="sev"> <attestation vmid="57f669c2-c427-4132-bc7a-26f56b6a718c" service="http://somehost/some/url"/> </launchSecurity>
And then invoke virDomainCreate as normal, just as with any other libvirt / QEMU guest. No special workflow is required by the mgmt app. There is a small extra task for the guest owner to register the existence of their VM with the attestation service. Aside from that the only change to the way they interact with the cloud mgmt app is to provide the VM ID and URI for the attestation service. No need to learn custom APIs for each different cloud vendor, for dealing with fetching launch measurements or injecting secrets.
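To underline how little would change for the mgmt app under this proposal, here is a sketch using the libvirt Python bindings; note the <attestation> element is only the proposed schema from this thread, not something libvirt supports today, and the domain XML is trimmed to the relevant parts:

import libvirt

# The only SEV-specific inputs the app needs from the guest owner: the ID they
# registered with their attestation service, and the service URI (examples only).
VM_ID = "57f669c2-c427-4132-bc7a-26f56b6a718c"
ATTESTATION_URI = "https://attestation.example.org/some/url"

domain_xml = f"""
<domain type='kvm'>
  <name>sev-guest</name>
  <memory unit='GiB'>4</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64' machine='q35'>hvm</type></os>
  <launchSecurity type='sev'>
    <attestation vmid='{VM_ID}' service='{ATTESTATION_URI}'/>
  </launchSecurity>
</domain>
"""

conn = libvirt.open("qemu:///system")
# A single call, exactly as for a non-confidential guest; libvirt itself would
# drive the launch / measurement / secret-injection dance against the AS.
dom = conn.createXML(domain_xml, 0)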
Finally this attestation service REST protocol doesn't have to be something controlled or defined by libvirt. I feel like it could be a protocol that is defined anywhere, with libvirt merely being one consumer of it. Other apps that directly use QEMU may also wish to avail themselves of it.
All that really matters from libvirt POV is:
- The protocol definition exists to enable the above workflow, with a long-term API stability guarantee that it isn't going to change in incompatible ways
- There exists a fully open source reference implementation of sufficient quality to deploy in the real world
I know https://github.com/slp/sev-attestation-server exists, but its current design has assumptions about it being used with libkrun AFAICT. I have heard of others interested in writing similar servers, but I've not seen code.
Tobin has just released kbs-rs, which has similar properties to what you're proposing above, aiming to solve similar issues. Better to talk with him before building yet another attestation server.
-Dov
We are at a crucial stage where mgmt apps are looking to support measured boot with SEV/SEV-ES and if we delay they'll all go off and do their own thing, and it'll be too late, leading to https://xkcd.com/927/.
Especially for apps using libvirt to manage QEMU, I feel we have got a few months window of opportunity to get such a service available, before they all end up building out APIs for the tedious manual workflow, reinventing the wheel.
Regards, Daniel

On Fri, Feb 25, 2022 at 04:11:27PM -0500, Tobin Feldman-Fitzthum wrote:
Some comments on the example protocol stuff
On 2/23/22 1:38 PM, Dov Murik wrote:
+cc Tobin, James
On 23/02/2022 19:28, Daniel P. Berrangé wrote:
What could this look like from POV of an attestation server API, if we assume HTTPS REST service with a simple JSON payload?
* Guest Owner: Register a new VM to be booted:
We're trying to set the API between libvirt and the AS. I would assume that the API between the AS and the guest owner is out of scope, although maybe this is just an example.
Agreed, it is out of scope from libvirt's POV. I just wanted to illustrate a possible end-to-end solution for all parties.
POST /vm/<UUID>
Note that this is a privileged endpoint (unlike the ones below).
Request body:
{ "scheme": "amd-sev", "cloud-cert": "certificate of the cloud owner that signs the PEK", "policy": 0x3, "cpu-count": 3, "firmware-hashes": [ "xxxx", "yyyy", ],
I think we'd need to provide the full firmware binary rather than just the hash if we plan to calculate the launch digest in the AS. Alternatively the guest owner can calculate the launch digest themself and pass it to the AS. This is what kbs-rs does. There are pros and cons to both and we should probably support both (which should be easy).
Since this particular endpoint is an interface exclusively between the guest owner and the AS, it could be said to be an API that does not need standardization. Different implementations may choose to approach it in different ways based on how they evaluate the tradeoffs.
"kernel-hash": "aaaa", "initrd-hash": "bbbb", "cmdline-hash": "cccc", "secrets": [ { "type": "luks-passphrase", "passphrase": "<blah>" } ] }
Registering an individual VM is kind of an interesting perspective. With kbs-rs, rather than registering an individual VM, the guest owner registers secrets and can set a policy (which specifies launch parameters like the SEV policy) for each secret. Then secrets are released to VMs that meet the policy requirements. There isn't really any tracking of an individual VM (besides the secure channel briefly used for secret injection). In SEV(-ES) individual VMs don't really have an identity separate from their launch parameters and launch measurement. I guess we're not trying to design an AS here, so we can leave for another time.
Agree with what you say here. The distinction of registering a single VM vs registering an image that can be instantiated to many VMs can likely be a decision for the specific implementation of the AS. The reason I suggested registering an individual VM was that I was trying to more closely match the behaviour the virt platform would have if it was not talking directly with an attestation service. In the manual case a guest owner feeds in the launch blob for each VM at boot time. Thus the compute host can't boot instances of the VM without explicit action from the user. If the AS releases the launch blob and secrets upon request from the compute host, it can potentially boot many instances of a VM even if the guest owner only asked for one. Of course the host admin can't get into the VMs to do anything, but the mere act of being able to launch many instances without guest owner action might lead to a denial of service attack on other things that the VM talks to. None the less this risk is easy to mitigate, even if you're just registering an image with the AS. It could easily be set to require a confirmation before releasing more than 'n' instances of the launch blob and secrets.
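The mitigation described above is cheap to implement on the AS side; a tiny sketch (all field names invented) of the kind of guard an AS could apply before releasing the launch blob and secrets:

# Release launch artifacts for at most "max-instances" boots of a registered
# image, unless the guest owner has explicitly confirmed more.
launch_counts = {}

def authorize_launch(vm_id, registration):
    count = launch_counts.get(vm_id, 0)
    if count >= registration.get("max-instances", 1) and not registration.get("owner-confirmed", False):
        return False   # hold the launch blob/secrets until the guest owner confirms
    launch_counts[vm_id] = count + 1
    return True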
* Libvirt: Request permission to launch a VM on a host
POST /vm/<UUID>/launch
Since I've been thinking about VM identity a little differently, our setup for the UUID is a bit different as well. We use a UUID to track a connection (as in TIK, TEK), but this is not known at the time of the launch request (aka GetBundle request). Instead, the UUID is returned in the launch response so that it can be used for the secret request.
If we have a UUID in the launch request, it creates an interesting coordination requirement between the CSP and the AS. Imagine a situation where we spin up a bunch of identical VMs dynamically. Here the guest owner would have to register a new VM with a UUID for each instance and then get all of that information over to libvirt. This seems difficult. Shifting focus from VMs to secrets and policies and automatically provisioning the UUID sidesteps this issue. This is especially important for something like Confidential Containers (of course CC doesn't use libvirt but we'd like to share the AS API).
From the libvirt POV, we don't have to declare what the UUID represents. It could represent a single VM instance, or it could represent a VM image supporting many instances. Libvirt wouldn't care, the UUID is just a token passed to the AS to indicate what libvirt needs to launch.
Request body:
{ "pdh": "<blah>", "cert-chain": "<blah>", "cpu-id": "<CPU ID>",
There's an interesting question as to whether the CEK should be signed by libvirt or by the AS.
I'm mostly ambivalent on that question - either way works well enough, though if libvirt needs to do the signing, then libvirt needs to be able to talk to AMD's REST service to acquire the signature. If libvirt doesn't have network access from the compute host, it might not be possible for it to acquire the signature. In terms of the protocol spec, both approaches could be supported. The 'cpu-id' can be provided unconditionally. The 'cert-chain' can be declared to be signed or unsigned. If libvirt is capable of getting a signature, it could provide the cert-chain with the CEK signed. If not, then the AS could acquire the signature itself as a fallback.
...other relevant bits... }
Service decides if the proposed host is acceptable
Response body (on success)
{ "session": "<blah>", "owner-cert": "<blah>", "policy": 3,
I've assumed that the policy would be part of the request, having been set in the libvirt XML.
My thought was that since the guest owner needs to explicitly give the AS the policy in order to create the correct launch blob, it is pointless for the guest owner to give it to libvirt again in the XML. Just give it to the AS and let it pass it on to libvirt automatically. Again though, both approaches work - even if libvirt already has the policy, there's no harm in the AS providing the policy in its response again.
* Libvirt: Request secrets to inject to launched VM
POST /vm/<UUID>/validate
Request body:
{ "api-minor": 1, "api-major": 2, "build-id": 241, "policy": 3, "measurement": "<blah>", "firmware-hash": "xxxx", "cpu-count": 3, ....other relevant stuff.... }
Service validates the measurement...
Response body (on success):
{ "secret-header": "<blah>", "secret-table": "<blah>", Referring to secret payload format as OVMF secret table?
Essentially I intended it to be the data that is expected by the 'sev-inject-launch-secret' QMP command from QEMU, which is consumed by libvirt. If using the OVMF firmware with the guest then it's the OVMF secret table, with other firmware it is whatever they declare it to be. Calling it 'secret-payload' probably makes more sense. Regards, Daniel -- |: https://berrange.com -o- https://www.flickr.com/photos/dberrange :| |: https://libvirt.org -o- https://fstop138.berrange.com :| |: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
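For completeness, a sketch of how the two fields from the /validate response could map onto QEMU's sev-inject-launch-secret QMP command (here driven directly over a QMP socket purely for illustration; under the proposal libvirt would do this internally, and the field names come from the example response above):

import json
import socket

def inject_launch_secret(qmp_path, as_response):
    # as_response: JSON body returned by POST /vm/<UUID>/validate above.
    cmd = {
        "execute": "sev-inject-launch-secret",
        "arguments": {
            "packet-header": as_response["secret-header"],  # base64 launch secret packet header
            "secret": as_response["secret-table"],          # base64 secret payload (e.g. OVMF secret table)
        },
    }
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.connect(qmp_path)
    s.recv(4096)                                            # QMP greeting banner
    s.sendall(b'{"execute": "qmp_capabilities"}')
    s.recv(4096)
    s.sendall(json.dumps(cmd).encode())
    return json.loads(s.recv(4096))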

On 3/3/22 12:40 PM, Daniel P. Berrangé wrote:
On Fri, Feb 25, 2022 at 04:11:27PM -0500, Tobin Feldman-Fitzthum wrote:
Some comments on the example protocol stuff
On 2/23/22 1:38 PM, Dov Murik wrote:
+cc Tobin, James
On 23/02/2022 19:28, Daniel P. Berrangé wrote:
What could this look like from POV of an attestation server API, if we assume HTTPS REST service with a simple JSON payload?
* Guest Owner: Register a new VM to be booted:
We're trying to set the API between libvirt and the AS. I would assume that the API between the AS and the guest owner is out of scope, although maybe this is just an example.
Agreed, it is out of scope from libvirt's POV. I just wanted to illustrate a possible end-to-end solution for all parties.
POST /vm/<UUID>
Note that this is a privileged endpoint (unlike the ones below).
Request body:
{ "scheme": "amd-sev", "cloud-cert": "certificate of the cloud owner that signs the PEK", "policy": 0x3, "cpu-count": 3, "firmware-hashes": [ "xxxx", "yyyy", ],
I think we'd need to provide the full firmware binary rather than just the hash if we plan to calculate the launch digest in the AS. Alternatively the guest owner can calculate the launch digest themself and pass it to the AS. This is what kbs-rs does. There are pros and cons to both and we should probably support both (which should be easy).
Since this particular endpoint is an interface exclusively between the guest owner and the AS, it could be said to be an API that does not need standardization. Different implementations may choose to approach it in different ways based on how they evaluate the tradeoffs.
"kernel-hash": "aaaa", "initrd-hash": "bbbb", "cmdline-hash": "cccc", "secrets": [ { "type": "luks-passphrase", "passphrase": "<blah>" } ] }
Registering an individual VM is kind of an interesting perspective. With kbs-rs, rather than registering an individual VM, the guest owner registers secrets and can set a policy (which specifies launch parameters like the SEV policy) for each secret. Then secrets are released to VMs that meet the policy requirements. There isn't really any tracking of an individual VM (besides the secure channel briefly used for secret injection). In SEV(-ES) individual VMs don't really have an identity separate from their launch parameters and launch measurement. I guess we're not trying to design an AS here, so we can leave for another time.
Agree with what you say here.
The distinction of registering a single VM vs registering an image that can be instantiated to many VMs can likely be a decision for the specific implementation of the AS.
The reason I suggested registering an individual VM was that I was trying to more closely match the behaviour the virt platform would have if it was not talking directly with an attestation service. In the manual case a guest owner feeds in the launch blob for each VM at boot time. Thus the compute host can't boot instances of the VM without explicit action from the user. If the AS releases the launch blob and secrets upon request from the compute host, it can potentially boot many instances of a VM even if the guest owner only asked for one.
Of course the host admin can't get into the VMs to do anything, but the mere act of being able to launch many instances without guest owner action might lead to a denial of service attack on other things that the VM talks to.
None the less this risk is easy to mitigate, even if you're just registering an image with the AS. It could easily be set to require a confirmation before releasing more than 'n' instances of the launch blob and secrets.
There are some very interesting questions on the borders of confidentiality and orchestration. In Confidential Containers we try to separate those things as much as possible. Confidential Computing can guarantee confidentiality, but is it the right technology for preventing DoS or resource starvation? If the underlying hardware doesn't provide guarantees about host behavior, why would we have any guarantees about the behavior of an orchestrator? On the other hand, you point out that it's actually easy to enforce certain guarantees via key release. There are other things we can do to regulate orchestration, but they usually involve the attestation server knowing more and more about the cloud environment. Fortunately we aren't designing an attestation server here, so we can skip these questions, but I think they're really interesting.
* Libvirt: Request permission to launch a VM on a host
POST /vm/<UUID>/launch
Since I've been thinking about VM identity a little differently, our setup for the UUID is a bit different as well. We use a UUID to track a connection (as in TIK, TEK), but this is not known at the time of the launch request (aka GetBundle request). Instead, the UUID is returned in the launch response so that it can be used for the secret request.
If we have a UUID in the launch request, it creates an interesting coordination requirement between the CSP and the AS. Imagine a situation where we spin up a bunch of identical VMs dynamically. Here the guest owner would have to register a new VM with a UUID for each instance and then get all of that information over to libvirt. This seems difficult. Shifting focus from VMs to secrets and policies and automatically provisioning the UUID sidesteps this issue. This is especially important for something like Confidential Containers (of course CC doesn't use libvirt but we'd like to share the AS API).
From the libvirt POV, we don't have to declare what the UUID represents. It could represent a single VM instance, or it could represent a VM image supporting many instances. Libvirt wouldn't care, the UUID is just a token passed to the AS to indicate what libvirt needs to launch.
I think you need to uniquely identify the connection between the PSP and the AS so that you use the same TIK/TEK to wrap the keys as was provided on startup. We don't want to reuse these. In other words the UUID should be unique for every startup.
Request body:
{ "pdh": "<blah>", "cert-chain": "<blah>", "cpu-id": "<CPU ID>",
There's an interesting question as to whether the CEK should be signed by libvirt or by the AS.
I'm mostly ambivalent on that question - either way works well enough, though if libvirt needs to do the signing, then libvirt needs to be able to talk to AMD's REST service to acquire the signature. If libvirt doesn't have network access from the compute host, it might not be possible for it to acquire the signature.
In terms of the protocol spec, both approaches could be supported. The 'cpu-id' can be provided unconditionally. The 'cert-chain' can be declared to be signed or unsigned. If libvirt is capable of getting a signature, it could provide the cert-chain with the CEK signed. If not, then the AS could acquire the signature itself as a fallback.
...other relevant bits... }
Service decides if the proposed host is acceptable
Response body (on success)
{ "session": "<blah>", "owner-cert": "<blah>", "policy": 3,
I've assumed that the policy would be part of the request, having been set in the libvirt XML.
My thought was that since the guest owner needs to explicitly give the AS the policy in order to create the correct launch blob, it is pointless for the guest owner to give it to libvirt again in the XML. Just give it to the AS and let it pass it on to libvirt automatically. Again though, both approaches work - even if libvirt already has the policy, there's no harm in the AS providing the policy in its response again.
Yeah, not a big deal either way.
* Libvirt: Request secrets to inject to launched VM
POST /vm/<UUID>/validate
Request body:
{ "api-minor": 1, "api-major": 2, "build-id": 241, "policy": 3, "measurement": "<blah>", "firmware-hash": "xxxx", "cpu-count": 3, ....other relevant stuff.... }
Service validates the measurement...
Response body (on success):
{ "secret-header": "<blah>", "secret-table": "<blah>", Referring to secret payload format as OVMF secret table?
Essentially I intended it to be the data that is expected by the 'sev-inject-launch-secret' QMP command from QEMU, which is consumed by libvirt. If using the OVMF firmware with the guest then it's the OVMF secret table, with other firmware it is whatever they declare it to be. Calling it 'secret-payload' probably makes more sense.
Ok -Tobin
Regards, Daniel
participants (5): Daniel P. Berrangé, Dov Murik, Dr. David Alan Gilbert, James Bottomley, Tobin Feldman-Fitzthum