On Fri, Feb 25, 2022 at 04:11:27PM -0500, Tobin Feldman-Fitzthum wrote:
> Some comments on the example protocol stuff
> On 2/23/22 1:38 PM, Dov Murik wrote:
> +cc Tobin, James
>
> On 23/02/2022 19:28, Daniel P. Berrangé wrote:
>>
>>
>> What could this look like from POV of an attestation server API, if
>> we assume an HTTPS REST service with a simple JSON payload.
>>
>> * Guest Owner: Register a new VM to be booted:
> We're trying to set the API between libvirt and the AS. I would assume
> that the API between the AS and the guest owner is out of scope,
> although maybe this is just an example.
Agreed, it is out of scope from libvirt's POV. I just wanted to
illustrate a possible end-to-end solution for all parties.
>>
>> POST /vm/<UUID>
> Note that this is a privileged endpoint (unlike the ones below).
>>
>> Request body:
>>
>> {
>>     "scheme": "amd-sev",
>>     "cloud-cert": "certificate of the cloud owner that signs the PEK",
>>     "policy": 0x3,
>>     "cpu-count": 3,
>>     "firmware-hashes": [
>>         "xxxx",
>>         "yyyy",
>>     ],
> I think we'd need to provide the full firmware binary rather than just
> the hash if we plan to calculate the launch digest in the AS.
> Alternatively the guest owner can calculate the launch digest themselves
> and pass it to the AS. This is what kbs-rs does. There are pros and cons
> to both, and we should probably support both (which should be easy).
Since this particular endpoint is an interface exclusively between
the guest owner and the AS, it could be said to be an API that does
not need standardization. Different implementations may choose to
approach it in different ways based on how they evaluate the tradeoffs.
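
To make the second option concrete, here's a rough Python sketch of a
guest owner pre-computing the digest locally and registering it, assuming
plain SEV where the measured contents are just the firmware image (the
'launch-digest' field name is hypothetical - the example body above uses
'firmware-hashes' - and SEV-ES / measured direct boot would add inputs):

  # Sketch: guest owner computes the launch digest themselves and registers
  # it with the AS instead of uploading the raw firmware binary.
  # Assumes plain SEV, where the measured contents are just the firmware.
  import hashlib
  import requests   # assumed HTTP client, matching the HTTPS/JSON example

  def register_vm(as_url, uuid, firmware_path, policy, secrets):
      with open(firmware_path, "rb") as f:
          launch_digest = hashlib.sha256(f.read()).hexdigest()

      body = {
          "scheme": "amd-sev",
          "policy": policy,
          # hypothetical field: precomputed digest instead of the binary
          "launch-digest": launch_digest,
          "secrets": secrets,
      }
      resp = requests.post(f"{as_url}/vm/{uuid}", json=body, timeout=30)
      resp.raise_for_status()
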
>> "kernel-hash": "aaaa",
>> "initrd-hash": "bbbb",
>> "cmdline-hash": "cccc",
>> "secrets": [
>> {
>> "type": "luks-passphrase",
>> "passphrase": "<blah>"
>> }
>> ]
>> }
>>
> Registering an individual VM is kind of an interesting perspective. With
> kbs-rs, rather than registering an individual VM, the guest owner
> registers secrets and can set a policy (which specifies launch
> parameters like the SEV policy) for each secret. Then secrets are
> released to VMs that meet the policy requirements. There isn't really
> any tracking of an individual VM (besides the secure channel briefly
> used for secret injection). In SEV(-ES) individual VMs don't really have
> an identity separate from their launch parameters and launch
> measurement. I guess we're not trying to design an AS here, so we can
> leave that for another time.
Agree with what you say here.
The distinction of registering a single VM vs registering an image
that can be instantiated to many VMs can likely be a decision for
the specific implementation of the AS.
The reason I suggested registering an individual VM was that I was
trying to more closely match the behaviour the virt platform would
have if it was not talking directly with an attestation service.
In the manual case a guest owner feeds in the launch blob for each
VM at boot time. Thus the compute host can't boot instances of the
VM without explicit action from the user. If the AS releases the
launch blob and secrets upon request from the compute host, it can
potentially boot many instances of a VM even if the guest owner
only asked for one.
Of course the host admin can't get into the VMs to do anything, but
the mere act of being able to launch many instances without guest
owner action might lead to a denial of service attack on other
things that the VM talks to.
Nonetheless, this risk is easy to mitigate, even if you're just
registering an image with the AS. It could easily be set to
require a confirmation before releasing more than 'n' instances
of the launch blob and secrets.
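
As a toy illustration of that mitigation (this is hypothetical AS-internal
logic with made-up names, not something that needs to appear in the wire
protocol):

  # Toy sketch: the AS refuses to hand out more than 'max_unconfirmed'
  # launch blobs for a registered image without explicit confirmation
  # from the guest owner.
  launched = {}          # uuid -> number of launch blobs already released
  confirmed_extra = {}   # uuid -> extra instances the owner has approved

  def may_release_launch_blob(uuid, max_unconfirmed=1):
      count = launched.get(uuid, 0)
      allowed = max_unconfirmed + confirmed_extra.get(uuid, 0)
      if count >= allowed:
          # e.g. return HTTP 409 and notify the guest owner for confirmation
          return False
      launched[uuid] = count + 1
      return True
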
>>
>>
>> * Libvirt: Request permission to launch a VM on a host
>>
>> POST /vm/<UUID>/launch
> Since I've been thinking about VM identity a little differently, our
> setup for the UUID is a bit different as well. We use a UUID to track a
> connection (as in TIK, TEK), but this is not known at the time of the
> launch request (aka GetBundle request). Instead, the UUID is returned in
> the launch response so that it can be used for the secret request.
> If we have a UUID in the launch request, it creates an interesting
> coordination requirement between the CSP and the AS. Imagine a situation
> where we spin up a bunch of identical VMs dynamically. Here the guest
> owner would have to register a new VM with a UUID for each instance and
> then get all of that information over to libvirt. This seems difficult.
> Shifting focus from VMs to secrets and policies and automatically
> provisioning the UUID sidesteps this issue. This is especially important
> for something like Confidential Containers (of course CC doesn't use
> libvirt but we'd like to share the AS API).
From the libvirt POV, we don't have to declare what the UUID
represents. It could represent a single VM instance, or it could
represent a VM image supporting many instances. Libvirt wouldn't
care; the UUID is just a token passed to the AS to indicate what
libvirt needs to launch.
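
To illustrate how little libvirt would need to care, here's a rough
sketch (the endpoint shapes and the 'uuid' response field are purely
illustrative; the second form is roughly the kbs-rs style where the AS
mints the connection identifier):

  # Sketch: libvirt treats the identifier as an opaque token. Either it is
  # configured with one up front and puts it in the URL, or (kbs-rs style)
  # the AS returns a connection UUID in the launch response. Either way
  # libvirt just threads the value through to the later requests.
  import requests

  def request_launch(as_url, launch_req, token=None):
      if token is not None:
          resp = requests.post(f"{as_url}/vm/{token}/launch", json=launch_req)
      else:
          resp = requests.post(f"{as_url}/launch", json=launch_req)
      resp.raise_for_status()
      data = resp.json()
      # kbs-rs style: the AS returns the identifier to use from now on
      token = data.get("uuid", token)
      return token, data
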
>>
>> Request body:
>>
>> {
>>     "pdh": "<blah>",
>>     "cert-chain": "<blah>",
>>     "cpu-id": "<CPU ID>",
> There's an interesting question as to whether the CEK should be signed
> by libvirt or by the AS.
I'm mostly ambivalent on that question - either way works well
enough, though if libvirt needs to do the signing, then libvirt
needs to be able to talk to AMD's REST service to acquire the
signature. If libvirt doesn't have network access from the
compute host, it might not be possible for it to acquire the
signature.
In terms of the protocol spec, both approaches could be supported.
The 'cpu-id' can be provided unconditionally. The 'cert-chain' can
be declared to be signed or unsigned. If libvirt is capable of
getting a signature, it could provide the cert-chain with the CEK
signed. If not, then the AS could acquire the signature itself as
a fallback.
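
In code the fallback decision might look something like this rough
sketch (the KDS URL pattern here is from memory and may not be exact,
and the 'cek-signed' flag is just a hypothetical way of declaring the
cert-chain signed or unsigned):

  # Sketch: whichever side has outbound network access fetches the signed
  # CEK for this CPU from AMD's key server; the other side falls back if
  # needed. Verify the KDS URL before relying on it.
  import requests

  AMD_KDS_CEK = "https://kdsintf.amd.com/cek/id/"   # assumed base URL

  def fetch_signed_cek(hw_id_hex):
      # returns the DER-encoded CEK certificate signed by AMD's ASK
      resp = requests.get(AMD_KDS_CEK + hw_id_hex, timeout=30)
      resp.raise_for_status()
      return resp.content

  def build_launch_request(pdh, cert_chain, hw_id_hex, have_network):
      # libvirt would merge the signed CEK into cert-chain when it can;
      # the hypothetical 'cek-signed' flag tells the AS whether it still
      # needs to obtain the signature itself.
      req = {"pdh": pdh, "cert-chain": cert_chain, "cpu-id": hw_id_hex}
      req["cek-signed"] = have_network
      return req
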
>>     ...other relevant bits...
>> }
>>
>> Service decides if the proposed host is acceptable
>>
>> Response body (on success)
>>
>> {
>>     "session": "<blah>",
>>     "owner-cert": "<blah>",
>>     "policy": 3,
> I've assumed that the policy would be part of the request, having been
> set in the libvirt XML.
My thought was that since the guest owner needs to explicitly give
the AS the policy in order to create the correct launch blob, it
is pointless for the guest owner to give it to libvirt again in
the XML. Just give it to the AS and let it pass it on to libvirt
automatically. Again though, both approaches work - even if libvirt
already has the policy, there's no harm in the AS providing the
policy in its response again.
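
As a tiny illustration, libvirt could just take whatever policy the AS
hands back in the launch response, and if a policy was also configured
in the XML, treat that purely as a cross-check (hypothetical helper,
field name as in the example response above):

  # Sketch: take the policy from the AS response; if the domain XML also
  # carried one, only sanity-check that the two agree.
  def resolve_policy(as_response, xml_policy=None):
      as_policy = as_response["policy"]
      if xml_policy is not None and xml_policy != as_policy:
          raise ValueError(
              f"policy mismatch: XML says {xml_policy:#x}, AS says {as_policy:#x}")
      return as_policy
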
>> * Libvirt: Request secrets to inject to launched VM
>>
>> POST /vm/<UUID>/validate
>>
>> Request body:
>>
>> {
>>     "api-minor": 1,
>>     "api-major": 2,
>>     "build-id": 241,
>>     "policy": 3,
>>     "measurement": "<blah>",
>>     "firmware-hash": "xxxx",
>>     "cpu-count": 3,
>>     ....other relevant stuff....
>> }
>>
>> Service validates the measurement...
>>
>> Response body (on success):
>>
>> {
>>     "secret-header": "<blah>",
>>     "secret-table": "<blah>",
> Referring to secret payload format as OVMF secret table?
Essentially I intended it to be the data that is expected by
the 'sev-inject-launch-secret' QMP command from QEMU, which
is consumed by libvirt. If using the OVMF firmware with the
guest, then it's the OVMF secret table; with other firmware it
is whatever they declare it to be. Calling it 'secret-payload'
probably makes more sense.
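
With that rename, the tail end of the flow on the libvirt side could
look roughly like this sketch (the /validate endpoint and the
'secret-payload' field are just the illustrative names from this
thread; the QMP argument names are QEMU's existing ones for
sev-inject-launch-secret):

  # Sketch: post the launch measurement to the AS, then turn the response
  # into QEMU's sev-inject-launch-secret QMP command.
  import json
  import requests

  def inject_secret(as_url, uuid, measurement_body, qmp_send):
      resp = requests.post(f"{as_url}/vm/{uuid}/validate",
                           json=measurement_body, timeout=30)
      resp.raise_for_status()
      data = resp.json()

      qmp_cmd = {
          "execute": "sev-inject-launch-secret",
          "arguments": {
              "packet-header": data["secret-header"],   # base64
              "secret": data["secret-payload"],         # base64
          },
      }
      # qmp_send is whatever QMP transport libvirt already has to the guest
      qmp_send(json.dumps(qmp_cmd))
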
Regards,
Daniel
--
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|