----- "Daniel P. Berrange" <berrange(a)redhat.com> wrote:

> > Not quite: the main case of a "dumb" client would be large-scale
> > virtualization management software that contains a primary store
> > of encryption information, and gives each node access only to the
> > keys that node currently needs to run its domains; because each
> > node has access to only a limited set of keys, an attacker who
> > compromises a single node cannot read the disk images of all
> > domains managed in the entire site, even if the disk image storage
> > (e.g. unencrypted NFS) does not allow managing access by each node
> > separately.
> >
> > Such a client must be able to transfer the actual secrets, not only
> > identifiers, to libvirt. (The idea of a "dumb" client, one that
> > does not know the specifics of the format, is an additional feature
> > on top, but one that implies that the client does send the actual
> > secrets.)

> This implies a flow of secrets
>
>    Key server  --\
>                   +-> libvirt client -> libvirt daemon -> qemu
>    MGMT server --/
>
> This does not in fact guarantee that secrets for a particular
> node are only used on the node for which they are intended,
> because the key server cannot be sure which libvirt daemons
> the libvirt client is connected to.

The client in this case is the central, fully trusted management
system (e.g. oVirt); there is no need to protect against it. A more
likely flow is:

   MGMT client (no knowledge of secrets)
     |
     v
   MGMT server + key server (integrated or separate but cooperating)
     |
     v
   libvirt daemon
     |
     v
   qemu

> What I am suggesting is that the libvirt daemon should communicate
> with the key server directly in all cases, and take the client
> out of the loop. The client should merely indicate whether it
> wants encryption or not, and never be able to directly access
> any key material itself. With a direct trust relationship
> between the key server and each libvirtd daemon, you do now
> have a guarantee that keys are only ever used on the node for
> which they are intended. You also have the additional guarantee
> that no libvirt client can ever see any key secrets or passphrases,
> since it has been taken completely out of the loop.

As far as I understand it, the whole point of virtual machine
encryption is that the nodes are _not_ trusted, and different
encryption keys protect data on different nodes. If all nodes are
trusted, what additional functionality does volume encryption with
per-volume keys provide? If the nodes are trusted to read only the
data of the domains they currently host, the nodes could just as well
use an encrypted local hard drive to store all images, or share a
single key that encrypts all images stored on NFS/SAN.

   Key server
      |
      V
   MGMT server -> libvirt client -> libvirt daemon -> qemu

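To make the flow sketched above concrete, here is roughly what the
management server, acting as the libvirt client, would do per node (a
sketch only: fetch_key_for() is a hypothetical stand-in for the
key-server lookup, and the caller is assumed to supply domain XML with
a %s placeholder for the passphrase, using the inline <encryption>
secret proposed in this series):

/* Push one domain, with the one passphrase this node needs, to the
 * node's libvirtd.  The node never sees keys for other nodes' domains. */
#include <stdio.h>
#include <libvirt/libvirt.h>

extern const char *fetch_key_for(const char *volume);  /* hypothetical */

int start_encrypted_domain(const char *node_uri,       /* e.g. "qemu+tls://node1/system" */
                           const char *domain_xml_fmt, /* domain XML, %s = passphrase */
                           const char *volume)
{
    virConnectPtr conn = virConnectOpen(node_uri);
    if (!conn)
        return -1;

    /* Transfer the actual secret, not an identifier. */
    char xml[8192];
    snprintf(xml, sizeof(xml), domain_xml_fmt, fetch_key_for(volume));

    int ret = -1;
    virDomainPtr dom = virDomainCreateXML(conn, xml, 0);
    if (dom) {
        virDomainFree(dom);
        ret = 0;
    }
    virConnectClose(conn);
    return ret;
}
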
> > Storage of secrets in a separate keystore is more important for
> > "local" libvirt deployments, where libvirt manages the primary,
> > long-term store of the secrets.

> Nope, I think that having a keystore for all scenarios is the
> desirable end goal. Passing secrets around in the XML should
> ideally never be done at all - we should aim to always have a
> keystore that can be used. Secrets in the XML would just be a
> fallback for rarely used niche cases, disaster recovery, or
> experimentation.

That means that any deployment of more than one node with migration
requires a separate server providing a shared keystore, even if there
is only one client to manage the nodes. The N nodes x N "management
consoles" case requires a centralized key store, but it's not
necessary to impose it on the "1 management console" case.

> > > 2. A desktop key agent (eg gnome-keyring)
> > >
> > >    This would be useful for the unprivileged libvirtd instances
> > >    that run in the context of the desktop login session. Users
> > >    already have SSH, GPG, website keys stored here, so having
> > >    keys for their VM disks is obviously desirable
> >
> > Another option here is to let the client store the secrets in
> > gnome-keyring, and transfer them to libvirt only when starting
> > a domain (especially when there are no persistent domains).
> > That doesn't affect the design in any way, though.

> This is undesirable because it requires that any client which
> wishes to start the guest must have access to the secrets. We really
> need to be able to have separation here, so that when we introduce
> fine-grained access control, you can set up separate roles for users
> who can access / work with secrets, vs users who can start/define
> guests.

> > > 3. An off-node key management server
> > >
> > >    This would be useful for a large scale data center / cluster
> > >    cloud deployment. This is good for management scalability
> > >    and better separation of responsibilities of administration.
> > >
> > > If no keystore is in use, then clearly all keys must go in & out
> > > of libvirt using the XML, which is pretty much what you're doing
> > > in this series. I would say, though, there is no point in clearing
> > > the secret from the virStorageVolDefPtr instance/XML after volume
> > > creation, since the secret is going to be kept in memory forever
> > > for any guest using the volume. By not clearing the secret, an
> > > app could create a volume, requesting automatic key generation,
> > > then just do virStorageVolDumpXML(vol, VIR_STORAGE_VOL_XML_SECURE)
> > > to extract it and pass it into the XML for the guest that's
> > > created.

> > That is unreliable with the current implementation, because a pool
> > refresh creates all information about volumes anew by reading the
> > volume files; therefore, after any pool refresh libvirt "forgets"
> > the secrets directly associated with a volume (not secrets
> > associated with a use of a volume in a domain). If client A creates
> > a volume, client B can refresh the pool before client A is able to
> > read the automatically generated secrets. One of the reasons the
> > patch clears the information immediately is to make sure a
> > (virStorageVolCreateXML; virStorageVolGetXMLDesc) sequence always
> > fails, so that no client is written to depend on this racy
> > operation sequence.

> > There is another, more important, reason why the node should never
> > return encryption data to anyone who can connect to it. Consider
> > the above-described situation of a large-scale virtualization
> > deployment that uses volume image encryption to restrict access of
> > nodes to volume data: if domain migration is supported, the nodes
> > (all of them, or at least nodes within some groups) must be able to
> > connect "read-write" to other nodes.

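To spell out the race in API terms (a sketch; error handling is
omitted, and VIR_STORAGE_VOL_XML_SECURE is the flag proposed in this
series):

#include <libvirt/libvirt.h>

char *create_and_read_generated_secret(virStoragePoolPtr pool,
                                       const char *vol_xml)
{
    /* Client A: create the volume, requesting automatic key generation. */
    virStorageVolPtr vol = virStorageVolCreateXML(pool, vol_xml, 0);
    if (!vol)
        return NULL;

    /* ...window: client B may call virStoragePoolRefresh(pool, 0) here,
     * which rebuilds all volume definitions from the image files and
     * drops the generated secret from libvirt's memory... */

    /* Client A: try to read the secret back - the racy step. */
    char *xml = virStorageVolGetXMLDesc(vol, VIR_STORAGE_VOL_XML_SECURE);
    virStorageVolFree(vol);
    return xml;   /* may no longer contain the secret */
}
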
> This is just another argument for taking clients out of the loop
> completely and having the libvirt daemon always talk directly with
> keystores.

> > > With a keystore we'd likely need a handful of APIs
> > >
> > >  - create a secret providing a passphrase
> > >  - list all known secret UUIDs
> > >  - get the passphrase associated with a secret
> > >  - delete a secret based on UUID
> > >  - lookup a secret UUID based on a disk path

>
> We might not need all of these. There are three main use
> cases for storing keys outside of XML:
>
> 1) This is a "small-scale deployment", where libvirt is the
> primary key store, and anyone that is able to connect to
> libvirtd is implicitly allowed to use the secrets. In this
> case the keystore can be managed completely automatically -
> creating a volume, or perhaps using it for the first time,
> implies storing a secret, and deleting a volume implies
> deleting a secret. Secrets are identified using a volume-
> unique object (is that a path, or an UUID?), which is
> completely transparent to the client. As long as the volume
> is "known" to libvirt, the client does not even need to
> specify any <encryption> element when creating a domain.
> (1a, a "small-scale deployment" where there are multiple
> clients that should be protected against each other, and
> libvirt is the primary key store, does not make sense
> unless a client account system is added to libvirt.
> 2) The secrets are primarily stored outside libvirt (to
> restrict node access to image data), and libvirt stores
> only secrets for currently defined persistent domains (to
> support domain autostart). In this case the keystore can
> be managed completely automatically - persistent domain
> definition implies storing a secret, deleting the domain
> implies deleting a secret. Secrets are identified using
> a (domain, volume) pair, and this identifier is never
> exposed to the client.
> Deleting the domain does not imply deleting a secret, since
> secrets are really associated with disks, not domains.

It does, in this case - if the domain is not running on the node, the
secret should not be stored on the node at all.

> A disk may be shared by multiple domains.

If a disk is shared, there will be multiple (domain, volume) key ID
pairs, and deleting one instance of the secret does not delete the
others.

> In addition, when you delete a domain, libvirt does not delete the
> disk. It is the client's responsibility to delete disks after the
> fact.

In this scenario the client does the long-term key storage. libvirt
does not need to - and shouldn't - store the key merely because a
volume exists.

> In any case items 1 & 2 are really just 2 different implementations
> of the same concept. There is a keystore, and libvirt talks to
> it. Whether the keystore is local, or remote from the node, is a
> minor impl detail.

Whether libvirt can request any secret, even if it does not need to
know it, is not a minor detail.

> Use of a keystore of some form should be our primary goal here,
> since it takes the client out of the loop. When libvirt does ACLs
> on clients this is even more important, because when you revoke a
> client's access to libvirt you can be sure they don't have a
> record of any of the secrets, since they never had any opportunity
> to see / use them during the time they were authenticated.

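For concreteness, the "handful of APIs" listed above might map onto C
entry points along these lines (the names are purely illustrative -
nothing like this exists in libvirt today; they only give the keystore
operations a concrete shape):

#include <stddef.h>
#include <libvirt/libvirt.h>

/* create a secret providing a passphrase; fills in its new UUID */
int virKeystoreSecretCreate(virConnectPtr conn,
                            const unsigned char *passphrase, size_t len,
                            char *uuid /* VIR_UUID_STRING_BUFLEN */);

/* list all known secret UUIDs; returns the number of entries */
int virKeystoreSecretList(virConnectPtr conn, char **uuids, int maxuuids);

/* get the passphrase associated with a secret */
unsigned char *virKeystoreSecretGetValue(virConnectPtr conn,
                                         const char *uuid, size_t *len);

/* delete a secret based on UUID */
int virKeystoreSecretDelete(virConnectPtr conn, const char *uuid);

/* lookup a secret UUID based on a disk path */
int virKeystoreSecretLookupByPath(virConnectPtr conn,
                                  const char *disk_path,
                                  char *uuid /* VIR_UUID_STRING_BUFLEN */);
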
> > 3) An external key store (such as a KMIP server) is used. In this
> >    case the management of access rights, listing and deleting
> >    secrets, would be performed by interacting with the external
> >    key store directly. Secrets are identified by using the
> >    external key store identifiers, and the client and libvirtd
> >    send these identifiers to each other.

> As long as we allow secrets to be passed in the XML it will always
> be possible for a client to talk to a keystore, obtain the secrets
> and pass them in the XML. This is really not a desirable model to
> aim for by default though, because it requires clients to know about
> all the different types of keystore. If a client does not know about
> a particular type of keystore, then it becomes unable to manage
> guests on that libvirtd. Not to mention the issue of trust in the
> client & revocation of access.

> Yes, libvirt will grow the concept of a user in the future, as well
> as access controls on (user, object, operation) tuples.

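To illustrate what is at stake in the XML itself, compare the two
shapes the <encryption> element can take (a sketch: the inline form
follows this series, though the exact representation of the value is
illustrative; the by-reference form assumes some keystore-issued UUID,
and all values are made up):

/* (a) Secret in the XML: any client able to read this XML holds the key. */
const char *secret_inline =
    "<encryption format='qcow'>\n"
    "  <secret type='passphrase'>c2VzYW1l</secret>\n"  /* base64("sesame") */
    "</encryption>";

/* (b) Identifier in the XML: libvirtd resolves the UUID against the
 * keystore itself, so the client never handles any key material. */
const char *secret_by_reference =
    "<encryption format='qcow'>\n"
    "  <secret type='passphrase' uuid='b2f973b9-9229-4d4c-9a4c-5ba454a2d2ac'/>\n"
    "</encryption>";

A dumb client can emit the first form only if it already holds the
secret; the second form is what taking the client out of the loop
would look like.
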
Overall, it seems this all boils down to one thing: is libvirt
intended to be a simple "virtualization server" that locally performs
requested operations, managed by a separate "virtualization
management" client that has full knowledge about the site (nodes,
storage, secrets, access rights), with clients connecting to the
"virtualization management" software instead of libvirtd, or a
complete "virtualization management solution" that integrates all
site-wide knowledge, accessed by dumb clients using the libvirt
protocol? My reading of http://www.libvirt.org/goals.html implies the
former; you seem to talk about the latter model. Have I misunderstood
the role of libvirt in a large-scale deployment?

Thank you,
Mirek