Daniel P. Berrange wrote:
On Fri, Feb 20, 2009 at 01:52:31PM -0500, Daniel J Walsh wrote:
>
> +    def _default_seclabels(self):
> +        try:
> +            fd = open(selinux.selinux_virtual_domain_context_path(), 'r')
> +        except OSError, (err_no, msg):
> +            raise RuntimeError, \
> +                "failed to open SELinux virtual domain context: %s: %s %s" % \
> +                (selinux.selinux_virtual_domain_context_path(), err_no, msg)
> +
> +        label = fd.read()
> +        fd.close()
> +
> +        try:
> +            fd = open(selinux.selinux_virtual_image_context_path(), 'r')
> +        except OSError, (err_no, msg):
> +            raise RuntimeError, \
> +                "failed to open SELinux virtual image context: %s: %s %s" % \
> +                (selinux.selinux_virtual_image_context_path(), err_no, msg)
> +
> +        image = fd.read()
> +        fd.close()
> +
> +        return (label, image)
Opening local files in virt-install code is an approach we're trying to
get rid of, because it prevents you doing remote provisioning, e.g.
running virt-install on your laptop to provision a guest on a server in
the data center.
OK, the only goal here is a mechanism for the SELinux policy writer to
tell the tools what label to run virtual machine processes as, and what
label to put on their images.
> +    def is_conflict_seclabel(self, conn, seclabel):
> +        """
> +        Check if the security label is in use by any other VMs on the
> +        passed connection.
> +
> +        @param conn: connection to check for collisions on
> +        @type conn: libvirt.virConnect
> +
> +        @param seclabel: security label
> +        @type seclabel: str
> +
> +        @return: True if a collision, False otherwise
> +        @rtype: C{bool}
> +        """
> +        if not seclabel:
> +            return False
> +        vms = []
> +        # collect the running domains
> +        ids = conn.listDomainsID()
> +        for i in ids:
> +            try:
> +                vm = conn.lookupByID(i)
> +                vms.append(vm)
> +            except libvirt.libvirtError:
> +                # guest probably in process of dying
> +                logging.warn("Failed to lookup domain id %d" % i)
> +        # collect the defined (inactive) domains
> +        names = conn.listDefinedDomains()
> +        for name in names:
> +            try:
> +                vm = conn.lookupByName(name)
> +                vms.append(vm)
> +            except libvirt.libvirtError:
> +                # guest probably in process of dying
> +                logging.warn("Failed to lookup domain name %s" % name)
> +
> +        for vm in vms:
> +            doc = None
> +            ctx = None
> +            try:
> +                doc = libxml2.parseDoc(vm.XMLDesc(0))
> +                ctx = doc.xpathNewContext()
> +                label = ctx.xpathEval("/domain/seclabel/label")
> +                if label and label[0].content == seclabel:
> +                    return True
> +            except:
> +                continue
> +            finally:
> +                if ctx is not None:
> +                    ctx.xpathFreeContext()
> +                if doc is not None:
> +                    doc.freeDoc()
> +        return False
> +
> +    def _get_random_mcs(self):
> +        f1 = random.randrange(1024)
> +        f2 = random.randrange(1024)
> +        if f1 == f2:
> +            return "s0:c%s" % f1
> +        # categories must be listed in ascending order
> +        return "s0:c%s,c%s" % (min(f1, f2), max(f1, f2))
> +
> +    def gen_seclabels(self):
> +        mcs = self._get_random_mcs()
> +        con = self.default_seclabel.split(':')
> +        seclabel = "%s:%s:%s:%s" % (con[0], con[1], con[2], mcs)
> +        con = self.default_imagelabel.split(':')
> +        imagelabel = "%s:%s:%s:%s" % (con[0], con[1], con[2], mcs)
> +        return (seclabel, imagelabel)
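For illustration, the allocation logic in the patch can be exercised standalone. This is a sketch pulled out of the class; the two base contexts below are assumed example values, not the ones read from the host's SELinux config files:

```python
import random

# Assumed example base contexts (the real patch reads these from
# selinux_virtual_domain_context_path() / selinux_virtual_image_context_path()).
DEFAULT_SECLABEL = "system_u:system_r:qemu_t:s0"
DEFAULT_IMAGELABEL = "system_u:object_r:virt_image_t:s0"

def get_random_mcs():
    # Same scheme as the patch: one or two random categories out of 1024,
    # listed in ascending order.
    f1 = random.randrange(1024)
    f2 = random.randrange(1024)
    if f1 == f2:
        return "s0:c%s" % f1
    return "s0:c%s,c%s" % (min(f1, f2), max(f1, f2))

def gen_seclabels():
    # The process label and the image label share one MCS level, which is
    # what actually isolates one guest's disks from another guest's qemu.
    mcs = get_random_mcs()
    con = DEFAULT_SECLABEL.split(':')
    seclabel = "%s:%s:%s:%s" % (con[0], con[1], con[2], mcs)
    con = DEFAULT_IMAGELABEL.split(':')
    imagelabel = "%s:%s:%s:%s" % (con[0], con[1], con[2], mcs)
    return (seclabel, imagelabel)
```

Note that both returned labels carry the same MCS level; only the user/role/type prefix differs.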
Now that I look at this I'm not so sure it's a good idea to have the
client code responsible for allocating the 'mcs' level part of the
label. I think virt-install should only specify the first part of
the label, 'root:system_r:qemu_t', and that libvirt should be doing
allocation of the mcs level on the fly at domain startup.
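Sketched in Python, the suggestion looks something like the following. This is not libvirt's actual implementation; the in-use set and the retry loop are assumptions about how a daemon-side allocator might work:

```python
import random

def allocate_mcs(in_use):
    """Pick an MCS level unique among running VMs on this host.

    Hypothetical daemon-side allocator: because the daemon knows every
    running domain's label, it can retry until the level is free --
    something a client like virt-install cannot do race-free.
    """
    while True:
        c1, c2 = sorted(random.sample(range(1024), 2))
        level = "s0:c%d,c%d" % (c1, c2)
        if level not in in_use:
            in_use.add(level)
            return level
```

The level would be allocated at domain startup and released at shutdown, so it never needs to appear in the persistent XML.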
For sVirt 1.0 our assumption is that mcs levels will only be unique
within the scope of running VMs on each host machine. If we include
the mcs level in the label we pass into the XML, we are going to create
a number of headaches for ourselves.
- The inactive config written in /etc/libvirt/qemu contains an
  allocated mcs level even though it doesn't need one unless it is
  running.
One of roughly 500,000 possible levels, a number we can bounce up by
adding one or more categories; currently we use only 2 of the 1024
available categories per level.
- If someone copies this config to another machine, then the mcs
level in the config may no longer be unique on the target machine
- If you live migrate a VM, again you have problem that the mcs used
on the source may already be in use on the target.
- Save/restore across machines also has uniqueness issues
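The ~500,000 figure is easy to check: with 1024 categories, an allocator that emits either a single category or an unordered pair (as the patch's does) can produce:

```python
import math

# Distinct MCS levels from 1024 categories, per the patch's allocator:
# every unordered two-category pair, plus every single category.
pairs = math.comb(1024, 2)   # two-category levels
singles = 1024               # one-category levels
total = pairs + singles
print(total)                 # 524800, i.e. the "~500,000" quoted above
```

Using three or four categories per level would push the space into the tens of millions, at the cost of longer labels.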
The way virt-install allocates the mcs level here is also open to a
race condition if more than one VM provisioning operation is taking
place concurrently.
Daniel
This is the way I was originally thinking, but there are a couple of
problems with this. libvirt would need to relabel all of the images on
the fly when it chooses the MCS Level. So if I have multiple disks I
would need to run through them and execute a setfilecon on each image.
What happens to the labels when the guest is done with the image? Do I
rerun through the images, setting the labels to something that no qemu
can read/write, e.g. system_u:object_r:virt_image_t:SystemHigh? Since we
do not want to accidentally allow a later qemu to gain access.
What about shared storage? If I have multiple libvirt daemons running
against shared storage, they could both pick the same MCS level and set
labels on the images, allowing the remote machine access. Since we are
allowing for ~500,000 levels this is a limited risk, but it does exist.
Finally, the current solution does allow for MLS. Under MLS the
administrator wants to say that these images will run as TopSecret and
does not care about isolation. So we would need a mechanism to override
libvirt choosing a label on the fly and allow the user to force the
label.