On Mon, Jul 27, 2015 at 6:05 PM, Daniel P. Berrange <berrange(a)redhat.com> wrote:
> > Is it enough if ivshmem-server is started by libvirt to
> > solve the SELinux issue?
> >
> > What's missing to get started supporting it with libvirt?
>
> The complexity arises when multiple QEMUs want to connect to the same
> memory region. Each QEMU has its own unique SELinux label (e.g. something
> like svirt_t:s0:c352,c850, with random category values), so there is
> no single SELinux label you can give to an ivshmem server process to
> let it "just work" with multiple QEMUs, unless we choose to effectively
> just let any QEMU connect whatsoever by running ivshmem-server under
> svirt_t:s0:c0.c1023, which removes all isolation between the guests.
> This is the label we use for disk images which must be shared between
> QEMUs currently, but long term we're going to need to come up with
> a way to allow concurrent access but keep separation. At that point
> we'll likely need to implement the ivshmem server as part of the
> libvirt project itself, so we can deal with SELinux.
Could we start with simple support, like disk sharing? The problem
is similar, so it's unfair to impose more requirements on ivshmem
than we do on disks. Furthermore, the solution is likely to be similar
for both, so it could be addressed separately for both.
>
> Until that point though, I think the simplest thing to do is to get
> an addition to the SELinux policy. We want to have
>
> - ivshmem-server running under a 'qemu_shmemd_t' type
> - ivshmem-server UNIX domain socket labeled 'qemu_shmemd_sock_t'
> - svirt_t permitted to connect to any qemu_shmemd_sock_t
>
That looks compatible with Luyao's patches.
> this doesn't require any code in libvirt or QEMU - it should be
> possible to do it entirely in SELinux policy rules.
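For illustration, the policy addition proposed above might look roughly like the following Type Enforcement fragment. This is only a hedged sketch: the type names qemu_shmemd_t, qemu_shmemd_sock_t, and svirt_t come from this thread, but the specific rules and interface macros are my assumptions and would need review against the actual reference policy:

```
# Hypothetical SELinux TE sketch for the proposal above -- not a tested policy.
policy_module(qemu_shmemd, 1.0.0)

require {
        type svirt_t;
}

# ivshmem-server runs in its own confined domain, entered via its
# executable's type.
type qemu_shmemd_t;
type qemu_shmemd_exec_t;
init_daemon_domain(qemu_shmemd_t, qemu_shmemd_exec_t)

# The server's UNIX domain socket on disk gets its own file type.
type qemu_shmemd_sock_t;
files_type(qemu_shmemd_sock_t)

# Any svirt_t guest may open the socket file and connect to the
# server's stream socket, regardless of its MCS category pair.
allow svirt_t qemu_shmemd_sock_t:sock_file write;
allow svirt_t qemu_shmemd_t:unix_stream_socket connectto;
```

As noted in the quote, this grants every svirt_t guest access to the server, so it trades isolation between guests for simplicity, much like the shared-disk label case.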
> Just for clarification - this means I am NACK'ing all 4 patches
> here, because I don't think any of this extra code is needed.
Luyao's patches are also about keeping track of and cleaning up zombie
shm objects. This is useful in various cases, especially when running
ivshmem VMs without an ivshmem server.
The security part permits specifying different labels, as with disks.
Why wouldn't that be useful (for example, when the shm is not managed
by an ivshmem server)?
--
Marc-André Lureau