
On Thu, May 19, 2016 at 10:27:17AM +0100, Daniel P. Berrange wrote:
On Mon, May 16, 2016 at 04:58:48PM -0700, Stefan Hajnoczi wrote:
On Wed, May 11, 2016 at 03:42:34PM +0100, Daniel P. Berrange wrote:
On Tue, May 10, 2016 at 01:03:48PM +0100, Stefan Hajnoczi wrote:
On Mon, May 09, 2016 at 05:18:42PM +0100, Daniel P. Berrange wrote:
On Mon, May 09, 2016 at 04:57:17PM +0100, Stefan Hajnoczi wrote:
virtio-vsock support has been added to the nfs-ganesha NFS server. I'm currently working on upstreaming virtio-vsock into Linux and QEMU. I also have patches for the Linux NFS client and server.
Users wishing to share a file system with the guest will need to configure the NFS server. Perhaps libvirt could handle that given that it already has <filesystem> syntax.
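For reference, the existing <filesystem> element (currently 9p-oriented) looks like this; a vsock/NFS variant would presumably need new attributes that don't exist yet, so this is just the current syntax:

    <filesystem type='mount' accessmode='passthrough'>
      <source dir='/srv/export'/>
      <target dir='shared'/>
    </filesystem>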
The basic task is setting up either the kernel nfsd or nfs-ganesha so that the VM can access the NFS export(s). When the VM is destroyed the NFS server can be shut down.
Can you elaborate on the interaction between QEMU and the NFS server on the host? What actually needed changing in nfs-ganesha to support virtio-vsock? I thought that on the host side we wouldn't need any changes, because QEMU would just talk to a regular NFS server over TCP, and the only virtio-vsock changes would be in QEMU and the guest kernel.
The NFS protocol (and SUNRPC) is aware of the transport it's running over. In order to fully support the protocol, it needs to know about AF_VSOCK and its addressing.
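An AF_VSOCK address is a (CID, port) pair rather than an (IP address, port) pair. To give an idea, here is a minimal sketch of an AF_VSOCK listener using the standard Linux sockets API (the port number is illustrative and error handling is minimal):

    #include <stdio.h>
    #include <string.h>
    #include <unistd.h>
    #include <sys/socket.h>
    #include <linux/vm_sockets.h>  /* struct sockaddr_vm, VMADDR_CID_ANY */

    int main(void)
    {
        struct sockaddr_vm addr;
        int fd = socket(AF_VSOCK, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        memset(&addr, 0, sizeof(addr));
        addr.svm_family = AF_VSOCK;
        addr.svm_cid = VMADDR_CID_ANY; /* accept connections from any guest CID */
        addr.svm_port = 2049;          /* illustrative: NFS port number reused
                                          in the vsock port namespace */

        if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
            listen(fd, 1) < 0) {
            perror("bind/listen");
            return 1;
        }
        /* accept(2)/read(2)/write(2) then work as with any stream socket */
        close(fd);
        return 0;
    }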
The NFS server changes allow specifying an AF_VSOCK listen port. The machine name format in /etc/exports or equivalent config also needs to support vsock.
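(The exact /etc/exports syntax for vsock clients is up to the patches; purely as an illustration, a vsock client spec might look something like this, where 3 is a guest CID:)

    /srv/vm-export  vsock:3(rw,sync,no_subtree_check)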
So from host POV, in our current model of exposing host FS to the guest where libvirt wants control over managing exports, I don't think we would be inclined to use the in-kernel NFS server at all, nor would we likely use the main system ganesha server instance.
Instead what I think we'd want is to run an isolated instance of ganesha for each QEMU VM that requires filesystem passthrough.

First of all, this removes any unpredictability around setup, as arbitrary admin config changes to the default system ganesha server would not conflict with settings libvirt needed to make for QEMU. Second, it would let us place the ganesha server associated with a VM in the same cgroup, so we can ensure resource limits associated with the VM get applied. Third, it would let us apply the per-VM sVirt MCS level to each ganesha instance, to ensure that there's no risk of cross-VM attack vectors via the ganesha services.

Ideally QEMU would talk to ganesha over a private UNIX domain socket, though it looks like ganesha only has the ability to use TCP or UDP right now, so that'd be something we'd need to add support for.
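To sketch what I mean, each per-VM ganesha instance would just be started with its own config file, along these lines (the file path and export values here are made up for illustration; the syntax is ganesha's existing config format):

    # /var/lib/libvirt/qemu/ganesha-vm1.conf (hypothetical path)
    NFS_Core_Param {
        NFS_Port = 2049;          # per-instance port; would become a vsock
                                  # port once vsock support is added
    }

    EXPORT {
        Export_Id = 1;
        Path = /srv/vm1;          # host directory being passed through
        Pseudo = /vm1;
        Access_Type = RW;
        FSAL { Name = VFS; }
    }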
virtio-vsock uses a vhost kernel module, so traffic comes from the host's network stack, not from QEMU. So there wouldn't be a UNIX domain socket connecting the VM to ganesha (although something along those lines could be implemented in the future).
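From the guest's point of view it's an ordinary stream socket. Roughly, a guest connecting to a host service looks like this (a sketch; the port is again illustrative, and VMADDR_CID_HOST is the well-known CID for the host):

    #include <string.h>
    #include <sys/socket.h>
    #include <linux/vm_sockets.h>  /* VMADDR_CID_HOST == 2, the host's CID */

    int main(void)
    {
        struct sockaddr_vm addr;
        int fd = socket(AF_VSOCK, SOCK_STREAM, 0);

        memset(&addr, 0, sizeof(addr));
        addr.svm_family = AF_VSOCK;
        addr.svm_cid = VMADDR_CID_HOST; /* connect to the host, not a guest */
        addr.svm_port = 2049;           /* illustrative port */

        /* the data path is handled by the vhost driver on the host side */
        return connect(fd, (struct sockaddr *)&addr, sizeof(addr)) == 0 ? 0 : 1;
    }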
Hmm, are there any docs explaining the virtio-vsock architecture/setup in a bit more detail? It feels somewhat undesirable for the vsock backend to be directly connected to the host network - it feels like it would open avenues for attacking the host network services.
Only AF_VSOCK host network services would be listening, and today there are none. There are no docs yet, but the design is pretty much the same as VMware vSockets (which shares the AF_VSOCK address family and some of the code with virtio-vsock). The VMware guide covering sockets usage is here: https://pubs.vmware.com/vsphere-60/topic/com.vmware.ICbase/PDF/ws9_esx60_vmc...

Stefan