[cc:d Ian Jackson — sorry, I should have cc:d you originally!]
On 4 Jun 2014, at 10:38, Daniel P. Berrange <berrange(a)redhat.com> wrote:
On Wed, Jun 04, 2014 at 09:11:59AM +0000, Dave Scott wrote:
> Hi,
>
> Two of the applications I’d like to use with libvirt (cloudstack
> and oVirt) make use of “<channels>” in the domain XML, like this:
>
>   <channel type='unix'>
>     <source mode='bind' path='/var/lib/libvirt/qemu/s-4-VM.agent'/>
>     <target type='virtio' name='s-4-VM.vport'/>
>     <address type='virtio-serial'/>
>   </channel>
>
> I don’t believe these are currently supported by libvirt + libxl
> — I’d like to see what it would take to hook these up.
>
> I chatted with Daniel Berrange at the Xen hackathon last week,
> and if I understood correctly these channels are analogous to
> serial ports used for low-bandwidth communication to (e.g.)
> guest agents. Daniel suggested that the xen console mechanism
> ought to be adequate to power these things. The other option
> (if higher bandwidth was required) would be to use the Xen
> vchan protocol.
Yep, the only really relevant difference between a console and
a channel is that a channel has a string name associated with
it. The distinction between console & channel serves a few purposes:
1. A guest OS can be set to automatically spawn a getty login
   process on all (paravirt based) <console> devices, safe in
   the knowledge that no application is expecting to use them
   for some higher level protocol.
2. A guest agent can reliably identify which <channel> it is
   supposed to be using from its name, i.e. the QEMU guest agent
   knows that it should always open
   /dev/virtio-ports/org.qemu.guest_agent.0 and no other app
   will touch that.
3. The named port can be used to write udev rules that
   automatically spawn the correct guest agent when the port
   with the matching name appears in the guest, e.g. here we
   trigger a systemd service matching on the QEMU guest agent
   port:
$ cat /lib/udev/rules.d/99-qemu-guest-agent.rules
SUBSYSTEM=="virtio-ports", ATTR{name}=="org.qemu.guest_agent.0", \
  TAG+="systemd", ENV{SYSTEMD_WANTS}="qemu-guest-agent.service"
> I think the behaviour is:
> * bind a unix domain socket on the host (‘/var/lib/libvirt/qemu/…')
> * connect a bidirectional low-bandwidth channel to the guest
> * manifest the channel in the guest as a ‘/dev/vport/s-4-VM.vport’ device (?)
Yep, that's pretty much it. For virtio-serial the dir is /dev/virtio-ports.
If Xen uses a different non-virtio-serial device type, it can provide a
suitable guest udev rule to set up named devices in whatever dir you
think is best.
> So an application on the host can connect() to the host socket, an
> application in the guest can open() the guest device and they can
> talk privately. [Have I got this right?]
Yes.
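For concreteness (and to check I've understood), with the XML above the
end-to-end plumbing on the virtio-serial side looks something like the
following; socat is just one convenient way to exercise both ends:

  # host side: talk to the UNIX socket bound for the channel
  socat - UNIX-CONNECT:/var/lib/libvirt/qemu/s-4-VM.agent

  # guest side: open the named port device created by udev
  socat - /dev/virtio-ports/s-4-VM.vport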
> I had a quick read of the libxl code and I think the consoles are
> considered an internal detail: there is a function libxl_console_get_tty
> to retrieve a console’s endpoint in dom0, but I couldn’t see a way to
> request that additional consoles be created. The libxl_domain_config has
> disks, nics, pcidevs, vfbs, vkbs, and vtpms but no consoles. (Have I
> missed something?)
>
> Bypassing libxl I was able to manually create a /local/domain/%d/device/console/1
> which was recognised by the VM as /dev/hvc1. As an aside, I notice that there
> are 2 console backends now: xenconsoled seems to only watch for the initial
> console, while a per-domain qemu process is used for all subsequent consoles,
> so any enhancements to the dom0 end would have to go into qemu?
>
> So to implement channels via consoles I would need to:
>
> 1. check whether qemu, when acting as a console server in dom0, is able to
> connect the console to a suitably named Unix domain socket in dom0
> (signalled via xenstore in the usual way)
>
> 2. modify libxl to support consoles as configurable devices alongside disks,
> nics etc
I'd suggest you also want to support a non-tty backend, specifically UNIX
sockets. This means that the process connecting to the backend on the host
does not need to query libvirt to discover what random /dev/pts/NN was
allocated to it - it can just inotify-watch /var/lib/libvirt/libxl/ for
new files appearing and connect straight away based on the name of the
file.
I'd really strongly recommend the ability to give names to the devices
if you want to re-use the xen console device type, so that the guest
agents can automatically determine the correct device and so that the
guest OS can tell which console instances it should launch a getty on
and which it should ignore.
This sounds sensible to me.
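On the host side I imagine the consumer doing something like this (only a
sketch: it assumes the libxl driver binds sockets named after the channel
under /var/lib/libvirt/libxl/, and uses inotifywait from inotify-tools):

  # watch for new channel sockets appearing and attach to them by name;
  # matching on a '*.agent' suffix is just an example
  inotifywait -m -e create /var/lib/libvirt/libxl/ |
  while read dir event name; do
      case "$name" in
          *.agent) socat - UNIX-CONNECT:"${dir}${name}" & ;;
      esac
  done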
As an experiment I created a console frontend in xenstore with a “name = <some nice
label>” and wrote a simple udev rule to catch the device creation, read this key and
‘mknod’ the device in a nice-sounding location. So I was able to create something like
/dev/xenchannel/<some nice label>.
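Roughly what the experiment looked like in the guest (reconstructed from
my scratch files, so treat the rule, the helper name and the match keys as
illustrative rather than definitive):

  # /etc/udev/rules.d/99-xen-channel.rules
  SUBSYSTEM=="tty", KERNEL=="hvc[1-9]*", RUN+="/usr/local/sbin/xen-channel-name %k"

  # /usr/local/sbin/xen-channel-name, invoked by the rule above:
  #!/bin/sh
  dev="$1"                                   # e.g. hvc1
  idx="${dev#hvc}"                           # console index in xenstore
  name=$(xenstore-read "device/console/${idx}/name") || exit 0
  majmin=$(cat "/sys/class/tty/${dev}/dev")  # e.g. "229:1"
  mkdir -p /dev/xenchannel
  mknod "/dev/xenchannel/${name}" c "${majmin%:*}" "${majmin#*:}"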
As experiment #2 I ran a qemu in dom0 with a command-line:
qemu-system-i386 -xen-domid 28 -xen-attach -name trusty2 -machine xenpv \
-chardev socket,id=charchannel0,path=/tmp/foo.agent,server,nowait
and I created a console frontend in xenstore with “output=chardev:charchannel0”. Happily
everything worked as expected and I was able to use ‘socat’ to pump data into the
nicely-named dom0 Unix domain socket and see it appear in the guest in the nicely-named
tty (/dev/xenchannel/…).
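For reference, the manual xenstore plumbing was along these lines (from
memory, so only the interesting bits are shown; the other keys a console
frontend needs were set up as for a normal extra console):

  # point the extra console at the qemu chardev defined above, and give
  # it a name for the guest-side udev rule to pick up
  xenstore-write /local/domain/28/device/console/1/output chardev:charchannel0
  xenstore-write /local/domain/28/device/console/1/name "<some nice label>"

  # then in dom0, pump data into the named socket and watch it appear on
  # the nicely-named tty in the guest
  socat - UNIX-CONNECT:/tmp/foo.agent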
> 3. add support to libvirt’s libxl driver
>
> 4. see if I can write something like a udev rule in the guest to
> notice the console, look up the ‘name’ from xenstore and make a
> suitably-named file?
>
> What do you think?
Sounds reasonable to me.
Ian — what do you think? If you think the approach is sensible then I’ll prepare a
prototype set of patches for libxl for review.
Thanks,
Dave