[Thanks to Dan Berrangé for doing the analysis of this one]
I was investigating a 200+ millisecond delay when libvirt starts a
qemu guest. You can see the traces here:
http://oirase.annexia.org/tmp/libvirt.log
http://oirase.annexia.org/tmp/libvirtd.log
The delay happens at around 16:52:57.327-16:52:57.528 in the libvirtd log.
As you can see the delay is almost precisely 200ms.
Dan found the cause which seems to be this code:
https://libvirt.org/git/?p=libvirt.git;a=blob;f=src/qemu/qemu_monitor.c;h...
(There are other examples of the same anti-pattern in src/fdstream.c
and src/qemu/qemu_agent.c, but it's the particular code above which
seems to be causing the delay).
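For anyone who doesn't want to follow the link, the shape of the code
is roughly this (a minimal sketch, not the actual libvirt source):
connect() to the monitor socket in a loop, sleeping a fixed interval
between attempts until qemu has created it:

  #include <sys/socket.h>
  #include <sys/un.h>
  #include <errno.h>
  #include <string.h>
  #include <unistd.h>

  /* Illustrative retry-with-sleep loop, NOT the actual libvirt code.
   * The fixed sleep is where the latency comes from: if the socket
   * appears just after a failed attempt, we still wait out the whole
   * interval before trying again. */
  static int
  monitor_connect (const char *path, int retries, unsigned interval_ms)
  {
      struct sockaddr_un addr;

      memset (&addr, 0, sizeof addr);
      addr.sun_family = AF_UNIX;
      strncpy (addr.sun_path, path, sizeof addr.sun_path - 1);

      while (retries-- > 0) {
          int saved_errno;
          int fd = socket (AF_UNIX, SOCK_STREAM, 0);
          if (fd < 0)
              return -1;
          if (connect (fd, (struct sockaddr *) &addr, sizeof addr) == 0)
              return fd;                  /* connected */
          saved_errno = errno;
          close (fd);
          if (saved_errno != ENOENT && saved_errno != ECONNREFUSED)
              return -1;                  /* real error, give up */
          usleep (interval_ms * 1000);    /* <-- the fixed delay */
      }
      return -1;                          /* socket never appeared */
  }

Even in the best case the caller pays one full sleep interval, which
is consistent with the almost-exactly-200ms delay in the log.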
To give you some sense of why I regard this as a problem, the TOTAL
time taken to launch and shut down the libguestfs appliance (that
includes qemu, the BIOS, the guest kernel, probing and mounting
disks, running the guestfs daemon, and the shutdown process in
reverse), without libvirt, is now 900ms. Libvirt adds about 220ms on
top of this.
What can we do about this? Obviously we could simply reduce the
delay, but even if it were set to 20ms it would still be too much
(the aim is to reduce the whole process from 900ms down to 150ms),
and it would also mean that libvirt was essentially polling.
Can we use inotify to detect when the socket has been created? That
seems to create portability problems, since inotify is Linux-specific.
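For the record, the Linux-only version would look something like this
(a sketch; it also shows why it's Linux-only, since inotify has no
portable equivalent):

  #include <sys/inotify.h>
  #include <limits.h>
  #include <string.h>
  #include <unistd.h>

  /* Sketch: block until 'sockname' is created in directory 'dir',
   * instead of polling on a timer.  Returns 0 when the socket
   * exists, -1 on error. */
  static int
  wait_for_socket (const char *dir, const char *sockname)
  {
      char buf[sizeof (struct inotify_event) + NAME_MAX + 1]
          __attribute__ ((aligned (__alignof__ (struct inotify_event))));
      int ifd;

      if ((ifd = inotify_init1 (IN_CLOEXEC)) < 0)
          return -1;
      if (inotify_add_watch (ifd, dir, IN_CREATE) < 0) {
          close (ifd);
          return -1;
      }

      /* A real version must also stat() the socket path here, in case
       * qemu created it before the watch was established. */

      for (;;) {
          ssize_t len = read (ifd, buf, sizeof buf);
          char *p;
          if (len <= 0)
              break;
          for (p = buf; p < buf + len; ) {
              struct inotify_event *ev = (struct inotify_event *) p;
              if (ev->len && strcmp (ev->name, sockname) == 0) {
                  close (ifd);        /* also drops the watch */
                  return 0;           /* safe to connect() now */
              }
              p += sizeof (struct inotify_event) + ev->len;
          }
      }

      close (ifd);
      return -1;
  }

And even on Linux this blocks a thread in read() unless it is
integrated into the event loop, so it's not free either.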
Dan suggested:
- Can we create the socket in libvirtd and pass it to qemu?
  (a rough sketch of the libvirtd side follows below)
- Can we pass a file descriptor to qemu?
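To make the first of those concrete, the libvirtd side might look
something like this. Note this is a sketch only: the qemu half
(accepting an already-listening fd for the monitor chardev) is
exactly the support that would have to exist, so the fd= chardev
option below is hypothetical:

  #include <sys/socket.h>
  #include <sys/types.h>
  #include <sys/un.h>
  #include <stdio.h>
  #include <string.h>
  #include <unistd.h>

  /* Sketch: create and listen on the monitor socket in libvirtd,
   * then let qemu inherit the fd across exec.  The socket exists
   * before qemu even starts, so there is nothing to poll for. */
  static pid_t
  spawn_qemu_with_monitor (const char *path)
  {
      struct sockaddr_un addr;
      int fd;
      pid_t pid;

      if ((fd = socket (AF_UNIX, SOCK_STREAM, 0)) < 0)
          return -1;

      memset (&addr, 0, sizeof addr);
      addr.sun_family = AF_UNIX;
      strncpy (addr.sun_path, path, sizeof addr.sun_path - 1);

      if (bind (fd, (struct sockaddr *) &addr, sizeof addr) < 0 ||
          listen (fd, 1) < 0) {
          close (fd);
          return -1;
      }

      if ((pid = fork ()) == 0) {
          char opt[64];
          /* fd has no CLOEXEC flag set, so it survives the exec.
           * The "fd=" chardev option is hypothetical here. */
          snprintf (opt, sizeof opt, "socket,id=monitor,fd=%d,server", fd);
          execlp ("qemu-system-x86_64", "qemu-system-x86_64",
                  "-chardev", opt, (char *) NULL);
          _exit (127);
      }

      close (fd);   /* qemu owns the listening socket now */
      /* libvirtd can connect() to 'path' immediately: the listener
       * already exists, so no retry loop or fixed delay is needed. */
      return pid;
  }

The two suggestions really collapse into the same mechanism: whoever
creates the socket, qemu just has to accept an inherited file
descriptor instead of a path.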
Other suggestions?
Rich.
--
Richard Jones, Virtualization Group, Red Hat
http://people.redhat.com/~rjones
Read my programming and virtualization blog:
http://rwmj.wordpress.com
virt-top is 'top' for virtual machines. Tiny program with many
powerful monitoring features, net stats, disk stats, logging, etc.
http://people.redhat.com/~rjones/virt-top