On Sat, Feb 22, 2020 at 01:41:06PM +0100, Michal Prívozník wrote:
On 2/14/20 1:51 PM, Daniel P. Berrangé wrote:
> This series changes the way we manage the QEMU monitor and
> QEMU agent, such that all I/O is processed by a dedicated
> event loop thread.
>
> Many times over the past years people have reported issues
> where long-running monitor event callbacks block the main
> libvirtd event loop for an unacceptably long period of
> time. In the best case this merely delays other work from
> being completed, but in bad cases it leads to mgmt app
> failures when keepalive timers trigger a client disconnect.
>
> With this series, when we spawn QEMU, we also spawn a
> dedicated thread running a GMainLoop instance. Then QEMU
> monitor and QEMU agent UNIX sockets are switched to use
> GMainContext for events instead of the traditional libvirt
> event loop APIs. We kill off the event thread when we see
> EOF on the QEMU monitor during shutdown.
>
> The cost of this approach is one extra thread per VM,
> which incurs a new OS-level thread and a new stack
> allocation.
>
> The QEMU driver already delegates some QMP event handling
> to a thread pool for certain types of event. This was a
> previous hack to mitigate the impact on the main event
> loop. It is likely that we can remove this thread pool
> from the QEMU driver & rely on the per-VM event threads
> to do all the work. This will, however, require careful
> analysis of each handler we pushed into the thread pool
> to make sure its work doesn't have a dependency on the
> event loop running in parallel.
>
> This should also eliminate the need to have the libvirt
> event loop registered when using the embedded QEMU driver.
> This has not yet been validated, however, so it is left
> for a future patch to relax the constraint.
>
> Daniel P. Berrangé (11):
> qemu: drop support for agent connections on PTYs
> qemu: drop ability to open monitor from FD
> src: set the OS level thread name
> src: improve thread naming with human targeted names
> src: introduce an abstraction for running event loops
> qemu: start/stop an event loop thread for domains
> qemu: start/stop an event thread for QMP probing
> tests: start/stop an event thread for QEMU monitor/agent tests
> qemu: convert monitor to use the per-VM event loop
> qemu: fix variable naming in agent code
> qemu: convert agent to use the per-VM event loop
>
> po/POTFILES.in | 1 +
> src/libvirt_private.syms | 6 +
> src/libxl/libxl_domain.c | 10 +-
> src/libxl/libxl_migration.c | 23 +-
> src/lxc/lxc_fuse.c | 4 +-
> src/node_device/node_device_udev.c | 7 +-
> src/nwfilter/nwfilter_dhcpsnoop.c | 11 +-
> src/nwfilter/nwfilter_learnipaddr.c | 10 +-
> src/qemu/qemu_agent.c | 634 ++++++++++++++--------------
> src/qemu/qemu_agent.h | 1 +
> src/qemu/qemu_domain.c | 33 ++
> src/qemu/qemu_domain.h | 6 +
> src/qemu/qemu_driver.c | 3 +-
> src/qemu/qemu_migration.c | 8 +-
> src/qemu/qemu_monitor.c | 155 +++----
> src/qemu/qemu_monitor.h | 8 +-
> src/qemu/qemu_process.c | 61 ++-
> src/qemu/qemu_process.h | 2 +
> src/remote/remote_daemon.c | 9 +-
> src/rpc/virnetserver.c | 9 +-
> src/storage/storage_backend_scsi.c | 4 +-
> src/storage/storage_driver.c | 4 +-
> src/util/Makefile.inc.am | 2 +
> src/util/vircommand.c | 5 +-
> src/util/vireventthread.c | 175 ++++++++
> src/util/vireventthread.h | 31 ++
> src/util/virfdstream.c | 10 +-
> src/util/virnodesuspend.c | 8 +-
> src/util/virthread.c | 44 +-
> src/util/virthread.h | 4 +-
> src/util/virthreadpool.c | 14 +-
> src/util/virthreadpool.h | 2 +-
> tests/qemumonitortestutils.c | 15 +
> 33 files changed, 832 insertions(+), 487 deletions(-)
> create mode 100644 src/util/vireventthread.c
> create mode 100644 src/util/vireventthread.h
>
> I've ACKed the first four patches which can be pushed independently. Do
> you want me to continue or will you resend v2 starting from patch 05/11?

I'll send a new series with the remaining patches.
Regards,
Daniel
--
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org  -o-    https://www.instagram.com/dberrange :|