On Tue, Feb 21, 2017 at 01:14:57PM +0100, Pavel Hrdina wrote:
QEMU 2.9.0 will introduce a polling feature for AioContext that
polls for events instead of using blocking syscalls. This means
that polling is in most cases faster, but it also increases CPU
utilization.
To address this issue QEMU implements a self-tuning algorithm that
adapts the current polling time to different workloads, and it can
also fall back to blocking syscalls.
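Roughly, the idea is: after each wait, compare how long the event
actually took to arrive against the current polling window, then
widen or narrow the window accordingly. A minimal sketch of that
shape in C (not QEMU's exact code, which lives in util/aio-posix.c;
the grow/shrink factors map to the poll-grow and poll-shrink
parameters described below):

    #include <stdint.h>

    /* Sketch of a self-tuning poll window. poll_ns is the current
     * window, block_ns is how long the last event took to arrive. */
    static int64_t adapt_poll_ns(int64_t poll_ns, int64_t block_ns,
                                 int64_t poll_max_ns,
                                 int64_t grow, int64_t shrink)
    {
        if (block_ns <= poll_ns) {
            /* Polling caught the event in time: leave the window alone. */
            return poll_ns;
        }
        if (block_ns > poll_max_ns) {
            /* Polling cannot help here: shrink the window, or fall
             * back to blocking syscalls entirely (window of 0). */
            return shrink ? poll_ns / shrink : 0;
        }
        /* The event just missed the window: poll longer, but never
         * beyond the poll-max-ns ceiling. */
        poll_ns = poll_ns ? poll_ns * (grow ? grow : 2) : 4000;
        return poll_ns < poll_max_ns ? poll_ns : poll_max_ns;
    }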
For each IOThread all of this is controlled by three parameters:
poll-max-ns, poll-grow and poll-shrink. If poll-max-ns is set to 0
it disables polling, if it is omitted the default behavior is used,
and any value greater than 0 enables polling.
The poll-grow and poll-shrink parameters configure how the
self-tuning algorithm adapts the current polling time. If they are
omitted or set to 0, default values are used.
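For reference, on the QEMU side these are properties of the iothread
object, so a guest could be started with something like this (the id
and values here are arbitrary examples, not recommendations):

    -object iothread,id=iothread0,poll-max-ns=32768,poll-grow=2,poll-shrink=0

They can also be changed at runtime via QMP's qom-set on the object's
path, e.g. /objects/iothread0.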
With my app developer hat on I have to wonder how an app is supposed
to figure out what to set these parameters to? It has been difficult
enough figuring out the existing QEMU block tunables, but at least
most of those can be set depending on the type of storage in use on
the host side. Tunables whose effect depends on the guest workload
are harder to use, since that largely involves predicting the
unknown. IOW, is there a compelling reason to add these low-level
parameters that are tightly coupled to the specific algorithm that
QEMU happens to use today?

The QEMU commits say the tunables all default to sane values, so I'm
inclined to say we ignore them at the libvirt level entirely.
Regards,
Daniel
--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://entangle-photo.org -o- http://search.cpan.org/~danberr/ :|