On Mon, 1 Jul 2024, Daniel P. Berrangé wrote:

> On Sun, Jun 30, 2024 at 08:49:48PM +0200, Mikulas Patocka wrote:
> >
> > On Sun, 30 Jun 2024, Tejun Heo wrote:
> >
> > > Do you happen to know why libvirt is doing that? There are many other
> > > implications to configuring the system that way and I don't think we
> > > want to design kernel behaviors to suit topology information fed to
> > > VMs, which can be arbitrary.
> > >
> > > Thanks.
> >
> > I don't know why. I added users(a)lists.libvirt.org to the CC.
> >
> > How should libvirt properly advertise "we have 16 threads that are
> > dynamically scheduled by the host kernel, so the latencies between them
> > are changing and unpredictable"?
>
> NB, libvirt is just the control plane; the actual virtual hardware exposed
> is implemented across QEMU and the KVM kernel module. Guest CPU topology
> and/or NUMA cost information is the responsibility of QEMU.
>
> When QEMU's virtual CPUs are floating freely across host CPUs, there is
> no perfect answer. The host admin needs to make a tradeoff in their
> configuration.
>
> They can optimize for density, by allowing guest CPUs to float freely
> and allowing CPU overcommit against host CPUs, in which case the guest
> CPU topology is essentially a lie.
>
> They can optimize for predictable performance, by strictly pinning
> guest CPUs 1:1 to host CPUs, minimizing CPU overcommit, and having
> the guest CPU topology match the host CPU topology 1:1.
>
> With regards,
> Daniel
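
For reference, the strict-pinning setup Daniel describes looks roughly like this in libvirt domain XML (a sketch; the CPU counts and cpuset values are illustrative):

```xml
<vcpu placement='static'>4</vcpu>
<cputune>
  <!-- pin each guest vCPU 1:1 to a host CPU (illustrative host CPU numbers) -->
  <vcpupin vcpu='0' cpuset='0'/>
  <vcpupin vcpu='1' cpuset='1'/>
  <vcpupin vcpu='2' cpuset='2'/>
  <vcpupin vcpu='3' cpuset='3'/>
</cputune>
<cpu mode='host-passthrough'>
  <!-- advertise a guest topology matching the pinned host CPUs -->
  <topology sockets='1' cores='4' threads='1'/>
</cpu>
```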
The problem that we have here is that commit
63c5484e74952f60f5810256bd69814d167b8d22 ("workqueue: Add multiple
affinity scopes and interface to select them") changed the behavior of
unbound workqueues, so that work items are only executed on CPUs that
share the last-level cache with the task that submitted them.
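
Note that the new behavior is tunable. A sketch of how the affinity scope can be inspected and changed (paths as described in Documentation/core-api/workqueue.rst; the per-workqueue file exists only for workqueues registered with WQ_SYSFS, "writeback" is just one such example):

```shell
# System-wide default scope for unbound workqueues
# (one of: cpu, smt, cache, numa, system)
cat /sys/module/workqueue/parameters/default_affinity_scope

# Restore the old "run on any CPU" behavior globally
echo system > /sys/module/workqueue/parameters/default_affinity_scope

# Per-workqueue override, for workqueues created with WQ_SYSFS
echo numa > /sys/devices/virtual/workqueue/writeback/affinity_scope
```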
If there are 16 virtual CPUs floating freely across physical CPUs,
virt-manager by default selects a topology that advertises 16 sockets,
1 CPU per socket, and 1 thread per CPU. The result is that the unbound
workqueues are no longer unbound: they can't move work across sockets
and are bound to just one virtual CPU, causing dm-crypt performance
degradation (the crypto operations are no longer parallelized).
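
That virt-manager default corresponds to a QEMU invocation along these lines (an illustrative sketch; only the -smp topology matters here, the remaining options are elided):

```shell
# 16 vCPUs advertised to the guest as 16 single-core, single-thread sockets
qemu-system-x86_64 \
    -accel kvm \
    -smp 16,sockets=16,cores=1,threads=1 \
    ...
```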
Whose bug is this? Is it a bug in virt-manager, because it advertises an
invalid topology? Or is it a bug in patch 63c5484e7495, because it avoids
moving work items across sockets?
Mikulas