Hello,
on my Fedora 39 with
libvirt 9.7.0-3.fc39
qemu-kvm 8.1.3-5.fc39
kernel 6.8.11-200.fc39.x86_64
I'm testing CPU pinning.
The hardware is a NUC with a
13th Gen Intel(R) Core(TM) i7-1360P.
If I go from this in my guest XML:
<vcpu placement='static'>4</vcpu>
<cpu mode='host-passthrough' check='none' migratable='on'>
  <topology sockets='1' dies='1' cores='4' threads='1'/>
</cpu>
to this:
<vcpu placement='static'>4</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='0'/>
  <vcpupin vcpu='1' cpuset='2'/>
  <vcpupin vcpu='2' cpuset='4'/>
  <vcpupin vcpu='3' cpuset='6'/>
</cputune>
<cpu mode='host-passthrough' check='none' migratable='on'>
  <topology sockets='1' dies='1' cores='4' threads='1'/>
</cpu>
It seems to me that the generated command line of the qemu-system-x86_64
process doesn't change, as if the cputune options were not considered.
What should I see as different?
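(By the way, would comparing the output of

$ virsh domxml-to-native qemu-argv --domain c7anstgt

for the two configurations be a meaningful check, or is the pinning simply
never expressed on the qemu command line at all?)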
Actually it seems the pinning is indeed honored, because if I run stress-ng in
the VM in the second scenario and top on the host, I see only pCPUs
0, 2, 4 and 6 going up with the load.
In the first scenario, instead, the load keeps alternating across several
different CPUs.
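I suppose I could also cross-check the affinity of the individual qemu threads
on the host with something like (340215 being the qemu PID, see below):

$ sudo grep Cpus_allowed_list /proc/340215/task/*/status

or simply with

$ virsh vcpupin c7anstgt

but my doubt is about how to obtain the same effect without libvirt.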
The real question could be: if I want to reproduce the cputune options from
the command line, how can I do it?
Is it only a matter of a cpuset wrapper used to place the qemu-system-x86_64
process in a cpuset control group?
For the PID of the process, I see:
$ sudo cat /proc/340215/cgroup
0::/machine.slice/machine-qemu\x2d8\x2dc7anstgt.scope/libvirt/emulator
and
$ sudo systemd-cgls /machine.slice
CGroup /machine.slice:
└─machine-qemu\x2d8\x2dc7anstgt.scope …
└─libvirt
├─340215 /usr/bin/qemu-system-x86_64....
├─vcpu1
├─vcpu2
├─vcpu0
├─emulator
└─vcpu3
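If I understand correctly, the per-vCPU pinning should then be visible in the
cpuset controller of those vcpuN sub-groups, e.g. (assuming the cpuset
controller is enabled there):

$ cat "/sys/fs/cgroup/machine.slice/machine-qemu\x2d8\x2dc7anstgt.scope/libvirt/vcpu0/cpuset.cpus"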
What could be an easy way to replicate from the command line what virsh does?
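My naive guess (untested) would be to look up the TIDs of the vCPU threads,
which if I'm not mistaken are named like "CPU 0/KVM", and pin them one by
one, e.g.:

$ ps -T -p 340215 -o tid,comm
$ sudo taskset -cp 0 <tid of CPU 0/KVM>
$ sudo taskset -cp 2 <tid of CPU 1/KVM>
$ sudo taskset -cp 4 <tid of CPU 2/KVM>
$ sudo taskset -cp 6 <tid of CPU 3/KVM>

but I don't know whether that is equivalent to what libvirt does through the
cpuset cgroups above.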
Thanks in advance
Gianluca