Command line equivalent for cpu pinning

Hello, on my Fedora 39 with libvirt 9.7.0-3.fc39, qemu-kvm 8.1.3-5.fc39 and kernel 6.8.11-200.fc39.x86_64 I'm testing CPU pinning. The hardware is a NUC with a 13th Gen Intel(R) Core(TM) i7-1360P.

If I go from this in my guest XML:

  <vcpu placement='static'>4</vcpu>
  <cpu mode='host-passthrough' check='none' migratable='on'>
    <topology sockets='1' dies='1' cores='4' threads='1'/>
  </cpu>

to this:

  <vcpu placement='static'>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='2'/>
    <vcpupin vcpu='2' cpuset='4'/>
    <vcpupin vcpu='3' cpuset='6'/>
  </cputune>
  <cpu mode='host-passthrough' check='none' migratable='on'>
    <topology sockets='1' dies='1' cores='4' threads='1'/>
  </cpu>

it seems to me that the generated command line of the qemu-system-x86_64 process doesn't change, as if the cputune options were not considered. What should I see as different?

Actually it seems the pinning is indeed honored: if I run stress-ng in the VM in the second scenario and top in the host, I see only pCPUs 0, 2, 4 and 6 going up with the load, while the first scenario keeps several different CPUs alternating in the load.

The real question could be: if I want to reproduce the cputune options from the command line, how can I do it? Is it only a cpuset wrapper used for the qemu-system-x86_64 process to place it in a cpuset control group? For the PID of the process I see:

  $ sudo cat /proc/340215/cgroup
  0::/machine.slice/machine-qemu\x2d8\x2dc7anstgt.scope/libvirt/emulator

and

  $ sudo systemd-cgls /machine.slice
  CGroup /machine.slice:
  └─machine-qemu\x2d8\x2dc7anstgt.scope
    …
    └─libvirt
      ├─340215 /usr/bin/qemu-system-x86_64....
      ├─vcpu1
      ├─vcpu2
      ├─vcpu0
      ├─emulator
      └─vcpu3

What could be an easy command to replicate from the command line what virsh does?

Thanks in advance,
Gianluca
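For what it's worth, the per-thread half of what virsh does can be approximated by hand: each vCPU is just a thread of the QEMU process, and on Linux the affinity syscall accepts a thread ID. The sketch below (my own illustration, not libvirt code) demonstrates the call via Python's stdlib wrapper on the current thread; for a real guest you would pass a QEMU vCPU thread's TID instead.

```python
import os
import threading

def pin_thread(tid, cpus):
    """Pin one kernel thread (by TID) to the given set of pCPUs.

    os.sched_setaffinity() wraps sched_setaffinity(2), which on Linux
    accepts a TID -- the same mechanism applies to a QEMU vCPU thread.
    """
    os.sched_setaffinity(tid, cpus)
    return os.sched_getaffinity(tid)

# Demonstrate on the current thread; pCPU 0 always exists.
tid = threading.get_native_id()
print(pin_thread(tid, {0}))
```

From a shell, `taskset -p -c 0 <tid>` (util-linux) performs the same operation.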

On 6/28/24 00:12, Gianluca Cecchi wrote:
[snip]
What could be an easy command to replicate from the command line what virsh does?
I'm not sure why you want to replicate what libvirt does, but anyway. Libvirt uses all sorts of process-management techniques, and passing command line arguments is just one of them. In this specific case, QEMU is started in paused mode (notice -S on its command line) so that vCPUs are initialized but not yet running. Then libvirt queries their PIDs via the monitor (among other run-time configuration) and uses cgroups (the cpuset controller specifically) and sched_setaffinity() to pin vCPUs onto the desired pCPUs. qemuProcessSetupPid() is the function you want to be looking at:

https://gitlab.com/libvirt/libvirt/-/blob/master/src/qemu/qemu_process.c?ref...

Happy hacking!

Michal
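To illustrate the "query their PIDs" step: libvirt asks QEMU over the monitor, but the same thread IDs are also visible under /proc, since each vCPU is an ordinary thread of the QEMU process. A minimal sketch of that alternative (the QEMU PID would be whatever you find on your own host):

```python
import os

def thread_ids(pid):
    """Return the kernel TIDs of all threads of a process.

    For a QEMU guest these include the emulator thread and one thread
    per vCPU -- the same TIDs that get pinned via cgroups/affinity calls.
    """
    return sorted(int(t) for t in os.listdir(f"/proc/{pid}/task"))

# Demonstrate on our own process; a QEMU PID works the same way.
print(thread_ids(os.getpid()))
```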

On Fri, Jun 28, 2024 at 11:52 AM Michal Prívozník <mprivozn@redhat.com> wrote:
I'm not sure why you want to replicate what libvirt does, but anyway.
[snip]
Happy hacking! Michal
Thanks for your reply, Michal. It is not for hacking, mainly curiosity. I had never really reasoned about it before, but until yesterday I wrongly assumed that all the details for the VM, set from within virt-manager (or similar) or with "virsh edit", were then translated into command line parameters.
I was following some instructions in a GitHub project for Intel TDX enablement whose examples used a direct qemu-system-x86_64 command, so the question came to my mind of how to confine the VM's virtual CPUs to one specific physical processor to optimize latency, and which command line parameters would have been necessary to add.
Gianluca

On Fri, Jun 28, 2024 at 01:08:43PM +0200, Gianluca Cecchi wrote:
[snip]
Thanks for your reply, Michal. It is not for hacking, mainly curiosity. I had never really reasoned about it before, but until yesterday I wrongly assumed that all the details for the VM, set from within virt-manager (or similar) or with "virsh edit", were then translated into command line parameters.
A lot of stuff does translate to QEMU command line parameters. The resource-management side, though, is mostly implemented by libvirt with help from QEMU. All QEMU really does is tell libvirt which PIDs each of its threads have, so libvirt can then issue suitable syscalls / make cgroup changes for those PIDs.

With regards,
Daniel

--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
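The cgroup half of those changes boils down to writing the allowed pCPU list into the cpuset controller of each per-vCPU sub-group, such as the vcpu0..vcpu3 groups shown earlier in the thread. A rough sketch of the operation's shape (my own illustration; the real /sys/fs/cgroup path requires root, so any directory serves for demonstration):

```python
from pathlib import Path
import tempfile

def set_cpuset(cgroup_dir, cpus):
    """Write a cpu list (e.g. '0,2,4,6') into a cgroup's cpuset.cpus file.

    On a real host cgroup_dir would be something like
    /sys/fs/cgroup/machine.slice/<scope>/libvirt/vcpu0 (path illustrative).
    """
    path = Path(cgroup_dir) / "cpuset.cpus"
    path.write_text(cpus + "\n")
    return path.read_text().strip()

# Demonstrate against a throwaway directory standing in for the cgroup.
with tempfile.TemporaryDirectory() as d:
    print(set_cpuset(d, "0,2,4,6"))
```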
participants (3)
- Daniel P. Berrangé
- Gianluca Cecchi
- Michal Prívozník