# CPU pinning
As is the case for many of KubeVirt's features, CPU pinning is
partially achieved using standard Kubernetes components: this both
reduces the amount of new code that needs to be written and ensures
better integration with containers running alongside the VMs.
## Kubernetes CPU Manager
The kubelet's CPU Manager, when configured with the Static policy
(`--cpu-manager-policy=static`), allocates exclusive CPUs to pod
containers in the Guaranteed QoS class which request a whole, integer
number of CPUs (see the sketch after the list below). On a
best-effort basis, the Static policy tries to allocate CPUs
topologically in the following order:
* Allocate all the CPUs in the same processor socket if available and
  the container requests at least an entire socket's worth of CPUs.
* Allocate all the logical CPUs (hyperthreads) from the same physical
  CPU core if available and the container requests an entire core's
  worth of CPUs.
* Allocate any available logical CPU, preferring to acquire CPUs from
the same socket.
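
For illustration, here is a minimal pod spec that satisfies both
requirements: requests equal limits (Guaranteed QoS) and the CPU
count is an integer. The pod and container names and the image are
invented for this sketch:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: pinned-pod              # hypothetical name
spec:
  containers:
  - name: workload              # hypothetical name
    image: registry.example/workload:latest   # placeholder image
    resources:
      # Requests must equal limits for the pod to be in the
      # Guaranteed QoS class, and the CPU value must be an integer
      # for the Static policy to hand out exclusive CPUs.
      requests:
        cpu: "2"
        memory: 1Gi
      limits:
        cpu: "2"
        memory: 1Gi
```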
## KubeVirt dedicated CPU placement
When a VM asks for dedicated CPUs, KubeVirt relies on the Kubernetes
CPU Manager to allocate exclusive CPUs to the `virt-launcher`
container: the pod is created with matching integer CPU requests and
limits, which places it in the Guaranteed QoS class described above.
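
On the KubeVirt side, a VM owner opts into this behavior with the
`dedicatedCpuPlacement` field of the VMI spec. A minimal sketch
follows; the name is invented, and unrelated required fields such as
disks are omitted for brevity:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: pinned-vmi              # hypothetical name
spec:
  domain:
    cpu:
      cores: 2
      # Ask KubeVirt to request exclusive CPUs for virt-launcher,
      # which in turn relies on the CPU Manager's Static policy.
      dedicatedCpuPlacement: true
    devices: {}                 # left empty for brevity
    resources:
      requests:
        memory: 1Gi
```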
When `virt-launcher` starts, it reads
`/sys/fs/cgroup/cpuset/cpuset.cpus` and generates `<vcpupin>`
configuration for libvirt based on the information found within.
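For instance, assuming the CPU Manager assigned host CPUs 4 and 5 to
the container (numbers invented for this sketch), the generated
libvirt domain XML fragment would look something like:

```xml
<!-- Domain XML fragment: each vCPU thread is pinned to one of the
     host CPUs found in /sys/fs/cgroup/cpuset/cpuset.cpus -->
<cputune>
  <vcpupin vcpu="0" cpuset="4"/>
  <vcpupin vcpu="1" cpuset="5"/>
</cputune>
```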
However, applying the affinity changes requires `CAP_SYS_NICE`, so
this additional capability has to be granted to the VM pod.
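
In plain pod-spec terms, granting the capability corresponds to a
fragment like the one below; KubeVirt adds this automatically when it
builds the `virt-launcher` pod (the container name `compute` follows
KubeVirt's convention for the main `virt-launcher` container):

```yaml
# Fragment of the virt-launcher pod spec (generated by KubeVirt)
spec:
  containers:
  - name: compute
    securityContext:
      capabilities:
        add:
        - SYS_NICE    # needed to change the vCPU threads' affinity
```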
Going forward, we would like to perform the affinity change in
`virt-handler` (the privileged component running at the node level),
which would allow the VM pod to work without additional capabilities.