Hi,
we are trying to use vCPU pinning on a 2-socket server with Intel Xeon E5620
CPUs, HT enabled, and 2*6*16 GiB RAM, but we run into problems when we try to
start a guest on the second socket:
error: Failed to start domain test
error: internal error: process exited while connecting to monitor:
kvm_init_vcpu failed: Cannot allocate memory
Libvirt version 1.1.1
Linux 3.11-rc7
Because I couldn't find any other service that allows a 7 MB file upload, I put
the log file and everything else that might be relevant into a GitHub
repository:
https://github.com/David-Weber/vcpu-pinning
When we start a guest on the first node, it runs fine:
<vcpu placement='static' cpuset='0-3,8-11'>4</vcpu>
<numatune>
  <memory mode='strict' nodeset='0'/>
</numatune>
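For reference, the cpusets match the host topology as reported by numactl
(sketch only; this assumes the usual interleaved HT sibling numbering on this
box, and the sizes are elided rather than our actual values):

$ numactl --hardware
available: 2 nodes (0-1)
node 0 cpus: 0 1 2 3 8 9 10 11
node 0 size: ... MB
node 0 free: ... MB
node 1 cpus: 4 5 6 7 12 13 14 15
node 1 size: ... MB
node 1 free: ... MB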
Starting it on the second node fails:
<vcpu placement='static' cpuset='4-7,12-15'>4</vcpu>
<numatune>
  <memory mode='strict' nodeset='1'/>
</numatune>
Even stranger, starting it with the CPUs of the second node and the memory of
the first node works:
<vcpu placement='static' cpuset='4-7,12-15'>4</vcpu>
<numatune>
  <memory mode='strict' nodeset='0'/>
</numatune>
The log file contains these three cases.
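For what it's worth, per-node free memory can also be read straight from
sysfs (standard NUMA sysfs paths; values elided here):

$ grep MemFree /sys/devices/system/node/node*/meminfo
/sys/devices/system/node/node0/meminfo:Node 0 MemFree: ... kB
/sys/devices/system/node/node1/meminfo:Node 1 MemFree: ... kB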
Using the placement='auto' parameter leads to the same problem: if numad
returns the second node, the guest won't start.
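One thing that might narrow it down is the cpuset cgroup libvirt creates for
the guest. With the second-node configuration we would expect something like
the following (the cgroup path is a guess on our part; it varies by distro
and libvirt setup):

$ cat /sys/fs/cgroup/cpuset/libvirt/qemu/test/cpuset.cpus
4-7,12-15
$ cat /sys/fs/cgroup/cpuset/libvirt/qemu/test/cpuset.mems
1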
Is this a configuration, a libvirt or a cgroup problem? :)
With mode='strict' you are telling QEMU that if it can't allocate
memory from the requested node, it should fail. Is it possible
that some of your NUMA nodes have insufficient memory free?
The combination of 'virsh capabilities' output and the results
of 'virsh freecell NODENUM' for each NUMA node will give an
indication of the allocation state.
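For example (illustrative numbers only; yours will differ):

$ virsh freecell 0
0: 49283072 KiB

$ virsh freecell 1
1: 102400 KiB

If the second node reports very little free memory, a strict allocation from
nodeset='1' will fail in exactly the way you describe.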
Daniel