On 04.02.2015 01:59, G. Richard Bellamy wrote:
As I mentioned, I got the instances to launch... but they're only
taking HugePages from "Node 0", when I believe my setup should pull
from both nodes.
[atlas]
http://sprunge.us/FSEf
[prometheus]
http://sprunge.us/PJcR
[pasting the interesting bits from both XMLs]
<domain type='kvm' id='2'>
  <name>atlas</name>
  <uuid>d9991b1c-2f2d-498a-9d21-51f3cf8e6cd9</uuid>
  <memory unit='KiB'>16777216</memory>
  <currentMemory unit='KiB'>16777216</currentMemory>
  <memoryBacking>
    <hugepages>
      <page size='2048' unit='KiB' nodeset='0'/>
    </hugepages>
    <nosharepages/>
  </memoryBacking>
  <!-- no numa pinning -->
</domain>
<domain type='kvm' id='3'>
  <name>prometheus</name>
  <uuid>dda7d085-701b-4d0a-96d4-584678104fb3</uuid>
  <memory unit='KiB'>16777216</memory>
  <currentMemory unit='KiB'>16777216</currentMemory>
  <memoryBacking>
    <hugepages>
      <page size='2048' unit='KiB' nodeset='2'/>
    </hugepages>
    <nosharepages/>
  </memoryBacking>
  <!-- again no numa pinning -->
</domain>
So, first of all, the @nodeset attribute of the <page/> element refers
to guest NUMA nodes, not host ones. And since you don't define any NUMA
nodes for your guests, it's useless. Side note - I wonder whether we
should make libvirt fail explicitly in this case.
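For illustration, a guest NUMA topology is defined under <cpu>; here is
a minimal sketch splitting your 16 GiB guest into two cells (the vCPU
ranges and cell sizes are hypothetical and need to match your actual
configuration):

<cpu>
  <numa>
    <!-- hypothetical: 8 vCPUs and two 8 GiB cells, summing to
         the 16777216 KiB of <memory> above -->
    <cell id='0' cpus='0-3' memory='8388608' unit='KiB'/>
    <cell id='1' cpus='4-7' memory='8388608' unit='KiB'/>
  </numa>
</cpu>
<memoryBacking>
  <hugepages>
    <!-- nodeset now meaningfully refers to the guest cells above -->
    <page size='2048' unit='KiB' nodeset='0-1'/>
  </hugepages>
</memoryBacking>

With cells defined like this, @nodeset on <page/> selects which guest
cells are backed by hugepages.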
Moreover, you haven't pinned your guests onto any host NUMA nodes. This
means it's up to the host kernel and its scheduler where the guest will
take its memory from, and subsequently its hugepages as well. I think
you want to add:
<numatune>
  <memory mode='strict' nodeset='0'/>
</numatune>
to the guest XMLs, where @nodeset refers to host NUMA nodes and tells
libvirt where the guest should be placed. There are other modes too, so
please see the documentation to tune the XML to match your use case
perfectly.
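Assuming your host really has two NUMA nodes (0 and 1), spreading the
two guests could look like this - one <numatune> per guest:

<!-- in atlas: pin memory to host node 0 -->
<numatune>
  <memory mode='strict' nodeset='0'/>
</numatune>

<!-- in prometheus: pin memory to host node 1 -->
<numatune>
  <memory mode='strict' nodeset='1'/>
</numatune>

Once the guests are running, 'virsh numatune <domain>' shows the
effective policy, and numastat against the qemu process should confirm
which host node the memory was actually allocated from.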
Michal