Richard W.M. Jones wrote:
beth kon wrote:
> Richard W.M. Jones wrote:
>
>> My results are a bit inconclusive. I have a machine here which
>> supposedly supports NUMA (2-socket, 2-core AMD with HyperTransport
>> and two separate banks of RAM).
>>
>> BIOS is _not_ configured to interleave memory. Other BIOS settings
>> lead me to suppose that NUMA is enabled (or at least not disabled).
>>
>> Booting with Daniel's Xen & kernel does not print any messages about
>> NUMA being enabled or disabled (see the attached messages).
>>
>> # numactl --show
>> physcpubind: 0 1 2 3
>> No NUMA support available on this system.
>>
> Are you setting "numa=on dom0_mem=512m" on the kernel line in grub?
> I'm not sure whether the dom0_mem=512m should be required, but we were
> having problems booting with numa=on without it.
Aha, the results are quite a bit better now :-)
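For reference, the grub stanza ends up looking something like the sketch
below (treat the Xen/kernel filenames and the root device as placeholders
for whatever your distro installs):

title Xen (numa=on)
        root (hd0,0)
        kernel /boot/xen.gz numa=on dom0_mem=512m
        module /boot/vmlinuz-2.6.18-xen ro root=/dev/sda1
        module /boot/initrd-2.6.18-xen.img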
virsh shows the correct topology:
<topology>
  <cells num='2'>
    <cell id='0'>
      <cpus num='2'>
        <cpu id='0'/>
        <cpu id='1'/>
      </cpus>
    </cell>
    <cell id='1'>
      <cpus num='2'>
        <cpu id='2'/>
        <cpu id='3'/>
      </cpus>
    </cell>
  </cells>
</topology>
numactl --show still doesn't work (is support missing in the dom0 kernel,
or is numactl just incompatible with Xen?)
'virsh freecell 0' and 'virsh freecell 1' show numbers that look
plausible (though I have no idea whether they're actually correct).
Can I pin a domain or its vCPUs to a node to check that its memory
really gets allocated there?
Rich.
From /etc/xen/xmexample1:
# List of which CPUS this domain is allowed to use, default Xen picks
#cpus = "" # leave to Xen to pick
#cpus = "0" # all vcpus run on CPU0
#cpus = "0-3,5,^1" # run on cpus 0,2,3,5
If you start a domU with a config set up this way, you can restrict it to
the CPUs of node 0 or node 1, then compare 'virsh freecell' for each node
before and after starting the domain to confirm that its memory was taken
from the right node.
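For example, something along these lines (going by the topology you
pasted, where cpus 0 and 1 are node 0; the config path and memory size
below are just placeholders):

# in the domU config (a copy of xmexample1, say):
cpus   = "0-1"     # keep the vcpus on node 0's cpus
memory = 256

# check free memory per node, start the domU, then check again:
virsh freecell 0
virsh freecell 1
xm create /etc/xen/numa-test
virsh freecell 0
virsh freecell 1

If the NUMA-aware allocation is working, freecell 0 should drop by
roughly the domU's memory while freecell 1 stays more or less unchanged.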
--
Elizabeth Kon (Beth)
IBM Linux Technology Center
Open Hypervisor Team
email: eak@us.ibm.com