On 08/19/2011 16:09, Bharata B Rao wrote:
> On Fri, Aug 19, 2011 at 12:55 PM, Osier Yang <jyang@redhat.com> wrote:
>> On 08/19/2011 14:35, Bharata B Rao wrote:
>>> How about something like this? (OPTION 1)
>>>
>>> <cpu>
>>>   ...
>>>   <numa nodeid='node' cpus='cpu[-cpu]' mem='size'/>
>>>   ...
>>> </cpu>
>>>
>> Libvirt already supports NUMA settings (both CPU and memory) on
>> the host, but yes, nothing for NUMA settings inside the guest yet.
>>
>> We discussed the XML once when adding support for NUMA memory
>> settings on the host, and finally chose to introduce a new XML node
>> for it, keeping in mind that support for NUMA settings inside the
>> guest may be added one day. The XML is:
>>
>> <numatune>
>>   <memory mode="strict" nodeset="1-4,^3"/>
>> </numatune>
> But this only specifies the host NUMA policy that should be used
> for guest VM processes.
Yes.
>> So, personally, I think the new XML should be inside "<numatune>"
>> as a child node.
>>
>>
>>> And we could specify multiple such lines, one for each node.
>>>
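>>> For example, a two-node guest might look like this (a sketch
>>> following the OPTION 1 syntax above; node ids, cpu ranges and
>>> memory sizes are placeholder values):
>>>
>>> <cpu>
>>>   ...
>>>   <numa nodeid='0' cpus='0-1' mem='512'/>
>>>   <numa nodeid='1' cpus='2-3' mem='512'/>
>>>   ...
>>> </cpu>
>>>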
>>> -numa and -smp options in qemu do not work all that well, since
>>> they are parsed independently of each other: one could specify a
>>> cpu set with the -numa option that is incompatible with the
>>> sockets, cores and threads specified by the -smp option. This
>>> should be fixed in qemu, but given that such a problem has been
>>> observed, should libvirt tie the specification of numa and smp
>>> (sockets, threads, cores) together, so that one is forced to
>>> specify only valid combinations of nodes and cpus in libvirt?
>>>
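>>> For instance (a hypothetical invocation; mem sizes in MB), qemu
>>> accepts
>>>
>>>   qemu -smp 2,sockets=1,cores=2,threads=1 \
>>>        -numa node,nodeid=0,cpus=0-1,mem=512 \
>>>        -numa node,nodeid=1,cpus=2-3,mem=512
>>>
>>> even though -smp creates only cpus 0-1, so cpus=2-3 on node 1
>>> refers to vcpus that do not exist.
>>>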
>>> Maybe something like this: (OPTION 2)
>>>
>>> <cpu>
>>>   ...
>>>   <topology sockets='1' cores='2' threads='1'
>>>             nodeid='0' cpus='0-1' mem='size'/>
>>>   <topology sockets='1' cores='2' threads='1'
>>>             nodeid='1' cpus='2-3' mem='size'/>
>>>   ...
>>> </cpu>
>> This will cause us to have 3 places for NUMA settings:
>> one is <numatune>,
> As I observed above, this controls the NUMA policy of the guest VM
> threads on the host.
Yes, I know what you mean.
>> another is "<vcpu>",
> vcpu/cpuset specifies how vcpu threads should be pinned on the host.
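> (For example, if I recall the syntax correctly, something like
> <vcpu cpuset='0-3'>4</vcpu> pins all 4 vcpus to host CPUs 0-3.)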
>> and this one.
> I think what we are addressing here is a bit different from the
> above two. Here we are actually trying to _define_ the NUMA topology
> of the guest, while via the other capabilities (numatune, vcpu) we
> only control the cpu and memory bindings of vcpu threads on the
> host.
> Hence I am not sure if <numatune> is the right place for defining
> the guest NUMA topology, which btw should be independent of the host
> topology.
Maybe something like:
<numatune>
  <guest>
    ......
  </guest>
</numatune>
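That is, something along these lines (just a sketch, borrowing the
attributes from your OPTION 1; all values are placeholders):

<numatune>
  <memory mode="strict" nodeset="1-4,^3"/>
  <guest>
    <numa nodeid='0' cpus='0-1' mem='512'/>
    <numa nodeid='1' cpus='2-3' mem='512'/>
  </guest>
</numatune>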
> Thanks for your response.
> Regards,
> Bharata.