On 05/12/2011 06:45 AM, Daniel P. Berrange wrote:
> On Thu, May 12, 2011 at 06:22:49PM +0800, Osier Yang wrote:
> > Hi, All
> >
> > This series adopts Daniel's suggestion on v1, using libnuma
> > instead of invoking numactl to set the NUMA policy. It adds
> > support for the "interleave" and "preferred" modes, in addition
> > to the "strict" mode supported in v1.
> >
> > The new XML looks like:
> >
> > <numatune>
> >   <memory model="interleave" nodeset="+0-4,8-12"/>
> > </numatune>
> >
> > I kept the numactl nodeset syntax for "nodeset", as I think the
> > purpose of adding NUMA tuning support is to serve NUMA users,
> > and keeping the syntax the same as numactl's will be more
> > familiar to them.
> Compatibility with numactl syntax is an explicit non-goal.
> numactl is just one platform-specific impl. Compatibility
> with numactl syntax is of no interest to the ESX or VirtualBox
> drivers. The libvirt NUMA syntax should be using other
> existing libvirt XML as the design compatibility target.
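As an aside, the numactl-style nodeset grammar under discussion is easy to parse mechanically. Below is a rough Python sketch, purely for illustration (it is not libvirt's actual parser, and it only flags the relative "+" prefix rather than resolving it against the task's cpuset):

```python
def parse_nodeset(spec):
    """Parse a numactl-style nodeset string such as "+0-4,8-12".

    Returns (relative, nodes): `relative` is True when the spec carries
    the leading "+" (which numactl treats as relative to the current
    cpuset); `nodes` is the set of node numbers named by the
    comma-separated list of single nodes and inclusive ranges.
    """
    relative = spec.startswith("+")
    if relative:
        spec = spec[1:]
    nodes = set()
    for part in spec.split(","):
        lo, sep, hi = part.partition("-")
        if sep:
            # inclusive range, e.g. "8-12"
            nodes.update(range(int(lo), int(hi) + 1))
        else:
            # single node, e.g. "3"
            nodes.add(int(lo))
    return relative, nodes
```

So "+0-4,8-12" comes back as a relative set covering nodes 0-4 and 8-12.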
I won't argue the semantics of the XML with you, but please keep
in mind that one of the main differences between a numactl-like
mechanism and taskset is that the NUMA mechanisms also let you
bind memory to specific NUMA nodes, as well as specify the
access type.
So from the outside looking in, keeping things in terms of cpusets
does not seem to be in full agreement with the RFE for NUMA support.
I would think that the specification of NUMA binding needs to
include NUMA nodes and specify memory bindings as well as the
access type. From a performance perspective, support for true
NUMA binding is the last hurdle keeping libvirt from being
used in high-performance situations.
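For illustration only, one way a node-plus-mode specification could be carried in the XML might be the following (the element and attribute names here are hypothetical, not actual libvirt syntax):

```xml
<!-- Hypothetical sketch only; names are illustrative, not libvirt's
     actual schema. -->
<numatune>
  <!-- bind guest memory to host nodes 0-1, with a strict access type -->
  <memory model="strict" nodeset="0-1"/>
  <!-- pin vcpus by NUMA node rather than by individual cpu -->
  <cpu nodeset="0-1"/>
</numatune>
```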
I think that specifying things in terms of nodes instead of
cpus will make it easier for the end user. So I guess I need
to withdraw the part about not arguing XML...
Thanks for your time,
-mark
--
Mark Wagner
Principal SW Engineer - Performance
Red Hat