On Thu, May 05, 2011 at 05:38:27PM +0800, Osier Yang wrote:
> Currently we only want to use the "membind" function of numactl, but
> perhaps other functions in the future, so introduce the element
> "<numatune>"; future NUMA-tuning-related XML should go into it.
> ---
>  docs/formatdomain.html.in |   17 +++++++++++++++++
>  docs/schemas/domain.rng   |   20 ++++++++++++++++++++
>  2 files changed, 37 insertions(+), 0 deletions(-)
> diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in
> index 5013c48..6da6465 100644
> --- a/docs/formatdomain.html.in
> +++ b/docs/formatdomain.html.in
> @@ -288,6 +288,9 @@
>      <min_guarantee>65536</min_guarantee>
>    </memtune>
>    <vcpu cpuset="1-4,^3,6" current="1">2</vcpu>
> +  <numatune>
> +    <membind nodeset="1,2,!3-6">
> +  </numatune>
I don't think we should be creating a new <numatune> element here, since
it does not actually cover all aspects of NUMA tuning. We already have
CPU NUMA pinning in the separate <vcpu> element. NUMA memory pinning
should likely go in either the <memtune> or <memoryBacking> element,
probably the latter.
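As a rough sketch of what that might look like (the placement and the
<membind> element itself are only illustrative here, not an agreed
design):

    <memoryBacking>
      <!-- hypothetical placement; element/attribute names illustrative -->
      <membind nodeset="1,2,!3-6"/>
    </memoryBacking>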
Also, it is not very nice to use a different negation syntax for the
vCPU specification vs the memory node specification: "^3" vs "!3".
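Reusing the existing "^" negation from cpuset would keep the two
consistent, e.g. (values illustrative):

    cpuset="1-4,^3,6"      (existing <vcpu> syntax)
    nodeset="1,2,^3-6"     (same negation style)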
Looking to the future, we may want to consider how we'd allow host NUMA
mapping on a fine-grained basis, per guest NUMA node. e.g. it is possible
with QEMU to actually define a guest-visible NUMA topology for the
virtual CPUs and memory using
-numa node[,mem=size][,cpus=cpu[-cpu]][,nodeid=node]
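For instance (memory sizes in MB and CPU ranges purely illustrative):

    qemu-system-x86_64 ... \
        -numa node,mem=512,cpus=0-1,nodeid=0 \
        -numa node,mem=512,cpus=2-3,nodeid=1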
We don't support that yet, which is something we ought to do. At that
point you probably also want to be able to map guest NUMA nodes to host
NUMA nodes.
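To make that concrete, one invented shape such a mapping might take
(every element and attribute name below is illustrative, not a
proposal):

    <cpu>
      <numa>
        <!-- "hostnode" is an invented attribute binding a guest
             cell to a host NUMA node -->
        <cell cpus="0-1" memory="524288" hostnode="1"/>
        <cell cpus="2-3" memory="524288" hostnode="2"/>
      </numa>
    </cpu>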
Regards,
Daniel
--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|