
On Fri, May 30, 2014 at 10:14:17AM +0100, Daniel P. Berrange wrote:
> On Thu, May 29, 2014 at 10:32:46AM +0200, Michal Privoznik wrote:
> > A PCI device can be associated with a specific NUMA node. Later, when
> > a guest is pinned to one NUMA node, a PCI device assigned to it may
> > reside on a different NUMA node. This makes DMA transfers travel
> > across nodes and thus results in suboptimal performance. We should
> > expose the NUMA node locality for PCI devices so management
> > applications can make better decisions.
> >
> > Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
> > ---
> >
> > Notes:
> >     All the machines I have tried this on had only -1 in the
> >     numa_node file. From the kernel sources it seems that this is
> >     the default, so I'm not printing the <numa/> element into the
> >     XML in this case. But I'd like to hear your opinion.
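
For reference, the locality comes from the standard sysfs attribute
(/sys/bus/pci/devices/<addr>/numa_node), so the skip-on-negative logic
boils down to something like this (illustrative sketch only, not the
actual patch; the function name and XML formatting are made up here):

  #include <stdio.h>

  /* Read the NUMA node of a PCI device from sysfs and emit a
   * <numa/> element only when locality is actually known.
   * Returns -1 if the attribute can't be read. */
  static int
  print_pci_numa_node(const char *sysfs_path)
  {
      char path[1024];
      FILE *fp;
      int node = -1; /* matches the kernel's NUMA_NO_NODE default */

      snprintf(path, sizeof(path), "%s/numa_node", sysfs_path);
      if (!(fp = fopen(path, "r")))
          return -1;
      if (fscanf(fp, "%d", &node) != 1)
          node = -1;
      fclose(fp);

      if (node >= 0)
          printf("    <numa node='%d'/>\n", node);
      return 0;
  }

e.g. print_pci_numa_node("/sys/bus/pci/devices/0000:01:00.0") would
print nothing on the machines mentioned above, since they report -1.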

> Yes, I believe '-1' means that there is no NUMA locality info
> available for the device, so it makes sense to skip this.
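
That is easy to check on a given host, since the attribute is readable
directly (the device address below is just an example):

  $ cat /sys/bus/pci/devices/0000:01:00.0/numa_node
  -1

A machine whose ACPI tables do carry locality info reports the actual
node number instead.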

Confirmed in the kernel source:

  include/linux/numa.h:#define NUMA_NO_NODE     (-1)

It is used when the ACPI tables don't specify any NUMA node for the
PCI device, or when the NUMA node is not online.

Regards,
Daniel

--
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|