On Fri, Jun 06, 2014 at 01:09:58PM +0200, Michal Privoznik wrote:
A PCI device can be associated with a specific NUMA node. Later, when
a guest is pinned to one NUMA node, the PCI device assigned to it may
be attached to a different NUMA node. This makes DMA transfers travel
across nodes and thus results in suboptimal performance. We should
expose the NUMA node locality for PCI devices so management
applications can make better decisions.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
---
diff --git a/src/node_device/node_device_udev.c b/src/node_device/node_device_udev.c
index 9a951d9..8e98ad2 100644
--- a/src/node_device/node_device_udev.c
+++ b/src/node_device/node_device_udev.c
@@ -493,6 +493,13 @@ static int udevProcessPCI(struct udev_device *device,
         goto out;
     }
+    if (udevGetIntSysfsAttr(device,
+                            "numa_node",
+                            &data->pci_dev.numa_node,
+                            10) == PROPERTY_ERROR) {
+        goto out;
Will this result in an error if the 'numa_node' file does not exist
in sysfs? I wouldn't be surprised if older kernels lack it.
ACK, if that doesn't cause an error on missing numa_node.
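For illustration only (not part of the patch or of libvirt), here is a
minimal standalone sketch of reading a PCI device's numa_node attribute
straight from sysfs, treating a missing file as "no locality info"
rather than an error, which is the behaviour being asked for above. The
helper name and the PCI address are placeholders:

#include <stdio.h>
#include <stdlib.h>
#include <errno.h>

/* Return the NUMA node of a PCI device, or -1 if the kernel does not
 * expose one (older kernels lack the numa_node attribute entirely). */
static int
pci_numa_node(const char *pci_addr)
{
    char path[256];
    FILE *fp;
    int node = -1;

    snprintf(path, sizeof(path),
             "/sys/bus/pci/devices/%s/numa_node", pci_addr);

    fp = fopen(path, "r");
    if (!fp) {
        if (errno == ENOENT)
            return -1;          /* attribute missing: not an error */
        perror(path);           /* genuine I/O problem */
        return -1;
    }

    if (fscanf(fp, "%d", &node) != 1)
        node = -1;              /* unparsable content, e.g. empty file */

    fclose(fp);
    return node;                /* kernel itself reports -1 for "no node" */
}

int
main(void)
{
    /* placeholder PCI address; substitute a real one from lspci -D */
    printf("numa_node: %d\n", pci_numa_node("0000:00:1f.0"));
    return 0;
}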
Regards,
Daniel
-- 
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|