[Libvir] RFC: requesting APIs for host physical resource discovery

The current libvirt APIs allow the host's physical resources to be split up and allocated to guest domains, however there is no way to discover what the available host resources actually are. Thus I would like to suggest the inclusion of new APIs to enable host resource discovery. As a starting point I'd like to be able to query the following information:

* Number of physical CPUs - ability to enumerate the CPUs in the host, both those currently present and the theoretical maximum (to take account of hotplug).
* Amount of RAM - actual physical RAM present, and that available for guest usage (e.g. discounting that reserved by a hypervisor or equivalent).
* CPU relationship - i.e. ability to distinguish between CPUs which are hyperthread siblings, on the same core, or on separate sockets.

Alongside these basic queries it would be desirable to add a further resource management API to allow for setting of a guest domain's CPU affinity, i.e. the ability to control which CPUs the VMM is allowed to schedule a domain on.

Once this first basic set of capabilities for resource discovery is provided, then I believe it will be necessary to provide some more advanced queries, in particular:

* NUMA topology - ability to enumerate NUMA nodes, the CPUs associated with each node & the RAM range mapped to that node.

Regards,
Dan.
--
|=- Red Hat, Engineering, Emerging Technologies, Boston. +1 978 392 2496 -=|
|=- Perl modules: http://search.cpan.org/~danberr/ -=|
|=- Projects: http://freshmeat.net/~danielpb/ -=|
|=- GnuPG: 7D3B9505 F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 -=|
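To make the requested queries concrete, here is a minimal illustrative sketch in C of what such discovery calls could look like. None of these names (virHostGetCpuCount, virHostGetMemory, virHostGetCpuTopology, virHostCpuTopology) existed in libvirt at the time of this post; only virConnectPtr is a real libvirt type. The signatures are assumptions made purely to show the shape of the request, not a proposal from the original author.

/* Hypothetical sketch only - names and signatures are illustrative,
 * not part of libvirt; they mirror the queries listed in the RFC. */
#include <libvirt/libvirt.h>

typedef struct _virHostCpuTopology {
    unsigned int sockets;   /* CPU sockets in the host */
    unsigned int cores;     /* cores per socket */
    unsigned int threads;   /* hyperthread siblings per core */
} virHostCpuTopology;

/* Enumerate host CPUs: number currently present and the theoretical
 * maximum (to account for CPU hotplug). */
int virHostGetCpuCount(virConnectPtr conn,
                       unsigned int *present, unsigned int *maximum);

/* Physical RAM present, and the amount usable by guests (discounting
 * memory reserved by the hypervisor or equivalent). */
int virHostGetMemory(virConnectPtr conn,
                     unsigned long *totalKB, unsigned long *availableKB);

/* CPU relationships: distinguish hyperthread siblings, cores, sockets. */
int virHostGetCpuTopology(virConnectPtr conn, virHostCpuTopology *topo);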

I agree to some extent, but I might suggest that discovery and exposure of host system physical resources may be better left to other APIs, since such functionality has widespread uses outside of virtualization management.

Since you bring it up, we - Open Source CIM interfaces for Xen, via libvirt - are actually having to face this exact problem today, namely: what are good, standardized, cross-platform, cross-distro Linux interfaces for exposing the physical hardware info necessary for virtual resource allocation? Right now we have some Open Source Linux CIM providers exposing h/w info mined out of, say, /proc, but the architecture and distro ifdefs are getting out of hand... You are quite correct in stating this requirement, and there seem to be multiple candidates (SMBIOS, HPI, SNMP, etc.) but I don't have a good answer. My concern would be trying to add and solve this problem within the scope of libvirt.

- Gareth

Dr. Gareth S. Bestor
IBM Linux Technology Center
M/S DES2-01, 15300 SW Koll Parkway, Beaverton, OR 97006
503-578-3186, T/L 775-3186, Fax 503-578-3186

On Thu, Mar 23, 2006 at 12:35:12PM -0800, Gareth S Bestor wrote:
> I agree to some extent, but I might suggest that discovery and exposure of host system physical resources may be better left to other APIs, since such functionality has widespread uses outside of virtualization management.
>
> Since you bring it up, we - Open Source CIM interfaces for Xen, via libvirt - are actually having to face this exact problem today, namely: what are good, standardized, cross-platform, cross-distro Linux interfaces for exposing the physical hardware info necessary for virtual resource allocation? Right now we have some Open Source Linux CIM providers exposing h/w info mined out of, say, /proc, but the architecture and distro ifdefs are getting out of hand... You are quite correct in stating this requirement, and there seem to be multiple candidates (SMBIOS, HPI, SNMP, etc.) but I don't have a good answer. My concern would be trying to add and solve this problem within the scope of libvirt.
I don't think it is necessary (or desirable) to solve the general resource discovery problem within libvirt - I'd limit the scope to only those resources where there is a need for consistency with the VMM's view of resources.

Taking CPU enumeration as an example: if an app is to later instruct a VMM to restrict scheduling of a domain's VCPUs to a particular set of host CPUs, then it is critical to ensure the app's enumeration of host CPUs matches the way the VMM enumerates them, e.g. CPU #1 in the VMM's world view may appear as CPU #3 in /proc/cpuinfo. The most reliable way to avoid this and guarantee consistency of view would be to have the app use the VMM for enumerating the CPUs. Now, while the app could talk to the VMM directly, it then loses the two key benefits of libvirt - isolation from VMM comms protocol changes and isolation from the underlying virtualization technology. Thus I think libvirt needs at least some limited resource discovery capabilities.

Regards,
Dan.
--
|=- Red Hat, Engineering, Emerging Technologies, Boston. +1 978 392 2496 -=|
|=- Perl modules: http://search.cpan.org/~danberr/ -=|
|=- Projects: http://freshmeat.net/~danielpb/ -=|
|=- GnuPG: 7D3B9505 F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 -=|
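To illustrate the consistency point made above: the kind of affinity call being described takes a CPU bitmap whose bit positions only make sense in the VMM's own enumeration of host CPUs. The sketch below uses virDomainPinVcpu, the pinning call libvirt later gained; it did not exist when this thread was written, so treat it as an illustration of the idea rather than the API under discussion.

#include <string.h>
#include <libvirt/libvirt.h>

/* Illustrative sketch: pin VCPU 0 of a domain to host CPUs #1 and #3
 * *as the VMM enumerates them*.  If the indices came from /proc/cpuinfo
 * instead of the VMM, the bits could land on the wrong physical CPUs -
 * exactly the consistency problem described above. */
static int pin_vcpu0_sketch(virDomainPtr dom, unsigned int hostCpus)
{
    unsigned char cpumap[16];            /* room for up to 128 host CPUs */
    int maplen = (hostCpus + 7) / 8;     /* one bit per host CPU */

    if (maplen > (int)sizeof(cpumap))
        return -1;

    memset(cpumap, 0, maplen);
    cpumap[1 / 8] |= 1 << (1 % 8);       /* host CPU #1, VMM numbering */
    cpumap[3 / 8] |= 1 << (3 % 8);       /* host CPU #3, VMM numbering */

    return virDomainPinVcpu(dom, 0, cpumap, maplen);
}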

On Thu, Mar 23, 2006 at 09:03:55PM +0000, Daniel P. Berrange wrote:
> On Thu, Mar 23, 2006 at 12:35:12PM -0800, Gareth S Bestor wrote:
> > I agree to some extent, but I might suggest that discovery and exposure of host system physical resources may be better left to other APIs, since such functionality has widespread uses outside of virtualization management.
> >
> > Since you bring it up, we - Open Source CIM interfaces for Xen, via libvirt - are actually having to face this exact problem today, namely: what are good, standardized, cross-platform, cross-distro Linux interfaces for exposing the physical hardware info necessary for virtual resource allocation? Right now we have some Open Source Linux CIM providers exposing h/w info mined out of, say, /proc, but the architecture and distro ifdefs are getting out of hand... You are quite correct in stating this requirement, and there seem to be multiple candidates (SMBIOS, HPI, SNMP, etc.) but I don't have a good answer. My concern would be trying to add and solve this problem within the scope of libvirt.
>
> I don't think it is necessary (or desirable) to solve the general resource discovery problem within libvirt - I'd limit the scope to only those resources where there is a need for consistency with the VMM's view of resources.
Okay, here is what I added: http://libvirt.org/html/libvirt-libvirt.html#virNodeInfo

Structure virNodeInfo
struct _virNodeInfo {
    char model[32]        : string indicating the CPU model
    unsigned long memory  : memory size in megabytes
    unsigned int cpus     : the number of active CPUs
    unsigned int mhz      : expected CPU frequency
    unsigned int nodes    : the number of NUMA cells, 1 for uniform
    unsigned int sockets  : number of CPU sockets per node
    unsigned int cores    : number of cores per socket
    unsigned int threads  : number of threads per core
}

and http://libvirt.org/html/libvirt-libvirt.html#virNodeGetInfo

int virNodeGetInfo (virConnectPtr conn, virNodeInfoPtr info)

This limits the API purely to hardware information. That information can be extracted from Xend, and could potentially be obtained directly (I didn't check whether there are hypervisor calls for this; it could be done with /proc and heuristics, but I would rather avoid that). This doesn't cover topology, nor maps of online/offline processors; that sounds premature to me at this point and could be added as separate APIs later. It's checked into CVS and there are Python bindings for it, see python/tests/node.py.

Daniel

--
Daniel Veillard      | Red Hat  http://redhat.com/
veillard@redhat.com  | libxml GNOME XML XSLT toolkit  http://xmlsoft.org/
http://veillard.com/ | Rpmfind RPM search engine  http://rpmfind.net/
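A minimal usage sketch of the call announced above, assuming the standard virConnectOpenReadOnly/virConnectClose connection calls; it simply prints the fields of virNodeInfo as filled in by virNodeGetInfo.

#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn;
    virNodeInfo info;

    /* Read-only connection to the hypervisor (NULL selects the default). */
    conn = virConnectOpenReadOnly(NULL);
    if (conn == NULL) {
        fprintf(stderr, "failed to connect to the hypervisor\n");
        return 1;
    }

    /* Host hardware information as seen by the VMM. */
    if (virNodeGetInfo(conn, &info) < 0) {
        fprintf(stderr, "virNodeGetInfo failed\n");
        virConnectClose(conn);
        return 1;
    }

    printf("model:    %s\n", info.model);
    printf("memory:   %lu\n", info.memory);
    printf("cpus:     %u (%u MHz)\n", info.cpus, info.mhz);
    printf("topology: %u node(s), %u socket(s)/node, %u core(s)/socket, %u thread(s)/core\n",
           info.nodes, info.sockets, info.cores, info.threads);

    virConnectClose(conn);
    return 0;
}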
participants (3)
- Daniel P. Berrange
- Daniel Veillard
- Gareth S Bestor