[Libvir] VCPU mapping into the XML

Hello,

I want to add VCPU mapping information into the XML, because we cannot manage VCPU mapping automatically when a domain starts or reboots. But the function described in the "TODO" of the following patch, which I wanted, has not been accepted.

https://www.redhat.com/archives/libvir-list/2006-August/msg00015.html

If I provided a patch managing VCPU mapping in the XML, would you accept it?

Regards,
Yuzuru Asano.

On Tue, Feb 27, 2007 at 06:40:54PM +0900, ASANO Yuzuru wrote:
Hello,
I want to add VCPU mapping information into the XML, because we cannot manage VCPU mapping automatically when a domain starts or reboots.
But the function described in the "TODO" of the following patch, which I wanted, has not been accepted.
https://www.redhat.com/archives/libvir-list/2006-August/msg00015.html
If I provided a patch managing VCPU mapping in the XML, would you accept it?
The VCPU mapping, scheduler parameters, and similar types of data are all examples of 'runtime policy' you'd apply to a guest domain. The XML format, meanwhile, expresses a guest's virtual hardware definition. IMHO we should not mix runtime policy with hardware definition, and thus I'd say it is not appropriate to include VCPU mapping in the XML.

Libvirt is really just the lowest level in what I see as a stack of tools for managing virtual machines. Above libvirt I'd expect to see some form of 'policy manager' which defines/controls things such as VCPU mapping or scheduler parameters, and even manages when a VM runs at all. One simple policy manager might just apply a statically defined VCPU mapping when a new guest starts up. A more advanced policy manager would collect resource utilization data, perform some analysis on it, and thus apply changes to the VCPU mapping periodically over the lifetime of a guest. Such VM policy management tools can already use the existing APIs for setting VCPU mapping.

Anyone with arguments for/against including VCPU info in the XML, do feel free to (dis-)agree with me though...

Regards,
Dan.
--
|=- Red Hat, Engineering, Emerging Technologies, Boston. +1 978 392 2496 -=|
|=- Perl modules: http://search.cpan.org/~danberr/ -=|
|=- Projects: http://freshmeat.net/~danielpb/ -=|
|=- GnuPG: 7D3B9505 F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 -=|
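To illustrate Dan's point that policy tools can already use the existing APIs: the sketch below builds the boolean-tuple cpumap that libvirt's VCPU pinning call expects. The helper is standalone; the commented usage against a live hypervisor is hypothetical (the domain name "mydomain" is a placeholder).

```python
# Minimal sketch of "use the existing APIs" for VCPU pinning. The cpumap
# format -- a tuple of booleans, one entry per physical CPU -- is what
# the libvirt Python binding's Domain.pinVcpu (virDomainPinVcpu) expects.

def cpulist_to_cpumap(cpus, ncpus):
    """Turn a list of allowed pCPU indices into a boolean cpumap."""
    allowed = set(cpus)
    return tuple(i in allowed for i in range(ncpus))

# Hypothetical usage against a running hypervisor (not executed here):
#
#   import libvirt
#   conn = libvirt.open(None)               # connect to local hypervisor
#   dom = conn.lookupByName("mydomain")     # placeholder domain name
#   ncpus = conn.getInfo()[2]               # host pCPU count
#   dom.pinVcpu(0, cpulist_to_cpumap([0, 1], ncpus))  # pin VCPU 0 to pCPUs 0,1
```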

Daniel P. Berrange wrote:
Libvirt is really just the lowest level in what I see as a stack of tools for managing virtual machines. Above libvirt I'd expect to see some form of 'policy manager' which defines/controls things such as VCPU mapping or scheduler parameters, and even manages when a VM runs at all. One simple policy manager might just apply a statically defined VCPU mapping when a new guest starts up. A more advanced policy manager would collect resource utilization data, perform some analysis on it, and thus apply changes to the VCPU mapping periodically over the lifetime of a guest. Such VM policy management tools can already use the existing APIs for setting VCPU mapping.
I'd be very excited if we could get to a first cut of a simple policy manager in the fedora 8 timeframe. At a minimum, I would think such a thing would need to be able to do the following:

1. Store policy information about a guest. At a minimum we would want to store cpu pin/weight/cap information; beyond that, dependency information (don't start me until some other VM is up and running).

2. Retrieve policy information on request. So for example we might want to tell libvirt to ask the manager for policy information before starting a guest, or before shutting one down.

Just these features would be great for a first cut. Then going forward, we could start thinking about receiving monitoring information and responding to events. The tricky thing, it seems to me, is going to be designing the thing in such a way that it won't take forever to implement the simple part now, but can still be extended to do more later.

Further thoughts, anyone?

--Hugh
--
Red Hat Virtualization Group http://redhat.com/virtualization
Hugh Brock | virt-manager http://virt-manager.org
hbrock@redhat.com | virtualization library http://libvirt.org
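The two minimum features Hugh lists (store policy, retrieve it on request) could be prototyped as a small store keyed by guest name. Everything here is hypothetical — `PolicyStore` and its pin/weight/cap keys are an illustrative sketch, not an existing libvirt interface.

```python
# Sketch of a minimal policy store: set_policy covers feature 1 (store
# pin/weight/cap plus dependency info), get_policy covers feature 2
# (retrieve on request, e.g. before libvirt starts a guest).
import json

class PolicyStore:
    def __init__(self, path=None):
        self.path = path          # optional JSON file for persistence
        self.policies = {}

    def set_policy(self, guest, pin=None, weight=None, cap=None, depends_on=None):
        """Store cpu pin/weight/cap and dependency info for a guest."""
        self.policies[guest] = {
            "pin": pin, "weight": weight, "cap": cap,
            "depends_on": depends_on or [],   # VMs that must be up first
        }

    def get_policy(self, guest):
        """Retrieve policy on request; None if no policy is defined."""
        return self.policies.get(guest)

    def save(self):
        """Persist the store to disk, if a path was given."""
        if self.path:
            with open(self.path, "w") as f:
                json.dump(self.policies, f)
```

Keeping the store dumb like this matches Hugh's extensibility worry: monitoring and event handling can later sit in front of it without changing the storage interface.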

On Tue, Feb 27, 2007 at 02:07:14PM -0500, Hugh Brock wrote:
Daniel P. Berrange wrote:
Libvirt is really just the lowest level in what I see as a stack of tools for managing virtual machines. Above libvirt I'd expect to see some form of 'policy manager' which defines/controls things such as VCPU mapping or scheduler parameters, and even manages when a VM runs at all. One simple policy manager might just apply a statically defined VCPU mapping when a new guest starts up. A more advanced policy manager would collect resource utilization data, perform some analysis on it, and thus apply changes to the VCPU mapping periodically over the lifetime of a guest. Such VM policy management tools can already use the existing APIs for setting VCPU mapping.
I'd be very excited if we could get to a first cut of a simple policy manager in the fedora 8 timeframe. At a minimum, I would think such a thing would need to be able to do the following:
1. Store policy information about a guest. At a minimum we would want to store cpu pin/weight/cap information; beyond that, dependency information (don't start me until some other VM is up and running).
2. Retrieve policy information on request. So for example we might want to tell libvirt to ask the manager for policy information before starting a guest, or before shutting one down.
Just these features would be great for a first cut. Then going forward, we could start thinking about receiving monitoring information and responding to events. The tricky thing, it seems to me, is going to be designing the thing in such a way that it won't take forever to implement the simple part now, but can still be extended to do more later.
A couple of things:

- This doesn't need to be at all related to libvirt release dates, so if people want to experiment with policy management daemons it can be done today...
- Expect to throw away the first few versions - policy management is seriously non-trivial and I doubt anyone will get it right the first time. Better to prototype ideas quickly than to over-design it from the start.
- I expect it'll turn out that there are a few different ways to approach policy, and the idea of a single policy manager may well be impossible.
- There is no good way to collect, distribute & process resource utilization data at this time - traditional monitoring systems have all sorts of plugins for collecting data, but once collected the data is typically a black box: no formal API for external apps to get at the data for analysis.
Further thoughts, anyone?
I'd encourage anyone interested in it to experiment with / prototype any ideas as a project in their own right. The core libvirt APIs needed for such a system are all there...

Dan.
--
|=- Red Hat, Engineering, Emerging Technologies, Boston. +1 978 392 2496 -=|
|=- Perl modules: http://search.cpan.org/~danberr/ -=|
|=- Projects: http://freshmeat.net/~danielpb/ -=|
|=- GnuPG: 7D3B9505 F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 -=|

Hugh Brock wrote:
Daniel P. Berrange wrote:
Libvirt is really just the lowest level in what I see as a stack of tools for managing virtual machines. Above libvirt I'd expect to see some form of 'policy manager' which defines/controls things such as VCPU mapping or scheduler parameters, and even manages when a VM runs at all. One simple policy manager might just apply a statically defined VCPU mapping when a new guest starts up. A more advanced policy manager would collect resource utilization data, perform some analysis on it, and thus apply changes to the VCPU mapping periodically over the lifetime of a guest. Such VM policy management tools can already use the existing APIs for setting VCPU mapping.
I'd be very excited if we could get to a first cut of a simple policy manager in the fedora 8 timeframe. At a minimum, I would think such a thing would need to be able to do the following:
1. Store policy information about a guest. At a minimum we would want to store cpu pin/weight/cap information; beyond that, dependency information (don't start me until some other VM is up and running).
2. Retrieve policy information on request. So for example we might want to tell libvirt to ask the manager for policy information before starting a guest, or before shutting one down.
Just these features would be great for a first cut. Then going forward, we could start thinking about receiving monitoring information and responding to events. The tricky thing, it seems to me, is going to be designing the thing in such a way that it won't take forever to implement the simple part now, but can still be extended to do more later.
The features I would want to implement now are essentially included in the contents mentioned above. More precisely, I would particularly want to implement the following three features:

- A new domain started by libvirt has its policy information (cpu pin/weight/cap) set by the policy manager.
- A domain rebooted by libvirt keeps its configured policy information.
- Some domains can be given exclusive use of specified pCPUs, and the policy manager configures the other domains so they cannot use those pCPUs.

Are these proposed features enough for your first step? If you agree, I will start design & implementation.

Regards,
Yuzuru Asano.
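Asano's third feature (exclusive pCPU use) amounts to handing the privileged domains one cpumap and every other domain its complement. A minimal sketch, using the same boolean-tuple cpumap format that libvirt's pinVcpu call takes; the function name is hypothetical.

```python
# Sketch of exclusive pCPU assignment: the reserved pCPUs go only to
# the privileged domains, and all remaining domains get the complement,
# so they can never be scheduled onto the reserved pCPUs.

def exclusive_cpumaps(ncpus, reserved):
    """Return (cpumap for privileged domains, cpumap for all others)."""
    reserved = set(reserved)
    privileged = tuple(i in reserved for i in range(ncpus))
    others = tuple(i not in reserved for i in range(ncpus))
    return privileged, others

# A policy manager would then loop over running domains, calling
# dom.pinVcpu(vcpu, privileged) or dom.pinVcpu(vcpu, others) as
# appropriate for each domain's policy entry.
```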

ASANO Yuzuru wrote:
Hugh Brock wrote:
Daniel P. Berrange wrote:
Libvirt is really just the lowest level in what I see as a stack of tools for managing virtual machines. Above libvirt I'd expect to see some form of 'policy manager' which defines/controls things such as VCPU mapping or scheduler parameters, and even manages when a VM runs at all. One simple policy manager might just apply a statically defined VCPU mapping when a new guest starts up. A more advanced policy manager would collect resource utilization data, perform some analysis on it, and thus apply changes to the VCPU mapping periodically over the lifetime of a guest. Such VM policy management tools can already use the existing APIs for setting VCPU mapping.
I'd be very excited if we could get to a first cut of a simple policy manager in the fedora 8 timeframe. At a minimum, I would think such a thing would need to be able to do the following:
1. Store policy information about a guest. At a minimum we would want to store cpu pin/weight/cap information; beyond that, dependency information (don't start me until some other VM is up and running).
2. Retrieve policy information on request. So for example we might want to tell libvirt to ask the manager for policy information before starting a guest, or before shutting one down.
Just these features would be great for a first cut. Then going forward, we could start thinking about receiving monitoring information and responding to events. The tricky thing, it seems to me, is going to be designing the thing in such a way that it won't take forever to implement the simple part now, but can still be extended to do more later.
The features I would want to implement now are essentially included in the contents mentioned above. More precisely, I would particularly want to implement the following three features:
- A new domain started by libvirt has its policy information (cpu pin/weight/cap) set by the policy manager.
- A domain rebooted by libvirt keeps its configured policy information.
- Some domains can be given exclusive use of specified pCPUs, and the policy manager configures the other domains so they cannot use those pCPUs.
Are these proposed features enough for your first step? If you agree, I will start design & implementation.
Regards, Yuzuru Asano.
I think this is a good place to start. As Dan says in another email, we may wind up doing many implementations of this idea before we get it right. Once you have a design in mind, post it here for comments if you would.

Thanks!
--Hugh
--
Red Hat Virtualization Group http://redhat.com/virtualization
Hugh Brock | virt-manager http://virt-manager.org
hbrock@redhat.com | virtualization library http://libvirt.org

On Tue, Feb 27, 2007 at 02:04:22PM +0000, Daniel P. Berrange wrote:
On Tue, Feb 27, 2007 at 06:40:54PM +0900, ASANO Yuzuru wrote:
Hello,
I want to add VCPU mapping information into the XML, because we cannot manage VCPU mapping automatically when a domain starts or reboots.
But the function described in the "TODO" of the following patch, which I wanted, has not been accepted.
https://www.redhat.com/archives/libvir-list/2006-August/msg00015.html
If I provided a patch managing VCPU mapping in the XML, would you accept it?
The VCPU mapping, scheduler parameters, and similar types of data are all examples of 'runtime policy' you'd apply to a guest domain. The XML format, meanwhile, expresses a guest's virtual hardware definition. IMHO we should not mix runtime policy with hardware definition, and thus I'd say it is not appropriate to include VCPU mapping in the XML.
If I remember correctly, we selected XML because it supports namespaces, so a domain description can be extended with arbitrary third-party (non-libvirt) content.

Karel
--
Karel Zak <kzak@redhat.com>
participants (4)
- ASANO Yuzuru
- Daniel P. Berrange
- Hugh Brock
- Karel Zak