[Libvir] Proposed XML format for capabilities, with examples [long]

Lots of people had various things to say about the first capabilities
patches (see thread starting here:
https://www.redhat.com/archives/libvir-list/2007-March/msg00153.html).
So I thought I'd try to pull together ideas into a single thread, and post
some information about what this is trying to achieve and some example
proposed XML.

Motivation
----------

At the moment there is a considerable amount of conditional code in libvirt
clients. For example the following code from virt-manager
(src/virtManager/create.py):

  if self.connection.get_type() == "QEMU":                        # <-- 1
      self.window.get_widget("virt-method-pv").set_sensitive(False)
      self.window.get_widget("virt-method-fv").set_active(True)
      self.window.get_widget("virt-method-fv-unsupported").hide()
      self.window.get_widget("virt-method-fv-disabled").hide()
  else:
      self.window.get_widget("virt-method-pv").set_sensitive(True)
      self.window.get_widget("virt-method-pv").set_active(True)
      if virtinst.util.is_hvm_capable():                          # <-- 2
          self.window.get_widget("virt-method-fv").set_sensitive(True)
          self.window.get_widget("virt-method-fv-unsupported").hide()
          self.window.get_widget("virt-method-fv-disabled").hide()

This hopefully demonstrates three points which I'd like to fix:

Firstly, having code which requires knowledge of what a driver is capable
of (line 1 above) is not scalable as we add more and more types of
virtualisation (OpenVZ, VMWare, ...).

Secondly, the virtinst.util module does some probing to find out what the
local hardware is capable of (line 2 above), and that won't work over
remote connections.

Thirdly, although you can only really connect to one driver at a time
(except in the Xen case, but that's just weird), the <domain> description
required by virDomainCreateLinux is to some extent driver-specific. So you
need additional logic when creating domains, and for some of those (eg.
qemu/kqemu/kvm) it's non-trivial and depends on information that you "just
know" about the driver.
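For illustration, here is a hedged sketch of how such a conditional could
become capability-driven instead. The helper name is hypothetical; the
element names follow the XML format proposed later in this mail:

```python
import xml.etree.ElementTree as ET

def supported_virt_methods(capabilities_xml):
    # Instead of special-casing connection types like "QEMU", ask the
    # driver's capabilities XML which virtualisation methods it offers.
    root = ET.fromstring(capabilities_xml)
    methods = set()
    for arch in root.iter("guest_architecture"):
        if arch.find("paravirt") is not None:
            methods.add("paravirt")
        if arch.find("fullvirt") is not None:
            methods.add("fullvirt")
    return methods

# A driver that only offers full virtualisation (e.g. plain qemu):
caps = """<capabilities><guest_architectures>
  <guest_architecture><model>i686</model><fullvirt/></guest_architecture>
</guest_architectures></capabilities>"""
print(supported_virt_methods(caps))   # prints {'fullvirt'}
```

The UI code above would then enable or disable the paravirt/fullvirt
widgets based on membership in this set, with no per-driver knowledge.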
The proposed new API ("virConnectGetCapabilities") would return an XML
description of the capabilities of the driver / hypervisor. In the remote
case, probing would be done on the remote machine. In either case the idea
would be to remove conditional code from virt-manager, remove local
hardware probing from virtinst, and provide some description of what
optional features the <domain> XML supports.

In the next two sections I'll analyse the problem areas in the current
virtinst and virt-manager. You may want to skip them and go straight to
the example XML.

Analysis of virtinst
--------------------

The main issue is with virtinst/util.py which contains the following
functions which do local probing:

(1) is_pae_capable: host supports PAE?
    Uses Linux-specific /proc/cpuinfo
(2) is_hvm_capable: host supports HVM & enabled in Xen?
    Uses Xen-specific /sys/hypervisor/properties/capabilities
(3) is_kqemu_capable: [Linux] kernel supports kqemu?
    Uses Linux-specific /dev/kqemu
(4) is_kvm_capable: [Linux] kernel supports kvm?
    Uses Linux-specific /dev/kvm

In virtinst/FullVirtGuest.py:

(5) "emulator" and "loader" must be specified for Xen/QEMU and Xen guests
    respectively. They have architecture- and distro-specific paths which
    refer to files on the remote machine.

In virtinst/ParaVirtGuest.py:

(6) Uses "xm console" to connect to the console in the Xen case (but then
    this sort of paravirtualisation implies Xen).

In virtinst/DistroManager.py:

(7) We download the kernel and boot.iso files, and download and modify the
    initrd file. [Comment: I'm not sure that capabilities can solve this,
    but it does need to be fixed properly for the remote case to work].

Analysis of virt-manager
------------------------

In src/virtManager/create.py:

(8) Tries to detect if VT-x/AMD-V is present in the hardware but disabled
    in the BIOS by doing some Xen-specific heuristics.

(9) Paravirt and fullvirt dialogs are enabled based on the type of the
    connection (eg. if self.connection.get_type() == "QEMU" it disables
    paravirt).

(10) The CPU architecture combo box is enabled only for type == "QEMU".

(11) Similarly, the "accelerate" checkbox is enabled only for type ==
     "QEMU". [Comment: Dan tells me that this requirement comes about
     because kqemu is sometimes unreliable, so users need a way to
     disable it].

(12) Local media are required for FV installs. Also we use HAL to detect
     media inserted and removed. [Comment: Can be solved separately by
     abstracting storage].

In the *.glade files:

(13) The list of CPU architectures for qemu is hard-coded in the interface
     description file.

Other areas which are beyond the role of capabilities:

* Remote console
* Serial console
* Saving images

Proposed XML format
-------------------

My thoughts are that capabilities need to return the following information
about the underlying driver/hypervisor:

* Host CPU flags (1,8)

* List of guest architectures supported. For each of these:

  - Model of virtualised CPU (10,13)
    example: x86_64
  - Name of the virtualised machine (if applic.)
    example: pc
  - Virtualised CPU features: PAE, ...
  - Fullvirt flag: can we run an unmodified OS as a guest? (2,9)
    or Paravirt: what sort of paravirt API does a guest need
    (eg. xen, pvops, VMI, ...)?
  - The <domain type='?'> for this guest
    example: kqemu
  - The <domain><os><type machine='?'> for this guest (if applic.)
    example: pc
  - Suggested emulator and loader paths (5)
  - Driver-specific flags which libvirt clients would not be required to
    understand, but could be used to enhance libvirt clients.
    examples: uses kqemu, uses kvm (3,4,11)

(Notes: (a) I have flattened qemu's nested arch/machine list, because
there is not a natural hierarchy. (b) The guest architectures list is a
Cartesian product, although at the moment the worst case (qemu) would only
have about 14 entries. An alternate way to do this is discussed at the
end. (c) The host CPU model is already provided by virNodeGetInfo).
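As a rough sketch of how a client might consume XML in this shape — the
helper is hypothetical, the element names are taken from the list above,
and the sample values come from the Xen example below:

```python
import xml.etree.ElementTree as ET

def pick_guest(capabilities_xml, want_fullvirt, want_model):
    # Choose the first <guest_architecture> matching the user's request,
    # so clients no longer hard-code per-driver knowledge.
    root = ET.fromstring(capabilities_xml)
    for arch in root.iter("guest_architecture"):
        model = arch.findtext("model", "").strip()
        is_fv = arch.find("fullvirt") is not None
        if model == want_model and is_fv == want_fullvirt:
            return {
                "domain_type": arch.findtext("domain_type", "").strip(),
                "emulator": arch.findtext("emulator", "").strip(),
            }
    return None

caps = """<capabilities><guest_architectures>
  <guest_architecture>
    <model> i686 </model> <fullvirt/>
    <domain_type> xen </domain_type>
    <emulator> /usr/lib/xen/bin/qemu-dm </emulator>
  </guest_architecture>
</guest_architectures></capabilities>"""

print(pick_guest(caps, True, "i686"))
# prints {'domain_type': 'xen', 'emulator': '/usr/lib/xen/bin/qemu-dm'}
```

The returned domain_type and emulator would then be slotted straight into
the <domain> XML handed to virDomainCreateLinux.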
Example: Xen
------------

For Xen the primary sources for capabilities are the files
/sys/hypervisor/properties/capabilities and /proc/cpuinfo. A Xen driver
might present the following description of its capabilities:

<capabilities>
  <host>
    <cpu_flags>
      <cpu_flag> vmx </cpu_flag>
      <cpu_flag> pae </cpu_flag>
      <!-- etc -->
    </cpu_flags>
  </host>
  <guest_architectures>
    <guest_architecture>
      <model> x86_64 </model>
      <paravirt> xen </paravirt>
      <domain_type> xen </domain_type>
      <emulator> /usr/lib/xen/bin/qemu-dm </emulator>
    </guest_architecture>
    <guest_architecture>
      <model> i686 </model>
      <pae/>
      <paravirt> xen </paravirt>
      <domain_type> xen </domain_type>
      <emulator> /usr/lib/xen/bin/qemu-dm </emulator>
    </guest_architecture>
    <guest_architecture>
      <model> i686 </model>
      <fullvirt/>
      <domain_type> xen </domain_type>
      <loader> /usr/lib/xen/boot/hvmloader </loader>
      <emulator> /usr/lib/xen/bin/qemu-dm </emulator>
    </guest_architecture>
    <guest_architecture>
      <model> i686 </model>
      <pae/>
      <fullvirt/>
      <domain_type> xen </domain_type>
      <loader> /usr/lib/xen/boot/hvmloader </loader>
      <emulator> /usr/lib/xen/bin/qemu-dm </emulator>
    </guest_architecture>
    <guest_architecture>
      <model> x86_64 </model>
      <fullvirt/>
      <domain_type> xen </domain_type>
      <loader> /usr/lib/xen/boot/hvmloader </loader>
      <emulator> /usr/lib/xen/bin/qemu-dm </emulator>
    </guest_architecture>
  </guest_architectures>
</capabilities>

Example: qemu + kqemu + kvm
---------------------------

Qemu has by far the longest list of supported guest architectures. Out of
the box it supports 10 distinct machine types, and you can add 4 extra
machine types if the kernel can do kqemu and kvm, making 14 in all. Below
I have abbreviated this list for clarity.
<capabilities>
  <host>
    <cpu_flags>
      <cpu_flag> vmx </cpu_flag>
      <cpu_flag> pae </cpu_flag>
      <!-- etc -->
    </cpu_flags>
  </host>
  <guest_architectures>
    <guest_architecture>
      <model> sparc </model>
      <machine> sun4m </machine>
      <fullvirt/>
      <domain_type> qemu </domain_type>
      <machine_type> sun4m </machine_type>
      <emulator> /usr/bin/qemu-system-sparc </emulator>
    </guest_architecture>
    <guest_architecture>
      <model> i686 </model>
      <machine> pc </machine>
      <fullvirt/>
      <domain_type> qemu </domain_type>
      <machine_type> pc </machine_type>
      <emulator> /usr/bin/qemu </emulator>
    </guest_architecture>
    <guest_architecture>
      <model> x86_64 </model>
      <machine> pc </machine>
      <fullvirt/>
      <domain_type> kqemu </domain_type>
      <machine_type> pc </machine_type>
      <emulator> /usr/bin/qemu </emulator>
      <qemu_uses_kqemu/>
    </guest_architecture>
    <guest_architecture>
      <model> x86_64 </model>
      <machine> pc </machine>
      <fullvirt/>
      <domain_type> kvm </domain_type>
      <machine_type> pc </machine_type>
      <emulator> /usr/bin/qemu-kvm </emulator>
      <qemu_uses_kvm/>
    </guest_architecture>
  </guest_architectures>
</capabilities>

Guest architectures: Cartesian product or UI builder?
-----------------------------------------------------

Currently the list of guest architectures is a flat list, worst case 5
entries long for Xen and 14 entries long for qemu. Presenting this in user
interfaces could be challenging. One suggestion is that the user interface
looks like:

  [*] Show only fullvirt
  [*] Show only PC architectures
  [ ] Show 32 bit architectures
  [*] Show 64 bit architectures

  | Shorter list of architectures which match
  | the criteria checked above.
  | ...

Another is that we change the XML description so that it matches the UI.
For instance:

  <guest_architecture>
    <models>
      <model> sparc </model>
      <model> ppc </model>
      ...

Or:

  <pae>
    <can_enable/>
    <can_disable/>
  </pae>

This is relatively easy to do with qemu, but the format of Xen's
/sys/hypervisor/properties/capabilities makes it quite hard.
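The checkbox UI suggested above amounts to filtering the flat list on the
client side. A minimal sketch, assuming the element names from the
examples above; deriving the word size from the model name is a
simplifying assumption:

```python
import xml.etree.ElementTree as ET

def filter_architectures(capabilities_xml, only_fullvirt=True,
                         only_64bit=True):
    # Reduce the flat guest_architecture list according to the UI
    # checkboxes: fullvirt-only and/or 64-bit-only.
    root = ET.fromstring(capabilities_xml)
    out = []
    for g in root.iter("guest_architecture"):
        if only_fullvirt and g.find("fullvirt") is None:
            continue
        model = g.findtext("model", "").strip()
        if only_64bit and model not in ("x86_64", "sparc64", "ppc64"):
            continue
        out.append(model + "/" + g.findtext("domain_type", "").strip())
    return out

caps = """<capabilities><guest_architectures>
  <guest_architecture><model>i686</model><fullvirt/><domain_type>qemu</domain_type></guest_architecture>
  <guest_architecture><model>x86_64</model><fullvirt/><domain_type>kqemu</domain_type></guest_architecture>
  <guest_architecture><model>sparc</model><fullvirt/><domain_type>qemu</domain_type></guest_architecture>
</guest_architectures></capabilities>"""

print(filter_architectures(caps))   # prints ['x86_64/kqemu']
```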
i18n
----

Some proposed features may make translation challenging. For example qemu
supports a whole list of machine types ("pc", "sun4m", etc.) and it would
be nice for libvirt clients to be able to provide some sort of description
for the user. It would not be wise to carry this description in the XML
because it would not be possible to localise it. To avoid all libvirt
clients duplicating and maintaining lists of machine types and
descriptions, it may be worth adding a call to the API along the lines of:

  virConnectGetMachineDescription (const char *machine_name,
                                   const char *lang);

(where lang == NULL would mean to use the current language).

[EOF]

--
Emerging Technologies, Red Hat  http://et.redhat.com/~rjones/
64 Baker Street, London, W1U 7DF  Mobile: +44 7866 314 421
"[Negative numbers] darken the very whole doctrines of the equations and
make dark of the things which are in their nature excessively obvious and
simple" (Francis Maseres FRS, mathematician, 1759)
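A client-side sketch of what such a call might back onto. The lookup
table, helper name, and descriptions here are purely illustrative, not a
real libvirt API; a real implementation would consult per-language message
catalogues:

```python
# Illustrative machine-name -> description table, keyed by language.
_MACHINE_DESCRIPTIONS = {
    ("pc", "en"): "Standard PC",
    ("isapc", "en"): "ISA-only PC",
    ("sun4m", "en"): "Sun4m SPARCstation",
}

def get_machine_description(machine_name, lang=None):
    # lang == None means "use the current language"; default to English
    # here.  Unknown machine types fall back to the raw name.
    lang = lang or "en"
    return _MACHINE_DESCRIPTIONS.get((machine_name, lang), machine_name)

print(get_machine_description("pc"))   # prints Standard PC
```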

On Wed, Mar 14, 2007 at 11:59:57AM +0000, Richard W.M. Jones wrote:
> (1) is_pae_capable: host supports PAE? Uses Linux-specific /proc/cpuinfo
This is also arch specific - only relevant for i386/x86_64.
> (2) is_hvm_capable: host supports HVM & enabled in Xen? Uses Xen-specific
>     /sys/hypervisor/properties/capabilities
The cpuinfo tells you whether the CPU supports particular features.
Whether the hypervisor actually uses these features to provide HVM, or
whether the BIOS even lets you use them, is not easy to figure out. So we
need driver-specific code in this case - for Xen as described; for KVM we
look for the presence of the /dev/kvm device node. As you say, this should
really be hidden away in libvirt rather than in apps.
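The Xen-specific probe described here can be sketched as follows. The
assumption (consistent with the ability-string comments like
"hvm-3.0-x86_32p" later in this thread) is that the capabilities file is a
whitespace-separated list of ability names:

```python
def hvm_enabled(xen_caps_string):
    # /sys/hypervisor/properties/capabilities holds space-separated
    # ability strings; any token starting with "hvm-" means the hypervisor
    # can actually run fully virtualised guests, i.e. VT-x/AMD-V is both
    # present and enabled.
    return any(tok.startswith("hvm-") for tok in xen_caps_string.split())

print(hvm_enabled("xen-3.0-x86_64 hvm-3.0-x86_32p hvm-3.0-x86_64"))  # prints True
print(hvm_enabled("xen-3.0-x86_32p"))                                # prints False
```

On a live host one would read the string from the sysfs file instead of
passing a literal.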
> (3) is_kqemu_capable: [Linux] kernel supports kqemu? Uses Linux-specific
>     /dev/kqemu
> (4) is_kvm_capable: [Linux] kernel supports kvm? Uses Linux-specific
>     /dev/kvm
This is really equivalent to the 'is_hvm_capable' check but for KVM.
> In virtinst/FullVirtGuest.py:
>
> (5) "emulator" and "loader" must be specified for Xen/QEMU and Xen
>     guests respectively. They have architecture- and distro-specific
>     paths which refer to files on the remote machine.
>
> In virtinst/ParaVirtGuest.py:
>
> (6) Uses "xm console" to connect to the console in the Xen case (but
>     then this sort of paravirtualisation implies Xen).
This is on the 'TODO' list to fix. In the 0.3.0 release of libvirt we now
have a 'virsh console' command which works with QEMU / KVM too, so we can
remove this Xen-ism now.
> In virtinst/DistroManager.py:
>
> (7) We download the kernel and boot.iso files, and download and modify
>     the initrd file. [Comment: I'm not sure that capabilities can solve
>     this, but it does need to be fixed properly for the remote case to
>     work].
A tricky one with several possible answers:

- Have APIs for grabbing the kernel/initrd files in libvirt. Pure evil
  really.
- Require use of PXE in the remote case. QEMU/HVM does PXE now, and there
  is work in progress to write a paravirt bootloader which does PXE too.
- Have a library of pre-defined install images, and a libvirt API to
  enumerate the install images and their kernel/initrd paths.

In light of the recent thread about OpenVZ support I'm now leaning towards
the latter, because OpenVZ guest creation has the idea of OS images /
templates as a core requirement. Our current code, which downloads the
kernel/initrd every single time, is kind of wasteful. If we had a library
of install kernels, we could assume that in the remote case the
administrator has prepopulated it with valid options. In the local case,
we could simply repurpose our downloading code as a means to populate the
image library.
> (11) Similarly, the "accelerate" checkbox is enabled only for type ==
>      "QEMU". [Comment: Dan tells me that this requirement comes about
>      because kqemu is sometimes unreliable, so users need a way to
>      disable it].
It is actually KVM which is the bigger trouble - there's a frequent stream
of messages on kvm-devel from people who can install in plain QEMU but not
in KVM. So if we forced everyone to always use KVM we'd have a lot of
unhappy people. This is really just an artifact of KVM being a very young
project, so I'd expect that in a year or so we could quite likely *always*
use KVM if it's available & get rid of the checkbox.
> (12) Local media are required for FV installs. Also we use HAL to detect
>      media inserted and removed. [Comment: Can be solved separately by
>      abstracting storage].
>
> In the *.glade files:
>
> (13) The list of CPU architectures for qemu is hard-coded in the
>      interface description file.
Indeed, that sucks.
> Other areas which are beyond the role of capabilities:
>
> * Remote console
Work in progress. Got a new VNC widget that supports Hextile for efficient remote access, and TLS encryption & X509 auth.
> * Serial console
We need a way to expose the serial console over the network. All existing
programs I'm aware of are only really suitable for exposing console
output, since they are clear-text network sockets. Since the Xen serial
console is a real bi-directional channel, we need good encrypted access,
preferably without having to grant shell access to the host. Maybe we can
do something incredibly evil & layer it into the RFB VNC stream, or maybe
we need a real network daemon.
> * Saving images
Although we pop up a file dialog letting the user choose where to save an
image, in reality they have no choice in the matter: SELinux mandates that
it be under /var/lib/xen/. I'm inclined to remove this flexibility from
the UI and just pick a sensible directory for each HV. Or adjust the
libvirt APIs for save/restore so that they work with a relative filename
as well as the existing fully qualified path.
> Guest architectures: Cartesian product or UI builder?
> -----------------------------------------------------
>
> Currently the list of guest architectures is a flat list, worst case 5
> entries long for Xen and 14 entries long for qemu. Presenting this in
> user interfaces could be challenging.
I think I'd like to avoid doing a plain meta-data driven UI for this because it is really going to suck horribly. The user doesn't care about many of the choices so we need to be intelligent about only presenting the choices which really matter, and making the rest on their behalf. If they want to use the full range of options, then there is virt-install - virt-manager should be a much simpler UI dealing with the '95%' common case & ignoring the hairy 5% that's left.
> Some proposed features may make translation challenging. For example
> qemu supports a whole list of machine types ("pc", "sun4m", etc.) and it
> would be nice for libvirt clients to be able to provide some sort of
> description for the user. It would not be wise to carry this description
> in the XML because it would not be possible to localise it.
I'm in two minds about this - in virt-manager I don't anticipate ever
giving the user the choice of machine types, since this is an advanced
niche case which I doubt many people would use. For virt-install I'd
expect people would just want to use the explicit machine types rather
than a prettified & translated string.
> To avoid all libvirt clients duplicating and maintaining lists of
> machine types and descriptions, it may be worth adding a call to the API
> along the lines of:
>
>   virConnectGetMachineDescription (const char *machine_name,
>                                    const char *lang);
>
> (where lang == NULL would mean to use the current language).
That's one option - or we could use XML's regular i18n support to provide
all the different translations inline.

Regards, Dan.

--
|=- Red Hat, Engineering, Emerging Technologies, Boston. +1 978 392 2496 -=|
|=- Perl modules: http://search.cpan.org/~danberr/                       -=|
|=- Projects: http://freshmeat.net/~danielpb/                            -=|
|=- GnuPG: 7D3B9505  F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505  -=|

Daniel P. Berrange wrote:
> > (4) is_kvm_capable: [Linux] kernel supports kvm? Uses Linux-specific
> > /dev/kvm
>
> This is really equivalent to the 'is_hvm_capable' check but for KVM.
Just an update on this one for all on the list: we went round several
loops with this, but think we've got a way whereby a simple
'test -e /dev/kvm'-style test will not only check if KVM exists, but also
whether all the necessary things have been enabled in the hardware/BIOS.

Rich.
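That style of test is trivially small; a sketch of what virtinst's probe
reduces to under this scheme:

```python
import os

def is_kvm_capable():
    # /dev/kvm only exists once the kvm kernel module has loaded
    # successfully, which in turn requires VT-x/AMD-V to be present and
    # enabled in the BIOS - so a single existence check covers all of it.
    return os.path.exists("/dev/kvm")
```

Under the capabilities proposal even this check would move behind
virConnectGetCapabilities rather than living in client code.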

Richard W.M. Jones wrote:
> Example: Xen
> ------------
>
> <capabilities>
>   <host>
>     <cpu_flags>
>       <cpu_flag> vmx </cpu_flag>
>       <cpu_flag> pae </cpu_flag>
>       <!-- etc -->
>     </cpu_flags>
>   </host>
After some discussion with Daniel Veillard, I've evolved this into:

<capabilities>
  <host>
    <features>
      <vmx/>
      <pae/>
      <!-- etc -->
    </features>
  </host>

(The reason for this is to be consistent with the features listed in
<domain> descriptions).
> <guest_architectures>
>   <guest_architecture>
>     <model> x86_64 </model>
>     <paravirt> xen </paravirt>
>     <domain_type> xen </domain_type>
>     <emulator> /usr/lib/xen/bin/qemu-dm </emulator>
>   </guest_architecture>
Also after discussion with DV:

  <guest>
    <architectures>
      <architecture>
        <!-- <model> etc. as above -->
      </architecture>
    </architectures>
  </guest>
</capabilities>

(The reason for this is to reduce the verbosity of the XML and to allow
other guest-related, non-architecture-related sections in a future
version).

Rich.

On Wed, Mar 14, 2007 at 11:59:57AM +0000, Richard W.M. Jones wrote:
> My thoughts are that capabilities need to return the following
> information about the underlying driver/hypervisor:
>
> * Host CPU flags (1,8)
> * List of guest architectures supported. For each of these:
>   - Model of virtualised CPU (10,13), example: x86_64
>   - Name of the virtualised machine (if applic.), example: pc
>   - Virtualised CPU features: PAE, ...
>   - Fullvirt flag: can we run an unmodified OS as a guest? (2,9)
>     or Paravirt: what sort of paravirt API does a guest need
>     (eg. xen, pvops, VMI, ...)?
>   - The <domain type='?'> for this guest, example: kqemu
>   - The <domain><os><type machine='?'> for this guest (if applic.),
>     example: pc
>   - Suggested emulator and loader paths (5)
>   - Driver-specific flags which libvirt clients would not be required
>     to understand, but could be used to enhance libvirt clients.
>     examples: uses kqemu, uses kvm (3,4,11)
>
> (Notes: (a) I have flattened qemu's nested arch/machine list, because
> there is not a natural hierarchy. (b) The guest architectures list is a
> Cartesian product, although at the moment the worst case (qemu) would
> only have about 14 entries. An alternate way to do this is discussed at
> the end. (c) The host CPU model is already provided by virNodeGetInfo).
Currently this shows a Cartesian product of (arch, ostype, domaintype,
flags) but I think we should reduce it to merely (arch, ostype) and use a
slightly more hierarchical structure. The domaintype & flags only really
add a slight specialization of the basic (arch, ostype) info, so I think
it is worthwhile.

Also rather than having '<paravirt>xen</paravirt>' and '<fullvirt/>' I
think we should just call it 'os_type', since this info is used to
populate the <os><type>..</type></os> field in the domain XML. A second
reason is that KVM is very likely to blur the boundaries between paravirt
& fullvirt: KVM is at its heart a fullvirt system, but it supports various
paravirt extensions - a fullvirt guest OS can detect these paravirt
extensions & make use of them on the fly. So it's better not to express a
hard paravirt/fullvirt split in the XML. So I'm suggesting just
<os_type>xen</os_type> and <os_type>hvm</os_type>.
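A sketch of the mapping Dan describes, with a hypothetical helper; the
attribute names follow the <domain><os><type machine='?'> syntax mentioned
earlier in the thread:

```python
def domain_os_fragment(os_type, arch=None, machine=None):
    # Build the <os><type>...</type></os> fragment of a <domain>
    # description directly from a capabilities <os_type> (plus optional
    # arch / machine attributes).
    attrs = ""
    if arch:
        attrs += ' arch="%s"' % arch
    if machine:
        attrs += ' machine="%s"' % machine
    return "<os><type%s>%s</type></os>" % (attrs, os_type)

print(domain_os_fragment("hvm", arch="x86_64", machine="pc"))
# prints <os><type arch="x86_64" machine="pc">hvm</type></os>
```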
> Example: Xen
> ------------
>
> For Xen the primary sources for capabilities are the files
> /sys/hypervisor/properties/capabilities and /proc/cpuinfo. A Xen driver
> might present the following description of its capabilities:
>
> <capabilities>
>   <host>
>     <cpu_flags>
>       <cpu_flag> vmx </cpu_flag>
>       <cpu_flag> pae </cpu_flag>
>       <!-- etc -->
>     </cpu_flags>
>   </host>
>   <guest_architectures>
>     <guest_architecture>
>       <model> x86_64 </model>
>       <paravirt> xen </paravirt>
>       <domain_type> xen </domain_type>
>       <emulator> /usr/lib/xen/bin/qemu-dm </emulator>
>     </guest_architecture>
>     <guest_architecture>
>       <model> i686 </model>
>       <pae/>
>       <paravirt> xen </paravirt>
>       <domain_type> xen </domain_type>
>       <emulator> /usr/lib/xen/bin/qemu-dm </emulator>
>     </guest_architecture>
>     <guest_architecture>
>       <model> i686 </model>
>       <fullvirt/>
>       <domain_type> xen </domain_type>
>       <loader> /usr/lib/xen/boot/hvmloader </loader>
>       <emulator> /usr/lib/xen/bin/qemu-dm </emulator>
>     </guest_architecture>
>     <guest_architecture>
>       <model> i686 </model>
>       <pae/>
>       <fullvirt/>
>       <domain_type> xen </domain_type>
>       <loader> /usr/lib/xen/boot/hvmloader </loader>
>       <emulator> /usr/lib/xen/bin/qemu-dm </emulator>
>     </guest_architecture>
>     <guest_architecture>
>       <model> x86_64 </model>
>       <fullvirt/>
>       <domain_type> xen </domain_type>
>       <loader> /usr/lib/xen/boot/hvmloader </loader>
>       <emulator> /usr/lib/xen/bin/qemu-dm </emulator>
>     </guest_architecture>
>   </guest_architectures>
> </capabilities>
This would look like:

<capabilities>
  <host>
    <cpu>
      <arch>x86_64</arch>
      <features>
        <vmx/>
        <pae/>
      </features>
    </cpu>
  </host>

  <!-- xen-3.0-x86_64p -->
  <guest>
    <os_type>xen</os_type>
    <arch name="x86_64">
      <wordsize>64</wordsize>
      <domain type="xen"/>
    </arch>
    <features>
      <pae/>
    </features>
  </guest>

  <!-- xen-3.0-x86_32p -->
  <guest>
    <os_type>xen</os_type>
    <arch name="i686">
      <wordsize>32</wordsize>
      <domain type="xen"/>
    </arch>
    <features>
      <pae/>
    </features>
  </guest>

  <!-- hvm-3.0-x86_64p -->
  <guest>
    <os_type>hvm</os_type>
    <arch name="x86_64">
      <machine>pc</machine>
      <machine>isapc</machine>
      <emulator>/usr/lib/xen/qemu-dm</emulator>
      <loader>/usr/lib/xen/hvmloader</loader>
      <domain type="xen"/>
    </arch>
    <features>
      <pae/>
      <nonpae/>
    </features>
  </guest>

  <!-- hvm-3.0-x86_32p -->
  <guest>
    <os_type>hvm</os_type>
    <arch name="i686">
      <machine>pc</machine>
      <machine>isapc</machine>
      <emulator>/usr/lib/xen/qemu-dm</emulator>
      <loader>/usr/lib/xen/hvmloader</loader>
      <domain type="xen"/>
    </arch>
    <features>
      <pae/>
      <nonpae/>
    </features>
  </guest>
</capabilities>

Notice I have an explicit '<nonpae/>' flag, because PAE is really a
tri-state: a domain can support PAE, or non-PAE, or both.
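A hedged sketch of how a client could walk this restructured format, with
a hypothetical helper using only the element names shown above:

```python
import xml.etree.ElementTree as ET

def list_guests(capabilities_xml):
    # Enumerate (os_type, arch name, [domain types]) tuples from the
    # restructured <guest>-based capabilities format.
    guests = []
    root = ET.fromstring(capabilities_xml)
    for g in root.findall("guest"):
        os_type = g.findtext("os_type", "").strip()
        arch = g.find("arch")
        name = arch.get("name") if arch is not None else None
        domains = ([d.get("type") for d in arch.findall("domain")]
                   if arch is not None else [])
        guests.append((os_type, name, domains))
    return guests

caps = """<capabilities><host/>
<guest><os_type>xen</os_type>
  <arch name="x86_64"><wordsize>64</wordsize><domain type="xen"/></arch>
</guest>
<guest><os_type>hvm</os_type>
  <arch name="i686"><domain type="xen"/></arch>
</guest>
</capabilities>"""

print(list_guests(caps))
# prints [('xen', 'x86_64', ['xen']), ('hvm', 'i686', ['xen'])]
```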
> Example: qemu + kqemu + kvm
> ---------------------------
>
> Qemu has by far the longest list of supported guest architectures. Out
> of the box it supports 10 distinct machine types, and you can add 4
> extra machine types if the kernel can do kqemu and kvm, making 14 in
> all. Below I have abbreviated this list for clarity.
>
> <capabilities>
>   <host>
>     <cpu_flags>
>       <cpu_flag> vmx </cpu_flag>
>       <cpu_flag> pae </cpu_flag>
>       <!-- etc -->
>     </cpu_flags>
>   </host>
>   <guest_architectures>
>     <guest_architecture>
>       <model> sparc </model>
>       <machine> sun4m </machine>
>       <fullvirt/>
>       <domain_type> qemu </domain_type>
>       <machine_type> sun4m </machine_type>
>       <emulator> /usr/bin/qemu-system-sparc </emulator>
>     </guest_architecture>
>     <guest_architecture>
>       <model> i686 </model>
>       <machine> pc </machine>
>       <fullvirt/>
>       <domain_type> qemu </domain_type>
>       <machine_type> pc </machine_type>
>       <emulator> /usr/bin/qemu </emulator>
>     </guest_architecture>
>     <guest_architecture>
>       <model> x86_64 </model>
>       <machine> pc </machine>
>       <fullvirt/>
>       <domain_type> kqemu </domain_type>
>       <machine_type> pc </machine_type>
>       <emulator> /usr/bin/qemu </emulator>
>       <qemu_uses_kqemu/>
>     </guest_architecture>
>     <guest_architecture>
>       <model> x86_64 </model>
>       <machine> pc </machine>
>       <fullvirt/>
>       <domain_type> kvm </domain_type>
>       <machine_type> pc </machine_type>
>       <emulator> /usr/bin/qemu-kvm </emulator>
>       <qemu_uses_kvm/>
>     </guest_architecture>
>   </guest_architectures>
> </capabilities>
<capabilities>
  <host>
    <cpu>
      <arch>x86_64</arch>
      <features>
        <vmx/>
        <pae/>
      </features>
    </cpu>
  </host>

  <guest>
    <os_type>hvm</os_type>
    <arch name="x86_64">
      <wordsize>64</wordsize>
      <machine>pc</machine>
      <machine>isapc</machine>
      <emulator>/usr/bin/qemu-system-x86_64</emulator>
      <domain type="qemu"/>
      <domain type="kqemu"/>
      <domain type="kvm">
        <emulator>/usr/bin/qemu-kvm</emulator>
      </domain>
    </arch>
    <features>
      <pae/>
      <nonpae/>
    </features>
  </guest>

  <guest>
    <os_type>hvm</os_type>
    <arch name="i686">
      <wordsize>32</wordsize>
      <machine>pc</machine>
      <machine>isapc</machine>
      <emulator>/usr/bin/qemu</emulator>
      <domain type="qemu"/>
    </arch>
    <features>
      <nonpae/>
    </features>
  </guest>

  <guest>
    <os_type>hvm</os_type>
    <arch name="sparc">
      <wordsize>32</wordsize>
      <machine>sun4m</machine>
      <emulator>/usr/bin/qemu-system-sparc</emulator>
      <domain type="qemu"/>
    </arch>
    <features>
      <pae/>
      <nonpae/>
    </features>
  </guest>

  <guest>
    <os_type>hvm</os_type>
    <arch name="ppc">
      <wordsize>32</wordsize>
      <machine>prep</machine>
      <machine>g3bw</machine>
      <machine>mac99</machine>
      <emulator>/usr/bin/qemu-system-ppc</emulator>
      <domain type="qemu"/>
    </arch>
    <features>
      <pae/>
      <nonpae/>
    </features>
  </guest>
</capabilities>

Notice in this example that the <domain type='kvm'> block inside the
<arch> can override / specialize arch data, e.g. we provide an alternate
<emulator> for KVM. Also notice the multiple <machine> elements - the
first is taken to be the implicit default machine. Alternatively we could
add an explicit default="true" attribute.

So in summary, with Xen this results in N <guest> blocks where N equals
the number of entries in /sys/hypervisor/properties/capabilities, and with
QEMU N == the number of QEMU architectures (ie 7).

Regards, Dan.
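The override rule described above (a <domain> block specialising its
enclosing <arch> data) can be sketched with a hypothetical resolver:

```python
import xml.etree.ElementTree as ET

def emulator_for(arch_elem, domain_type):
    # A <domain type='...'> may carry its own <emulator> which overrides
    # the arch-level default (as with kvm in the example above).
    for d in arch_elem.findall("domain"):
        if d.get("type") == domain_type:
            override = d.findtext("emulator")
            if override is not None:
                return override
            break
    return arch_elem.findtext("emulator")

arch = ET.fromstring("""<arch name="x86_64">
  <emulator>/usr/bin/qemu-system-x86_64</emulator>
  <domain type="qemu"/>
  <domain type="kvm"><emulator>/usr/bin/qemu-kvm</emulator></domain>
</arch>""")

print(emulator_for(arch, "qemu"))  # prints /usr/bin/qemu-system-x86_64
print(emulator_for(arch, "kvm"))   # prints /usr/bin/qemu-kvm
```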

On Thu, Mar 15, 2007 at 02:27:04PM +0000, Daniel P. Berrange wrote:
> <guest>
>   <os_type>hvm</os_type>
>   <arch name="sparc">
>     <wordsize>32</wordsize>
>     <machine>sun4m</machine>
>     <emulator>/usr/bin/qemu-system-sparc</emulator>
>     <domain type="qemu"/>
>   </arch>
>   <features>
>     <pae/>
>     <nonpae/>
>   </features>
> </guest>
One thing I forgot to mention is that upstream QEMU recently added a -cpu
flag to allow specification of different CPU models within a particular
architecture. So I expect in the future we may add a <cpu> field within
<arch>, in a similar way to the multiple <machine> fields.

Regards, Dan.
participants (2):
- Daniel P. Berrange
- Richard W.M. Jones