[libvirt] Proposal PCI/PCIe device placement on PAPR guests
by David Gibson
There was a discussion back in November on the qemu list which spilled
onto the libvirt list about how to add support for PCIe devices to
POWER VMs, specifically 'pseries' machine type PAPR guests.
Here's a more concrete proposal for how to handle part of this in the
future from the libvirt side. Strictly speaking, what I'm suggesting
here isn't intrinsically linked to PCIe: it will make it easier to add
PCIe support sanely, as well as having a number of advantages for
both PCIe and plain-PCI devices on PAPR guests.
Background:
* Currently the pseries machine type only supports vanilla PCI
buses.
* This is a qemu limitation, not something inherent - PAPR guests
running under PowerVM (the IBM hypervisor) can use passthrough
PCIe devices (PowerVM doesn't emulate devices though).
* In fact the way PCI access is para-virtualized in PAPR makes the
usual distinctions between PCI and PCIe largely disappear
* Presentation of PCIe devices to PAPR guests is unusual
* Unlike on x86 and other "bare metal" platforms, root ports are
not made visible to the guest, i.e. all devices (typically)
appear to the guest the way integrated devices do on x86
* In terms of topology all devices will appear in a way similar to
a vanilla PCI bus, even PCIe devices
* However PCIe extended config space is accessible
* This means libvirt's usual placement of PCIe devices is not
suitable for PAPR guests
* PAPR has its own hotplug mechanism
* This is used instead of standard PCIe hotplug
* This mechanism works for both PCIe and vanilla-PCI devices
* This can hotplug/unplug devices even without a root port or P2P
bridge between the device and the root bus
* Multiple independent host bridges are routine on PAPR
* Unlike PC (where all host bridges have multiplexed access to
configuration space) PCI host bridges (PHBs) are truly
independent for PAPR guests (disjoint MMIO regions in system
address space)
* PowerVM typically presents a separate PHB to the guest for each
host slot passed through
The Proposal:
I suggest that libvirt implement a new default algorithm for placing
(i.e. assigning addresses to) both PCI and PCIe devices for (only)
PAPR guests.
The short summary is that by default it should assign each device to a
separate vPHB, creating vPHBs as necessary.
* For passthrough sometimes a group of host devices can't be safely
isolated from each other - this is known as a (host) Partitionable
Endpoint (PE). In this case, if any device in the PE is passed
through to a guest, the whole PE must be passed through to the
same vPHB in the guest. From the guest POV, each vPHB has exactly
one (guest) PE.
* To allow for hotplugged devices, libvirt should also add a number
of additional, empty vPHBs (the PAPR spec allows for hotplug of
PHBs, but this is not yet implemented in qemu). When hotplugging
a new device (or PE) libvirt should locate a vPHB which doesn't
currently contain anything.
* libvirt should only (automatically) add PHBs - never root ports or
other PCI to PCI bridges
In order to handle migration, the vPHBs will need to be represented in
the domain XML, which will also allow the user to override this
topology if they want.
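To make that concrete, here's a rough sketch of what such a topology
could look like in the domain XML. To be clear, the element names and
attributes below are only my assumption of a plausible schema (nothing
like this exists yet for pseries); the point is just that each vPHB
shows up as its own pci controller, each device lands on its own vPHB,
and a couple of empty vPHBs are left over for hotplug:

    <!-- hypothetical sketch only: exact XML representation to be decided -->
    <controller type='pci' index='0' model='pci-root'/>  <!-- default vPHB -->
    <controller type='pci' index='1' model='pci-root'/>  <!-- vPHB for a passed-through PE -->
    <controller type='pci' index='2' model='pci-root'/>  <!-- vPHB for the network device -->
    <controller type='pci' index='3' model='pci-root'/>  <!-- empty spare vPHB for hotplug -->
    <controller type='pci' index='4' model='pci-root'/>  <!-- empty spare vPHB for hotplug -->
    <interface type='network'>
      <source network='default'/>
      <!-- bus='0x02' would place the NIC alone on the vPHB with index 2 -->
      <address type='pci' domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>
    </interface>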
Advantages:
There are still some details I need to figure out w.r.t. handling PCIe
devices (on both the qemu and libvirt sides). However, the fact that
PAPR guests don't typically see PCIe root ports means that the normal
libvirt PCIe allocation scheme won't work. The proposed scheme has
several advantages with or without support for PCIe devices:
* Better performance for 32-bit devices
With multiple devices on a single vPHB they all must share a (fairly
small) 32-bit DMA/IOMMU window. With separate PHBs they each have a
separate window. PAPR guests have an always-on guest visible IOMMU.
* Better EEH handling for passthrough devices
EEH is an IBM hardware-assisted mechanism for isolating and safely
resetting devices experiencing hardware faults so they don't bring
down other devices or the system at large. It's roughly similar to
PCIe AER in concept, but has a different IBM specific interface, and
works on both PCI and PCIe devices.
Currently the kernel interfaces for handling EEH events on passthrough
devices will only work if there is a single (host) IOMMU group in the
vfio container. While lifting that restriction would be nice, it's
quite difficult to do so (it requires keeping state synchronized
between multiple host groups). That also means that an EEH error on
one device could stop another device even when the actual hardware
doesn't require it.
The unit of EEH isolation is a PE (Partitionable Endpoint) and
currently there is only one guest PE per vPHB. Changing this might
also be possible, but is again quite complex and may result in
confusing and/or broken distinctions between groups for EEH isolation
and IOMMU isolation purposes.
Placing separate host groups in separate vPHBs sidesteps these
problems.
* Guest NUMA node assignment of devices
PAPR does not (and can't reasonably) use the pxb device. Instead, to
allocate devices to different guest NUMA nodes, they should be placed
on different vPHBs. Placing them on different PHBs by default allows a
NUMA node to be assigned to each of those PHBs in a straightforward manner.
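If the extra vPHBs are represented as controllers along the lines of
the earlier sketch, tying one to a guest NUMA node could be as simple
as an extra subelement, e.g. (again purely illustrative, the exact
syntax would need to be designed):

    <controller type='pci' index='1' model='pci-root'>
      <!-- hypothetical: all devices on this vPHB belong to guest NUMA node 1 -->
      <node>1</node>
    </controller>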
--
David Gibson | I'll have my music baroque, and my code
david AT gibson.dropbear.id.au | minimalist, thank you. NOT _the_ _other_
| _way_ _around_!
http://www.ozlabs.org/~dgibson
[libvirt] [PATCH v3 0/3] target-i386: Implement query-cpu-model-expansion
by Eduardo Habkost
This series implements query-cpu-model-expansion on target-i386.
Changes v2 -> v3:
-----------------
* Rebased on top of my x86-next branch:
https://github.com/ehabkost/qemu x86-next
* Added new patch that will extend type=full expansion to
return every (writeable) QOM property from the CPU object
Git branch for testing:
https://github.com/ehabkost/qemu-hacks work/x86-query-cpu-expansion
libvirt code to use the new feature already exists, and was
submitted to libvir-list, at:
https://www.mail-archive.com/libvir-list@redhat.com/msg142168.html
Changes v1 -> v2:
-----------------
This version is highly simplified compared to v1. It contains
only an implementation that will return a limited set of
properties. I have a follow-up series that will extend type=full
expansion to return every single QOM property [note: this is now
implemented in v3], but this version will return the same data
for type=static and type=full expansion for simplicity (except
that type=static expansion will use the "base" CPU model as
base).
This means this version also won't include "pmu" and
"host-cache-info" in full expansion, and won't require special
code for those properties.
The unit test code was also removed in this version, to keep the
series simple and easier to review. Most of the patches on the
previous series were changes just to make the test case work. I
will send the test-case-related changes as a follow-up series.
---
Cc: Cornelia Huck <cornelia.huck(a)de.ibm.com>
Cc: Christian Borntraeger <borntraeger(a)de.ibm.com>
Cc: David Hildenbrand <david(a)redhat.com>
Cc: libvir-list(a)redhat.com
Cc: Jiri Denemark <jdenemar(a)redhat.com>
Cc: "Jason J. Herne" <jjherne(a)linux.vnet.ibm.com>
Cc: Markus Armbruster <armbru(a)redhat.com>
Cc: Richard Henderson <rth(a)twiddle.net>
Cc: Igor Mammedov <imammedo(a)redhat.com>
Cc: Eric Blake <eblake(a)redhat.com>
Eduardo Habkost (3):
target-i386: Define static "base" CPU model
target-i386: Implement query-cpu-model-expansion QMP command
i386: Improve query-cpu-model-expansion full mode
target/i386/cpu-qom.h | 2 +
monitor.c | 4 +-
target/i386/cpu.c | 239 +++++++++++++++++++++++++++++++++++++++++++++++++-
3 files changed, 243 insertions(+), 2 deletions(-)
--
2.11.0.259.g40922b1
[libvirt] [PATCH 0/5] virstring and virbuffer improvements and bug fix
by Pavel Hrdina
Pavel Hrdina (5):
util: virstring: introduce virStrcat and VIR_STRCAT
util: use VIR_STRCAT instead of strcat
util: virbuffer: introduce virBufferEscapeN
util: virqemu: introduce virQEMUBuildBufferEscape
qemu: properly escape socket path for graphics
cfg.mk | 16 ++--
src/libvirt_private.syms | 4 +
src/qemu/qemu_command.c | 6 +-
src/storage/storage_backend_logical.c | 6 +-
src/test/test_driver.c | 2 +-
src/util/virbuffer.c | 104 +++++++++++++++++++++
src/util/virbuffer.h | 2 +
src/util/vircgroup.c | 4 +-
src/util/virqemu.c | 17 ++++
src/util/virqemu.h | 1 +
src/util/virstring.c | 70 ++++++++++++++
src/util/virstring.h | 27 ++++++
src/xen/xend_internal.c | 2 +-
.../qemuxml2argvdata/qemuxml2argv-name-escape.args | 5 +-
.../qemuxml2argvdata/qemuxml2argv-name-escape.xml | 7 +-
tests/qemuxml2argvtest.c | 3 +-
tests/virbuftest.c | 41 ++++++++
tests/virstringtest.c | 49 ++++++++++
18 files changed, 345 insertions(+), 21 deletions(-)
--
2.11.1
[libvirt] [PATCH 0/4] Fix build with GCC 7
by Daniel P. Berrange
As always with new GCC major releases, we've tickled some new
warnings. What's nice is that two of them identified genuine
bugs in our code.
I would push this as a build-breaker fix, but I wanted some
visibility on the fourth patch before doing that, as it was
not the usual type of quick fix.
Daniel P. Berrange (4):
Use explicit boolean comparison in OOM check
libxl: fix empty string check for channel path
qemu: add missing break in qemuDomainDeviceCalculatePCIConnectFlags
Add ATTRIBUTE_FALLTHROUGH for switch cases without break
src/conf/domain_conf.c | 7 +++++++
src/conf/network_conf.c | 3 ++-
src/internal.h | 8 ++++++++
src/libxl/libxl_domain.c | 2 +-
src/lxc/lxc_container.c | 2 +-
src/network/bridge_driver.c | 6 ++++++
src/qemu/qemu_domain_address.c | 1 +
src/util/viralloc.c | 2 +-
tools/virsh-edit.c | 2 +-
9 files changed, 28 insertions(+), 5 deletions(-)
--
2.9.3
[libvirt] [PATCH v2 00/33] qemu: Detect host CPU model by asking QEMU on x86_64
by Jiri Denemark
Until now host-model CPU mode tried to enable all CPU features supported
by the host CPU even if QEMU/KVM did not support them. This caused a
number of issues and made host-model quite unreliable. Asking QEMU for
the CPU it can provide on the current host makes host-model much more
robust.
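For context, this is about domains configured with the host-model CPU
mode, i.e.:

    <cpu mode='host-model'/>

With this series, libvirt expands that at domain start time into a
concrete custom-mode CPU definition based on what QEMU itself reports
for the current host, rather than on the raw host CPUID data. The
expansion below is purely illustrative (the actual model and feature
list depend on the host CPU and the QEMU binary):

    <cpu mode='custom' match='exact'>
      <model fallback='forbid'>Haswell-noTSX</model>
      <vendor>Intel</vendor>
      <feature policy='require' name='vme'/>
      <feature policy='require' name='f16c'/>
    </cpu>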
This series fixes the following bugs:
https://bugzilla.redhat.com/show_bug.cgi?id=1018251
https://bugzilla.redhat.com/show_bug.cgi?id=1371617
https://bugzilla.redhat.com/show_bug.cgi?id=1372581
https://bugzilla.redhat.com/show_bug.cgi?id=1404627
https://bugzilla.redhat.com/show_bug.cgi?id=870071
In addition to that, the following bug should be mostly limited to cases
when an unsupported feature is explicitly requested:
https://bugzilla.redhat.com/show_bug.cgi?id=1335534
The series relies on features which are not in QEMU yet, but which
should hopefully be close enough to be pushed in 2.9.0. In the meantime, Eduardo's
work/x86-query-cpu-expansion-full branch can be used to play with them.
Version 2:
- properly set vendor property in converted test data files
- fix cpu-parse.sh to use "x86_64" prefix for the generated files
Jiri Denemark (33):
docs: Drop obsolete statement about CPU modes and migration
docs: Fix since statement in host-model documentation
qemucapstest: Add test data for QEMU 2.9.0
domaincapstest: Add test data for QEMU 2.9.0
qemu: Refactor virQEMUCapsInitHostCPUModel
qemu: Skip virQEMUCapsCPUFilterFeatures on non-x86 CPUs
qemu: Fix CPU model fallback in domain capabilities
docs: Update description of the host-model CPU mode
qemu: Introduce virQEMUCapsFormatHostCPUModelInfo
qemu: Rename hostCPU/feature element in capabilities cache
qemu: Store more types in qemuMonitorCPUModelInfo
qemu: Probe "max" CPU model in TCG
cpu: Introduce virCPUDataNew
cpu_x86: Drop virCPUx86MakeData and use virCPUDataNew
cpu_x86: Make virCPUx86DataClear static
cpu: Rework cpuDataFree
cpu_x86: Make virCPUx86DataAddCPUID work with virCPUDataPtr
cpu_x86: Introduce virCPUx86DataSetSignature
cpu_x86: Introduce virCPUx86DataSetVendor
cpu_x86: Introduce virCPUx86DataAddFeature
cpu: Use virCPUData.arch in cpuDecode
qemu: Get host CPU model from QEMU on x86_64
qemu: Use enum for CPU model expansion type
qemu: Use full CPU model expansion on x86
qemu: Make virQEMUCapsInitCPUModel testable
cputest: Rename x86 data files
cputest: Use virArch enum rather then strings
cputest: Switch host CPU data scripts to model expansion
cputest: Convert all json data files to query-cpu-model-expansion
cputest: Test virQEMUCapsInitCPUModel
cputest: Drop obsolete CPU test data files
cputest: Drop .new suffix from CPU test data files
news: Detect host CPU model by asking QEMU on x86_64
docs/formatdomain.html.in | 38 +-
docs/news.xml | 11 +
src/bhyve/bhyve_capabilities.c | 2 +-
src/cpu/cpu.c | 42 +-
src/cpu/cpu.h | 7 +-
src/cpu/cpu_arm.c | 7 -
src/cpu/cpu_ppc64.c | 6 +-
src/cpu/cpu_s390.c | 7 -
src/cpu/cpu_x86.c | 280 +-
src/cpu/cpu_x86.h | 13 +-
src/libvirt_private.syms | 8 +-
src/libxl/libxl_capabilities.c | 18 +-
src/qemu/qemu_capabilities.c | 452 +-
src/qemu/qemu_capabilities.h | 3 +-
src/qemu/qemu_capspriv.h | 13 +-
src/qemu/qemu_command.c | 2 +-
src/qemu/qemu_monitor.c | 26 +-
src/qemu/qemu_monitor.h | 31 +-
src/qemu/qemu_monitor_json.c | 109 +-
src/qemu/qemu_monitor_json.h | 4 +-
src/qemu/qemu_parse_command.c | 2 +-
src/qemu/qemu_process.c | 7 +-
src/vmware/vmware_conf.c | 2 +-
src/vz/vz_driver.c | 2 +-
tests/cputest.c | 324 +-
tests/cputestdata/cpu-convert.py | 249 +
tests/cputestdata/cpu-gather.sh | 39 +-
tests/cputestdata/cpu-parse.sh | 5 +-
tests/cputestdata/x86-cpuid-A10-5800K.json | 77 -
tests/cputestdata/x86-cpuid-Core-i5-2500.json | 88 -
tests/cputestdata/x86-cpuid-Core-i5-2540M.json | 82 -
tests/cputestdata/x86-cpuid-Core-i5-4670T.json | 77 -
tests/cputestdata/x86-cpuid-Core-i5-6600.json | 82 -
tests/cputestdata/x86-cpuid-Core-i7-2600.json | 77 -
tests/cputestdata/x86-cpuid-Core-i7-3740QM.json | 77 -
tests/cputestdata/x86-cpuid-Core-i7-3770.json | 77 -
tests/cputestdata/x86-cpuid-Core-i7-4600U.json | 82 -
tests/cputestdata/x86-cpuid-Core-i7-5600U-json.xml | 12 -
tests/cputestdata/x86-cpuid-Core-i7-5600U.json | 88 -
tests/cputestdata/x86-cpuid-Core2-E6850.json | 77 -
tests/cputestdata/x86-cpuid-Opteron-2350.json | 71 -
tests/cputestdata/x86-cpuid-Opteron-6234.json | 88 -
tests/cputestdata/x86-cpuid-Phenom-B95.json | 77 -
tests/cputestdata/x86-cpuid-Xeon-E3-1245.json | 88 -
tests/cputestdata/x86-cpuid-Xeon-E5-2630.json | 77 -
tests/cputestdata/x86-cpuid-Xeon-E5-2650.json | 71 -
tests/cputestdata/x86-cpuid-Xeon-E7-4820.json | 77 -
tests/cputestdata/x86-cpuid-Xeon-W3520.json | 77 -
...ack.xml => x86_64-Haswell-noTSX-nofallback.xml} | 0
...-Haswell-noTSX.xml => x86_64-Haswell-noTSX.xml} | 0
.../{x86-Haswell.xml => x86_64-Haswell.xml} | 0
...e-1-result.xml => x86_64-baseline-1-result.xml} | 0
.../{x86-baseline-1.xml => x86_64-baseline-1.xml} | 0
...e-2-result.xml => x86_64-baseline-2-result.xml} | 0
.../{x86-baseline-2.xml => x86_64-baseline-2.xml} | 0
...expanded.xml => x86_64-baseline-3-expanded.xml} | 0
...e-3-result.xml => x86_64-baseline-3-result.xml} | 0
.../{x86-baseline-3.xml => x86_64-baseline-3.xml} | 0
...expanded.xml => x86_64-baseline-4-expanded.xml} | 0
...e-4-result.xml => x86_64-baseline-4-result.xml} | 0
.../{x86-baseline-4.xml => x86_64-baseline-4.xml} | 0
...expanded.xml => x86_64-baseline-5-expanded.xml} | 0
...e-5-result.xml => x86_64-baseline-5-result.xml} | 0
.../{x86-baseline-5.xml => x86_64-baseline-5.xml} | 0
...atable.xml => x86_64-baseline-6-migratable.xml} | 0
...e-6-result.xml => x86_64-baseline-6-result.xml} | 0
.../{x86-baseline-6.xml => x86_64-baseline-6.xml} | 0
...e-7-result.xml => x86_64-baseline-7-result.xml} | 0
.../{x86-baseline-7.xml => x86_64-baseline-7.xml} | 0
...e-8-result.xml => x86_64-baseline-8-result.xml} | 0
.../{x86-baseline-8.xml => x86_64-baseline-8.xml} | 0
...ml => x86_64-baseline-incompatible-vendors.xml} | 0
...lt.xml => x86_64-baseline-no-vendor-result.xml} | 0
...no-vendor.xml => x86_64-baseline-no-vendor.xml} | 0
...xml => x86_64-baseline-some-vendors-result.xml} | 0
...endors.xml => x86_64-baseline-some-vendors.xml} | 0
...-bogus-feature.xml => x86_64-bogus-feature.xml} | 0
...{x86-bogus-model.xml => x86_64-bogus-model.xml} | 0
...86-bogus-vendor.xml => x86_64-bogus-vendor.xml} | 0
...-guest.xml => x86_64-cpuid-A10-5800K-guest.xml} | 0
...0K-host.xml => x86_64-cpuid-A10-5800K-host.xml} | 0
...0K-json.xml => x86_64-cpuid-A10-5800K-json.xml} | 1 +
tests/cputestdata/x86_64-cpuid-A10-5800K.json | 203 +
...id-A10-5800K.xml => x86_64-cpuid-A10-5800K.xml} | 0
...-guest.xml => x86_64-cpuid-Atom-D510-guest.xml} | 0
...10-host.xml => x86_64-cpuid-Atom-D510-host.xml} | 0
...id-Atom-D510.xml => x86_64-cpuid-Atom-D510.xml} | 0
...-guest.xml => x86_64-cpuid-Atom-N450-guest.xml} | 0
...50-host.xml => x86_64-cpuid-Atom-N450-host.xml} | 0
...id-Atom-N450.xml => x86_64-cpuid-Atom-N450.xml} | 0
...est.xml => x86_64-cpuid-Core-i5-2500-guest.xml} | 0
...host.xml => x86_64-cpuid-Core-i5-2500-host.xml} | 0
...json.xml => x86_64-cpuid-Core-i5-2500-json.xml} | 1 +
tests/cputestdata/x86_64-cpuid-Core-i5-2500.json | 203 +
...e-i5-2500.xml => x86_64-cpuid-Core-i5-2500.xml} | 0
...st.xml => x86_64-cpuid-Core-i5-2540M-guest.xml} | 0
...ost.xml => x86_64-cpuid-Core-i5-2540M-host.xml} | 0
...son.xml => x86_64-cpuid-Core-i5-2540M-json.xml} | 1 +
tests/cputestdata/x86_64-cpuid-Core-i5-2540M.json | 203 +
...i5-2540M.xml => x86_64-cpuid-Core-i5-2540M.xml} | 0
...st.xml => x86_64-cpuid-Core-i5-4670T-guest.xml} | 0
...ost.xml => x86_64-cpuid-Core-i5-4670T-host.xml} | 0
...son.xml => x86_64-cpuid-Core-i5-4670T-json.xml} | 1 +
tests/cputestdata/x86_64-cpuid-Core-i5-4670T.json | 203 +
...i5-4670T.xml => x86_64-cpuid-Core-i5-4670T.xml} | 0
...est.xml => x86_64-cpuid-Core-i5-6600-guest.xml} | 0
...host.xml => x86_64-cpuid-Core-i5-6600-host.xml} | 0
...json.xml => x86_64-cpuid-Core-i5-6600-json.xml} | 1 +
tests/cputestdata/x86_64-cpuid-Core-i5-6600.json | 203 +
...e-i5-6600.xml => x86_64-cpuid-Core-i5-6600.xml} | 0
...est.xml => x86_64-cpuid-Core-i7-2600-guest.xml} | 0
...host.xml => x86_64-cpuid-Core-i7-2600-host.xml} | 0
...json.xml => x86_64-cpuid-Core-i7-2600-json.xml} | 1 +
tests/cputestdata/x86_64-cpuid-Core-i7-2600.json | 203 +
...e-i7-2600.xml => x86_64-cpuid-Core-i7-2600.xml} | 0
...st.xml => x86_64-cpuid-Core-i7-3520M-guest.xml} | 0
...ost.xml => x86_64-cpuid-Core-i7-3520M-host.xml} | 0
...i7-3520M.xml => x86_64-cpuid-Core-i7-3520M.xml} | 0
...t.xml => x86_64-cpuid-Core-i7-3740QM-guest.xml} | 0
...st.xml => x86_64-cpuid-Core-i7-3740QM-host.xml} | 0
...on.xml => x86_64-cpuid-Core-i7-3740QM-json.xml} | 1 +
tests/cputestdata/x86_64-cpuid-Core-i7-3740QM.json | 203 +
...-3740QM.xml => x86_64-cpuid-Core-i7-3740QM.xml} | 0
...est.xml => x86_64-cpuid-Core-i7-3770-guest.xml} | 0
...host.xml => x86_64-cpuid-Core-i7-3770-host.xml} | 0
...json.xml => x86_64-cpuid-Core-i7-3770-json.xml} | 1 +
tests/cputestdata/x86_64-cpuid-Core-i7-3770.json | 203 +
...e-i7-3770.xml => x86_64-cpuid-Core-i7-3770.xml} | 0
...st.xml => x86_64-cpuid-Core-i7-4600U-guest.xml} | 0
...ost.xml => x86_64-cpuid-Core-i7-4600U-host.xml} | 0
...son.xml => x86_64-cpuid-Core-i7-4600U-json.xml} | 1 +
tests/cputestdata/x86_64-cpuid-Core-i7-4600U.json | 203 +
...i7-4600U.xml => x86_64-cpuid-Core-i7-4600U.xml} | 0
...st.xml => x86_64-cpuid-Core-i7-5600U-guest.xml} | 0
...ost.xml => x86_64-cpuid-Core-i7-5600U-host.xml} | 0
.../x86_64-cpuid-Core-i7-5600U-json.xml | 16 +
tests/cputestdata/x86_64-cpuid-Core-i7-5600U.json | 203 +
...i7-5600U.xml => x86_64-cpuid-Core-i7-5600U.xml} | 0
...uest.xml => x86_64-cpuid-Core2-E6850-guest.xml} | 0
...-host.xml => x86_64-cpuid-Core2-E6850-host.xml} | 0
...-json.xml => x86_64-cpuid-Core2-E6850-json.xml} | 5 +-
tests/cputestdata/x86_64-cpuid-Core2-E6850.json | 203 +
...ore2-E6850.xml => x86_64-cpuid-Core2-E6850.xml} | 0
...uest.xml => x86_64-cpuid-Core2-Q9500-guest.xml} | 0
...-host.xml => x86_64-cpuid-Core2-Q9500-host.xml} | 0
...ore2-Q9500.xml => x86_64-cpuid-Core2-Q9500.xml} | 0
...50-guest.xml => x86_64-cpuid-FX-8150-guest.xml} | 0
...8150-host.xml => x86_64-cpuid-FX-8150-host.xml} | 0
...-cpuid-FX-8150.xml => x86_64-cpuid-FX-8150.xml} | 0
...est.xml => x86_64-cpuid-Opteron-1352-guest.xml} | 0
...host.xml => x86_64-cpuid-Opteron-1352-host.xml} | 0
...eron-1352.xml => x86_64-cpuid-Opteron-1352.xml} | 0
...est.xml => x86_64-cpuid-Opteron-2350-guest.xml} | 0
...host.xml => x86_64-cpuid-Opteron-2350-host.xml} | 0
...json.xml => x86_64-cpuid-Opteron-2350-json.xml} | 1 +
tests/cputestdata/x86_64-cpuid-Opteron-2350.json | 203 +
...eron-2350.xml => x86_64-cpuid-Opteron-2350.xml} | 0
...est.xml => x86_64-cpuid-Opteron-6234-guest.xml} | 0
...host.xml => x86_64-cpuid-Opteron-6234-host.xml} | 0
...json.xml => x86_64-cpuid-Opteron-6234-json.xml} | 1 +
tests/cputestdata/x86_64-cpuid-Opteron-6234.json | 203 +
...eron-6234.xml => x86_64-cpuid-Opteron-6234.xml} | 0
...est.xml => x86_64-cpuid-Opteron-6282-guest.xml} | 0
...host.xml => x86_64-cpuid-Opteron-6282-host.xml} | 0
...eron-6282.xml => x86_64-cpuid-Opteron-6282.xml} | 0
...st.xml => x86_64-cpuid-Pentium-P6100-guest.xml} | 0
...ost.xml => x86_64-cpuid-Pentium-P6100-host.xml} | 0
...um-P6100.xml => x86_64-cpuid-Pentium-P6100.xml} | 0
...guest.xml => x86_64-cpuid-Phenom-B95-guest.xml} | 0
...5-host.xml => x86_64-cpuid-Phenom-B95-host.xml} | 0
...5-json.xml => x86_64-cpuid-Phenom-B95-json.xml} | 1 +
tests/cputestdata/x86_64-cpuid-Phenom-B95.json | 203 +
...-Phenom-B95.xml => x86_64-cpuid-Phenom-B95.xml} | 0
...-guest.xml => x86_64-cpuid-Xeon-5110-guest.xml} | 0
...10-host.xml => x86_64-cpuid-Xeon-5110-host.xml} | 0
...id-Xeon-5110.xml => x86_64-cpuid-Xeon-5110.xml} | 0
...est.xml => x86_64-cpuid-Xeon-E3-1245-guest.xml} | 0
...host.xml => x86_64-cpuid-Xeon-E3-1245-host.xml} | 0
...json.xml => x86_64-cpuid-Xeon-E3-1245-json.xml} | 1 +
tests/cputestdata/x86_64-cpuid-Xeon-E3-1245.json | 203 +
...n-E3-1245.xml => x86_64-cpuid-Xeon-E3-1245.xml} | 0
...est.xml => x86_64-cpuid-Xeon-E5-2630-guest.xml} | 0
...host.xml => x86_64-cpuid-Xeon-E5-2630-host.xml} | 0
...json.xml => x86_64-cpuid-Xeon-E5-2630-json.xml} | 1 +
tests/cputestdata/x86_64-cpuid-Xeon-E5-2630.json | 203 +
...n-E5-2630.xml => x86_64-cpuid-Xeon-E5-2630.xml} | 0
...est.xml => x86_64-cpuid-Xeon-E5-2650-guest.xml} | 0
...host.xml => x86_64-cpuid-Xeon-E5-2650-host.xml} | 0
...json.xml => x86_64-cpuid-Xeon-E5-2650-json.xml} | 1 +
tests/cputestdata/x86_64-cpuid-Xeon-E5-2650.json | 203 +
...n-E5-2650.xml => x86_64-cpuid-Xeon-E5-2650.xml} | 0
...est.xml => x86_64-cpuid-Xeon-E7-4820-guest.xml} | 0
...host.xml => x86_64-cpuid-Xeon-E7-4820-host.xml} | 0
...json.xml => x86_64-cpuid-Xeon-E7-4820-json.xml} | 1 +
tests/cputestdata/x86_64-cpuid-Xeon-E7-4820.json | 203 +
...n-E7-4820.xml => x86_64-cpuid-Xeon-E7-4820.xml} | 0
...guest.xml => x86_64-cpuid-Xeon-W3520-guest.xml} | 0
...0-host.xml => x86_64-cpuid-Xeon-W3520-host.xml} | 0
...0-json.xml => x86_64-cpuid-Xeon-W3520-json.xml} | 1 +
tests/cputestdata/x86_64-cpuid-Xeon-W3520.json | 203 +
...-Xeon-W3520.xml => x86_64-cpuid-Xeon-W3520.xml} | 0
...guest.xml => x86_64-cpuid-Xeon-X5460-guest.xml} | 0
...0-host.xml => x86_64-cpuid-Xeon-X5460-host.xml} | 0
...-Xeon-X5460.xml => x86_64-cpuid-Xeon-X5460.xml} | 0
...le-extra.xml => x86_64-exact-disable-extra.xml} | 0
...-exact-disable.xml => x86_64-exact-disable.xml} | 0
...xact-disable2.xml => x86_64-exact-disable2.xml} | 0
...bid-extra.xml => x86_64-exact-forbid-extra.xml} | 0
...86-exact-forbid.xml => x86_64-exact-forbid.xml} | 0
...-Haswell.xml => x86_64-exact-force-Haswell.xml} | 0
...{x86-exact-force.xml => x86_64-exact-force.xml} | 0
...re-extra.xml => x86_64-exact-require-extra.xml} | 0
...-exact-require.xml => x86_64-exact-require.xml} | 0
.../{x86-exact.xml => x86_64-exact.xml} | 0
...-nofallback.xml => x86_64-guest-nofallback.xml} | 0
.../{x86-guest.xml => x86_64-guest.xml} | 0
...t.xml => x86_64-host+guest,model486-result.xml} | 0
...ult.xml => x86_64-host+guest,models-result.xml} | 0
...est-result.xml => x86_64-host+guest-result.xml} | 0
.../{x86-host+guest.xml => x86_64-host+guest.xml} | 0
... x86_64-host+host+host-model,models-result.xml} | 0
...k.xml => x86_64-host+host-model-nofallback.xml} | 0
...t+host-model.xml => x86_64-host+host-model.xml} | 0
...l => x86_64-host+host-passthrough-features.xml} | 0
...hrough.xml => x86_64-host+host-passthrough.xml} | 0
.../{x86-host+min.xml => x86_64-host+min.xml} | 0
...ult.xml => x86_64-host+penryn-force-result.xml} | 0
...-host+pentium3.xml => x86_64-host+pentium3.xml} | 0
...l => x86_64-host+strict-force-extra-result.xml} | 0
...-host-Haswell-noTSX+Haswell,haswell-result.xml} | 0
...Haswell-noTSX+Haswell-noTSX,haswell-result.xml} | 0
...64-host-Haswell-noTSX+Haswell-noTSX-result.xml} | 0
...ell-noTSX.xml => x86_64-host-Haswell-noTSX.xml} | 0
...SandyBridge.xml => x86_64-host-SandyBridge.xml} | 0
...-host-amd-fake.xml => x86_64-host-amd-fake.xml} | 0
.../{x86-host-amd.xml => x86_64-host-amd.xml} | 0
....xml => x86_64-host-better+pentium3-result.xml} | 0
...{x86-host-better.xml => x86_64-host-better.xml} | 0
...incomp-arch.xml => x86_64-host-incomp-arch.xml} | 0
...model.xml => x86_64-host-invtsc+host-model.xml} | 0
...{x86-host-invtsc.xml => x86_64-host-invtsc.xml} | 0
...llback.xml => x86_64-host-model-nofallback.xml} | 0
.../{x86-host-model.xml => x86_64-host-model.xml} | 0
...ost-no-vendor.xml => x86_64-host-no-vendor.xml} | 0
...es.xml => x86_64-host-passthrough-features.xml} | 0
...passthrough.xml => x86_64-host-passthrough.xml} | 0
...sult.xml => x86_64-host-worse+guest-result.xml} | 0
.../{x86-host-worse.xml => x86_64-host-worse.xml} | 0
.../cputestdata/{x86-host.xml => x86_64-host.xml} | 0
tests/cputestdata/{x86-min.xml => x86_64-min.xml} | 0
...86-penryn-force.xml => x86_64-penryn-force.xml} | 0
...86-pentium3-amd.xml => x86_64-pentium3-amd.xml} | 0
.../{x86-pentium3.xml => x86_64-pentium3.xml} | 0
...trict-disable.xml => x86_64-strict-disable.xml} | 0
...rce-extra.xml => x86_64-strict-force-extra.xml} | 0
...{x86-strict-full.xml => x86_64-strict-full.xml} | 0
.../{x86-strict.xml => x86_64-strict.xml} | 0
tests/domaincapsschemadata/qemu_2.8.0.s390x.xml | 2 +-
.../domaincapsschemadata/qemu_2.9.0-tcg.x86_64.xml | 145 +
tests/domaincapsschemadata/qemu_2.9.0.x86_64.xml | 124 +
tests/domaincapstest.c | 8 +
.../qemucapabilitiesdata/caps_2.8.0.s390x.replies | 8 +
tests/qemucapabilitiesdata/caps_2.8.0.s390x.xml | 32 +-
.../qemucapabilitiesdata/caps_2.9.0.x86_64.replies | 15365 +++++++++++++++++++
tests/qemucapabilitiesdata/caps_2.9.0.x86_64.xml | 763 +
tests/qemucapabilitiestest.c | 1 +
tests/qemumonitorjsontest.c | 4 +-
tests/qemuxml2argvtest.c | 3 +-
268 files changed, 21519 insertions(+), 2062 deletions(-)
create mode 100755 tests/cputestdata/cpu-convert.py
delete mode 100644 tests/cputestdata/x86-cpuid-A10-5800K.json
delete mode 100644 tests/cputestdata/x86-cpuid-Core-i5-2500.json
delete mode 100644 tests/cputestdata/x86-cpuid-Core-i5-2540M.json
delete mode 100644 tests/cputestdata/x86-cpuid-Core-i5-4670T.json
delete mode 100644 tests/cputestdata/x86-cpuid-Core-i5-6600.json
delete mode 100644 tests/cputestdata/x86-cpuid-Core-i7-2600.json
delete mode 100644 tests/cputestdata/x86-cpuid-Core-i7-3740QM.json
delete mode 100644 tests/cputestdata/x86-cpuid-Core-i7-3770.json
delete mode 100644 tests/cputestdata/x86-cpuid-Core-i7-4600U.json
delete mode 100644 tests/cputestdata/x86-cpuid-Core-i7-5600U-json.xml
delete mode 100644 tests/cputestdata/x86-cpuid-Core-i7-5600U.json
delete mode 100644 tests/cputestdata/x86-cpuid-Core2-E6850.json
delete mode 100644 tests/cputestdata/x86-cpuid-Opteron-2350.json
delete mode 100644 tests/cputestdata/x86-cpuid-Opteron-6234.json
delete mode 100644 tests/cputestdata/x86-cpuid-Phenom-B95.json
delete mode 100644 tests/cputestdata/x86-cpuid-Xeon-E3-1245.json
delete mode 100644 tests/cputestdata/x86-cpuid-Xeon-E5-2630.json
delete mode 100644 tests/cputestdata/x86-cpuid-Xeon-E5-2650.json
delete mode 100644 tests/cputestdata/x86-cpuid-Xeon-E7-4820.json
delete mode 100644 tests/cputestdata/x86-cpuid-Xeon-W3520.json
rename tests/cputestdata/{x86-Haswell-noTSX-nofallback.xml => x86_64-Haswell-noTSX-nofallback.xml} (100%)
rename tests/cputestdata/{x86-Haswell-noTSX.xml => x86_64-Haswell-noTSX.xml} (100%)
rename tests/cputestdata/{x86-Haswell.xml => x86_64-Haswell.xml} (100%)
rename tests/cputestdata/{x86-baseline-1-result.xml => x86_64-baseline-1-result.xml} (100%)
rename tests/cputestdata/{x86-baseline-1.xml => x86_64-baseline-1.xml} (100%)
rename tests/cputestdata/{x86-baseline-2-result.xml => x86_64-baseline-2-result.xml} (100%)
rename tests/cputestdata/{x86-baseline-2.xml => x86_64-baseline-2.xml} (100%)
rename tests/cputestdata/{x86-baseline-3-expanded.xml => x86_64-baseline-3-expanded.xml} (100%)
rename tests/cputestdata/{x86-baseline-3-result.xml => x86_64-baseline-3-result.xml} (100%)
rename tests/cputestdata/{x86-baseline-3.xml => x86_64-baseline-3.xml} (100%)
rename tests/cputestdata/{x86-baseline-4-expanded.xml => x86_64-baseline-4-expanded.xml} (100%)
rename tests/cputestdata/{x86-baseline-4-result.xml => x86_64-baseline-4-result.xml} (100%)
rename tests/cputestdata/{x86-baseline-4.xml => x86_64-baseline-4.xml} (100%)
rename tests/cputestdata/{x86-baseline-5-expanded.xml => x86_64-baseline-5-expanded.xml} (100%)
rename tests/cputestdata/{x86-baseline-5-result.xml => x86_64-baseline-5-result.xml} (100%)
rename tests/cputestdata/{x86-baseline-5.xml => x86_64-baseline-5.xml} (100%)
rename tests/cputestdata/{x86-baseline-6-migratable.xml => x86_64-baseline-6-migratable.xml} (100%)
rename tests/cputestdata/{x86-baseline-6-result.xml => x86_64-baseline-6-result.xml} (100%)
rename tests/cputestdata/{x86-baseline-6.xml => x86_64-baseline-6.xml} (100%)
rename tests/cputestdata/{x86-baseline-7-result.xml => x86_64-baseline-7-result.xml} (100%)
rename tests/cputestdata/{x86-baseline-7.xml => x86_64-baseline-7.xml} (100%)
rename tests/cputestdata/{x86-baseline-8-result.xml => x86_64-baseline-8-result.xml} (100%)
rename tests/cputestdata/{x86-baseline-8.xml => x86_64-baseline-8.xml} (100%)
rename tests/cputestdata/{x86-baseline-incompatible-vendors.xml => x86_64-baseline-incompatible-vendors.xml} (100%)
rename tests/cputestdata/{x86-baseline-no-vendor-result.xml => x86_64-baseline-no-vendor-result.xml} (100%)
rename tests/cputestdata/{x86-baseline-no-vendor.xml => x86_64-baseline-no-vendor.xml} (100%)
rename tests/cputestdata/{x86-baseline-some-vendors-result.xml => x86_64-baseline-some-vendors-result.xml} (100%)
rename tests/cputestdata/{x86-baseline-some-vendors.xml => x86_64-baseline-some-vendors.xml} (100%)
rename tests/cputestdata/{x86-bogus-feature.xml => x86_64-bogus-feature.xml} (100%)
rename tests/cputestdata/{x86-bogus-model.xml => x86_64-bogus-model.xml} (100%)
rename tests/cputestdata/{x86-bogus-vendor.xml => x86_64-bogus-vendor.xml} (100%)
rename tests/cputestdata/{x86-cpuid-A10-5800K-guest.xml => x86_64-cpuid-A10-5800K-guest.xml} (100%)
rename tests/cputestdata/{x86-cpuid-A10-5800K-host.xml => x86_64-cpuid-A10-5800K-host.xml} (100%)
rename tests/cputestdata/{x86-cpuid-A10-5800K-json.xml => x86_64-cpuid-A10-5800K-json.xml} (96%)
create mode 100644 tests/cputestdata/x86_64-cpuid-A10-5800K.json
rename tests/cputestdata/{x86-cpuid-A10-5800K.xml => x86_64-cpuid-A10-5800K.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Atom-D510-guest.xml => x86_64-cpuid-Atom-D510-guest.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Atom-D510-host.xml => x86_64-cpuid-Atom-D510-host.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Atom-D510.xml => x86_64-cpuid-Atom-D510.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Atom-N450-guest.xml => x86_64-cpuid-Atom-N450-guest.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Atom-N450-host.xml => x86_64-cpuid-Atom-N450-host.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Atom-N450.xml => x86_64-cpuid-Atom-N450.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Core-i5-2500-guest.xml => x86_64-cpuid-Core-i5-2500-guest.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Core-i5-2500-host.xml => x86_64-cpuid-Core-i5-2500-host.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Core-i5-2540M-json.xml => x86_64-cpuid-Core-i5-2500-json.xml} (94%)
create mode 100644 tests/cputestdata/x86_64-cpuid-Core-i5-2500.json
rename tests/cputestdata/{x86-cpuid-Core-i5-2500.xml => x86_64-cpuid-Core-i5-2500.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Core-i5-2540M-guest.xml => x86_64-cpuid-Core-i5-2540M-guest.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Core-i5-2540M-host.xml => x86_64-cpuid-Core-i5-2540M-host.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Core-i5-2500-json.xml => x86_64-cpuid-Core-i5-2540M-json.xml} (94%)
create mode 100644 tests/cputestdata/x86_64-cpuid-Core-i5-2540M.json
rename tests/cputestdata/{x86-cpuid-Core-i5-2540M.xml => x86_64-cpuid-Core-i5-2540M.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Core-i5-4670T-guest.xml => x86_64-cpuid-Core-i5-4670T-guest.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Core-i5-4670T-host.xml => x86_64-cpuid-Core-i5-4670T-host.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Core-i5-4670T-json.xml => x86_64-cpuid-Core-i5-4670T-json.xml} (95%)
create mode 100644 tests/cputestdata/x86_64-cpuid-Core-i5-4670T.json
rename tests/cputestdata/{x86-cpuid-Core-i5-4670T.xml => x86_64-cpuid-Core-i5-4670T.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Core-i5-6600-guest.xml => x86_64-cpuid-Core-i5-6600-guest.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Core-i5-6600-host.xml => x86_64-cpuid-Core-i5-6600-host.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Core-i5-6600-json.xml => x86_64-cpuid-Core-i5-6600-json.xml} (93%)
create mode 100644 tests/cputestdata/x86_64-cpuid-Core-i5-6600.json
rename tests/cputestdata/{x86-cpuid-Core-i5-6600.xml => x86_64-cpuid-Core-i5-6600.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Core-i7-2600-guest.xml => x86_64-cpuid-Core-i7-2600-guest.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Core-i7-2600-host.xml => x86_64-cpuid-Core-i7-2600-host.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Core-i7-2600-json.xml => x86_64-cpuid-Core-i7-2600-json.xml} (93%)
create mode 100644 tests/cputestdata/x86_64-cpuid-Core-i7-2600.json
rename tests/cputestdata/{x86-cpuid-Core-i7-2600.xml => x86_64-cpuid-Core-i7-2600.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Core-i7-3520M-guest.xml => x86_64-cpuid-Core-i7-3520M-guest.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Core-i7-3520M-host.xml => x86_64-cpuid-Core-i7-3520M-host.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Core-i7-3520M.xml => x86_64-cpuid-Core-i7-3520M.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Core-i7-3740QM-guest.xml => x86_64-cpuid-Core-i7-3740QM-guest.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Core-i7-3740QM-host.xml => x86_64-cpuid-Core-i7-3740QM-host.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Core-i7-3740QM-json.xml => x86_64-cpuid-Core-i7-3740QM-json.xml} (93%)
create mode 100644 tests/cputestdata/x86_64-cpuid-Core-i7-3740QM.json
rename tests/cputestdata/{x86-cpuid-Core-i7-3740QM.xml => x86_64-cpuid-Core-i7-3740QM.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Core-i7-3770-guest.xml => x86_64-cpuid-Core-i7-3770-guest.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Core-i7-3770-host.xml => x86_64-cpuid-Core-i7-3770-host.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Core-i7-3770-json.xml => x86_64-cpuid-Core-i7-3770-json.xml} (92%)
create mode 100644 tests/cputestdata/x86_64-cpuid-Core-i7-3770.json
rename tests/cputestdata/{x86-cpuid-Core-i7-3770.xml => x86_64-cpuid-Core-i7-3770.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Core-i7-4600U-guest.xml => x86_64-cpuid-Core-i7-4600U-guest.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Core-i7-4600U-host.xml => x86_64-cpuid-Core-i7-4600U-host.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Core-i7-4600U-json.xml => x86_64-cpuid-Core-i7-4600U-json.xml} (95%)
create mode 100644 tests/cputestdata/x86_64-cpuid-Core-i7-4600U.json
rename tests/cputestdata/{x86-cpuid-Core-i7-4600U.xml => x86_64-cpuid-Core-i7-4600U.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Core-i7-5600U-guest.xml => x86_64-cpuid-Core-i7-5600U-guest.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Core-i7-5600U-host.xml => x86_64-cpuid-Core-i7-5600U-host.xml} (100%)
create mode 100644 tests/cputestdata/x86_64-cpuid-Core-i7-5600U-json.xml
create mode 100644 tests/cputestdata/x86_64-cpuid-Core-i7-5600U.json
rename tests/cputestdata/{x86-cpuid-Core-i7-5600U.xml => x86_64-cpuid-Core-i7-5600U.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Core2-E6850-guest.xml => x86_64-cpuid-Core2-E6850-guest.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Core2-E6850-host.xml => x86_64-cpuid-Core2-E6850-host.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Core2-E6850-json.xml => x86_64-cpuid-Core2-E6850-json.xml} (75%)
create mode 100644 tests/cputestdata/x86_64-cpuid-Core2-E6850.json
rename tests/cputestdata/{x86-cpuid-Core2-E6850.xml => x86_64-cpuid-Core2-E6850.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Core2-Q9500-guest.xml => x86_64-cpuid-Core2-Q9500-guest.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Core2-Q9500-host.xml => x86_64-cpuid-Core2-Q9500-host.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Core2-Q9500.xml => x86_64-cpuid-Core2-Q9500.xml} (100%)
rename tests/cputestdata/{x86-cpuid-FX-8150-guest.xml => x86_64-cpuid-FX-8150-guest.xml} (100%)
rename tests/cputestdata/{x86-cpuid-FX-8150-host.xml => x86_64-cpuid-FX-8150-host.xml} (100%)
rename tests/cputestdata/{x86-cpuid-FX-8150.xml => x86_64-cpuid-FX-8150.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Opteron-1352-guest.xml => x86_64-cpuid-Opteron-1352-guest.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Opteron-1352-host.xml => x86_64-cpuid-Opteron-1352-host.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Opteron-1352.xml => x86_64-cpuid-Opteron-1352.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Opteron-2350-guest.xml => x86_64-cpuid-Opteron-2350-guest.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Opteron-2350-host.xml => x86_64-cpuid-Opteron-2350-host.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Opteron-2350-json.xml => x86_64-cpuid-Opteron-2350-json.xml} (97%)
create mode 100644 tests/cputestdata/x86_64-cpuid-Opteron-2350.json
rename tests/cputestdata/{x86-cpuid-Opteron-2350.xml => x86_64-cpuid-Opteron-2350.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Opteron-6234-guest.xml => x86_64-cpuid-Opteron-6234-guest.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Opteron-6234-host.xml => x86_64-cpuid-Opteron-6234-host.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Opteron-6234-json.xml => x86_64-cpuid-Opteron-6234-json.xml} (96%)
create mode 100644 tests/cputestdata/x86_64-cpuid-Opteron-6234.json
rename tests/cputestdata/{x86-cpuid-Opteron-6234.xml => x86_64-cpuid-Opteron-6234.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Opteron-6282-guest.xml => x86_64-cpuid-Opteron-6282-guest.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Opteron-6282-host.xml => x86_64-cpuid-Opteron-6282-host.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Opteron-6282.xml => x86_64-cpuid-Opteron-6282.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Pentium-P6100-guest.xml => x86_64-cpuid-Pentium-P6100-guest.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Pentium-P6100-host.xml => x86_64-cpuid-Pentium-P6100-host.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Pentium-P6100.xml => x86_64-cpuid-Pentium-P6100.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Phenom-B95-guest.xml => x86_64-cpuid-Phenom-B95-guest.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Phenom-B95-host.xml => x86_64-cpuid-Phenom-B95-host.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Phenom-B95-json.xml => x86_64-cpuid-Phenom-B95-json.xml} (97%)
create mode 100644 tests/cputestdata/x86_64-cpuid-Phenom-B95.json
rename tests/cputestdata/{x86-cpuid-Phenom-B95.xml => x86_64-cpuid-Phenom-B95.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Xeon-5110-guest.xml => x86_64-cpuid-Xeon-5110-guest.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Xeon-5110-host.xml => x86_64-cpuid-Xeon-5110-host.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Xeon-5110.xml => x86_64-cpuid-Xeon-5110.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Xeon-E3-1245-guest.xml => x86_64-cpuid-Xeon-E3-1245-guest.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Xeon-E3-1245-host.xml => x86_64-cpuid-Xeon-E3-1245-host.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Xeon-E3-1245-json.xml => x86_64-cpuid-Xeon-E3-1245-json.xml} (93%)
create mode 100644 tests/cputestdata/x86_64-cpuid-Xeon-E3-1245.json
rename tests/cputestdata/{x86-cpuid-Xeon-E3-1245.xml => x86_64-cpuid-Xeon-E3-1245.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Xeon-E5-2630-guest.xml => x86_64-cpuid-Xeon-E5-2630-guest.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Xeon-E5-2630-host.xml => x86_64-cpuid-Xeon-E5-2630-host.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Xeon-E5-2630-json.xml => x86_64-cpuid-Xeon-E5-2630-json.xml} (95%)
create mode 100644 tests/cputestdata/x86_64-cpuid-Xeon-E5-2630.json
rename tests/cputestdata/{x86-cpuid-Xeon-E5-2630.xml => x86_64-cpuid-Xeon-E5-2630.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Xeon-E5-2650-guest.xml => x86_64-cpuid-Xeon-E5-2650-guest.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Xeon-E5-2650-host.xml => x86_64-cpuid-Xeon-E5-2650-host.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Xeon-E5-2650-json.xml => x86_64-cpuid-Xeon-E5-2650-json.xml} (94%)
create mode 100644 tests/cputestdata/x86_64-cpuid-Xeon-E5-2650.json
rename tests/cputestdata/{x86-cpuid-Xeon-E5-2650.xml => x86_64-cpuid-Xeon-E5-2650.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Xeon-E7-4820-guest.xml => x86_64-cpuid-Xeon-E7-4820-guest.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Xeon-E7-4820-host.xml => x86_64-cpuid-Xeon-E7-4820-host.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Xeon-E7-4820-json.xml => x86_64-cpuid-Xeon-E7-4820-json.xml} (94%)
create mode 100644 tests/cputestdata/x86_64-cpuid-Xeon-E7-4820.json
rename tests/cputestdata/{x86-cpuid-Xeon-E7-4820.xml => x86_64-cpuid-Xeon-E7-4820.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Xeon-W3520-guest.xml => x86_64-cpuid-Xeon-W3520-guest.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Xeon-W3520-host.xml => x86_64-cpuid-Xeon-W3520-host.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Xeon-W3520-json.xml => x86_64-cpuid-Xeon-W3520-json.xml} (93%)
create mode 100644 tests/cputestdata/x86_64-cpuid-Xeon-W3520.json
rename tests/cputestdata/{x86-cpuid-Xeon-W3520.xml => x86_64-cpuid-Xeon-W3520.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Xeon-X5460-guest.xml => x86_64-cpuid-Xeon-X5460-guest.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Xeon-X5460-host.xml => x86_64-cpuid-Xeon-X5460-host.xml} (100%)
rename tests/cputestdata/{x86-cpuid-Xeon-X5460.xml => x86_64-cpuid-Xeon-X5460.xml} (100%)
rename tests/cputestdata/{x86-exact-disable-extra.xml => x86_64-exact-disable-extra.xml} (100%)
rename tests/cputestdata/{x86-exact-disable.xml => x86_64-exact-disable.xml} (100%)
rename tests/cputestdata/{x86-exact-disable2.xml => x86_64-exact-disable2.xml} (100%)
rename tests/cputestdata/{x86-exact-forbid-extra.xml => x86_64-exact-forbid-extra.xml} (100%)
rename tests/cputestdata/{x86-exact-forbid.xml => x86_64-exact-forbid.xml} (100%)
rename tests/cputestdata/{x86-exact-force-Haswell.xml => x86_64-exact-force-Haswell.xml} (100%)
rename tests/cputestdata/{x86-exact-force.xml => x86_64-exact-force.xml} (100%)
rename tests/cputestdata/{x86-exact-require-extra.xml => x86_64-exact-require-extra.xml} (100%)
rename tests/cputestdata/{x86-exact-require.xml => x86_64-exact-require.xml} (100%)
rename tests/cputestdata/{x86-exact.xml => x86_64-exact.xml} (100%)
rename tests/cputestdata/{x86-guest-nofallback.xml => x86_64-guest-nofallback.xml} (100%)
rename tests/cputestdata/{x86-guest.xml => x86_64-guest.xml} (100%)
rename tests/cputestdata/{x86-host+guest,model486-result.xml => x86_64-host+guest,model486-result.xml} (100%)
rename tests/cputestdata/{x86-host+guest,models-result.xml => x86_64-host+guest,models-result.xml} (100%)
rename tests/cputestdata/{x86-host+guest-result.xml => x86_64-host+guest-result.xml} (100%)
rename tests/cputestdata/{x86-host+guest.xml => x86_64-host+guest.xml} (100%)
rename tests/cputestdata/{x86-host+host+host-model,models-result.xml => x86_64-host+host+host-model,models-result.xml} (100%)
rename tests/cputestdata/{x86-host+host-model-nofallback.xml => x86_64-host+host-model-nofallback.xml} (100%)
rename tests/cputestdata/{x86-host+host-model.xml => x86_64-host+host-model.xml} (100%)
rename tests/cputestdata/{x86-host+host-passthrough-features.xml => x86_64-host+host-passthrough-features.xml} (100%)
rename tests/cputestdata/{x86-host+host-passthrough.xml => x86_64-host+host-passthrough.xml} (100%)
rename tests/cputestdata/{x86-host+min.xml => x86_64-host+min.xml} (100%)
rename tests/cputestdata/{x86-host+penryn-force-result.xml => x86_64-host+penryn-force-result.xml} (100%)
rename tests/cputestdata/{x86-host+pentium3.xml => x86_64-host+pentium3.xml} (100%)
rename tests/cputestdata/{x86-host+strict-force-extra-result.xml => x86_64-host+strict-force-extra-result.xml} (100%)
rename tests/cputestdata/{x86-host-Haswell-noTSX+Haswell,haswell-result.xml => x86_64-host-Haswell-noTSX+Haswell,haswell-result.xml} (100%)
rename tests/cputestdata/{x86-host-Haswell-noTSX+Haswell-noTSX,haswell-result.xml => x86_64-host-Haswell-noTSX+Haswell-noTSX,haswell-result.xml} (100%)
rename tests/cputestdata/{x86-host-Haswell-noTSX+Haswell-noTSX-result.xml => x86_64-host-Haswell-noTSX+Haswell-noTSX-result.xml} (100%)
rename tests/cputestdata/{x86-host-Haswell-noTSX.xml => x86_64-host-Haswell-noTSX.xml} (100%)
rename tests/cputestdata/{x86-host-SandyBridge.xml => x86_64-host-SandyBridge.xml} (100%)
rename tests/cputestdata/{x86-host-amd-fake.xml => x86_64-host-amd-fake.xml} (100%)
rename tests/cputestdata/{x86-host-amd.xml => x86_64-host-amd.xml} (100%)
rename tests/cputestdata/{x86-host-better+pentium3-result.xml => x86_64-host-better+pentium3-result.xml} (100%)
rename tests/cputestdata/{x86-host-better.xml => x86_64-host-better.xml} (100%)
rename tests/cputestdata/{x86-host-incomp-arch.xml => x86_64-host-incomp-arch.xml} (100%)
rename tests/cputestdata/{x86-host-invtsc+host-model.xml => x86_64-host-invtsc+host-model.xml} (100%)
rename tests/cputestdata/{x86-host-invtsc.xml => x86_64-host-invtsc.xml} (100%)
rename tests/cputestdata/{x86-host-model-nofallback.xml => x86_64-host-model-nofallback.xml} (100%)
rename tests/cputestdata/{x86-host-model.xml => x86_64-host-model.xml} (100%)
rename tests/cputestdata/{x86-host-no-vendor.xml => x86_64-host-no-vendor.xml} (100%)
rename tests/cputestdata/{x86-host-passthrough-features.xml => x86_64-host-passthrough-features.xml} (100%)
rename tests/cputestdata/{x86-host-passthrough.xml => x86_64-host-passthrough.xml} (100%)
rename tests/cputestdata/{x86-host-worse+guest-result.xml => x86_64-host-worse+guest-result.xml} (100%)
rename tests/cputestdata/{x86-host-worse.xml => x86_64-host-worse.xml} (100%)
rename tests/cputestdata/{x86-host.xml => x86_64-host.xml} (100%)
rename tests/cputestdata/{x86-min.xml => x86_64-min.xml} (100%)
rename tests/cputestdata/{x86-penryn-force.xml => x86_64-penryn-force.xml} (100%)
rename tests/cputestdata/{x86-pentium3-amd.xml => x86_64-pentium3-amd.xml} (100%)
rename tests/cputestdata/{x86-pentium3.xml => x86_64-pentium3.xml} (100%)
rename tests/cputestdata/{x86-strict-disable.xml => x86_64-strict-disable.xml} (100%)
rename tests/cputestdata/{x86-strict-force-extra.xml => x86_64-strict-force-extra.xml} (100%)
rename tests/cputestdata/{x86-strict-full.xml => x86_64-strict-full.xml} (100%)
rename tests/cputestdata/{x86-strict.xml => x86_64-strict.xml} (100%)
create mode 100644 tests/domaincapsschemadata/qemu_2.9.0-tcg.x86_64.xml
create mode 100644 tests/domaincapsschemadata/qemu_2.9.0.x86_64.xml
create mode 100644 tests/qemucapabilitiesdata/caps_2.9.0.x86_64.replies
create mode 100644 tests/qemucapabilitiesdata/caps_2.9.0.x86_64.xml
--
2.11.1
[libvirt] [PATCH] qemu_cgroup: Only try to allow devices if devices CGroup's available
by Michal Privoznik
When a domain needs access to some device (be it a disk, RNG,
chardev, whatever), we have to allow it in the devices CGroup (if
it is available), because by default we disallow all devices.
But some of the functions that are responsible for setting up the
devices CGroup lack a check for whether any CGroup is available
at all. Thus users might be unable to hotplug some devices:
virsh # attach-device fedora rng.xml
error: Failed to attach device from rng.xml
error: internal error: Controller 'devices' is not mounted
Signed-off-by: Michal Privoznik <mprivozn(a)redhat.com>
---
src/qemu/qemu_cgroup.c | 21 +++++++++++++++++++++
1 file changed, 21 insertions(+)
diff --git a/src/qemu/qemu_cgroup.c b/src/qemu/qemu_cgroup.c
index f0729743a..42a47a798 100644
--- a/src/qemu/qemu_cgroup.c
+++ b/src/qemu/qemu_cgroup.c
@@ -176,6 +176,9 @@ qemuSetupChrSourceCgroup(virDomainObjPtr vm,
qemuDomainObjPrivatePtr priv = vm->privateData;
int ret;
+ if (!virCgroupHasController(priv->cgroup, VIR_CGROUP_CONTROLLER_DEVICES))
+ return 0;
+
if (source->type != VIR_DOMAIN_CHR_TYPE_DEV)
return 0;
@@ -197,6 +200,9 @@ qemuTeardownChrSourceCgroup(virDomainObjPtr vm,
qemuDomainObjPrivatePtr priv = vm->privateData;
int ret;
+ if (!virCgroupHasController(priv->cgroup, VIR_CGROUP_CONTROLLER_DEVICES))
+ return 0;
+
if (source->type != VIR_DOMAIN_CHR_TYPE_DEV)
return 0;
@@ -247,6 +253,9 @@ qemuSetupInputCgroup(virDomainObjPtr vm,
qemuDomainObjPrivatePtr priv = vm->privateData;
int ret = 0;
+ if (!virCgroupHasController(priv->cgroup, VIR_CGROUP_CONTROLLER_DEVICES))
+ return 0;
+
switch (dev->type) {
case VIR_DOMAIN_INPUT_TYPE_PASSTHROUGH:
VIR_DEBUG("Process path '%s' for input device", dev->source.evdev);
@@ -270,6 +279,9 @@ qemuSetupHostdevCgroup(virDomainObjPtr vm,
size_t i, npaths = 0;
int rv, ret = -1;
+ if (!virCgroupHasController(priv->cgroup, VIR_CGROUP_CONTROLLER_DEVICES))
+ return 0;
+
if (qemuDomainGetHostdevPath(NULL, dev, false, &npaths, &path, &perms) < 0)
goto cleanup;
@@ -344,6 +356,9 @@ qemuSetupGraphicsCgroup(virDomainObjPtr vm,
const char *rendernode = gfx->data.spice.rendernode;
int ret;
+ if (!virCgroupHasController(priv->cgroup, VIR_CGROUP_CONTROLLER_DEVICES))
+ return 0;
+
if (gfx->type != VIR_DOMAIN_GRAPHICS_TYPE_SPICE ||
gfx->data.spice.gl != VIR_TRISTATE_BOOL_YES ||
!rendernode)
@@ -481,6 +496,9 @@ qemuSetupRNGCgroup(virDomainObjPtr vm,
qemuDomainObjPrivatePtr priv = vm->privateData;
int rv;
+ if (!virCgroupHasController(priv->cgroup, VIR_CGROUP_CONTROLLER_DEVICES))
+ return 0;
+
if (rng->backend == VIR_DOMAIN_RNG_BACKEND_RANDOM) {
VIR_DEBUG("Setting Cgroup ACL for RNG device");
rv = virCgroupAllowDevicePath(priv->cgroup,
@@ -505,6 +523,9 @@ qemuTeardownRNGCgroup(virDomainObjPtr vm,
qemuDomainObjPrivatePtr priv = vm->privateData;
int rv;
+ if (!virCgroupHasController(priv->cgroup, VIR_CGROUP_CONTROLLER_DEVICES))
+ return 0;
+
if (rng->backend == VIR_DOMAIN_RNG_BACKEND_RANDOM) {
VIR_DEBUG("Tearing down Cgroup ACL for RNG device");
rv = virCgroupDenyDevicePath(priv->cgroup,
--
2.11.0
[libvirt] [PATCH resend v2 0/2] Allow saving QEMU libvirt state to a pipe
by Chen Hanxiao
This series introduces the flag VIR_DOMAIN_SAVE_DIRECT
to enable the 'save' command to write to a pipe.
Based upon patches from Roy Keene <rkeene(a)knightpoint.com>
with some fixes.
Changes from the original patch:
1) Check whether the specified path is a PIPE.
2) Rebase on upstream.
3) Add doc for virsh command
v2-resend:
rebase on upstream
v2:
rename VIR_DOMAIN_SAVE_PIPE to VIR_DOMAIN_SAVE_DIRECT
remove S_ISFIFO check
Chen Hanxiao (2):
qemu: Allow saving QEMU libvirt state to a pipe
virsh: introduce flage --direct for save command
include/libvirt/libvirt-domain.h | 1 +
src/qemu/qemu_driver.c | 54 ++++++++++++++++++++++++++--------------
tools/virsh-domain.c | 6 +++++
tools/virsh.pod | 5 +++-
4 files changed, 47 insertions(+), 19 deletions(-)
--
2.7.4
[libvirt] [PATCH v2] qemu_capabilities: introduce QEMU_CAPS_SD_CARD to probe sd-card drivers
by Chen Hanxiao
From: Chen Hanxiao <chenhanxiao(a)gmail.com>
This patch introduces QEMU_CAPS_SD_CARD for probing
whether qemu supports SD cards by:
{"execute": "device-list-properties",
"arguments":{"typename":"sd-card"}}
It will be helpful for apps which use
the 'virsh domcaps' command, etc.
Also helpful for:
https://bugzilla.redhat.com/show_bug.cgi?id=1387218
Signed-off-by: Chen Hanxiao <chenhanxiao(a)gmail.com>
---
v2:
rebased on upstream
src/qemu/qemu_capabilities.c | 9 +++++++--
src/qemu/qemu_capabilities.h | 1 +
2 files changed, 8 insertions(+), 2 deletions(-)
diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index 5b5e3ac..da983ff 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -358,6 +358,7 @@ VIR_ENUM_IMPL(virQEMUCaps, QEMU_CAPS_LAST,
"query-cpu-model-expansion", /* 245 */
"virtio-net.host_mtu",
"spice-rendernode",
+ "sd-card",
);
@@ -1625,6 +1626,7 @@ struct virQEMUCapsStringFlags virQEMUCapsObjectTypes[] = {
{ "ivshmem-plain", QEMU_CAPS_DEVICE_IVSHMEM_PLAIN },
{ "ivshmem-doorbell", QEMU_CAPS_DEVICE_IVSHMEM_DOORBELL },
{ "vhost-scsi", QEMU_CAPS_DEVICE_VHOST_SCSI },
+ { "sd-card", QEMU_CAPS_SD_CARD },
};
static struct virQEMUCapsStringFlags virQEMUCapsObjectPropsVirtioBalloon[] = {
@@ -5215,8 +5217,7 @@ virQEMUCapsFillDomainDeviceDiskCaps(virQEMUCapsPtr qemuCaps,
VIR_DOMAIN_CAPS_ENUM_SET(disk->bus,
VIR_DOMAIN_DISK_BUS_IDE,
VIR_DOMAIN_DISK_BUS_SCSI,
- VIR_DOMAIN_DISK_BUS_VIRTIO,
- /* VIR_DOMAIN_DISK_BUS_SD */);
+ VIR_DOMAIN_DISK_BUS_VIRTIO);
/* PowerPC pseries based VMs do not support floppy device */
if (!ARCH_IS_PPC64(qemuCaps->arch) ||
@@ -5225,6 +5226,10 @@ virQEMUCapsFillDomainDeviceDiskCaps(virQEMUCapsPtr qemuCaps,
if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_DEVICE_USB_STORAGE))
VIR_DOMAIN_CAPS_ENUM_SET(disk->bus, VIR_DOMAIN_DISK_BUS_USB);
+
+ if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_SD_CARD))
+ VIR_DOMAIN_CAPS_ENUM_SET(disk->bus, VIR_DOMAIN_DISK_BUS_SD);
+
return 0;
}
diff --git a/src/qemu/qemu_capabilities.h b/src/qemu/qemu_capabilities.h
index 0f998c4..746bd9c 100644
--- a/src/qemu/qemu_capabilities.h
+++ b/src/qemu/qemu_capabilities.h
@@ -394,6 +394,7 @@ typedef enum {
QEMU_CAPS_QUERY_CPU_MODEL_EXPANSION, /* qmp query-cpu-model-expansion */
QEMU_CAPS_VIRTIO_NET_HOST_MTU, /* virtio-net-*.host_mtu */
QEMU_CAPS_SPICE_RENDERNODE, /* -spice rendernode */
+ QEMU_CAPS_SD_CARD, /* -sd abc.img */
QEMU_CAPS_LAST /* this must always be the last item */
} virQEMUCapsFlags;
--
2.7.4
[libvirt] [PATCH] test: fix pcie-root-port-too-many test
by Laine Stump
While reviewing a patch from Andrea that modified this test case, I
realized that although it was "properly failing" (it's a negative
test), it was failing for the wrong reason (the MULTIFUNCTION cap
wasn't set in the test case, so it was saying that multifunction=on
wasn't supported by the QEMU binary; instead it should have been
complaining that it had run out of PCI slots of the appropriate type
and couldn't automatically add any more).
This improper failure had started when I added the patch to
automatically aggregate pcie-root-ports onto multiple functions of
each pcie-root slot, but I hadn't noticed it because the test still
failed.
This patch corrects the test case to 1) set the MULTIFUNCTION flag in
the caps, and 2) attempt to add 241 pcie-root-ports to a domain. Since
there are 30 slots available on a pcie-root (slot 0 is reserved, and
slot 31 is used by the integrated SATA controller), and a
pcie-root-port can only be placed on a function of a slot on
pcie-root, the maximum number of pcie-root-ports in any domain is
240 (30 slots x 8 functions per slot).
---
.../qemuxml2argv-pcie-root-port-too-many.xml | 273 ++++++++++++++++++---
tests/qemuxml2argvtest.c | 1 +
2 files changed, 242 insertions(+), 32 deletions(-)
diff --git a/tests/qemuxml2argvdata/qemuxml2argv-pcie-root-port-too-many.xml b/tests/qemuxml2argvdata/qemuxml2argv-pcie-root-port-too-many.xml
index 5234e3b..d7ac64a 100644
--- a/tests/qemuxml2argvdata/qemuxml2argv-pcie-root-port-too-many.xml
+++ b/tests/qemuxml2argvdata/qemuxml2argv-pcie-root-port-too-many.xml
@@ -20,41 +20,250 @@
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<controller type='pci' index='0' model='pcie-root'/>
- <controller type='pci' index='1' model='pcie-root-port'/>
- <controller type='pci' index='2' model='pcie-root-port'/>
- <controller type='pci' index='3' model='pcie-root-port'/>
- <controller type='pci' index='4' model='pcie-root-port'/>
- <controller type='pci' index='5' model='pcie-root-port'/>
- <controller type='pci' index='6' model='pcie-root-port'/>
- <controller type='pci' index='7' model='pcie-root-port'/>
- <controller type='pci' index='8' model='pcie-root-port'/>
- <controller type='pci' index='9' model='pcie-root-port'/>
- <controller type='pci' index='10' model='pcie-root-port'/>
- <controller type='pci' index='11' model='pcie-root-port'/>
- <controller type='pci' index='12' model='pcie-root-port'/>
- <controller type='pci' index='13' model='pcie-root-port'/>
- <controller type='pci' index='14' model='pcie-root-port'/>
- <controller type='pci' index='15' model='pcie-root-port'/>
- <controller type='pci' index='16' model='pcie-root-port'/>
- <controller type='pci' index='17' model='pcie-root-port'/>
- <controller type='pci' index='18' model='pcie-root-port'/>
- <controller type='pci' index='19' model='pcie-root-port'/>
- <controller type='pci' index='20' model='pcie-root-port'/>
- <controller type='pci' index='21' model='pcie-root-port'/>
- <controller type='pci' index='22' model='pcie-root-port'/>
- <controller type='pci' index='23' model='pcie-root-port'/>
- <controller type='pci' index='24' model='pcie-root-port'/>
- <controller type='pci' index='25' model='pcie-root-port'/>
- <controller type='pci' index='26' model='pcie-root-port'/>
- <controller type='pci' index='27' model='pcie-root-port'/>
- <controller type='pci' index='28' model='pcie-root-port'/>
- <controller type='pci' index='29' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
+ <controller type='pci' model='pcie-root-port'/>
<controller type='sata' index='0'/>
<input type='mouse' bus='ps2'/>
<input type='keyboard' bus='ps2'/>
- <video>
- <model type='qxl' ram='65536' vram='32768' vgamem='8192' heads='1'/>
- </video>
<memballoon model='none'/>
</devices>
</domain>
diff --git a/tests/qemuxml2argvtest.c b/tests/qemuxml2argvtest.c
index 8d737fd..81217df 100644
--- a/tests/qemuxml2argvtest.c
+++ b/tests/qemuxml2argvtest.c
@@ -2041,6 +2041,7 @@ mymain(void)
QEMU_CAPS_DEVICE_DMI_TO_PCI_BRIDGE,
QEMU_CAPS_DEVICE_IOH3420,
QEMU_CAPS_ICH9_AHCI,
+ QEMU_CAPS_PCI_MULTIFUNCTION,
QEMU_CAPS_DEVICE_VIDEO_PRIMARY,
QEMU_CAPS_DEVICE_QXL);
--
2.9.3
[libvirt] [PATCH v2 0/3] Use non-blacklisted family/model/stepping for Haswell CPU model
by Eduardo Habkost
Changes v1 -> v2:
* Coding style fixes
* Make series simpler:
* Don't use the 'char vendor[static (CPUID_VENDOR_SZ + 1)]' parameter
trick, because it confuses checkpatch.pl (a short standalone note on
what that form means follows the testing-branch links below)
* Removed patch "Add explicit array size to x86_cpu_vendor_words2str()"
* Rebased on top of my x86-next branch:
https://github.com/ehabkost/qemu x86-next
Git branch for testing:
https://github.com/ehabkost/qemu-hacks work/x86-rtm-blacklist
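As promised above, a short standalone note on the dropped parameter
form (not part of the series; the buffer-filling helpers below are made
up for illustration):

#include <stdio.h>

#define CPUID_VENDOR_SZ 12

/* v1 form: in C99, "char vendor[static N]" in a prototype promises the
 * caller passes an array of at least N elements; the parameter type is
 * still just char *. */
static void fill_vendor_v1(char vendor[static (CPUID_VENDOR_SZ + 1)])
{
    snprintf(vendor, CPUID_VENDOR_SZ + 1, "GenuineIntel");
}

/* v2 form: a plain pointer, same behaviour, friendlier to checkpatch.pl. */
static void fill_vendor_v2(char *vendor)
{
    snprintf(vendor, CPUID_VENDOR_SZ + 1, "GenuineIntel");
}

int main(void)
{
    char buf[CPUID_VENDOR_SZ + 1];

    fill_vendor_v1(buf);
    printf("v1: %s\n", buf);
    fill_vendor_v2(buf);
    printf("v2: %s\n", buf);
    return 0;
}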
Diff from v1:
diff --git a/target/i386/cpu.h b/target/i386/cpu.h
index cd94726e43..647435a1d9 100644
--- a/target/i386/cpu.h
+++ b/target/i386/cpu.h
@@ -1431,7 +1431,7 @@ void cpu_x86_cpuid(CPUX86State *env, uint32_t index, uint32_t count,
void cpu_clear_apic_feature(CPUX86State *env);
void host_cpuid(uint32_t function, uint32_t count,
uint32_t *eax, uint32_t *ebx, uint32_t *ecx, uint32_t *edx);
-void host_vendor_fms(char vendor[static (CPUID_VENDOR_SZ + 1)], int *family, int *model, int *stepping);
+void host_vendor_fms(char *vendor, int *family, int *model, int *stepping);
/* helper.c */
int x86_cpu_handle_mmu_fault(CPUState *cpu, vaddr addr,
diff --git a/target/i386/cpu.c b/target/i386/cpu.c
index 25c6c5e115..eab1ad7935 100644
--- a/target/i386/cpu.c
+++ b/target/i386/cpu.c
@@ -682,7 +682,7 @@ void host_cpuid(uint32_t function, uint32_t count,
*edx = vec[3];
}
-void host_vendor_fms(char vendor[static (CPUID_VENDOR_SZ + 1)], int *family, int *model, int *stepping)
+void host_vendor_fms(char *vendor, int *family, int *model, int *stepping)
{
uint32_t eax, ebx, ecx, edx;
@@ -1570,7 +1570,8 @@ static void host_x86_cpu_class_init(ObjectClass *oc, void *data)
xcc->kvm_required = true;
xcc->ordering = 9;
- host_vendor_fms(host_cpudef.vendor, &host_cpudef.family, &host_cpudef.model, &host_cpudef.stepping);
+ host_vendor_fms(host_cpudef.vendor, &host_cpudef.family,
+ &host_cpudef.model, &host_cpudef.stepping);
cpu_x86_fill_model_id(host_cpudef.model_id);
---
A recent glibc commit[1] added a blacklist to ensure it won't use
TSX on hosts that are known to have a broken TSX implementation.
Our existing Haswell CPU model has a blacklisted
family/model/stepping combination, so it has to be updated to
make sure guests will really use TSX. This is done by patch 3/3.
However, to do this safely we need to ensure the host CPU is not
a blacklisted one, so we won't mislead guests by exposing
known-to-be-good FMS values on a known-to-be-broken host. This is
done by patch 2/3.
[1] https://sourceware.org/git/?p=glibc.git;a=commit;h=2702856bf45c82cf8e69f2...
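To make the mechanism concrete, here is a hedged sketch of a host-side
check built on host_vendor_fms() (using the v2 signature shown earlier);
the family/model/stepping values are illustrative placeholders, not the
actual blacklist used by patch 2/3:

#include <stdbool.h>
#include <string.h>

#define CPUID_VENDOR_SZ 12

/* Declared in target/i386/cpu.h; definition lives in cpu.c. */
void host_vendor_fms(char *vendor, int *family, int *model, int *stepping);

/* Illustrative only: report whether the host F/M/S looks like a part with
 * broken TSX.  The entry below is a placeholder, not the real list. */
static bool host_tsx_blacklisted(void)
{
    char vendor[CPUID_VENDOR_SZ + 1];
    int family, model, stepping;

    host_vendor_fms(vendor, &family, &model, &stepping);

    if (strcmp(vendor, "GenuineIntel") != 0)
        return false;

    /* Placeholder: treat early Haswell desktop steppings as broken. */
    return family == 6 && model == 60 && stepping <= 3;
}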
---
Cc: dgilbert(a)redhat.com
Cc: fweimer(a)redhat.com
Cc: carlos(a)redhat.com
Cc: triegel(a)redhat.com
Cc: berrange(a)redhat.com
Cc: jdenemar(a)redhat.com
Cc: pbonzini(a)redhat.com
Eduardo Habkost (3):
i386: host_vendor_fms() helper function
i386/kvm: Blacklist TSX on known broken hosts
i386: Change stepping of Haswell to non-blacklisted value
include/hw/i386/pc.h | 5 +++++
target/i386/cpu.h | 1 +
target/i386/cpu.c | 31 ++++++++++++++++++++++---------
target/i386/kvm.c | 17 +++++++++++++++++
4 files changed, 45 insertions(+), 9 deletions(-)
--
2.11.0.259.g40922b1