[libvirt] [RFC v2 REPOST] arm64: KVM: KVM API extensions for SVE
by Dave Martin
Hi all,
Reposting this to give people another chance to comment before I move
ahead...
---8<---
Here's a second, slightly more complete stab at the KVM API extensions
for SVE.
I haven't started implementing in earnest yet, so any comments at this
stage would be very helpful.
[libvir-list readers: this is a proposal for extending the KVM API on
AArch64 systems to support the Scalable Vector Extension [1], [2].
This has some interesting configuration and migration quirks -- see
"Vector length control" in particular, and feel free to throw questions
my way...]
Cheers
---Dave
[1] Overview
https://community.arm.com/processors/b/blog/posts/technology-update-the-s...
[2] Architecture spec
https://developer.arm.com/products/architecture/a-profile/docs/ddi0584/la...
---8<---
New feature KVM_ARM_VCPU_SVE:
* enables exposure of SVE to the guest
* enables visibility of / access to KVM_REG_ARM_SVE_*() via KVM reg
ioctls. The main purposes of this are a) to allow userspace to hide
weird-sized registers that it doesn't know how to deal with, and
b) to allow SVE to be hidden from the VM so that it can migrate to
nodes that don't support SVE.
ZCR_EL1 is not specifically hidden, since it is "just a system register"
and does not have a weird size or semantics etc.
Registers:
* A new register size, KVM_REG_SIZE_U2048, is defined (encoded
sensibly as the next unused value of the reg size field in the
reg ID; grep KVM_REG_SIZE_).
* Reg IDs for the SVE regs will be defined as "coproc" 0x14
(i.e., 0x14 << KVM_REG_ARM_COPROC_SHIFT)
KVM_REG_ARM_SVE_Z(n, i) is slice i of Zn (each slice is 2048 bits)
KVM_REG_ARM_SVE_P(n, i) is slice i of Pn (each slice is 256 bits)
KVM_REG_ARM_FFR(i) is slice i of FFR (each slice is 256 bits)
The slice sizes allow each register to be read/written in exactly
one slice for SVE.
Surplus bits (beyond the maximum VL supported by the vcpu) will
be read-as-zero, write-ignore (RAZ/WI).
Reading/writing surplus slices will probably be forbidden, and the
surplus slices would not be reported via KVM_GET_REG_LIST.
(We could make these RAZ/WI too, but I'm not sure if it's worth it,
or why it would be useful.)
Future extensions to the architecture might grow the registers up
to 32 slices: this may or may not actually happen, but SVE keeps the
possibility open. I've tried to design for it.
* KVM_REG_ARM_SVE_Z(n, 0) bits [127:0] alias Vn in
KVM_REG_ARM_CORE(fp_regs.v[n]) .. KVM_REG_ARM_CORE(fp_regs.v[n])+3.
It's simplest for userspace if the two views always appear to be
in sync, but it's unclear whether this is really useful. Perhaps
this can be relaxed if it's a big deal for the KVM implementation;
I don't know yet.
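For illustration only, the reg ID encoding described above might look
roughly like this (the packing of the register number and slice index
into the low bits is my assumption, not part of the proposal):

    /* Hypothetical sketch -- not a final ABI. */
    #include <linux/kvm.h>   /* KVM_REG_ARM64, KVM_REG_SIZE_*, KVM_REG_ARM_COPROC_SHIFT */

    /* Proposed "coproc" group 0x14 for the SVE registers. */
    #define KVM_REG_ARM_SVE          (0x14ULL << KVM_REG_ARM_COPROC_SHIFT)

    /* Next unused value of the reg size field, i.e. one past U1024. */
    #define KVM_REG_SIZE_U2048       (0x8ULL << KVM_REG_SIZE_SHIFT)

    /* Assumed layout: register number in bits [9:5], slice index in [4:0],
     * with the P/FFR registers in a separate sub-range. */
    #define KVM_REG_ARM_SVE_Z(n, i)  (KVM_REG_ARM64 | KVM_REG_SIZE_U2048 | \
                                      KVM_REG_ARM_SVE | ((n) << 5) | (i))
    #define KVM_REG_ARM_SVE_P(n, i)  (KVM_REG_ARM64 | KVM_REG_SIZE_U256 | \
                                      KVM_REG_ARM_SVE | 0x400 | ((n) << 5) | (i))
    #define KVM_REG_ARM_FFR(i)       KVM_REG_ARM_SVE_P(16, i)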
Vector length control:
Some means is needed to determine the set of vector lengths visible
to guest software running on a vcpu.
When a vcpu is created, the set would be defaulted to the maximal set
that can be supported while permitting each vcpu to run on any host
CPU. SVE has some virtualisation quirks which mean that this set may
exclude some vector lengths that are available for host userspace
applications. In the common case, however, the sets should be the
same.
* New ioctl KVM_ARM_VCPU_{SET,GET}_SVE_VLS to set or retrieve the set of
vector lengths available to the guest.
Adding random vcpu ioctls
To configure a non-default set of vector lengths,
KVM_ARM_VCPU_SET_SVE_VLS can be called: this would only be permitted
before the vcpu is first run.
This is primarily intended for supporting migration, by providing a
robust check that the destination node will run the vcpu correctly.
In a cluster with non-uniform SVE implementation across nodes, this
also allows a specific set of VLs to be requested that the caller
knows is usable across the whole cluster.
For migration purposes, userspace would need to do
KVM_ARM_VCPU_GET_SVE_VLS at the origin node and store the returned
set as VM metadata: on the destination node,
KVM_ARM_VCPU_SET_SVE_VLS should be used to request that exact set of
VLs: if the destination node can't support that set of VLs, the call
will fail.
The interface would look something like:
ioctl(vcpu_fd, KVM_ARM_SVE_SET_VLS, __u64 vqs[SVE_VQ_MAX / 64]);
How to expose this to the user in an intelligible way would be a
problem for userspace to solve.
At present, other than initialising each vcpu to the maximum
supportable set of VLs, I don't propose having a way to probe for
what sets of VLs are supportable: the above call either succeeds or
fails.
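As a rough userspace sketch (error handling trimmed; the ioctl names
follow the proposal and don't exist yet, and I'm assuming the usual
bitmap convention of bit (vq - 1) meaning "vector length vq * 128 bits
is enabled", with SVE_VQ_MAX taken from the kernel's SVE headers):

    #include <stdint.h>
    #include <sys/ioctl.h>

    #define SVE_VQ_MAX 512                    /* assumed, per <asm/sigcontext.h> */
    #define VQ_WORDS   (SVE_VQ_MAX / 64)

    /* Origin node: record the vcpu's VL set as VM metadata. */
    int save_vls(int vcpu_fd, uint64_t vqs[VQ_WORDS])
    {
        return ioctl(vcpu_fd, KVM_ARM_VCPU_GET_SVE_VLS, vqs);
    }

    /* Destination node: must run before the vcpu first runs; fails if
     * this exact set of VLs cannot be supported on the destination. */
    int restore_vls(int vcpu_fd, uint64_t vqs[VQ_WORDS])
    {
        return ioctl(vcpu_fd, KVM_ARM_VCPU_SET_SVE_VLS, vqs);
    }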
Cheers
---Dave
[libvirt] [jenkins-ci] lcitool: Use default python for creating salty passwords
by Martin Kletzander
On my system the crypt module in python2 doesn't have a mksalt() function.
However, python3 does, and the code is perfectly valid python3 code as well.
So let's make it run with the default python version, as that has the highest
chance of working.
Signed-off-by: Martin Kletzander <mkletzan(a)redhat.com>
---
guests/lcitool | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/guests/lcitool b/guests/lcitool
index ccd0a597785a..24274d800742 100755
--- a/guests/lcitool
+++ b/guests/lcitool
@@ -18,7 +18,7 @@ die() {
hash_file() {
PASS_FILE="$1"
- python2 -c "
+ python -c "
import crypt
password = open('$PASS_FILE', 'r').read().strip()
print(crypt.crypt(password,
--
2.16.1
[libvirt] [PATCH v3 0/4] qemu: use arp table of host to get the IP address of guests
by Chen Hanxiao
Introduce VIR_DOMAIN_INTERFACE_ADDRESSES_SRC_ARP to get the IP address
of a VM from the output of /proc/net/arp.
Chen Hanxiao (4):
util: introduce helper to parse /proc/net/arp
qemu: introduce qemuARPGetInterfaces to get IP from host's arp table
virsh: add --source arp to domifaddr
news: qemu: use arp table of host to get the IP address of guests
docs/news.xml | 9 ++++
include/libvirt/libvirt-domain.h | 1 +
po/POTFILES.in | 1 +
src/Makefile.am | 1 +
src/libvirt-domain.c | 7 +++
src/libvirt_private.syms | 5 ++
src/qemu/qemu_driver.c | 96 +++++++++++++++++++++++++++++++++
src/util/virarptable.c | 114 +++++++++++++++++++++++++++++++++++++++
src/util/virarptable.h | 48 +++++++++++++++++
tools/virsh-domain-monitor.c | 2 +
tools/virsh.pod | 7 +--
11 files changed, 288 insertions(+), 3 deletions(-)
create mode 100644 src/util/virarptable.c
create mode 100644 src/util/virarptable.h
--
2.14.3
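Not this series' implementation, but the idea in miniature: /proc/net/arp
is a whitespace-separated table (IP address, HW type, Flags, HW address,
Mask, Device), so looking up the IP for a guest's MAC is a line scan:

    #include <stdio.h>
    #include <strings.h>

    /* Return 0 and fill ipaddr if mac is found in the host ARP table. */
    static int arp_lookup(const char *mac, char *ipaddr, size_t len)
    {
        char line[256], ip[64], hwaddr[64], dev[32];
        unsigned int hwtype, flags;
        int ret = -1;
        FILE *fp = fopen("/proc/net/arp", "r");

        if (!fp)
            return -1;
        if (!fgets(line, sizeof(line), fp))       /* skip the header line */
            goto cleanup;

        while (fgets(line, sizeof(line), fp)) {
            if (sscanf(line, "%63s %x %x %63s %*s %31s",
                       ip, &hwtype, &flags, hwaddr, dev) != 5)
                continue;
            if (strcasecmp(hwaddr, mac) == 0) {
                snprintf(ipaddr, len, "%s", ip);
                ret = 0;
                break;
            }
        }
     cleanup:
        fclose(fp);
        return ret;
    }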
[libvirt] [PATCH v2] virt-aa-helper: Set the supported features
by Shivaprasad G Bhat
The virt-aa-helper fails to parse XMLs that use the memory/cpu
hotplug features or user-assigned aliases. Set the features in
xmlopt->config so that parsing succeeds.
Signed-off-by: Shivaprasad G Bhat <sbhat(a)linux.vnet.ibm.com>
---
src/security/virt-aa-helper.c | 8 +++++++-
1 file changed, 7 insertions(+), 1 deletion(-)
diff --git a/src/security/virt-aa-helper.c b/src/security/virt-aa-helper.c
index f7ccae0..29a459d 100644
--- a/src/security/virt-aa-helper.c
+++ b/src/security/virt-aa-helper.c
@@ -654,6 +654,11 @@ caps_mockup(vahControl * ctl, const char *xmlStr)
return rc;
}
+virDomainDefParserConfig virAAHelperDomainDefParserConfig = {
+ .features = VIR_DOMAIN_DEF_FEATURE_MEMORY_HOTPLUG |
+ VIR_DOMAIN_DEF_FEATURE_OFFLINE_VCPUPIN |
+ VIR_DOMAIN_DEF_FEATURE_INDIVIDUAL_VCPUS,
+};
static int
get_definition(vahControl * ctl, const char *xmlStr)
@@ -673,7 +678,8 @@ get_definition(vahControl * ctl, const char *xmlStr)
goto exit;
}
- if (!(ctl->xmlopt = virDomainXMLOptionNew(NULL, NULL, NULL, NULL, NULL))) {
+ if (!(ctl->xmlopt = virDomainXMLOptionNew(&virAAHelperDomainDefParserConfig,
+ NULL, NULL, NULL, NULL))) {
vah_error(ctl, 0, _("Failed to create XML config object"));
goto exit;
}
[libvirt] [PATCH v4 0/4] nwfilter common object adjustments
by John Ferlan
v3: https://www.redhat.com/archives/libvir-list/2017-October/msg00264.html
Although v3 didn't get any attention, I figured I'd update and repost.
The only difference between this series and that one is that I dropped
patch 1 from v3. It was an attempt to fix a perceived issue in nwfilter
that I've now determined is actually in nodedev, for which I'll have a
different set of patches.
John Ferlan (4):
nwfilter: Remove unnecessary UUID comparison bypass
nwfilter: Convert _virNWFilterObj to use virObjectRWLockable
nwfilter: Convert _virNWFilterObjList to use virObjectRWLockable
nwfilter: Remove need for nwfilterDriverLock in some API's
src/conf/virnwfilterobj.c | 555 +++++++++++++++++++++++----------
src/conf/virnwfilterobj.h | 11 +-
src/libvirt_private.syms | 3 +-
src/nwfilter/nwfilter_driver.c | 71 ++---
src/nwfilter/nwfilter_gentech_driver.c | 11 +-
5 files changed, 427 insertions(+), 224 deletions(-)
--
2.13.6
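The shape of the conversion, sketched (not the actual virnwfilterobj.c
code, and assuming a hash-backed object list for brevity): lookups take
the list lock for reading, modifications take it for writing.

    #include "virnwfilterobj.h"
    #include "virobject.h"
    #include "virhash.h"

    /* Many concurrent API calls may hold the list lock for reading. */
    static virNWFilterObjPtr
    nwfilterObjListFindByName(virNWFilterObjListPtr nwfilters, const char *name)
    {
        virNWFilterObjPtr obj;

        virObjectRWLockRead(nwfilters);
        obj = virHashLookup(nwfilters->objs, name);   /* assumed layout */
        if (obj)
            virObjectRef(obj);
        virObjectRWUnlock(nwfilters);
        return obj;
    }

    /* Writers (define/undefine) exclude all readers while they modify. */
    static void
    nwfilterObjListRemove(virNWFilterObjListPtr nwfilters, virNWFilterObjPtr obj)
    {
        virObjectRWLockWrite(nwfilters);
        virHashRemoveEntry(nwfilters->objs, obj->def->name);
        virObjectRWUnlock(nwfilters);
    }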
[libvirt] [PATCH 0/6] port allocator: make used port bitmap global etc
by Nikolay Shirokovskiy
This patch set addresses the issue described in [1]; the core of the
changes goes into the first patch. The others are cleanups and
refactorings.
[1] https://www.redhat.com/archives/libvir-list/2017-December/msg00600.html
Nikolay Shirokovskiy (6):
port allocator: make used port bitmap global
port allocator: remove range on manual port reserving
port allocator: remove range check in release function
port allocator: drop skip flag
port allocator: remove release functionality from set used
port allocator: make port range constant object
src/bhyve/bhyve_command.c | 4 +-
src/bhyve/bhyve_driver.c | 4 +-
src/bhyve/bhyve_process.c | 7 +-
src/bhyve/bhyve_utils.h | 2 +-
src/libvirt_private.syms | 3 +-
src/libxl/libxl_conf.c | 8 +--
src/libxl/libxl_conf.h | 12 ++--
src/libxl/libxl_domain.c | 3 +-
src/libxl/libxl_driver.c | 17 +++--
src/libxl/libxl_migration.c | 4 +-
src/qemu/qemu_conf.h | 12 ++--
src/qemu/qemu_driver.c | 27 ++++----
src/qemu/qemu_migration.c | 12 ++--
src/qemu/qemu_process.c | 55 +++++----------
src/util/virportallocator.c | 148 +++++++++++++++++++++++------------------
src/util/virportallocator.h | 24 +++----
tests/bhyvexml2argvtest.c | 5 +-
tests/libxlxml2domconfigtest.c | 7 +-
tests/virportallocatortest.c | 49 ++++++++------
19 files changed, 196 insertions(+), 207 deletions(-)
--
1.8.3.1
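The gist of the first patch, sketched with stand-in names (the real code
lives in virportallocator.c and also probes each port with bind()): one
process-global used-port map shared by every driver, so two allocator
ranges can no longer hand out the same port.

    #include <stdbool.h>
    #include <pthread.h>

    #define PORT_MAX 65536

    static pthread_mutex_t port_lock = PTHREAD_MUTEX_INITIALIZER;
    static bool port_used[PORT_MAX];      /* one global map, not per allocator */

    int port_acquire(unsigned short start, unsigned short end, unsigned short *port)
    {
        int ret = -1;

        pthread_mutex_lock(&port_lock);
        for (unsigned int p = start; p <= end; p++) {
            if (!port_used[p]) {
                port_used[p] = true;
                *port = p;
                ret = 0;
                break;
            }
        }
        pthread_mutex_unlock(&port_lock);
        return ret;
    }

    void port_release(unsigned short port)
    {
        pthread_mutex_lock(&port_lock);
        port_used[port] = false;          /* no range needed to release */
        pthread_mutex_unlock(&port_lock);
    }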
[libvirt] PATCH add q35 support ide
by Paul Schlacter
Hello everyone:
This enables use of the IDE bus with the q35 machine type; qemu
currently supports an IDE bus on q35 boards.
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index cc7596b..2dbade8 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -7188,6 +7188,7 @@ bool
qemuDomainMachineHasBuiltinIDE(const char *machine)
{
return qemuDomainMachineIsI440FX(machine) ||
+ qemuDomainMachineIsQ35(machine) ||
STREQ(machine, "malta") ||
STREQ(machine, "sun4u") ||
STREQ(machine, "g3beige");
[root@kvm ~]# virsh dumpxml instance-00000004 | grep machine=
<type arch='x86_64' machine='pc-q35-rhel7.3.0'>hvm</type>
[root@kvm~]#
[root@kvm~]# virsh dumpxml instance-00000004 | grep "'disk'" -A 13
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2' cache='none'/>
<source file='/var/lib/nova/instances/288271ce-69eb-4629-b98c-779036661294/disk'/>
<backingStore type='file' index='1'>
<format type='raw'/>
<source file='/var/lib/nova/instances/_base/8d383eef2e628adfc197a6e40e656916de566ab1'/>
<backingStore/>
</backingStore>
<target dev='vda' bus='ide'/>
<alias name='ide0-0-0'/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
[libvirt] [PATCH v2 00/15] Misc build refactoring / isolation work
by Daniel P. Berrangé
This was triggered by the recent Fedora change to add '-z defs' to RPM
builds by default, which breaks libvirt. Various make rule changes can
fix much of the problem, but it also requires source refactoring to get
rid of places where virt drivers directly call into the storage/network
drivers. Coincidentally, this work will also be useful in allowing us to
separate the drivers out into distinct daemons.
In v2:
- Fixed header file name comment
- Resolve conflicts
- Fix unit tests
- Fix bisectable build by moving libvirt_lxc build patch earlier
- Update syntax check header include rule
Daniel P. Berrangé (15):
storage: extract storage file backend from main storage driver backend
storage: move storage file backend framework into util directory
rpc: don't link in second copy of RPC code to libvirtd & lockd plugin
build: link libvirt_lxc against libvirt.so
conf: introduce callback registration for domain net device allocation
conf: expand network device callbacks to cover bandwidth updates
qemu: replace networkGetNetworkAddress with public API calls
conf: expand network device callbacks to cover resolving NIC type
network: remove conditional declarations
conf: move virStorageTranslateDiskSourcePool into domain conf
storage: export virStoragePoolLookupByTargetPath as a public API
build: explicitly link all modules with libvirt.so
build: provide a AM_FLAGS_MOD for loadable modules
build: passing the "-z defs" linker flag to prevent undefined symbols
cfg: forbid includes of headers in network and storage drivers again
cfg.mk | 2 +-
configure.ac | 1 +
daemon/Makefile.am | 3 +-
include/libvirt/libvirt-storage.h | 2 +
m4/virt-linker-no-undefined.m4 | 32 ++
po/POTFILES.in | 2 +-
src/Makefile.am | 142 ++++----
src/bhyve/bhyve_command.c | 7 +-
src/conf/domain_conf.c | 355 +++++++++++++++++++
src/conf/domain_conf.h | 71 ++++
src/driver-storage.h | 5 +
src/libvirt-storage.c | 40 +++
src/libvirt_private.syms | 29 ++
src/libvirt_public.syms | 6 +
src/libvirt_remote.syms | 11 +-
src/libxl/libxl_domain.c | 5 +-
src/libxl/libxl_driver.c | 7 +-
src/lxc/lxc_driver.c | 5 +-
src/lxc/lxc_process.c | 7 +-
src/network/bridge_driver.c | 144 ++------
src/network/bridge_driver.h | 72 ----
src/qemu/qemu_alias.c | 3 +-
src/qemu/qemu_command.c | 1 -
src/qemu/qemu_domain.c | 3 -
src/qemu/qemu_domain_address.c | 3 +-
src/qemu/qemu_driver.c | 15 +-
src/qemu/qemu_hotplug.c | 18 +-
src/qemu/qemu_migration.c | 3 +-
src/qemu/qemu_process.c | 115 +++++-
src/remote/remote_driver.c | 1 +
src/remote/remote_protocol.x | 17 +-
src/remote_protocol-structs | 7 +
src/security/virt-aa-helper.c | 2 -
src/storage/storage_backend.c | 66 ----
src/storage/storage_backend.h | 75 ----
src/storage/storage_backend_fs.c | 8 +-
src/storage/storage_backend_gluster.c | 4 +-
src/storage/storage_driver.c | 256 +-------------
src/storage/storage_driver.h | 3 -
src/storage/storage_source.c | 645 ----------------------------------
src/storage/storage_source.h | 59 ----
src/util/virstoragefile.c | 609 +++++++++++++++++++++++++++++++-
src/util/virstoragefile.h | 32 ++
src/util/virstoragefilebackend.c | 108 ++++++
src/util/virstoragefilebackend.h | 104 ++++++
src/vz/vz_sdk.c | 1 -
tests/Makefile.am | 4 +-
tests/qemuxml2argvtest.c | 4 +
tests/virstoragetest.c | 1 -
tools/Makefile.am | 1 +
50 files changed, 1675 insertions(+), 1441 deletions(-)
create mode 100644 m4/virt-linker-no-undefined.m4
delete mode 100644 src/storage/storage_source.c
delete mode 100644 src/storage/storage_source.h
create mode 100644 src/util/virstoragefilebackend.c
create mode 100644 src/util/virstoragefilebackend.h
--
2.14.3
[libvirt] [PATCH] qemuDomainRemoveMemoryDevice: unlink() memory backing file
by Michal Privoznik
https://bugzilla.redhat.com/show_bug.cgi?id=1461214
Since fec8f9c49af we try to use predictable file names for
'memory-backend-file' objects. But that made us provide a full path
to qemu when hot plugging the object, while previously we provided
merely a directory. This makes qemu behave differently: if qemu
sees a path that is a directory it calls mkstemp() and unlinks the
file immediately, but if it sees a full path it just calls
open(path, O_CREAT ..) and never unlinks the file. Therefore it's
up to libvirt to unlink the file and not leave it behind.
Signed-off-by: Michal Privoznik <mprivozn(a)redhat.com>
---
Zack, can you please check if this patch is suitable for your use cases?
src/qemu/qemu_hotplug.c | 3 +++
src/qemu/qemu_process.c | 26 ++++++++++++++++++++++++++
src/qemu/qemu_process.h | 4 ++++
3 files changed, 33 insertions(+)
diff --git a/src/qemu/qemu_hotplug.c b/src/qemu/qemu_hotplug.c
index 6dc16a105..f26e2ca60 100644
--- a/src/qemu/qemu_hotplug.c
+++ b/src/qemu/qemu_hotplug.c
@@ -3894,6 +3894,9 @@ qemuDomainRemoveMemoryDevice(virQEMUDriverPtr driver,
if (qemuDomainNamespaceTeardownMemory(vm, mem) < 0)
VIR_WARN("Unable to remove memory device from /dev");
+ if (qemuProcessDestroyMemoryBackingPath(driver, vm, mem) < 0)
+ VIR_WARN("Unable to destroy memory backing path");
+
virDomainMemoryDefFree(mem);
/* fix the balloon size */
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 1a0923af3..73624eefe 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -3467,6 +3467,32 @@ qemuProcessBuildDestroyMemoryPaths(virQEMUDriverPtr driver,
}
+int
+qemuProcessDestroyMemoryBackingPath(virQEMUDriverPtr driver,
+ virDomainObjPtr vm,
+ virDomainMemoryDefPtr mem)
+{
+ virQEMUDriverConfigPtr cfg = virQEMUDriverGetConfig(driver);
+ char *path = NULL;
+ int ret = -1;
+
+ if (qemuGetMemoryBackingPath(vm->def, cfg, mem->info.alias, &path) < 0)
+ goto cleanup;
+
+ if (unlink(path) < 0 &&
+ errno != ENOENT) {
+ virReportSystemError(errno, _("Unable to remove %s"), path);
+ goto cleanup;
+ }
+
+ ret = 0;
+ cleanup:
+ VIR_FREE(path);
+ virObjectUnref(cfg);
+ return ret;
+}
+
+
static int
qemuProcessVNCAllocatePorts(virQEMUDriverPtr driver,
virDomainGraphicsDefPtr graphics,
diff --git a/src/qemu/qemu_process.h b/src/qemu/qemu_process.h
index cd9a72031..3fc7d6c85 100644
--- a/src/qemu/qemu_process.h
+++ b/src/qemu/qemu_process.h
@@ -43,6 +43,10 @@ int qemuProcessBuildDestroyMemoryPaths(virQEMUDriverPtr driver,
virDomainMemoryDefPtr mem,
bool build);
+int qemuProcessDestroyMemoryBackingPath(virQEMUDriverPtr driver,
+ virDomainObjPtr vm,
+ virDomainMemoryDefPtr mem);
+
void qemuProcessAutostartAll(virQEMUDriverPtr driver);
void qemuProcessReconnectAll(virConnectPtr conn, virQEMUDriverPtr driver);
--
2.13.6
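The qemu behaviour described in the commit message, in miniature (an
illustration of the semantics only, not qemu's actual code; a trailing
slash stands in for qemu's real directory detection):

    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <unistd.h>

    int backing_fd_for(const char *mem_path)
    {
        size_t len = strlen(mem_path);
        int fd;

        if (len && mem_path[len - 1] == '/') {
            /* Directory: create a temp file and unlink it at once, so
             * nothing is left behind when the process exits. */
            char templ[4096];
            snprintf(templ, sizeof(templ), "%sguest-ram.XXXXXX", mem_path);
            fd = mkstemp(templ);
            if (fd >= 0)
                unlink(templ);
        } else {
            /* Full path: the file is created but never unlinked, so
             * cleaning it up is the caller's (libvirt's) job. */
            fd = open(mem_path, O_RDWR | O_CREAT, 0600);
        }
        return fd;
    }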
[libvirt] [PATCH v3 00/11] Implement query-dump command
by John Ferlan
v2: https://www.redhat.com/archives/libvir-list/2018-January/msg00636.html
Summary of changes since v2:
* Generate a dump stats extraction helper so it can be shared by the
DUMP_COMPLETED event and the query-dump command. Additionally, extract
the error string from QEMU so it can be processed later.
* In the DUMP_COMPLETED event handler, use the returned stats buffer
and copy any error message so the waiting completion can process it
properly.
NB: I was able to test this by making the guest memory size larger
than the space available to store the dump, and got the following error:
error: operation failed: memory-only dump failed: dump: failed to save memory
* As suggested during review - alter the jobInfo @stats to be part of
a union. Took the liberty to rename the fields as well.
* The point raised in patch 5 regarding mirrorStats being collected for a
non-migration job (e.g. save and dump) was handled by creating a new type
which uses only the migStats to collect data and avoids the mirrorStats.
When converting to a JobInfo, only the migStats will then be used.
* Patches 6 and 7 are new. One for the union and one to separate migrate
and save/dump style migration jobs.
* Former patch 6 (now 8) is altered to handle the union separation
* Former patch 8 (now 10) is altered mainly to handle the data in
buffers from the DUMP_COMPLETED event (either data or error).
NB: Comment from former patch 8:
"We should update job.completed at this point."
I believe that's handled because qemuDomainObjResetAsyncJob will
reset the dumpCompleted back to false. Unless there's a specific
reason to do it there...
NB: Formerly ACK'd patches 4 and 7 (now 9) did not change.
John Ferlan (11):
qemu: Add support for DUMP_COMPLETED event
qemu: Introduce qemuProcessHandleDumpCompleted
qemu: Introduce qemuMonitor[JSON]QueryDump
qemu: Add new parameter to qemuMonitorDumpToFd
qemu: Introduce qemuDomainGetJobInfoMigrationStats
qemu: Convert jobInfo stats into a union
qemu: Introduce QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP
qemu: Introduce qemuDomainGetJobInfoDumpStats
qemu: Add dump completed event to the capabilities
qemu: Allow showing the dump progress for memory only dump
docs: Add news article for query memory-only dump processing
percentage
docs/news.xml | 11 ++
src/qemu/qemu_capabilities.c | 2 +
src/qemu/qemu_capabilities.h | 1 +
src/qemu/qemu_domain.c | 130 +++++++++++++--
src/qemu/qemu_domain.h | 18 +-
src/qemu/qemu_driver.c | 183 ++++++++++++++++++---
src/qemu/qemu_migration.c | 13 +-
src/qemu/qemu_migration_cookie.c | 4 +-
src/qemu/qemu_monitor.c | 45 ++++-
src/qemu/qemu_monitor.h | 36 +++-
src/qemu/qemu_monitor_json.c | 106 +++++++++++-
src/qemu/qemu_monitor_json.h | 6 +-
src/qemu/qemu_process.c | 34 +++-
.../caps_2.10.0-gicv2.aarch64.xml | 1 +
.../caps_2.10.0-gicv3.aarch64.xml | 1 +
tests/qemucapabilitiesdata/caps_2.10.0.ppc64.xml | 1 +
tests/qemucapabilitiesdata/caps_2.10.0.s390x.xml | 1 +
tests/qemucapabilitiesdata/caps_2.10.0.x86_64.xml | 1 +
.../caps_2.6.0-gicv2.aarch64.xml | 1 +
.../caps_2.6.0-gicv3.aarch64.xml | 1 +
tests/qemucapabilitiesdata/caps_2.6.0.ppc64.xml | 1 +
tests/qemucapabilitiesdata/caps_2.6.0.x86_64.xml | 1 +
tests/qemucapabilitiesdata/caps_2.7.0.s390x.xml | 1 +
tests/qemucapabilitiesdata/caps_2.7.0.x86_64.xml | 1 +
tests/qemucapabilitiesdata/caps_2.8.0.s390x.xml | 1 +
tests/qemucapabilitiesdata/caps_2.8.0.x86_64.xml | 1 +
tests/qemucapabilitiesdata/caps_2.9.0.ppc64.xml | 1 +
tests/qemucapabilitiesdata/caps_2.9.0.s390x.xml | 1 +
tests/qemucapabilitiesdata/caps_2.9.0.x86_64.xml | 1 +
tests/qemumonitorjsontest.c | 3 +-
30 files changed, 553 insertions(+), 55 deletions(-)
--
2.13.6
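For reference, the union from patches 6/7 boils down to roughly the shape
below (field and type names approximate; the two stats structs are
trimmed stand-ins for the libvirt-internal ones):

    typedef struct {
        unsigned long long ram_transferred;
        unsigned long long ram_total;
    } qemuMonitorMigrationStats;            /* trimmed stand-in */

    typedef struct {
        unsigned long long completed;       /* bytes written so far */
        unsigned long long total;           /* total bytes to write */
        int status;
    } qemuMonitorDumpStats;                 /* trimmed stand-in (from query-dump) */

    typedef enum {
        QEMU_DOMAIN_JOB_STATS_TYPE_NONE = 0,
        QEMU_DOMAIN_JOB_STATS_TYPE_MIGRATION,
        QEMU_DOMAIN_JOB_STATS_TYPE_SAVEDUMP,  /* save/dump jobs: migration stats only */
        QEMU_DOMAIN_JOB_STATS_TYPE_MEMDUMP,   /* memory-only dump: query-dump stats */
    } qemuDomainJobStatsType;

    struct _qemuDomainJobInfo {
        /* ... existing status/timing/progress fields ... */
        qemuDomainJobStatsType statsType;
        union {
            qemuMonitorMigrationStats mig;  /* migration and save/dump */
            qemuMonitorDumpStats dump;      /* memory-only dump */
        } stats;
    };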