[libvirt] [PATCH v3 0/3] qemu: Use virtio-pci by default for mach-virt guests
by Andrea Bolognani
Changes from [v2]:
* rename qemuDomainCountVirtioMMIODevices() to
qemuDomainHasVirtioMMIODevices() and make it exit as soon
as the first virtio-mmio device is encountered, as
suggested by Laine (the iteration pattern is sketched below,
after the links)
* tweak test suite and note no new test cases are needed
* add comments and user documentation
* add release notes entry
* can actually be merged now that all patches it builds on
have been merged :)
Changes from [v1]:
* use virDomainDeviceInfoIterate(), as suggested by Martin
and Laine, which results in cleaner and more robust code
[v1] https://www.redhat.com/archives/libvir-list/2016-October/msg00988.html
[v2] https://www.redhat.com/archives/libvir-list/2016-October/msg01042.html
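For context, here is a minimal sketch of the early-exit iteration mentioned in the first bullet above, using the virDomainDeviceInfoIterate() callback pattern; the callback name is illustrative, and the real implementation lives in qemu_domain_address.c:

static int
virtioMMIOFound(virDomainDefPtr def ATTRIBUTE_UNUSED,
                virDomainDeviceDefPtr dev ATTRIBUTE_UNUSED,
                virDomainDeviceInfoPtr info,
                void *opaque)
{
    bool *found = opaque;

    if (info->type == VIR_DOMAIN_DEVICE_ADDRESS_TYPE_VIRTIO_MMIO) {
        *found = true;
        return -1; /* a negative return aborts the iteration early */
    }
    return 0;
}

/* bool has = false;
 * ignore_value(virDomainDeviceInfoIterate(def, virtioMMIOFound, &has)); */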
Andrea Bolognani (3):
qemu: Use virtio-pci by default for mach-virt guests
docs: Document virtio-mmio by default for mach-virt guests
NEWS: Update for virtio-pci by default for mach-virt guests
docs/formatdomain.html.in | 8 +++-
docs/news.html.in | 6 +++
src/qemu/qemu_domain_address.c | 51 ++++++++++++++++++++--
...l2argv-aarch64-virt-2.6-virtio-pci-default.args | 14 +++---
.../qemuxml2argv-aarch64-virtio-pci-default.args | 17 +++++---
.../qemuxml2argv-aarch64-virtio-pci-default.xml | 3 --
tests/qemuxml2argvtest.c | 1 +
.../qemuxml2xmlout-aarch64-virtio-pci-default.xml | 40 ++++++++++++++---
tests/qemuxml2xmltest.c | 1 +
9 files changed, 119 insertions(+), 22 deletions(-)
--
2.7.4
[libvirt] [V2]RFC for support cache tune in libvirt
by Qiao, Liyong
Hi folks
I've sent the v1 RFC earlier this week, but there have been no replies yet.
Thanks to Qiaowei for the comments; I've made the RFC much more libvirt-specific. Please help review it.
## Proposed Changes
# libvirtd configuration changes
Add a new configuration option, cache_allocation_ratio, to libvirtd, which controls how much of the host's cache libvirt may allocate to domains.
The default is 0.5.
E.g. on a host which has 55M of cache, libvirt can allocate 55M * cache_allocation_ratio of cache to domains.
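A minimal sketch of what this could look like in libvirtd.conf; the option name is the one proposed above, the file location is the usual default, and the setting does not exist yet:

# /etc/libvirt/libvirtd.conf (proposed option, not yet implemented)
# Fraction of the host's cache libvirt may hand out to domains
cache_allocation_ratio = 0.5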
## Virsh command line changes:
NAME
cachetune - control or query domain cache allocation
SYNOPSIS
cachetune <domain> [--enabled true/false] [--type <type>] [--size <number>] [--config] [--live] [--current]
DESCRIPTION
Allocate cache usage for domain.
OPTIONS
[--domain] <string> domain name, id or uuid
--enabled <true/false> enable cache allocation
--type <string> cache allocation type, e.g. l3 or l2
--size <number> the cache size in KB
--config affect next boot
--live affect running domain
--current affect current domain
This will allow libvirt to allocate a specific type of cache (e.g. l3) to a domain.
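For illustration, a hypothetical invocation following the synopsis above (the domain name and values are invented):
virsh cachetune demo-guest --enabled true --type l3 --size 4096 --live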
## Domain XML changes:
<cachetune>
  <cache type='l3' enabled='yes' size='4096' actual_size='4680'/>
  <cache type='l2' enabled='no' size='256' actual_size='0'/>
</cachetune>
(the <cache> element name is illustrative; the original sketch did not name the child element)
For more information about the detailed design, please refer to https://www.redhat.com/archives/libvir-list/2016-December/msg01011.html
CAT intro: https://software.intel.com/en-us/articles/software-enabling-for-cache-all...
Best Regards
Eli Qiao(乔立勇)OpenStack Core team OTC Intel.
--
[libvirt] [PATCH] util: fix domain object leaks on closecallbacks
by Wang King
From: wangjing <king.wang(a)huawei.com>
The virCloseCallbacksSet method increases the object reference
count for a VM, and virCloseCallbacksUnset decreases it.
But the VM's UUID is deleted from the closecallbacks list in
virCloseCallbacksRun when the connection is disconnected, and
after that the object reference can no longer be decreased by
virCloseCallbacksUnset in the callback functions.
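In other words, the discipline the patch below enforces is roughly this (a sketch, not part of the patch; it condenses the new virCloseCallbacksRun loop):

/* virDomainObjListFindByUUIDRef() returns the object locked *and*
 * referenced, so the callback may hand the lock back (or not), but
 * the extra reference taken here must always be dropped afterwards. */
vm = virDomainObjListFindByUUIDRef(domains, list->entries[i].uuid);
dom = list->entries[i].callback(vm, conn, opaque);
if (dom)
    virObjectUnlock(dom);
virObjectUnref(vm); /* balance the reference from FindByUUIDRef */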
Signed-off-by: Wang King <king.wang(a)huawei.com>
---
src/util/virclosecallbacks.c | 29 +++++++++++++++++------------
1 file changed, 17 insertions(+), 12 deletions(-)
diff --git a/src/util/virclosecallbacks.c b/src/util/virclosecallbacks.c
index 891a92b..26d5075 100644
--- a/src/util/virclosecallbacks.c
+++ b/src/util/virclosecallbacks.c
@@ -300,7 +300,9 @@ virCloseCallbacksGetForConn(virCloseCallbacksPtr closeCallbacks,
data.list = list;
data.oom = false;
+ virObjectLock(closeCallbacks);
virHashForEach(closeCallbacks->list, virCloseCallbacksGetOne, &data);
+ virObjectUnlock(closeCallbacks);
if (data.oom) {
VIR_FREE(list->entries);
@@ -329,22 +331,15 @@ virCloseCallbacksRun(virCloseCallbacksPtr closeCallbacks,
* them all from the hash. At that point we can release
* the lock and run the callbacks safely. */
- virObjectLock(closeCallbacks);
list = virCloseCallbacksGetForConn(closeCallbacks, conn);
if (!list)
return;
for (i = 0; i < list->nentries; i++) {
- char uuidstr[VIR_UUID_STRING_BUFLEN];
- virUUIDFormat(list->entries[i].uuid, uuidstr);
- virHashRemoveEntry(closeCallbacks->list, uuidstr);
- }
- virObjectUnlock(closeCallbacks);
-
- for (i = 0; i < list->nentries; i++) {
virDomainObjPtr vm;
+ virDomainObjPtr dom;
- if (!(vm = virDomainObjListFindByUUID(domains,
+ if (!(vm = virDomainObjListFindByUUIDRef(domains,
list->entries[i].uuid))) {
char uuidstr[VIR_UUID_STRING_BUFLEN];
virUUIDFormat(list->entries[i].uuid, uuidstr);
@@ -352,10 +347,20 @@ virCloseCallbacksRun(virCloseCallbacksPtr closeCallbacks,
continue;
}
- vm = list->entries[i].callback(vm, conn, opaque);
- if (vm)
- virObjectUnlock(vm);
+ dom = list->entries[i].callback(vm, conn, opaque);
+ if (dom)
+ virObjectUnlock(dom);
+ virObjectUnref(vm);
}
+
+ virObjectLock(closeCallbacks);
+ for (i = 0; i < list->nentries; i++) {
+ char uuidstr[VIR_UUID_STRING_BUFLEN];
+ virUUIDFormat(list->entries[i].uuid, uuidstr);
+ virHashRemoveEntry(closeCallbacks->list, uuidstr);
+ }
+ virObjectUnlock(closeCallbacks);
+
VIR_FREE(list->entries);
VIR_FREE(list);
}
--
2.8.3
[libvirt] [PATCH v2] cmdPerf: Display enabled/disabled message on perf event enable/disable
by Nitesh Konkar
Currently no message is displayed on a successful
perf event enable or disable. This patch prints
an enabled/disabled status message on successful
enabling/disabling of perf events.
E.g. virsh perf Domain --enable instructions --disable cache_misses
instructions : enabled
cache_misses : disabled
Signed-off-by: Nitesh Konkar <nitkon12(a)linux.vnet.ibm.com>
---
tools/virsh-domain.c | 27 ++++++++++++++++++---------
1 file changed, 18 insertions(+), 9 deletions(-)
diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c
index 3a6fa5c..287ca28 100644
--- a/tools/virsh-domain.c
+++ b/tools/virsh-domain.c
@@ -8848,13 +8848,27 @@ virshParseEventStr(const char *event,
return ret;
}
+static void
+virshPrintPerfStatus(vshControl *ctl, virTypedParameterPtr params, int nparams)
+{
+ size_t i;
+
+ for (i = 0; i < nparams; i++) {
+ if (params[i].type == VIR_TYPED_PARAM_BOOLEAN &&
+ params[i].value.b) {
+ vshPrint(ctl, "%-15s: %s\n", params[i].field, _("enabled"));
+ } else {
+ vshPrint(ctl, "%-15s: %s\n", params[i].field, _("disabled"));
+ }
+ }
+}
+
static bool
cmdPerf(vshControl *ctl, const vshCmd *cmd)
{
virDomainPtr dom;
int nparams = 0;
int maxparams = 0;
- size_t i;
virTypedParameterPtr params = NULL;
bool ret = false;
const char *enable = NULL, *disable = NULL;
@@ -8891,18 +8905,13 @@ cmdPerf(vshControl *ctl, const vshCmd *cmd)
vshError(ctl, "%s", _("Unable to get perf events"));
goto cleanup;
}
- for (i = 0; i < nparams; i++) {
- if (params[i].type == VIR_TYPED_PARAM_BOOLEAN &&
- params[i].value.b) {
- vshPrint(ctl, "%-15s: %s\n", params[i].field, _("enabled"));
- } else {
- vshPrint(ctl, "%-15s: %s\n", params[i].field, _("disabled"));
- }
- }
+ virshPrintPerfStatus(ctl, params, nparams);
} else {
if (virDomainSetPerfEvents(dom, params, nparams, flags) != 0) {
vshError(ctl, "%s", _("Unable to enable/disable perf events"));
goto cleanup;
+ } else {
+ virshPrintPerfStatus(ctl, params, nparams);
}
}
--
1.9.3
[libvirt] [PATCH 1/2] perf: Refactor perf code
by Nitesh Konkar
Avoid unnecessarily calling the function vshCommandOptStringReq.
In the current code the function vshCommandOptStringReq is
called irrespective of whether --enable and/or --disable is
present on the command line. E.g. 'virsh perf domainName'
also results in calling this function twice. This patch
fixes that.
Signed-off-by: Nitesh Konkar <nitkon12(a)linux.vnet.ibm.com>
---
tools/virsh-domain.c | 14 +++++++++++---
1 file changed, 11 insertions(+), 3 deletions(-)
diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c
index 3a6fa5c..91de532 100644
--- a/tools/virsh-domain.c
+++ b/tools/virsh-domain.c
@@ -8862,6 +8862,8 @@ cmdPerf(vshControl *ctl, const vshCmd *cmd)
bool current = vshCommandOptBool(cmd, "current");
bool config = vshCommandOptBool(cmd, "config");
bool live = vshCommandOptBool(cmd, "live");
+ bool shouldEnable = vshCommandOptBool(cmd, "enable");
+ bool shouldDisable = vshCommandOptBool(cmd, "disable");
VSH_EXCLUSIVE_OPTIONS_VAR(current, live);
VSH_EXCLUSIVE_OPTIONS_VAR(current, config);
@@ -8871,9 +8873,15 @@ cmdPerf(vshControl *ctl, const vshCmd *cmd)
if (live)
flags |= VIR_DOMAIN_AFFECT_LIVE;
- if (vshCommandOptStringReq(ctl, cmd, "enable", &enable) < 0 ||
- vshCommandOptStringReq(ctl, cmd, "disable", &disable) < 0)
- return false;
+ if (shouldEnable) {
+ if (vshCommandOptStringReq(ctl, cmd, "enable", &enable) < 0)
+ return false;
+ }
+
+ if (shouldDisable) {
+ if (vshCommandOptStringReq(ctl, cmd, "disable", &disable) < 0)
+ return false;
+ }
if (!(dom = virshCommandOptDomain(ctl, cmd, NULL)))
return false;
--
1.9.3
[libvirt] [PATCH v2 0/2] Detect misconfiguration between disk bus and disk address
by Marc Hartmayer
This patch series adds the functionality to detect a misconfiguration
between disk bus type and disk address type for disks that are using
the address type virDomainDeviceDriveAddress. It also adds a test for
it.
A check for other bus types may be needed. This may require a
driver-specific function, as is already implemented in
virDomainDeviceDefPostParse(), for example.
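For illustration, a hypothetical example of the kind of misconfiguration such a check should reject (the disk and address values here are invented; the actual test inputs are the XML files listed below): an IDE disk given a drive address whose unit is out of range, since an IDE bus supports only units 0 and 1.

<disk type='file' device='disk'>
  <source file='/var/lib/libvirt/images/demo.img'/>
  <target dev='hda' bus='ide'/>
  <!-- hypothetical mismatch: unit='2' is not addressable on an ide bus -->
  <address type='drive' controller='0' bus='0' target='0' unit='2'/>
</disk>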
Changelog:
- v1 -> v2:
+ Use full enumeration of the bus types
+ Add warning for unexpected bus type
Marc Hartmayer (2):
conf: Detect misconfiguration between disk bus and disk address
tests: Add tests for disk configuration validation
src/conf/domain_conf.c | 56 ++++++++++++++++++++++
.../qemuxml2argv-disk-fdc-incompatible-address.xml | 22 +++++++++
.../qemuxml2argv-disk-ide-incompatible-address.xml | 23 +++++++++
...qemuxml2argv-disk-sata-incompatible-address.xml | 23 +++++++++
...qemuxml2argv-disk-scsi-incompatible-address.xml | 24 ++++++++++
tests/qemuxml2argvtest.c | 8 ++++
6 files changed, 156 insertions(+)
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-fdc-incompatible-address.xml
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-ide-incompatible-address.xml
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-sata-incompatible-address.xml
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-scsi-incompatible-address.xml
--
2.5.5
[libvirt] RFC for support Intel RDT/CAT in libvirt
by Qiao, Liyong
Hi folks
I would like to start a discussion about how to support a new CPU feature in libvirt. CAT support is not fully merged into the Linux kernel yet; the target release is 4.10, and all patches have been merged into the Linux tip branch, so there won't be any further interface/design changes.
## Background
Intel RDT is a toolkit for doing resource QoS on CPU resources such as llc (l3) cache and memory bandwidth; these fine-grained resource control features are very useful in a cloud environment which runs lots of noisy instances.
Currently, libvirt already supports CMT/MBMT/MBML, but those are only for resource usage monitoring; this proposal is about supporting CAT to control a VM's l3 cache quota.
## CAT interface in kernel
In the kernel, a new resource interface has been introduced under /sys/fs/resctrl; it is used for resource control. For more information, refer to
Intel_rdt_ui [ https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git/tree/Documentati... ]
The kernel requires a schemata for the l3 cache to be provided before adding a task to a new partition; this interface is too much detail for a virtual machine user, so the proposal is to let libvirt manage the schemata on the host.
## What will libvirt do?
### Questions:
To enable CAT support in libvirt, we need to think about the following questions:
1. Should CAT only be set when a VM has CPU pinning? That is to say, l3 cache is a per-CPU-socket resource: on a host which has 2 CPU sockets, each socket has its own cache, which cannot be shared.
2. Which cache allocation policy should be used? This could look like:
a. The VM has its own dedicated l3 cache and can also share the other l3 cache.
b. The VM can only use the caches allocated to it.
c. There are some pre-defined policies and priorities for a VM,
like COB [1].
3. Should some l3 cache be reserved for the host system's usage? (related to 2)
4. What is the unit for l3 cache allocation? (related to 2)
### Propose Changes
XML domain user interface changes:
Option 1: explicitly specify the cache allocation for a VM
1. Work with NUMA nodes
Some cloud orchestration software uses NUMA + vCPU pinning together, so we can enable CAT support within the NUMA infrastructure.
Expose how much l3 cache a VM wants reserved, and require that the l3 cache be bound to a specific CPU socket, just as we do for NUMA nodes.
This is a domain XML example, as generated by OpenStack Nova, for allocating llc (l3 cache) when booting a new VM:
<domain>
…
<cputune>
<vcpupin vcpu='0' cpuset='19'/>
<vcpupin vcpu='1' cpuset='63'/>
<vcpupin vcpu='2' cpuset='83'/>
<vcpupin vcpu='3' cpuset='39'/>
<vcpupin vcpu='4' cpuset='40'/>
<vcpupin vcpu='5' cpuset='84'/>
<emulatorpin cpuset='19,39-40,63,83-84'/>
</cputune>
...
<cpu mode='host-model'>
<model fallback='allow'/>
<topology sockets='3' cores='1' threads='2'/>
<numa>
<cell id='0' cpus='0-1' memory='2097152' l3cache='1408' unit='KiB'/>
<cell id='1' cpus='2-5' memory='4194304' l3cache='5632' unit='KiB'/>
</numa>
</cpu>
...
</domain>
Refer to [http://libvirt.org/formatdomain.html#elementsCPUTuning]
So finally we can calculate, for each CPU socket (cell), how much l3 cache we need to allocate for a VM.
2. Work with vCPU pinning
Setting NUMA aside, the CAT setting relates to the CPU core setting: we can apply a CAT policy if the VM has vCPU pinning set (so the VM won't be scheduled onto another CPU socket).
The CPU socket on which to allocate the cache can be calculated just as in 1.
We may need to enable both 1 and 2.
There are several possible policies for cache allocation.
Let's take some examples:
For an Intel E5 v4 2699 (single socket), there is 55M of l3 cache on the chip. The default L3 schemata is L3:0=fffff, which represents using 20 bits to control the l3 cache; each bit therefore represents 2.75M, which is the minimal allocation unit on this host.
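To make the arithmetic concrete, here is a small self-contained C sketch (not libvirt code; the numbers mirror the example above) that turns a requested allocation into a schemata mask occupying the highest cache ways:

#include <stdio.h>

int main(void)
{
    unsigned long total_kb = 55 * 1024;  /* 55M of L3 on the socket */
    unsigned int cbm_bits = 20;          /* from the default mask fffff */
    unsigned long kb_per_bit = total_kb / cbm_bits; /* 2816K = 2.75M */
    unsigned long request_kb = 11 * 1024;           /* VM wants 11M */

    /* round up to whole cache ways */
    unsigned int nbits = (request_kb + kb_per_bit - 1) / kb_per_bit;
    /* place the allocation in the highest bits of the mask */
    unsigned int mask = ((1U << nbits) - 1) << (cbm_bits - nbits);

    printf("L3:0=%x\n", mask); /* prints L3:0=f0000 for 11M */
    return 0;
}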
The allocation policy could be one of 3 policies:
1. One priority VM:
A high-priority VM can be allocated a dedicated amount of l3 cache (let's say 2.75 * 4 = 11M) and can also reach the remaining 44M of cache, which is shared with other processes and VMs on the same host.
To do that we need to create a new 'partition' n-20371:
root@s2600wt:/sys/fs/resctrl# ls
cpus info n-20371 schemata tasks
Inside the n-20371 directory:
root@s2600wt:/sys/fs/resctrl# ls n-20371/
cpus schemata tasks
The schemata content will be L3:0=fffff
The tasks file will contain the pids of that VM.
Along with that, we need to change the system's default schemata:
root@s2600wt:/sys/fs/resctrl# cat schemata
L3:0=ffff # cannot use the highest 4 bits; only tasks in n-20371 can reach them
In this design, we can only have 1 priority VM.
Let's change it a bit to have 2 VMs.
The schemata of the 2 VMs could be:
1. L3:0=ffff0 # cannot use the 4 low bits (11M of l3 cache)
2. L3:0=0ffff # cannot use the 4 high bits (11M of l3 cache)
The default schemata changes to:
L3:0=0fff0 # default system processes can only use the middle 33M of l3 cache
2. Isolated, dedicated l3 cache allocation for each VM (if required)
A VM can only use the cache allocated to it.
For example:
VM 1 requires 11M to be allocated;
its schemata will be L3:0=f0000
VM 2 requires 11M to be allocated;
its schemata will be L3:0=0f000
And the default schemata will be L3:0=00fff
In this case, we can create multiple VMs, each of which has dedicated l3 cache.
The disadvantage is that the allocated cache cannot be shared, so it may not be used efficiently.
3. Isolated, shared l3 cache allocation for a group of VMs (if required by the user)
In this case, we put some VMs (considered noisy neighbors) into one 'partition' and restrict them to using only the caches allocated to them; by doing this, other, higher-priority VMs are ensured enough l3 cache.
Then we decide how much cache the noisy group should have, and put all of their pids in that partition's tasks file.
Option 2: set a cache priority and apply policies
Don't specify a cache amount at all; only define a cache usage priority when defining the VM's domain XML.
The cache priority decides how much l3 cache the VM can use on a host; it is not quantified, so the user doesn't need to think about how much cache the VM should have when defining the domain XML.
libvirt will decide the cache allocation based on the priority defined for the VM and the policy in use.
The disadvantage is that cache capacities differ between hosts, so the same VM domain XML may result in varying cache allocation amounts on different hosts.
# Support CAT in libvirt itself, or leverage other software?
COB
COB is the Intel Cache Orchestrator Balancer (COB); please refer to http://clrgitlab.intel.com/rdt/cob/tree/master
COB supports some pre-defined policies; it monitors cpu/cache/cache misses and does cache allocation based on the policy in use.
If COB can monitor specified processes (VM processes) and accept a defined priority, it would be good to reuse it.
In the end, the question comes down to:
* Support fine-grained llc cache control, letting the user specify the cache allocation, or
* Support pre-defined policies, with the user specifying only an llc allocation priority.
Reference
[1] COB http://clrgitlab.intel.com/rdt/cob/tree/master
[2] CAT intro: https://software.intel.com/en-us/articles/software-enabling-for-cache-all...
[3] kernel Intel_rdt_ui [ https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git/tree/Documentati... ]
Best Regards
Eli Qiao(乔立勇)OpenStack Core team OTC Intel.
--
[libvirt] [PATCH 0/6] Don't run whole sec driver in namespace
by Michal Privoznik
In eadaa97548 I've tried to solve the issue of setting seclabels
on private /dev/* entries. While my approach works, it has a tiny
flaw - anything that happens in the namespace stays in the
namespace. I mean, if there's an internal state change occurring
on a relabel operation (it should not, and it doesn't nowadays, but
there's no guarantee), this change is not reflected in the daemon.
This is because when entering the namespace, the daemon forks,
enters the namespace and then executes the RelabelAll() function.
This imperfection is:
a) very easy to forget
b) very hard to debug
Therefore, we may have transaction APIs as suggested here [1]. On
transactionBegin() the sec driver will record [path, seclabel]
pairs somewhere instead of applying the labels. Then on
transactionCommit() a new process is forked, enters the namespace
and performs the previously recorded changes. This way only the
minimal amount of code runs in the namespace. Moreover, it runs
over constant data, so there can be no internal state transition.
1: https://www.redhat.com/archives/libvir-list/2016-December/msg00254.html
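As a rough sketch, the intended call pattern from the qemu driver would look something like this (function names as proposed by this series; exact signatures may differ, and error paths are abbreviated):

/* record the label changes instead of applying them right away */
if (virSecurityManagerTransactionStart(mgr) < 0)
    goto cleanup;
if (virSecurityManagerSetAllLabel(mgr, vm->def, NULL) < 0)
    goto cleanup;
/* fork, enter the domain's mount namespace identified by vm->pid,
 * replay the recorded [path, seclabel] changes there, then exit */
if (virSecurityManagerTransactionCommit(mgr, vm->pid) < 0)
    goto cleanup;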
Michal Privoznik (6):
security_selinux: s/virSecuritySELinuxSecurity/virSecuritySELinux/
security_dac: Resolve virSecurityDACSetOwnershipInternal const
correctness
security driver: Introduce transaction APIs
security_dac: Implement transaction APIs
security_selinux: Implement transaction APIs
qemu: Use transactions from security driver
src/libvirt_private.syms | 3 +
src/qemu/qemu_driver.c | 28 +++--
src/qemu/qemu_security.c | 98 +++++----------
src/security/security_dac.c | 197 +++++++++++++++++++++++++++++-
src/security/security_driver.h | 9 ++
src/security/security_manager.c | 38 ++++++
src/security/security_manager.h | 7 +-
src/security/security_selinux.c | 219 +++++++++++++++++++++++++++++++---
src/security/security_stack.c | 49 ++++++++
src/storage/storage_backend.h | 2 +-
src/storage/storage_backend_fs.c | 2 +-
src/storage/storage_backend_gluster.c | 2 +-
src/storage/storage_driver.c | 6 +-
src/storage/storage_driver.h | 4 +-
src/util/virstoragefile.c | 2 +-
src/util/virstoragefile.h | 2 +-
16 files changed, 561 insertions(+), 107 deletions(-)
--
2.11.0
[libvirt] [PATCH 0/9] Qemu: s390: Cpu Model Support
by Jason J. Herne
This patch set enables cpu model support for s390. The user can now set exact
cpu models, query supported models via virsh domcapabilities, and use host-model
and host-passthrough modes. The end result is that migration is safer because
Qemu will perform runnability checking on the destination host and quit with an
error if the guest's cpu model is not supported.
Note: Some test data has been separated from corresponding test case updates for
ease of review.
Changelog
---------
[v3]
s390: Cpu driver support for update and compare
- Fixed indentation of error message in virCPUs390Update
test-data: Qemu caps replies and xml for s390x qemu 2.7 and 2.8
- Moved this patch before introduction of query-cpu-model-expansion
- Regenerated all test data
tests: domain capabilities: qemu 2.7 and 2.8 on s390x
- Added Qemu 2.7 test
- Removed fake host model name
- Moved this patch before introduction of query-cpu-model-expansion
tests: qemu capabilites: qemu 2.7 and 2.8 on s390x
- Moved this patch before introduction of query-cpu-model-expansion
- Stop using fake host cpu
qemu: qmp query-cpu-model-expansion command
- Moved query-cpu-model-expansion capability to this patch
- changed label "cleanup" to "error" in qemuMonitorCPUModelInfoCopy
- qemuMonitorJSONParseCPUModelProperty is now static, and also made
appropriate changes when passing a boolean to virJSONValueGetBoolean
- removed unnecessary error checking when assigning "data" variable in
qemuMonitorJSONGetCPUModelExpansion
- Fix up capabilities test data to reflect changes from this commit
- fixed query-cpu-model-expansion's enumeration formatting
qemu-caps: Get host model directly from Qemu when available
- Moved query-cpu-model-expansion capability from this patch
- virQEMUCapsCopyCPUModelFromQEMU is now static and type void
- check for native guest is done before attempting to set host CPU
- s390x no longer falls back to getting host CPU model from the host
if it cannot be retrieved from QEMU
- fixed unnecessary initialization of some variables that were introduced
in v2 of these patches
- virQEMUCapsLoadHostCPUModelInfo now first allocates data into a
qemuMonitorCPUModelInfoPtr before assigning it to appropriate qemuCaps field
- if we do not have QEMU_CAPS_QUERY_CPU_MODEL_EXPANSION available, skip
trying to read the hostCPU portion of the qemu cache file
- all hostCPU element parsing is handled in its entirety within function
virQEMUCapsLoadHostCPUModelInfo
- Fix up capabilities test data to reflect changes from this commit
qemu: command: Support new cpu feature argument syntax
- Add error message for case where s390 guest attempts to use cpu features on
older qemu.
- Combined the tests into this commit
- Now tests s390 cpu features both with and without query-cpu-model-expansion
[v2]
* Added s390x cpu and capabilities tests
* Added cpu feature syntax tests
* Dropped patch: Warn when migrating host-passthrough
* Added patch: Document migrating host-passthrough is dangerous
s390: Cpu driver support for update and compare
- Compare: Added comment explaining why s390 bypasses the cpuCompare operation
- Update: Added error message explaining minimum match mode is not supported
- Update: Ensure user is not using unsupported optional feature policy
- Update: Use virCPUDefUpdateFeature to update/create user requested features
- Other minor fixes
s390-cpu: Remove nodeData and decode
- Completely remove nodeData and decode functions
qemu: qmp query-cpu-model-expansion command
- Cleaned up debug print
- Restructured qemuMonitorJSONGetCPUModelExpansion
- Added more JSON parsing error handling
- CPU model features now parsed via an iterator
- qemuMonitorJSONGetCPUModelExpansion: Fixed double free of model ptr
- Restructure qemuMonitorCPUModelInfoFree
- Other minor fixes
qemu-caps: Get host model directly from Qemu when available
- virQEMUCapsProbeQMPHostCPU: indentation fix
- Fixed rebase error involving a missing 'goto cleanup;'.
- Fix indentation in virQEMUCapsProbeQMPHostCPU
- virQEMUCapsInitHostCPUModel now routes to virQEMUCapsCopyModelFromQEMU or
virQEMUCapsCopyModelFromHost, depending on architecture.
- Restructure hostCpu data in qemu caps cache xml
- Other minor fixes
Collin L. Walling (5):
test-data: Qemu caps replies and xml for s390x qemu 2.7 and 2.8
tests: domain capabilities: qemu 2.7 and 2.8 on s390x
tests: qemu capabilites: qemu 2.7 and 2.8 on s390x
qemu: qmp query-cpu-model-expansion command
qemu: command: Support new cpu feature argument syntax
Jason J. Herne (4):
s390: Cpu driver support for update and compare
s390-cpu: Remove nodeData and decode
qemu-caps: Get host model directly from Qemu when available
tests: qemuxml2argv s390x cpu model
po/POTFILES.in | 1 +
src/cpu/cpu_s390.c | 103 +-
src/qemu/qemu_capabilities.c | 190 +-
src/qemu/qemu_capabilities.h | 3 +
src/qemu/qemu_command.c | 18 +-
src/qemu/qemu_monitor.c | 62 +
src/qemu/qemu_monitor.h | 22 +
src/qemu/qemu_monitor_json.c | 117 +
src/qemu/qemu_monitor_json.h | 6 +
tests/domaincapsschemadata/qemu_2.7.0.s390x.xml | 78 +
tests/domaincapsschemadata/qemu_2.8.0.s390x.xml | 159 +
tests/domaincapstest.c | 18 +
.../qemucapabilitiesdata/caps_2.7.0.s390x.replies | 11999 +++++++++++++++++
tests/qemucapabilitiesdata/caps_2.7.0.s390x.xml | 140 +
.../qemucapabilitiesdata/caps_2.8.0.s390x.replies | 13380 +++++++++++++++++++
tests/qemucapabilitiesdata/caps_2.8.0.s390x.xml | 286 +
tests/qemucapabilitiestest.c | 2 +
.../qemuxml2argv-cpu-s390-features.args | 19 +
.../qemuxml2argv-cpu-s390-features.xml | 23 +
.../qemuxml2argv-cpu-s390-zEC12.args | 19 +
.../qemuxml2argv-cpu-s390-zEC12.xml | 21 +
tests/qemuxml2argvtest.c | 14 +
22 files changed, 26638 insertions(+), 42 deletions(-)
create mode 100644 tests/domaincapsschemadata/qemu_2.7.0.s390x.xml
create mode 100644 tests/domaincapsschemadata/qemu_2.8.0.s390x.xml
create mode 100644 tests/qemucapabilitiesdata/caps_2.7.0.s390x.replies
create mode 100644 tests/qemucapabilitiesdata/caps_2.7.0.s390x.xml
create mode 100644 tests/qemucapabilitiesdata/caps_2.8.0.s390x.replies
create mode 100644 tests/qemucapabilitiesdata/caps_2.8.0.s390x.xml
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-cpu-s390-features.args
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-cpu-s390-features.xml
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-cpu-s390-zEC12.args
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-cpu-s390-zEC12.xml
--
2.7.4