[libvirt] [PATCH v2 0/3] Fix case with non-root domain and hugepages
by Michal Privoznik
v2 of:
https://www.redhat.com/archives/libvir-list/2016-November/msg01060.html
diff to v1:
- use virDomainObjGetShortName to construct hugepages path
- instead of implementing virSecurityManagerSetHugepages drop it
Michal Privoznik (3):
virDomainObjGetShortName: take virDomainDef
qemu: Create hugepage path on per domain basis
security: Drop virSecurityManagerSetHugepages
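For illustration, a minimal sketch of how a per-domain hugepage path could be derived from the domain id and name. The "%d-%.20s" truncation and the "libvirt/qemu" base directory are assumptions for the example; the exact format produced by virDomainObjGetShortName and the qemu driver may differ:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical sketch: build a short per-domain name from the domain id
 * and name, roughly in the spirit of virDomainObjGetShortName (the real
 * truncation rules in libvirt may differ). */
static int
make_short_name(int id, const char *name, char *buf, size_t buflen)
{
    /* Truncate the name so the result stays filesystem-friendly. */
    int n = snprintf(buf, buflen, "%d-%.20s", id, name);
    return (n < 0 || (size_t)n >= buflen) ? -1 : 0;
}

/* Hypothetical per-domain hugepage path under a hugetlbfs mount point. */
static int
make_hugepage_path(const char *mnt, int id, const char *name,
                   char *buf, size_t buflen)
{
    char shortname[64];

    if (make_short_name(id, name, shortname, sizeof(shortname)) < 0)
        return -1;
    int n = snprintf(buf, buflen, "%s/libvirt/qemu/%s", mnt, shortname);
    return (n < 0 || (size_t)n >= buflen) ? -1 : 0;
}
```

With a hugetlbfs mount at /dev/hugepages, domain id 4 and name "Fedora" would map to /dev/hugepages/libvirt/qemu/4-Fedora, giving each domain its own directory that can be owned by the domain's user.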
src/conf/domain_conf.c | 4 +-
src/conf/domain_conf.h | 2 +-
src/libvirt_private.syms | 1 -
src/qemu/qemu_command.c | 4 +-
src/qemu/qemu_conf.c | 44 +++++++++----
src/qemu/qemu_conf.h | 16 +++--
src/qemu/qemu_domain.c | 2 +-
src/qemu/qemu_driver.c | 21 ++----
src/qemu/qemu_process.c | 74 ++++++++++++++++------
src/security/security_driver.h | 1 -
src/security/security_manager.c | 17 -----
src/security/security_manager.h | 3 -
src/security/security_stack.c | 19 ------
.../qemuxml2argv-hugepages-numa.args | 6 +-
.../qemuxml2argv-hugepages-pages.args | 16 ++---
.../qemuxml2argv-hugepages-pages2.args | 2 +-
.../qemuxml2argv-hugepages-pages3.args | 2 +-
.../qemuxml2argv-hugepages-pages5.args | 2 +-
.../qemuxml2argv-hugepages-shared.args | 14 ++--
tests/qemuxml2argvdata/qemuxml2argv-hugepages.args | 2 +-
.../qemuxml2argv-memory-hotplug-dimm-addr.args | 4 +-
.../qemuxml2argv-memory-hotplug-dimm.args | 4 +-
22 files changed, 136 insertions(+), 124 deletions(-)
--
2.8.4
7 years, 11 months
[libvirt] [PATCH v1 00/21] Run qemu under its own namespace
by Michal Privoznik
Finally. This is the full implementation of my RFC:
https://www.redhat.com/archives/libvir-list/2016-November/msg00691.html
The first two patches were posted separately, but since they lack
review I'm sending them here too because they are important for
the feature:
https://www.redhat.com/archives/libvir-list/2016-November/msg01060.html
All of these patches:
a) can be found on my github:
https://github.com/zippy2/libvirt/tree/qemu_container_v2
b) pass my basic testing:
- run domain with device passthrough
- device hot(un-)plug (disks, RNGs, chardevs, PCI/USB)
c) seem to add negligible overhead to the domain startup process
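As a rough illustration of the namespace setup this series is built on, a helper along these lines could move the qemu process into a private mount namespace before building a per-domain /dev. This is only a sketch under stated assumptions, not libvirt's actual implementation, and it needs CAP_SYS_ADMIN:

```c
#define _GNU_SOURCE
#include <errno.h>
#include <sched.h>
#include <stdio.h>
#include <sys/mount.h>

/* Hypothetical sketch of what a helper such as
 * virProcessSetupPrivateMountNS could do: move the calling process into
 * its own mount namespace and mark all mounts private, so that a
 * per-domain /dev built afterwards never leaks into the host.
 * Requires CAP_SYS_ADMIN; fails with EPERM for unprivileged callers. */
static int
setup_private_mount_ns(void)
{
    if (unshare(CLONE_NEWNS) < 0) {
        perror("unshare(CLONE_NEWNS)");
        return -1;
    }
    /* Recursively make every mount private in the new namespace. */
    if (mount("", "/", NULL, MS_REC | MS_PRIVATE, NULL) < 0) {
        perror("mount(MS_REC|MS_PRIVATE)");
        return -1;
    }
    return 0;
}
```

After this, a tmpfs can be mounted over /dev and only the device nodes the domain actually needs are created inside it, which is what the "Prepare ..." patches below handle per device type.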
Michal Privoznik (21):
qemu: Create hugepage path on per domain basis
security: Implement virSecurityManagerSetHugepages
virprocess: Introduce virProcessSetupPrivateMountNS
virfile: Introduce virFileSetupDev
virfile: Introduce ACL helpers
virusb: Introduce virUSBDeviceGetPath
virscsi: Introduce virSCSIDeviceGetPath
qemu_cgroup: Expose defaultDeviceACL
qemu: Spawn qemu under mount namespace
qemu: Prepare disks when starting a domain
qemu: Prepare hostdevs when starting a domain
qemu: Prepare chardevs when starting a domain
qemu: Prepare TPM when starting a domain
qemu: Prepare inputs when starting a domain
qemu: Prepare RNGs when starting a domain
qemu: Enter the namespace on relabelling
qemu: Manage /dev entry on disk hotplug
qemu: Manage /dev entry on hostdev hotplug
qemu: Manage /dev entry on chardev hotplug
qemu: Manage /dev entry on RNG hotplug
qemu: Let users opt-out from containerization
configure.ac | 12 +-
src/Makefile.am | 7 +-
src/libvirt_private.syms | 9 +
src/lxc/lxc_container.c | 20 +-
src/lxc/lxc_controller.c | 32 +-
src/qemu/libvirtd_qemu.aug | 1 +
src/qemu/qemu.conf | 8 +
src/qemu/qemu_cgroup.c | 2 +-
src/qemu/qemu_cgroup.h | 1 +
src/qemu/qemu_command.c | 4 +-
src/qemu/qemu_conf.c | 50 +-
src/qemu/qemu_conf.h | 18 +-
src/qemu/qemu_domain.c | 1147 ++++++++++++++++++++
src/qemu/qemu_domain.h | 42 +
src/qemu/qemu_driver.c | 24 +-
src/qemu/qemu_hotplug.c | 90 +-
src/qemu/qemu_process.c | 53 +-
src/qemu/qemu_security.c | 208 ++++
src/qemu/qemu_security.h | 55 +
src/qemu/test_libvirtd_qemu.aug.in | 1 +
src/security/security_dac.c | 11 +
src/security/security_selinux.c | 10 +
src/util/virfile.c | 153 +++
src/util/virfile.h | 17 +
src/util/virprocess.c | 38 +
src/util/virprocess.h | 2 +
src/util/virscsi.c | 6 +
src/util/virscsi.h | 1 +
src/util/virusb.c | 5 +
src/util/virusb.h | 1 +
.../qemuxml2argv-hugepages-numa.args | 4 +-
.../qemuxml2argv-hugepages-pages.args | 14 +-
.../qemuxml2argv-hugepages-pages2.args | 2 +-
.../qemuxml2argv-hugepages-pages3.args | 2 +-
.../qemuxml2argv-hugepages-pages5.args | 2 +-
.../qemuxml2argv-hugepages-shared.args | 12 +-
tests/qemuxml2argvdata/qemuxml2argv-hugepages.args | 2 +-
.../qemuxml2argv-memory-hotplug-dimm-addr.args | 4 +-
.../qemuxml2argv-memory-hotplug-dimm.args | 4 +-
39 files changed, 1933 insertions(+), 141 deletions(-)
create mode 100644 src/qemu/qemu_security.c
create mode 100644 src/qemu/qemu_security.h
--
2.8.4
[libvirt] [PATCH 0/6] Qemu: s390: Cpu Model Support
by Jason J. Herne
This patch set enables cpu model support for s390. The user can now set exact
cpu models, query supported models via virsh domcapabilities, and use host-model
and host-passthrough modes. The end result is that migration is safer because
Qemu will perform runnability checking on the destination host and quit with an
error if the guest's cpu model is not supported.
Big thanks to Jiri and Eduardo for being patient and answering our questions
while we figured out what we were doing!
Collin L. Walling (5):
s390: Stop reporting "host" for host model
qemu: qmp query-cpu-model-expansion command
qemu-caps: Get host model directly from Qemu when available
qemu: migration: warn if migrating with host-passthrough
qemu: command: Support new cpu feature argument syntax
Jason J. Herne (1):
s390: Cpu driver support for update and compare
po/POTFILES.in | 1 +
src/cpu/cpu_s390.c | 61 ++++++++++++++++++++----
src/qemu/qemu_capabilities.c | 109 +++++++++++++++++++++++++++++++++++++++++--
src/qemu/qemu_capabilities.h | 1 +
src/qemu/qemu_command.c | 10 +++-
src/qemu/qemu_migration.c | 4 ++
src/qemu/qemu_monitor.c | 60 ++++++++++++++++++++++++
src/qemu/qemu_monitor.h | 22 +++++++++
src/qemu/qemu_monitor_json.c | 94 +++++++++++++++++++++++++++++++++++++
src/qemu/qemu_monitor_json.h | 6 +++
10 files changed, 353 insertions(+), 15 deletions(-)
--
1.9.1
[libvirt] Libvirt domain event usage and consistency
by Roman Mohr
Hi,
I recently started to use the libvirt domain events. With them I increase
the responsiveness of my VM state watchers.
In general it works pretty well. I just listen to the events and do a
periodic resync to cope with missed events.
While watching the events I ran into a few interesting situations I wanted
to share. The points 1-3 describe some minor issues or irregularities.
Point 4 is about the fact that domain and state updates are not versioned,
which makes it very hard to stay in sync with libvirt when using events.
My libvirt version is 1.2.18.4.
1) Event order seems to be weird on startup:
When listening for VM lifecycle events I get this order:
{"event_type": "Started", "timestamp": "2016-11-25T11:59:53.209326Z",
"reason": "Booted", "domain_name": "generic", "domain_id":
"8ff7047b-fb46-44ff-a4c6-7c20c73ab86e"}
{"event_type": "Defined", "timestamp": "2016-11-25T11:59:53.435530Z",
"reason": "Added", "domain_name": "generic", "domain_id":
"8ff7047b-fb46-44ff-a4c6-7c20c73ab86e"}
It is strange that a VM already boots before it is defined. Is this the
intended order?
2) Defining a VM with VIR_DOMAIN_START_PAUSED gives me this event order
{"event_type": "Defined", "timestamp": "2016-11-25T12:02:44.037817Z",
"reason": "Added", "domain_name": "core_node", "domain_id":
"b9906489-6d5b-40f8-a742-ca71b2b84277"}
{"event_type": "Resumed", "timestamp": "2016-11-25T12:02:44.813104Z",
"reason": "Unpaused", "domain_name": "core_node", "domain_id":
"b9906489-6d5b-40f8-a742-ca71b2b84277"}
{"event_type": "Started", "timestamp": "2016-11-25T12:02:44.813733Z",
"reason": "Booted", "domain_name": "core_node", "domain_id":
"b9906489-6d5b-40f8-a742-ca71b2b84277"}
This event order makes it hard to track active domains by listening to
life-cycle events. One could in theory always fetch the VM state in the
event callback, but since the state is not transferred with the event
itself it may already be outdated by then, so this approach is racy
(and opaque to the libvirt bindings user); as described in (3), it is
currently not even possible. In general, the events actually emitted
seem to differ significantly from the life-cycle described in [1].
3) "Defined" event is triggered before the domain is completely defined
{"event_type": "Defined", "timestamp": "2016-11-25T12:02:44.037817Z",
"reason": "Added", "domain_name": "core_node", "domain_id":
"b9906489-6d5b-40f8-a742-ca71b2b84277"}
{"event_type": "Resumed", "timestamp": "2016-11-25T12:02:44.813104Z",
"reason": "Unpaused", "domain_name": "core_node", "domain_id":
"b9906489-6d5b-40f8-a742-ca71b2b84277"}
{"event_type": "Started", "timestamp": "2016-11-25T12:02:44.813733Z",
"reason": "Booted", "domain_name": "core_node", "domain_id":
"b9906489-6d5b-40f8-a742-ca71b2b84277"}
When I try to process the first event and do a xmldump I get:
Event: [Code-42] [Domain-10] Domain not found: no domain with matching
uuid 'b9906489-6d5b-40f8-a742-ca71b2b84277' (core_node)
So it seems like I get the event before the domain is completely ready.
4) The libvirt domain description is not versioned
I would expect that every time I update a domain XML (an update from a
third-party entity) or an event is generated (an update from libvirt),
the resource version of the domain is increased, and that I get this
resource version when I do an xmldump or receive an event. Without this
there is AFAIK no way to stay in sync with libvirt, even with regular
polling of all domains. The main issue is that I can never know whether
the events in the queue arrived before or after my latest domain resync.
Also note that this is not about delivery guarantees for events. It is
just about having a consistent view of a VM alongside each individual
event. With resource versions I can decide whether an event is still
interesting to me, which is exactly what I need to solve the syncing
problem above. When I do a complete relisting of all domains to sync,
I know which version I got and can then tell for every event whether it
is newer or older.
It would be even better if the domain XML, the VM state, and the
resource version were sent to the client along with the event. Then,
whenever there is a new event for a VM in the queue, I could be sure
that the domain XML I see is the one that triggered the event, and that
XML would be a complete representation of that revision.
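The resource-version idea proposed above could look roughly like this on the client side. This is purely hypothetical, since libvirt does not expose resource versions today; the struct and function names are made up for the sketch:

```c
#include <assert.h>

/* Hypothetical client-side view of one domain: remember the version
 * seen at the last full resync and drop any event that is not newer. */
struct domain_view {
    unsigned long long seen_version; /* version at last full resync */
};

/* Return 1 if the event is newer than the client's current view and
 * should be applied, 0 if it predates the view and can be discarded. */
static int
event_is_fresh(const struct domain_view *view, unsigned long long ev_version)
{
    return ev_version > view->seen_version;
}

/* Apply an event: advance the view only for fresh events, so stale
 * events queued before the last resync are ignored. */
static void
apply_event(struct domain_view *view, unsigned long long ev_version)
{
    if (event_is_fresh(view, ev_version))
        view->seen_version = ev_version;
}
```

With this, an event queued before the last full resync simply compares as stale and is skipped, which is exactly the ambiguity the email describes.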
It would be nice to hear your thoughts on these points.
Best Regards,
Roman
[1]
https://wiki.libvirt.org/page/VM_lifecycle#States_that_a_guest_domain_can...
[libvirt] [PATCH v5 0/2] List only online cpus for vcpupin/emulatorpin when vcpu placement static
by Nitesh Konkar
Currently, when the vcpu placement is static and no cpuset is
specified, CPU Affinity shows 0..CPUMAX. With this patchset, only
online CPUs are displayed under CPU Affinity on Linux.
Fixes the following Bug:
virsh dumpxml Fedora
<domain type='kvm' id='4'>
<name>Fedora</name>
<uuid>aecf3e5e-6f9a-42a3-9d6a-223a75569a66</uuid>
<maxMemory slots='32' unit='KiB'>3145728</maxMemory>
<memory unit='KiB'>524288</memory>
<currentMemory unit='KiB'>524288</currentMemory>
<vcpu placement='static' current='8'>160</vcpu>
<resource>
<partition>/machine</partition>
</resource>
.....................
.......................
.........................
<memballoon model='virtio'>
<alias name='balloon0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</memballoon>
</devices>
<seclabel type='dynamic' model='dac' relabel='yes'>
<label>+0:+0</label>
<imagelabel>+0:+0</imagelabel>
</seclabel>
</domain>
lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-2,4-7
Off-line CPU(s) list: 3
Thread(s) per core: 1
Core(s) per socket: 7
Socket(s): 1
..........
..........
NUMA node0 CPU(s): 0-2,4-7
NUMA node1 CPU(s):
cat /sys/devices/system/cpu/online
0-2,4-7
Before Patch
virsh vcpupin Fedora
VCPU: CPU Affinity
----------------------------------
0: 0-7
1: 0-7
...
...
158: 0-7
159: 0-7
virsh emulatorpin Fedora
emulator: CPU Affinity
----------------------------------
*: 0-7
After Patch
virsh vcpupin Fedora
VCPU: CPU Affinity
----------------------------------
0: 0-2,4-7
1: 0-2,4-7
...
...
158: 0-2,4-7
159: 0-2,4-7
virsh emulatorpin Fedora
emulator: CPU Affinity
----------------------------------
*: 0-2,4-7
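A sketch of the underlying idea: parse the kernel's range-list format from /sys/devices/system/cpu/online (e.g. "0-2,4-7") into a CPU mask that the affinity output can be limited to. This toy parser supports up to 64 CPUs for brevity; libvirt's real code uses virBitmap and is more general:

```c
#include <assert.h>
#include <stdio.h>

/* Parse a kernel CPU range list such as "0-2,4-7" into a bitmask of
 * online CPUs.  Returns 0 on success, -1 on malformed input.
 * Illustrative only: limited to 64 CPUs. */
static int
parse_cpu_ranges(const char *s, unsigned long long *mask)
{
    *mask = 0;
    while (*s) {
        int lo, hi, n;

        if (sscanf(s, "%d-%d%n", &lo, &hi, &n) == 2) {
            /* range "lo-hi" */
        } else if (sscanf(s, "%d%n", &lo, &n) == 1) {
            hi = lo;                  /* single CPU */
        } else {
            return -1;
        }
        if (lo < 0 || hi < lo || hi > 63)
            return -1;
        for (int c = lo; c <= hi; c++)
            *mask |= 1ULL << c;
        s += n;
        if (*s == ',' || *s == '\n')
            s++;
        else if (*s)
            return -1;
    }
    return 0;
}
```

Intersecting each vCPU's affinity mask with this online mask yields exactly the "0-2,4-7" output shown in the After Patch section.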
Nitesh Konkar (2):
conf: List only online cpus for virsh vcpupin
conf: List only online cpus for virsh emulatorpin
src/conf/domain_conf.c | 6 ++++++
src/qemu/qemu_driver.c | 5 +++++
2 files changed, 11 insertions(+)
--
2.1.0
[libvirt] [RFC PATCH] cgroup: Use system reported "unlimited" value for comparison
by Viktor Mihajlovski
With kernel 3.18 (since commit 3e32cb2e0a12b6915056ff04601cf1bb9b44f967)
the "unlimited" value for cgroup memory limits has changed once again as
its byte value is now computed from a page counter.
The new "unlimited" value reported by the cgroup fs is therefore 2**51-1
pages which is (VIR_DOMAIN_MEMORY_PARAM_UNLIMITED - 3072). This results
e.g. in virsh memtune displaying 9007199254740988 instead of unlimited
for the limits.
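The arithmetic can be checked directly: 2^51 - 1 pages of 4 KiB each comes out 3072 bytes short of VIR_DOMAIN_MEMORY_PARAM_UNLIMITED expressed in bytes, and shifts down to the 9007199254740988 KiB that virsh displays:

```c
#include <assert.h>

/* Verify the numbers from the text: the kernel's page-counter
 * "unlimited" is 2^51 - 1 pages; with 4 KiB pages that is 3072 bytes
 * below VIR_DOMAIN_MEMORY_PARAM_UNLIMITED (a KiB value) in bytes. */
#define PAGE_SIZE 4096ULL
#define VIR_DOMAIN_MEMORY_PARAM_UNLIMITED 9007199254740991ULL /* KiB */

static unsigned long long
cgroup_unlimited_bytes(void)
{
    unsigned long long pages = (1ULL << 51) - 1;
    return pages * PAGE_SIZE;
}
```

So the old `*kb > VIR_DOMAIN_MEMORY_PARAM_UNLIMITED` comparison can never trigger for this value, which is why comparing against the system-reported unlimited value is needed.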
This patch uses the value of memory.limit_in_bytes from the cgroup
memory root which is the system's "real" unlimited value for comparison.
See also libvirt commit 231656bbeb9e4d3bedc44362784c35eee21cf0f4 for the
history on kernel 3.12 and earlier.
I've tested this on F24 with the following configurations:
- no memory cgroup controller mounted
- memory cgroup controller mounted but not configured for libvirt
- memory cgroup controller mounted and configured
The first two fail as expected (and as before), the third case
works as expected.
Testing on other kernel versions highly welcome!
Not perfect yet in that we still provide a fallback to the old value.
We might consider failing right away if we can't get the system
value. I'd be inclined to do that, since we're probably facing
principal cgroup issues in this case.
Further, it's not the most efficient implementation. Obviously, the
unlimited value can be read once and cached. However, I'd like to see
the question above resolved first.
Signed-off-by: Viktor Mihajlovski <mihajlov(a)linux.vnet.ibm.com>
---
src/util/vircgroup.c | 61 ++++++++++++++++++++++++++++++++++++++++++++++------
1 file changed, 55 insertions(+), 6 deletions(-)
diff --git a/src/util/vircgroup.c b/src/util/vircgroup.c
index f151193..969dca5 100644
--- a/src/util/vircgroup.c
+++ b/src/util/vircgroup.c
@@ -2452,6 +2452,40 @@ virCgroupGetBlkioDeviceWeight(virCgroupPtr group,
}
+/*
+ * Retrieve the "memory.limit_in_bytes" value from the memory controller
+ * root dir. This value cannot be modified by userspace and therefore
+ * is the maximum limit value supported by cgroups on the local system.
+ */
+static int
+virCgroupGetMemoryUnlimited(unsigned long long int * mem_unlimited)
+{
+ int ret = -1;
+ virCgroupPtr group;
+
+ if (VIR_ALLOC(group))
+ goto cleanup;
+
+ if (virCgroupDetectMounts(group))
+ goto cleanup;
+
+ if (!group->controllers[VIR_CGROUP_CONTROLLER_MEMORY].mountPoint)
+ goto cleanup;
+
+ if (VIR_STRDUP(group->controllers[VIR_CGROUP_CONTROLLER_MEMORY].placement,
+ "/.") < 0)
+ goto cleanup;
+
+ ret = virCgroupGetValueU64(group,
+ VIR_CGROUP_CONTROLLER_MEMORY,
+ "memory.limit_in_bytes",
+ mem_unlimited);
+ cleanup:
+ virCgroupFree(&group);
+ return ret;
+}
+
+
/**
* virCgroupSetMemory:
*
@@ -2534,6 +2568,7 @@ int
virCgroupGetMemoryHardLimit(virCgroupPtr group, unsigned long long *kb)
{
long long unsigned int limit_in_bytes;
+ long long unsigned int unlimited_in_bytes;
int ret = -1;
if (virCgroupGetValueU64(group,
@@ -2541,9 +2576,13 @@ virCgroupGetMemoryHardLimit(virCgroupPtr group, unsigned long long *kb)
"memory.limit_in_bytes", &limit_in_bytes) < 0)
goto cleanup;
- *kb = limit_in_bytes >> 10;
- if (*kb > VIR_DOMAIN_MEMORY_PARAM_UNLIMITED)
+ if (virCgroupGetMemoryUnlimited(&unlimited_in_bytes) < 0)
+ unlimited_in_bytes = VIR_DOMAIN_MEMORY_PARAM_UNLIMITED << 10;
+
+ if (limit_in_bytes == unlimited_in_bytes)
*kb = VIR_DOMAIN_MEMORY_PARAM_UNLIMITED;
+ else
+ *kb = limit_in_bytes >> 10;
ret = 0;
cleanup:
@@ -2596,6 +2635,7 @@ int
virCgroupGetMemorySoftLimit(virCgroupPtr group, unsigned long long *kb)
{
long long unsigned int limit_in_bytes;
+ long long unsigned int unlimited_in_bytes;
int ret = -1;
if (virCgroupGetValueU64(group,
@@ -2603,9 +2643,13 @@ virCgroupGetMemorySoftLimit(virCgroupPtr group, unsigned long long *kb)
"memory.soft_limit_in_bytes", &limit_in_bytes) < 0)
goto cleanup;
- *kb = limit_in_bytes >> 10;
- if (*kb > VIR_DOMAIN_MEMORY_PARAM_UNLIMITED)
+ if (virCgroupGetMemoryUnlimited(&unlimited_in_bytes) < 0)
+ unlimited_in_bytes = VIR_DOMAIN_MEMORY_PARAM_UNLIMITED << 10;
+
+ if (limit_in_bytes == unlimited_in_bytes)
*kb = VIR_DOMAIN_MEMORY_PARAM_UNLIMITED;
+ else
+ *kb = limit_in_bytes >> 10;
ret = 0;
cleanup:
@@ -2658,6 +2702,7 @@ int
virCgroupGetMemSwapHardLimit(virCgroupPtr group, unsigned long long *kb)
{
long long unsigned int limit_in_bytes;
+ long long unsigned int unlimited_in_bytes;
int ret = -1;
if (virCgroupGetValueU64(group,
@@ -2665,9 +2710,13 @@ virCgroupGetMemSwapHardLimit(virCgroupPtr group, unsigned long long *kb)
"memory.memsw.limit_in_bytes", &limit_in_bytes) < 0)
goto cleanup;
- *kb = limit_in_bytes >> 10;
- if (*kb > VIR_DOMAIN_MEMORY_PARAM_UNLIMITED)
+ if (virCgroupGetMemoryUnlimited(&unlimited_in_bytes) < 0)
+ unlimited_in_bytes = VIR_DOMAIN_MEMORY_PARAM_UNLIMITED << 10;
+
+ if (limit_in_bytes == unlimited_in_bytes)
*kb = VIR_DOMAIN_MEMORY_PARAM_UNLIMITED;
+ else
+ *kb = limit_in_bytes >> 10;
ret = 0;
cleanup:
--
1.9.1
[libvirt] [PATCH] Add support for parsing -vga virtio
by Nehal J Wani
Since commit a94f0c5c, qemu has supported '-vga virtio'.
Libvirt has also supported it since commit 21373feb.
This patch enables libvirt to parse the qemu-argv:
virsh domxml-from-native qemu-argv <(echo '/usr/bin/qemu-system-x86_64 -vga virtio')
Signed-off-by: Nehal J Wani <nehaljw.kkd1(a)gmail.com>
---
src/qemu/qemu_command.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
index 4a5fce3..d92bf9d 100644
--- a/src/qemu/qemu_command.c
+++ b/src/qemu/qemu_command.c
@@ -100,7 +100,7 @@ VIR_ENUM_IMPL(qemuVideo, VIR_DOMAIN_VIDEO_TYPE_LAST,
"", /* don't support vbox */
"qxl",
"", /* don't support parallels */
- "" /* no need for virtio */);
+ "virtio");
VIR_ENUM_DECL(qemuDeviceVideo)
--
2.7.4
[libvirt] [REPOST PATCH v2 0/9] Add group_name support for <iotune>
by John Ferlan
This is just a REPOST of the v2 series:
http://www.redhat.com/archives/libvir-list/2016-November/msg00363.html
The only difference is a rebase to the current top of tree,
commit id '0b4c3bd30'.
I did *not* add the NEWS change yet, as that file is newer than this
series; I will update NEWS once/if this is ACK'd, using whatever format
becomes agreed upon for the file's contents/formatting.
John Ferlan (9):
include: Add new "group_name" definition for iotune throttling
caps: Add new capability for the iotune group name
qemu: Adjust maxparams logic for qemuDomainGetBlockIoTune
qemu: Alter qemuMonitorJSONSetBlockIoThrottle command logic
qemu: Adjust various bool BlockIoTune set_ values into mask
qemu: Add support for parsing iotune group setting
conf: Add support for blkiotune group_name option
qemu: Add the group name option to the iotune command line
virsh: Add group name to blkdeviotune output
docs/formatdomain.html.in | 11 ++
docs/schemas/domaincommon.rng | 5 +
include/libvirt/libvirt-domain.h | 15 ++
src/conf/domain_conf.c | 10 ++
src/conf/domain_conf.h | 1 +
src/qemu/qemu_capabilities.c | 2 +
src/qemu/qemu_capabilities.h | 1 +
src/qemu/qemu_command.c | 13 ++
src/qemu/qemu_driver.c | 177 +++++++++++++--------
src/qemu/qemu_monitor.c | 2 +
src/qemu/qemu_monitor.h | 1 +
src/qemu/qemu_monitor_json.c | 97 ++++++-----
src/qemu/qemu_monitor_json.h | 1 +
tests/qemucapabilitiesdata/caps_2.4.0.x86_64.xml | 1 +
tests/qemucapabilitiesdata/caps_2.5.0.x86_64.xml | 1 +
.../caps_2.6.0-gicv2.aarch64.xml | 1 +
.../caps_2.6.0-gicv3.aarch64.xml | 1 +
tests/qemucapabilitiesdata/caps_2.6.0.ppc64le.xml | 1 +
tests/qemucapabilitiesdata/caps_2.6.0.x86_64.xml | 1 +
tests/qemucapabilitiesdata/caps_2.7.0.x86_64.xml | 1 +
tests/qemumonitorjsontest.c | 88 +++++++---
.../qemuxml2argv-blkdeviotune-group-num.args | 32 ++++
.../qemuxml2argv-blkdeviotune-group-num.xml | 61 +++++++
tests/qemuxml2argvtest.c | 4 +
.../qemuxml2xmlout-blkdeviotune-group-num.xml | 1 +
tests/qemuxml2xmltest.c | 1 +
tools/virsh-domain.c | 17 ++
tools/virsh.pod | 5 +-
28 files changed, 428 insertions(+), 124 deletions(-)
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-blkdeviotune-group-num.args
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-blkdeviotune-group-num.xml
create mode 120000 tests/qemuxml2xmloutdata/qemuxml2xmlout-blkdeviotune-group-num.xml
--
2.7.4
[libvirt] [PATCH] cpu: Add support for pku and ospke Intel features for Memory Protection Keys
by Lin Ma
qemu commit: f74eefe0
https://lwn.net/Articles/667156/
Signed-off-by: Lin Ma <lma(a)suse.com>
---
src/cpu/cpu_map.xml | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/src/cpu/cpu_map.xml b/src/cpu/cpu_map.xml
index 6da8321..dca5720 100644
--- a/src/cpu/cpu_map.xml
+++ b/src/cpu/cpu_map.xml
@@ -255,6 +255,13 @@
<cpuid eax_in='0x07' ebx='0x10000000'/>
</feature>
+ <feature name='pku'>
+ <cpuid eax_in='0x07' ecx='0x00000008'/>
+ </feature>
+ <feature name='ospke'>
+ <cpuid eax_in='0x07' ecx='0x00000010'/>
+ </feature>
+
<!-- Processor Extended State Enumeration sub leaf 1 -->
<feature name='xsaveopt'>
<cpuid eax_in='0x0d' ecx_in='0x01' eax='0x00000001'/>
--
2.9.2
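For cross-checking the masks in the new <cpuid> entries: they are single bits of CPUID leaf 7 (EAX=7, ECX=0) ECX, with PKU at bit 3 and OSPKE at bit 4, which is where the 0x00000008 and 0x00000010 values come from:

```c
#include <assert.h>

/* Each cpu_map.xml mask encodes one CPUID feature bit.  PKU is CPUID
 * leaf 7 (EAX=7, ECX=0) ECX bit 3; OSPKE is ECX bit 4.  Shown only to
 * make the hex masks in the patch easy to verify. */
#define CPUID_BIT(pos) (1u << (pos))

enum {
    CPUID_7_0_ECX_PKU   = CPUID_BIT(3),
    CPUID_7_0_ECX_OSPKE = CPUID_BIT(4),
};
```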
[libvirt] [PATCH] cpu: Add support for more AVX512 Intel features
by Lin Ma
These features are included:
AVX512DQ, AVX512IFMA, AVX512BW, AVX512VL, AVX512VBMI, AVX512_4VNNIW and
AVX512_4FMAPS.
qemu commits: cc728d14 and 95ea69fb
Signed-off-by: Lin Ma <lma(a)suse.com>
---
src/cpu/cpu_map.xml | 24 ++++++++++++++++++++++++
1 file changed, 24 insertions(+)
diff --git a/src/cpu/cpu_map.xml b/src/cpu/cpu_map.xml
index 6da8321..e9292e1 100644
--- a/src/cpu/cpu_map.xml
+++ b/src/cpu/cpu_map.xml
@@ -233,6 +233,9 @@
<feature name='avx512f'> <!-- AVX-512 Foundation -->
<cpuid eax_in='0x07' ebx='0x00010000'/>
</feature>
+ <feature name='avx512dq'> <!-- AVX-512 Doubleword & Quadword Instrs -->
+ <cpuid eax_in='0x07' ebx='0x00020000'/>
+ </feature>
<feature name='rdseed'>
<cpuid eax_in='0x07' ebx='0x00040000'/>
</feature>
@@ -242,6 +245,9 @@
<feature name='smap'>
<cpuid eax_in='0x07' ebx='0x00100000'/>
</feature>
+ <feature name='avx512ifma'> <!-- AVX-512 Integer Fused Multiply Add -->
+ <cpuid eax_in='0x07' ebx='0x00200000'/>
+ </feature>
<feature name='clflushopt'>
<cpuid eax_in='0x07' ebx='0x00800000'/>
</feature>
@@ -254,6 +260,24 @@
<feature name='avx512cd'> <!-- AVX-512 Conflict Detection -->
<cpuid eax_in='0x07' ebx='0x10000000'/>
</feature>
+ <feature name='avx512bw'> <!-- AVX-512 Byte and Word Instructions -->
+ <cpuid eax_in='0x07' ebx='0x40000000'/>
+ </feature>
+ <feature name='avx512vl'> <!-- AVX-512 Vector Length Extensions -->
+ <cpuid eax_in='0x07' ebx='0x80000000'/>
+ </feature>
+
+ <feature name='avx512vbmi'> <!-- AVX-512 Vector Byte Manipulation Instrs -->
+ <cpuid eax_in='0x07' ecx='0x00000002'/>
+ </feature>
+
+ <feature name='avx512-4vnniw'> <!-- AVX-512 Neural Network Instructions -->
+ <cpuid eax_in='0x07' edx='0x00000004'/>
+ </feature>
+ <!-- AVX-512 Multiply Accumulation Single Precision -->
+ <feature name='avx512-4fmaps'>
+ <cpuid eax_in='0x07' edx='0x00000008'/>
+ </feature>
<!-- Processor Extended State Enumeration sub leaf 1 -->
<feature name='xsaveopt'>
--
2.9.2
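The AVX-512 masks in this patch likewise map one-to-one to CPUID leaf 7 bit positions (in EBX, ECX, and EDX respectively); a quick sanity check of the hex values:

```c
#include <assert.h>

/* Each cpu_map.xml mask is a single bit of CPUID leaf 7 (EAX=7, ECX=0).
 * Bit positions below follow the qemu definitions; listed only so the
 * hex masks in the patch can be cross-checked. */
static unsigned int
bit(unsigned int pos)
{
    return 1u << pos;
}

enum {
    LEAF7_EBX_AVX512DQ      = 17, /* 0x00020000 */
    LEAF7_EBX_AVX512IFMA    = 21, /* 0x00200000 */
    LEAF7_EBX_AVX512BW      = 30, /* 0x40000000 */
    LEAF7_EBX_AVX512VL      = 31, /* 0x80000000 */
    LEAF7_ECX_AVX512VBMI    =  1, /* 0x00000002 */
    LEAF7_EDX_AVX512_4VNNIW =  2, /* 0x00000004 */
    LEAF7_EDX_AVX512_4FMAPS =  3, /* 0x00000008 */
};
```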