[libvirt] [PATCH 0/2] qemu_cgroup: allow access to /dev/dri/render*
by Ján Tomko
Technically a v2, but v1 has already been pushed.
This version is based on the <gl enable='yes'/> element in <spice> instead
of accel3d='yes' in <video><model type='virtio'>.
It also only allows access to the render* devices, instead of all of them.
https://bugzilla.redhat.com/show_bug.cgi?id=1337290
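For reference, the domain XML this version keys off looks roughly like this
(a minimal sketch):

  <graphics type='spice'>
    <gl enable='yes'/>
  </graphics>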
Ján Tomko (2):
Revert "qemu_cgroup: allow access to /dev/dri for virtio-vga"
qemu_cgroup: allow access to /dev/dri/render*
src/qemu/qemu_cgroup.c | 71 ++++++++++++++++++++++++++++++++++++++++----------
1 file changed, 57 insertions(+), 14 deletions(-)
--
2.7.3
[libvirt] [PATCH v5 0/2] List only online cpus for vcpupin/emulatorpin when vcpu placement static
by Nitesh Konkar
Currently, when the vcpu placement is static and no
cpuset is specified, CPU Affinity shows 0..CPUMAX.
With this patchset, only the online CPUs are listed
under CPU Affinity on Linux. A sketch of the approach
follows the before/after output below.
Fixes the following bug:
virsh dumpxml Fedora
<domain type='kvm' id='4'>
<name>Fedora</name>
<uuid>aecf3e5e-6f9a-42a3-9d6a-223a75569a66</uuid>
<maxMemory slots='32' unit='KiB'>3145728</maxMemory>
<memory unit='KiB'>524288</memory>
<currentMemory unit='KiB'>524288</currentMemory>
<vcpu placement='static' current='8'>160</vcpu>
<resource>
<partition>/machine</partition>
</resource>
.....................
.......................
.........................
<memballoon model='virtio'>
<alias name='balloon0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
</memballoon>
</devices>
<seclabel type='dynamic' model='dac' relabel='yes'>
<label>+0:+0</label>
<imagelabel>+0:+0</imagelabel>
</seclabel>
</domain>
lscpu
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 8
On-line CPU(s) list: 0-2,4-7
Off-line CPU(s) list: 3
Thread(s) per core: 1
Core(s) per socket: 7
Socket(s): 1
..........
..........
NUMA node0 CPU(s): 0-2,4-7
NUMA node1 CPU(s):
cat /sys/devices/system/cpu/online
0-2,4-7
Before Patch
virsh vcpupin Fedora
VCPU: CPU Affinity
----------------------------------
0: 0-7
1: 0-7
...
...
158: 0-7
159: 0-7
virsh emulatorpin Fedora
emulator: CPU Affinity
----------------------------------
*: 0-7
After Patch
virsh vcpupin Fedora
VCPU: CPU Affinity
----------------------------------
0: 0-2,4-7
1: 0-2,4-7
...
...
158: 0-2,4-7
159: 0-2,4-7
virsh emulatorpin Fedora
emulator: CPU Affinity
----------------------------------
*: 0-2,4-7
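The approach, roughly, is to intersect the reported affinity map with the
host's online CPU map (a minimal sketch, not the literal patch; it assumes
libvirt's virHostCPUGetOnlineBitmap() and virBitmapIntersect() helpers):

  /* Sketch: clamp a reported affinity bitmap to the online host CPUs.
   * If the platform cannot report an online map, leave it unchanged. */
  static void
  clampToOnlineCPUs(virBitmapPtr cpumap)
  {
      virBitmapPtr online = virHostCPUGetOnlineBitmap();

      if (!online)
          return;

      virBitmapIntersect(cpumap, online);
      virBitmapFree(online);
  }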
Nitesh Konkar (2):
conf: List only online cpus for virsh vcpupin
conf: List only online cpus for virsh emulatorpin
src/conf/domain_conf.c | 6 ++++++
src/qemu/qemu_driver.c | 5 +++++
2 files changed, 11 insertions(+)
--
2.1.0
[libvirt] [PATCH V3 0/2] Allow hot plugged host CPUs
by Viktor Mihajlovski
This is a rework of the original patch allowing the use of all host
CPUs if the cpuset controller is not configured by libvirt.
The major enhancement is that we won't try to retrieve the online
host CPU map on platforms where this is not supported, and will just
fall back to the old behavior.
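The fallback boils down to something like this (a sketch, not the literal
series; the function name is invented, and virHostCPUHasBitmap() stands in
for the query helper added in patch 1, whose real name may differ):

  /* Sketch: pick the initial affinity map for the qemu process. */
  static virBitmapPtr
  qemuChooseInitialCpumap(void)
  {
      virBitmapPtr cpumap = NULL;

      /* Prefer the online map where the platform can report one... */
      if (virHostCPUHasBitmap())
          cpumap = virHostCPUGetOnlineBitmap();

      /* ...else keep the old behavior: allow all present CPUs. */
      if (!cpumap) {
          if (!(cpumap = virBitmapNew(virHostCPUGetCount())))
              return NULL;
          virBitmapSetAll(cpumap);
      }

      return cpumap;
  }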
V3:
- Added a new function (in a separate patch) to find out whether the host
CPU online map can be retrieved gracefully
V2:
- Avoid memory leak
Viktor Mihajlovski (2):
util: Allow to query the presence of host CPU bitmaps
qemu: Allow use of hot plugged host CPUs if no affinity set
src/libvirt_private.syms | 1 +
src/qemu/qemu_process.c | 33 ++++++++++++++++++++++++---------
src/util/virhostcpu.c | 10 ++++++++++
src/util/virhostcpu.h | 1 +
4 files changed, 36 insertions(+), 9 deletions(-)
--
1.9.1
[libvirt] [PATCH 0/3] qemu: fix racy paths in agent related code
by Nikolay Shirokovskiy
Patch 1 is a nitpick I found while working on patch 2.
Patches 2 and 3 fix a couple of race conditions in the code.
Much of the diff in patch 2 is actually refactoring inherent to the fix.
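The qemuAgentNotifyEvent fix, for instance, comes down to taking the agent
monitor lock around the notification (a sketch of the resulting pattern,
not the exact diff):

  void
  qemuAgentNotifyEvent(qemuAgentPtr mon,
                       qemuAgentEvent event)
  {
      virObjectLock(mon);

      if (mon->await_event == event) {
          mon->await_event = QEMU_AGENT_EVENT_NONE;
          /* somebody is waiting for this event, wake them up */
          if (mon->msg && !mon->msg->finished) {
              mon->msg->finished = 1;
              virCondSignal(&mon->notify);
          }
      }

      virObjectUnlock(mon);
  }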
Nikolay Shirokovskiy (3):
qemu: agent: fix uninitialized var case in qemuAgentGetFSInfo
qemu: don't use vmdef without domain lock
qemu: agent: take monitor lock in qemuAgentNotifyEvent
src/qemu/qemu_agent.c | 57 +++++++++++++--------
src/qemu/qemu_agent.h | 25 ++++++++-
src/qemu/qemu_driver.c | 88 +++++++++++++++++++++++++++++++-
tests/Makefile.am | 1 -
tests/qemuagentdata/qemuagent-fsinfo.xml | 39 --------------
tests/qemuagenttest.c | 47 +++++++----------
6 files changed, 164 insertions(+), 93 deletions(-)
delete mode 100644 tests/qemuagentdata/qemuagent-fsinfo.xml
--
1.8.3.1
[libvirt] dpdk/vpp and cross-version migration for vhost
by Michael S. Tsirkin
Hi!
So it looks like we face a problem with cross-version
migration when using vhost. It's not new, but it has become more
acute with the advent of vhost-user.
For users to be able to migrate between different versions
of the hypervisor, the interface exposed to guests
by the hypervisor must stay unchanged.
The problem is that a qemu device is connected
to a backend in another process, so the interface
exposed to guests depends on the capabilities of that
process.
Specifically, for the virtio-based vhost-user interface, this includes
the "host features" bitmap that defines the interface, as well as other
host values such as the max ring size. Adding new features to / changing
values in this interface is required to make progress, but on the other
hand we need the ability to get the old host features to stay compatible.
To solve this problem within qemu, qemu has a versioning system based on
the machine type concept, which fundamentally is a version string;
by specifying that string one can get hardware compatible with a previous
qemu version. QEMU also reports the latest version and the list of versions
supported, so libvirt records the version at VM creation and is then
careful to use this machine version whenever it migrates the VM.
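For instance, a VM created under qemu 2.7 records something like

  <os>
    <type arch='x86_64' machine='pc-i440fx-2.7'>hvm</type>
  </os>

and libvirt keeps passing -machine pc-i440fx-2.7 to any newer qemu the VM
is later migrated to.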
One might wonder how this is solved with a kernel vhost backend. The
answer is that it mostly isn't - instead, the assumption is made that
qemu versions are deployed together with the kernel - this is generally
true for downstreams. Thus whenever qemu gains a new feature, it is
already supported by the kernel as well. However, if one attempts
migration with a new qemu from a system with a new kernel to one with
an old kernel, one would get a failure.
In a world where we have multiple userspace backends, some of them
supplied by ISVs, this seems unrealistic.
IMO we need to support vhost backend versioning, ideally
in a way that will also work for vhost kernel backends.
So I'd like to get some input from both backend and management
developers on what a good solution would look like.
If we want to emulate the qemu solution, this involves adding the
concept of interface versions to dpdk. For example, dpdk could supply a
file (or a utility printing it?) with a list of versions: the latest and
the versions supported. libvirt could read that and
- store the latest version at VM creation
- pass it around with the VM
- pass it to qemu
From here, qemu could pass this over the vhost-user channel,
thus making sure it's initialized with the correct
compatible interface.
As the version here is an opaque string to libvirt and qemu,
anything can be used - but I suggest either a list
of values defining the interface, e.g.
any_layout=on,max_ring=256
or a version string including the name and vendor of the backend,
e.g. "org.dpdk.v4.5.6".
Note that typically the list of supported versions can only be
extended, not shrunk. Also, if the host/guest interface
does not change, don't change the current version as
this just creates work for everyone.
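If dpdk went the file route, such a list might look like this (path, keys
and values invented purely for illustration):

  # /usr/share/dpdk/vhost-versions  (hypothetical file)
  latest: org.dpdk.v4.5.6
  supported: org.dpdk.v4.5.6 org.dpdk.v4.5.5 org.dpdk.v4.5.4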
Thoughts? Would this work well for management? dpdk? vpp?
Thanks!
--
MST
[libvirt] [PATCH 0/3] qemu: bugfixes for websocket port in graphics
by Nikolay Shirokovskiy
Patch 1 is a preparation for patch 2.
Patch 3 in particular fixes the following two cases:
== A. Cannot restore a domain with an autoconfigured websocket.
Domains 1 and 2 both have an autoconfigured websocket.
1. domain 1 is started, then saved
2. domain 2 is started
3. restoring domain 1 fails:
error: internal error: qemu unexpectedly closed the monitor: 2016-11-21T10:23:11.356687Z
qemu-kvm: -vnc 0.0.0.0:2,websocket=5700: Failed to start VNC server on `(null)':
Failed to bind socket: Address already in use
== B. Cannot migrate a domain with an autoconfigured websocket.
Domain 1 runs on host A, domain 2 on host B; both have an autoconfigured websocket.
1. domain 1 is started, domain 2 is started
2. migrating domain 1 to host B fails with the above error.
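Both cases start from graphics XML along these lines (a minimal sketch;
websocket='-1' requests an autoconfigured port):

  <graphics type='vnc' port='-1' autoport='yes' websocket='-1'>
    <listen type='address' address='0.0.0.0'/>
  </graphics>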
Nikolay Shirokovskiy (3):
qemu: refactor: use switch for enum in qemuProcessGraphicsReservePorts
qemu: mark user defined websocket as used
qemu: fix xml dump of autogenerated websocket
src/conf/domain_conf.c | 5 ++++-
src/conf/domain_conf.h | 1 +
src/qemu/qemu_process.c | 45 ++++++++++++++++++++++++++++++++++++---------
3 files changed, 41 insertions(+), 10 deletions(-)
--
1.8.3.1
[libvirt] RFC: add recreate option to domain events conf
by Nikolay Shirokovskiy
Hi, all.
Does it make sense to anybody else that rebooting or resetting
a persistent domain, whether from outside or from inside, should apply
pending configuration changes? For this purpose we could add another
option to on_reboot and the other lifecycle events, say 'recreate'. From a
technical POV qemu has enough capabilities, namely the -no-reboot option.
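In domain XML the proposal would look something like this (a sketch of the
suggested syntax; the 'recreate' value does not exist yet):

  <on_reboot>recreate</on_reboot>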
Nikolay
[libvirt] [PATCH v3 0/4] Gathering network interface statistics with openvswitch
by Mehdi Abaakouk
This new version removes virstat.c from the localization configuration.
It also adds a new commit to autodetect the ifname of the ovs vhostuser
interface.
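For reference, ovs already exposes the relevant counters through ovsdb,
e.g. (assuming a vhost-user interface named vhost-user1):

  # ovs-vsctl get Interface vhost-user1 statistics
  {collisions=0, rx_bytes=0, rx_packets=0, tx_bytes=0, tx_packets=0, ...}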
Mehdi Abaakouk (4):
Gathering vhostuser interface stats with ovs
virstat: fix signature of virstat helper
Move virstat.c code to virnetdevtap.c
domain_conf: autodetect vhostuser ifname
po/POTFILES.in | 1 -
src/Makefile.am | 1 -
src/conf/domain_conf.c | 10 +++
src/libvirt_private.syms | 5 +-
src/libxl/libxl_driver.c | 4 +-
src/lxc/lxc_driver.c | 3 +-
src/openvz/openvz_driver.c | 4 +-
src/qemu/qemu_driver.c | 31 +++++--
src/uml/uml_driver.c | 1 -
src/util/virnetdevopenvswitch.c | 106 ++++++++++++++++++++++++
src/util/virnetdevopenvswitch.h | 4 +
src/util/virnetdevtap.c | 143 ++++++++++++++++++++++++++++++++
src/util/virnetdevtap.h | 3 +
src/util/virstats.c | 178 ----------------------------------------
src/util/virstats.h | 31 -------
src/xen/xen_hypervisor.c | 4 +-
16 files changed, 298 insertions(+), 231 deletions(-)
delete mode 100644 src/util/virstats.c
delete mode 100644 src/util/virstats.h
--
2.10.2
[libvirt] [PATCH v4 0/9] Implementation of QEMU vhost-scsi
by Eric Farman
This patch series provides a libvirt implementation of the vhost-scsi
interface in QEMU. As near as I can see, this was discussed upstream in
July 2014[1], and ended with a desire to drop the vhost-scsi controller
in favor of a hostdev element instead[2].
Host setup via targetcli (SCSI LUNs are already defined on the host):
# targetcli
targetcli shell version 2.1.fb35
Copyright 2011-2013 by Datera, Inc and others.
For help on commands, type 'help'.
/> backstores/block create name=disk1 write_back=false \
dev=/dev/disk/by-id/dm-name-36005076306ffc7630000000000002211
Created block storage object disk1 using
/dev/disk/by-id/dm-name-36005076306ffc7630000000000002211.
/> vhost/ create
Created target naa.5001405df3e54061.
Created TPG 1.
/> vhost/naa.5001405df3e54061/tpg1/luns create /backstores/block/disk1
Created LUN 0.
/> exit
Host Filesystem Example:
# ls /sys/kernel/config/target/vhost/
discovery_auth naa.5001405df3e54061 version
# ls /sys/kernel/config/target/vhost/naa.5001405df3e54061/tpgt_1/lun/
lun_0
QEMU Example (snippet):
-device vhost-scsi-ccw,wwpn=naa.5001405df3e54061,devno=fe.0.1000
Libvirt Example (snippet):
<hostdev mode='subsystem' type='scsi_host'>
<source protocol='vhost' wwpn='naa.5001405df3e54061'/>
<address type='ccw' cssid='0xfe' ssid='0x0' devno='0x1000'/>
</hostdev>
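With the hotplug support added later in the series, the same snippet can
also be attached at runtime, e.g. (assuming it is saved as vhost-scsi.xml;
the domain name is illustrative):

  # virsh attach-device guest1 vhost-scsi.xml --live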
Guest Viewpoint:
# lsscsi
[1:0:1:0] disk LIO-ORG disk0 4.0 /dev/sda
# dmesg | grep 1:
[ 6.065735] scsi host1: Virtio SCSI HBA
[ 6.093892] scsi 1:0:1:0: Direct-Access LIO-ORG disk0 4.0 PQ: 0 ANSI: 5
[ 6.313615] sd 1:0:1:0: Attached scsi generic sg0 type 0
[ 6.314981] sd 1:0:1:0: [sda] 29360128 512-byte logical blocks: (15.0 GB/14.0 GiB)
[ 6.317290] sd 1:0:1:0: [sda] Write Protect is off
[ 6.317566] sd 1:0:1:0: [sda] Mode Sense: 43 00 10 08
[ 6.317853] sd 1:0:1:0: [sda] Write cache: enabled, read cache: enabled, supports DPO and FUA
[ 6.352722] sd 1:0:1:0: [sda] Attached SCSI disk
Changelog:
v4:
- Rebase
- Rebased to current master (21 November)
- s/virDomainPCIAddressEnsureAddr/qemuDomainEnsurePCIAddress/
(per commit abb7a4bd)
- Renaming (apologies if this list is slightly off from the code)
- Per comments in v3.2, some renaming has been performed:
-- virDomainHostdevSubsysHostProtocol => virDomainHostdevSubsysSCSIHostProtocol
-- VIR_DOMAIN_HOSTDEV_SUBSYS_TYPE_HOST => VIR_DOMAIN_HOSTDEV_SUBSYS_TYPE_SCSI_HOST
-- VIR_DOMAIN_HOSTDEV_SUBSYS_HOST_PROTOCOL_TYPE_ => VIR_DOMAIN_HOSTDEV_SUBSYS_SCSI_HOST_PROTOCOL_TYPE_
-- virDomainHostdevSubsysHost => virDomainHostdevSubsysSCSIVHost
-- virDomainHostdevSubsysHost host; => virDomainHostdevSubsysSCSIVHost scsi_host;
-- src/util/virhost.[ch] => src/util/virscsivhost.[ch]
-- virHost... => virSCSIVHost...
-- activeHostHostdevs => activeSCSIVHostHostdevs
-- qemuBuildHostHostdevDevStr => qemuBuildSCSIVHostHostdevDevStr
-- qemuSetupHostHostDeviceCgroup => qemuSetupHostSCSIVHostDeviceCgroup
- Comments
- Checked for a guest address tag that is of type PCI or CCW
(it is optional, and should be calculated automatically)
- Removed the array of "used by" information in hostdev utilities,
since devices will not be shared
- Fixed the existing tests to properly check for an address
- Added a set of CCW tests in addition to the existing PCI ones
- Removed protocol check in PrimeVirtioDeviceAddresses
- Things *NOT* done (later?) from v2 feedback
- Investigation/tie-in with virsh nodedev-list stuff
- Implementation of 'num_queues', 'max_sectors', and 'cmd_per_lun'
(Need to research these in the virtio space, before figuring out
how to apply to vhost-scsi)
- Dropping the "naa." prefix of wwn
- Split the "tests" patch into earlier patches
- Other
v3.2: https://www.redhat.com/archives/libvir-list/2016-November/msg00454.html
v3.1: https://www.redhat.com/archives/libvir-list/2016-October/msg01324.html
v3: https://www.redhat.com/archives/libvir-list/2016-October/msg01201.html
v2.1: https://www.redhat.com/archives/libvir-list/2016-September/msg00148.html
v2: https://www.redhat.com/archives/libvir-list/2016-August/msg01028.html
v1: https://www.redhat.com/archives/libvir-list/2016-July/msg01004.html
[1] http://www.redhat.com/archives/libvir-list/2014-July/msg01235.html
[2] http://www.redhat.com/archives/libvir-list/2014-July/msg01390.html
Eric Farman (9):
qemu: Introduce vhost-scsi capability
Introduce framework for a hostdev SCSI_host subsystem type
util: Management routines for scsi_host devices
qemu: Add vhost-scsi string for -device parameter
qemu: Allow hotplug of vhost-scsi device
conf: Wire up the vhost-scsi connection from/to XML
security: Include vhost-scsi in security labels
tests: Introduce basic vhost-scsi tests
docs: Add vhost-scsi
docs/formatdomain.html.in | 24 ++
docs/schemas/domaincommon.rng | 23 ++
po/POTFILES.in | 1 +
src/Makefile.am | 1 +
src/conf/domain_audit.c | 7 +
src/conf/domain_conf.c | 101 +++++++-
src/conf/domain_conf.h | 18 ++
src/libvirt_private.syms | 18 ++
src/qemu/qemu_capabilities.c | 2 +
src/qemu/qemu_capabilities.h | 1 +
src/qemu/qemu_cgroup.c | 39 +++
src/qemu/qemu_command.c | 80 ++++++
src/qemu/qemu_command.h | 5 +
src/qemu/qemu_domain_address.c | 14 +-
src/qemu/qemu_hostdev.c | 41 +++
src/qemu/qemu_hostdev.h | 8 +
src/qemu/qemu_hotplug.c | 166 ++++++++++++
src/security/security_apparmor.c | 22 ++
src/security/security_dac.c | 50 ++++
src/security/security_selinux.c | 47 ++++
src/util/virhostdev.c | 163 ++++++++++++
src/util/virhostdev.h | 16 ++
src/util/virscsivhost.c | 288 +++++++++++++++++++++
src/util/virscsivhost.h | 65 +++++
tests/domaincapsschemadata/full.xml | 1 +
tests/qemucapabilitiesdata/caps_1.5.3.x86_64.xml | 1 +
tests/qemucapabilitiesdata/caps_1.6.0.x86_64.xml | 1 +
tests/qemucapabilitiesdata/caps_1.7.0.x86_64.xml | 1 +
tests/qemucapabilitiesdata/caps_2.1.1.x86_64.xml | 1 +
tests/qemucapabilitiesdata/caps_2.4.0.x86_64.xml | 1 +
tests/qemucapabilitiesdata/caps_2.5.0.x86_64.xml | 1 +
.../caps_2.6.0-gicv2.aarch64.xml | 1 +
.../caps_2.6.0-gicv3.aarch64.xml | 1 +
tests/qemucapabilitiesdata/caps_2.6.0.ppc64le.xml | 1 +
tests/qemucapabilitiesdata/caps_2.6.0.x86_64.xml | 1 +
tests/qemucapabilitiesdata/caps_2.7.0.x86_64.xml | 1 +
.../qemuxml2argv-hostdev-scsi-vhost-scsi-ccw.args | 23 ++
.../qemuxml2argv-hostdev-scsi-vhost-scsi-ccw.xml | 34 +++
.../qemuxml2argv-hostdev-scsi-vhost-scsi-pci.args | 24 ++
.../qemuxml2argv-hostdev-scsi-vhost-scsi-pci.xml | 42 +++
tests/qemuxml2argvmock.c | 9 +
tests/qemuxml2argvtest.c | 6 +
.../qemuxml2xmlout-hostdev-scsi-vhost-scsi-ccw.xml | 1 +
.../qemuxml2xmlout-hostdev-scsi-vhost-scsi-pci.xml | 1 +
tests/qemuxml2xmltest.c | 6 +
45 files changed, 1355 insertions(+), 3 deletions(-)
create mode 100644 src/util/virscsivhost.c
create mode 100644 src/util/virscsivhost.h
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-hostdev-scsi-vhost-scsi-ccw.args
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-hostdev-scsi-vhost-scsi-ccw.xml
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-hostdev-scsi-vhost-scsi-pci.args
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-hostdev-scsi-vhost-scsi-pci.xml
create mode 120000 tests/qemuxml2xmloutdata/qemuxml2xmlout-hostdev-scsi-vhost-scsi-ccw.xml
create mode 120000 tests/qemuxml2xmloutdata/qemuxml2xmlout-hostdev-scsi-vhost-scsi-pci.xml
--
1.9.1
[libvirt] [PATCH] conf: Make scheduler formatting simpler
by Martin Kletzander
Since the great rework of how we store vcpu- and iothread-related
data, we have had an overly complex piece of code that tries to format the
scheduler tuning data in as few lines as possible by grouping
settings for multiple threads. That was designed as input syntax
sugar for users, but we don't need to use it when formatting
the XML as well. Switching to simple enumeration makes the code nicer,
shorter and more welcoming to future changes.
Signed-off-by: Martin Kletzander <mkletzan(a)redhat.com>
---
src/conf/domain_conf.c | 209 ++++-----------------
...l2xmlout-cputune-iothreadsched-zeropriority.xml | 7 +-
.../qemuxml2xmlout-cputune-iothreadsched.xml | 7 +-
3 files changed, 43 insertions(+), 180 deletions(-)
diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 6e008e22e3c7..51f1ee14498a 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -23066,183 +23066,35 @@ virDomainDefHasCapabilitiesFeatures(virDomainDefPtr def)
}
-/**
- * virDomainFormatSchedDef:
- * @def: domain definiton
- * @buf: target XML buffer
- * @name: name of the target XML element
- * @func: function that returns the thread scheduler parameter struct for an object
- * @resourceMap: bitmap of indexes of objects that shall be formatted (used with @func)
- *
- * Formats one of the two scheduler tuning elements to the XML. This function
- * transforms the internal representation where the scheduler info is stored
- * per-object to the XML representation where the info is stored per group of
- * objects. This function autogroups all the relevant scheduler configs.
- *
- * Returns 0 on success -1 on error.
- */
-static int
-virDomainFormatSchedDef(virDomainDefPtr def,
- virBufferPtr buf,
+static void
+virDomainFormatSchedDef(virBufferPtr buf,
const char *name,
- virDomainThreadSchedParamPtr (*func)(virDomainDefPtr, unsigned int),
- virBitmapPtr resourceMap)
-{
- virBitmapPtr schedMap = NULL;
- virBitmapPtr prioMap = NULL;
- virDomainThreadSchedParamPtr sched;
- char *tmp = NULL;
- ssize_t next;
- size_t i;
- int ret = -1;
-
- /* Okay, @func should never return NULL here because it does
- * so iff corresponding resource does not exists. But if it
- * doesn't we should not have been called in the first place.
- * But some compilers fails to see this complex reasoning and
- * deduct that this code is buggy. Shut them up by checking
- * for return value of sched. Even though we don't need to.
- */
-
- if (!(schedMap = virBitmapNew(VIR_DOMAIN_CPUMASK_LEN)) ||
- !(prioMap = virBitmapNew(VIR_DOMAIN_CPUMASK_LEN)))
- goto cleanup;
-
- for (i = VIR_PROC_POLICY_NONE + 1; i < VIR_PROC_POLICY_LAST; i++) {
- virBitmapClearAll(schedMap);
-
- /* find vcpus using a particular scheduler */
- next = -1;
- while ((next = virBitmapNextSetBit(resourceMap, next)) > -1) {
- sched = func(def, next);
-
- if (sched && sched->policy == i)
- ignore_value(virBitmapSetBit(schedMap, next));
- }
-
- /* it's necessary to discriminate priority levels for schedulers that
- * have them */
- while (!virBitmapIsAllClear(schedMap)) {
- virBitmapPtr currentMap = NULL;
- ssize_t nextprio;
- bool hasPriority = false;
- int priority = 0;
-
- switch ((virProcessSchedPolicy) i) {
- case VIR_PROC_POLICY_NONE:
- case VIR_PROC_POLICY_BATCH:
- case VIR_PROC_POLICY_IDLE:
- case VIR_PROC_POLICY_LAST:
- currentMap = schedMap;
- break;
-
- case VIR_PROC_POLICY_FIFO:
- case VIR_PROC_POLICY_RR:
- virBitmapClearAll(prioMap);
- hasPriority = true;
-
- /* we need to find a subset of vCPUs with the given scheduler
- * that share the priority */
- nextprio = virBitmapNextSetBit(schedMap, -1);
- if (!(sched = func(def, nextprio)))
- goto cleanup;
-
- priority = sched->priority;
- ignore_value(virBitmapSetBit(prioMap, nextprio));
-
- while ((nextprio = virBitmapNextSetBit(schedMap, nextprio)) > -1) {
- sched = func(def, nextprio);
- if (sched && sched->priority == priority)
- ignore_value(virBitmapSetBit(prioMap, nextprio));
- }
-
- currentMap = prioMap;
- break;
- }
-
- /* now we have the complete group */
- if (!(tmp = virBitmapFormat(currentMap)))
- goto cleanup;
-
- virBufferAsprintf(buf,
- "<%sched %s='%s' scheduler='%s'",
- name, name, tmp,
- virProcessSchedPolicyTypeToString(i));
- VIR_FREE(tmp);
-
- if (hasPriority)
- virBufferAsprintf(buf, " priority='%d'", priority);
+ virDomainThreadSchedParamPtr sched,
+ size_t id)
+{
+ switch (sched->policy) {
+ case VIR_PROC_POLICY_BATCH:
+ case VIR_PROC_POLICY_IDLE:
+ virBufferAsprintf(buf, "<%ssched "
+ "%ss='%zu' scheduler='%s'/>\n",
+ name, name, id,
+ virProcessSchedPolicyTypeToString(sched->policy));
+ break;
- virBufferAddLit(buf, "/>\n");
+ case VIR_PROC_POLICY_RR:
+ case VIR_PROC_POLICY_FIFO:
+ virBufferAsprintf(buf, "<%ssched "
+ "%ss='%zu' scheduler='%s' priority='%d'/>\n",
+ name, name, id,
+ virProcessSchedPolicyTypeToString(sched->policy),
+ sched->priority);
+ break;
- /* subtract all vCPUs that were already found */
- virBitmapSubtract(schedMap, currentMap);
+ case VIR_PROC_POLICY_NONE:
+ case VIR_PROC_POLICY_LAST:
+ break;
}
- }
-
- ret = 0;
- cleanup:
- virBitmapFree(schedMap);
- virBitmapFree(prioMap);
- return ret;
-}
-
-
-static int
-virDomainFormatVcpuSchedDef(virDomainDefPtr def,
- virBufferPtr buf)
-{
- virBitmapPtr allcpumap;
- int ret;
-
- if (virDomainDefGetVcpusMax(def) == 0)
- return 0;
-
- if (!(allcpumap = virBitmapNew(virDomainDefGetVcpusMax(def))))
- return -1;
-
- virBitmapSetAll(allcpumap);
-
- ret = virDomainFormatSchedDef(def, buf, "vcpus", virDomainDefGetVcpuSched,
- allcpumap);
-
- virBitmapFree(allcpumap);
- return ret;
-}
-
-
-static int
-virDomainFormatIOThreadSchedDef(virDomainDefPtr def,
- virBufferPtr buf)
-{
- virBitmapPtr threadmap;
- size_t i;
- int ret = -1;
-
- if (def->niothreadids == 0)
- return 0;
-
- if (!(threadmap = virBitmapNewEmpty()))
- return -1;
-
- for (i = 0; i < def->niothreadids; i++) {
- if (def->iothreadids[i]->sched.policy != VIR_PROC_POLICY_NONE &&
- virBitmapSetBitExpand(threadmap, def->iothreadids[i]->iothread_id) < 0)
- goto cleanup;
- }
-
- if (virBitmapIsAllClear(threadmap)) {
- ret = 0;
- goto cleanup;
- }
-
- ret = virDomainFormatSchedDef(def, buf, "iothreads",
- virDomainDefGetIOThreadSched, threadmap);
-
- cleanup:
- virBitmapFree(threadmap);
- return ret;
}
@@ -23336,11 +23188,16 @@ virDomainCputuneDefFormat(virBufferPtr buf,
VIR_FREE(cpumask);
}
- if (virDomainFormatVcpuSchedDef(def, &childrenBuf) < 0)
- goto cleanup;
+ for (i = 0; i < def->maxvcpus; i++) {
+ virDomainFormatSchedDef(&childrenBuf, "vcpu",
+ &def->vcpus[i]->sched, i);
+ }
- if (virDomainFormatIOThreadSchedDef(def, &childrenBuf) < 0)
- goto cleanup;
+ for (i = 0; i < def->niothreadids; i++) {
+ virDomainFormatSchedDef(&childrenBuf, "iothread",
+ &def->iothreadids[i]->sched,
+ def->iothreadids[i]->iothread_id);
+ }
if (virBufferUse(&childrenBuf)) {
virBufferAddLit(buf, "<cputune>\n");
diff --git a/tests/qemuxml2xmloutdata/qemuxml2xmlout-cputune-iothreadsched-zeropriority.xml b/tests/qemuxml2xmloutdata/qemuxml2xmlout-cputune-iothreadsched-zeropriority.xml
index 5616c5c8474d..794a52d57133 100644
--- a/tests/qemuxml2xmloutdata/qemuxml2xmlout-cputune-iothreadsched-zeropriority.xml
+++ b/tests/qemuxml2xmloutdata/qemuxml2xmlout-cputune-iothreadsched-zeropriority.xml
@@ -12,8 +12,11 @@
<vcpupin vcpu='0' cpuset='0'/>
<vcpupin vcpu='1' cpuset='1'/>
<emulatorpin cpuset='1'/>
- <vcpusched vcpus='0-1' scheduler='fifo' priority='0'/>
- <iothreadsched iothreads='1-3' scheduler='rr' priority='0'/>
+ <vcpusched vcpus='0' scheduler='fifo' priority='0'/>
+ <vcpusched vcpus='1' scheduler='fifo' priority='0'/>
+ <iothreadsched iothreads='1' scheduler='rr' priority='0'/>
+ <iothreadsched iothreads='2' scheduler='rr' priority='0'/>
+ <iothreadsched iothreads='3' scheduler='rr' priority='0'/>
</cputune>
<os>
<type arch='i686' machine='pc'>hvm</type>
diff --git a/tests/qemuxml2xmloutdata/qemuxml2xmlout-cputune-iothreadsched.xml b/tests/qemuxml2xmloutdata/qemuxml2xmlout-cputune-iothreadsched.xml
index a0457bc62ec0..cd1dc87b524d 100644
--- a/tests/qemuxml2xmloutdata/qemuxml2xmlout-cputune-iothreadsched.xml
+++ b/tests/qemuxml2xmloutdata/qemuxml2xmlout-cputune-iothreadsched.xml
@@ -12,8 +12,11 @@
<vcpupin vcpu='0' cpuset='0'/>
<vcpupin vcpu='1' cpuset='1'/>
<emulatorpin cpuset='1'/>
- <vcpusched vcpus='0-1' scheduler='fifo' priority='1'/>
- <iothreadsched iothreads='1-3' scheduler='batch'/>
+ <vcpusched vcpus='0' scheduler='fifo' priority='1'/>
+ <vcpusched vcpus='1' scheduler='fifo' priority='1'/>
+ <iothreadsched iothreads='1' scheduler='batch'/>
+ <iothreadsched iothreads='2' scheduler='batch'/>
+ <iothreadsched iothreads='3' scheduler='batch'/>
</cputune>
<os>
<type arch='i686' machine='pc'>hvm</type>
--
2.11.0.rc2