[libvirt] [PATCH v2 0/4] Introduce support for virtio-blk-pci iothreads
by John Ferlan
v1:
http://www.redhat.com/archives/libvir-list/2014-August/msg01155.html
Changes since v1
Patches 1-3 - purely from code review
Patch 4 - reworked the check of the to-be-added disk with the iothread
property set so that it is done during qemuBuildDriveDevStr(), after the
config check. This way the same checks are performed for both start and
hotplug. The "inuse" bit is only set after qemuBuildDriveDevStr() returns
successfully, for both start and hotplug; this also enforces setting it
only on this path. Since a disk with the property can only be added if the
current emulator supports the feature, the calls to set/clear the bit when
iothread is set are safe without also having to ensure iothreadmap exists.
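The "set the inuse bit only after qemuBuildDriveDevStr() succeeds" rule amounts to a claim/release protocol on a per-domain iothread bitmap. A minimal sketch of that protocol — the names (iothread_claim, iothread_map) are hypothetical, not libvirt's actual helpers:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical per-domain map of iothreads already bound to a disk.
 * Bit N set means iothread N+1 is in use (iothread IDs are 1-based). */
typedef struct {
    uint64_t inuse;     /* supports up to 64 iothreads in this sketch */
    unsigned int count; /* number of iothreads the domain defines */
} iothread_map;

/* Claim an iothread for a disk; mirrors setting the "inuse" bit only
 * once the device string has been built successfully. */
bool
iothread_claim(iothread_map *map, unsigned int id)
{
    if (id == 0 || id > map->count)
        return false;               /* no such iothread defined */
    if (map->inuse & (UINT64_C(1) << (id - 1)))
        return false;               /* already bound to another disk */
    map->inuse |= UINT64_C(1) << (id - 1);
    return true;
}

/* Release on hot-unplug, or on a later failure during hotplug. */
void
iothread_release(iothread_map *map, unsigned int id)
{
    if (id >= 1 && id <= map->count)
        map->inuse &= ~(UINT64_C(1) << (id - 1));
}
```

Because the claim can only fail for an iothread that is undefined or already taken, doing it after the config check gives start and hotplug identical failure behaviour.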
John Ferlan (4):
domain_conf: Introduce iothreads XML
qemu: Add support for iothreads
domain_conf: Add support for iothreads in disk definition
qemu: Allow use of iothreads for disk definitions
docs/formatdomain.html.in | 34 ++++++++++++
docs/schemas/domaincommon.rng | 14 +++++
src/conf/domain_conf.c | 47 +++++++++++++++-
src/conf/domain_conf.h | 4 ++
src/qemu/qemu_capabilities.c | 2 +
src/qemu/qemu_capabilities.h | 1 +
src/qemu/qemu_command.c | 64 ++++++++++++++++++++++
src/qemu/qemu_hotplug.c | 6 ++
.../qemuxml2argv-iothreads-disk.args | 17 ++++++
.../qemuxml2argv-iothreads-disk.xml | 40 ++++++++++++++
tests/qemuxml2argvdata/qemuxml2argv-iothreads.args | 8 +++
tests/qemuxml2argvdata/qemuxml2argv-iothreads.xml | 29 ++++++++++
tests/qemuxml2argvtest.c | 4 ++
tests/qemuxml2xmltest.c | 2 +
14 files changed, 271 insertions(+), 1 deletion(-)
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-iothreads-disk.args
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-iothreads-disk.xml
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-iothreads.args
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-iothreads.xml
--
1.9.3
10 years, 2 months
[libvirt] [PATCH v2 0/2] network: Bring netdevs online later
by Matthew Rosato
The following patchset introduces code to defer setting netdevs online
(and therefore registering MACs) until right before beginning guest
CPU execution. The first patch introduces some infrastructure changes
in preparation for the actual function added in the second patch.
Associated BZ: https://bugzilla.redhat.com/show_bug.cgi?id=1081461
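The ordering change can be pictured as a small two-step state machine: the tap/macvtap device is created with the online step deferred, and the explicit online call (which registers the MAC) moves to just before the vCPUs start running. A rough sketch with stubbed state — not the actual libvirt call sequence or flag names:

```c
#include <stdbool.h>

/* Hypothetical per-interface state for the deferred-online scheme. */
typedef struct {
    bool created;   /* tap/macvtap device exists */
    bool online;    /* IFF_UP set, MAC registered on the switch */
} netdev_state;

/* Create the device but skip the virNetDevSetOnline() step when
 * deferral is requested (the new IFUP-style creation flag). */
void
netdev_create(netdev_state *dev, bool defer_online)
{
    dev->created = true;
    dev->online = !defer_online;
}

/* Called right before resuming guest vCPUs: bring every deferred
 * device online so the MAC is registered as late as possible. */
bool
netdev_set_online(netdev_state *dev)
{
    if (!dev->created)
        return false;
    dev->online = true;
    return true;
}
```

The same pair of steps applies to hotplug, where "right before vCPU execution" degenerates to "right before the device is usable by the already-running guest".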
Changes for v2:
* Ping for comments, esp. on patch #2.
* Moved @flags operand of virNetDevMacVLanCreateWithVPortProfile to the
end of the operand list.
* Minor changes based on comments by Martin Kletzander.
* Added detail to patch #2 commit message.
* Martin suggested that I replace various ?: operators with if/else
statements, but I ended up leaving them alone since they were being used
to conditionally assign values to const fields.
* I left the contents of qemu_interface as-is, rather than collapsing
them into non-qemu-specific functions, in order to keep Makefile linkage
consistent & happy (needs to be part of QEMU_DRIVER_SOURCES). Instead,
incorporated copyright suggestions from previous comments. Martin, if
you feel strongly about not having these new functions in a qemu-specific
part, feel free to comment.
Changes since RFC:
* Add a separate patch to introduce a flags field for macvlan/macvtap
creation.
* Use macvlan/tap IFUP flags to skip virNetDevSetOnline (for qemu only).
* Add hotplug support.
* For macvlan, save the current virNetDevVPortProfileOp in virDomainNetDef
during qemuPhysIfaceConnect. As Laine mentioned, this field could be used
in a future patch to eliminate passing virNetDevVPortProfileOp everywhere.
* Add qemu_interface.c and qemu_interface.h to encapsulate new functions.
Matthew Rosato (2):
util: Introduce flags field for macvtap creation
network: Bring netdevs online later
src/Makefile.am | 3 +-
src/conf/domain_conf.h | 2 ++
src/lxc/lxc_process.c | 6 ++--
src/qemu/qemu_command.c | 12 +++++--
src/qemu/qemu_hotplug.c | 7 ++++
src/qemu/qemu_interface.c | 78 +++++++++++++++++++++++++++++++++++++++++++
src/qemu/qemu_interface.h | 32 ++++++++++++++++++
src/qemu/qemu_process.c | 4 +++
src/util/virnetdevmacvlan.c | 36 ++++++++++++--------
src/util/virnetdevmacvlan.h | 16 ++++++---
10 files changed, 172 insertions(+), 24 deletions(-)
create mode 100644 src/qemu/qemu_interface.c
create mode 100644 src/qemu/qemu_interface.h
--
1.7.9.5
10 years, 2 months
[libvirt] [PATCH 00/19] More Coverity patches
by John Ferlan
I almost didn't want to do this due to the sheer volume, but figured that
at the very least the bulk of these are resource leaks found by the
much pickier new Coverity scanner.
After this there are "only" 70 issues left...
John Ferlan (19):
libxl_migration: Resolve Coverity NULL_RETURNS
daemon: Resolve Coverity NEGATIVE_RETURNS
domain_conf: Resolve Coverity RESOURCE_LEAK
cpu_x86: Resolve Coverity RESOURCE_LEAK
qemu_command: Resolve Coverity RESOURCE_LEAK
qemu_agent: Resolve Coverity RESOURCE_LEAK
libxl_domain: Resolve Coverity RESOURCE_LEAK
qemu_capabilities: Resolve Coverity RESOURCE_LEAK
network_conf: Resolve Coverity RESOURCE_LEAK
virsh-network: Resolve Coverity RESOURCE_LEAK
bridge_driver: Resolve Coverity RESOURCE_LEAK
libxl_migration: Resolve Coverity RESOURCE_LEAK
phyp_driver: Resolve Coverity RESOURCE_LEAK
qemu_driver: Resolve Coverity RESOURCE_LEAK
storage_conf: Resolve Coverity RESOURCE_LEAK
qemu_monitor: Resolve Coverity NESTING_INDENT_MISMATCH
domain_conf: Resolve Coverity DEADCODE
qemu_driver: Resolve Coverity DEADCODE
qemu_command: Resolve Coverity DEADCODE
daemon/remote.c | 24 ++++++++++++------------
src/conf/domain_conf.c | 28 ++++++++++++++++++++++++----
src/conf/network_conf.c | 2 ++
src/conf/storage_conf.c | 2 ++
src/cpu/cpu_x86.c | 15 ++++++++++-----
src/libxl/libxl_domain.c | 4 +++-
src/libxl/libxl_migration.c | 11 +++++++++--
src/network/bridge_driver.c | 1 +
src/phyp/phyp_driver.c | 1 +
src/qemu/qemu_agent.c | 6 ++++--
src/qemu/qemu_capabilities.c | 2 +-
src/qemu/qemu_command.c | 9 +++++----
src/qemu/qemu_driver.c | 8 ++++++++
src/qemu/qemu_monitor.c | 3 ++-
tools/virsh-network.c | 2 +-
15 files changed, 85 insertions(+), 33 deletions(-)
--
1.9.3
10 years, 2 months
[libvirt] Automatically affinitize hostdev interrupts to vm vCpus (qemu/kvm)
by Mooney, Sean K
Hi
I would like to ask for comments and propose a possible new libvirt feature.
Problem statement:
At present, when you boot a VM via libvirt, it is possible to pin VM vCPUs to host CPUs to improve performance in the guest under certain conditions.
If hostdev interrupts are not pinned to a specific core/cpuset, suboptimal processing of IRQs may occur, reducing the performance of both guest and host.
I would like to propose extending libvirt to automatically pin interrupts for hostdev devices if they are present in the guest.
By affinitizing interrupts, cache line sharing between the specified interrupt and the guest can be achieved.
If the CPU affinity for the guest, as set by the cpuset parameter, is intelligently chosen to place the guest on the same NUMA node as the hostdev, cross-socket traffic can be mitigated.
As a result, the latency that would be introduced if interrupt processing were scheduled on a CPU of a non-local NUMA node can be reduced via interrupt pinning.
Proposed change:
* util/virpci and util/virhostdev will be extended to retrieve IRQ and MSI interrupt information from sysfs.
* util/virinterrupt will be created.
* util/virinterrupt will implement managing interrupt affinity via /proc/irq/<irq>/smp_affinity.
* qemuProcessInitCpuAffinity will be extended to conditionally affinitize hostdev interrupts to VM vCPUs when hostdevs are present in the VM definition.
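/proc/irq/<irq>/smp_affinity takes a hexadecimal CPU bitmask, so the core of the proposed util/virinterrupt helper is turning a pin set into that mask and writing it out. A minimal sketch under the assumption of at most 64 host CPUs — the function names and path handling are illustrative only:

```c
#include <stdio.h>

/* Build the hex bitmask string /proc/irq/<irq>/smp_affinity expects
 * from an array of host CPU numbers (sketch: CPUs 0..63 only). */
int
irq_mask_format(const unsigned int *cpus, size_t ncpus,
                char *buf, size_t buflen)
{
    unsigned long long mask = 0;
    size_t i;

    for (i = 0; i < ncpus; i++) {
        if (cpus[i] >= 64)
            return -1;              /* beyond this sketch's mask width */
        mask |= 1ULL << cpus[i];
    }
    return snprintf(buf, buflen, "%llx", mask) < (int)buflen ? 0 : -1;
}

/* Apply the mask; requires root, and the IRQ must actually exist. */
int
irq_set_affinity(int irq, const char *mask)
{
    char path[64];
    FILE *fp;

    snprintf(path, sizeof(path), "/proc/irq/%d/smp_affinity", irq);
    if (!(fp = fopen(path, "w")))
        return -1;
    fprintf(fp, "%s\n", mask);
    return fclose(fp) == 0 ? 0 : -1;
}
```

For example, pinning to host CPUs 2 and 3 produces the mask string "c" (0b1100), which is what would be written for each IRQ of the hostdev.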
Alternative implementation:
In addition to the above changes, the hostdev element could be extended to include a cpuset attribute:
* if the cpuset is auto: the interrupts would be pinned to the same cpuset as the VM's vCPUs
* if a cpuset is specified: the interrupts would be affinitized as per the given cpuset
* if the cpuset is native: the interrupts will not be pinned and the current behaviour will be unchanged
* if the cpuset attribute is not present, either the behaviour of auto or native could be used as the default:
o Using auto as the default would allow transparent use of the feature.
o Using native as the default would leave existing deployments unchanged unless the feature is requested.
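The proposed attribute boils down to a three-way mode resolution per hostdev. A sketch of the dispatch — the enum and function names are hypothetical, following the semantics above with "auto" as the default:

```c
#include <string.h>

typedef enum {
    HOSTDEV_IRQ_PIN_AUTO,    /* pin to the same cpuset as the vCPUs */
    HOSTDEV_IRQ_PIN_NATIVE,  /* leave interrupts unpinned */
    HOSTDEV_IRQ_PIN_EXPLICIT /* pin to the cpuset given in the XML */
} hostdev_irq_pin_mode;

/* Resolve the (hypothetical) hostdev cpuset attribute value; a missing
 * attribute (NULL) defaults to "auto" for transparent use. */
hostdev_irq_pin_mode
hostdev_irq_pin_resolve(const char *cpuset_attr)
{
    if (cpuset_attr == NULL || strcmp(cpuset_attr, "auto") == 0)
        return HOSTDEV_IRQ_PIN_AUTO;
    if (strcmp(cpuset_attr, "native") == 0)
        return HOSTDEV_IRQ_PIN_NATIVE;
    return HOSTDEV_IRQ_PIN_EXPLICIT; /* e.g. "0-3,8" */
}
```

Defaulting to native instead would simply swap the NULL case to return HOSTDEV_IRQ_PIN_NATIVE.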
Any feedback is welcomed.
Regards
Sean.
10 years, 2 months
[libvirt] Proposition for the implementation of new features for Hyper-V driver
by Adrien Kantcheff
Dear libvirt developers,
During my final-year internship at the French company Bull, together with a previous student (Simon RASTELLO), I developed new features for the Hyper-V driver. For a project called OpenCloudware, our task was to bring new functionality to the libvirt API in order to perform basic actions on virtual machines hosted on Hyper-V.
You may be interested in merging our developments into the official release.
The libvirt driver already provides functions to enumerate and get WMI classes by sending WQL requests to the WMI provider, and also to call WMI class methods. For that last kind of communication, a method requires input parameters. There are two kinds of arguments: basic (integer, string, ...) and complex (objects, end point references, embedded instances). Currently, only the first argument-passing mode is available in the libvirt driver. The second one is very useful, however, because many WMI methods take parameters of complex types; for the moment this constrains developers to calling WMI methods only with basic-typed parameters. So, in order to expand WMI method calls, we have implemented the second argument-passing mode.
Thanks to this new argument-passing mode, we are able to add new functionality.
Our contributions include:
* hyperv_driver.c
o hypervDomainDefineXML
o hypervDomainCreateXML
o hypervDomainUndefine
o hypervDomainUndefineFlags
o hypervDomainShutdown
o hypervDomainShutdownFlags
o hypervDomainGetVcpus
o hypervDomainGetVcpusFlags
o hypervConnectGetMaxVcpus
o hypervDomainGetMaxVcpus
o hypervDomainSetVcpus
o hypervDomainSetVcpusFlags
o hypervDomainSetMemory
o hypervDomainSetMemoryFlags
o hypervDomainSetMaxMemory
o hypervNodeGetFreeMemory
o hypervDomainAttachDevice
o hypervDomainAttachDeviceFlags
o hypervDomainGetSchedulerParameters
o hypervDomainGetSchedulerParametersFlags
o hypervDomainGetSchedulerType
o hypervConnectGetCapabilities
o hypervConnectGetVersion
o hypervDomainSetAutostart
o hypervDomainGetAutostart
* hyperv_network.c
o hypervConnectNumOfNetworks
o hypervConnectListNetworks
o hypervConnectNumOfDefinedNetworks
o hypervConnectListDefinedNetworks
o hypervNetworkLookupByName
o hypervNetworkGetXMLDesc
* hyperv_storage_driver.c
o hypervConnectNumOfStoragePools (stub)
o hypervConnectListStoragePools (stub)
o hypervConnectNumOfDefinedStoragePools (stub)
o hypervStoragePoolLookupByName (stub)
o hypervStorageVolLookupByPath (stub)
* hyperv_private.h
o Add mutex to protect against concurrent calls
o Add virDomainXMLOptionPtr to parse Domain XML
* hyperv_wmi.h
o Structures for complex arguments: objects, EPR (end point references) and embedded instances
* hyperv_wmi.c
o Methods to invoke WMI methods with complex arguments
* hyperv_wmi_generator.input
o CIM_DataFile
o Win32_ComputerSystemProduct
o Msvm_VirtualSystemManagementService
o Msvm_VirtualSystemGlobalSettingData
o Msvm_ResourceAllocationSettingData
o Msvm_AllocationCapabilities
o Msvm_VirtualSwitch
o Msvm_SwitchPort
o Msvm_SyntheticEthernetPortSettingData
o Msvm_VirtualSwitchManagementService
o Win32_OperatingSystem
o Win32_PerfFormattedData_HvStats_HyperVHypervisorVirtualProcessor
o Win32_PerfRawData_HvStats_HyperVHypervisorVirtualProcessor
* hyperv_wmi_generator.py
o Add CIM_DataFile class header to be generated
o Add tab classes and types to be generated
o Add a function to print header types
* openwsman.h
o Add ws_xml_create_doc signature
o Add xml_parser_get_root signature
The attached files contain the sources and a PDF of our contributions.
I'm ending my training this week, but I'm available to answer any questions you may have. You can use my personal email: adrien.kantcheff(a)gmail.com
I have also put my tutors Yves VINTER and Christian BOURGEOIS in copy of this email. Feel free to contact them as well.
Best regards,
Adrien KANTCHEFF
Bull
10 years, 2 months
[libvirt] run qemu-agent-command via binding (ruby/python/php)
by Vasiliy Tolstov
Hi! Is it possible (or planned) to run qemu-agent-command
via the libvirt bindings (I'm interested in Ruby and PHP)?
I understand that I can connect via the socket and run it myself, but it
would be very useful to have this ability in the bindings.
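For reference, the C API this maps onto is virDomainQemuAgentCommand() from libvirt-qemu.h, and bindings that expose it take the same raw JSON command string. A sketch of building such a command string — the live libvirt call is shown only as a comment since it needs a running domain:

```c
#include <stdio.h>

/* Format a guest-agent command as the JSON string that
 * virDomainQemuAgentCommand() (libvirt-qemu.h) accepts. */
int
agent_command_format(const char *command, char *buf, size_t buflen)
{
    int n = snprintf(buf, buflen, "{\"execute\": \"%s\"}", command);
    return (n > 0 && (size_t)n < buflen) ? 0 : -1;
}

/* With a live domain the call would look roughly like:
 *
 *   char *reply = virDomainQemuAgentCommand(dom, cmd,
 *                     VIR_DOMAIN_QEMU_AGENT_COMMAND_DEFAULT, 0);
 *
 * The Python binding exposes this as libvirt_qemu.qemuAgentCommand();
 * whether a given Ruby/PHP binding version wraps it needs checking.
 */
```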
--
Vasiliy Tolstov,
e-mail: v.tolstov(a)selfip.ru
jabber: vase(a)selfip.ru
10 years, 2 months
[libvirt] [PATCHv4] qemu: Implement bulk stats API and one of the stats groups to return
by Peter Krempa
Implement the API function for virDomainListGetStats and
virConnectGetAllDomainStats in a modular way and implement the
VIR_DOMAIN_STATS_STATE group of statistics.
Although the function may look universal, I'd rather not
expose it to other drivers, as the coming stats groups are likely to do
qemu-specific things to obtain the stats.
---
Notes:
Version 4:
- fixed handling and error checking of @stats
- domain filtering flags are now rejected when passing in a domain list
src/qemu/qemu_driver.c | 198 +++++++++++++++++++++++++++++++++++++++++++++++++
1 file changed, 198 insertions(+)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 73959da..45a080b 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -17190,6 +17190,203 @@ qemuConnectGetDomainCapabilities(virConnectPtr conn,
}
+static int
+qemuDomainGetStatsState(virDomainObjPtr dom,
+ virDomainStatsRecordPtr record,
+ int *maxparams,
+ unsigned int privflags ATTRIBUTE_UNUSED)
+{
+ if (virTypedParamsAddInt(&record->params,
+ &record->nparams,
+ maxparams,
+ "state.state",
+ dom->state.state) < 0)
+ return -1;
+
+ if (virTypedParamsAddInt(&record->params,
+ &record->nparams,
+ maxparams,
+ "state.reason",
+ dom->state.reason) < 0)
+ return -1;
+
+ return 0;
+}
+
+
+typedef int
+(*qemuDomainGetStatsFunc)(virDomainObjPtr dom,
+ virDomainStatsRecordPtr record,
+ int *maxparams,
+ unsigned int flags);
+
+struct qemuDomainGetStatsWorker {
+ qemuDomainGetStatsFunc func;
+ unsigned int stats;
+};
+
+static struct qemuDomainGetStatsWorker qemuDomainGetStatsWorkers[] = {
+ { qemuDomainGetStatsState, VIR_DOMAIN_STATS_STATE},
+ { NULL, 0 }
+};
+
+
+static int
+qemuDomainGetStatsCheckSupport(unsigned int *stats,
+ bool enforce)
+{
+ unsigned int supportedstats = 0;
+ size_t i;
+
+ for (i = 0; qemuDomainGetStatsWorkers[i].func; i++)
+ supportedstats |= qemuDomainGetStatsWorkers[i].stats;
+
+ if (*stats == 0) {
+ *stats = supportedstats;
+ return 0;
+ }
+
+ if (enforce &&
+ *stats & ~supportedstats) {
+ virReportError(VIR_ERR_ARGUMENT_UNSUPPORTED,
+ _("Stats types bits 0x%x are not supported by this daemon"),
+ *stats & ~supportedstats);
+ return -1;
+ }
+
+ *stats &= supportedstats;
+ return 0;
+}
+
+
+static int
+qemuDomainGetStats(virConnectPtr conn,
+ virDomainObjPtr dom,
+ unsigned int stats,
+ virDomainStatsRecordPtr *record,
+ unsigned int flags)
+{
+ int maxparams = 0;
+ virDomainStatsRecordPtr tmp;
+ size_t i;
+ int ret = -1;
+
+ if (VIR_ALLOC(tmp) < 0)
+ goto cleanup;
+
+ for (i = 0; qemuDomainGetStatsWorkers[i].func; i++) {
+ if (stats & qemuDomainGetStatsWorkers[i].stats) {
+ if (qemuDomainGetStatsWorkers[i].func(dom, tmp, &maxparams,
+ flags) < 0)
+ goto cleanup;
+ }
+ }
+
+ if (!(tmp->dom = virGetDomain(conn, dom->def->name, dom->def->uuid)))
+ goto cleanup;
+
+ *record = tmp;
+ tmp = NULL;
+ ret = 0;
+
+ cleanup:
+ if (tmp) {
+ virTypedParamsFree(tmp->params, tmp->nparams);
+ VIR_FREE(tmp);
+ }
+
+ return ret;
+}
+
+
+
+static int
+qemuConnectGetAllDomainStats(virConnectPtr conn,
+ virDomainPtr *doms,
+ unsigned int ndoms,
+ unsigned int stats,
+ virDomainStatsRecordPtr **retStats,
+ unsigned int flags)
+{
+ virQEMUDriverPtr driver = conn->privateData;
+ virDomainPtr *domlist = NULL;
+ virDomainObjPtr dom = NULL;
+ virDomainStatsRecordPtr *tmpstats = NULL;
+ bool enforce = !!(flags & VIR_CONNECT_GET_ALL_DOMAINS_STATS_ENFORCE_STATS);
+ int ntempdoms;
+ int nstats = 0;
+ size_t i;
+ int ret = -1;
+
+ if (ndoms)
+ virCheckFlags(VIR_CONNECT_GET_ALL_DOMAINS_STATS_ENFORCE_STATS, -1);
+ else
+ virCheckFlags(VIR_CONNECT_LIST_DOMAINS_FILTERS_ACTIVE |
+ VIR_CONNECT_LIST_DOMAINS_FILTERS_PERSISTENT |
+ VIR_CONNECT_LIST_DOMAINS_FILTERS_STATE |
+ VIR_CONNECT_GET_ALL_DOMAINS_STATS_ENFORCE_STATS, -1);
+
+ if (virConnectGetAllDomainStatsEnsureACL(conn) < 0)
+ return -1;
+
+ if (qemuDomainGetStatsCheckSupport(&stats, enforce) < 0)
+ return -1;
+
+ if (!ndoms) {
+ unsigned int lflags = flags & (VIR_CONNECT_LIST_DOMAINS_FILTERS_ACTIVE |
+ VIR_CONNECT_LIST_DOMAINS_FILTERS_PERSISTENT |
+ VIR_CONNECT_LIST_DOMAINS_FILTERS_STATE);
+
+ if ((ntempdoms = virDomainObjListExport(driver->domains,
+ conn,
+ &domlist,
+ virConnectGetAllDomainStatsCheckACL,
+ lflags)) < 0)
+ goto cleanup;
+
+ ndoms = ntempdoms;
+ doms = domlist;
+ }
+
+ if (VIR_ALLOC_N(tmpstats, ndoms + 1) < 0)
+ goto cleanup;
+
+ for (i = 0; i < ndoms; i++) {
+ virDomainStatsRecordPtr tmp = NULL;
+
+ if (!(dom = qemuDomObjFromDomain(doms[i])))
+ continue;
+
+ if (!domlist &&
+ !virConnectGetAllDomainStatsCheckACL(conn, dom->def))
+ continue;
+
+ if (qemuDomainGetStats(conn, dom, stats, &tmp, flags) < 0)
+ goto cleanup;
+
+ if (tmp)
+ tmpstats[nstats++] = tmp;
+
+ virObjectUnlock(dom);
+ dom = NULL;
+ }
+
+ *retStats = tmpstats;
+ tmpstats = NULL;
+
+ ret = nstats;
+
+ cleanup:
+ if (dom)
+ virObjectUnlock(dom);
+
+ virDomainStatsRecordListFree(tmpstats);
+ virDomainListFree(domlist);
+
+ return ret;
+}
+
+
static virDriver qemuDriver = {
.no = VIR_DRV_QEMU,
.name = QEMU_DRIVER_NAME,
@@ -17387,6 +17584,7 @@ static virDriver qemuDriver = {
.domainSetTime = qemuDomainSetTime, /* 1.2.5 */
.nodeGetFreePages = qemuNodeGetFreePages, /* 1.2.6 */
.connectGetDomainCapabilities = qemuConnectGetDomainCapabilities, /* 1.2.7 */
+ .connectGetAllDomainStats = qemuConnectGetAllDomainStats, /* 1.2.8 */
};
--
2.0.2
10 years, 2 months
[libvirt] [PATCHv3 0/5] Implement bulk stats API
by Peter Krempa
New iteration of the series with a few improvements
Peter Krempa (5):
conf: Add helper to free domain list
lib: Add few flags for the bulk stats APIs
remote: Implement bulk domain stats APIs in the remote driver
qemu: Implement bulk stats API and one of the stats groups to return
virsh: Implement command to exercise the bulk stats APIs
daemon/remote.c | 86 +++++++++++++++++++
include/libvirt/libvirt.h.in | 15 ++++
src/conf/domain_conf.c | 31 +++++--
src/conf/domain_conf.h | 2 +
src/libvirt.c | 29 ++++++-
src/libvirt_private.syms | 1 +
src/qemu/qemu_driver.c | 175 +++++++++++++++++++++++++++++++++++++++
src/remote/remote_driver.c | 84 +++++++++++++++++++
src/remote/remote_protocol.x | 25 +++++-
src/remote_protocol-structs | 22 +++++
tools/virsh-domain-monitor.c | 191 +++++++++++++++++++++++++++++++++++++++++++
tools/virsh.pod | 34 ++++++++
12 files changed, 685 insertions(+), 10 deletions(-)
--
2.0.2
10 years, 2 months
[libvirt] [PATCH 0/3] Coverity patches to resolve RESOURCE_LEAK
by Wang Rui
I did a Coverity scan of libvirt-1.2.8, as John Ferlan did.
He has sent many patches about RESOURCE_LEAK. I picked
the remaining errors to fix. There are also many more errors
to analyze and fix in the future.
Wang Rui (3):
util: Resolve Coverity RESOURCE_LEAK
tests: Resolve Coverity RESOURCE_LEAK
qemu_capabilities: Resolve Coverity RESOURCE_LEAK
src/qemu/qemu_capabilities.c | 4 +++-
src/util/virpci.c | 1 +
tests/shunloadtest.c | 1 +
3 files changed, 5 insertions(+), 1 deletion(-)
--
1.7.12.4
10 years, 2 months
[libvirt] [PATCH 0/4] Introduce support for virtio-blk-pci iothreads
by John Ferlan
Introduce iothreads support to libvirt. These will be used to facilitate
adding an iothread attribute to a supported disk, which will enable having
a dedicated event loop thread for the disk. IOThreads are a QEMU feature
recently added (in 2.1) as a replacement for the virtio-blk data plane
functionality that has been in tech preview since 1.4.
Followup patches will add APIs in order to list the IOThreads and eventually
be able to assign IOThreads to specific CPUs if so desired. This set of
patches should cover at least the bare minimum needed to allow modifying
domain XML in order to use the feature.
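On the QEMU side (2.1+), an iothread is an -object and the disk device references it by id, so the command-line fragments the new qemu_command.c code has to emit look roughly like `-object iothread,id=iothread1` plus an `iothread=iothread1` property on the virtio-blk-pci device. A sketch of formatting those fragments — the helper names are illustrative, not the actual libvirt ones:

```c
#include <stdio.h>

/* Emit the -object argument for one iothread (QEMU 2.1+ syntax). */
int
iothread_object_format(unsigned int id, char *buf, size_t buflen)
{
    int n = snprintf(buf, buflen, "iothread,id=iothread%u", id);
    return (n > 0 && (size_t)n < buflen) ? 0 : -1;
}

/* Append the iothread property to a virtio-blk-pci -device string. */
int
iothread_device_prop_format(unsigned int id, char *buf, size_t buflen)
{
    int n = snprintf(buf, buflen, ",iothread=iothread%u", id);
    return (n > 0 && (size_t)n < buflen) ? 0 : -1;
}
```

The id-based reference is also what makes the "inuse" tracking in v2 necessary: each iothread object may back at most one disk.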
John Ferlan (4):
domain_conf: Introduce iothreads XML
qemu: Add support for iothreads
domain_conf: Add support for iothreads in disk definition
qemu: Allow use of iothreads for disk definitions
docs/formatdomain.html.in | 34 ++++++++++++++
docs/schemas/domaincommon.rng | 25 +++++++++++
src/conf/domain_conf.c | 52 +++++++++++++++++++++-
src/conf/domain_conf.h | 4 ++
src/qemu/qemu_capabilities.c | 2 +
src/qemu/qemu_capabilities.h | 1 +
src/qemu/qemu_command.c | 48 ++++++++++++++++++++
src/qemu/qemu_command.h | 5 +++
src/qemu/qemu_hotplug.c | 11 +++++
.../qemuxml2argv-iothreads-disk.args | 17 +++++++
.../qemuxml2argv-iothreads-disk.xml | 40 +++++++++++++++++
tests/qemuxml2argvdata/qemuxml2argv-iothreads.args | 8 ++++
tests/qemuxml2argvdata/qemuxml2argv-iothreads.xml | 29 ++++++++++++
tests/qemuxml2argvtest.c | 4 ++
tests/qemuxml2xmltest.c | 2 +
15 files changed, 281 insertions(+), 1 deletion(-)
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-iothreads-disk.args
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-iothreads-disk.xml
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-iothreads.args
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-iothreads.xml
--
1.9.3
10 years, 2 months