[libvirt] [PATCH 00/12] implements iothread polling feature into libvirt

Pavel Hrdina (12):
  conf: introduce domain XML element <polling> for iothread
  lib: introduce an API to add new iothread with parameters
  lib: introduce an API to modify parameters of existing iothread
  virsh: extend iothreadadd to support virDomainAddIOThreadParams
  virsh: introduce command iothreadmod that uses virDomainModIOThreadParams
  qemu_capabilities: detect whether iothread polling is supported
  util: properly handle NULL props in virQEMUBuildObjectCommandlineFromJSON
  qemu_monitor: extend qemuMonitorGetIOThreads to fetch polling data
  qemu: implement iothread polling
  qemu: implement virDomainAddIOThreadParams API
  qemu: implement virDomainModIOThreadParams API
  news: add entry for iothread polling feature

 docs/formatdomain.html.in                          |  19 +-
 docs/news.xml                                      |   9 +
 docs/schemas/domaincommon.rng                      |  24 ++
 include/libvirt/libvirt-domain.h                   |  44 ++++
 src/conf/domain_conf.c                             | 199 +++++++++++++-
 src/conf/domain_conf.h                             |  18 +-
 src/driver-hypervisor.h                            |  16 ++
 src/libvirt-domain.c                               | 140 ++++++++++
 src/libvirt_private.syms                           |   2 +
 src/libvirt_public.syms                            |   6 +
 src/qemu/qemu_capabilities.c                       |   2 +
 src/qemu/qemu_capabilities.h                       |   1 +
 src/qemu/qemu_command.c                            |  78 +++++-
 src/qemu/qemu_command.h                            |   5 +-
 src/qemu/qemu_domain.c                             |  23 +-
 src/qemu/qemu_domain.h                             |   6 +
 src/qemu/qemu_driver.c                             | 292 ++++++++++++++++++---
 src/qemu/qemu_monitor.c                            |  25 +-
 src/qemu/qemu_monitor.h                            |   9 +-
 src/qemu/qemu_monitor_json.c                       |  51 +++-
 src/qemu/qemu_monitor_json.h                       |   7 +-
 src/qemu/qemu_process.c                            |  14 +-
 src/remote/remote_driver.c                         |   2 +
 src/remote/remote_protocol.x                       |  34 ++-
 src/remote_protocol-structs                        |  20 ++
 src/util/virqemu.c                                 |   3 +-
 .../generic-iothreads-no-polling.xml               |  22 ++
 .../generic-iothreads-polling-disabled.xml         |  24 ++
 .../generic-iothreads-polling-enabled-fail.xml     |  24 ++
 .../generic-iothreads-polling-enabled.xml          |  24 ++
 tests/genericxml2xmltest.c                         |   6 +
 tests/qemucapabilitiesdata/caps_2.9.0.x86_64.replies | 12 +
 tests/qemucapabilitiesdata/caps_2.9.0.x86_64.xml   |   1 +
 tests/qemumonitorjsontest.c                        |   2 +-
 .../qemuxml2argv-iothreads-polling-disabled.args   |  23 ++
 .../qemuxml2argv-iothreads-polling-disabled.xml    |  36 +++
 .../qemuxml2argv-iothreads-polling-enabled.args    |  23 ++
 .../qemuxml2argv-iothreads-polling-enabled.xml     |  36 +++
 ...emuxml2argv-iothreads-polling-not-supported.xml |   1 +
 tests/qemuxml2argvtest.c                           |   8 +
 tests/testutils.c                                  |   3 +-
 tools/virsh-domain.c                               | 188 ++++++++++++-
 tools/virsh.pod                                    |  18 ++
 43 files changed, 1442 insertions(+), 58 deletions(-)
 create mode 100644 tests/genericxml2xmlindata/generic-iothreads-no-polling.xml
 create mode 100644 tests/genericxml2xmlindata/generic-iothreads-polling-disabled.xml
 create mode 100644 tests/genericxml2xmlindata/generic-iothreads-polling-enabled-fail.xml
 create mode 100644 tests/genericxml2xmlindata/generic-iothreads-polling-enabled.xml
 create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-iothreads-polling-disabled.args
 create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-iothreads-polling-disabled.xml
 create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-iothreads-polling-enabled.args
 create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-iothreads-polling-enabled.xml
 create mode 120000 tests/qemuxml2argvdata/qemuxml2argv-iothreads-polling-not-supported.xml
-- 
2.11.1

QEMU 2.9.0 will introduce a polling feature for AioContext that polls for events instead of using blocking syscalls. This means that polling can in most cases be faster, but it also increases CPU utilization. To address this issue QEMU implements a self-tuning algorithm that modifies the current polling time to adapt to different workloads, and it can also fall back to blocking syscalls.

For each IOThread this is all controlled by three parameters: poll-max-ns, poll-grow and poll-shrink. If poll-max-ns is set to 0 it disables polling; if it is omitted the default behavior is used; any value greater than 0 enables polling. Parameters poll-grow and poll-shrink configure how the self-tuning algorithm will adapt the current polling time. If they are omitted or set to 0, default values will be used.

Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
---
 docs/formatdomain.html.in                          |  19 ++-
 docs/schemas/domaincommon.rng                      |  24 +++
 src/conf/domain_conf.c                             | 169 ++++++++++++++++++++-
 src/conf/domain_conf.h                             |   8 +
 src/libvirt_private.syms                           |   1 +
 .../generic-iothreads-no-polling.xml               |  22 +++
 .../generic-iothreads-polling-disabled.xml         |  24 +++
 .../generic-iothreads-polling-enabled-fail.xml     |  24 +++
 .../generic-iothreads-polling-enabled.xml          |  24 +++
 tests/genericxml2xmltest.c                         |   6 +
 tests/testutils.c                                  |   3 +-
 11 files changed, 318 insertions(+), 6 deletions(-)
 create mode 100644 tests/genericxml2xmlindata/generic-iothreads-no-polling.xml
 create mode 100644 tests/genericxml2xmlindata/generic-iothreads-polling-disabled.xml
 create mode 100644 tests/genericxml2xmlindata/generic-iothreads-polling-enabled-fail.xml
 create mode 100644 tests/genericxml2xmlindata/generic-iothreads-polling-enabled.xml

diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in
index b69bd4c44c..f01a11cf93 100644
--- a/docs/formatdomain.html.in
+++ b/docs/formatdomain.html.in
@@ -611,6 +611,7 @@
 ...
 <iothreadids>
   <iothread id="2"/>
+  <polling enabled='yes' max_ns="4000" grow="2" shrink="4"/>
   <iothread id="4"/>
   <iothread id="6"/>
   <iothread id="8"/>
@@ -645,7 +646,23 @@
     defined for the domain, then the <code>iothreads</code> value
     will be adjusted accordingly.
     <span class="since">Since 1.2.15</span>
-  </dd>
+  </dd>
+  <dt><code>polling</code></dt>
+  <dd>
+    The optional <code>polling</code> element provides the capability to
+    enable/disable and configure the polling mechanism for iothreads. Attribute
+    <code>max_ns</code> specifies the maximum time in <code>ns</code>
+    between poll requests and is mandatory if polling is explicitly
+    enabled. Attributes <code>grow</code> and <code>shrink</code> specify
+    times in <code>ns</code> that are used to configure how the polling
+    algorithm will adapt the current polling time to different workloads.
+    If any of <code>max_ns</code>, <code>grow</code> or <code>shrink</code>
+    is set to <code>0</code>, it is the same as not providing it at all.
+    If this element is omitted, the default behavior and values are set by
+    the hypervisor. Possible values for <code>enabled</code> are
+    <code>yes</code> and <code>no</code>. Available only for the QEMU driver.
+ <span class="since">Since 3.1.0</span> + </dd> </dl> <h3><a name="elementsCPUTuning">CPU Tuning</a></h3> diff --git a/docs/schemas/domaincommon.rng b/docs/schemas/domaincommon.rng index c5f101325e..11aecbaa6e 100644 --- a/docs/schemas/domaincommon.rng +++ b/docs/schemas/domaincommon.rng @@ -661,6 +661,30 @@ <attribute name="id"> <ref name="unsignedInt"/> </attribute> + <optional> + <element name="polling"> + <optional> + <attribute name="enabled"> + <ref name="virYesNo"/> + </attribute> + </optional> + <optional> + <attribute name="max_ns"> + <ref name="unsignedInt"/> + </attribute> + </optional> + <optional> + <attribute name="grow"> + <ref name="unsignedInt"/> + </attribute> + </optional> + <optional> + <attribute name="shrink"> + <ref name="unsignedInt"/> + </attribute> + </optional> + </element> + </optional> </element> </zeroOrMore> </element> diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c index 79bdbdf50c..4b552a9175 100644 --- a/src/conf/domain_conf.c +++ b/src/conf/domain_conf.c @@ -1228,6 +1228,30 @@ virDomainDeviceDefCheckUnsupportedMemoryDevice(virDomainDeviceDefPtr dev) } +/** + * virDomainDefCheckUnsupportedIOThreadPolling: + * @def: domain definition + * + * Returns -1 if the domain definition would configure IOThread polling + * and reports an error, otherwise returns 0. 
+ */ +static int +virDomainDefCheckUnsupportedIOThreadPolling(virDomainDefPtr def) +{ + size_t i; + + for (i = 0; i < def->niothreadids; i++) { + if (def->iothreadids[i]->poll_enabled != VIR_TRISTATE_BOOL_ABSENT) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", + _("IOThread polling is not supported by this driver")); + return -1; + } + } + + return 0; +} + + bool virDomainObjTaint(virDomainObjPtr obj, virDomainTaintFlags taint) { @@ -4467,6 +4491,10 @@ virDomainDefPostParseCheckFeatures(virDomainDefPtr def, return -1; } + if (UNSUPPORTED(VIR_DOMAIN_DEF_FEATURE_IOTHREAD_POLLING) && + virDomainDefCheckUnsupportedIOThreadPolling(def) < 0) + return -1; + return 0; } @@ -4585,6 +4613,60 @@ virDomainVcpuDefPostParse(virDomainDefPtr def) } +int +virDomainIOThreadDefPostParse(virDomainIOThreadIDDefPtr iothread) +{ + if ((iothread->poll_grow > 0 || iothread->poll_shrink > 0) && + iothread->poll_max_ns == 0) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, + _("polling grow or shrink is set for iothread id " + "'%u' but max_ns is not set"), + iothread->iothread_id); + return -1; + } + + switch (iothread->poll_enabled) { + case VIR_TRISTATE_BOOL_ABSENT: + if (iothread->poll_max_ns > 0) + iothread->poll_enabled = VIR_TRISTATE_BOOL_YES; + break; + + case VIR_TRISTATE_BOOL_NO: + iothread->poll_max_ns = 0; + iothread->poll_grow = 0; + iothread->poll_shrink = 0; + break; + + case VIR_TRISTATE_BOOL_YES: + if (iothread->poll_max_ns == 0) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, + _("polling is enabled for iothread id '%u' but " + "max_ns is not set"), iothread->iothread_id); + return -1; + } + break; + + case VIR_TRISTATE_BOOL_LAST: + break; + } + + return 0; +} + + +static int +virDomainDefPostParseIOThreads(virDomainDefPtr def) +{ + size_t i; + + for (i = 0; i < def->niothreadids; i++) + if (virDomainIOThreadDefPostParse(def->iothreadids[i]) < 0) + return -1; + + return 0; +} + + static int virDomainDefPostParseInternal(virDomainDefPtr def, struct 
virDomainDefPostParseDeviceIteratorData *data) @@ -4599,6 +4681,9 @@ virDomainDefPostParseInternal(virDomainDefPtr def, if (virDomainVcpuDefPostParse(def) < 0) return -1; + if (virDomainDefPostParseIOThreads(def) < 0) + return -1; + if (virDomainDefPostParseMemory(def, data->parseFlags) < 0) return -1; @@ -15574,7 +15659,9 @@ virDomainIdmapDefParseXML(xmlXPathContextPtr ctxt, * * <iothreads>4</iothreads> * <iothreadids> - * <iothread id='1'/> + * <iothread id='1'> + * <polling enabled='yes' max_ns='4000' grow='2' shrink='4'/> + * </iothread> * <iothread id='3'/> * <iothread id='5'/> * <iothread id='7'/> @@ -15587,6 +15674,7 @@ virDomainIOThreadIDDefParseXML(xmlNodePtr node, virDomainIOThreadIDDefPtr iothrid; xmlNodePtr oldnode = ctxt->node; char *tmp = NULL; + int npoll = 0; if (VIR_ALLOC(iothrid) < 0) return NULL; @@ -15604,6 +15692,58 @@ virDomainIOThreadIDDefParseXML(xmlNodePtr node, _("invalid iothread 'id' value '%s'"), tmp); goto error; } + VIR_FREE(tmp); + + if ((npoll = virXPathNodeSet("./polling", ctxt, NULL)) < 0) + goto error; + + if (npoll > 1) { + virReportError(VIR_ERR_XML_ERROR, "%s", + _("only one polling element is allowed for each " + "<iothread> element")); + goto error; + } + + if (npoll > 0) { + if ((tmp = virXPathString("string(./polling/@enabled)", ctxt))) { + int enabled = virTristateBoolTypeFromString(tmp); + if (enabled < 0) { + virReportError(VIR_ERR_XML_ERROR, + _("invalid polling 'enabled' value '%s'"), tmp); + goto error; + } + iothrid->poll_enabled = enabled; + } else { + virReportError(VIR_ERR_XML_ERROR, "%s", + _("missing 'enabled' attribute in <polling> element")); + goto error; + } + VIR_FREE(tmp); + + if ((tmp = virXPathString("string(./polling/@max_ns)", ctxt)) && + virStrToLong_uip(tmp, NULL, 10, &iothrid->poll_max_ns) < 0) { + virReportError(VIR_ERR_XML_ERROR, + _("invalid polling 'max_ns' value '%s'"), tmp); + goto error; + } + VIR_FREE(tmp); + + if ((tmp = virXPathString("string(./polling/@grow)", ctxt)) && + 
virStrToLong_uip(tmp, NULL, 10, &iothrid->poll_grow) < 0) { + virReportError(VIR_ERR_XML_ERROR, + _("invalid polling 'grow' value '%s'"), tmp); + goto error; + } + VIR_FREE(tmp); + + if ((tmp = virXPathString("string(./polling/@shrink)", ctxt)) && + virStrToLong_uip(tmp, NULL, 10, &iothrid->poll_shrink) < 0) { + virReportError(VIR_ERR_XML_ERROR, + _("invalid polling 'shrink' value '%s'"), tmp); + goto error; + } + VIR_FREE(tmp); + } cleanup: VIR_FREE(tmp); @@ -23713,7 +23853,8 @@ virDomainDefIothreadShouldFormat(virDomainDefPtr def) size_t i; for (i = 0; i < def->niothreadids; i++) { - if (!def->iothreadids[i]->autofill) + if (!def->iothreadids[i]->autofill || + def->iothreadids[i]->poll_enabled != VIR_TRISTATE_BOOL_ABSENT) return true; } @@ -23918,8 +24059,28 @@ virDomainDefFormatInternal(virDomainDefPtr def, virBufferAddLit(buf, "<iothreadids>\n"); virBufferAdjustIndent(buf, 2); for (i = 0; i < def->niothreadids; i++) { - virBufferAsprintf(buf, "<iothread id='%u'/>\n", - def->iothreadids[i]->iothread_id); + virDomainIOThreadIDDefPtr iothread = def->iothreadids[i]; + virBufferAsprintf(buf, "<iothread id='%u'", iothread->iothread_id); + if (iothread->poll_enabled != VIR_TRISTATE_BOOL_ABSENT) { + virBufferAddLit(buf, ">\n"); + virBufferAdjustIndent(buf, 2); + virBufferAsprintf(buf, "<polling enabled='%s'", + virTristateBoolTypeToString(iothread->poll_enabled)); + if (iothread->poll_max_ns) + virBufferAsprintf(buf, " max_ns='%u'", + iothread->poll_max_ns); + if (iothread->poll_grow) + virBufferAsprintf(buf, " grow='%u'", + iothread->poll_grow); + if (iothread->poll_shrink) + virBufferAsprintf(buf, " shrink='%u'", + iothread->poll_shrink); + virBufferAddLit(buf, "/>\n"); + virBufferAdjustIndent(buf, -2); + virBufferAddLit(buf, "</iothread>\n"); + } else { + virBufferAddLit(buf, "/>\n"); + } } virBufferAdjustIndent(buf, -2); virBufferAddLit(buf, "</iothreadids>\n"); diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h index 1e53cc3280..8ac1d8a409 100644 --- 
a/src/conf/domain_conf.h +++ b/src/conf/domain_conf.h @@ -2074,11 +2074,18 @@ struct _virDomainIOThreadIDDef { int thread_id; virBitmapPtr cpumask; + virTristateBool poll_enabled; + unsigned int poll_max_ns; + unsigned int poll_grow; + unsigned int poll_shrink; + virDomainThreadSchedParam sched; }; void virDomainIOThreadIDDefFree(virDomainIOThreadIDDefPtr def); +int virDomainIOThreadDefPostParse(virDomainIOThreadIDDefPtr iothread); + typedef struct _virDomainCputune virDomainCputune; typedef virDomainCputune *virDomainCputunePtr; @@ -2410,6 +2417,7 @@ typedef enum { VIR_DOMAIN_DEF_FEATURE_OFFLINE_VCPUPIN = (1 << 2), VIR_DOMAIN_DEF_FEATURE_NAME_SLASH = (1 << 3), VIR_DOMAIN_DEF_FEATURE_INDIVIDUAL_VCPUS = (1 << 4), + VIR_DOMAIN_DEF_FEATURE_IOTHREAD_POLLING = (1 << 5), } virDomainDefFeatures; diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms index e6ccd697d2..97aee9c0e3 100644 --- a/src/libvirt_private.syms +++ b/src/libvirt_private.syms @@ -371,6 +371,7 @@ virDomainHypervTypeToString; virDomainInputDefFree; virDomainIOMMUModelTypeFromString; virDomainIOMMUModelTypeToString; +virDomainIOThreadDefPostParse; virDomainIOThreadIDAdd; virDomainIOThreadIDDefFree; virDomainIOThreadIDDel; diff --git a/tests/genericxml2xmlindata/generic-iothreads-no-polling.xml b/tests/genericxml2xmlindata/generic-iothreads-no-polling.xml new file mode 100644 index 0000000000..b7e5a11c06 --- /dev/null +++ b/tests/genericxml2xmlindata/generic-iothreads-no-polling.xml @@ -0,0 +1,22 @@ +<domain type='qemu'> + <name>foo</name> + <uuid>c7a5fdbd-edaf-9455-926a-d65c16db1809</uuid> + <memory unit='KiB'>219136</memory> + <currentMemory unit='KiB'>219136</currentMemory> + <vcpu placement='static'>1</vcpu> + <iothreads>2</iothreads> + <iothreadids> + <iothread id='1'/> + <iothread id='2'/> + </iothreadids> + <os> + <type arch='x86_64' machine='pc'>hvm</type> + <boot dev='hd'/> + </os> + <clock offset='utc'/> + <on_poweroff>destroy</on_poweroff> + <on_reboot>restart</on_reboot> + 
<on_crash>destroy</on_crash> + <devices> + </devices> +</domain> diff --git a/tests/genericxml2xmlindata/generic-iothreads-polling-disabled.xml b/tests/genericxml2xmlindata/generic-iothreads-polling-disabled.xml new file mode 100644 index 0000000000..0352a80900 --- /dev/null +++ b/tests/genericxml2xmlindata/generic-iothreads-polling-disabled.xml @@ -0,0 +1,24 @@ +<domain type='qemu'> + <name>foo</name> + <uuid>c7a5fdbd-edaf-9455-926a-d65c16db1809</uuid> + <memory unit='KiB'>219136</memory> + <currentMemory unit='KiB'>219136</currentMemory> + <vcpu placement='static'>1</vcpu> + <iothreads>2</iothreads> + <iothreadids> + <iothread id='1'> + <polling enabled='no'/> + </iothread> + <iothread id='2'/> + </iothreadids> + <os> + <type arch='x86_64' machine='pc'>hvm</type> + <boot dev='hd'/> + </os> + <clock offset='utc'/> + <on_poweroff>destroy</on_poweroff> + <on_reboot>restart</on_reboot> + <on_crash>destroy</on_crash> + <devices> + </devices> +</domain> diff --git a/tests/genericxml2xmlindata/generic-iothreads-polling-enabled-fail.xml b/tests/genericxml2xmlindata/generic-iothreads-polling-enabled-fail.xml new file mode 100644 index 0000000000..6f77922e7a --- /dev/null +++ b/tests/genericxml2xmlindata/generic-iothreads-polling-enabled-fail.xml @@ -0,0 +1,24 @@ +<domain type='qemu'> + <name>foo</name> + <uuid>c7a5fdbd-edaf-9455-926a-d65c16db1809</uuid> + <memory unit='KiB'>219136</memory> + <currentMemory unit='KiB'>219136</currentMemory> + <vcpu placement='static'>1</vcpu> + <iothreads>2</iothreads> + <iothreadids> + <iothread id='1'> + <polling enabled='yes'/> + </iothread> + <iothread id='2'/> + </iothreadids> + <os> + <type arch='x86_64' machine='pc'>hvm</type> + <boot dev='hd'/> + </os> + <clock offset='utc'/> + <on_poweroff>destroy</on_poweroff> + <on_reboot>restart</on_reboot> + <on_crash>destroy</on_crash> + <devices> + </devices> +</domain> diff --git a/tests/genericxml2xmlindata/generic-iothreads-polling-enabled.xml 
b/tests/genericxml2xmlindata/generic-iothreads-polling-enabled.xml new file mode 100644 index 0000000000..ef9161f367 --- /dev/null +++ b/tests/genericxml2xmlindata/generic-iothreads-polling-enabled.xml @@ -0,0 +1,24 @@ +<domain type='qemu'> + <name>foo</name> + <uuid>c7a5fdbd-edaf-9455-926a-d65c16db1809</uuid> + <memory unit='KiB'>219136</memory> + <currentMemory unit='KiB'>219136</currentMemory> + <vcpu placement='static'>1</vcpu> + <iothreads>2</iothreads> + <iothreadids> + <iothread id='1'> + <polling enabled='yes' max_ns='4000' grow='50'/> + </iothread> + <iothread id='2'/> + </iothreadids> + <os> + <type arch='x86_64' machine='pc'>hvm</type> + <boot dev='hd'/> + </os> + <clock offset='utc'/> + <on_poweroff>destroy</on_poweroff> + <on_reboot>restart</on_reboot> + <on_crash>destroy</on_crash> + <devices> + </devices> +</domain> diff --git a/tests/genericxml2xmltest.c b/tests/genericxml2xmltest.c index 488190270f..c3a9586eab 100644 --- a/tests/genericxml2xmltest.c +++ b/tests/genericxml2xmltest.c @@ -100,6 +100,12 @@ mymain(void) DO_TEST("vcpus-individual"); + DO_TEST("iothreads-no-polling"); + DO_TEST("iothreads-polling-enabled"); + DO_TEST("iothreads-polling-disabled"); + DO_TEST_FULL("iothreads-polling-enabled-fail", 0, false, + TEST_COMPARE_DOM_XML2XML_RESULT_FAIL_PARSE); + virObjectUnref(caps); virObjectUnref(xmlopt); diff --git a/tests/testutils.c b/tests/testutils.c index a596a83a96..93a12bcc76 100644 --- a/tests/testutils.c +++ b/tests/testutils.c @@ -1101,7 +1101,8 @@ virCapsPtr virTestGenericCapsInit(void) } static virDomainDefParserConfig virTestGenericDomainDefParserConfig = { - .features = VIR_DOMAIN_DEF_FEATURE_INDIVIDUAL_VCPUS, + .features = VIR_DOMAIN_DEF_FEATURE_INDIVIDUAL_VCPUS | + VIR_DOMAIN_DEF_FEATURE_IOTHREAD_POLLING, }; static virDomainXMLPrivateDataCallbacks virTestGenericPrivateDataCallbacks; -- 2.11.1
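For reference, the polling-parameter normalization that this patch implements in virDomainIOThreadDefPostParse (grow/shrink require max_ns; an absent enabled flag with max_ns > 0 implies enabling; enabled='no' clears the tunables; enabled='yes' requires max_ns) can be sketched as follows. This is an illustration only, not libvirt code, and the helper name is hypothetical:

```python
def post_parse_polling(enabled, max_ns=0, grow=0, shrink=0):
    """Sketch of virDomainIOThreadDefPostParse from this patch.

    `enabled` is None (attribute absent), True ('yes') or False ('no');
    returns the normalized (enabled, max_ns, grow, shrink) tuple or
    raises ValueError for the combinations the patch rejects.
    """
    # grow/shrink only make sense together with max_ns
    if (grow > 0 or shrink > 0) and max_ns == 0:
        raise ValueError("polling grow or shrink is set but max_ns is not set")

    if enabled is None:
        # Attribute absent: a positive max_ns implicitly enables polling.
        if max_ns > 0:
            enabled = True
    elif enabled is False:
        # Explicit disable wipes all tuning values.
        max_ns = grow = shrink = 0
    else:
        # Explicit enable requires max_ns.
        if max_ns == 0:
            raise ValueError("polling is enabled but max_ns is not set")

    return enabled, max_ns, grow, shrink
```

For example, `post_parse_polling(None, max_ns=4000)` returns `(True, 4000, 0, 0)`, matching the patch's handling of VIR_TRISTATE_BOOL_ABSENT with a positive max_ns.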

On Tue, Feb 21, 2017 at 01:14:57PM +0100, Pavel Hrdina wrote:
QEMU 2.9.0 will introduce a polling feature for AioContext that polls for events instead of using blocking syscalls. This means that polling can in most cases be faster, but it also increases CPU utilization.
To address this issue QEMU implements a self-tuning algorithm that modifies the current polling time to adapt to different workloads, and it can also fall back to blocking syscalls.
For each IOThread this is all controlled by three parameters: poll-max-ns, poll-grow and poll-shrink. If poll-max-ns is set to 0 it disables polling; if it is omitted the default behavior is used; any value greater than 0 enables polling. Parameters poll-grow and poll-shrink configure how the self-tuning algorithm will adapt the current polling time. If they are omitted or set to 0, default values will be used.
With my app developer hat on I have to wonder how an app is supposed to figure out what to set these parameters to? It has been difficult enough figuring out the existing QEMU block tunables, but at least most of those can be set dependent on the type of storage used on the host side. Tunables whose use depends on the guest workload are harder to use since it largely involves predicting the unknown. IOW, is there a compelling reason to add these low-level parameters that are tightly coupled to the specific algorithm that QEMU happens to use today? The QEMU commits say the tunables all default to sane parameters so I'm inclined to say we ignore them at the libvirt level entirely.

Regards,
Daniel

-- 
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://entangle-photo.org -o- http://search.cpan.org/~danberr/ :|

On Tue, Feb 21, 2017 at 12:26:02PM +0000, Daniel P. Berrange wrote:
On Tue, Feb 21, 2017 at 01:14:57PM +0100, Pavel Hrdina wrote:
QEMU 2.9.0 will introduce a polling feature for AioContext that polls for events instead of using blocking syscalls. This means that polling can in most cases be faster, but it also increases CPU utilization.
To address this issue QEMU implements a self-tuning algorithm that modifies the current polling time to adapt to different workloads, and it can also fall back to blocking syscalls.
For each IOThread this is all controlled by three parameters: poll-max-ns, poll-grow and poll-shrink. If poll-max-ns is set to 0 it disables polling; if it is omitted the default behavior is used; any value greater than 0 enables polling. Parameters poll-grow and poll-shrink configure how the self-tuning algorithm will adapt the current polling time. If they are omitted or set to 0, default values will be used.
With my app developer hat on I have to wonder how an app is supposed to figure out what to set these parameters to? It has been difficult enough figuring out the existing QEMU block tunables, but at least most of those can be set dependent on the type of storage used on the host side. Tunables whose use depends on the guest workload are harder to use since it largely involves predicting the unknown. IOW, is there a compelling reason to add these low-level parameters that are tightly coupled to the specific algorithm that QEMU happens to use today?
I agree that it's probably way too complicated for management applications, but there is a small issue with QEMU. Currently, if you don't specify anything, polling is enabled with some reasonable default value, and based on experience with QEMU I'm not planning to count on them never changing the default behavior in the future. To explicitly enable polling you need to set poll-max-ns to some value greater than 0. We would have to check the QEMU source code and define the default value in our code in order to let users explicitly enable polling.
The QEMU commits say the tunables all default to sane parameters so I'm inclined to say we ignore them at the libvirt level entirely.
Yes, it would be way better to have only <polling enabled='yes|no'/> and let QEMU deal with the sane values for all parameters, but that would mean coming up with the sane values ourselves, or modifying QEMU to add another property that would simply control whether it is enabled or not. Pavel
-- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list

On Tue, Feb 21, 2017 at 01:48:15PM +0100, Pavel Hrdina wrote:
On Tue, Feb 21, 2017 at 12:26:02PM +0000, Daniel P. Berrange wrote:
On Tue, Feb 21, 2017 at 01:14:57PM +0100, Pavel Hrdina wrote:
QEMU 2.9.0 will introduce a polling feature for AioContext that polls for events instead of using blocking syscalls. This means that polling can in most cases be faster, but it also increases CPU utilization.
To address this issue QEMU implements a self-tuning algorithm that modifies the current polling time to adapt to different workloads, and it can also fall back to blocking syscalls.
For each IOThread this is all controlled by three parameters: poll-max-ns, poll-grow and poll-shrink. If poll-max-ns is set to 0 it disables polling; if it is omitted the default behavior is used; any value greater than 0 enables polling. Parameters poll-grow and poll-shrink configure how the self-tuning algorithm will adapt the current polling time. If they are omitted or set to 0, default values will be used.
With my app developer hat on I have to wonder how an app is supposed to figure out what to set these parameters to? It has been difficult enough figuring out the existing QEMU block tunables, but at least most of those can be set dependent on the type of storage used on the host side. Tunables whose use depends on the guest workload are harder to use since it largely involves predicting the unknown. IOW, is there a compelling reason to add these low-level parameters that are tightly coupled to the specific algorithm that QEMU happens to use today?
I agree that it's probably way too complicated for management applications, but there is a small issue with QEMU. Currently, if you don't specify anything, polling is enabled with some reasonable default value, and based on experience with QEMU I'm not planning to count on them never changing the default behavior in the future. To explicitly enable polling you need to set poll-max-ns to some value greater than 0. We would have to check the QEMU source code and define the default value in our code in order to let users explicitly enable polling.
The QEMU commit says polling is now enabled by default without needing to set poll-max-ns AFAICT:

commit cdd7abfdba9287a289c404dfdcb02316f9ffee7d
Author: Stefan Hajnoczi <stefanha@redhat.com>
Date:   Thu Jan 26 17:01:19 2017 +0000

    iothread: enable AioContext polling by default

    IOThread AioContexts are likely to consist only of event sources like
    virtqueue ioeventfds and LinuxAIO completion eventfds that are pollable
    from userspace (without system calls).

    We recently merged the AioContext polling feature but didn't enable it
    by default yet. I have gone back over the performance data on the
    mailing list and picked a default polling value that gave good results.

    Let's enable AioContext polling by default so users don't have another
    switch they need to set manually. If performance regressions are found
    we can still disable this for the QEMU 2.9 release.
The QEMU commits say the tunables all default to sane parameters so I'm inclined to say we ignore them at the libvirt level entirely.
Yes, it would be way better to have only <polling enabled='yes|no'/> and let QEMU deal with the sane values for all parameters, but that would mean coming up with the sane values ourselves, or modifying QEMU to add another property that would simply control whether it is enabled or not.
I'm saying don't even add that. Do exactly nothing and just rely on the QEMU defaults here. This is not affecting guest ABI at all, so it doesn't matter if QEMU changes its defaults later. In fact, if QEMU changes defaults based on newer performance measurements, it is a good thing if libvirt hasn't hardcoded all its VM configs to the old default.

Regards,
Daniel

On Tue, Feb 21, 2017 at 12:55:51PM +0000, Daniel P. Berrange wrote:
On Tue, Feb 21, 2017 at 01:48:15PM +0100, Pavel Hrdina wrote:
On Tue, Feb 21, 2017 at 12:26:02PM +0000, Daniel P. Berrange wrote:
On Tue, Feb 21, 2017 at 01:14:57PM +0100, Pavel Hrdina wrote:
QEMU 2.9.0 will introduce a polling feature for AioContext that polls for events instead of using blocking syscalls. This means that polling can in most cases be faster, but it also increases CPU utilization.
To address this issue QEMU implements a self-tuning algorithm that modifies the current polling time to adapt to different workloads, and it can also fall back to blocking syscalls.
For each IOThread this is all controlled by three parameters: poll-max-ns, poll-grow and poll-shrink. If poll-max-ns is set to 0 it disables polling; if it is omitted the default behavior is used; any value greater than 0 enables polling. Parameters poll-grow and poll-shrink configure how the self-tuning algorithm will adapt the current polling time. If they are omitted or set to 0, default values will be used.
With my app developer hat on I have to wonder how an app is supposed to figure out what to set these parameters to? It has been difficult enough figuring out the existing QEMU block tunables, but at least most of those can be set dependent on the type of storage used on the host side. Tunables whose use depends on the guest workload are harder to use since it largely involves predicting the unknown. IOW, is there a compelling reason to add these low-level parameters that are tightly coupled to the specific algorithm that QEMU happens to use today?
I agree that it's probably way too complicated for management applications, but there is a small issue with QEMU. Currently, if you don't specify anything, polling is enabled with some reasonable default value, and based on experience with QEMU I'm not planning to count on them never changing the default behavior in the future. To explicitly enable polling you need to set poll-max-ns to some value greater than 0. We would have to check the QEMU source code and define the default value in our code in order to let users explicitly enable polling.
The QEMU commit says polling is now enabled by default without needing to set poll-max-ns AFAICT
commit cdd7abfdba9287a289c404dfdcb02316f9ffee7d
Author: Stefan Hajnoczi <stefanha@redhat.com>
Date:   Thu Jan 26 17:01:19 2017 +0000
iothread: enable AioContext polling by default
IOThread AioContexts are likely to consist only of event sources like virtqueue ioeventfds and LinuxAIO completion eventfds that are pollable from userspace (without system calls).
We recently merged the AioContext polling feature but didn't enable it by default yet. I have gone back over the performance data on the mailing list and picked a default polling value that gave good results.
Let's enable AioContext polling by default so users don't have another switch they need to set manually. If performance regressions are found we can still disable this for the QEMU 2.9 release.
The QEMU commits say the tunables all default to sane parameters so I'm inclined to say we ignore them at the libvirt level entirely.
Yes, it would be way better to have only <polling enabled='yes|no'/> and let QEMU deal with the sane values for all parameters, but that would mean coming up with the sane values ourselves, or modifying QEMU to add another property that would simply control whether it is enabled or not.
I'm saying don't even add that.
Do exactly nothing and just rely on the QEMU defaults here. This is not affecting guest ABI at all so it doesn't matter if QEMU changes its defaults later. In fact if QEMU changes defaults based on newer performance measurements, it is a good thing if libvirt hasn't hardcoded all its VM configs to the old default.
What if someone would like to disable it even if QEMU thinks that the performance is good? This patch series doesn't hardcode anything into the VM config. If you don't set the polling element at all, libvirt will follow the QEMU defaults, and only the live XML would contain the current state of polling, with the default values loaded from QEMU. This patch series adds the possibility to explicitly configure polling if someone wants to do that for some reason, but it also preserves the benefit when you just don't care about it and want to use the default. If you still think that we should not export this feature at all, well, we don't have to. The use-case that you've described is still possible with this series; it only adds extra functionality on top of that. Pavel
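Concretely, the three cases described above would look like this in the domain XML (attribute values taken from the series' test files; the tuning numbers are illustrative):

```xml
<iothreadids>
  <!-- explicitly enable polling and tune it; max_ns is mandatory here -->
  <iothread id='1'>
    <polling enabled='yes' max_ns='4000' grow='50'/>
  </iothread>
  <!-- explicitly disable polling, even if QEMU would enable it by default -->
  <iothread id='2'>
    <polling enabled='no'/>
  </iothread>
  <!-- no polling element at all: follow the QEMU defaults -->
  <iothread id='3'/>
</iothreadids>
```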
Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://entangle-photo.org -o- http://search.cpan.org/~danberr/ :|
-- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list

On Tue, Feb 21, 2017 at 02:14:44PM +0100, Pavel Hrdina wrote:
On Tue, Feb 21, 2017 at 12:55:51PM +0000, Daniel P. Berrange wrote:
On Tue, Feb 21, 2017 at 01:48:15PM +0100, Pavel Hrdina wrote:
On Tue, Feb 21, 2017 at 12:26:02PM +0000, Daniel P. Berrange wrote:
On Tue, Feb 21, 2017 at 01:14:57PM +0100, Pavel Hrdina wrote:
QEMU 2.9.0 will introduce a polling feature for AioContext that polls for events instead of using blocking syscalls. This means that polling can in most cases be faster, but it also increases CPU utilization.
To address this issue QEMU implements a self-tuning algorithm that modifies the current polling time to adapt to different workloads, and it can also fall back to blocking syscalls.
For each IOThread this is all controlled by three parameters: poll-max-ns, poll-grow and poll-shrink. If the parameter poll-max-ns is set to 0 it disables polling, if it is omitted the default behavior is used, and any value greater than 0 enables polling. The parameters poll-grow and poll-shrink configure how the self-tuning algorithm adapts the current polling time. If they are omitted or set to 0, default values are used.
With my app developer hat on I have to wonder how an app is supposed to figure out what to set these parameters to? It has been difficult enough figuring out the existing QEMU block tunables, but at least most of those can be set depending on the type of storage used on the host side. Tunables whose use depends on the guest workload are harder to use, since that largely involves predicting the unknown. IOW, is there a compelling reason to add these low-level parameters that are tightly coupled to the specific algorithm that QEMU happens to use today?
I agree that it's probably way too complicated for management applications, but there is a small issue with QEMU. Currently, if you don't specify anything, polling is enabled with some reasonable default value, and based on experience with QEMU I'm not counting on them never changing the default behavior in the future. To explicitly enable polling you need to set poll-max-ns to some value greater than 0. We would have to check the QEMU source code and define the default value in our code in order to let users explicitly enable polling.
The QEMU commit says polling is now enabled by default without needing to set poll-max-ns AFAICT
commit cdd7abfdba9287a289c404dfdcb02316f9ffee7d Author: Stefan Hajnoczi <stefanha@redhat.com> Date: Thu Jan 26 17:01:19 2017 +0000
iothread: enable AioContext polling by default
IOThread AioContexts are likely to consist only of event sources like virtqueue ioeventfds and LinuxAIO completion eventfds that are pollable from userspace (without system calls).
We recently merged the AioContext polling feature but didn't enable it by default yet. I have gone back over the performance data on the mailing list and picked a default polling value that gave good results.
Let's enable AioContext polling by default so users don't have another switch they need to set manually. If performance regressions are found we can still disable this for the QEMU 2.9 release.
The QEMU commits say the tunables all default to sane parameters so I'm inclined to say we ignore them at the libvirt level entirely.
Yes, it would be way better to have only <polling enable='yes|no'> and let QEMU deal with the sane values for all parameters, but that would mean coming up with the sane values ourselves or modifying QEMU to add another property that would simply control whether it is enabled or not.
I'm saying don't even add that.
Do exactly nothing and just rely on the QEMU defaults here. This is not affecting guest ABI at all so it doesn't matter if QEMU changes its defaults later. In fact if QEMU changes defaults based on newer performance measurements, it is a good thing if libvirt hasn't hardcoded all its VM configs to the old default.
What if someone would like to disable it even if QEMU thinks that the performance is good? This patch series doesn't hardcode anything into the VM config. If you don't set the polling element at all, libvirt will follow the QEMU defaults and only the live XML will contain the current state of polling, with default values loaded from QEMU.
This patch series adds the possibility to explicitly configure polling if someone wants to do that for some reason, but it also preserves the benefit when you just don't care about it and want to use the defaults.
If you still think that we should not export this feature at all, well, we don't have to. The use-case that you've described is still possible with this series; it only adds extra functionality on top of that.
I'm very wary of adding config parameters in libvirt just because they exist in QEMU, particularly when the parameters are totally specific to an algorithm that just happens to be the one implemented right now. We've no idea if QEMU will stick with this algorithm or change it entirely, and if the latter, then any config parameters will likely be meaningless for any other algorithm. I can understand why QEMU would expose them on its CLI, but I don't feel they are a good fit for exposing up the stack, particularly given the lack of any guidance as to how people might consider changing the values, other than random uninformed guesswork.

Regards, Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://entangle-photo.org -o- http://search.cpan.org/~danberr/ :|

On Tue, Feb 21, 2017 at 01:26:25PM +0000, Daniel P. Berrange wrote:
Yes, that's true, and I had the same worrying feeling about the parameters being specifically tied to QEMU's algorithm, but I thought that it would be nice to export them anyway. Let's set this series aside for now; if someone asks for a feature to disable polling we can revive it. Pavel
-- libvir-list mailing list libvir-list@redhat.com https://www.redhat.com/mailman/listinfo/libvir-list

On Tue, Feb 21, 2017 at 1:39 PM, Pavel Hrdina <phrdina@redhat.com> wrote:
libvirt doesn't need a dedicated API for <iothread> polling parameters, but the QEMU command-line passthrough feature must make it possible using <qemu:arg value='-newarg'/>. I have tested the following QEMU command-line:

  -object iothread,id=iothread0 \   # assume libvirt defines the iothreads
  -object iothread,id=iothread1 \
  -set object.iothread0.poll-max-ns=0   # override poll-max-ns using -set

This disables polling in iothread0 and leaves the default value in iothread1.

I'm fine if libvirt doesn't add a dedicated API for setting <iothread> polling parameters. It's unlikely that users will need to change the setting. In an emergency (e.g. disabling it due to a performance regression) they can use <qemu:arg value='-newarg'/>.

Stefan
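For reference, the passthrough Stefan describes would look roughly like this in the domain XML. This is a sketch: the qemu XML namespace is libvirt's standard passthrough mechanism, each command-line token goes in its own <qemu:arg/>, and the iothread0 object id is assumed here to match the id QEMU ends up with (it mirrors Stefan's example rather than anything this series guarantees):

```xml
<domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  <!-- ... usual domain configuration ... -->
  <iothreads>2</iothreads>
  <qemu:commandline>
    <qemu:arg value='-set'/>
    <qemu:arg value='object.iothread0.poll-max-ns=0'/>
  </qemu:commandline>
</domain>
```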

This basically copies and extends the existing virDomainAddIOThread API by adding support for parameters. This allows you to add a new iothread into a domain and also set polling parameters along with the new iothread.

Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
---
 include/libvirt/libvirt-domain.h | 39 +++++++++++++++++++++
 src/driver-hypervisor.h          |  8 +++++
 src/libvirt-domain.c             | 75 ++++++++++++++++++++++++++++++++++++++++
 src/libvirt_public.syms          |  5 +++
 src/remote/remote_driver.c       |  1 +
 src/remote/remote_protocol.x     | 20 ++++++++++-
 src/remote_protocol-structs      | 10 ++++++
 7 files changed, 157 insertions(+), 1 deletion(-)

diff --git a/include/libvirt/libvirt-domain.h b/include/libvirt/libvirt-domain.h
index e303140a23..5ce974292e 100644
--- a/include/libvirt/libvirt-domain.h
+++ b/include/libvirt/libvirt-domain.h
@@ -1855,6 +1855,40 @@ int virDomainGetEmulatorPinInfo (virDomainPtr domain,
                                              int maplen,
                                              unsigned int flags);

+/* IOThread parameters */
+
+/**
+ * VIR_DOMAIN_IOTHREAD_POLL_ENABLED:
+ *
+ * Whether polling should be enabled or not. If omitted the default is set
+ * by hypervisor.
+ */
+# define VIR_DOMAIN_IOTHREAD_POLL_ENABLED "poll_enabled"
+
+/**
+ * VIR_DOMAIN_IOTHREAD_POLL_MAX_NS:
+ *
+ * The maximal polling time that can be used by polling algorithm in ns.
+ * If omitted the default is 0.
+ */
+# define VIR_DOMAIN_IOTHREAD_POLL_MAX_NS "poll_max_ns"
+
+/**
+ * VIR_DOMAIN_IOTHREAD_POLL_GROW:
+ *
+ * This tells the polling algorithm how many ns it should grow current
+ * polling time if it's not optimal anymore. If omitted the default is 0.
+ */
+# define VIR_DOMAIN_IOTHREAD_POLL_GROW "poll_grow"
+
+/**
+ * VIR_DOMAIN_IOTHREAD_POLL_SHRINK:
+ *
+ * This tells the polling algorithm how many ns it should shrink current
+ * polling time if it's not optimal anymore. If omitted the default is 0.
+ */
+# define VIR_DOMAIN_IOTHREAD_POLL_SHRINK "poll_shrink"
+
 /**
  * virIOThreadInfo:
  *
@@ -1882,6 +1916,11 @@ int virDomainPinIOThread(virDomainPtr domain,
 int virDomainAddIOThread(virDomainPtr domain,
                          unsigned int iothread_id,
                          unsigned int flags);
+int virDomainAddIOThreadParams(virDomainPtr domain,
+                               unsigned int iothread_id,
+                               virTypedParameterPtr params,
+                               int nparams,
+                               unsigned int flags);
 int virDomainDelIOThread(virDomainPtr domain,
                          unsigned int iothread_id,
                          unsigned int flags);
diff --git a/src/driver-hypervisor.h b/src/driver-hypervisor.h
index 51af73200b..9c7ce83cd3 100644
--- a/src/driver-hypervisor.h
+++ b/src/driver-hypervisor.h
@@ -399,6 +399,13 @@ typedef int
                            unsigned int flags);

 typedef int
+(*virDrvDomainAddIOThreadParams)(virDomainPtr domain,
+                                 unsigned int iothread_id,
+                                 virTypedParameterPtr params,
+                                 int nparams,
+                                 unsigned int flags);
+
+typedef int
 (*virDrvDomainDelIOThread)(virDomainPtr domain,
                            unsigned int iothread_id,
                            unsigned int flags);
@@ -1334,6 +1341,7 @@ struct _virHypervisorDriver {
     virDrvDomainGetIOThreadInfo domainGetIOThreadInfo;
     virDrvDomainPinIOThread domainPinIOThread;
     virDrvDomainAddIOThread domainAddIOThread;
+    virDrvDomainAddIOThreadParams domainAddIOThreadParams;
     virDrvDomainDelIOThread domainDelIOThread;
     virDrvDomainGetSecurityLabel domainGetSecurityLabel;
     virDrvDomainGetSecurityLabelList domainGetSecurityLabelList;
diff --git a/src/libvirt-domain.c b/src/libvirt-domain.c
index 5b3e842058..691c72dedd 100644
--- a/src/libvirt-domain.c
+++ b/src/libvirt-domain.c
@@ -7751,6 +7751,81 @@ virDomainAddIOThread(virDomainPtr domain,


 /**
+ * virDomainAddIOThreadParams:
+ * @domain: a domain object
+ * @iothread_id: the specific IOThread ID value to add
+ * @params: pointer to IOThread parameter objects
+ * @nparams: number of IOThread parameters
+ * @flags: bitwise-OR of virDomainModificationImpact and virTypedParameterFlags
+ *
+ * Dynamically add an IOThread to the domain. It is left up to the
+ * underlying virtual hypervisor to determine the valid range for an
+ * @iothread_id and determining whether the @iothread_id already exists.
+ *
+ * The combination of parameters has some limitation:
+ *
+ * - If VIR_DOMAIN_IOTHREAD_POLL_ENABLED is set to true,
+ *   VIR_DOMAIN_IOTHREAD_POLL_MAX_NS must be set as well.
+ *
+ * - If VIR_DOMAIN_IOTHREAD_POLL_MAX_NS is set to value > 0,
+ *   VIR_DOMAIN_IOTHREAD_POLL_ENABLED is set to true.
+ *
+ * - If one of VIR_DOMAIN_IOTHREAD_POLL_GROW or VIR_DOMAIN_IOTHREAD_POLL_SHRINK
+ *   is set to value > 0, VIR_DOMAIN_IOTHREAD_POLL_MAX_NS must be set as well.
+ *
+ * See VIR_DOMAIN_IOTHREAD_* for detailed description of accepted IOThread
+ * parameters.
+ *
+ * Note that this call can fail if the underlying virtualization hypervisor
+ * does not support it or if growing the number of iothreads is arbitrarily
+ * limited. This function requires privileged access to the hypervisor.
+ *
+ * Returns 0 in case of success, -1 in case of failure.
+ */
+int
+virDomainAddIOThreadParams(virDomainPtr domain,
+                           unsigned int iothread_id,
+                           virTypedParameterPtr params,
+                           int nparams,
+                           unsigned int flags)
+{
+    virConnectPtr conn;
+
+    VIR_DOMAIN_DEBUG(domain, "iothread_id=%u, params=%p, nparams=%d, flags=%x",
+                     iothread_id, params, nparams, flags);
+    VIR_TYPED_PARAMS_DEBUG(params, nparams);
+
+    virResetLastError();
+
+    virCheckDomainReturn(domain, -1);
+    conn = domain->conn;
+
+    virCheckReadOnlyGoto(conn->flags, error);
+    virCheckNonNegativeArgGoto(nparams, error);
+    if (nparams)
+        virCheckNonNullArgGoto(params, error);
+
+    if (virTypedParameterValidateSet(conn, params, nparams) < 0)
+        goto error;
+
+    if (conn->driver->domainAddIOThreadParams) {
+        int ret;
+        ret = conn->driver->domainAddIOThreadParams(domain, iothread_id,
+                                                    params, nparams, flags);
+        if (ret < 0)
+            goto error;
+        return ret;
+    }
+
+    virReportUnsupportedError();
+
+ error:
+    virDispatchError(domain->conn);
+    return -1;
+}
+
+
+/**
  * virDomainDelIOThread:
  * @domain: a domain object
  * @iothread_id: the specific IOThread ID value to delete
diff --git a/src/libvirt_public.syms b/src/libvirt_public.syms
index 62885ac415..edf72d23aa 100644
--- a/src/libvirt_public.syms
+++ b/src/libvirt_public.syms
@@ -753,4 +753,9 @@ LIBVIRT_3.0.0 {
         virConnectSecretEventDeregisterAny;
 } LIBVIRT_2.2.0;

+LIBVIRT_3.1.0 {
+    global:
+        virDomainAddIOThreadParams;
+} LIBVIRT_3.0.0;
+
 # .... define new API here using predicted next version number ....
diff --git a/src/remote/remote_driver.c b/src/remote/remote_driver.c
index a3f7d9b0ba..f9e246b8bc 100644
--- a/src/remote/remote_driver.c
+++ b/src/remote/remote_driver.c
@@ -8246,6 +8246,7 @@ static virHypervisorDriver hypervisor_driver = {
     .domainGetIOThreadInfo = remoteDomainGetIOThreadInfo, /* 1.2.14 */
     .domainPinIOThread = remoteDomainPinIOThread, /* 1.2.14 */
     .domainAddIOThread = remoteDomainAddIOThread, /* 1.2.15 */
+    .domainAddIOThreadParams = remoteDomainAddIOThreadParams, /* 3.1.0 */
     .domainDelIOThread = remoteDomainDelIOThread, /* 1.2.15 */
     .domainGetSecurityLabel = remoteDomainGetSecurityLabel, /* 0.6.1 */
     .domainGetSecurityLabelList = remoteDomainGetSecurityLabelList, /* 0.10.0 */
diff --git a/src/remote/remote_protocol.x b/src/remote/remote_protocol.x
index cd0a14cc69..146c38b3f4 100644
--- a/src/remote/remote_protocol.x
+++ b/src/remote/remote_protocol.x
@@ -253,6 +253,9 @@ const REMOTE_DOMAIN_IP_ADDR_MAX = 2048;
 /* Upper limit on number of guest vcpu information entries */
 const REMOTE_DOMAIN_GUEST_VCPU_PARAMS_MAX = 64;

+/* Upper limit on number of IOThread information entries */
+const REMOTE_DOMAIN_IOTHREAD_PARAMS_MAX = 64;
+
 /* UUID.  VIR_UUID_BUFLEN definition comes from libvirt.h */
 typedef opaque remote_uuid[VIR_UUID_BUFLEN];

@@ -1227,6 +1230,13 @@ struct remote_domain_add_iothread_args {
     unsigned int flags;
 };

+struct remote_domain_add_iothread_params_args {
+    remote_nonnull_domain dom;
+    unsigned int iothread_id;
+    remote_typed_param params<REMOTE_DOMAIN_IOTHREAD_PARAMS_MAX>;
+    unsigned int flags;
+};
+
 struct remote_domain_del_iothread_args {
     remote_nonnull_domain dom;
     unsigned int iothread_id;
@@ -6018,6 +6028,14 @@ enum remote_procedure {
      * @generate: both
      * @acl: none
      */
-    REMOTE_PROC_SECRET_EVENT_VALUE_CHANGED = 383
+    REMOTE_PROC_SECRET_EVENT_VALUE_CHANGED = 383,
+
+    /**
+     * @generate: both
+     * @acl: domain:write
+     * @acl: domain:save:!VIR_DOMAIN_AFFECT_CONFIG|VIR_DOMAIN_AFFECT_LIVE
+     * @acl: domain:save:VIR_DOMAIN_AFFECT_CONFIG
+     */
+    REMOTE_PROC_DOMAIN_ADD_IOTHREAD_PARAMS = 384
 };
diff --git a/src/remote_protocol-structs b/src/remote_protocol-structs
index 0360600cfb..2e3245322f 100644
--- a/src/remote_protocol-structs
+++ b/src/remote_protocol-structs
@@ -857,6 +857,15 @@ struct remote_domain_add_iothread_args {
         u_int iothread_id;
         u_int flags;
 };
+struct remote_domain_add_iothread_params_args {
+        remote_nonnull_domain dom;
+        u_int iothread_id;
+        struct {
+                u_int params_len;
+                remote_typed_param * params_val;
+        } params;
+        u_int flags;
+};
 struct remote_domain_del_iothread_args {
         remote_nonnull_domain dom;
         u_int iothread_id;
@@ -3210,4 +3219,5 @@ enum remote_procedure {
         REMOTE_PROC_CONNECT_SECRET_EVENT_DEREGISTER_ANY = 381,
         REMOTE_PROC_SECRET_EVENT_LIFECYCLE = 382,
         REMOTE_PROC_SECRET_EVENT_VALUE_CHANGED = 383,
+        REMOTE_PROC_DOMAIN_ADD_IOTHREAD_PARAMS = 384,
 };
-- 
2.11.1

This allows modifying the polling parameters of an existing iothread.

Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
---
 include/libvirt/libvirt-domain.h |  5 ++++
 src/driver-hypervisor.h          |  8 +++++
 src/libvirt-domain.c             | 65 ++++++++++++++++++++++++++++++++++++++++
 src/libvirt_public.syms          |  1 +
 src/remote/remote_driver.c       |  1 +
 src/remote/remote_protocol.x     | 16 +++++++++-
 src/remote_protocol-structs      | 10 +++++++
 7 files changed, 105 insertions(+), 1 deletion(-)

diff --git a/include/libvirt/libvirt-domain.h b/include/libvirt/libvirt-domain.h
index 5ce974292e..aa769760f1 100644
--- a/include/libvirt/libvirt-domain.h
+++ b/include/libvirt/libvirt-domain.h
@@ -1921,6 +1921,11 @@ int virDomainAddIOThreadParams(virDomainPtr domain,
                                virTypedParameterPtr params,
                                int nparams,
                                unsigned int flags);
+int virDomainModIOThreadParams(virDomainPtr domain,
+                               unsigned int iothread_id,
+                               virTypedParameterPtr params,
+                               int nparams,
+                               unsigned int flags);
 int virDomainDelIOThread(virDomainPtr domain,
                          unsigned int iothread_id,
                          unsigned int flags);
diff --git a/src/driver-hypervisor.h b/src/driver-hypervisor.h
index 9c7ce83cd3..02c8e23e66 100644
--- a/src/driver-hypervisor.h
+++ b/src/driver-hypervisor.h
@@ -406,6 +406,13 @@ typedef int
                            unsigned int flags);

 typedef int
+(*virDrvDomainModIOThreadParams)(virDomainPtr domain,
+                                 unsigned int iothread_id,
+                                 virTypedParameterPtr params,
+                                 int nparams,
+                                 unsigned int flags);
+
+typedef int
 (*virDrvDomainDelIOThread)(virDomainPtr domain,
                            unsigned int iothread_id,
                            unsigned int flags);
@@ -1342,6 +1349,7 @@ struct _virHypervisorDriver {
     virDrvDomainPinIOThread domainPinIOThread;
     virDrvDomainAddIOThread domainAddIOThread;
     virDrvDomainAddIOThreadParams domainAddIOThreadParams;
+    virDrvDomainModIOThreadParams domainModIOThreadParams;
     virDrvDomainDelIOThread domainDelIOThread;
     virDrvDomainGetSecurityLabel domainGetSecurityLabel;
     virDrvDomainGetSecurityLabelList domainGetSecurityLabelList;
diff --git a/src/libvirt-domain.c b/src/libvirt-domain.c
index 691c72dedd..d661e68d5e 100644
--- a/src/libvirt-domain.c
+++ b/src/libvirt-domain.c
@@ -7826,6 +7826,71 @@ virDomainAddIOThreadParams(virDomainPtr domain,


 /**
+ * virDomainModIOThreadParams:
+ * @domain: a domain object
+ * @iothread_id: the specific IOThread ID value to modify
+ * @params: pointer to IOThread parameter objects
+ * @nparams: number of IOThread parameters
+ * @flags: bitwise-OR of virDomainModificationImpact and virTypedParameterFlags
+ *
+ * Modifies parameters of existing IOThread ID specified by @iothread_id.
+ *
+ * The combination of parameters has some limitation,
+ * see virDomainAddIOThreadParams for detailed description.
+ *
+ * See VIR_DOMAIN_IOTHREAD_* for detailed description of accepted IOThread
+ * parameters.
+ *
+ * Note that this call can fail if the underlying virtualization hypervisor
+ * does not support it or if growing the number of iothreads is arbitrarily
+ * limited. This function requires privileged access to the hypervisor.
+ *
+ * Returns 0 in case of success, -1 in case of failure.
+ */
+int
+virDomainModIOThreadParams(virDomainPtr domain,
+                           unsigned int iothread_id,
+                           virTypedParameterPtr params,
+                           int nparams,
+                           unsigned int flags)
+{
+    virConnectPtr conn;
+
+    VIR_DOMAIN_DEBUG(domain, "iothread_id=%u, params=%p, nparams=%d, flags=%x",
+                     iothread_id, params, nparams, flags);
+    VIR_TYPED_PARAMS_DEBUG(params, nparams);
+
+    virResetLastError();
+
+    virCheckDomainReturn(domain, -1);
+    conn = domain->conn;
+
+    virCheckReadOnlyGoto(conn->flags, error);
+    virCheckNonNegativeArgGoto(nparams, error);
+    if (nparams)
+        virCheckNonNullArgGoto(params, error);
+
+    if (virTypedParameterValidateSet(conn, params, nparams) < 0)
+        goto error;
+
+    if (conn->driver->domainModIOThreadParams) {
+        int ret;
+        ret = conn->driver->domainModIOThreadParams(domain, iothread_id,
+                                                    params, nparams, flags);
+        if (ret < 0)
+            goto error;
+        return ret;
+    }
+
+    virReportUnsupportedError();
+
+ error:
+    virDispatchError(domain->conn);
+    return -1;
+}
+
+
+/**
  * virDomainDelIOThread:
  * @domain: a domain object
  * @iothread_id: the specific IOThread ID value to delete
diff --git a/src/libvirt_public.syms b/src/libvirt_public.syms
index edf72d23aa..de7f344d0d 100644
--- a/src/libvirt_public.syms
+++ b/src/libvirt_public.syms
@@ -756,6 +756,7 @@ LIBVIRT_3.0.0 {
 LIBVIRT_3.1.0 {
     global:
         virDomainAddIOThreadParams;
+        virDomainModIOThreadParams;
 } LIBVIRT_3.0.0;

 # .... define new API here using predicted next version number ....
diff --git a/src/remote/remote_driver.c b/src/remote/remote_driver.c
index f9e246b8bc..5086a678eb 100644
--- a/src/remote/remote_driver.c
+++ b/src/remote/remote_driver.c
@@ -8247,6 +8247,7 @@ static virHypervisorDriver hypervisor_driver = {
     .domainPinIOThread = remoteDomainPinIOThread, /* 1.2.14 */
     .domainAddIOThread = remoteDomainAddIOThread, /* 1.2.15 */
     .domainAddIOThreadParams = remoteDomainAddIOThreadParams, /* 3.1.0 */
+    .domainModIOThreadParams = remoteDomainModIOThreadParams, /* 3.1.0 */
     .domainDelIOThread = remoteDomainDelIOThread, /* 1.2.15 */
     .domainGetSecurityLabel = remoteDomainGetSecurityLabel, /* 0.6.1 */
     .domainGetSecurityLabelList = remoteDomainGetSecurityLabelList, /* 0.10.0 */
diff --git a/src/remote/remote_protocol.x b/src/remote/remote_protocol.x
index 146c38b3f4..238a29b481 100644
--- a/src/remote/remote_protocol.x
+++ b/src/remote/remote_protocol.x
@@ -1237,6 +1237,13 @@ struct remote_domain_add_iothread_params_args {
     unsigned int flags;
 };

+struct remote_domain_mod_iothread_params_args {
+    remote_nonnull_domain dom;
+    unsigned int iothread_id;
+    remote_typed_param params<REMOTE_DOMAIN_IOTHREAD_PARAMS_MAX>;
+    unsigned int flags;
+};
+
 struct remote_domain_del_iothread_args {
     remote_nonnull_domain dom;
     unsigned int iothread_id;
@@ -6036,6 +6043,13 @@ enum remote_procedure {
      * @acl: domain:save:!VIR_DOMAIN_AFFECT_CONFIG|VIR_DOMAIN_AFFECT_LIVE
      * @acl: domain:save:VIR_DOMAIN_AFFECT_CONFIG
      */
-    REMOTE_PROC_DOMAIN_ADD_IOTHREAD_PARAMS = 384
+    REMOTE_PROC_DOMAIN_ADD_IOTHREAD_PARAMS = 384,
+
+    /**
+     * @generate: both
+     * @acl: domain:write
+     * @acl: domain:save:!VIR_DOMAIN_AFFECT_CONFIG|VIR_DOMAIN_AFFECT_LIVE
+     * @acl: domain:save:VIR_DOMAIN_AFFECT_CONFIG
+     */
+    REMOTE_PROC_DOMAIN_MOD_IOTHREAD_PARAMS = 385
 };
diff --git a/src/remote_protocol-structs b/src/remote_protocol-structs
index 2e3245322f..7672110578 100644
--- a/src/remote_protocol-structs
+++ b/src/remote_protocol-structs
@@ -866,6 +866,15 @@ struct remote_domain_add_iothread_params_args {
         } params;
         u_int flags;
 };
+struct remote_domain_mod_iothread_params_args {
+        remote_nonnull_domain dom;
+        u_int iothread_id;
+        struct {
+                u_int params_len;
+                remote_typed_param * params_val;
+        } params;
+        u_int flags;
+};
 struct remote_domain_del_iothread_args {
         remote_nonnull_domain dom;
         u_int iothread_id;
@@ -3220,4 +3229,5 @@ enum remote_procedure {
         REMOTE_PROC_SECRET_EVENT_LIFECYCLE = 382,
         REMOTE_PROC_SECRET_EVENT_VALUE_CHANGED = 383,
         REMOTE_PROC_DOMAIN_ADD_IOTHREAD_PARAMS = 384,
+        REMOTE_PROC_DOMAIN_MOD_IOTHREAD_PARAMS = 385,
 };
-- 
2.11.1

Signed-off-by: Pavel Hrdina <phrdina@redhat.com> --- tools/virsh-domain.c | 65 ++++++++++++++++++++++++++++++++++++++++++++++++++-- tools/virsh.pod | 10 ++++++++ 2 files changed, 73 insertions(+), 2 deletions(-) diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c index 023ec8a8b3..dddb336a57 100644 --- a/tools/virsh-domain.c +++ b/tools/virsh-domain.c @@ -7191,6 +7191,24 @@ static const vshCmdOptDef opts_iothreadadd[] = { .flags = VSH_OFLAG_REQ, .help = N_("iothread for the new IOThread") }, + {.name = "poll-disabled", + .type = VSH_OT_BOOL, + .help = N_("disable polling for the new IOThread") + }, + {.name = "poll-max-ns", + .type = VSH_OT_INT, + .help = N_("set max polling time in ns for the new IOThread") + }, + {.name = "poll-grow", + .type = VSH_OT_INT, + .help = N_("set how much ns should be used to grow current polling " + "time for the new IOThread") + }, + {.name = "poll-shrink", + .type = VSH_OT_INT, + .help = N_("set how much ns should be used to shrink current polling " + "time for the new IOThread") + }, VIRSH_COMMON_OPT_DOMAIN_CONFIG, VIRSH_COMMON_OPT_DOMAIN_LIVE, VIRSH_COMMON_OPT_DOMAIN_CURRENT, @@ -7206,10 +7224,21 @@ cmdIOThreadAdd(vshControl *ctl, const vshCmd *cmd) bool config = vshCommandOptBool(cmd, "config"); bool live = vshCommandOptBool(cmd, "live"); bool current = vshCommandOptBool(cmd, "current"); + bool poll_disabled = vshCommandOptBool(cmd, "poll-disabled"); unsigned int flags = VIR_DOMAIN_AFFECT_CURRENT; + virTypedParameterPtr params = NULL; + int nparams = 0; + int maxparams = 0; + unsigned int poll_val; + int rc; VSH_EXCLUSIVE_OPTIONS_VAR(current, live); VSH_EXCLUSIVE_OPTIONS_VAR(current, config); + VSH_EXCLUSIVE_OPTIONS("poll-disabled", "poll-max-ns"); + VSH_EXCLUSIVE_OPTIONS("poll-disabled", "poll-grow"); + VSH_EXCLUSIVE_OPTIONS("poll-disabled", "poll-shrink"); + VSH_REQUIRE_OPTION("poll-grow", "poll-max-ns"); + VSH_REQUIRE_OPTION("poll-shrink", "poll-max-ns"); if (config) flags |= VIR_DOMAIN_AFFECT_CONFIG; @@ -7226,14 +7255,46 
@@ cmdIOThreadAdd(vshControl *ctl, const vshCmd *cmd) goto cleanup; } - if (virDomainAddIOThread(dom, iothread_id, flags) < 0) - goto cleanup; + if (poll_disabled) { + if (virTypedParamsAddBoolean(&params, &nparams, &maxparams, + VIR_DOMAIN_IOTHREAD_POLL_ENABLED, 0) < 0) + goto save_error; + } else { +#define VSH_IOTHREAD_SET_PARAMS(opt, param) \ + poll_val = 0; \ + if ((rc = vshCommandOptUInt(ctl, cmd, opt, &poll_val)) < 0) \ + goto cleanup; \ + if (rc > 0 && \ + virTypedParamsAddUInt(&params, &nparams, &maxparams, \ + param, poll_val) < 0) \ + goto save_error; + + VSH_IOTHREAD_SET_PARAMS("poll-max-ns", VIR_DOMAIN_IOTHREAD_POLL_MAX_NS) + VSH_IOTHREAD_SET_PARAMS("poll-grow", VIR_DOMAIN_IOTHREAD_POLL_GROW) + VSH_IOTHREAD_SET_PARAMS("poll-shrink", VIR_DOMAIN_IOTHREAD_POLL_SHRINK) + +#undef VSH_IOTHREAD_SET_PARAMS + } + + if (nparams) { + if (virDomainAddIOThreadParams(dom, iothread_id, + params, nparams, flags) < 0) + goto cleanup; + } else { + if (virDomainAddIOThread(dom, iothread_id, flags) < 0) + goto cleanup; + } ret = true; cleanup: + virTypedParamsFree(params, nparams); virDomainFree(dom); return ret; + + save_error: + vshSaveLibvirtError(); + goto cleanup; } /* diff --git a/tools/virsh.pod b/tools/virsh.pod index 90f4b5a1f7..12fa650f03 100644 --- a/tools/virsh.pod +++ b/tools/virsh.pod @@ -1520,12 +1520,22 @@ B<Note>: The expression is sequentially evaluated, so "0-15,^8" is identical to "9-14,0-7,15" but not identical to "^8,0-15". =item B<iothreadadd> I<domain> I<iothread_id> +[[I<--poll-disabled>] | [I<--poll-max-ns> B<ns>] [I<--poll-grow> B<ns>] +[I<--poll-shrink> B<ns>]] [[I<--config>] [I<--live>] | [I<--current>]] Add a new IOThread to the domain using the specified I<iothread_id>. If the I<iothread_id> already exists, the command will fail. The I<iothread_id> must be greater than zero. +It is possible to configure polling for the newly added IOThread using the +I<--poll-*> options. If no polling option is specified, the hypervisor +will use its default configuration. 
To disable polling use I<--poll-disabled>. +To enable polling you need to provide at least I<--poll-max-ns>, which sets +the maximum polling time the polling algorithm may use. +I<--poll-grow> and I<--poll-shrink> are used to configure how the polling +algorithm adapts the current polling time to different workloads. + If I<--live> is specified, affect a running guest. If the guest is not running an error is returned. If I<--config> is specified, affect the next boot of a persistent guest. -- 2.11.1
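As a usage illustration for the options documented above (the domain name "guest1" is hypothetical; option names follow the vshCmdOptDef definitions in this patch):

```shell
# Add IOThread 2 with polling enabled and a 4000 ns ceiling on the live domain
virsh iothreadadd guest1 2 --poll-max-ns 4000 --live

# Add IOThread 3 with polling disabled
virsh iothreadadd guest1 3 --poll-disabled --live
```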

Signed-off-by: Pavel Hrdina <phrdina@redhat.com> --- tools/virsh-domain.c | 125 +++++++++++++++++++++++++++++++++++++++++++++++++++ tools/virsh.pod | 8 ++++ 2 files changed, 133 insertions(+) diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c index dddb336a57..dd0104cd6a 100644 --- a/tools/virsh-domain.c +++ b/tools/virsh-domain.c @@ -7298,6 +7298,125 @@ cmdIOThreadAdd(vshControl *ctl, const vshCmd *cmd) } /* + * "iothreadmod" command + */ +static const vshCmdInfo info_iothreadmod[] = { + {.name = "help", + .data = N_("modifies an existing IOThread of the guest domain") + }, + {.name = "desc", + .data = N_("Modifies an existing IOThread of the guest domain.") + }, + {.name = NULL} +}; + +static const vshCmdOptDef opts_iothreadmod[] = { + VIRSH_COMMON_OPT_DOMAIN_FULL, + {.name = "id", + .type = VSH_OT_INT, + .flags = VSH_OFLAG_REQ, + .help = N_("iothread id of an existing IOThread") + }, + {.name = "poll-disabled", + .type = VSH_OT_BOOL, + .help = N_("disable polling for the IOThread") + }, + {.name = "poll-max-ns", + .type = VSH_OT_INT, + .help = N_("set max polling time in ns for the IOThread") + }, + {.name = "poll-grow", + .type = VSH_OT_INT, + .help = N_("set how much ns should be used to grow current polling " + "time for the IOThread") + }, + {.name = "poll-shrink", + .type = VSH_OT_INT, + .help = N_("set how much ns should be used to shrink current polling " + "time for the IOThread") + }, + VIRSH_COMMON_OPT_DOMAIN_CONFIG, + VIRSH_COMMON_OPT_DOMAIN_LIVE, + VIRSH_COMMON_OPT_DOMAIN_CURRENT, + {.name = NULL} +}; + +static bool +cmdIOThreadMod(vshControl *ctl, const vshCmd *cmd) +{ + virDomainPtr dom; + int iothread_id = 0; + bool ret = false; + bool config = vshCommandOptBool(cmd, "config"); + bool live = vshCommandOptBool(cmd, "live"); + bool current = vshCommandOptBool(cmd, "current"); + bool poll_disabled = vshCommandOptBool(cmd, "poll-disabled"); + unsigned int flags = VIR_DOMAIN_AFFECT_CURRENT; + virTypedParameterPtr params = NULL; + int 
nparams = 0; + int maxparams = 0; + unsigned int poll_val; + int rc; + + VSH_EXCLUSIVE_OPTIONS_VAR(current, live); + VSH_EXCLUSIVE_OPTIONS_VAR(current, config); + VSH_EXCLUSIVE_OPTIONS("poll-disabled", "poll-max-ns"); + VSH_EXCLUSIVE_OPTIONS("poll-disabled", "poll-grow"); + VSH_EXCLUSIVE_OPTIONS("poll-disabled", "poll-shrink"); + + if (config) + flags |= VIR_DOMAIN_AFFECT_CONFIG; + if (live) + flags |= VIR_DOMAIN_AFFECT_LIVE; + + if (!(dom = virshCommandOptDomain(ctl, cmd, NULL))) + return false; + + if (vshCommandOptInt(ctl, cmd, "id", &iothread_id) < 0) + goto cleanup; + if (iothread_id <= 0) { + vshError(ctl, _("Invalid IOThread id value: '%d'"), iothread_id); + goto cleanup; + } + + if (poll_disabled) { + if (virTypedParamsAddBoolean(&params, &nparams, &maxparams, + VIR_DOMAIN_IOTHREAD_POLL_ENABLED, 0) < 0) + goto save_error; + } else { +#define VSH_IOTHREAD_SET_PARAMS(opt, param) \ + poll_val = 0; \ + if ((rc = vshCommandOptUInt(ctl, cmd, opt, &poll_val)) < 0) \ + goto cleanup; \ + if (rc > 0 && \ + virTypedParamsAddUInt(&params, &nparams, &maxparams, \ + param, poll_val) < 0) \ + goto save_error; + + VSH_IOTHREAD_SET_PARAMS("poll-max-ns", VIR_DOMAIN_IOTHREAD_POLL_MAX_NS) + VSH_IOTHREAD_SET_PARAMS("poll-grow", VIR_DOMAIN_IOTHREAD_POLL_GROW) + VSH_IOTHREAD_SET_PARAMS("poll-shrink", VIR_DOMAIN_IOTHREAD_POLL_SHRINK) + +#undef VSH_IOTHREAD_SET_PARAMS + } + + if (virDomainModIOThreadParams(dom, iothread_id, + params, nparams, flags) < 0) + goto cleanup; + + ret = true; + + cleanup: + virTypedParamsFree(params, nparams); + virDomainFree(dom); + return ret; + + save_error: + vshSaveLibvirtError(); + goto cleanup; +} + +/* + * "iothreaddel" command */ static const vshCmdInfo info_iothreaddel[] = { @@ -13736,6 +13855,12 @@ const vshCmdDef domManagementCmds[] = { .info = info_iothreadadd, .flags = 0 }, + {.name = "iothreadmod", + .handler = cmdIOThreadMod, + .opts = opts_iothreadmod, + .info = info_iothreadmod, + .flags = 0 + }, {.name = "iothreaddel", .handler = cmdIOThreadDel, 
.opts = opts_iothreaddel, diff --git a/tools/virsh.pod b/tools/virsh.pod index 12fa650f03..a9b6896b32 100644 --- a/tools/virsh.pod +++ b/tools/virsh.pod @@ -1542,6 +1542,14 @@ If I<--config> is specified, affect the next boot of a persistent guest. If I<--current> is specified or I<--live> and I<--config> are not specified, affect the current guest state. +=item B<iothreadmod> I<domain> I<iothread_id> +[[I<--poll-disabled>] | [I<--poll-max-ns> B<ns>] [I<--poll-grow> B<ns>] +[I<--poll-shrink> B<ns>]] +[[I<--config>] [I<--live>] | [I<--current>]] + +Modifies an existing iothread of the domain using the specified I<iothread_id>. +For a detailed description of all options see the B<iothreadadd> command. + =item B<iothreaddel> I<domain> I<iothread_id> [[I<--config>] [I<--live>] | [I<--current>]] -- 2.11.1

Signed-off-by: Pavel Hrdina <phrdina@redhat.com> --- Patch [1] that is still waiting to be pushed into QEMU is required for this feature. [1] <http://lists.nongnu.org/archive/html/qemu-devel/2017-02/msg02192.html> src/qemu/qemu_capabilities.c | 2 ++ src/qemu/qemu_capabilities.h | 1 + tests/qemucapabilitiesdata/caps_2.9.0.x86_64.replies | 12 ++++++++++++ tests/qemucapabilitiesdata/caps_2.9.0.x86_64.xml | 1 + 4 files changed, 16 insertions(+) diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c index e851eec7a7..982893a6c8 100644 --- a/src/qemu/qemu_capabilities.c +++ b/src/qemu/qemu_capabilities.c @@ -358,6 +358,7 @@ VIR_ENUM_IMPL(virQEMUCaps, QEMU_CAPS_LAST, "query-cpu-model-expansion", /* 245 */ "virtio-net.host_mtu", "spice-rendernode", + "iothread.poll-max-ns", ); @@ -1734,6 +1735,7 @@ static struct virQEMUCapsStringFlags virQEMUCapsObjectPropsUSBNECXHCI[] = { static struct virQEMUCapsStringFlags virQEMUCapsQMPSchemaQueries[] = { { "blockdev-add/arg-type/options/+gluster/debug-level", QEMU_CAPS_GLUSTER_DEBUG_LEVEL}, { "blockdev-add/arg-type/+gluster/debug", QEMU_CAPS_GLUSTER_DEBUG_LEVEL}, + { "query-iothreads/ret-type/poll-max-ns", QEMU_CAPS_IOTHREAD_POLLING}, }; struct virQEMUCapsObjectTypeProps { diff --git a/src/qemu/qemu_capabilities.h b/src/qemu/qemu_capabilities.h index 0f998c473f..bdaccde307 100644 --- a/src/qemu/qemu_capabilities.h +++ b/src/qemu/qemu_capabilities.h @@ -394,6 +394,7 @@ typedef enum { QEMU_CAPS_QUERY_CPU_MODEL_EXPANSION, /* qmp query-cpu-model-expansion */ QEMU_CAPS_VIRTIO_NET_HOST_MTU, /* virtio-net-*.host_mtu */ QEMU_CAPS_SPICE_RENDERNODE, /* -spice rendernode */ + QEMU_CAPS_IOTHREAD_POLLING, /* -object iothread.poll-max-ns */ QEMU_CAPS_LAST /* this must always be the last item */ } virQEMUCapsFlags; diff --git a/tests/qemucapabilitiesdata/caps_2.9.0.x86_64.replies b/tests/qemucapabilitiesdata/caps_2.9.0.x86_64.replies index fe9fe7d5b7..5ee2c61e4f 100644 --- a/tests/qemucapabilitiesdata/caps_2.9.0.x86_64.replies +++ 
b/tests/qemucapabilitiesdata/caps_2.9.0.x86_64.replies @@ -9211,6 +9211,18 @@ { "name": "thread-id", "type": "int" + }, + { + "name": "poll-max-ns", + "type": "int" + }, + { + "name": "poll-grow", + "type": "int" + }, + { + "name": "poll-shrink", + "type": "int" } ], "meta-type": "object" diff --git a/tests/qemucapabilitiesdata/caps_2.9.0.x86_64.xml b/tests/qemucapabilitiesdata/caps_2.9.0.x86_64.xml index dcdc0e6213..7089429866 100644 --- a/tests/qemucapabilitiesdata/caps_2.9.0.x86_64.xml +++ b/tests/qemucapabilitiesdata/caps_2.9.0.x86_64.xml @@ -202,6 +202,7 @@ <flag name='vhost-scsi'/> <flag name='drive-iotune-group'/> <flag name='virtio-net.host_mtu'/> + <flag name='iothread.poll-max-ns'/> <version>2008050</version> <kvmVersion>0</kvmVersion> <package> (v2.8.0-1321-gad584d3)</package> -- 2.11.1
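The capability is probed through the QMP schema rather than by issuing the command; once present, a query-iothreads reply from a polling-capable QEMU would look roughly like this (thread id and nanosecond values are made up for illustration):

```json
{
  "return": [
    {
      "id": "iothread1",
      "thread-id": 25627,
      "poll-max-ns": 32768,
      "poll-grow": 0,
      "poll-shrink": 0
    }
  ]
}
```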

Signed-off-by: Pavel Hrdina <phrdina@redhat.com> --- src/util/virqemu.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/src/util/virqemu.c b/src/util/virqemu.c index 2e9e65f9ef..712fd1ad4f 100644 --- a/src/util/virqemu.c +++ b/src/util/virqemu.c @@ -229,7 +229,8 @@ virQEMUBuildCommandLineJSON(virJSONValuePtr value, virBufferPtr buf, virQEMUBuildCommandLineJSONArrayFormatFunc array) { - if (virQEMUBuildCommandLineJSONRecurse(NULL, value, buf, array, false) < 0) + if (value && + virQEMUBuildCommandLineJSONRecurse(NULL, value, buf, array, false) < 0) return -1; virBufferTrim(buf, ",", -1); -- 2.11.1

Signed-off-by: Pavel Hrdina <phrdina@redhat.com> --- src/qemu/qemu_driver.c | 6 +++--- src/qemu/qemu_monitor.c | 6 ++++-- src/qemu/qemu_monitor.h | 6 +++++- src/qemu/qemu_monitor_json.c | 19 ++++++++++++++++++- src/qemu/qemu_monitor_json.h | 3 ++- src/qemu/qemu_process.c | 2 +- tests/qemumonitorjsontest.c | 2 +- 7 files changed, 34 insertions(+), 10 deletions(-) diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index da9f10e65e..ff610a7692 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -5284,7 +5284,7 @@ qemuDomainGetIOThreadsLive(virQEMUDriverPtr driver, } qemuDomainObjEnterMonitor(driver, vm); - niothreads = qemuMonitorGetIOThreads(priv->mon, &iothreads); + niothreads = qemuMonitorGetIOThreads(priv->mon, &iothreads, false); if (qemuDomainObjExitMonitor(driver, vm) < 0) goto endjob; if (niothreads < 0) @@ -5599,7 +5599,7 @@ qemuDomainHotplugAddIOThread(virQEMUDriverPtr driver, * and add the thread_id to the vm->def->iothreadids list. */ if ((new_niothreads = qemuMonitorGetIOThreads(priv->mon, - &new_iothreads)) < 0) + &new_iothreads, false)) < 0) goto exit_monitor; if (qemuDomainObjExitMonitor(driver, vm) < 0) @@ -5681,7 +5681,7 @@ qemuDomainHotplugDelIOThread(virQEMUDriverPtr driver, goto exit_monitor; if ((new_niothreads = qemuMonitorGetIOThreads(priv->mon, - &new_iothreads)) < 0) + &new_iothreads, false)) < 0) goto exit_monitor; if (qemuDomainObjExitMonitor(driver, vm) < 0) diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c index b15207a693..7633e6fc07 100644 --- a/src/qemu/qemu_monitor.c +++ b/src/qemu/qemu_monitor.c @@ -4025,6 +4025,7 @@ qemuMonitorRTCResetReinjection(qemuMonitorPtr mon) * qemuMonitorGetIOThreads: * @mon: Pointer to the monitor * @iothreads: Location to return array of IOThreadInfo data + * @supportPolling: Whether to require polling data in the QEMU reply * * Issue query-iothreads command. 
* Retrieve the list of iothreads defined/running for the machine @@ -4034,7 +4035,8 @@ qemuMonitorRTCResetReinjection(qemuMonitorPtr mon) */ int qemuMonitorGetIOThreads(qemuMonitorPtr mon, - qemuMonitorIOThreadInfoPtr **iothreads) + qemuMonitorIOThreadInfoPtr **iothreads, + bool supportPolling) { VIR_DEBUG("iothreads=%p", iothreads); @@ -4047,7 +4049,7 @@ qemuMonitorGetIOThreads(qemuMonitorPtr mon, return 0; } - return qemuMonitorJSONGetIOThreads(mon, iothreads); + return qemuMonitorJSONGetIOThreads(mon, iothreads, supportPolling); } diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h index 8811d85017..eeae18e5b0 100644 --- a/src/qemu/qemu_monitor.h +++ b/src/qemu/qemu_monitor.h @@ -1005,9 +1005,13 @@ typedef qemuMonitorIOThreadInfo *qemuMonitorIOThreadInfoPtr; struct _qemuMonitorIOThreadInfo { unsigned int iothread_id; int thread_id; + int poll_max_ns; + int poll_grow; + int poll_shrink; }; int qemuMonitorGetIOThreads(qemuMonitorPtr mon, - qemuMonitorIOThreadInfoPtr **iothreads); + qemuMonitorIOThreadInfoPtr **iothreads, + bool supportPolling); typedef struct _qemuMonitorMemoryDeviceInfo qemuMonitorMemoryDeviceInfo; typedef qemuMonitorMemoryDeviceInfo *qemuMonitorMemoryDeviceInfoPtr; diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c index 1d281af48e..ab73f7aaf6 100644 --- a/src/qemu/qemu_monitor_json.c +++ b/src/qemu/qemu_monitor_json.c @@ -6738,7 +6738,8 @@ qemuMonitorJSONRTCResetReinjection(qemuMonitorPtr mon) */ int qemuMonitorJSONGetIOThreads(qemuMonitorPtr mon, - qemuMonitorIOThreadInfoPtr **iothreads) + qemuMonitorIOThreadInfoPtr **iothreads, + bool supportPolling) { int ret = -1; virJSONValuePtr cmd; @@ -6804,6 +6805,22 @@ qemuMonitorJSONGetIOThreads(qemuMonitorPtr mon, "'thread-id' data")); goto cleanup; } + +#define VIR_IOTHREAD_GET_POLL_DATA(prop, store) \ + if (supportPolling && \ + virJSONValueObjectGetNumberInt(child, prop, &store) < 0) { \ + virReportError(VIR_ERR_INTERNAL_ERROR, "%s", \ + _("query-iothreads reply 
has malformed " \ + "'" prop "' data")); \ + goto cleanup; \ + } + + VIR_IOTHREAD_GET_POLL_DATA("poll-max-ns", info->poll_max_ns) + VIR_IOTHREAD_GET_POLL_DATA("poll-grow", info->poll_grow) + VIR_IOTHREAD_GET_POLL_DATA("poll-shrink", info->poll_shrink) + +#undef VIR_IOTHREAD_GET_POLL_DATA + } ret = n; diff --git a/src/qemu/qemu_monitor_json.h b/src/qemu/qemu_monitor_json.h index 79688c82f7..0f557a2991 100644 --- a/src/qemu/qemu_monitor_json.h +++ b/src/qemu/qemu_monitor_json.h @@ -480,7 +480,8 @@ int qemuMonitorJSONGetGuestCPU(qemuMonitorPtr mon, int qemuMonitorJSONRTCResetReinjection(qemuMonitorPtr mon); int qemuMonitorJSONGetIOThreads(qemuMonitorPtr mon, - qemuMonitorIOThreadInfoPtr **iothreads) + qemuMonitorIOThreadInfoPtr **iothreads, + bool supportPolling) ATTRIBUTE_NONNULL(2); int qemuMonitorJSONGetMemoryDeviceInfo(qemuMonitorPtr mon, diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c index 522f49d8b7..9eb4dfd5fa 100644 --- a/src/qemu/qemu_process.c +++ b/src/qemu/qemu_process.c @@ -2104,7 +2104,7 @@ qemuProcessDetectIOThreadPIDs(virQEMUDriverPtr driver, /* Get the list of IOThreads from qemu */ if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0) goto cleanup; - niothreads = qemuMonitorGetIOThreads(priv->mon, &iothreads); + niothreads = qemuMonitorGetIOThreads(priv->mon, &iothreads, false); if (qemuDomainObjExitMonitor(driver, vm) < 0) goto cleanup; if (niothreads < 0) diff --git a/tests/qemumonitorjsontest.c b/tests/qemumonitorjsontest.c index 5b2d6bb343..c9c1f2cada 100644 --- a/tests/qemumonitorjsontest.c +++ b/tests/qemumonitorjsontest.c @@ -2488,7 +2488,7 @@ testQemuMonitorJSONGetIOThreads(const void *data) goto cleanup; if ((ninfo = qemuMonitorGetIOThreads(qemuMonitorTestGetMonitor(test), - &info)) < 0) + &info, false)) < 0) goto cleanup; if (ninfo != 2) { -- 2.11.1

Signed-off-by: Pavel Hrdina <phrdina@redhat.com> --- src/qemu/qemu_command.c | 78 ++++++++++++++++++++-- src/qemu/qemu_command.h | 5 +- src/qemu/qemu_domain.c | 23 ++++++- src/qemu/qemu_domain.h | 6 ++ src/qemu/qemu_driver.c | 6 +- src/qemu/qemu_process.c | 14 +++- .../qemuxml2argv-iothreads-polling-disabled.args | 23 +++++++ .../qemuxml2argv-iothreads-polling-disabled.xml | 36 ++++++++++ .../qemuxml2argv-iothreads-polling-enabled.args | 23 +++++++ .../qemuxml2argv-iothreads-polling-enabled.xml | 36 ++++++++++ ...emuxml2argv-iothreads-polling-not-supported.xml | 1 + tests/qemuxml2argvtest.c | 8 +++ 12 files changed, 248 insertions(+), 11 deletions(-) create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-iothreads-polling-disabled.args create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-iothreads-polling-disabled.xml create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-iothreads-polling-enabled.args create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-iothreads-polling-enabled.xml create mode 120000 tests/qemuxml2argvdata/qemuxml2argv-iothreads-polling-not-supported.xml diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c index 552fdcf05e..1a189459a4 100644 --- a/src/qemu/qemu_command.c +++ b/src/qemu/qemu_command.c @@ -7277,11 +7277,59 @@ qemuBuildMemCommandLine(virCommandPtr cmd, } +int +qemuBuildIOThreadProps(const virDomainIOThreadIDDef *def, + virQEMUCapsPtr qemuCaps, + virJSONValuePtr *props) +{ + virJSONValuePtr newProps = NULL; + + if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_IOTHREAD_POLLING)) { + switch (def->poll_enabled) { + case VIR_TRISTATE_BOOL_YES: + if (virJSONValueObjectCreate(&newProps, "u:poll-max-ns", + def->poll_max_ns, NULL) < 0) + goto error; + + if (def->poll_grow && + virJSONValueObjectAdd(newProps, "u:poll-grow", + def->poll_grow, NULL) < 0) + goto error; + + if (def->poll_shrink && + virJSONValueObjectAdd(newProps, "u:poll-shrink", + def->poll_shrink, NULL) < 0) + goto error; + break; + case VIR_TRISTATE_BOOL_NO: + if 
(virJSONValueObjectCreate(&newProps, "u:poll-max-ns", 0, NULL) < 0) + goto error; + break; + case VIR_TRISTATE_BOOL_ABSENT: + case VIR_TRISTATE_BOOL_LAST: + break; + } + } + + *props = newProps; + return 0; + + error: + virJSONValueFree(newProps); + return -1; +} + + static int qemuBuildIOThreadCommandLine(virCommandPtr cmd, - const virDomainDef *def) + const virDomainDef *def, + virQEMUCapsPtr qemuCaps) { size_t i; + int ret = -1; + char *alias = NULL; + char *propsCmd = NULL; + virJSONValuePtr props = NULL; if (def->niothreadids == 0) return 0; @@ -7293,11 +7341,31 @@ qemuBuildIOThreadCommandLine(virCommandPtr cmd, */ for (i = 0; i < def->niothreadids; i++) { virCommandAddArg(cmd, "-object"); - virCommandAddArgFormat(cmd, "iothread,id=iothread%u", - def->iothreadids[i]->iothread_id); + + if (virAsprintf(&alias, "iothread%u", def->iothreadids[i]->iothread_id) < 0) + goto cleanup; + + if (qemuBuildIOThreadProps(def->iothreadids[i], qemuCaps, &props) < 0) + goto cleanup; + + if (!(propsCmd = virQEMUBuildObjectCommandlineFromJSON("iothread", + alias, props))) + goto cleanup; + + virCommandAddArg(cmd, propsCmd); + + virJSONValueFree(props); + VIR_FREE(propsCmd); + VIR_FREE(alias); } - return 0; + ret = 0; + + cleanup: + virJSONValueFree(props); + VIR_FREE(propsCmd); + VIR_FREE(alias); + return ret; } @@ -9598,7 +9666,7 @@ qemuBuildCommandLine(virQEMUDriverPtr driver, if (qemuBuildSmpCommandLine(cmd, def) < 0) goto error; - if (qemuBuildIOThreadCommandLine(cmd, def) < 0) + if (qemuBuildIOThreadCommandLine(cmd, def, qemuCaps) < 0) goto error; if (virDomainNumaGetNodeCount(def->numa) && diff --git a/src/qemu/qemu_command.h b/src/qemu/qemu_command.h index 69fe846139..84e8099bfe 100644 --- a/src/qemu/qemu_command.h +++ b/src/qemu/qemu_command.h @@ -202,6 +202,9 @@ char *qemuBuildShmemDevStr(virDomainDefPtr def, virQEMUCapsPtr qemuCaps) ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2) ATTRIBUTE_NONNULL(3); - +int qemuBuildIOThreadProps(const virDomainIOThreadIDDef *def, + 
virQEMUCapsPtr qemuCaps, + virJSONValuePtr *props) + ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2) ATTRIBUTE_NONNULL(3); #endif /* __QEMU_COMMAND_H__*/ diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c index ea4b28288e..009c93a15e 100644 --- a/src/qemu/qemu_domain.c +++ b/src/qemu/qemu_domain.c @@ -3283,7 +3283,8 @@ virDomainDefParserConfig virQEMUDriverDomainDefParserConfig = { .features = VIR_DOMAIN_DEF_FEATURE_MEMORY_HOTPLUG | VIR_DOMAIN_DEF_FEATURE_OFFLINE_VCPUPIN | - VIR_DOMAIN_DEF_FEATURE_INDIVIDUAL_VCPUS, + VIR_DOMAIN_DEF_FEATURE_INDIVIDUAL_VCPUS | + VIR_DOMAIN_DEF_FEATURE_IOTHREAD_POLLING, }; @@ -8280,3 +8281,23 @@ qemuDomainNamespaceTeardownRNG(virQEMUDriverPtr driver, cleanup: return ret; } + + +void +qemuDomainIOThreadUpdate(virDomainIOThreadIDDefPtr iothread, + qemuMonitorIOThreadInfoPtr iothread_info, + bool supportPolling) +{ + iothread->thread_id = iothread_info->thread_id; + + if (supportPolling && iothread->poll_enabled == VIR_TRISTATE_BOOL_ABSENT) { + iothread->poll_max_ns = iothread_info->poll_max_ns; + iothread->poll_grow = iothread_info->poll_grow; + iothread->poll_shrink = iothread_info->poll_shrink; + + if (iothread->poll_max_ns == 0) + iothread->poll_enabled = VIR_TRISTATE_BOOL_NO; + else + iothread->poll_enabled = VIR_TRISTATE_BOOL_YES; + } +} diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h index 8ba807c656..900b689411 100644 --- a/src/qemu/qemu_domain.h +++ b/src/qemu/qemu_domain.h @@ -848,4 +848,10 @@ int qemuDomainNamespaceSetupRNG(virQEMUDriverPtr driver, int qemuDomainNamespaceTeardownRNG(virQEMUDriverPtr driver, virDomainObjPtr vm, virDomainRNGDefPtr rng); + +void qemuDomainIOThreadUpdate(virDomainIOThreadIDDefPtr iothread, + qemuMonitorIOThreadInfoPtr iothread_info, + bool supportPolling) + ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2); + #endif /* __QEMU_DOMAIN_H__ */ diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index ff610a7692..9e3691b575 100644 --- a/src/qemu/qemu_driver.c +++ 
b/src/qemu/qemu_driver.c @@ -5581,6 +5581,7 @@ qemuDomainHotplugAddIOThread(virQEMUDriverPtr driver, unsigned int orig_niothreads = vm->def->niothreadids; unsigned int exp_niothreads = vm->def->niothreadids; int new_niothreads = 0; + bool supportPolling = virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_IOTHREAD_POLLING); qemuMonitorIOThreadInfoPtr *new_iothreads = NULL; virDomainIOThreadIDDefPtr iothrid; @@ -5599,7 +5600,8 @@ qemuDomainHotplugAddIOThread(virQEMUDriverPtr driver, * and add the thread_id to the vm->def->iothreadids list. */ if ((new_niothreads = qemuMonitorGetIOThreads(priv->mon, - &new_iothreads, false)) < 0) + &new_iothreads, + supportPolling)) < 0) goto exit_monitor; if (qemuDomainObjExitMonitor(driver, vm) < 0) @@ -5632,7 +5634,7 @@ qemuDomainHotplugAddIOThread(virQEMUDriverPtr driver, if (!(iothrid = virDomainIOThreadIDAdd(vm->def, iothread_id))) goto cleanup; - iothrid->thread_id = new_iothreads[idx]->thread_id; + qemuDomainIOThreadUpdate(iothrid, new_iothreads[idx], supportPolling); if (qemuProcessSetupIOThread(vm, iothrid) < 0) goto cleanup; diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c index 9eb4dfd5fa..4f64c0e7d6 100644 --- a/src/qemu/qemu_process.c +++ b/src/qemu/qemu_process.c @@ -2100,11 +2100,12 @@ qemuProcessDetectIOThreadPIDs(virQEMUDriverPtr driver, int niothreads = 0; int ret = -1; size_t i; + bool supportPolling = virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_IOTHREAD_POLLING); /* Get the list of IOThreads from qemu */ if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0) goto cleanup; - niothreads = qemuMonitorGetIOThreads(priv->mon, &iothreads, false); + niothreads = qemuMonitorGetIOThreads(priv->mon, &iothreads, supportPolling); if (qemuDomainObjExitMonitor(driver, vm) < 0) goto cleanup; if (niothreads < 0) @@ -2134,7 +2135,7 @@ qemuProcessDetectIOThreadPIDs(virQEMUDriverPtr driver, iothreads[i]->iothread_id); goto cleanup; } - iothrid->thread_id = iothreads[i]->thread_id; + qemuDomainIOThreadUpdate(iothrid, 
iothreads[i], supportPolling); } ret = 0; @@ -4571,6 +4572,15 @@ qemuProcessStartValidateIOThreads(virDomainObjPtr vm, return -1; } + for (i = 0; i < vm->def->niothreadids; i++) { + if (vm->def->iothreadids[i]->poll_enabled != VIR_TRISTATE_BOOL_ABSENT && + !virQEMUCapsGet(qemuCaps, QEMU_CAPS_IOTHREAD_POLLING)) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", + _("IOThreads polling is not supported for this QEMU")); + return -1; + } + } + for (i = 0; i < vm->def->ncontrollers; i++) { virDomainControllerDefPtr cont = vm->def->controllers[i]; diff --git a/tests/qemuxml2argvdata/qemuxml2argv-iothreads-polling-disabled.args b/tests/qemuxml2argvdata/qemuxml2argv-iothreads-polling-disabled.args new file mode 100644 index 0000000000..e9b53f0976 --- /dev/null +++ b/tests/qemuxml2argvdata/qemuxml2argv-iothreads-polling-disabled.args @@ -0,0 +1,23 @@ +LC_ALL=C \ +PATH=/bin \ +HOME=/home/test \ +USER=test \ +LOGNAME=test \ +QEMU_AUDIO_DRV=none \ +/usr/bin/qemu \ +-name QEMUGuest1 \ +-S \ +-M pc \ +-m 214 \ +-smp 2,sockets=2,cores=1,threads=1 \ +-object iothread,id=iothread1,poll-max-ns=0 \ +-object iothread,id=iothread2 \ +-uuid c7a5fdbd-edaf-9455-926a-d65c16db1809 \ +-nographic \ +-nodefaults \ +-monitor unix:/tmp/lib/domain--1-QEMUGuest1/monitor.sock,server,nowait \ +-no-acpi \ +-boot c \ +-usb \ +-drive file=/dev/HostVG/QEMUGuest1,format=raw,if=none,id=drive-ide0-0-0 \ +-device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0 diff --git a/tests/qemuxml2argvdata/qemuxml2argv-iothreads-polling-disabled.xml b/tests/qemuxml2argvdata/qemuxml2argv-iothreads-polling-disabled.xml new file mode 100644 index 0000000000..f9d769f860 --- /dev/null +++ b/tests/qemuxml2argvdata/qemuxml2argv-iothreads-polling-disabled.xml @@ -0,0 +1,36 @@ +<domain type='qemu'> + <name>QEMUGuest1</name> + <uuid>c7a5fdbd-edaf-9455-926a-d65c16db1809</uuid> + <memory unit='KiB'>219136</memory> + <currentMemory unit='KiB'>219136</currentMemory> + <vcpu placement='static'>2</vcpu> + 
 <iothreads>2</iothreads>
+  <iothreadids>
+    <iothread id='1'>
+      <polling enabled='no'/>
+    </iothread>
+  </iothreadids>
+  <os>
+    <type arch='x86_64' machine='pc'>hvm</type>
+    <boot dev='hd'/>
+  </os>
+  <clock offset='utc'/>
+  <on_poweroff>destroy</on_poweroff>
+  <on_reboot>restart</on_reboot>
+  <on_crash>destroy</on_crash>
+  <devices>
+    <emulator>/usr/bin/qemu</emulator>
+    <disk type='block' device='disk'>
+      <driver name='qemu' type='raw'/>
+      <source dev='/dev/HostVG/QEMUGuest1'/>
+      <target dev='hda' bus='ide'/>
+      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
+    </disk>
+    <controller type='usb' index='0'/>
+    <controller type='ide' index='0'/>
+    <controller type='pci' index='0' model='pci-root'/>
+    <input type='mouse' bus='ps2'/>
+    <input type='keyboard' bus='ps2'/>
+    <memballoon model='none'/>
+  </devices>
+</domain>
diff --git a/tests/qemuxml2argvdata/qemuxml2argv-iothreads-polling-enabled.args b/tests/qemuxml2argvdata/qemuxml2argv-iothreads-polling-enabled.args
new file mode 100644
index 0000000000..b3495dfe9c
--- /dev/null
+++ b/tests/qemuxml2argvdata/qemuxml2argv-iothreads-polling-enabled.args
@@ -0,0 +1,23 @@
+LC_ALL=C \
+PATH=/bin \
+HOME=/home/test \
+USER=test \
+LOGNAME=test \
+QEMU_AUDIO_DRV=none \
+/usr/bin/qemu \
+-name QEMUGuest1 \
+-S \
+-M pc \
+-m 214 \
+-smp 2,sockets=2,cores=1,threads=1 \
+-object iothread,id=iothread1,poll-max-ns=4000 \
+-object iothread,id=iothread2 \
+-uuid c7a5fdbd-edaf-9455-926a-d65c16db1809 \
+-nographic \
+-nodefaults \
+-monitor unix:/tmp/lib/domain--1-QEMUGuest1/monitor.sock,server,nowait \
+-no-acpi \
+-boot c \
+-usb \
+-drive file=/dev/HostVG/QEMUGuest1,format=raw,if=none,id=drive-ide0-0-0 \
+-device ide-drive,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0
diff --git a/tests/qemuxml2argvdata/qemuxml2argv-iothreads-polling-enabled.xml b/tests/qemuxml2argvdata/qemuxml2argv-iothreads-polling-enabled.xml
new file mode 100644
index 0000000000..44b6e2e219
--- /dev/null
+++ b/tests/qemuxml2argvdata/qemuxml2argv-iothreads-polling-enabled.xml
@@ -0,0 +1,36 @@
+<domain type='qemu'>
+  <name>QEMUGuest1</name>
+  <uuid>c7a5fdbd-edaf-9455-926a-d65c16db1809</uuid>
+  <memory unit='KiB'>219136</memory>
+  <currentMemory unit='KiB'>219136</currentMemory>
+  <vcpu placement='static'>2</vcpu>
+  <iothreads>2</iothreads>
+  <iothreadids>
+    <iothread id='1'>
+      <polling enabled='yes' max_ns='4000'/>
+    </iothread>
+  </iothreadids>
+  <os>
+    <type arch='x86_64' machine='pc'>hvm</type>
+    <boot dev='hd'/>
+  </os>
+  <clock offset='utc'/>
+  <on_poweroff>destroy</on_poweroff>
+  <on_reboot>restart</on_reboot>
+  <on_crash>destroy</on_crash>
+  <devices>
+    <emulator>/usr/bin/qemu</emulator>
+    <disk type='block' device='disk'>
+      <driver name='qemu' type='raw'/>
+      <source dev='/dev/HostVG/QEMUGuest1'/>
+      <target dev='hda' bus='ide'/>
+      <address type='drive' controller='0' bus='0' target='0' unit='0'/>
+    </disk>
+    <controller type='usb' index='0'/>
+    <controller type='ide' index='0'/>
+    <controller type='pci' index='0' model='pci-root'/>
+    <input type='mouse' bus='ps2'/>
+    <input type='keyboard' bus='ps2'/>
+    <memballoon model='none'/>
+  </devices>
+</domain>
diff --git a/tests/qemuxml2argvdata/qemuxml2argv-iothreads-polling-not-supported.xml b/tests/qemuxml2argvdata/qemuxml2argv-iothreads-polling-not-supported.xml
new file mode 120000
index 0000000000..5b40c52a2d
--- /dev/null
+++ b/tests/qemuxml2argvdata/qemuxml2argv-iothreads-polling-not-supported.xml
@@ -0,0 +1 @@
+qemuxml2argv-iothreads-polling-enabled.xml
\ No newline at end of file
diff --git a/tests/qemuxml2argvtest.c b/tests/qemuxml2argvtest.c
index f55b04b057..603e43d295 100644
--- a/tests/qemuxml2argvtest.c
+++ b/tests/qemuxml2argvtest.c
@@ -1507,6 +1507,14 @@ mymain(void)
     DO_TEST("iothreads-virtio-scsi-ccw", QEMU_CAPS_OBJECT_IOTHREAD,
             QEMU_CAPS_VIRTIO_SCSI, QEMU_CAPS_VIRTIO_SCSI_IOTHREAD,
             QEMU_CAPS_VIRTIO_CCW, QEMU_CAPS_VIRTIO_S390);
+    DO_TEST("iothreads-polling-enabled",
+            QEMU_CAPS_OBJECT_IOTHREAD,
+            QEMU_CAPS_IOTHREAD_POLLING);
+    DO_TEST("iothreads-polling-disabled",
+            QEMU_CAPS_OBJECT_IOTHREAD,
+            QEMU_CAPS_IOTHREAD_POLLING);
+    DO_TEST_FAILURE("iothreads-polling-not-supported",
+                    QEMU_CAPS_OBJECT_IOTHREAD);
     DO_TEST("cpu-topology1", NONE);
     DO_TEST("cpu-topology2", NONE);
--
2.11.1
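For readers mapping the test XML above to the expected command line: each `<polling>` setting becomes extra properties on the corresponding `-object iothread,...` argument (here via qemuBuildIOThreadProps). The sketch below approximates that mapping in Python; the function name and the "disable by forcing `poll-max-ns=0`" convention are illustrative assumptions, not the series' actual implementation, though passing 0 is how QEMU itself turns polling off.

```python
def iothread_object_arg(iothread_id, poll_enabled=None, poll_max_ns=0,
                        poll_grow=0, poll_shrink=0):
    """Approximate the -object argument built for one IOThread.

    poll_enabled mirrors libvirt's tristate: None (absent), True, False.
    """
    props = {"id": "iothread%d" % iothread_id}
    if poll_enabled is True:
        # polling explicitly enabled: pass the tuning knobs through
        if poll_max_ns:
            props["poll-max-ns"] = poll_max_ns
        if poll_grow:
            props["poll-grow"] = poll_grow
        if poll_shrink:
            props["poll-shrink"] = poll_shrink
    elif poll_enabled is False:
        # polling explicitly disabled: poll-max-ns=0 switches it off in QEMU
        props["poll-max-ns"] = 0
    # tristate "absent": emit no poll-* properties, QEMU defaults apply
    return "iothread," + ",".join("%s=%s" % kv for kv in props.items())
```

With the enabled test file above, `iothread_object_arg(1, True, 4000)` reproduces the `-object iothread,id=iothread1,poll-max-ns=4000` line, and iothread 2 (no `<polling>`) yields plain `iothread,id=iothread2`.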

Signed-off-by: Pavel Hrdina <phrdina@redhat.com> --- src/conf/domain_conf.c | 4 +- src/conf/domain_conf.h | 2 +- src/qemu/qemu_driver.c | 182 +++++++++++++++++++++++++++++++++++++------------ 3 files changed, 140 insertions(+), 48 deletions(-) diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c index 4b552a9175..64303a6790 100644 --- a/src/conf/domain_conf.c +++ b/src/conf/domain_conf.c @@ -20348,14 +20348,14 @@ virDomainIOThreadIDFind(const virDomainDef *def, virDomainIOThreadIDDefPtr virDomainIOThreadIDAdd(virDomainDefPtr def, - unsigned int iothread_id) + virDomainIOThreadIDDef iothread) { virDomainIOThreadIDDefPtr iothrid = NULL; if (VIR_ALLOC(iothrid) < 0) goto error; - iothrid->iothread_id = iothread_id; + *iothrid = iothread; if (VIR_APPEND_ELEMENT_COPY(def->iothreadids, def->niothreadids, iothrid) < 0) diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h index 8ac1d8a409..5f8c745d8a 100644 --- a/src/conf/domain_conf.h +++ b/src/conf/domain_conf.h @@ -2791,7 +2791,7 @@ int virDomainDefAddImplicitDevices(virDomainDefPtr def); virDomainIOThreadIDDefPtr virDomainIOThreadIDFind(const virDomainDef *def, unsigned int iothread_id); virDomainIOThreadIDDefPtr virDomainIOThreadIDAdd(virDomainDefPtr def, - unsigned int iothread_id); + virDomainIOThreadIDDef iothread); void virDomainIOThreadIDDel(virDomainDefPtr def, unsigned int iothread_id); unsigned int virDomainDefFormatConvertXMLFlags(unsigned int flags); diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 9e3691b575..96c8b2b8bc 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -5571,7 +5571,7 @@ qemuDomainPinIOThread(virDomainPtr dom, static int qemuDomainHotplugAddIOThread(virQEMUDriverPtr driver, virDomainObjPtr vm, - unsigned int iothread_id) + virDomainIOThreadIDDef iothread) { qemuDomainObjPrivatePtr priv = vm->privateData; char *alias = NULL; @@ -5583,14 +5583,18 @@ qemuDomainHotplugAddIOThread(virQEMUDriverPtr driver, int new_niothreads = 0; bool 
supportPolling = virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_IOTHREAD_POLLING); qemuMonitorIOThreadInfoPtr *new_iothreads = NULL; - virDomainIOThreadIDDefPtr iothrid; + virDomainIOThreadIDDefPtr new_iothread = NULL; + virJSONValuePtr props = NULL; - if (virAsprintf(&alias, "iothread%u", iothread_id) < 0) + if (virAsprintf(&alias, "iothread%u", iothread.iothread_id) < 0) return -1; qemuDomainObjEnterMonitor(driver, vm); - rc = qemuMonitorAddObject(priv->mon, "iothread", alias, NULL); + if (qemuBuildIOThreadProps(&iothread, priv->qemuCaps, &props) < 0) + goto cleanup; + + rc = qemuMonitorAddObject(priv->mon, "iothread", alias, props); exp_niothreads++; if (rc < 0) goto exit_monitor; @@ -5620,23 +5624,23 @@ qemuDomainHotplugAddIOThread(virQEMUDriverPtr driver, * in the QEMU IOThread list, so we can add it to our iothreadids list */ for (idx = 0; idx < new_niothreads; idx++) { - if (new_iothreads[idx]->iothread_id == iothread_id) + if (new_iothreads[idx]->iothread_id == iothread.iothread_id) break; } if (idx == new_niothreads) { virReportError(VIR_ERR_INTERNAL_ERROR, _("cannot find new IOThread '%u' in QEMU monitor."), - iothread_id); + iothread.iothread_id); goto cleanup; } - if (!(iothrid = virDomainIOThreadIDAdd(vm->def, iothread_id))) + if (!(new_iothread = virDomainIOThreadIDAdd(vm->def, iothread))) goto cleanup; - qemuDomainIOThreadUpdate(iothrid, new_iothreads[idx], supportPolling); + qemuDomainIOThreadUpdate(new_iothread, new_iothreads[idx], supportPolling); - if (qemuProcessSetupIOThread(vm, iothrid) < 0) + if (qemuProcessSetupIOThread(vm, new_iothread) < 0) goto cleanup; ret = 0; @@ -5649,6 +5653,7 @@ qemuDomainHotplugAddIOThread(virQEMUDriverPtr driver, } virDomainAuditIOThread(vm, orig_niothreads, new_niothreads, "update", rc == 0); + virJSONValueFree(props); VIR_FREE(alias); return ret; @@ -5773,10 +5778,73 @@ qemuDomainDelIOThreadCheck(virDomainDefPtr def, return 0; } + +static int +qemuDomainIOThreadParseParams(virTypedParameterPtr params, + int nparams, + 
qemuDomainObjPrivatePtr priv, + virDomainIOThreadIDDefPtr iothread) +{ + int poll_enabled; + int rc; + + if (virTypedParamsValidate(params, nparams, + VIR_DOMAIN_IOTHREAD_POLL_ENABLED, + VIR_TYPED_PARAM_BOOLEAN, + VIR_DOMAIN_IOTHREAD_POLL_MAX_NS, + VIR_TYPED_PARAM_UINT, + VIR_DOMAIN_IOTHREAD_POLL_GROW, + VIR_TYPED_PARAM_UINT, + VIR_DOMAIN_IOTHREAD_POLL_SHRINK, + VIR_TYPED_PARAM_UINT, + NULL) < 0) + return -1; + + if ((rc = virTypedParamsGetBoolean(params, nparams, + VIR_DOMAIN_IOTHREAD_POLL_ENABLED, + &poll_enabled)) < 0) + return -1; + + if (rc > 0) { + if (poll_enabled) + iothread->poll_enabled = VIR_TRISTATE_BOOL_YES; + else + iothread->poll_enabled = VIR_TRISTATE_BOOL_NO; + } + + if (virTypedParamsGetUInt(params, nparams, + VIR_DOMAIN_IOTHREAD_POLL_MAX_NS, + &iothread->poll_max_ns) < 0) + return -1; + + if (virTypedParamsGetUInt(params, nparams, + VIR_DOMAIN_IOTHREAD_POLL_GROW, + &iothread->poll_grow) < 0) + return -1; + + if (virTypedParamsGetUInt(params, nparams, + VIR_DOMAIN_IOTHREAD_POLL_SHRINK, + &iothread->poll_shrink) < 0) + return -1; + + if (virDomainIOThreadDefPostParse(iothread) < 0) + return -1; + + if (iothread->poll_enabled != VIR_TRISTATE_BOOL_ABSENT && + !virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_IOTHREAD_POLLING)) { + virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s", + _("IOThreads polling is not supported for this QEMU")); + return -1; + } + + return 0; +} + + static int qemuDomainChgIOThread(virQEMUDriverPtr driver, virDomainObjPtr vm, - unsigned int iothread_id, + virDomainIOThreadIDDef iothread, bool add, unsigned int flags) { @@ -5804,36 +5872,39 @@ qemuDomainChgIOThread(virQEMUDriverPtr driver, } if (add) { - if (qemuDomainAddIOThreadCheck(def, iothread_id) < 0) + if (qemuDomainAddIOThreadCheck(def, iothread.iothread_id) < 0) goto endjob; - if (qemuDomainHotplugAddIOThread(driver, vm, iothread_id) < 0) + if (qemuDomainHotplugAddIOThread(driver, vm, iothread) < 0) goto endjob; } else { - if (qemuDomainDelIOThreadCheck(def, iothread_id) < 0) 
+ if (qemuDomainDelIOThreadCheck(def, iothread.iothread_id) < 0) goto endjob; - if (qemuDomainHotplugDelIOThread(driver, vm, iothread_id) < 0) + if (qemuDomainHotplugDelIOThread(driver, vm, + iothread.iothread_id) < 0) goto endjob; } - if (virDomainSaveStatus(driver->xmlopt, cfg->stateDir, vm, driver->caps) < 0) + if (virDomainSaveStatus(driver->xmlopt, cfg->stateDir, vm, + driver->caps) < 0) goto endjob; } if (persistentDef) { if (add) { - if (qemuDomainAddIOThreadCheck(persistentDef, iothread_id) < 0) + if (qemuDomainAddIOThreadCheck(persistentDef, + iothread.iothread_id) < 0) goto endjob; - if (!virDomainIOThreadIDAdd(persistentDef, iothread_id)) + if (!virDomainIOThreadIDAdd(persistentDef, iothread)) goto endjob; - } else { - if (qemuDomainDelIOThreadCheck(persistentDef, iothread_id) < 0) + if (qemuDomainDelIOThreadCheck(persistentDef, + iothread.iothread_id) < 0) goto endjob; - virDomainIOThreadIDDel(persistentDef, iothread_id); + virDomainIOThreadIDDel(persistentDef, iothread.iothread_id); } if (virDomainSaveConfig(cfg->configDir, driver->caps, @@ -5851,35 +5922,53 @@ qemuDomainChgIOThread(virQEMUDriverPtr driver, return ret; } + +static int +qemuDomainAddIOThreadParams(virDomainPtr dom, + unsigned int iothread_id, + virTypedParameterPtr params, + int nparams, + unsigned int flags) +{ + virQEMUDriverPtr driver = dom->conn->privateData; + virDomainObjPtr vm = NULL; + virDomainIOThreadIDDef iothread = {0}; + int ret = -1; + + virCheckFlags(VIR_DOMAIN_AFFECT_LIVE | + VIR_DOMAIN_AFFECT_CONFIG, -1); + + if (iothread_id == 0) { + virReportError(VIR_ERR_INVALID_ARG, "%s", + _("invalid value of 0 for iothread_id")); + return -1; + } + iothread.iothread_id = iothread_id; + + if (!(vm = qemuDomObjFromDomain(dom))) + goto cleanup; + + if (qemuDomainIOThreadParseParams(params, nparams, vm->privateData, + &iothread) < 0) + goto cleanup; + + if (virDomainAddIOThreadParamsEnsureACL(dom->conn, vm->def, flags) < 0) + goto cleanup; + + ret = qemuDomainChgIOThread(driver, vm, 
iothread, true, flags); + + cleanup: + virDomainObjEndAPI(&vm); + return ret; +} + + static int qemuDomainAddIOThread(virDomainPtr dom, unsigned int iothread_id, unsigned int flags) { - virQEMUDriverPtr driver = dom->conn->privateData; - virDomainObjPtr vm = NULL; - int ret = -1; - - virCheckFlags(VIR_DOMAIN_AFFECT_LIVE | - VIR_DOMAIN_AFFECT_CONFIG, -1); - - if (iothread_id == 0) { - virReportError(VIR_ERR_INVALID_ARG, "%s", - _("invalid value of 0 for iothread_id")); - return -1; - } - - if (!(vm = qemuDomObjFromDomain(dom))) - goto cleanup; - - if (virDomainAddIOThreadEnsureACL(dom->conn, vm->def, flags) < 0) - goto cleanup; - - ret = qemuDomainChgIOThread(driver, vm, iothread_id, true, flags); - - cleanup: - virDomainObjEndAPI(&vm); - return ret; + return qemuDomainAddIOThreadParams(dom, iothread_id, NULL, 0, flags); } @@ -5890,6 +5979,7 @@ qemuDomainDelIOThread(virDomainPtr dom, { virQEMUDriverPtr driver = dom->conn->privateData; virDomainObjPtr vm = NULL; + virDomainIOThreadIDDef iothread = {0}; int ret = -1; virCheckFlags(VIR_DOMAIN_AFFECT_LIVE | @@ -5900,6 +5990,7 @@ qemuDomainDelIOThread(virDomainPtr dom, _("invalid value of 0 for iothread_id")); return -1; } + iothread.iothread_id = iothread_id; if (!(vm = qemuDomObjFromDomain(dom))) goto cleanup; @@ -5907,7 +5998,7 @@ qemuDomainDelIOThread(virDomainPtr dom, if (virDomainDelIOThreadEnsureACL(dom->conn, vm->def, flags) < 0) goto cleanup; - ret = qemuDomainChgIOThread(driver, vm, iothread_id, false, flags); + ret = qemuDomainChgIOThread(driver, vm, iothread, false, flags); cleanup: virDomainObjEndAPI(&vm); @@ -20274,6 +20365,7 @@ static virHypervisorDriver qemuHypervisorDriver = { .domainGetIOThreadInfo = qemuDomainGetIOThreadInfo, /* 1.2.14 */ .domainPinIOThread = qemuDomainPinIOThread, /* 1.2.14 */ .domainAddIOThread = qemuDomainAddIOThread, /* 1.2.15 */ + .domainAddIOThreadParams = qemuDomainAddIOThreadParams, /* 3.1.0 */ .domainDelIOThread = qemuDomainDelIOThread, /* 1.2.15 */ .domainGetSecurityLabel = 
qemuDomainGetSecurityLabel, /* 0.6.1 */ .domainGetSecurityLabelList = qemuDomainGetSecurityLabelList, /* 0.10.0 */ -- 2.11.1
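To summarize the typed-parameter handling above: qemuDomainIOThreadParseParams accepts VIR_DOMAIN_IOTHREAD_POLL_ENABLED as a boolean plus three unsigned tuning knobs, and rejects any polling request when QEMU lacks QEMU_CAPS_IOTHREAD_POLLING. A minimal Python sketch of those checks follows; the rule that tunables require polling to be enabled mirrors what virDomainIOThreadDefPostParse presumably enforces (its body is not part of this excerpt, so treat that as an assumption).

```python
# Mirrors virTristateBool: ABSENT=0, YES=1, NO=2
ABSENT, YES, NO = 0, 1, 2

def parse_iothread_params(params, polling_supported):
    """Validate IOThread typed parameters, roughly as the qemu driver does.

    params: dict with optional keys 'poll_enabled' (bool) and
    'poll_max_ns'/'poll_grow'/'poll_shrink' (unsigned ints).
    Returns (enabled, max_ns, grow, shrink) or raises ValueError.
    """
    enabled = ABSENT
    if "poll_enabled" in params:
        enabled = YES if params["poll_enabled"] else NO
    max_ns = params.get("poll_max_ns", 0)
    grow = params.get("poll_grow", 0)
    shrink = params.get("poll_shrink", 0)
    # assumed post-parse rule: tuning knobs only make sense with polling on
    if enabled != YES and (max_ns or grow or shrink):
        raise ValueError("polling tunables require poll_enabled=yes")
    # capability check from qemuDomainIOThreadParseParams: any explicit
    # polling setting needs QEMU_CAPS_IOTHREAD_POLLING
    if enabled != ABSENT and not polling_supported:
        raise ValueError("IOThreads polling is not supported for this QEMU")
    return enabled, max_ns, grow, shrink
```

For example, an empty parameter list leaves the tristate absent (old `virsh iothreadadd` behavior, which the patch routes through qemuDomainAddIOThreadParams with `nparams == 0`).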

Signed-off-by: Pavel Hrdina <phrdina@redhat.com> --- src/conf/domain_conf.c | 26 ++++++++ src/conf/domain_conf.h | 8 +++ src/libvirt_private.syms | 1 + src/qemu/qemu_driver.c | 150 +++++++++++++++++++++++++++++++++++++++++-- src/qemu/qemu_monitor.c | 19 ++++++ src/qemu/qemu_monitor.h | 3 + src/qemu/qemu_monitor_json.c | 32 +++++++++ src/qemu/qemu_monitor_json.h | 4 ++ 8 files changed, 236 insertions(+), 7 deletions(-) diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c index 64303a6790..cc1be373ca 100644 --- a/src/conf/domain_conf.c +++ b/src/conf/domain_conf.c @@ -20370,6 +20370,32 @@ virDomainIOThreadIDAdd(virDomainDefPtr def, void +virDomainIOThreadIDMod(virDomainIOThreadIDDefPtr old_iothread, + virDomainIOThreadIDDefPtr new_iothread) +{ + old_iothread->poll_enabled = new_iothread->poll_enabled; + + switch (new_iothread->poll_enabled) { + case VIR_TRISTATE_BOOL_YES: + old_iothread->poll_max_ns = new_iothread->poll_max_ns; + old_iothread->poll_grow = new_iothread->poll_grow; + old_iothread->poll_shrink = new_iothread->poll_shrink; + break; + + case VIR_TRISTATE_BOOL_ABSENT: + case VIR_TRISTATE_BOOL_NO: + old_iothread->poll_max_ns = 0; + old_iothread->poll_grow = 0; + old_iothread->poll_shrink = 0; + break; + + case VIR_TRISTATE_BOOL_LAST: + break; + } +} + + +void virDomainIOThreadIDDel(virDomainDefPtr def, unsigned int iothread_id) { diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h index 5f8c745d8a..6f7edb3bfa 100644 --- a/src/conf/domain_conf.h +++ b/src/conf/domain_conf.h @@ -2065,6 +2065,12 @@ struct _virDomainHugePage { # define VIR_DOMAIN_CPUMASK_LEN 1024 +typedef enum { + VIR_DOMAIN_IOTHREAD_ACTION_ADD, + VIR_DOMAIN_IOTHREAD_ACTION_DEL, + VIR_DOMAIN_IOTHREAD_ACTION_MOD, +} virDomainIOThreadAction; + typedef struct _virDomainIOThreadIDDef virDomainIOThreadIDDef; typedef virDomainIOThreadIDDef *virDomainIOThreadIDDefPtr; @@ -2792,6 +2798,8 @@ virDomainIOThreadIDDefPtr virDomainIOThreadIDFind(const virDomainDef *def, unsigned int 
iothread_id); virDomainIOThreadIDDefPtr virDomainIOThreadIDAdd(virDomainDefPtr def, virDomainIOThreadIDDef iothread); +void virDomainIOThreadIDMod(virDomainIOThreadIDDefPtr old_iothread, + virDomainIOThreadIDDefPtr new_iothread); void virDomainIOThreadIDDel(virDomainDefPtr def, unsigned int iothread_id); unsigned int virDomainDefFormatConvertXMLFlags(unsigned int flags); diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms index 97aee9c0e3..b9f0ac0c9f 100644 --- a/src/libvirt_private.syms +++ b/src/libvirt_private.syms @@ -376,6 +376,7 @@ virDomainIOThreadIDAdd; virDomainIOThreadIDDefFree; virDomainIOThreadIDDel; virDomainIOThreadIDFind; +virDomainIOThreadIDMod; virDomainKeyWrapCipherNameTypeFromString; virDomainKeyWrapCipherNameTypeToString; virDomainLeaseDefFree; diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 96c8b2b8bc..46dc4a5ffb 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -5662,6 +5662,55 @@ qemuDomainHotplugAddIOThread(virQEMUDriverPtr driver, goto cleanup; } + +static int +qemuDomainHotplugModIOThread(virQEMUDriverPtr driver, + virDomainObjPtr vm, + virDomainIOThreadIDDef iothread, + virDomainIOThreadIDDefPtr old_iothread) +{ + qemuDomainObjPrivatePtr priv = vm->privateData; + qemuMonitorIOThreadInfo iothread_info = {0}; + int rc; + + iothread_info.iothread_id = old_iothread->iothread_id; + + switch (iothread.poll_enabled) { + case VIR_TRISTATE_BOOL_ABSENT: + virReportError(VIR_ERR_INVALID_ARG, "%s", + _("IOThread polling must be specified for " + "live update")); + return -1; + + case VIR_TRISTATE_BOOL_YES: + iothread_info.poll_max_ns = iothread.poll_max_ns; + iothread_info.poll_grow = iothread.poll_grow; + iothread_info.poll_shrink = iothread.poll_shrink; + break; + + case VIR_TRISTATE_BOOL_NO: + /* No need to do anything because iothread_info has all members + * initialized to 0 which will disable polling. 
*/ + case VIR_TRISTATE_BOOL_LAST: + break; + } + + qemuDomainObjEnterMonitor(driver, vm); + + rc = qemuMonitorSetIOThread(priv->mon, &iothread_info); + + if (qemuDomainObjExitMonitor(driver, vm) < 0) + return -1; + + if (rc < 0) + return -1; + + virDomainIOThreadIDMod(old_iothread, &iothread); + + return 0; +} + + static int qemuDomainHotplugDelIOThread(virQEMUDriverPtr driver, virDomainObjPtr vm, @@ -5742,6 +5791,21 @@ qemuDomainAddIOThreadCheck(virDomainDefPtr def, } +static virDomainIOThreadIDDefPtr +qemuDomainModIOThreadGet(virDomainDefPtr def, + unsigned int iothread_id) +{ + virDomainIOThreadIDDefPtr ret = NULL; + + if (!(ret = virDomainIOThreadIDFind(def, iothread_id))) + virReportError(VIR_ERR_INVALID_ARG, + _("cannot find IOThread '%u' in iothreadids list"), + iothread_id); + + return ret; +} + + static int qemuDomainDelIOThreadCheck(virDomainDefPtr def, unsigned int iothread_id) @@ -5845,13 +5909,14 @@ static int qemuDomainChgIOThread(virQEMUDriverPtr driver, virDomainObjPtr vm, virDomainIOThreadIDDef iothread, - bool add, + virDomainIOThreadAction action, unsigned int flags) { virQEMUDriverConfigPtr cfg = NULL; qemuDomainObjPrivatePtr priv; virDomainDefPtr def; virDomainDefPtr persistentDef; + virDomainIOThreadIDDefPtr old_iothread = NULL; int ret = -1; cfg = virQEMUDriverGetConfig(driver); @@ -5871,19 +5936,34 @@ qemuDomainChgIOThread(virQEMUDriverPtr driver, goto endjob; } - if (add) { + switch (action) { + case VIR_DOMAIN_IOTHREAD_ACTION_ADD: if (qemuDomainAddIOThreadCheck(def, iothread.iothread_id) < 0) goto endjob; if (qemuDomainHotplugAddIOThread(driver, vm, iothread) < 0) goto endjob; - } else { + break; + + case VIR_DOMAIN_IOTHREAD_ACTION_DEL: if (qemuDomainDelIOThreadCheck(def, iothread.iothread_id) < 0) goto endjob; if (qemuDomainHotplugDelIOThread(driver, vm, iothread.iothread_id) < 0) goto endjob; + break; + + case VIR_DOMAIN_IOTHREAD_ACTION_MOD: + if (!(old_iothread = qemuDomainModIOThreadGet(def, + iothread.iothread_id))) + goto endjob; + + 
if (qemuDomainHotplugModIOThread(driver, vm, iothread, + old_iothread) < 0) + goto endjob; + + break; } if (virDomainSaveStatus(driver->xmlopt, cfg->stateDir, vm, @@ -5892,19 +5972,30 @@ qemuDomainChgIOThread(virQEMUDriverPtr driver, } if (persistentDef) { - if (add) { + switch (action) { + case VIR_DOMAIN_IOTHREAD_ACTION_ADD: if (qemuDomainAddIOThreadCheck(persistentDef, iothread.iothread_id) < 0) goto endjob; if (!virDomainIOThreadIDAdd(persistentDef, iothread)) goto endjob; - } else { + break; + + case VIR_DOMAIN_IOTHREAD_ACTION_DEL: if (qemuDomainDelIOThreadCheck(persistentDef, iothread.iothread_id) < 0) goto endjob; virDomainIOThreadIDDel(persistentDef, iothread.iothread_id); + + case VIR_DOMAIN_IOTHREAD_ACTION_MOD: + if (!(old_iothread = qemuDomainModIOThreadGet(persistentDef, + iothread.iothread_id))) + goto endjob; + + virDomainIOThreadIDMod(old_iothread, &iothread); + break; } if (virDomainSaveConfig(cfg->configDir, driver->caps, @@ -5955,7 +6046,8 @@ qemuDomainAddIOThreadParams(virDomainPtr dom, if (virDomainAddIOThreadParamsEnsureACL(dom->conn, vm->def, flags) < 0) goto cleanup; - ret = qemuDomainChgIOThread(driver, vm, iothread, true, flags); + ret = qemuDomainChgIOThread(driver, vm, iothread, + VIR_DOMAIN_IOTHREAD_ACTION_ADD, flags); cleanup: virDomainObjEndAPI(&vm); @@ -5973,6 +6065,48 @@ qemuDomainAddIOThread(virDomainPtr dom, static int +qemuDomainModIOThreadParams(virDomainPtr dom, + unsigned int iothread_id, + virTypedParameterPtr params, + int nparams, + unsigned int flags) +{ + virQEMUDriverPtr driver = dom->conn->privateData; + virDomainObjPtr vm = NULL; + virDomainIOThreadIDDef iothread = {0}; + int ret = -1; + + virCheckFlags(VIR_DOMAIN_AFFECT_LIVE | + VIR_DOMAIN_AFFECT_CONFIG, -1); + + if (iothread_id == 0) { + virReportError(VIR_ERR_INVALID_ARG, "%s", + _("invalid value of 0 for iothread_id")); + goto cleanup; + } + + iothread.iothread_id = iothread_id; + + if (!(vm = qemuDomObjFromDomain(dom))) + goto cleanup; + + if 
(qemuDomainIOThreadParseParams(params, nparams, vm->privateData, + &iothread) < 0) + goto cleanup; + + if (virDomainModIOThreadParamsEnsureACL(dom->conn, vm->def, flags) < 0) + goto cleanup; + + ret = qemuDomainChgIOThread(driver, vm, iothread, + VIR_DOMAIN_IOTHREAD_ACTION_MOD, flags); + + cleanup: + virDomainObjEndAPI(&vm); + return ret; +} + + +static int qemuDomainDelIOThread(virDomainPtr dom, unsigned int iothread_id, unsigned int flags) @@ -5998,7 +6132,8 @@ qemuDomainDelIOThread(virDomainPtr dom, if (virDomainDelIOThreadEnsureACL(dom->conn, vm->def, flags) < 0) goto cleanup; - ret = qemuDomainChgIOThread(driver, vm, iothread, false, flags); + ret = qemuDomainChgIOThread(driver, vm, iothread, + VIR_DOMAIN_IOTHREAD_ACTION_DEL, flags); cleanup: virDomainObjEndAPI(&vm); @@ -20366,6 +20501,7 @@ static virHypervisorDriver qemuHypervisorDriver = { .domainPinIOThread = qemuDomainPinIOThread, /* 1.2.14 */ .domainAddIOThread = qemuDomainAddIOThread, /* 1.2.15 */ .domainAddIOThreadParams = qemuDomainAddIOThreadParams, /* 3.1.0 */ + .domainModIOThreadParams = qemuDomainModIOThreadParams, /* 3.1.0 */ .domainDelIOThread = qemuDomainDelIOThread, /* 1.2.15 */ .domainGetSecurityLabel = qemuDomainGetSecurityLabel, /* 0.6.1 */ .domainGetSecurityLabelList = qemuDomainGetSecurityLabelList, /* 0.10.0 */ diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c index 7633e6fc07..19be1bbf2e 100644 --- a/src/qemu/qemu_monitor.c +++ b/src/qemu/qemu_monitor.c @@ -4054,6 +4054,25 @@ qemuMonitorGetIOThreads(qemuMonitorPtr mon, /** + * qemuMonitorSetIOThread: + * @mon: Pointer to the monitor + * @iothreadInfo: filled IOThread info with data + * + * + */ +int +qemuMonitorSetIOThread(qemuMonitorPtr mon, + qemuMonitorIOThreadInfoPtr iothreadInfo) +{ + VIR_DEBUG("iothread=%p", iothreadInfo); + + QEMU_CHECK_MONITOR_JSON(mon); + + return qemuMonitorJSONSetIOThread(mon, iothreadInfo); +} + + +/** * qemuMonitorGetMemoryDeviceInfo: * @mon: pointer to the monitor * @info: Location to return 
the hash of qemuMonitorMemoryDeviceInfo diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h index eeae18e5b0..09c1dbc882 100644 --- a/src/qemu/qemu_monitor.h +++ b/src/qemu/qemu_monitor.h @@ -1013,6 +1013,9 @@ int qemuMonitorGetIOThreads(qemuMonitorPtr mon, qemuMonitorIOThreadInfoPtr **iothreads, bool supportPolling); +int qemuMonitorSetIOThread(qemuMonitorPtr mon, + qemuMonitorIOThreadInfoPtr iothreadInfo); + typedef struct _qemuMonitorMemoryDeviceInfo qemuMonitorMemoryDeviceInfo; typedef qemuMonitorMemoryDeviceInfo *qemuMonitorMemoryDeviceInfoPtr; diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c index ab73f7aaf6..93e2920d79 100644 --- a/src/qemu/qemu_monitor_json.c +++ b/src/qemu/qemu_monitor_json.c @@ -6840,6 +6840,38 @@ qemuMonitorJSONGetIOThreads(qemuMonitorPtr mon, int +qemuMonitorJSONSetIOThread(qemuMonitorPtr mon, + qemuMonitorIOThreadInfoPtr iothreadInfo) +{ + int ret = -1; + char *path = NULL; + qemuMonitorJSONObjectProperty prop; + + if (virAsprintf(&path, "/objects/iothread%u", iothreadInfo->iothread_id) < 0) + goto cleanup; + +#define VIR_IOTHREAD_SET_PROP(propName, propVal) \ + memset(&prop, 0, sizeof(qemuMonitorJSONObjectProperty)); \ + prop.type = QEMU_MONITOR_OBJECT_PROPERTY_INT; \ + prop.val.iv = propVal; \ + if (qemuMonitorJSONSetObjectProperty(mon, path, propName, &prop) < 0) \ + goto cleanup; + + VIR_IOTHREAD_SET_PROP("poll-max-ns", iothreadInfo->poll_max_ns) + VIR_IOTHREAD_SET_PROP("poll-grow", iothreadInfo->poll_grow) + VIR_IOTHREAD_SET_PROP("poll-shrink", iothreadInfo->poll_shrink) + +#undef VIR_IOTHREAD_SET_PROP + + ret = 0; + + cleanup: + VIR_FREE(path); + return ret; +} + + +int qemuMonitorJSONGetMemoryDeviceInfo(qemuMonitorPtr mon, virHashTablePtr info) { diff --git a/src/qemu/qemu_monitor_json.h b/src/qemu/qemu_monitor_json.h index 0f557a2991..1614ff5860 100644 --- a/src/qemu/qemu_monitor_json.h +++ b/src/qemu/qemu_monitor_json.h @@ -484,6 +484,10 @@ int qemuMonitorJSONGetIOThreads(qemuMonitorPtr mon, 
bool supportPolling) ATTRIBUTE_NONNULL(2); +int qemuMonitorJSONSetIOThread(qemuMonitorPtr mon, + qemuMonitorIOThreadInfoPtr iothreadInfo) + ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2); + int qemuMonitorJSONGetMemoryDeviceInfo(qemuMonitorPtr mon, virHashTablePtr info) ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2); -- 2.11.1
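qemuMonitorJSONSetIOThread drives the live update through qemuMonitorJSONSetObjectProperty, one call per poll-* property on the IOThread's QOM object, so at the QMP level a modification amounts to a series of qom-set commands roughly like the following (path and value illustrative; disabling polling sets all three properties to 0):

```json
{ "execute": "qom-set",
  "arguments": { "path": "/objects/iothread1",
                 "property": "poll-max-ns",
                 "value": 4000 } }
```

Analogous commands follow for "poll-grow" and "poll-shrink".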

Signed-off-by: Pavel Hrdina <phrdina@redhat.com>
---
 docs/news.xml | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/docs/news.xml b/docs/news.xml
index 8d53e07973..5544fefae9 100644
--- a/docs/news.xml
+++ b/docs/news.xml
@@ -80,6 +80,15 @@
           devices, providing device type information.
         </description>
       </change>
+      <change>
+        <summary>
+          iothread: add support for polling mechanism
+        </summary>
+        <description>
+          Add a new <polling> element for iothreads that allows
+          configuring the polling feature instead of blocking syscalls.
+        </description>
+      </change>
     </section>
     <section title="Improvements">
       <change>
--
2.11.1
participants (3)
- Daniel P. Berrange
- Pavel Hrdina
- Stefan Hajnoczi