[libvirt] VMX parser: limitation of numvcpus
by Pino Toscano
Hi Matthias,
while testing the recent improvements I made to the VMX parser for CPU
topology (see https://bugzilla.redhat.com/1568148), our QE Ming Xie
configured a guest on ESXi 5.5 with 7 cores. The result was the error
triggered by the following code:
    /* vmx:numvcpus -> def:vcpus */
    if (virVMXGetConfigLong(conf, "numvcpus", &numvcpus, 1, true) < 0)
        goto cleanup;

    if (numvcpus <= 0 || (numvcpus % 2 != 0 && numvcpus != 1)) {
        virReportError(VIR_ERR_INTERNAL_ERROR,
                       _("Expecting VMX entry 'numvcpus' to be an unsigned "
                         "integer (1 or a multiple of 2) but found %lld"), numvcpus);
        goto cleanup;
    }
Looking into the history, this check dates back to the initial addition
of the esx driver, commit e2aeee6811917bc5ad28326c6a860ded39802a88.
Considering the VMX format is proprietary to VMware and officially
undocumented, do you remember why that limitation was added?
Do you think removing it might break anything?
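If it is indeed safe to drop, I would expect the check could be relaxed to
something like the following untested sketch, keeping only the positivity
requirement:

    /* vmx:numvcpus -> def:vcpus */
    if (virVMXGetConfigLong(conf, "numvcpus", &numvcpus, 1, true) < 0)
        goto cleanup;

    if (numvcpus <= 0) {
        virReportError(VIR_ERR_INTERNAL_ERROR,
                       _("Expecting VMX entry 'numvcpus' to be a positive "
                         "integer but found %lld"), numvcpus);
        goto cleanup;
    }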
Thanks,
--
Pino Toscano
[libvirt] [PATCHv2 0/7] qemu: add vhost-vsock-pci support
by Ján Tomko
v1:
https://www.redhat.com/archives/libvir-list/2018-May/msg01517.html
v2:
* use <vsock> instead of <interface>
* use <source> for the guest address
* add <source auto> attribute and auto-assign the guest CID
* fixed PCI address allocation
https://bugzilla.redhat.com/show_bug.cgi?id=1291851
Ján Tomko (7):
Introduce virDomainVsockDef
Add privateData to virDomainVsockDef
conf: introduce <vsock> element
qemu: add private data for vsock
Introduce QEMU_CAPS_DEVICE_VHOST_VSOCK
util: create virvsock.c
qemu: add support for vhost-vsock-pci
configure.ac | 8 +
docs/formatdomain.html.in | 20 ++
docs/schemas/domaincommon.rng | 29 +++
src/conf/domain_conf.c | 228 ++++++++++++++++++++-
src/conf/domain_conf.h | 27 +++
src/libvirt_private.syms | 6 +
src/qemu/qemu_alias.c | 16 ++
src/qemu/qemu_capabilities.c | 2 +
src/qemu/qemu_capabilities.h | 1 +
src/qemu/qemu_command.c | 45 ++++
src/qemu/qemu_domain.c | 42 ++++
src/qemu/qemu_domain.h | 9 +
src/qemu/qemu_domain_address.c | 11 +
src/qemu/qemu_driver.c | 6 +
src/qemu/qemu_hotplug.c | 1 +
src/qemu/qemu_process.c | 35 ++++
src/util/Makefile.inc.am | 2 +
src/util/virvsock.c | 89 ++++++++
src/util/virvsock.h | 29 +++
tests/qemucapabilitiesdata/caps_2.10.0.aarch64.xml | 1 +
tests/qemucapabilitiesdata/caps_2.10.0.ppc64.xml | 1 +
tests/qemucapabilitiesdata/caps_2.10.0.s390x.xml | 1 +
tests/qemucapabilitiesdata/caps_2.10.0.x86_64.xml | 1 +
tests/qemucapabilitiesdata/caps_2.11.0.s390x.xml | 1 +
tests/qemucapabilitiesdata/caps_2.12.0.aarch64.xml | 1 +
tests/qemucapabilitiesdata/caps_2.12.0.ppc64.xml | 1 +
tests/qemucapabilitiesdata/caps_2.12.0.s390x.xml | 1 +
tests/qemucapabilitiesdata/caps_2.12.0.x86_64.xml | 1 +
tests/qemucapabilitiesdata/caps_2.8.0.s390x.xml | 1 +
tests/qemucapabilitiesdata/caps_2.8.0.x86_64.xml | 1 +
tests/qemucapabilitiesdata/caps_2.9.0.ppc64.xml | 1 +
tests/qemucapabilitiesdata/caps_2.9.0.s390x.xml | 1 +
tests/qemucapabilitiesdata/caps_2.9.0.x86_64.xml | 1 +
.../vhost-vsock-auto.x86_64-latest.args | 32 +++
tests/qemuxml2argvdata/vhost-vsock-auto.xml | 35 ++++
.../vhost-vsock.x86_64-latest.args | 32 +++
tests/qemuxml2argvdata/vhost-vsock.xml | 36 ++++
tests/qemuxml2argvtest.c | 14 ++
tests/qemuxml2xmloutdata/vhost-vsock-auto.xml | 36 ++++
tests/qemuxml2xmloutdata/vhost-vsock.xml | 1 +
tests/qemuxml2xmltest.c | 3 +
41 files changed, 808 insertions(+), 1 deletion(-)
create mode 100644 src/util/virvsock.c
create mode 100644 src/util/virvsock.h
create mode 100644 tests/qemuxml2argvdata/vhost-vsock-auto.x86_64-latest.args
create mode 100644 tests/qemuxml2argvdata/vhost-vsock-auto.xml
create mode 100644 tests/qemuxml2argvdata/vhost-vsock.x86_64-latest.args
create mode 100644 tests/qemuxml2argvdata/vhost-vsock.xml
create mode 100644 tests/qemuxml2xmloutdata/vhost-vsock-auto.xml
create mode 120000 tests/qemuxml2xmloutdata/vhost-vsock.xml
--
2.16.1
[libvirt] [GSoC] Design ideas for implementing cleanup attribute
by Sukrit Bhatnagar
Hi,
I am interested in implementing the GCC cleanup attribute for automatic
resource freeing as part of GSoC'18, and I have already shared a proposal for it.
This mail is to discuss the code design for implementing it.
Here are some of my ideas:
This attribute requires a cleanup function that is called automatically
when the corresponding variable goes out of scope. There are some functions
whose logic can be reused:
- Functions such as virCommandFree, virConfFreeList and virCgroupFree can
be used directly as cleanup functions; their parameter and return types are
already valid for a cleanup function.
- Functions such as virFileClose and virFileFclose need some additional
consideration, as they return a value. I think we can set a global variable
in a separate source file (just like the errno variable from errno.h), so
that the returned value can still be accessed globally.
- Functions such as virDomainEventGraphicsDispose need an entirely new
design. They are used as callbacks in object classes and passed as an
argument to virClassNew. This would require making changes to
virObjectUnref's code too. *This is the part I am not sure how to implement
the cleanup logic for.*
Also, since __attribute__((__cleanup__(anyfunc))) looks ugly, a macro like
autoclean (ideas for the macro name welcome!) can be used instead. As
Martin pointed out in my proposal, for some types this can be done right
after the typedef declarations, so that the type itself carries the attribute.
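To make the idea concrete, here is a minimal, untested sketch of the
mechanism; the macro name VIR_AUTOCLEAN and the helper function below are
just placeholders, not proposed names:

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /* Hypothetical convenience macro hiding the attribute syntax. */
    #define VIR_AUTOCLEAN(func) __attribute__((__cleanup__(func)))

    /* A cleanup function compatible with the attribute: it receives a
     * pointer to the variable that is going out of scope. */
    static void
    exampleStringFree(char **ptr)
    {
        free(*ptr);
        *ptr = NULL;
    }

    static void
    example(void)
    {
        /* buf is freed automatically on every exit path from this scope,
         * so no explicit free()/VIR_FREE() call is needed. */
        VIR_AUTOCLEAN(exampleStringFree) char *buf = strdup("scoped resource");

        if (!buf)
            return;

        puts(buf);
    }

    int main(void)
    {
        example();
        return 0;
    }

Any existing Free function whose signature does not already match this
"pointer to the variable" pattern would need a thin wrapper with the
matching signature.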
Basically, at most places where VIR_FREE is used to release memory
explicitly, the corresponding variable can use the attribute instead. The
existing virFree function can also be reused, as it takes a void pointer as
an argument and returns nothing.
One of the exceptions to this will be variables which are struct members:
the cleanup of a member has to happen when the enclosing struct variable is
cleaned up.
I can create new files vircleanup.{c,h} to define cleanup functions for
types which do not have an existing cleanup/free function. This can be done
separately for each supported driver; for example, cleanups pertaining to
the lxc driver would live in src/lxc/lxc_cleanup.c.
Your suggestions are welcome.
Thanks,
Sukrit Bhatnagar
[libvirt] [jenkins-ci PATCH v3 0/3] Enable out-of-the-box parallel make
by Andrea Bolognani
Changes from [v2]:
* now that libvirt-perl uses Module::Build and ExtUtils::MakeMaker
support has been dropped, we don't need to special-case any job,
so revert to [v1] and rebase on top of master, since the
original series doesn't apply anymore.
Changes from [v1]:
* it turns out some versions of ExtUtils::MakeMaker output Makefiles
that are not entirely compatible with parallel make, which forces
us to introduce an exception in the relevant template and shuffle
patches around.
[v1] https://www.redhat.com/archives/libvir-list/2018-May/msg00732.html
[v2] https://www.redhat.com/archives/libvir-list/2018-May/msg01070.html
Andrea Bolognani (3):
jobs: Enable parallel make everywhere
guests: Set MAKEFLAGS for out-of-the-box parallel make
jobs: Drop explicit parallel make usage
guests/templates/bashrc | 2 ++
jobs/autotools.yaml | 10 +++++-----
jobs/defaults.yaml | 1 -
projects/libvirt.yaml | 4 ++--
projects/osinfo-db.yaml | 4 ++--
5 files changed, 11 insertions(+), 10 deletions(-)
--
2.17.0
[libvirt] [PATCH v2] util: Loop through all resolved addresses in virNetSocketNewListenTCP
by Olaf Hering
Currently virNetSocketNewListenTCP bails out early under the following
conditions:
- the hostname resolves to at least one IPv4 and at least one IPv6
address
- the local interfaces have that one IPv4 address assigned, but not any
of the IPv6 addresses
- the local interfaces have just IPv6 link-local addresses
In this case the resolver returns not only the IPv4 addresses but also the
IPv6 ones. Binding to the IPv6 addresses will obviously fail, but that
failure terminates the entire loop, even if binding to IPv4 succeeded.
To fix this, keep going and loop through all returned addresses. If none
of the attempts to bind to any address succeeds, report the most
appropriate error.
Signed-off-by: Olaf Hering <olaf(a)aepfle.de>
---
v2:
whitespace fixes, as suggested by John Ferlan
src/rpc/virnetsocket.c | 23 ++++++++++-------------
1 file changed, 10 insertions(+), 13 deletions(-)
diff --git a/src/rpc/virnetsocket.c b/src/rpc/virnetsocket.c
index 7087abec9c..60a7187348 100644
--- a/src/rpc/virnetsocket.c
+++ b/src/rpc/virnetsocket.c
@@ -382,11 +382,8 @@ int virNetSocketNewListenTCP(const char *nodename,
#endif
if (bind(fd, runp->ai_addr, runp->ai_addrlen) < 0) {
- if (errno != EADDRINUSE) {
- virReportSystemError(errno, "%s", _("Unable to bind to port"));
- goto error;
- }
- addrInUse = true;
+ if (errno == EADDRINUSE)
+ addrInUse = true;
VIR_FORCE_CLOSE(fd);
runp = runp->ai_next;
continue;
@@ -409,14 +406,14 @@ int virNetSocketNewListenTCP(const char *nodename,
fd = -1;
}
- if (nsocks == 0 && familyNotSupported) {
- virReportSystemError(EAFNOSUPPORT, "%s", _("Unable to bind to port"));
- goto error;
- }
-
- if (nsocks == 0 &&
- addrInUse) {
- virReportSystemError(EADDRINUSE, "%s", _("Unable to bind to port"));
+ if (nsocks == 0) {
+ if (familyNotSupported)
+ errno = EAFNOSUPPORT;
+ else if (addrInUse)
+ errno = EADDRINUSE;
+ else
+ errno = EDESTADDRREQ;
+ virReportSystemError(errno, "%s", _("Unable to bind to port"));
goto error;
}
[libvirt] [RFC PATCH 0/9] qemu: add vhost-vsock-pci support
by Ján Tomko
@Stefan, please take a look at the docs/ changes in patch 6
Add <interface type='vsock'>, mapping to vhost-vsock-pci
Missing: hotplug support
Similar to vhost-net, we cannot apply a SELinux label on the
file descriptor, so an adjustment of the policy will probably
be needed to make it work in enforcing mode.
https://bugzilla.redhat.com/show_bug.cgi?id=1291851
Ján Tomko (9):
conf: split interface target element condition
qemu: prepare for missing interface model
Introduce virDomainNetDefNew
Add privateData to virDomainNetDef
qemu: add private data for interfaces
conf: add interface type vsock
Introduce QEMU_CAPS_DEVICE_VHOST_VSOCK
Introduce virNetDevVsockSetGuestCid
qemu: implement vhost-vsock-pci support
configure.ac | 8 ++
docs/formatdomain.html.in | 15 ++++
docs/schemas/domaincommon.rng | 14 ++++
src/bhyve/bhyve_parse_command.c | 2 +-
src/conf/domain_conf.c | 85 ++++++++++++++++++----
src/conf/domain_conf.h | 8 ++
src/conf/netdev_bandwidth_conf.h | 1 +
src/libvirt_private.syms | 2 +
src/libxl/libxl_conf.c | 1 +
src/lxc/lxc_controller.c | 1 +
src/lxc/lxc_driver.c | 3 +
src/lxc/lxc_process.c | 1 +
src/openvz/openvz_conf.c | 4 +-
src/qemu/qemu_capabilities.c | 3 +
src/qemu/qemu_capabilities.h | 1 +
src/qemu/qemu_command.c | 43 +++++++++--
src/qemu/qemu_domain.c | 47 ++++++++++++
src/qemu/qemu_domain.h | 13 ++++
src/qemu/qemu_domain_address.c | 6 +-
src/qemu/qemu_hotplug.c | 3 +
src/qemu/qemu_interface.c | 33 +++++++++
src/qemu/qemu_interface.h | 4 +
src/qemu/qemu_parse_command.c | 2 +-
src/qemu/qemu_process.c | 6 ++
src/uml/uml_conf.c | 5 ++
src/util/virnetdev.c | 30 ++++++++
src/util/virnetdev.h | 4 +
src/vbox/vbox_common.c | 2 +-
src/vmx/vmx.c | 3 +-
src/xenconfig/xen_common.c | 3 +-
src/xenconfig/xen_sxpr.c | 3 +-
tests/qemucapabilitiesdata/caps_2.10.0.aarch64.xml | 1 +
tests/qemucapabilitiesdata/caps_2.10.0.ppc64.xml | 1 +
tests/qemucapabilitiesdata/caps_2.10.0.s390x.xml | 1 +
tests/qemucapabilitiesdata/caps_2.10.0.x86_64.xml | 1 +
tests/qemucapabilitiesdata/caps_2.11.0.s390x.xml | 1 +
tests/qemucapabilitiesdata/caps_2.12.0.aarch64.xml | 1 +
tests/qemucapabilitiesdata/caps_2.12.0.ppc64.xml | 1 +
tests/qemucapabilitiesdata/caps_2.12.0.s390x.xml | 1 +
tests/qemucapabilitiesdata/caps_2.12.0.x86_64.xml | 1 +
tests/qemucapabilitiesdata/caps_2.8.0.s390x.xml | 1 +
tests/qemucapabilitiesdata/caps_2.8.0.x86_64.xml | 1 +
tests/qemucapabilitiesdata/caps_2.9.0.ppc64.xml | 1 +
tests/qemucapabilitiesdata/caps_2.9.0.s390x.xml | 1 +
tests/qemucapabilitiesdata/caps_2.9.0.x86_64.xml | 1 +
.../vhost-vsock.x86_64-latest.args | 32 ++++++++
tests/qemuxml2argvdata/vhost-vsock.xml | 36 +++++++++
tests/qemuxml2argvtest.c | 15 ++++
tests/qemuxml2xmloutdata/vhost-vsock.xml | 1 +
tests/qemuxml2xmltest.c | 2 +
tools/virsh-domain.c | 1 +
51 files changed, 425 insertions(+), 32 deletions(-)
create mode 100644 tests/qemuxml2argvdata/vhost-vsock.x86_64-latest.args
create mode 100644 tests/qemuxml2argvdata/vhost-vsock.xml
create mode 120000 tests/qemuxml2xmloutdata/vhost-vsock.xml
--
2.16.1
[libvirt] [PATCH] util: storage: remove virStorageSource->tlsVerify
by Peter Krempa
Disks are client-only so we don't need to have this variable. We also
always pass false for 'isListen' to qemuBuildTLSx509BackendProps for all
disk-related code-paths.
---
This applies on top of my branch collecting all ACKed postings of
recent blockdev-related work. Current version can be fetched by:
git fetch git://pipo.sk/pipo/libvirt.git blockdev-staging
src/qemu/qemu_command.c | 2 +-
src/qemu/qemu_domain.c | 2 --
src/qemu/qemu_hotplug.c | 3 +--
src/util/virstoragefile.c | 1 -
src/util/virstoragefile.h | 1 -
5 files changed, 2 insertions(+), 7 deletions(-)
diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
index 26e61f26f4..c75595ca6d 100644
--- a/src/qemu/qemu_command.c
+++ b/src/qemu/qemu_command.c
@@ -774,7 +774,7 @@ qemuBuildDiskSrcTLSx509CommandLine(virCommandPtr cmd,
return 0;
return qemuBuildTLSx509CommandLine(cmd, src->tlsCertdir,
- false, src->tlsVerify,
+ false, true,
NULL, src->tlsAlias, qemuCaps);
}
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 4368e9be35..873bcec50d 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -9928,8 +9928,6 @@ qemuProcessPrepareStorageSourceTLSVxhs(virStorageSourcePtr src,
if (src->haveTLS == VIR_TRISTATE_BOOL_YES) {
if (VIR_STRDUP(src->tlsCertdir, cfg->vxhsTLSx509certdir) < 0)
return -1;
-
- src->tlsVerify = true;
}
return 0;
diff --git a/src/qemu/qemu_hotplug.c b/src/qemu/qemu_hotplug.c
index c656409eaa..2f76c048aa 100644
--- a/src/qemu/qemu_hotplug.c
+++ b/src/qemu/qemu_hotplug.c
@@ -164,8 +164,7 @@ qemuDomainAddDiskSrcTLSObject(virQEMUDriverPtr driver,
if (qemuDomainGetTLSObjects(priv->qemuCaps, NULL,
src->tlsCertdir,
- false,
- src->tlsVerify,
+ false, true,
src->tlsAlias,
&tlsProps, NULL) < 0)
goto cleanup;
diff --git a/src/util/virstoragefile.c b/src/util/virstoragefile.c
index 54de2c1c30..10fe0f201a 100644
--- a/src/util/virstoragefile.c
+++ b/src/util/virstoragefile.c
@@ -2171,7 +2171,6 @@ virStorageSourceCopy(const virStorageSource *src,
ret->shared = src->shared;
ret->haveTLS = src->haveTLS;
ret->tlsFromConfig = src->tlsFromConfig;
- ret->tlsVerify = src->tlsVerify;
ret->detected = src->detected;
ret->debugLevel = src->debugLevel;
ret->debug = src->debug;
diff --git a/src/util/virstoragefile.h b/src/util/virstoragefile.h
index 1631c4cf66..4591e6e213 100644
--- a/src/util/virstoragefile.h
+++ b/src/util/virstoragefile.h
@@ -310,7 +310,6 @@ struct _virStorageSource {
* certificate directory with listen and verify bools. */
char *tlsAlias;
char *tlsCertdir;
- bool tlsVerify;
bool detected; /* true if this entry was not provided by the user */
--
2.16.2
[libvirt] Question about using cpu mode "host-model" while providing a cpu model name
by Collin Walling
Hi
I have noticed something that may be misleading regarding the libvirt domain xml format
for defining a cpu model. There seems to be a mismatch: the libvirt documentation states
that something is not supported, but libvirt itself gives no clear indication when it is
used anyway. This concerns the cpu mode "host-model" combined with a cpu model name
between the <model> tags.
From the libvirt docs, under the header "CPU model and topology", paragraph "cpu",
subparagraph "host-model", the following rule is defined (bolded, or here between asterisks):
"... The match attribute can't be used in this mode. *Specifying CPU model is not supported*
either, but model's fallback attribute may still be used. ..."
https://libvirt.org/formatdomain.html#elementsCPU
The above rule reads as "if mode is 'host-model' (and the architecture is not PowerPC) then
specifying a model name should not be allowed". However, this is not the observed behavior.
For example, I can define and start a guest with the following xml snippet without any issues:
  <cpu mode='host-model'>
    <model>cpu-name</model>
  </cpu>
This seems to contradict what the documentation states.
This issue was reported by a colleague of mine who was confused by the cpu features that
were available to a guest when host-model and a model name were both provided. Personally,
I lean towards making host-model and an explicit cpu model name mutually exclusive.
I've attempted to find a solution to this problem myself by looking at virCPUDefParseXML,
but the fact that PowerPC is an exception, and that we do not know the architecture while
parsing a guest cpu xml, makes a minimal code change challenging.
If we want to make changes to the code, then I imagine the ideal solution would revolve
around allowing <model>cpu-name</model> only when the cpu mode is set to "custom".
Otherwise, a clarification in the documentation would suffice, something like "A CPU model
specified in the domain xml will be ignored." Thoughts?
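For illustration, the kind of check I have in mind would look roughly like
the untested sketch below; the helper name is hypothetical, the exact
PowerPC condition is only illustrative, and, since the architecture is not
known inside virCPUDefParseXML, such a check would probably have to live in
a later per-driver validation step:

    /* Hypothetical helper; untested sketch only. */
    static int
    virCPUDefCheckHostModel(const virCPUDef *def)
    {
        if (def->mode == VIR_CPU_MODE_HOST_MODEL &&
            def->model &&
            def->arch != VIR_ARCH_PPC64 &&
            def->arch != VIR_ARCH_PPC64LE) {
            virReportError(VIR_ERR_CONFIG_UNSUPPORTED, "%s",
                           _("specifying a CPU model is not supported "
                             "with mode='host-model'"));
            return -1;
        }
        return 0;
    }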
Thank you for your time.
--
Respectfully,
- Collin Walling
[libvirt] [python PATCH] Add support for virConnectBaselineHypervisorCPU
by Jiri Denemark
The python bindings for this API cannot be generated because our
generator is not capable of handling string array (char **) parameters.
https://bugzilla.redhat.com/show_bug.cgi?id=1584676
Signed-off-by: Jiri Denemark <jdenemar(a)redhat.com>
---
generator.py | 1 +
libvirt-override-api.xml | 11 ++++++++
libvirt-override.c | 60 ++++++++++++++++++++++++++++++++++++++++
3 files changed, 72 insertions(+)
diff --git a/generator.py b/generator.py
index a0fc720..b7d96a1 100755
--- a/generator.py
+++ b/generator.py
@@ -488,6 +488,7 @@ skip_impl = (
'virDomainGetPerfEvents',
'virDomainSetPerfEvents',
'virDomainGetGuestVcpus',
+ 'virConnectBaselineHypervisorCPU',
)
lxc_skip_impl = (
diff --git a/libvirt-override-api.xml b/libvirt-override-api.xml
index b63a403..36d3577 100644
--- a/libvirt-override-api.xml
+++ b/libvirt-override-api.xml
@@ -717,5 +717,16 @@
<arg name='flags' type='unsigned int' info='extra flags; not used yet, so callers should always pass 0'/>
<return type='int' info="dictionary of vcpu data returned by the guest agent"/>
</function>
+ <function name='virConnectBaselineHypervisorCPU' file='python'>
+ <info>Computes the most feature-rich CPU which is compatible with all given CPUs and can be provided by the specified hypervisor.</info>
+ <return type='char *' info='XML description of the computed CPU or NULL on error.'/>
+ <arg name='conn' type='virConnectPtr' info='pointer to the hypervisor connection'/>
+ <arg name='emulator' type='const char *' info='path to the emulator binary'/>
+ <arg name='arch' type='const char *' info='CPU architecture'/>
+ <arg name='machine' type='const char *' info='machine type'/>
+ <arg name='virttype' type='const char *' info='virtualization type'/>
+ <arg name='xmlCPUs' type='const char **' info='array of XML descriptions of CPUs'/>
+ <arg name='flags' type='unsigned int' info='bitwise-OR of virConnectBaselineCPUFlags'/>
+ </function>
</symbols>
</api>
diff --git a/libvirt-override.c b/libvirt-override.c
index b4c1529..1c95c18 100644
--- a/libvirt-override.c
+++ b/libvirt-override.c
@@ -9708,6 +9708,63 @@ libvirt_virStreamRecvFlags(PyObject *self ATTRIBUTE_UNUSED,
#endif /* LIBVIR_CHECK_VERSION(3, 4, 0) */
+#if LIBVIR_CHECK_VERSION(4, 4, 0)
+static PyObject *
+libvirt_virConnectBaselineHypervisorCPU(PyObject *self ATTRIBUTE_UNUSED,
+ PyObject *args)
+{
+ virConnectPtr conn;
+ PyObject *pyobj_conn;
+ char *emulator;
+ char *arch;
+ char *machine;
+ char *virttype;
+ PyObject *list;
+ unsigned int flags;
+ char **xmlCPUs = NULL;
+ int ncpus = 0;
+ size_t i;
+ char *cpu;
+ PyObject *ret = NULL;
+
+ if (!PyArg_ParseTuple(args, (char *)"OzzzzOI:virConnectBaselineHypervisorCPU",
+ &pyobj_conn, &emulator, &arch, &machine, &virttype,
+ &list, &flags))
+ return NULL;
+
+ conn = (virConnectPtr) PyvirConnect_Get(pyobj_conn);
+
+ if (PyList_Check(list)) {
+ ncpus = PyList_Size(list);
+ if (VIR_ALLOC_N(xmlCPUs, ncpus) < 0)
+ return PyErr_NoMemory();
+
+ for (i = 0; i < ncpus; i++) {
+ if (libvirt_charPtrUnwrap(PyList_GetItem(list, i),
+ &(xmlCPUs[i])) < 0 ||
+ !xmlCPUs[i])
+ goto cleanup;
+ }
+ }
+
+ LIBVIRT_BEGIN_ALLOW_THREADS;
+ cpu = virConnectBaselineHypervisorCPU(conn, emulator, arch, machine, virttype,
+ (const char **)xmlCPUs, ncpus, flags);
+ LIBVIRT_END_ALLOW_THREADS;
+
+ ret = libvirt_constcharPtrWrap(cpu);
+
+ cleanup:
+ for (i = 0; i < ncpus; i++)
+ VIR_FREE(xmlCPUs[i]);
+ VIR_FREE(xmlCPUs);
+ VIR_FREE(cpu);
+
+ return ret;
+}
+#endif /* LIBVIR_CHECK_VERSION(4, 4, 0) */
+
+
/************************************************************************
* *
* The registration stuff *
@@ -9941,6 +9998,9 @@ static PyMethodDef libvirtMethods[] = {
{(char *) "virStreamSendHole", libvirt_virStreamSendHole, METH_VARARGS, NULL},
{(char *) "virStreamRecvFlags", libvirt_virStreamRecvFlags, METH_VARARGS, NULL},
#endif /* LIBVIR_CHECK_VERSION(3, 4, 0) */
+#if LIBVIR_CHECK_VERSION(4, 4, 0)
+ {(char *) "virConnectBaselineHypervisorCPU", libvirt_virConnectBaselineHypervisorCPU, METH_VARARGS, NULL},
+#endif /* LIBVIR_CHECK_VERSION(4, 4, 0) */
{NULL, NULL, 0, NULL}
};
--
2.17.0