[libvirt] attach-disk to FreeBSD-based guest
by Sebastian Greatful
I can easily attach a disk to a Linux guest using the following command in virsh:
attach-disk mydomain /dev/vg0/something vdb
However, when I try the same with a FreeBSD-based guest, the disk doesn't show up
under /dev. I have tried substituting vdb with vda1 and all sorts of
other things, to no avail.
How can I attach a disk to my FreeBSD guest?
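(For reference, the sketch below is roughly what attach-disk does through the libvirt C API. The vdb target implies a virtio disk, which the guest can only see if it has virtio drivers; the alternative bus mentioned in the comment is an assumption on my part, not a tested recipe for FreeBSD.)
/* Sketch of "virsh attach-disk mydomain /dev/vg0/something vdb" via the C API. */
#include <stdio.h>
#include <stdlib.h>
#include <libvirt/libvirt.h>

int main(void)
{
    const char *disk_xml =
        "<disk type='block' device='disk'>"
        "  <source dev='/dev/vg0/something'/>"
        "  <target dev='vdb' bus='virtio'/>"  /* a guest without virtio drivers
                                                 might need e.g. dev='sdb' bus='scsi' */
        "</disk>";

    virConnectPtr conn = virConnectOpen("qemu:///system");
    if (!conn)
        return EXIT_FAILURE;

    virDomainPtr dom = virDomainLookupByName(conn, "mydomain");
    if (!dom) {
        virConnectClose(conn);
        return EXIT_FAILURE;
    }

    if (virDomainAttachDevice(dom, disk_xml) < 0)
        fprintf(stderr, "attach failed\n");

    virDomainFree(dom);
    virConnectClose(conn);
    return EXIT_SUCCESS;
}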
best regards,
Seb
14 years, 11 months
[libvirt] [PATCH] add another SENTINEL attribute
by Paolo Bonzini
xend_op also can benefit from the attribute.
* src/xen/xend_internal.c (xend_op): Add ATTRIBUTE_SENTINEL.
---
src/xen/xend_internal.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/src/xen/xend_internal.c b/src/xen/xend_internal.c
index 4d9dcd1..66d2e7f 100644
--- a/src/xen/xend_internal.c
+++ b/src/xen/xend_internal.c
@@ -552,7 +552,7 @@ xend_op_ext(virConnectPtr xend, const char *path, char *error,
*
* Returns 0 in case of success, -1 in case of failure.
*/
-static int
+static int ATTRIBUTE_SENTINEL
xend_op(virConnectPtr xend, const char *name, const char *key, ...)
{
char buffer[1024];
--
1.6.5.2
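For readers unfamiliar with the attribute: ATTRIBUTE_SENTINEL expands to GCC's __attribute__((__sentinel__)), so the compiler warns when a caller of such a variadic function forgets the terminating NULL. A minimal self-contained sketch (not libvirt code, names made up):
#include <stdarg.h>
#include <stddef.h>
#include <stdio.h>

/* Defined locally so this sketch compiles on its own; libvirt defines the
 * same macro in its internal headers. */
#define ATTRIBUTE_SENTINEL __attribute__((__sentinel__))

/* The attribute tells GCC the variadic list must end with a NULL pointer. */
static int print_pairs(const char *name, ...) ATTRIBUTE_SENTINEL;

static int
print_pairs(const char *name, ...)
{
    va_list ap;
    const char *key;

    va_start(ap, name);
    while ((key = va_arg(ap, const char *)) != NULL) {
        const char *value = va_arg(ap, const char *);
        printf("%s: %s=%s\n", name, key, value ? value : "(null)");
    }
    va_end(ap);
    return 0;
}

int main(void)
{
    print_pairs("dom0", "op", "shutdown", NULL);    /* fine */
    /* print_pairs("dom0", "op", "shutdown");          would trigger
       "missing sentinel in function call" at compile time */
    return 0;
}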
14 years, 11 months
[libvirt] [PATCH] avoid chowning domain devices if higher-level mgmt does it for us
by Dan Kenigsberg
This is particularly important if said device is a file sitting on a
root_squashing NFS export.
---
src/qemu/qemu.conf | 4 ++++
src/qemu/qemu_conf.c | 3 +++
src/qemu/qemu_conf.h | 1 +
src/qemu/qemu_driver.c | 2 +-
4 files changed, 9 insertions(+), 1 deletions(-)
diff --git a/src/qemu/qemu.conf b/src/qemu/qemu.conf
index bca858a..892a50b 100644
--- a/src/qemu/qemu.conf
+++ b/src/qemu/qemu.conf
@@ -96,6 +96,10 @@
# The group ID for QEMU processes run by the system instance
#group = "root"
+# should libvirt assume that devices are accessible to the above user:group.
+# by default, libvirt tries to chown devices before starting up a domain and
+# restore ownership to root when domain comes down.
+#assume_devices_accessible = 0
# What cgroup controllers to make use of with QEMU guests
#
diff --git a/src/qemu/qemu_conf.c b/src/qemu/qemu_conf.c
index b1b9e5f..520a395 100644
--- a/src/qemu/qemu_conf.c
+++ b/src/qemu/qemu_conf.c
@@ -232,6 +232,9 @@ int qemudLoadDriverConfig(struct qemud_driver *driver,
return -1;
}
+ p = virConfGetValue (conf, "assume_devices_accessible");
+ CHECK_TYPE ("assume_devices_accessible", VIR_CONF_LONG);
+ if (p) driver->avoid_dev_chown = p->l;
if (virGetGroupID(NULL, group, &driver->group) < 0) {
VIR_FREE(group);
diff --git a/src/qemu/qemu_conf.h b/src/qemu/qemu_conf.h
index 675c636..3a9da73 100644
--- a/src/qemu/qemu_conf.h
+++ b/src/qemu/qemu_conf.h
@@ -87,6 +87,7 @@ struct qemud_driver {
uid_t user;
gid_t group;
+ int avoid_dev_chown;
unsigned int qemuVersion;
int nextvmid;
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 2f273eb..4c5de80 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -1968,7 +1968,7 @@ static int qemuDomainSetDeviceOwnership(virConnectPtr conn,
uid_t uid;
gid_t gid;
- if (!driver->privileged)
+ if (!driver->privileged || driver->avoid_dev_chown)
return 0;
/* short circuit case of root:root */
--
1.6.5.2
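Some context, not part of the patch: on a root_squashing NFS export the server maps the client's root to an unprivileged user, so the chown() libvirt performs before starting a domain fails with EPERM even when the management layer has already arranged suitable ownership. A hedged sketch of the run-time effect of the new option, with made-up names:
/* Illustrative only; not the actual libvirt code. */
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

struct driver_config {
    int privileged;        /* daemon running as root? */
    int avoid_dev_chown;   /* set from assume_devices_accessible */
    uid_t user;
    gid_t group;
};

static int
set_device_ownership(const struct driver_config *cfg, const char *path)
{
    /* Skip the chown when unprivileged, or when higher-level management
     * has declared the devices already accessible. */
    if (!cfg->privileged || cfg->avoid_dev_chown)
        return 0;

    if (chown(path, cfg->user, cfg->group) < 0) {
        perror(path);      /* on a root_squash NFS export this is typically EPERM */
        return -1;
    }
    return 0;
}

int main(void)
{
    struct driver_config cfg = { 1, 1, 107, 107 };   /* made-up qemu uid/gid */
    return set_device_ownership(&cfg, "/var/lib/libvirt/images/disk.img") < 0;
}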
14 years, 11 months
[libvirt] strange problem only if the binary is named /usr/sbin/libvirtd
by Shi Jin
Hi there,
I am having a very strange problem. I have built libvirt from git for our servers running Ubuntu 9.10. The same build works on all servers but one, which gives this error output:
21:08:58.827: error : qemudStartup:926 : unable to set ownership of '/var/lib/libvirt/qemu' to user 400:400: Operation not permitted
21:08:58.827: error : qemudStartup:932 : unable to set ownership of '/var/cache/libvirt/qemu' to 400:400:
This is because I used the --with-qemu-user and --with-qemu-group options in building libvirt.
After hours of debugging, I figured out the problem:
as long as the service binary is not named /usr/sbin/libvirtd, I don't get the above errors. For example, I can rename the binary to /usr/sbin/libvirtd.1 or copy it anywhere else and it works.
I am wondering whether there is anything special about the /usr/sbin/libvirtd file name. It is also strange that it works on all the other, almost identical servers but not on this particular one.
I appreciate your help.
--
Shi Jin, PhD
14 years, 11 months
[libvirt] saving domains causes all other commands to get stuck
by Nikola Ciprich
Hi,
I noticed that using libvirt to save a running KVM domain (i.e. using virsh save ...)
causes all other attempts to connect to libvirt and do something to hang until
the save finishes. It seems the issue has already been reported once:
http://www.mail-archive.com/libvir-list@redhat.com/msg11431.html
but without any resolution.
I've tried libvirt-0.7.4 and also applied bb8d57c68a5d9601058692813c7bdaf83c3d3aff
(Fix threading problems in python bindings), which seemed like it might be related,
but the problem still persists.
Here's the debug log of trying to list domains while another domain is being saved:
[root@vbox3 ~]# virsh list
16:30:31.718: debug : virInitialize:278 : register drivers
16:30:31.718: debug : virRegisterDriver:779 : registering Test as driver 0
16:30:31.718: debug : virRegisterNetworkDriver:617 : registering Test as network driver 0
16:30:31.718: debug : virRegisterInterfaceDriver:648 : registering Test as interface driver 0
16:30:31.718: debug : virRegisterStorageDriver:679 : registering Test as storage driver 0
16:30:31.718: debug : virRegisterDeviceMonitor:710 : registering Test as device driver 0
16:30:31.718: debug : virRegisterSecretDriver:741 : registering Test as secret driver 0
16:30:31.718: debug : vboxRegister:101 : VBoxCGlueInit failed, using dummy driver
16:30:31.718: debug : virRegisterDriver:779 : registering VBOX as driver 1
16:30:31.718: debug : virRegisterNetworkDriver:617 : registering VBOX as network driver 1
16:30:31.718: debug : virRegisterStorageDriver:679 : registering VBOX as storage driver 1
16:30:31.718: debug : virRegisterDriver:779 : registering remote as driver 2
16:30:31.718: debug : virRegisterNetworkDriver:617 : registering remote as network driver 2
16:30:31.718: debug : virRegisterInterfaceDriver:648 : registering remote as interface driver 1
16:30:31.718: debug : virRegisterStorageDriver:679 : registering remote as storage driver 2
16:30:31.718: debug : virRegisterDeviceMonitor:710 : registering remote as device driver 1
16:30:31.718: debug : virRegisterSecretDriver:741 : registering remote as secret driver 1
16:30:31.718: debug : virConnectOpenAuth:1279 : name=(null), auth=0x7f89bc41b8a0, flags=0
16:30:31.718: debug : do_open:1050 : no name, allowing driver auto-select
16:30:31.718: debug : do_open:1058 : trying driver 0 (Test) ...
16:30:31.718: debug : do_open:1064 : driver 0 Test returned DECLINED
16:30:31.718: debug : do_open:1058 : trying driver 1 (VBOX) ...
16:30:31.718: debug : do_open:1064 : driver 1 VBOX returned DECLINED
16:30:31.718: debug : do_open:1058 : trying driver 2 (remote) ...
16:30:31.718: debug : remoteOpen:1064 : Auto-probe remote URI
16:30:31.718: debug : doRemoteOpen:564 : proceeding with name =
16:30:31.719: debug : remoteIO:8359 : Do proc=66 serial=0 length=28 wait=(nil)
16:30:31.719: debug : remoteIO:8421 : We have the buck 66 0x7f89bc5b5010 0x7f89bc5b5010
16:30:31.720: debug : remoteIODecodeMessageLength:7843 : Got length, now need 64 total (60 more)
16:30:31.720: debug : remoteIOEventLoop:8285 : Giving up the buck 66 0x7f89bc5b5010 (nil)
16:30:31.720: debug : remoteIO:8452 : All done with our call 66 (nil) 0x7f89bc5b5010
16:30:31.720: debug : remoteIO:8359 : Do proc=1 serial=1 length=40 wait=(nil)
16:30:31.720: debug : remoteIO:8421 : We have the buck 1 0x256ce90 0x256ce90
16:30:31.721: debug : remoteIODecodeMessageLength:7843 : Got length, now need 56 total (52 more)
16:30:31.722: debug : remoteIOEventLoop:8285 : Giving up the buck 1 0x256ce90 (nil)
16:30:31.722: debug : remoteIO:8452 : All done with our call 1 (nil) 0x256ce90
16:30:31.722: debug : remoteIO:8359 : Do proc=110 serial=2 length=28 wait=(nil)
16:30:31.722: debug : remoteIO:8421 : We have the buck 110 0x256ce90 0x256ce90
16:30:31.723: debug : remoteIODecodeMessageLength:7843 : Got length, now need 76 total (72 more)
16:30:31.723: debug : remoteIOEventLoop:8285 : Giving up the buck 110 0x256ce90 (nil)
16:30:31.723: debug : remoteIO:8452 : All done with our call 110 (nil) 0x256ce90
16:30:31.723: debug : doRemoteOpen:872 : Auto-probed URI is qemu:///system
16:30:31.723: debug : doRemoteOpen:891 : Adding Handler for remote events
16:30:31.723: debug : doRemoteOpen:898 : virEventAddHandle failed: No addHandleImpl defined. continuing without events.
16:30:31.723: debug : do_open:1064 : driver 2 remote returned SUCCESS
16:30:31.723: debug : do_open:1084 : network driver 0 Test returned DECLINED
16:30:31.723: debug : do_open:1084 : network driver 1 VBOX returned DECLINED
16:30:31.723: debug : do_open:1084 : network driver 2 remote returned SUCCESS
16:30:31.723: debug : do_open:1103 : interface driver 0 Test returned DECLINED
16:30:31.723: debug : do_open:1103 : interface driver 1 remote returned SUCCESS
16:30:31.723: debug : do_open:1123 : storage driver 0 Test returned DECLINED
16:30:31.723: debug : do_open:1123 : storage driver 1 VBOX returned DECLINED
16:30:31.723: debug : do_open:1123 : storage driver 2 remote returned SUCCESS
16:30:31.723: debug : do_open:1143 : node driver 0 Test returned DECLINED
16:30:31.723: debug : do_open:1143 : node driver 1 remote returned SUCCESS
16:30:31.723: debug : do_open:1170 : secret driver 0 Test returned DECLINED
16:30:31.723: debug : do_open:1170 : secret driver 1 remote returned SUCCESS
16:30:31.723: debug : virConnectNumOfDomains:1656 : conn=0x2569c30
16:30:31.723: debug : remoteIO:8359 : Do proc=51 serial=3 length=28 wait=(nil)
16:30:31.723: debug : remoteIO:8421 : We have the buck 51 0x256ced0 0x256ced0
.. (here's the long wait till save finishes)..
16:31:11.077: debug : remoteIODecodeMessageLength:7843 : Got length, now need 60 total (56 more)
16:31:11.077: debug : remoteIOEventLoop:8285 : Giving up the buck 51 0x256ced0 (nil)
16:31:11.077: debug : remoteIO:8452 : All done with our call 51 (nil) 0x256ced0
Id Name State
----------------------------------
16:31:11.077: debug : virConnectClose:1297 : conn=0x2569c30
16:31:11.077: debug : virUnrefConnect:259 : unref connection 0x2569c30 1
16:31:11.077: debug : remoteIO:8359 : Do proc=2 serial=4 length=28 wait=(nil)
16:31:11.077: debug : remoteIO:8421 : We have the buck 2 0x256d0e0 0x256d0e0
16:31:11.079: debug : remoteIODecodeMessageLength:7843 : Got length, now need 56 total (52 more)
16:31:11.079: debug : remoteIOEventLoop:8285 : Giving up the buck 2 0x256d0e0 (nil)
16:31:11.079: debug : remoteIO:8452 : All done with our call 2 (nil) 0x256d0e0
16:31:11.079: debug : virReleaseConnect:216 : release connection 0x2569c30
Could somebody please have a look at this issue? I'll gladly provide any help I
can.
thanks a lot in advance!
with best regards
nik
--
-------------------------------------
Nikola CIPRICH
LinuxBox.cz, s.r.o.
28. rijna 168, 709 01 Ostrava
tel.: +420 596 603 142
fax: +420 596 621 273
mobil: +420 777 093 799
www.linuxbox.cz
mobil servis: +420 737 238 656
email servis: servis(a)linuxbox.cz
-------------------------------------
14 years, 11 months
[libvirt] Howto store schedinfo in the domain definition
by Ralf Nyren
Hi,
Is there support for storing the values accessible with, e.g.,
virsh schedinfo <domain> --set param=value
in the domain definition XML file?
I tried to find the syntax definition for this but didn't find anything.
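(For context: below is roughly what schedinfo --set does through the C API. It is a hedged sketch; the "weight" parameter is the Xen credit scheduler's name, the domain name is made up, and as far as I can tell the call only changes the live scheduler settings rather than the definition XML.)
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn = virConnectOpen(NULL);
    if (!conn)
        return EXIT_FAILURE;

    virDomainPtr dom = virDomainLookupByName(conn, "mydomain");   /* made-up name */
    if (!dom) {
        virConnectClose(conn);
        return EXIT_FAILURE;
    }

    virSchedParameter param;
    memset(&param, 0, sizeof(param));
    strncpy(param.field, "weight", VIR_DOMAIN_SCHED_FIELD_LENGTH - 1);
    param.type = VIR_DOMAIN_SCHED_FIELD_UINT;
    param.value.ui = 512;

    /* Adjusts the running domain's scheduler settings. */
    if (virDomainSetSchedulerParameters(dom, &param, 1) < 0)
        fprintf(stderr, "failed to set scheduler parameter\n");

    virDomainFree(dom);
    virConnectClose(conn);
    return EXIT_SUCCESS;
}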
Many thanks, Ralf
14 years, 11 months
[libvirt] Problem in handling fast prints from QEMU to stderr
by Saul Tamari
Hi,
I think I spotted a bug in the way libvirtd handles stderr output from QEMU VMs.
When starting a VM, if QEMU writes too much data to stderr
(during a given time period), libvirtd will fail and report the
following error to /var/log/messages:
libvirtd: 13:44:40.695: error : internal error Out of space while
reading console log output
I looked at the libvirt code a bit, and it seems the
stderr-handling code (I think it's in qemudReadLogOutput()) is
time-dependent: if a 4K buffer overflows, it will stop running the
VM.
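A rough, self-contained sketch of that failure mode (illustrative only; this is not the actual qemudReadLogOutput() code):
#include <errno.h>
#include <stdio.h>
#include <unistd.h>

#define LOG_BUF_SIZE 4096

static int
read_log_output(int fd, char *buf, size_t buflen)
{
    size_t got = 0;
    ssize_t ret;

    while (got < buflen - 1) {
        ret = read(fd, buf + got, buflen - got - 1);
        if (ret < 0) {
            if (errno == EAGAIN || errno == EINTR)
                continue;           /* real code would poll/retry with a timeout */
            return -1;
        }
        if (ret == 0)
            break;                  /* EOF: the writer closed its stderr */
        got += ret;
    }
    buf[got] = '\0';

    if (got == buflen - 1) {
        /* The "out of space" condition: more output than the buffer can hold. */
        fprintf(stderr, "out of space while reading console log output\n");
        return -1;
    }
    return (int)got;
}

int main(void)
{
    char buf[LOG_BUF_SIZE];
    int n = read_log_output(0, buf, sizeof(buf));   /* stdin as a stand-in fd */
    if (n >= 0)
        fwrite(buf, 1, n, stdout);
    return n < 0;
}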
As a workaround I added some usleep(250000) calls near the fprintf() calls
(inside QEMU), and I now manage to get the VM running.
Is this the way libvirtd is supposed to behave?
Thanks,
Saul
P.S. This was tested on FC11, and libvirtd is version 0.6.2.
14 years, 11 months
[libvirt] [PATCH] Xen: Add support for interface model='xenpv'
by Jiri Denemark
Xen HVM guests with PV drivers end up with two network interfaces for
each configured interface: one emulated by qemu and the
other paravirtual. As this might not be desirable, the attached
patch provides a way for users to specify that only the paravirtual network
interface should be presented to the guest.
The configuration was inspired by the qemu/kvm driver, for which users can
specify model='virtio' to use a paravirtual network interface.
The patch adds support for model='xenpv', which results in type=xenpv
instead of type=ioemu (or nothing for newer Xen versions) in the guest's
native configuration. Xen's qemu ignores interfaces with type != ioemu,
so only the paravirtual network device will be seen in the guest.
A possible addition to this would be to force type=ioemu for all other
models, which would result in only an emulated network device being provided
for an HVM guest on Xen newer than XEND_CONFIG_MAX_VERS_NET_TYPE_IOEMU.
No type would be configured when the model is missing in the domain's XML.
If you think this is a good idea, I'll prepare a second version of the
patch.
Signed-off-by: Jiri Denemark <jdenemar(a)redhat.com>
---
src/xen/xend_internal.c | 25 +++++++++++++++++--------
src/xen/xm_internal.c | 19 ++++++++++++++-----
2 files changed, 31 insertions(+), 13 deletions(-)
diff --git a/src/xen/xend_internal.c b/src/xen/xend_internal.c
index e370eb8..9ea59c7 100644
--- a/src/xen/xend_internal.c
+++ b/src/xen/xend_internal.c
@@ -5427,6 +5427,7 @@ xenDaemonFormatSxprNet(virConnectPtr conn,
int isAttach)
{
const char *script = DEFAULT_VIF_SCRIPT;
+ int pv_only = 0;
if (def->type != VIR_DOMAIN_NET_TYPE_BRIDGE &&
def->type != VIR_DOMAIN_NET_TYPE_NETWORK &&
@@ -5495,15 +5496,23 @@ xenDaemonFormatSxprNet(virConnectPtr conn,
!STRPREFIX(def->ifname, "vif"))
virBufferVSprintf(buf, "(vifname '%s')", def->ifname);
- if (def->model != NULL)
- virBufferVSprintf(buf, "(model '%s')", def->model);
+ if (def->model != NULL) {
+ if (hvm && STREQ(def->model, "xenpv"))
+ pv_only = 1;
+ else
+ virBufferVSprintf(buf, "(model '%s')", def->model);
+ }
- /*
- * apparently (type ioemu) breaks paravirt drivers on HVM so skip this
- * from Xen 3.1.0
- */
- if (hvm && xendConfigVersion <= XEND_CONFIG_MAX_VERS_NET_TYPE_IOEMU)
- virBufferAddLit(buf, "(type ioemu)");
+ if (pv_only)
+ virBufferAddLit(buf, "(type xenpv)");
+ else {
+ /*
+ * apparently (type ioemu) breaks paravirt drivers on HVM so skip this
+ * from Xen 3.1.0
+ */
+ if (hvm && xendConfigVersion <= XEND_CONFIG_MAX_VERS_NET_TYPE_IOEMU)
+ virBufferAddLit(buf, "(type ioemu)");
+ }
if (!isAttach)
virBufferAddLit(buf, ")");
diff --git a/src/xen/xm_internal.c b/src/xen/xm_internal.c
index 40c1996..fe4bcbd 100644
--- a/src/xen/xm_internal.c
+++ b/src/xen/xm_internal.c
@@ -2041,6 +2041,7 @@ static int xenXMDomainConfigFormatNet(virConnectPtr conn,
virConfValuePtr val, tmp;
char *str;
xenUnifiedPrivatePtr priv = (xenUnifiedPrivatePtr) conn->privateData;
+ int pv_only = 0;
virBufferVSprintf(&buf, "mac=%02x:%02x:%02x:%02x:%02x:%02x",
net->mac[0], net->mac[1],
@@ -2092,12 +2093,20 @@ static int xenXMDomainConfigFormatNet(virConnectPtr conn,
goto cleanup;
}
- if (hvm && priv->xendConfigVersion <= XEND_CONFIG_MAX_VERS_NET_TYPE_IOEMU)
- virBufferAddLit(&buf, ",type=ioemu");
+ if (net->model) {
+ if (hvm && STREQ(net->model, "xenpv"))
+ pv_only = 1;
+ else
+ virBufferVSprintf(&buf, ",model=%s",
+ net->model);
+ }
- if (net->model)
- virBufferVSprintf(&buf, ",model=%s",
- net->model);
+ if (pv_only)
+ virBufferAddLit(&buf, ",type=xenpv");
+ else {
+ if (hvm && priv->xendConfigVersion <= XEND_CONFIG_MAX_VERS_NET_TYPE_IOEMU)
+ virBufferAddLit(&buf, ",type=ioemu");
+ }
if (net->ifname)
virBufferVSprintf(&buf, ",vifname=%s",
--
1.6.5.3
14 years, 11 months
[libvirt] [PATCH] Fix threading problems in python bindings
by Daniel P. Berrange
* libvirt-override.c: Add many missing calls to allow threading
when entering C code; otherwise Python blocks and then deadlocks
when we have an async event to dispatch back into Python code
---
python/libvirt-override.c | 106 ++++++++++++++++++++++++++++++++++++++++----
1 files changed, 96 insertions(+), 10 deletions(-)
diff --git a/python/libvirt-override.c b/python/libvirt-override.c
index b885190..0f7db9c 100644
--- a/python/libvirt-override.c
+++ b/python/libvirt-override.c
@@ -67,7 +67,10 @@ libvirt_virDomainBlockStats(PyObject *self ATTRIBUTE_UNUSED, PyObject *args) {
return(NULL);
domain = (virDomainPtr) PyvirDomain_Get(pyobj_domain);
+ LIBVIRT_BEGIN_ALLOW_THREADS;
c_retval = virDomainBlockStats(domain, path, &stats, sizeof(stats));
+ LIBVIRT_END_ALLOW_THREADS;
+
if (c_retval < 0)
return VIR_PY_NONE;
@@ -96,7 +99,10 @@ libvirt_virDomainInterfaceStats(PyObject *self ATTRIBUTE_UNUSED, PyObject *args)
return(NULL);
domain = (virDomainPtr) PyvirDomain_Get(pyobj_domain);
+ LIBVIRT_BEGIN_ALLOW_THREADS;
c_retval = virDomainInterfaceStats(domain, path, &stats, sizeof(stats));
+ LIBVIRT_END_ALLOW_THREADS;
+
if (c_retval < 0)
return VIR_PY_NONE;
@@ -128,7 +134,9 @@ libvirt_virDomainGetSchedulerType(PyObject *self ATTRIBUTE_UNUSED,
return(NULL);
domain = (virDomainPtr) PyvirDomain_Get(pyobj_domain);
+ LIBVIRT_BEGIN_ALLOW_THREADS;
c_retval = virDomainGetSchedulerType(domain, &nparams);
+ LIBVIRT_END_ALLOW_THREADS;
if (c_retval == NULL)
return VIR_PY_NONE;
@@ -150,6 +158,7 @@ libvirt_virDomainGetSchedulerParameters(PyObject *self ATTRIBUTE_UNUSED,
virDomainPtr domain;
PyObject *pyobj_domain, *info;
char *c_retval;
+ int i_retval;
int nparams, i;
virSchedParameterPtr params;
@@ -158,7 +167,10 @@ libvirt_virDomainGetSchedulerParameters(PyObject *self ATTRIBUTE_UNUSED,
return(NULL);
domain = (virDomainPtr) PyvirDomain_Get(pyobj_domain);
+ LIBVIRT_BEGIN_ALLOW_THREADS;
c_retval = virDomainGetSchedulerType(domain, &nparams);
+ LIBVIRT_END_ALLOW_THREADS;
+
if (c_retval == NULL)
return VIR_PY_NONE;
free(c_retval);
@@ -166,7 +178,11 @@ libvirt_virDomainGetSchedulerParameters(PyObject *self ATTRIBUTE_UNUSED,
if ((params = malloc(sizeof(*params)*nparams)) == NULL)
return VIR_PY_NONE;
- if (virDomainGetSchedulerParameters(domain, params, &nparams) < 0) {
+ LIBVIRT_BEGIN_ALLOW_THREADS;
+ i_retval = virDomainGetSchedulerParameters(domain, params, &nparams);
+ LIBVIRT_END_ALLOW_THREADS;
+
+ if (i_retval < 0) {
free(params);
return VIR_PY_NONE;
}
@@ -223,6 +239,7 @@ libvirt_virDomainSetSchedulerParameters(PyObject *self ATTRIBUTE_UNUSED,
virDomainPtr domain;
PyObject *pyobj_domain, *info;
char *c_retval;
+ int i_retval;
int nparams, i;
virSchedParameterPtr params;
@@ -231,7 +248,10 @@ libvirt_virDomainSetSchedulerParameters(PyObject *self ATTRIBUTE_UNUSED,
return(NULL);
domain = (virDomainPtr) PyvirDomain_Get(pyobj_domain);
+ LIBVIRT_BEGIN_ALLOW_THREADS;
c_retval = virDomainGetSchedulerType(domain, &nparams);
+ LIBVIRT_END_ALLOW_THREADS;
+
if (c_retval == NULL)
return VIR_PY_INT_FAIL;
free(c_retval);
@@ -239,7 +259,11 @@ libvirt_virDomainSetSchedulerParameters(PyObject *self ATTRIBUTE_UNUSED,
if ((params = malloc(sizeof(*params)*nparams)) == NULL)
return VIR_PY_INT_FAIL;
- if (virDomainGetSchedulerParameters(domain, params, &nparams) < 0) {
+ LIBVIRT_BEGIN_ALLOW_THREADS;
+ i_retval = virDomainGetSchedulerParameters(domain, params, &nparams);
+ LIBVIRT_END_ALLOW_THREADS;
+
+ if (i_retval < 0) {
free(params);
return VIR_PY_INT_FAIL;
}
@@ -292,7 +316,10 @@ libvirt_virDomainSetSchedulerParameters(PyObject *self ATTRIBUTE_UNUSED,
}
}
- if (virDomainSetSchedulerParameters(domain, params, nparams) < 0) {
+ LIBVIRT_BEGIN_ALLOW_THREADS;
+ i_retval = virDomainSetSchedulerParameters(domain, params, nparams);
+ LIBVIRT_END_ALLOW_THREADS;
+ if (i_retval < 0) {
free(params);
return VIR_PY_INT_FAIL;
}
@@ -311,13 +338,17 @@ libvirt_virDomainGetVcpus(PyObject *self ATTRIBUTE_UNUSED,
virVcpuInfoPtr cpuinfo = NULL;
unsigned char *cpumap = NULL;
int cpumaplen, i;
+ int i_retval;
if (!PyArg_ParseTuple(args, (char *)"O:virDomainGetVcpus",
&pyobj_domain))
return(NULL);
domain = (virDomainPtr) PyvirDomain_Get(pyobj_domain);
- if (virNodeGetInfo(virDomainGetConnect(domain), &nodeinfo) != 0)
+ LIBVIRT_BEGIN_ALLOW_THREADS;
+ i_retval = virNodeGetInfo(virDomainGetConnect(domain), &nodeinfo);
+ LIBVIRT_END_ALLOW_THREADS;
+ if (i_retval < 0)
return VIR_PY_NONE;
if (virDomainGetInfo(domain, &dominfo) != 0)
@@ -330,9 +361,12 @@ libvirt_virDomainGetVcpus(PyObject *self ATTRIBUTE_UNUSED,
if ((cpumap = malloc(dominfo.nrVirtCpu * cpumaplen)) == NULL)
goto cleanup;
- if (virDomainGetVcpus(domain,
- cpuinfo, dominfo.nrVirtCpu,
- cpumap, cpumaplen) < 0)
+ LIBVIRT_BEGIN_ALLOW_THREADS;
+ i_retval = virDomainGetVcpus(domain,
+ cpuinfo, dominfo.nrVirtCpu,
+ cpumap, cpumaplen);
+ LIBVIRT_END_ALLOW_THREADS;
+ if (i_retval < 0)
goto cleanup;
/* convert to a Python tuple of long objects */
@@ -395,13 +429,17 @@ libvirt_virDomainPinVcpu(PyObject *self ATTRIBUTE_UNUSED,
virNodeInfo nodeinfo;
unsigned char *cpumap;
int cpumaplen, i, vcpu;
+ int i_retval;
if (!PyArg_ParseTuple(args, (char *)"OiO:virDomainPinVcpu",
&pyobj_domain, &vcpu, &pycpumap))
return(NULL);
domain = (virDomainPtr) PyvirDomain_Get(pyobj_domain);
- if (virNodeGetInfo(virDomainGetConnect(domain), &nodeinfo) != 0)
+ LIBVIRT_BEGIN_ALLOW_THREADS;
+ i_retval = virNodeGetInfo(virDomainGetConnect(domain), &nodeinfo);
+ LIBVIRT_END_ALLOW_THREADS;
+ if (i_retval < 0)
return VIR_PY_INT_FAIL;
cpumaplen = VIR_CPU_MAPLEN(VIR_NODEINFO_MAXCPUS(nodeinfo));
@@ -418,10 +456,15 @@ libvirt_virDomainPinVcpu(PyObject *self ATTRIBUTE_UNUSED,
VIR_UNUSE_CPU(cpumap, i);
}
- virDomainPinVcpu(domain, vcpu, cpumap, cpumaplen);
+ LIBVIRT_BEGIN_ALLOW_THREADS;
+ i_retval = virDomainPinVcpu(domain, vcpu, cpumap, cpumaplen);
+ LIBVIRT_END_ALLOW_THREADS;
Py_DECREF(truth);
free(cpumap);
+ if (i_retval < 0)
+ return VIR_PY_INT_FAIL;
+
return VIR_PY_INT_SUCCESS;
}
@@ -471,7 +514,10 @@ libvirt_virConnGetLastError(PyObject *self ATTRIBUTE_UNUSED, PyObject *args)
return(NULL);
conn = (virConnectPtr) PyvirConnect_Get(pyobj_conn);
- if ((err = virConnGetLastError(conn)) == NULL)
+ LIBVIRT_BEGIN_ALLOW_THREADS;
+ err = virConnGetLastError(conn);
+ LIBVIRT_END_ALLOW_THREADS;
+ if (err == NULL)
return VIR_PY_NONE;
if ((info = PyTuple_New(9)) == NULL)
@@ -793,7 +839,9 @@ libvirt_virConnectListDefinedDomains(PyObject *self ATTRIBUTE_UNUSED,
return(NULL);
conn = (virConnectPtr) PyvirConnect_Get(pyobj_conn);
+ LIBVIRT_BEGIN_ALLOW_THREADS;
c_retval = virConnectNumOfDefinedDomains(conn);
+ LIBVIRT_END_ALLOW_THREADS;
if (c_retval < 0)
return VIR_PY_NONE;
@@ -801,7 +849,9 @@ libvirt_virConnectListDefinedDomains(PyObject *self ATTRIBUTE_UNUSED,
names = malloc(sizeof(*names) * c_retval);
if (!names)
return VIR_PY_NONE;
+ LIBVIRT_BEGIN_ALLOW_THREADS;
c_retval = virConnectListDefinedDomains(conn, names, c_retval);
+ LIBVIRT_END_ALLOW_THREADS;
if (c_retval < 0) {
free(names);
return VIR_PY_NONE;
@@ -966,7 +1016,9 @@ libvirt_virConnectListNetworks(PyObject *self ATTRIBUTE_UNUSED,
return(NULL);
conn = (virConnectPtr) PyvirConnect_Get(pyobj_conn);
+ LIBVIRT_BEGIN_ALLOW_THREADS;
c_retval = virConnectNumOfNetworks(conn);
+ LIBVIRT_END_ALLOW_THREADS;
if (c_retval < 0)
return VIR_PY_NONE;
@@ -974,7 +1026,9 @@ libvirt_virConnectListNetworks(PyObject *self ATTRIBUTE_UNUSED,
names = malloc(sizeof(*names) * c_retval);
if (!names)
return VIR_PY_NONE;
+ LIBVIRT_BEGIN_ALLOW_THREADS;
c_retval = virConnectListNetworks(conn, names, c_retval);
+ LIBVIRT_END_ALLOW_THREADS;
if (c_retval < 0) {
free(names);
return VIR_PY_NONE;
@@ -1008,7 +1062,9 @@ libvirt_virConnectListDefinedNetworks(PyObject *self ATTRIBUTE_UNUSED,
return(NULL);
conn = (virConnectPtr) PyvirConnect_Get(pyobj_conn);
+ LIBVIRT_BEGIN_ALLOW_THREADS;
c_retval = virConnectNumOfDefinedNetworks(conn);
+ LIBVIRT_END_ALLOW_THREADS;
if (c_retval < 0)
return VIR_PY_NONE;
@@ -1016,7 +1072,9 @@ libvirt_virConnectListDefinedNetworks(PyObject *self ATTRIBUTE_UNUSED,
names = malloc(sizeof(*names) * c_retval);
if (!names)
return VIR_PY_NONE;
+ LIBVIRT_BEGIN_ALLOW_THREADS;
c_retval = virConnectListDefinedNetworks(conn, names, c_retval);
+ LIBVIRT_END_ALLOW_THREADS;
if (c_retval < 0) {
free(names);
return VIR_PY_NONE;
@@ -1211,7 +1269,9 @@ libvirt_virConnectListStoragePools(PyObject *self ATTRIBUTE_UNUSED,
return(NULL);
conn = (virConnectPtr) PyvirConnect_Get(pyobj_conn);
+ LIBVIRT_BEGIN_ALLOW_THREADS;
c_retval = virConnectNumOfStoragePools(conn);
+ LIBVIRT_END_ALLOW_THREADS;
if (c_retval < 0)
return VIR_PY_NONE;
@@ -1219,7 +1279,9 @@ libvirt_virConnectListStoragePools(PyObject *self ATTRIBUTE_UNUSED,
names = malloc(sizeof(*names) * c_retval);
if (!names)
return VIR_PY_NONE;
+ LIBVIRT_BEGIN_ALLOW_THREADS;
c_retval = virConnectListStoragePools(conn, names, c_retval);
+ LIBVIRT_END_ALLOW_THREADS;
if (c_retval < 0) {
free(names);
return VIR_PY_NONE;
@@ -1261,7 +1323,9 @@ libvirt_virConnectListDefinedStoragePools(PyObject *self ATTRIBUTE_UNUSED,
return(NULL);
conn = (virConnectPtr) PyvirConnect_Get(pyobj_conn);
+ LIBVIRT_BEGIN_ALLOW_THREADS;
c_retval = virConnectNumOfDefinedStoragePools(conn);
+ LIBVIRT_END_ALLOW_THREADS;
if (c_retval < 0)
return VIR_PY_NONE;
@@ -1269,7 +1333,9 @@ libvirt_virConnectListDefinedStoragePools(PyObject *self ATTRIBUTE_UNUSED,
names = malloc(sizeof(*names) * c_retval);
if (!names)
return VIR_PY_NONE;
+ LIBVIRT_BEGIN_ALLOW_THREADS;
c_retval = virConnectListDefinedStoragePools(conn, names, c_retval);
+ LIBVIRT_END_ALLOW_THREADS;
if (c_retval < 0) {
free(names);
return VIR_PY_NONE;
@@ -1311,7 +1377,9 @@ libvirt_virStoragePoolListVolumes(PyObject *self ATTRIBUTE_UNUSED,
return(NULL);
pool = (virStoragePoolPtr) PyvirStoragePool_Get(pyobj_pool);
+ LIBVIRT_BEGIN_ALLOW_THREADS;
c_retval = virStoragePoolNumOfVolumes(pool);
+ LIBVIRT_END_ALLOW_THREADS;
if (c_retval < 0)
return VIR_PY_NONE;
@@ -1319,7 +1387,9 @@ libvirt_virStoragePoolListVolumes(PyObject *self ATTRIBUTE_UNUSED,
names = malloc(sizeof(*names) * c_retval);
if (!names)
return VIR_PY_NONE;
+ LIBVIRT_BEGIN_ALLOW_THREADS;
c_retval = virStoragePoolListVolumes(pool, names, c_retval);
+ LIBVIRT_END_ALLOW_THREADS;
if (c_retval < 0) {
free(names);
return VIR_PY_NONE;
@@ -1520,7 +1590,9 @@ libvirt_virNodeListDevices(PyObject *self ATTRIBUTE_UNUSED,
return(NULL);
conn = (virConnectPtr) PyvirConnect_Get(pyobj_conn);
+ LIBVIRT_BEGIN_ALLOW_THREADS;
c_retval = virNodeNumOfDevices(conn, cap, flags);
+ LIBVIRT_END_ALLOW_THREADS;
if (c_retval < 0)
return VIR_PY_NONE;
@@ -1528,7 +1600,9 @@ libvirt_virNodeListDevices(PyObject *self ATTRIBUTE_UNUSED,
names = malloc(sizeof(*names) * c_retval);
if (!names)
return VIR_PY_NONE;
+ LIBVIRT_BEGIN_ALLOW_THREADS;
c_retval = virNodeListDevices(conn, cap, names, c_retval, flags);
+ LIBVIRT_END_ALLOW_THREADS;
if (c_retval < 0) {
free(names);
return VIR_PY_NONE;
@@ -1560,7 +1634,9 @@ libvirt_virNodeDeviceListCaps(PyObject *self ATTRIBUTE_UNUSED,
return(NULL);
dev = (virNodeDevicePtr) PyvirNodeDevice_Get(pyobj_dev);
+ LIBVIRT_BEGIN_ALLOW_THREADS;
c_retval = virNodeDeviceNumOfCaps(dev);
+ LIBVIRT_END_ALLOW_THREADS;
if (c_retval < 0)
return VIR_PY_NONE;
@@ -1568,7 +1644,9 @@ libvirt_virNodeDeviceListCaps(PyObject *self ATTRIBUTE_UNUSED,
names = malloc(sizeof(*names) * c_retval);
if (!names)
return VIR_PY_NONE;
+ LIBVIRT_BEGIN_ALLOW_THREADS;
c_retval = virNodeDeviceListCaps(dev, names, c_retval);
+ LIBVIRT_END_ALLOW_THREADS;
if (c_retval < 0) {
free(names);
return VIR_PY_NONE;
@@ -1775,7 +1853,9 @@ libvirt_virConnectListInterfaces(PyObject *self ATTRIBUTE_UNUSED,
return(NULL);
conn = (virConnectPtr) PyvirConnect_Get(pyobj_conn);
+ LIBVIRT_BEGIN_ALLOW_THREADS;
c_retval = virConnectNumOfInterfaces(conn);
+ LIBVIRT_END_ALLOW_THREADS;
if (c_retval < 0)
return VIR_PY_NONE;
@@ -1783,7 +1863,9 @@ libvirt_virConnectListInterfaces(PyObject *self ATTRIBUTE_UNUSED,
names = malloc(sizeof(*names) * c_retval);
if (!names)
return VIR_PY_NONE;
+ LIBVIRT_BEGIN_ALLOW_THREADS;
c_retval = virConnectListInterfaces(conn, names, c_retval);
+ LIBVIRT_END_ALLOW_THREADS;
if (c_retval < 0) {
free(names);
return VIR_PY_NONE;
@@ -1826,7 +1908,9 @@ libvirt_virConnectListDefinedInterfaces(PyObject *self ATTRIBUTE_UNUSED,
return(NULL);
conn = (virConnectPtr) PyvirConnect_Get(pyobj_conn);
+ LIBVIRT_BEGIN_ALLOW_THREADS;
c_retval = virConnectNumOfDefinedInterfaces(conn);
+ LIBVIRT_END_ALLOW_THREADS;
if (c_retval < 0)
return VIR_PY_NONE;
@@ -1834,7 +1918,9 @@ libvirt_virConnectListDefinedInterfaces(PyObject *self ATTRIBUTE_UNUSED,
names = malloc(sizeof(*names) * c_retval);
if (!names)
return VIR_PY_NONE;
+ LIBVIRT_BEGIN_ALLOW_THREADS;
c_retval = virConnectListDefinedInterfaces(conn, names, c_retval);
+ LIBVIRT_END_ALLOW_THREADS;
if (c_retval < 0) {
free(names);
return VIR_PY_NONE;
--
1.6.5.2
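For readers unfamiliar with the pattern: the LIBVIRT_BEGIN/END_ALLOW_THREADS wrappers serve the same purpose as the standard CPython macros used in the generic sketch below, i.e. dropping the GIL around blocking C calls so other Python threads can run. The module and function names in the sketch are made up; it is not libvirt code.
/* Generic CPython extension sketch showing the GIL-release pattern. */
#include <Python.h>
#include <unistd.h>

static PyObject *
example_blocking_call(PyObject *self, PyObject *args)
{
    int seconds;
    long rc;

    (void)self;
    if (!PyArg_ParseTuple(args, "i", &seconds))
        return NULL;

    /* Release the GIL while the potentially long C call runs, so other
     * Python threads (e.g. an event-dispatch callback) can make progress
     * instead of deadlocking behind this one. */
    Py_BEGIN_ALLOW_THREADS;
    rc = sleep(seconds);            /* stand-in for a blocking libvirt call */
    Py_END_ALLOW_THREADS;

    return PyLong_FromLong(rc);
}

static PyMethodDef example_methods[] = {
    { "blocking_call", example_blocking_call, METH_VARARGS,
      "Sleep in C with the GIL released." },
    { NULL, NULL, 0, NULL }
};

static struct PyModuleDef example_module = {
    PyModuleDef_HEAD_INIT, "example", NULL, -1, example_methods,
    NULL, NULL, NULL, NULL
};

PyMODINIT_FUNC
PyInit_example(void)
{
    return PyModule_Create(&example_module);
}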
14 years, 11 months
[libvirt] [PATCH] Put libraries in $LIBS, not $LDFLAGS, during configure tests.
by Nix
If libraries go in $LDFLAGS while AC_CHECK_LIBbing, they'll end up in
front of the object file name, which rarely works well. They belong
in $LIBS.
---
configure.in | 25 ++++++++++++-------------
1 files changed, 12 insertions(+), 13 deletions(-)
diff --git a/configure.in b/configure.in
index f735bba..5308364 100644
--- a/configure.in
+++ b/configure.in
@@ -542,14 +542,14 @@ AC_SUBST([LIBXML_LIBS])
dnl xmlURI structure has query_raw?
old_cflags="$CFLAGS"
-old_ldflags="$LDFLAGS"
+old_libs="$LIBS"
CFLAGS="$CFLAGS $LIBXML_CFLAGS"
-LDFLAGS="$LDFLAGS $LIBXML_LIBS"
+LIBS="$LIBS $LIBXML_LIBS"
AC_CHECK_MEMBER([struct _xmlURI.query_raw],
[AC_DEFINE([HAVE_XMLURI_QUERY_RAW], [], [Have query_raw field in libxml2 xmlURI structure])],,
[#include <libxml/uri.h>])
CFLAGS="$old_cflags"
-LDFLAGS="$old_ldflags"
+LIBS="$old_libs"
dnl GnuTLS library
GNUTLS_CFLAGS=
@@ -579,15 +579,15 @@ dnl Old versions of GnuTLS uses types like 'gnutls_session' instead
dnl of 'gnutls_session_t'. Try to detect this type if defined so
dnl that we can offer backwards compatibility.
old_cflags="$CFLAGS"
-old_ldflags="$LDFLAGS"
+old_libs="$LIBS"
CFLAGS="$CFLAGS $GNUTLS_CFLAGS"
-LDFLAGS="$LDFLAGS $GNUTLS_LIBS"
+LIBS="$LIBS $GNUTLS_LIBS"
AC_CHECK_TYPE([gnutls_session],
AC_DEFINE([GNUTLS_1_0_COMPAT],[],
[enable GnuTLS 1.0 compatibility macros]),,
[#include <gnutls/gnutls.h>])
CFLAGS="$old_cflags"
-LDFLAGS="$old_ldflags"
+LIBS="$old_libs"
dnl Cyrus SASL
@@ -685,12 +685,12 @@ if test "x$with_polkit" = "xyes" -o "x$with_polkit" = "xcheck"; then
[use PolicyKit for UNIX socket access checks])
old_CFLAGS=$CFLAGS
- old_LDFLAGS=$LDFLAGS
+ old_LIBS=$LIBS
CFLAGS="$CFLAGS $POLKIT_CFLAGS"
- LDFLAGS="$LDFLAGS $POLKIT_LIBS"
+ LIBS="$LIBS $POLKIT_LIBS"
AC_CHECK_FUNCS([polkit_context_is_caller_authorized])
CFLAGS="$old_CFLAGS"
- LDFLAGS="$old_LDFLAGS"
+ LIBS="$old_LIBS"
AC_PATH_PROG([POLKIT_AUTH], [polkit-auth])
if test "x$POLKIT_AUTH" != "x"; then
@@ -1682,20 +1682,19 @@ if test "x$with_hal" = "xyes" -o "x$with_hal" = "xcheck"; then
[use HAL for host device enumeration])
old_CFLAGS=$CFLAGS
- old_LDFLAGS=$LDFLAGS
+ old_LIBS=$LIBS
CFLAGS="$CFLAGS $HAL_CFLAGS"
- LDFLAGS="$LDFLAGS $HAL_LIBS"
+ LIBS="$LIBS $HAL_LIBS"
AC_CHECK_FUNCS([libhal_get_all_devices],,[with_hal=no])
AC_CHECK_FUNCS([dbus_watch_get_unix_fd])
CFLAGS="$old_CFLAGS"
- LDFLAGS="$old_LDFLAGS"
+ LIBS="$old_LIBS"
fi
fi
AM_CONDITIONAL([HAVE_HAL], [test "x$with_hal" = "xyes"])
AC_SUBST([HAL_CFLAGS])
AC_SUBST([HAL_LIBS])
-
dnl udev/libpciaccess library check for alternate host device enumeration
UDEV_CFLAGS=
UDEV_LIBS=
--
1.6.5.3.100.g75959
14 years, 11 months