[libvirt] Is libvirt-python built by default when compiling libvirt from source?
by Shanmuga Rajan
I was trying to install libvirt 0.5.0 from source.
The build itself seemed to go well, but when I tried to import the libvirt
module in Python I got the error "No module named libvirt".
I then searched the entire system for libvirt.py and realised that the
Python bindings for libvirt had not been installed.
Are the Python bindings not built by default when compiling libvirt?
If I have to install the Python bindings, what option should I use?
I looked through the output of ./configure --help in the libvirt source
directory but didn't find anything relevant.
Can anyone help me with this?
Thanks and Regards,
Shan
15 years, 11 months
[libvirt] Allowing <interface type="none"> for network interface definition
by Gihan Munasinghe
Hi
I am writing a management stack for our platform using libvirt. We have many DomU images which all have PV network drivers loaded. I want to start these DomU guests with only the PV network device visible from within the DomU.
This is a requirement of our platform, and I presume other people have the same requirement.
But when creating a server, libvirt sends a (type ioemu) tag to xend, which makes all of our VMs come up with two network cards. Using the normal xm command I can pass type=none (vif = [ "mac=00:16:3e:00:a5:57,bridge=eth0,script=vif-bridge,type=none" ]) to xend, which makes it start qemu-dm with the "-net none" switch and solves the problem.
After some investigation, I made a code change in libvirt 0.5.0 to support the value "none" within <interface type="">, which sends xend a (type none) tag, allowing qemu-dm to be configured with the "-net none" switch. ;-)
So you can define your network tag as
<interface type='none'> <!-- This will pass qemu-dm the -net none switch -->
<!-- Other configurations will work as normal -->
<mac address='00:16:3e:00:a5:57'/>
<source bridge='eth0'/>
<target dev='vif47.0'/>
</interface>
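On the management-stack side, using this from the C API would then just be an ordinary define. A sketch, assuming the patch above (the helper name and the "xml" parameter are hypothetical; xml would hold a full domain document containing an interface like the one shown):

#include <libvirt/libvirt.h>

/* Sketch, assuming the patch above: "xml" is a complete domain
 * definition whose <interface type='none'> makes qemu-dm start
 * with "-net none". */
virDomainPtr define_pv_only_guest(virConnectPtr conn, const char *xml)
{
    return virDomainDefineXML(conn, xml);
}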
Is this the proper way of doing it? :-\
Would this be general functionality that the libvirt community is interested in? (I think it adds a bit more flexibility for users who want to write their own management stacks.) If so, could I contribute the patch to the mailing list?
Looking forward to hearing from you all. I am new to libvirt, so any comments will be greatly appreciated. :-)
Thanks
Gihan
--
Gihan Munasinghe
R&D Team Leader
XCalibre Communications Ltd.
www.flexiscale.com
15 years, 11 months
[libvirt] [PATCH 1/2] Java bindings for domain events
by David Lively
The attached patch (against libvirt-java) implements Java bindings for
libvirt domain events. This version runs a libvirt EventImpl
in its own Java Thread, and adds per-Connect synchronization
that makes using the bindings thread-safe. (Note the Domain, Network,
StoragePool, and StorageVol methods also synchronize on their Connect
object, as required by libvirt. I have similar changes for
NodeDevice.java that need to be made when that code is checked in.)
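(For context, the C-level API these bindings wrap looks roughly like the sketch below, with hypothetical names; the Java EventImpl plays the role of the event loop a C client would otherwise have to supply.)

#include <libvirt/libvirt.h>
#include <stdio.h>

/* Rough sketch of the callback shape wrapped by the Java bindings;
 * in C it would be registered with virConnectDomainEventRegister(). */
static int demoDomainEventCallback(virConnectPtr conn, virDomainPtr dom,
                                   int event, int detail, void *opaque)
{
    (void)conn;
    (void)opaque;
    printf("domain %s: event %d (detail %d)\n",
           virDomainGetName(dom), event, detail);
    return 0;
}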
This version of the patch also implements and uses an enum class
(DomainEvent.Type), as suggested by Tóth István.
IMPORTANT: THIS PATCH WILL BREAK THINGS UNLESS THE NEXT [PATCH 2/2] IS
APPLIED TO libvirt FIRST. Also, libvirt must be compiled WITH_PTHREADS
for Java events to work.
Dave
15 years, 11 months
[libvirt] review fallout: short memset
by Jim Meyering
While reviewing unrelated changes, I spotted a short memset:
char **names;
...
memset(names, 0, maxnames);
That zeroes out only 1/4 (on 32-bit) or 1/8 (on 64-bit) of the memory
it should, since each element is a char pointer rather than a single byte.
It should be doing this:
memset(names, 0, maxnames * sizeof (*names));
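To see the effect in isolation, here is a standalone toy (mine, not part of the patch):

#include <stdio.h>
#include <string.h>

int main(void)
{
    char *names[4] = { (char *)1, (char *)1, (char *)1, (char *)1 };
    size_t maxnames = 4;

    /* Short memset: clears only 4 BYTES (half of names[0] on a
     * 64-bit system), so names[1..3] keep their old values. */
    memset(names, 0, maxnames);
    printf("after short memset, names[3] = %p\n", (void *)names[3]);

    /* Full memset: clears 4 POINTERS, i.e. the whole array. */
    memset(names, 0, maxnames * sizeof(*names));
    printf("after full memset,  names[3] = %p\n", (void *)names[3]);
    return 0;
}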
I checked all memset uses and found a total of 6 uses like that.
This fixes them:
From 614b0064eadc3ecef0690d5c65bb531844a3091d Mon Sep 17 00:00:00 2001
From: Jim Meyering <meyering@redhat.com>
Date: Tue, 2 Dec 2008 14:49:03 +0100
Subject: [PATCH] fix inadequate initialization in storage and test drivers
* src/storage_driver.c (storageListPools): Set all "names" entries to 0.
(storageListDefinedPools, storagePoolListVolumes): Likewise.
* src/test.c (testStoragePoolListVolumes): Likewise.
---
 src/storage_driver.c |    8 ++++----
 src/test.c           |    4 ++--
 2 files changed, 6 insertions(+), 6 deletions(-)
diff --git a/src/storage_driver.c b/src/storage_driver.c
index 366820b..53388f1 100644
--- a/src/storage_driver.c
+++ b/src/storage_driver.c
@@ -347,7 +347,7 @@ storageListPools(virConnectPtr conn,
free(names[i]);
names[i] = NULL;
}
- memset(names, 0, nnames);
+ memset(names, 0, nnames * sizeof(*names));
return -1;
}
@@ -389,7 +389,7 @@ storageListDefinedPools(virConnectPtr conn,
free(names[i]);
names[i] = NULL;
}
- memset(names, 0, nnames);
+ memset(names, 0, nnames * sizeof(*names));
return -1;
}
@@ -880,7 +880,7 @@ storagePoolListVolumes(virStoragePoolPtr obj,
return -1;
}
- memset(names, 0, maxnames);
+ memset(names, 0, maxnames * sizeof(*names));
for (i = 0 ; i < pool->volumes.count && n < maxnames ; i++) {
if ((names[n++] = strdup(pool->volumes.objs[i]->name)) == NULL) {
virStorageReportError(obj->conn, VIR_ERR_NO_MEMORY,
@@ -895,7 +895,7 @@ storagePoolListVolumes(virStoragePoolPtr obj,
for (n = 0 ; n < maxnames ; n++)
VIR_FREE(names[i]);
- memset(names, 0, maxnames);
+ memset(names, 0, maxnames * sizeof(*names));
return -1;
}
diff --git a/src/test.c b/src/test.c
index 7998886..3e942da 100644
--- a/src/test.c
+++ b/src/test.c
@@ -1951,7 +1951,7 @@ testStoragePoolListVolumes(virStoragePoolPtr obj,
POOL_IS_ACTIVE(privpool, -1);
int i = 0, n = 0;
- memset(names, 0, maxnames);
+ memset(names, 0, maxnames * sizeof(*names));
for (i = 0 ; i < privpool->volumes.count && n < maxnames ; i++) {
if ((names[n++] = strdup(privpool->volumes.objs[i]->name)) == NULL) {
testError(obj->conn, VIR_ERR_NO_MEMORY, "%s", _("name"));
@@ -1965,7 +1965,7 @@ testStoragePoolListVolumes(virStoragePoolPtr obj,
for (n = 0 ; n < maxnames ; n++)
VIR_FREE(names[i]);
- memset(names, 0, maxnames);
+ memset(names, 0, maxnames * sizeof(*names));
return -1;
}
--
1.6.0.4.1044.g77718
15 years, 11 months
[libvirt] [PATCH 2/2]: Call udevsettle in the appropriate places
by Chris Lalancette
Instead of relying solely on polling for /dev devices to appear in libvirt, we
really should be synchronizing against udev. This is generally done by a call
to udevsettle, which is exactly what this patch implements for the storage
backends that are likely to create new /dev nodes. I believe I've read that
even after udevsettle, you are not guaranteed that devices are all the way
created, so we still need the polling in the rest of the sources, but this
should give us a much better chance of things existing as we expect.
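To sketch the overall pattern (a hypothetical helper of mine, not what the patch adds; the patch only inserts the settle call and keeps the backends' existing polling):

#include <stdlib.h>
#include <sys/stat.h>
#include <unistd.h>

/* Hypothetical sketch of "settle, then still poll". */
int wait_for_dev_node(const char *path, int retries)
{
    struct stat st;

    /* Ask udev to drain its event queue; failure is non-fatal,
     * since the polling below is the fallback anyway. */
    (void) system("udevsettle");

    /* Even after settling, the node may not exist yet, so poll
     * briefly before giving up. */
    while (retries-- > 0) {
        if (stat(path, &st) == 0)
            return 0;
        usleep(100 * 1000);   /* 100ms between attempts */
    }
    return -1;
}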
Signed-off-by: Chris Lalancette <clalance@redhat.com>
Index: src/storage_backend.c
===================================================================
RCS file: /data/cvs/libvirt/src/storage_backend.c,v
retrieving revision 1.29
diff -u -r1.29 storage_backend.c
--- a/src/storage_backend.c 17 Nov 2008 11:19:33 -0000 1.29
+++ b/src/storage_backend.c 26 Nov 2008 12:40:19 -0000
@@ -270,6 +270,20 @@
return 0;
}
+void virStorageBackendWaitForDevices(virConnectPtr conn)
+{
+ const char *const settleprog[] = { UDEVSETTLE, NULL };
+ int exitstatus;
+
+ /*
+ * NOTE: we ignore errors here; this is just to make sure that any device
+ * nodes that are being created finish before we try to
+ * scan them. If this fails (udevsettle exits badly, or doesn't exist
+ * at all), then we still have a fallback mechanism anyway.
+ */
+ virRun(conn, settleprog, &exitstatus);
+}
+
/*
* Given a volume path directly in /dev/XXX, iterate over the
* entries in the directory pool->def->target.path and find the
Index: src/storage_backend.h
===================================================================
RCS file: /data/cvs/libvirt/src/storage_backend.h,v
retrieving revision 1.13
diff -u -r1.13 storage_backend.h
--- a/src/storage_backend.h 17 Nov 2008 11:19:33 -0000 1.13
+++ b/src/storage_backend.h 26 Nov 2008 12:40:19 -0000
@@ -69,6 +69,8 @@
int fd,
int withCapacity);
+void virStorageBackendWaitForDevices(virConnectPtr conn);
+
char *virStorageBackendStablePath(virConnectPtr conn,
virStoragePoolObjPtr pool,
const char *devpath);
Index: src/storage_backend_logical.c
===================================================================
RCS file: /data/cvs/libvirt/src/storage_backend_logical.c,v
retrieving revision 1.26
diff -u -r1.26 storage_backend_logical.c
--- a/src/storage_backend_logical.c 17 Nov 2008 11:19:33 -0000 1.26
+++ b/src/storage_backend_logical.c 26 Nov 2008 12:40:19 -0000
@@ -470,6 +470,8 @@
};
int exitstatus;
+ virStorageBackendWaitForDevices(conn);
+
/* Get list of all logical volumes */
if (virStorageBackendLogicalFindLVs(conn, pool, NULL) < 0) {
virStoragePoolObjClearVols(pool);
Index: src/storage_backend_disk.c
===================================================================
RCS file: /data/cvs/libvirt/src/storage_backend_disk.c,v
retrieving revision 1.20
diff -u -r1.20 storage_backend_disk.c
--- a/src/storage_backend_disk.c 17 Nov 2008 11:19:33 -0000 1.20
+++ b/src/storage_backend_disk.c 26 Nov 2008 12:40:19 -0000
@@ -262,6 +262,8 @@
VIR_FREE(pool->def->source.devices[0].freeExtents);
pool->def->source.devices[0].nfreeExtent = 0;
+ virStorageBackendWaitForDevices(conn);
+
return virStorageBackendDiskReadPartitions(conn, pool, NULL);
}
Index: src/storage_backend_iscsi.c
===================================================================
RCS file: /data/cvs/libvirt/src/storage_backend_iscsi.c,v
retrieving revision 1.18
diff -u -r1.18 storage_backend_iscsi.c
--- a/src/storage_backend_iscsi.c 17 Nov 2008 11:19:33 -0000 1.18
+++ b/src/storage_backend_iscsi.c 26 Nov 2008 12:40:19 -0000
@@ -603,6 +603,8 @@
pool->def->allocation = pool->def->capacity = pool->def->available = 0;
+ virStorageBackendWaitForDevices(conn);
+
if ((session = virStorageBackendISCSISession(conn, pool)) == NULL)
goto cleanup;
if (virStorageBackendISCSIRescanLUNs(conn, pool, session) < 0)
Index: configure.in
===================================================================
RCS file: /data/cvs/libvirt/configure.in,v
retrieving revision 1.187
diff -u -r1.187 configure.in
--- a/configure.in 25 Nov 2008 15:48:11 -0000 1.187
+++ b/configure.in 26 Nov 2008 12:40:19 -0000
@@ -115,11 +115,15 @@
[/sbin:/usr/sbin:/usr/local/sbin:$PATH])
AC_PATH_PROG([BRCTL], [brctl], [brctl],
[/sbin:/usr/sbin:/usr/local/sbin:$PATH])
+AC_PATH_PROG([UDEVSETTLE], [udevsettle], [udevsettle],
+ [/sbin:/usr/sbin:/usr/local/sbin:$PATH])
AC_DEFINE_UNQUOTED([DNSMASQ],["$DNSMASQ"],
[Location or name of the dnsmasq program])
AC_DEFINE_UNQUOTED([BRCTL],["$BRCTL"],
[Location or name of the brctl program (see bridge-utils)])
+AC_DEFINE_UNQUOTED([UDEVSETTLE],["$UDEVSETTLE"],
+ [Location or name of the udevsettle program (see udev)])
dnl Specific dir for HTML output ?
AC_ARG_WITH([html-dir], [AC_HELP_STRING([--with-html-dir=path],
15 years, 11 months
[libvirt] Concerning umlInotifyEvent
by Ron Yorston
At the top of umlInotifyEvent in uml_driver.c there is this test:
if (watch != driver->inotifyWatch)
return;
This doesn't seem to be correct. I have to comment out the test to get
libvirtd to work with UML.
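For context, such a test normally just filters a shared inotify fd: one callback can serve several watches, so each handler ignores events for watches it doesn't own. A minimal sketch of the usual pattern (hypothetical names):

#include <sys/inotify.h>

struct demoDriver {
    int inotifyWatch;   /* value returned when the watch was registered */
};

/* Called for every event on the shared inotify fd. */
void demoInotifyEvent(int watch, int fd, int events, void *opaque)
{
    struct demoDriver *driver = opaque;

    (void)fd;
    (void)events;

    /* Ignore events that belong to some other watch. If inotifyWatch
     * was never set to the registered watch descriptor, this rejects
     * every event, which would match the symptom I'm seeing. */
    if (watch != driver->inotifyWatch)
        return;

    /* ... read and dispatch the inotify events here ... */
}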
Ron
15 years, 11 months
[libvirt] libvirt 0.5.0 and KVM migration
by Mickaël Canévet
Hi,
I just installed libvirt 0.5.0 on Debian Lenny with KVM 0.72 to try KVM
migration support.
When I try to migrate a VM from one node to the other one, I have this
error message:
libvir: error : this function is not supported by the hypervisor:
virDomainMigrate
Is it really supposed to work? Do I have to upgrade KVM to a newer
version?
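For reference, here is roughly what I am attempting, expressed against the C API (example names and URIs only; error handling omitted):

#include <libvirt/libvirt.h>

/* Example names/URIs only; error handling omitted. */
int migrate_guest(void)
{
    virConnectPtr src = virConnectOpen("qemu:///system");
    virConnectPtr dst = virConnectOpen("qemu+tcp://othernode/system");
    virDomainPtr dom = virDomainLookupByName(src, "myguest");

    /* This is the call that fails with "this function is not
     * supported by the hypervisor" when the driver behind the
     * connection has no migrate implementation. */
    virDomainPtr newdom = virDomainMigrate(dom, dst, VIR_MIGRATE_LIVE,
                                           NULL, NULL, 0);
    return newdom == NULL ? -1 : 0;
}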
Thank you for your answers, and congratulations on your excellent work.
--
_______________________________________________________________________
Mickaël Canévet.
European Molecular Biology Laboratory (EMBL)
Grenoble Outstation. FRANCE
_______________________________________________________________________
15 years, 11 months
[libvirt] Doubt about KVM migration support in libvirt 0.5.0
by Shanmuga Rajan
I am using libvirt (Python) to develop a small application to manage my
nodes running KVM.
But when I tried migration, I got this error:
>>> vs.migrate(conn, libvirt.VIR_MIGRATE_LIVE,'testtt','192.168.1.82',0)
libvir: error : this function is not supported by the hypervisor:
virDomainMigrate
Traceback (most recent call last):
File "<stdin>", line 1, in ?
File "/usr/lib64/python2.4/site-packages/libvirt.py", line 301, in migrate
if ret is None:raise libvirtError('virDomainMigrate() failed', dom=self)
libvirt.libvirtError: virDomainMigrate() failed this function is not
supported by the hypervisor: virDomainMigrate
I am running kvm-66 from the CentOS 5 testing repository and libvirt
0.5.0 (the 0.5.0 release notes of Nov 25 2008 say that support for KVM
migration is included).
I am a bit confused after reading this... can anyone help me with it?
Thanks and Regards,
-Shan
15 years, 11 months
[libvirt] [PATCH] qemud/qemud.c (qemudCleanup): Plug a leak.
by Jim Meyering
Hi Dan,
With your patches, "make check" hits one new leak:
40 bytes in 1 blocks are definitely lost in loss record 9 of 65
at 0x4A05174: calloc (vg_replace_malloc.c:397)
by 0x447F29: virAllocN (memory.c:128)
by 0x40CD82: qemudRunLoop (qemud.c:1839)
by 0x40E713: main (qemud.c:2462)
This fixes it:
From 9230c41d3deb00b4a0bb6800126e5cd41a350a54 Mon Sep 17 00:00:00 2001
From: Jim Meyering <meyering@redhat.com>
Date: Mon, 1 Dec 2008 15:07:40 +0100
Subject: [PATCH] * qemud/qemud.c (qemudCleanup): Plug a leak.
---
 qemud/qemud.c |    1 +
 1 files changed, 1 insertions(+), 0 deletions(-)
diff --git a/qemud/qemud.c b/qemud/qemud.c
index 94f5918..8518bd5 100644
--- a/qemud/qemud.c
+++ b/qemud/qemud.c
@@ -1942,6 +1942,7 @@ static void qemudCleanup(struct qemud_server *server) {
virStateCleanup();
+ VIR_FREE(server->workers);
free(server);
}
--
1.6.0.4.1044.g77718
15 years, 11 months