[libvirt] [PATCH] Fix bugs in virDomainMigrate v2 code.
by Chris Lalancette
Paolo Bonzini points out that in my refactoring of the code for
virDomainMigrate(), I added a check for the return value from
virDomainMigratePerform(). The problem is that we don't want to
exit on failure; we actually want to go on and call
virDomainMigrateFinish2() with a non-0 return code to clean things
up. Remove the check.
While reproducing this issue, I also noticed that we wouldn't
always properly propagate an error message. In particular, I
found that if you blocked off the migration ports (with iptables)
and then tried the migration, it would actually fail but we would
get no failure output from Qemu. Therefore, we would think we
succeeded, and leave a huge mess behind us. Execute the monitor
command "info migrate", and look for a failure string in there
as well.
Signed-off-by: Chris Lalancette <clalance(a)redhat.com>
---
src/libvirt.c | 2 --
src/qemu_driver.c | 15 +++++++++++++++
2 files changed, 15 insertions(+), 2 deletions(-)
diff --git a/src/libvirt.c b/src/libvirt.c
index ca8e003..5c1efb6 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -2962,8 +2962,6 @@ virDomainMigrateVersion2 (virDomainPtr domain,
*/
ret = domain->conn->driver->domainMigratePerform
(domain, cookie, cookielen, uri, flags, dname, bandwidth);
- if (ret == -1)
- goto done;
/* In version 2 of the migration protocol, we pass the
* status code from the sender to the destination host,
diff --git a/src/qemu_driver.c b/src/qemu_driver.c
index eb22940..772f2f9 100644
--- a/src/qemu_driver.c
+++ b/src/qemu_driver.c
@@ -6956,6 +6956,21 @@ qemudDomainMigratePerform (virDomainPtr dom,
goto cleanup;
}
+ /* it is also possible that the migrate didn't fail initially, but
+ * rather failed later on. Check the output of "info migrate"
+ */
+ VIR_FREE(info);
+ if (qemudMonitorCommand(vm, "info migrate", &info) < 0) {
+ qemudReportError (dom->conn, dom, NULL, VIR_ERR_OPERATION_FAILED,
+ "%s", _("could not get info about migration"));
+ goto cleanup;
+ }
+ if (strstr(info, "fail") != NULL) {
+ qemudReportError (dom->conn, dom, NULL, VIR_ERR_OPERATION_FAILED,
+ _("migrate failed: %s"), info);
+ goto cleanup;
+ }
+
/* Clean up the source domain. */
qemudShutdownVMDaemon (dom->conn, driver, vm);
paused = 0;
--
1.6.0.6
[libvirt] virsh vcpuinfo always returns CPU=0
by Shi Jin
Hi there,
I wonder what the CPU entry in the output of vcpuinfo stands for. I guess it is the CPU utilization rate, right?
On both Ubuntu 9.04 (virsh version 0.6.1) and RHEL 5.4 (version 0.6.3), I always get a returned CPU value of 0, no matter how busy the CPU is or how many VCPUs are in use.
For example,
Connecting to uri: qemu:///system
VCPU: 0
CPU: 0
State: running
CPU Affinity: yyyyyyyyyyyyyyyy
VCPU: 1
CPU: 0
State: running
CPU Affinity: yyyyyyyyyyyyyyyy
Is this a problem and how do I fix it?
Thank you very much.
--
Shi Jin, PhD
[libvirt] VMware ESX: Allow ethernet address type 'vpx'
by Matthias Bolte
The VMX entry ethernet0.addressType may be set to 'vpx' besides
'static' and 'generated'. 'vpx' indicates that the MAC address was
generated by a vCenter server.
The attached patch adds 'vpx' to the valid values for ethernet0.addressType.
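For reference, such a VMX snippet might look like this (illustrative values; 00:50:56 is one of VMware's MAC OUIs):

```ini
ethernet0.present = "TRUE"
ethernet0.addressType = "vpx"
ethernet0.generatedAddress = "00:50:56:8a:12:34"
```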
Matthias
Re: [libvirt] [PATCH 2/2] RPM spec file updated with glusterfs dependency
by Gerrit Slomma
> On Tue, Jul 07, 2009 at 05:47:56AM -0700, Harshavardhana wrote:
> > Add new dependency for glusterfs rpm.
> [...]
> > +# For glusterfs
> > +Requires: glusterfs-client >= 2.0.2
> > %endif
>
> why 2.0.2? Is that a hard requirement? In Fedora/Rawhide we have
> only 2.0.1 at the moment,
>
> Daniel
>
> --
> Daniel Veillard | libxml Gnome XML XSLT toolkit http://xmlsoft.org/
> daniel veillard com | Rpmfind RPM search engine http://rpmfind.net/
> http://veillard.com/ | virtualization library http://libvirt.org/
I tested libvirt-0.7.0 today,
and this requirement really breaks installation of the builds on RHEL/CentOS/ScientificLinux.
The newest available build of glusterfs on those systems is glusterfs-client-1.3.8 from freshrpms.
Even --without-storage-fs does not help; the dependency must be getting pulled in implicitly.
rpm -pq --requires libvirt-0.7.0-1.x86_64.rpm
(...)
glusterfs-client >= 2.0.1
(...)
Is this patched in the upcoming 0.7.1, or how could I work around this issue?
[libvirt] UML driver struct refers to public API functions
by Matthias Bolte
Hi,
The commit "Generic shared impls of all NUMA apis"
(b0b968efd56f6c66bfa23eebbecd491ea993f99b) changed the UML driver
struct to use the shared NUMA API. But now the UML driver struct
refers to the public API functions:
virNodeGetCellsFreeMemory
virNodeGetFreeMemory
instead of the shared NUMA API functions
nodeGetCellsFreeMemory
nodeGetFreeMemory
This results in an infinite recursion, if someone's going to call
virNodeGetCellsFreeMemory with a UML connection.
Matthias
[libvirt] Re: OpenVZ : The restriction of domain name should be addressed
by Yuji NISHIDA
>> 2009/7/24 Daniel P. Berrange <berrange(a)redhat.com>
>>> We should make use of this --name parameter then - I guess it didn't
>>> exist when we first wrote the driver. It is useful to users to have
>>> separate ID vs Name parameters - and in fact the current reusing of
>>> 'ID' for the name, causes a little confusion in virsh's lookup
>>> routines
>>> because it can't tell whether the parameter its given is a name or
>>> an
>>> ID, since they are identical.
>>
>> There is still a question of how to specify both a name and CTID in
>> XML
>> description.
>> By default, a CTID can be obtained via openvzGimmeFirstUnusedCTID(), but
>> I think there are a number of people interested in assigning
>> CTIDs manually.
>
> Well, can <domain id=''> be used for the CTID, keeping <name> for the
> alphabetical domain name?
I wrote a small patch and tried it with the following XML setting.
### Patch ###
--- a/src/openvz_driver.c
+++ b/src/openvz_driver.c
@@ -130,9 +130,6 @@
ADD_ARG_LIT(VZCTL);
ADD_ARG_LIT("--quiet");
ADD_ARG_LIT("create");
- ADD_ARG_LIT(vmdef->id);
-
- ADD_ARG_LIT("--name");
ADD_ARG_LIT(vmdef->name);
### XML ###
<domain id='100'>
<name>abc</name>
I found that the type of id was identified as a number ( obj->type ==
XPATH_NUMBER ) in virXPathLongBase,
but it is clearly a string before it is converted.
I think the correct path is to go into the first if branch ( obj->type ==
XPATH_STRING ) and run strtol.
Then I tried with the following XML.
### XML ###
<domain id=100>
<name>abc</name>
I got following error.
AttValue: " or ' expected
I'm not sure which function returns this result; it looks like a libxml2
parse error, since XML attribute values must be quoted.
Maybe this is the first step to go forward.
I suspect the openvz*LookupBy* functions also need to be modified, right?
-----
Yuji Nishida
nishidy(a)nict.go.jp
[libvirt] PATCH: Don't blindly reorder disk drives
by Daniel P. Berrange
Calling qsort() on the disks array causes disk to be
unneccessarily re-ordered, potentially breaking the
ability to boot if the boot disk gets moved later in
the list. The new algorithm will insert a new disk as
far to the end of the list as possible, while being
ordered correctly wrt other disks on the same bus.
* src/domain_conf.c, src/domain_conf.h: Remove disk sorting
routines. Add API to insert a disk into existing list at
the optimal position, without resorting disks
* src/libvirt_private.syms: Export virDomainDiskInsert
* src/xend_internal.c, src/xm_internal.c: Remove calls to
qsort, use virDomainDiskInsert instead.
* src/qemu_driver.c: Remove calls to qsort, use virDomainDiskInsert
instead. Fix reordering bugs when hotunplugging disks and
networks. Fix memory leak in disk/net unplug
---
src/domain_conf.c | 68 ++++++++++++++++++++++++++++++++++-----------
src/domain_conf.h | 7 +++--
src/libvirt_private.syms | 3 +-
src/qemu_driver.c | 30 +++++++++++---------
src/xend_internal.c | 2 -
src/xm_internal.c | 7 +----
6 files changed, 74 insertions(+), 43 deletions(-)
Daniel
diff --git a/src/domain_conf.c b/src/domain_conf.c
index bad53f7..29a02c1 100644
--- a/src/domain_conf.c
+++ b/src/domain_conf.c
@@ -627,19 +627,8 @@ void virDomainRemoveInactive(virDomainObjListPtr doms,
}
}
-#endif /* ! PROXY */
-
-
-int virDomainDiskCompare(virDomainDiskDefPtr a,
- virDomainDiskDefPtr b) {
- if (a->bus == b->bus)
- return virDiskNameToIndex(a->dst) - virDiskNameToIndex(b->dst);
- else
- return a->bus - b->bus;
-}
-#ifndef PROXY
/* Parse the XML definition for a disk
* @param node XML nodeset to parse for disk definition
*/
@@ -2329,14 +2318,61 @@ virDomainDeviceDefPtr virDomainDeviceDefParse(virConnectPtr conn,
}
#endif
-int virDomainDiskQSort(const void *a, const void *b)
+
+int virDomainDiskInsert(virDomainDefPtr def,
+ virDomainDiskDefPtr disk)
{
- const virDomainDiskDefPtr *da = a;
- const virDomainDiskDefPtr *db = b;
- return virDomainDiskCompare(*da, *db);
+ if (VIR_REALLOC_N(def->disks, def->ndisks+1) < 0)
+ return -1;
+
+ virDomainDiskInsertPreAlloced(def, disk);
+
+ return 0;
}
+void virDomainDiskInsertPreAlloced(virDomainDefPtr def,
+ virDomainDiskDefPtr disk)
+{
+ int i;
+ /* Tentatively plan to insert disk at the end. */
+ int insertAt = -1;
+
+ /* Then work backwards looking for disks on
+ * the same bus. If we find a disk with a drive
+ * index greater than the new one, insert at
+ * that position
+ */
+ for (i = (def->ndisks - 1) ; i >= 0 ; i--) {
+ /* If bus matches and current disk is after
+ * new disk, then new disk should go here */
+ if (def->disks[i]->bus == disk->bus &&
+ (virDiskNameToIndex(def->disks[i]->dst) >
+ virDiskNameToIndex(disk->dst))) {
+ insertAt = i;
+ } else if (def->disks[i]->bus == disk->bus &&
+ insertAt == -1) {
+ /* Last disk with matching bus is before the
+ * new disk, then put new disk just after
+ */
+ insertAt = i + 1;
+ }
+ }
+
+ /* No disks with this bus yet, so put at end of list */
+ if (insertAt == -1)
+ insertAt = def->ndisks;
+
+ if (insertAt < def->ndisks)
+ memmove(def->disks + insertAt + 1,
+ def->disks + insertAt,
+ (sizeof(def->disks[0]) * (def->ndisks-insertAt)));
+
+ def->disks[insertAt] = disk;
+ def->ndisks++;
+}
+
+
#ifndef PROXY
static char *virDomainDefDefaultEmulator(virConnectPtr conn,
virDomainDefPtr def,
@@ -2643,8 +2679,6 @@ static virDomainDefPtr virDomainDefParseXML(virConnectPtr conn,
def->disks[def->ndisks++] = disk;
}
- qsort(def->disks, def->ndisks, sizeof(*def->disks),
- virDomainDiskQSort);
VIR_FREE(nodes);
/* analysis of the filesystems */
diff --git a/src/domain_conf.h b/src/domain_conf.h
index 44302be..c710986 100644
--- a/src/domain_conf.h
+++ b/src/domain_conf.h
@@ -683,9 +683,10 @@ char *virDomainCpuSetFormat(virConnectPtr conn,
char *cpuset,
int maxcpu);
-int virDomainDiskQSort(const void *a, const void *b);
-int virDomainDiskCompare(virDomainDiskDefPtr a,
- virDomainDiskDefPtr b);
+int virDomainDiskInsert(virDomainDefPtr def,
+ virDomainDiskDefPtr disk);
+void virDomainDiskInsertPreAlloced(virDomainDefPtr def,
+ virDomainDiskDefPtr disk);
int virDomainSaveXML(virConnectPtr conn,
const char *configDir,
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index 23fa01b..32961ca 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -80,7 +80,8 @@ virDomainDeviceTypeToString;
virDomainDiskBusTypeToString;
virDomainDiskDefFree;
virDomainDiskDeviceTypeToString;
-virDomainDiskQSort;
+virDomainDiskInsert;
+virDomainDiskInsertPreAlloced;
virDomainFindByID;
virDomainFindByName;
virDomainFindByUUID;
diff --git a/src/qemu_driver.c b/src/qemu_driver.c
index 4b1aeea..3ff9f9c 100644
--- a/src/qemu_driver.c
+++ b/src/qemu_driver.c
@@ -5030,9 +5030,7 @@ try_command:
dev->data.disk->pci_addr.bus = bus;
dev->data.disk->pci_addr.slot = slot;
- vm->def->disks[vm->def->ndisks++] = dev->data.disk;
- qsort(vm->def->disks, vm->def->ndisks, sizeof(*vm->def->disks),
- virDomainDiskQSort);
+ virDomainDiskInsertPreAlloced(vm->def, dev->data.disk);
return 0;
}
@@ -5090,9 +5088,7 @@ static int qemudDomainAttachUsbMassstorageDevice(virConnectPtr conn,
return -1;
}
- vm->def->disks[vm->def->ndisks++] = dev->data.disk;
- qsort(vm->def->disks, vm->def->ndisks, sizeof(*vm->def->disks),
- virDomainDiskQSort);
+ virDomainDiskInsertPreAlloced(vm->def, dev->data.disk);
VIR_FREE(reply);
VIR_FREE(cmd);
@@ -5626,17 +5622,19 @@ try_command:
}
if (vm->def->ndisks > 1) {
- vm->def->disks[i] = vm->def->disks[--vm->def->ndisks];
+ memmove(vm->def->disks + i,
+ vm->def->disks + i + 1,
+ sizeof(*vm->def->disks) *
+ (vm->def->ndisks - (i + 1)));
+ vm->def->ndisks--;
if (VIR_REALLOC_N(vm->def->disks, vm->def->ndisks) < 0) {
- virReportOOMError(conn);
- goto cleanup;
+ /* ignore, harmless */
}
- qsort(vm->def->disks, vm->def->ndisks, sizeof(*vm->def->disks),
- virDomainDiskQSort);
} else {
VIR_FREE(vm->def->disks[0]);
vm->def->ndisks = 0;
}
+ virDomainDiskDefFree(detach);
ret = 0;
cleanup:
@@ -5725,15 +5723,19 @@ qemudDomainDetachNetDevice(virConnectPtr conn,
DEBUG("%s: host_net_remove reply: %s", vm->def->name, reply);
if (vm->def->nnets > 1) {
- vm->def->nets[i] = vm->def->nets[--vm->def->nnets];
+ memmove(vm->def->nets + i,
+ vm->def->nets + i + 1,
+ sizeof(*vm->def->nets) *
+ (vm->def->nnets - (i + 1)));
+ vm->def->nnets--;
if (VIR_REALLOC_N(vm->def->nets, vm->def->nnets) < 0) {
- virReportOOMError(conn);
- goto cleanup;
+ /* ignore, harmless */
}
} else {
VIR_FREE(vm->def->nets[0]);
vm->def->nnets = 0;
}
+ virDomainNetDefFree(detach);
ret = 0;
cleanup:
diff --git a/src/xend_internal.c b/src/xend_internal.c
index 7bcee7d..99847b0 100644
--- a/src/xend_internal.c
+++ b/src/xend_internal.c
@@ -2540,8 +2540,6 @@ xenDaemonParseSxpr(virConnectPtr conn,
}
}
}
- qsort(def->disks, def->ndisks, sizeof(*def->disks),
- virDomainDiskQSort);
/* in case of HVM we have USB device emulation */
if (hvm &&
diff --git a/src/xm_internal.c b/src/xm_internal.c
index 71b852e..dd44ce5 100644
--- a/src/xm_internal.c
+++ b/src/xm_internal.c
@@ -990,8 +990,6 @@ xenXMDomainConfigParse(virConnectPtr conn, virConfPtr conf) {
disk = NULL;
}
}
- qsort(def->disks, def->ndisks, sizeof(*def->disks),
- virDomainDiskQSort);
list = virConfGetValue(conf, "vif");
if (list && list->type == VIR_CONF_LIST) {
@@ -2839,14 +2837,11 @@ xenXMDomainAttachDevice(virDomainPtr domain, const char *xml) {
switch (dev->type) {
case VIR_DOMAIN_DEVICE_DISK:
{
- if (VIR_REALLOC_N(def->disks, def->ndisks+1) < 0) {
+ if (virDomainDiskInsert(def, dev->data.disk) < 0) {
virReportOOMError(domain->conn);
goto cleanup;
}
- def->disks[def->ndisks++] = dev->data.disk;
dev->data.disk = NULL;
- qsort(def->disks, def->ndisks, sizeof(*def->disks),
- virDomainDiskQSort);
}
break;
--
1.6.2.5
--
|: Red Hat, Engineering, London -o- http://people.redhat.com/berrange/ :|
|: http://libvirt.org -o- http://virt-manager.org -o- http://ovirt.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505 -o- F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|
[libvirt] [PATCH] storage_backend_fs: avoid NULL dereference on opendir failure
by Jim Meyering
I've just begun using clang's static analyzer,
http://clang-analyzer.llvm.org/
It has uncovered a few problems in libvirt.
Here are the first few fixes.
I'll send more details later today.
From b6bb9d82effa56733fbee9013e66fed384d9ff63 Mon Sep 17 00:00:00 2001
From: Jim Meyering <meyering(a)redhat.com>
Date: Wed, 2 Sep 2009 09:42:32 +0200
Subject: [PATCH 1/4] storage_backend_fs: avoid NULL dereference on opendir failure
* src/storage_backend_fs.c (virStorageBackendFileSystemRefresh):
Don't call closedir on a NULL pointer.
---
src/storage_backend_fs.c | 3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)
diff --git a/src/storage_backend_fs.c b/src/storage_backend_fs.c
index 65b656d..8241504 100644
--- a/src/storage_backend_fs.c
+++ b/src/storage_backend_fs.c
@@ -983,7 +983,8 @@ no_memory:
/* fallthrough */
cleanup:
- closedir(dir);
+ if (dir)
+ closedir(dir);
virStorageVolDefFree(vol);
virStoragePoolObjClearVols(pool);
return -1;
--
1.6.4.2.395.ge3d52
From eaae148291680a72d19aa9d5320f90b98f123746 Mon Sep 17 00:00:00 2001
From: Jim Meyering <meyering(a)redhat.com>
Date: Wed, 2 Sep 2009 09:58:28 +0200
Subject: [PATCH 2/4] storage_conf.c: avoid overflow upon use of "z" or "Z" (zebi) suffix
* src/storage_conf.c (virStorageSize): Don't try to compute 1024^7,
since it's too large for a 64-bit type.
---
src/storage_conf.c | 6 ------
1 files changed, 0 insertions(+), 6 deletions(-)
diff --git a/src/storage_conf.c b/src/storage_conf.c
index c446069..110f0ad 100644
--- a/src/storage_conf.c
+++ b/src/storage_conf.c
@@ -919,12 +919,6 @@ virStorageSize(virConnectPtr conn,
1024ull;
break;
- case 'z':
- case 'Z':
- mult = 1024ull * 1024ull * 1024ull * 1024ull * 1024ull *
- 1024ull * 1024ull;
- break;
-
default:
virStorageReportError(conn, VIR_ERR_XML_ERROR,
_("unknown size units '%s'"), unit);
--
1.6.4.2.395.ge3d52
From 7f453c68bc709d542e4c40a388c92c7969ad0a3a Mon Sep 17 00:00:00 2001
From: Jim Meyering <meyering(a)redhat.com>
Date: Wed, 2 Sep 2009 09:58:50 +0200
Subject: [PATCH 3/4] lxc: avoid NULL dereference when we find no mount point
* src/lxc_container.c (lxcContainerUnmountOldFS): Don't pass
a NULL pointer to qsort.
---
src/lxc_container.c | 5 +++--
1 files changed, 3 insertions(+), 2 deletions(-)
diff --git a/src/lxc_container.c b/src/lxc_container.c
index 950dd50..2073864 100644
--- a/src/lxc_container.c
+++ b/src/lxc_container.c
@@ -546,8 +546,9 @@ static int lxcContainerUnmountOldFS(void)
}
endmntent(procmnt);
- qsort(mounts, nmounts, sizeof(mounts[0]),
- lxcContainerChildMountSort);
+ if (mounts)
+ qsort(mounts, nmounts, sizeof(mounts[0]),
+ lxcContainerChildMountSort);
for (i = 0 ; i < nmounts ; i++) {
VIR_DEBUG("Umount %s", mounts[i]);
--
1.6.4.2.395.ge3d52
From 4e97befca175af427ed3b75f59e67cd620ee3ce2 Mon Sep 17 00:00:00 2001
From: Jim Meyering <meyering(a)redhat.com>
Date: Wed, 2 Sep 2009 10:02:49 +0200
Subject: [PATCH 4/4] lxc: don't unlink(NULL) in main
* src/lxc_controller.c (main): Unlink sockpath only if it's non-NULL.
---
src/lxc_controller.c | 3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)
diff --git a/src/lxc_controller.c b/src/lxc_controller.c
index 8d11238..914c10a 100644
--- a/src/lxc_controller.c
+++ b/src/lxc_controller.c
@@ -803,7 +803,8 @@ cleanup:
if (def)
virFileDeletePid(LXC_STATE_DIR, def->name);
lxcControllerCleanupInterfaces(nveths, veths);
- unlink(sockpath);
+ if (sockpath)
+ unlink(sockpath);
VIR_FREE(sockpath);
return rc;
--
1.6.4.2.395.ge3d52