[libvirt] [PATCH] Cosmetic change to 'virsh nodedev-list --tree' output
by Mark McLoughlin
Maybe it's just me, but I try to select an item from the tree with a
double-click and get annoyed when the "+-" prefix gets included in the selection.
* src/virsh.c: add a space between "+-" and the node device name
in 'virsh nodedev-list --tree'
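For illustration (device name made up), a fragment of the tree output goes from

computer
  |
  +-pci_0000_00_1d_0

to

computer
  |
  +- pci_0000_00_1d_0

so a double-click on a leaf now selects just the device name, without the "+-" prefix.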
---
src/virsh.c | 4 +++-
1 files changed, 3 insertions(+), 1 deletions(-)
diff --git a/src/virsh.c b/src/virsh.c
index 94c3c4e..2d0cf81 100644
--- a/src/virsh.c
+++ b/src/virsh.c
@@ -5370,6 +5370,8 @@ cmdNodeListDevicesPrint(vshControl *ctl,
if (depth && depth < MAX_DEPTH) {
indentBuf[indentIdx] = '+';
indentBuf[indentIdx+1] = '-';
+ indentBuf[indentIdx+2] = ' ';
+ indentBuf[indentIdx+3] = '\0';
}
/* Print this device */
@@ -5398,7 +5400,7 @@ cmdNodeListDevicesPrint(vshControl *ctl,
/* If there is a child device, then print another blank line */
if (nextlastdev != -1) {
vshPrint(ctl, "%s", indentBuf);
- vshPrint(ctl, " |\n");
+ vshPrint(ctl, " |\n");
}
/* Finally print all children */
--
1.6.2.5
[libvirt] XenStore fix
by Jonas Eriksson
Hi,
I have been examining a bug where libvirtd (and virsh) does not show
all virtual machines on a Xen host. This turned out to be caused by the
following program flow:
1. virConnectNumOfDomains -> .. -> xenUnifiedNumOfDomains
-> xenHypervisorNumOfDomains => 3
2. virConnectListDomains(max=3) -> .. -> xenUnifiedListDomains(max=3)
-> xenStoreListDomains(max=3) => { 0, 2, 7 }
The domain with ID 2 is then removed when it is discovered that it is
not a running domain, which leads to this:
xenhost# xm list
Name                            ID   Mem VCPUs      State   Time(s)
Domain-0                         0 14970     2     r-----    2544.7
vm1                              7   512     1     -b----    2191.7
vm4                                  512     1                 28.0
vm5                             12   512     1     -b----     467.1
vm6                                  512     1                  0.0
vm7                                  512     1                482.4
xenhost# virsh list
Id Name State
----------------------------------
0 Domain-0 running
7 vm1 idle
xenhost#
But where does "2" come from? If we check all "directories" in
/local/domain, which is queried by the xenstore driver, it becomes apparent
that xenstore has not been cleaned up properly. We find the sequence {0, 2, 7}
as the first entries:
xenhost# xenstore ls /local/domain |grep '^[^ ]'
0 = ""
2 = ""
7 = ""
9 = ""
10 = ""
11 = ""
12 = ""
xenhost#
This patch checks that the path found in /local/domain/<domid>/vm
exists in xenstore before adding the domid to the return list. The
same thing is done for xenStoreNumOfDomains.
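Conceptually the check is along these lines (a rough sketch only, not the
actual patch; it assumes a libxenstore handle like the one the driver keeps
in its private data, and the helper name is made up):

#include <stdio.h>
#include <stdlib.h>
#include <xs.h>   /* libxenstore; newer Xen ships this header as <xenstore.h> */

/* Return 1 if /local/domain/<domid>/vm points at a node that still exists,
 * i.e. the domain entry is not just a stale leftover in xenstore. */
static int
xenStoreDomainIsReal(struct xs_handle *xsh, int domid)
{
    char path[64];
    unsigned int len = 0;
    char *vmpath, *vmnode;
    int real = 0;

    snprintf(path, sizeof(path), "/local/domain/%d/vm", domid);
    vmpath = xs_read(xsh, XBT_NULL, path, &len);    /* e.g. "/vm/<uuid>" */
    if (vmpath == NULL)
        return 0;                                   /* stale entry: no vm link */

    vmnode = xs_read(xsh, XBT_NULL, vmpath, &len);  /* does the target still exist? */
    if (vmnode != NULL) {
        real = 1;
        free(vmnode);
    }
    free(vmpath);
    return real;
}

xenStoreNumOfDomains and xenStoreListDomains would then only count/return the
domids for which this check succeeds.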
I use SLES11 with Xen 3.3.1_18546_12-3.1.
/Jonas
[libvirt] libvirt socket closed unexpectedly (code=39) & libvirt: Broken pipe (code=38)
by dave c
Hello everyone,
I'm trying to get Eucalyptus v1.5.2 working on Debian Lenny 64-bit; the
relevant info I can give you is as follows:
--- libvirtd conf file ----
01:/etc/libvirt# cat libvirtd.conf|grep -v '#'
unix_sock_group = "libvirt"
unix_sock_rw_perms = "0770"
auth_unix_ro = "none"
auth_unix_rw = "none"
---------------------
version of libvirt installed on my server (cloud controller / cluster
controller):
01:/etc/libvirt# dpkg -l|grep libvirt
ii  libvirt-bin  0.4.6-10  the programs for the libvirt library
ii  libvirt-dev  0.4.6-10  development files for the libvirt library
ii  libvirt-doc  0.4.6-10  documentation for the libvirt library
ii  libvirt0     0.4.6-10  library for interfacing with different virtualization systems
version installed on my node:
02:~# dpkg -l |grep libvirt
ii  libvirt-bin  0.4.6-10  the programs for the libvirt library
ii  libvirt0     0.4.6-10  library for interfacing with different virtualization systems
================
error in my nc.log (eucalyptus node log):
[Sat Aug 15 02:24:58 2009][005115][EUCADEBUG ] system_output():
[//usr/lib/eucalyptus/euca_rootwrap
//usr/share/eucalyptus/gen_kvm_libvirt_xml --ramdisk --ephemeral]
[Sat Aug 15 02:24:58 2009][005115][EUCAERROR ] libvirt: socket closed
unexpectedly (code=39)
[Sat Aug 15 02:24:58 2009][005115][EUCAERROR ] libvirt: Broken pipe
(code=38)
Basically, when I start up an instance, it starts and then terminates
within a minute.
I've already posted on the Eucalyptus forums; I thought the libvir-list
might be able to assist as well.
Thoughts / ideas? Anything else I can share?
[libvirt] remote_protocol.c not compiling
by Kenneth Nagin
I am experimenting with some new APIs in libvirt 0.6.5 and added the
changes to remote_protocol.x.
I'm getting a make error about undefined references in ./.libs/libvirt.so to
the xdr arguments.
I think it is related to the fact that a new remote_protocol.c is not
being generated.
I've tried the obvious "make clean" and "make" and "make -C qemud
remote_protocol.c", but it has no effect.
[nagin@gorky trunk]$ make -C qemud remote_protocol.c
make: Entering directory
`/gpfs/reservoir/nagin/workspace/LIBVIRT/LIBVIRT_0_6_5/trunk/qemud'
make: Nothing to be done for `remote_protocol.c'.
make: Leaving directory
`/gpfs/reservoir/nagin/workspace/LIBVIRT/LIBVIRT_0_6_5/trunk/qemud'
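(For reference, a by-hand regeneration from the .x file would look roughly
like the following; this is an untested sketch, and the in-tree build may
post-process rpcgen's output, so treat it only as an approximation.)

cd qemud
rm -f remote_protocol.h remote_protocol.c
rpcgen -h remote_protocol.x > remote_protocol.h   # regenerate the header
rpcgen -c remote_protocol.x > remote_protocol.c   # regenerate the XDR (de)serializers
cd .. && make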
Any suggestions?
Kenneth Nagin
[libvirt] [PATCH] Compressed save image format for Qemu.
by Chris Lalancette
Implement a compressed save image format for qemu. Ideally we
would expose the choice between compressed and uncompressed images
through the libvirt API, but unfortunately there is no "flags"
parameter to the virDomainSave() API. Therefore, implement this
as a qemu.conf option. Both gzip and bzip2 are implemented, and
it should be very easy to add additional compression
methods.
One open question is if/how we should detect the gzip and bzip2
binaries. One way to do it is to do compile-time setting of the
paths (via configure.in), but that doesn't seem like a great thing
to do. Another solution (my preferred solution) is not to detect
at all; when we go to run the commands that need them, if they
aren't available, or aren't available in one of the standard paths,
then we'll fail. Maybe somebody else has another option or
opinion, though.
In the future, we'll have a more robust (managed) save/restore API,
at which time we can expose this functionality properly in the API.
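For illustration, with this patch an admin enables compression in
/etc/libvirt/qemu.conf and then saves/restores as usual (guest name and
path below are placeholders):

# /etc/libvirt/qemu.conf
save_image_format = "gzip"

# after restarting libvirtd so the option is re-read:
virsh save myguest /var/lib/libvirt/save/myguest.img
virsh restore /var/lib/libvirt/save/myguest.img

Internally the driver then hands qemu a monitor command along the lines of
migrate "exec:gzip -c >> '/var/lib/libvirt/save/myguest.img' 2>/dev/null"
as shown in the diff below.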
V2: get rid of redundant dd command and just use >> to append data.
V3: Add back the missing pieces for the enum and bumping the save version.
V4: Make the compressed field in the save_header an int.
Implement LZMA compression.
Signed-off-by: Chris Lalancette <clalance(a)redhat.com>
---
src/qemu.conf | 10 ++++++
src/qemu_conf.c | 11 ++++++
src/qemu_conf.h | 2 +
src/qemu_driver.c | 93 +++++++++++++++++++++++++++++++++++++++++++++++++----
4 files changed, 109 insertions(+), 7 deletions(-)
diff --git a/src/qemu.conf b/src/qemu.conf
index 653f487..1f10b43 100644
--- a/src/qemu.conf
+++ b/src/qemu.conf
@@ -129,3 +129,13 @@
# "/dev/ptmx", "/dev/kvm", "/dev/kqemu",
# "/dev/rtc", "/dev/hpet", "/dev/net/tun",
#]
+
+# The default format for Qemu/KVM guest save images is raw; that is, the
+# memory from the domain is dumped out directly to a file. If you have
+# guests with a large amount of memory, however, this can take up quite
+# a bit of space. If you would like to compress the images while they
+# are being saved to disk, you can also set "gzip", "bzip2", or "lzma"
+# for save_image_format. Note that this means you slow down the
+# process of saving a domain in order to save disk space.
+#
+# save_image_format = "raw"
diff --git a/src/qemu_conf.c b/src/qemu_conf.c
index 7ca5a15..ed87e13 100644
--- a/src/qemu_conf.c
+++ b/src/qemu_conf.c
@@ -280,6 +280,17 @@ int qemudLoadDriverConfig(struct qemud_driver *driver,
driver->cgroupDeviceACL[i] = NULL;
}
+ p = virConfGetValue (conf, "save_image_format");
+ CHECK_TYPE ("save_image_format", VIR_CONF_STRING);
+ if (p && p->str) {
+ VIR_FREE(driver->saveImageFormat);
+ if (!(driver->saveImageFormat = strdup(p->str))) {
+ virReportOOMError(NULL);
+ virConfFree(conf);
+ return -1;
+ }
+ }
+
virConfFree (conf);
return 0;
}
diff --git a/src/qemu_conf.h b/src/qemu_conf.h
index 8f4ef6a..e34baab 100644
--- a/src/qemu_conf.h
+++ b/src/qemu_conf.h
@@ -111,6 +111,8 @@ struct qemud_driver {
char *securityDriverName;
virSecurityDriverPtr securityDriver;
+
+ char *saveImageFormat;
};
diff --git a/src/qemu_driver.c b/src/qemu_driver.c
index 20906ef..3fd153d 100644
--- a/src/qemu_driver.c
+++ b/src/qemu_driver.c
@@ -3411,18 +3411,27 @@ static char *qemudEscapeShellArg(const char *in)
}
#define QEMUD_SAVE_MAGIC "LibvirtQemudSave"
-#define QEMUD_SAVE_VERSION 1
+#define QEMUD_SAVE_VERSION 2
+
+enum qemud_save_formats {
+ QEMUD_SAVE_FORMAT_RAW,
+ QEMUD_SAVE_FORMAT_GZIP,
+ QEMUD_SAVE_FORMAT_BZIP2,
+ QEMUD_SAVE_FORMAT_LZMA,
+};
struct qemud_save_header {
char magic[sizeof(QEMUD_SAVE_MAGIC)-1];
int version;
int xml_len;
int was_running;
- int unused[16];
+ int compressed;
+ int unused[15];
};
static int qemudDomainSave(virDomainPtr dom,
- const char *path) {
+ const char *path)
+{
struct qemud_driver *driver = dom->conn->privateData;
virDomainObjPtr vm;
char *command = NULL;
@@ -3433,11 +3442,28 @@ static int qemudDomainSave(virDomainPtr dom,
struct qemud_save_header header;
int ret = -1;
virDomainEventPtr event = NULL;
+ int internalret;
memset(&header, 0, sizeof(header));
memcpy(header.magic, QEMUD_SAVE_MAGIC, sizeof(header.magic));
header.version = QEMUD_SAVE_VERSION;
+ if (driver->saveImageFormat == NULL)
+ header.compressed = QEMUD_SAVE_FORMAT_RAW;
+ else if (STREQ(driver->saveImageFormat, "raw"))
+ header.compressed = QEMUD_SAVE_FORMAT_RAW;
+ else if (STREQ(driver->saveImageFormat, "gzip"))
+ header.compressed = QEMUD_SAVE_FORMAT_GZIP;
+ else if (STREQ(driver->saveImageFormat, "bzip2"))
+ header.compressed = QEMUD_SAVE_FORMAT_BZIP2;
+ else if (STREQ(driver->saveImageFormat, "lzma"))
+ header.compressed = QEMUD_SAVE_FORMAT_LZMA;
+ else {
+ qemudReportError(dom->conn, dom, NULL, VIR_ERR_OPERATION_FAILED,
+ "%s", _("Invalid save image format specified in configuration file"));
+ return -1;
+ }
+
qemuDriverLock(driver);
vm = virDomainFindByUUID(&driver->domains, dom->uuid);
@@ -3510,11 +3536,28 @@ static int qemudDomainSave(virDomainPtr dom,
virReportOOMError(dom->conn);
goto cleanup;
}
- if (virAsprintf(&command, "migrate \"exec:"
- "dd of='%s' oflag=append conv=notrunc 2>/dev/null"
- "\"", safe_path) == -1) {
+
+ if (header.compressed == QEMUD_SAVE_FORMAT_RAW)
+ internalret = virAsprintf(&command, "migrate \"exec:"
+ "dd of='%s' oflag=append conv=notrunc 2>/dev/null"
+ "\"", safe_path);
+ else if (header.compressed == QEMUD_SAVE_FORMAT_GZIP)
+ internalret = virAsprintf(&command, "migrate \"exec:"
+ "gzip -c >> '%s' 2>/dev/null\"", safe_path);
+ else if (header.compressed == QEMUD_SAVE_FORMAT_BZIP2)
+ internalret = virAsprintf(&command, "migrate \"exec:"
+ "bzip2 -c >> '%s' 2>/dev/null\"", safe_path);
+ else if (header.compressed == QEMUD_SAVE_FORMAT_LZMA)
+ internalret = virAsprintf(&command, "migrate \"exec:"
+ "lzma -c >> '%s' 2>/dev/null\"", safe_path);
+ else {
+ qemudReportError(dom->conn, dom, NULL, VIR_ERR_INTERNAL_ERROR,
+ _("Invalid compress format %d"),
+ header.compressed);
+ goto cleanup;
+ }
+ if (internalret < 0) {
virReportOOMError(dom->conn);
- command = NULL;
goto cleanup;
}
@@ -4035,6 +4078,9 @@ static int qemudDomainRestore(virConnectPtr conn,
char *xml = NULL;
struct qemud_save_header header;
virDomainEventPtr event = NULL;
+ int intermediatefd = -1;
+ pid_t intermediate_pid = -1;
+ int childstat;
qemuDriverLock(driver);
/* Verify the header and read the XML */
@@ -4124,8 +4170,41 @@ static int qemudDomainRestore(virConnectPtr conn,
}
def = NULL;
+ if (header.version == 2) {
+ const char *intermediate_argv[3] = { NULL, "-dc", NULL };
+ if (header.compressed == QEMUD_SAVE_FORMAT_GZIP)
+ intermediate_argv[0] = "gzip";
+ else if (header.compressed == QEMUD_SAVE_FORMAT_BZIP2)
+ intermediate_argv[0] = "bzip2";
+ else if (header.compressed == QEMUD_SAVE_FORMAT_LZMA)
+ intermediate_argv[0] = "lzma";
+ else if (header.compressed != QEMUD_SAVE_FORMAT_RAW) {
+ qemudReportError(conn, NULL, NULL, VIR_ERR_OPERATION_FAILED,
+ _("Unknown compressed save format %d"),
+ header.compressed);
+ goto cleanup;
+ }
+ if (intermediate_argv[0] != NULL) {
+ intermediatefd = fd;
+ fd = -1;
+ if (virExec(conn, intermediate_argv, NULL, NULL,
+ &intermediate_pid, intermediatefd, &fd, NULL, 0) < 0) {
+ qemudReportError(conn, NULL, NULL, VIR_ERR_INTERNAL_ERROR,
+ _("Failed to start decompression binary %s"),
+ intermediate_argv[0]);
+ goto cleanup;
+ }
+ }
+ }
/* Set the migration source and start it up. */
ret = qemudStartVMDaemon(conn, driver, vm, "stdio", fd);
+ if (intermediate_pid != -1) {
+ /* Wait for intermediate process to exit */
+ while (waitpid(intermediate_pid, &childstat, 0) == -1 &&
+ errno == EINTR);
+ }
+ if (intermediatefd != -1)
+ close(intermediatefd);
close(fd);
fd = -1;
if (ret < 0) {
--
1.6.0.6
[libvirt] [RFC] Interface for disk hotadd/remove
by Wolfgang Mauerer
Hi,
I'm currently interested in implementing hard disk hot-add and -remove support
for qemu (as opposed to controller-based hotplugging), and this brings up the
question of how best to support this feature in libvirt. Many SCSI controllers in
real machines, for instance, allow disks to be added and removed (without adding or
removing the controller itself) while the system is up and running, so it
would be nice to emulate this in a virtual machine. I'm focusing on
qemu on the backend side, but the problem is not tied to this
particular backend. Rather, the question is how best to integrate such
a feature into libvirt.
Before implementing the functionality, it would be great to hear the
community's opinion on which route to take with regard to the interface.
Essentially, I can see two options:
- Naturally, there are virDomain{At,De}tachDevice, but this pair currently
implements drive hot-adding by adding a new controller with an attached hard
disk to the system. By extending the XML description of the drive with a parameter
that specifies whether controller- or disk-based hotplugging is to be
performed, it would be possible to implement the desired functionality
while preserving compatibility with the existing semantics. Removing the drive
would then require another new parameter in the XML description to identify
the drive on the controller, which does not really prettify the thing.
- Extend the API with a new method (for instance virDomainDiskAttach) that
takes a hard disk description, a controller identifier, and a parameter that
identifies the disk on the controller (a rough signature sketch follows this list).
- (Theoretically, it would also be possible to implement media exchange for
hard disks in qemu and re-use the media exchange infrastructure already
present in libvirt for CD-ROMs, but since this possibility is exercised on
real hardware only very occasionally, guest operating systems are typically
not really prepared to handle it well.)
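To make option 2 concrete, the new method could look roughly like this
(a purely hypothetical signature, only meant to illustrate the shape of the
call; nothing like this exists in libvirt today):

/* Hypothetical sketch, not an existing libvirt API. */
int virDomainDiskAttach(virDomainPtr domain,
                        const char *diskXml,     /* <disk> element describing the new drive */
                        const char *controller,  /* identifies the existing controller */
                        unsigned int unit,       /* position of the disk on that controller */
                        unsigned int flags);

with a matching virDomainDiskDetach() taking the controller identifier and
unit to remove the drive again.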
My preference would be to go for option 2, that is, implement a new API
method. Would there be any obstacles to adding such a patch to
mainline? Or is anyone already working on similar functionality? Or can this
be done in a much simpler way that I've missed? If not, I'd send patches
for more detailed review before long.
Thanks,
Wolfgang
[libvirt] Share storage using iscsi
by Łukasz Mierzwa
Hi,
I'm trying to set up a pool of machines (nodes) for virtual machine hosting and
I have a few questions about shared storage. My main requirements are:
1. central management - I've got a simple Python app that stores information
about all virtual machines and all nodes; this app needs to be able to manage
volumes using the libvirt API, so I need libvirt volume pools
2. live migration - I have shared storage with HA, and I want to use it also for
live migration in case one of the nodes is dying or if I want to do some load
balancing
Right now I'm thinking about 2 machines with disks synchronized using DRBD,
both acting as identical iSCSI targets; iSCSI HA will be provided by
Heartbeat. So I will end up with a virtual IP pointing to the working iSCSI
target, and DRBD should keep the storage in sync. But:
1. I can't just use a single iSCSI LUN and export it as a libvirt storage pool
to each node, because no pool type would work that way, right?
2. http://libvirt.org/storage.html section "iSCSI volume pools" says:
"Volumes must be pre-allocated on the iSCSI server, and cannot be created via
the libvirt APIs."
So even if I had one LUN per node and set it up as an iSCSI volume pool, I
would need to create each volume on the iSCSI target. Libvirt can't manage
volumes in such a pool; it can only assign already created volumes to virtual
machines, right?
3. So maybe my storage could be set up as an LVM volume group, and this LVM
group would be managed as a libvirt LVM volume pool on the master (from
Heartbeat's POV) iSCSI target; see the rough sketch below. I would create one
logical volume per virtual machine, export this volume as a separate iSCSI LUN,
and use this LUN as the iSCSI volume for the virtual machine.
To create a new virtual machine I would:
a) create an LVM volume on the iSCSI target using libvirt (so I don't have to
Does it make any sense? Are there better ways to *manage* volumes for virtual
machines using iSCSI?
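For point 3, the workflow I have in mind would look roughly like this
(untested; names and sizes are placeholders):

# on the active iSCSI target host, manage the volume group through libvirt
virsh pool-define-as guestvg logical --source-name guestvg --target /dev/guestvg
virsh pool-start guestvg

# one logical volume per virtual machine
virsh vol-create-as guestvg vm10-disk0 20G

# then export /dev/guestvg/vm10-disk0 as its own iSCSI LUN (outside libvirt)
# and point the guest's <disk> at that LUN on whichever node runs it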
Thanks for any tips.
Łukasz Mierzwa
[libvirt] [PATCH 1/2]: VirtualBox: Updated vboxNetworkCreateXML() and vboxNetworkDefineXML()
by Pritesh Kothari
Hi All,
I have made some changes to the functions vboxNetworkCreateXML(),
vboxNetworkDefineXML(), vboxNetworkUndefine() and vboxNetworkDestroy() to handle
multiple host-only interfaces, since multiple host-only interfaces are supported
by VirtualBox 3.0 and greater.
The patches are as below:
PATCH 1/2: Merged vboxNetworkCreateXML() and vboxNetworkDefineXML() and added
code to handle multiple host-only interfaces.
PATCH 2/2: Merged vboxNetworkUndefine() and vboxNetworkDestroy() and added code
to handle multiple host-only interfaces.
Regards,
Pritesh
[libvirt] PATCH: Make UML/LXC drivers robust with bad NUMA data
by Daniel P. Berrange
commit e2052c24f39c71b3b8e92a983287f72176d73c77
Author: Daniel P. Berrange <berrange(a)redhat.com>
Date: Thu Aug 13 11:56:31 2009 +0100
Make LXC / UML drivers robust against NUMA topology brokenness
Some kernel versions expose broken NUMA topology for some machines.
This causes the LXC/UML drivers to fail to start. The QEMU driver was
already fixed for this problem.
* src/lxc_conf.c: Log and ignore failure to populate NUMA info
* src/uml_conf.c: Log and ignore failure to populate NUMA info
diff --git a/src/lxc_conf.c b/src/lxc_conf.c
index d06a024..fef60ba 100644
--- a/src/lxc_conf.c
+++ b/src/lxc_conf.c
@@ -30,6 +30,8 @@
#include "lxc_conf.h"
#include "nodeinfo.h"
#include "virterror_internal.h"
+#include "logging.h"
+
#define VIR_FROM_THIS VIR_FROM_LXC
@@ -46,8 +48,14 @@ virCapsPtr lxcCapsInit(void)
0, 0)) == NULL)
goto no_memory;
- if (nodeCapsInitNUMA(caps) < 0)
- goto no_memory;
+ /* Some machines have problematic NUMA topology causing
+ * unexpected failures. We don't want to break the QEMU
+ * driver in this scenario, so log errors & carry on
+ */
+ if (nodeCapsInitNUMA(caps) < 0) {
+ virCapabilitiesFreeNUMAInfo(caps);
+ VIR_WARN0("Failed to query host NUMA topology, disabling NUMA capabilities");
+ }
/* XXX shouldn't 'borrow' KVM's prefix */
virCapabilitiesSetMacPrefix(caps, (unsigned char []){ 0x52, 0x54, 0x00 });
diff --git a/src/uml_conf.c b/src/uml_conf.c
index 48e05a8..4f756d4 100644
--- a/src/uml_conf.c
+++ b/src/uml_conf.c
@@ -45,6 +45,7 @@
#include "nodeinfo.h"
#include "verify.h"
#include "bridge.h"
+#include "logging.h"
#define VIR_FROM_THIS VIR_FROM_UML
@@ -63,8 +64,14 @@ virCapsPtr umlCapsInit(void) {
0, 0)) == NULL)
goto no_memory;
- if (nodeCapsInitNUMA(caps) < 0)
- goto no_memory;
+ /* Some machines have problematic NUMA topology causing
+ * unexpected failures. We don't want to break the QEMU
+ * driver in this scenario, so log errors & carry on
+ */
+ if (nodeCapsInitNUMA(caps) < 0) {
+ virCapabilitiesFreeNUMAInfo(caps);
+ VIR_WARN0("Failed to query host NUMA topology, disabling NUMA capabilities");
+ }
if ((guest = virCapabilitiesAddGuest(caps,
"uml",
--
|: Red Hat, Engineering, London -o- http://people.redhat.com/berrange/ :|
|: http://libvirt.org -o- http://virt-manager.org -o- http://ovirt.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505 -o- F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|