[libvirt] [PATCH libguestfs 0/4] Add a libvirt backend to libguestfs.

This preliminary patch series adds a libvirt backend to libguestfs. It's for review only: although it launches the guest OK, there are still some missing features that need to be implemented. The meat of the patch is in part 4/4.

To save you the trouble of interpreting libxml2 fragments, an example of the generated XML and the corresponding qemu command line are attached below. Note the hack required to work around lack of support for '-drive [...]snapshot=on'.

Some questions:

- I've tried to use the minimum set of XML possible to create the guest, leaving libvirt to fill out as much as possible. How does this XML look?

- The <name> is mandatory, and I generate one randomly. Is this a good idea? I notice that my $HOME/.libvirt directory fills up with random files. Really I'd like libvirt to generate a random name and just deal with the logfiles.

- How do we query libvirt to find out if qemu supports virtio-scsi?

- Will <source file> work if the source is a host device?

- Since when has the <memory unit> attribute been available? For example, is it available in RHEL 6?

- I'm using type="kvm" and I've only tested this on baremetal, but I don't want to force KVM. If only software emulation is available, I'd like to use it.

- Is there an easy way to get -cpu host? Although I have the libvirt capabilities XML, I'd prefer not to have to parse it if I can avoid that, since libxml2 from C is so arcane.

Comments:

- The <source mode> attribute is undocumented.

Rich.
----------------------------------------------------------------------
<?xml version="1.0"?>
<domain type="kvm" xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0">
  <name>1dhdpe3sb9ub2vxd</name>
  <memory unit="MiB">500</memory>
  <currentMemory unit="MiB">500</currentMemory>
  <vcpu>1</vcpu>
  <clock offset="utc"/>
  <os>
    <type>hvm</type>
    <kernel>/home/rjones/d/libguestfs/.guestfs-500/kernel.3198</kernel>
    <initrd>/home/rjones/d/libguestfs/.guestfs-500/initrd.3198</initrd>
    <cmdline>panic=1 console=ttyS0 udevtimeout=600 no_timer_check acpi=off printk.time=1 cgroup_disable=memory root=/dev/sdb selinux=0 guestfs_verbose=1 TERM=xterm</cmdline>
  </os>
  <devices>
    <controller type="scsi" index="0" model="virtio-scsi"/>
    <disk type="file" device="disk">
      <source file="/home/rjones/d/libguestfs/test1.img"/>
      <target dev="sda" bus="scsi"/>
      <driver name="qemu" format="raw" cache="none"/>
      <address type="drive" controller="0" bus="0" target="0" unit="0"/>
    </disk>
    <disk type="file" device="disk">
      <source file="/home/rjones/d/libguestfs/.guestfs-500/root.3198"/>
      <target dev="sdb" bus="scsi"/>
      <driver name="qemu" format="raw" cache="unsafe"/>
      <address type="drive" controller="0" bus="0" target="1" unit="0"/>
    </disk>
    <channel type="unix">
      <source mode="bind" path="/home/rjones/d/libguestfs/libguestfsSSg3Kl/guestfsd.sock"/>
      <target type="virtio" name="org.libguestfs.channel.0"/>
    </channel>
  </devices>
  <qemu:commandline>
    <qemu:arg value="-set"/>
    <qemu:arg value="drive.drive-scsi0-0-1-0.snapshot=on"/>
  </qemu:commandline>
</domain>

/usr/bin/qemu-kvm -S -M pc-1.1 -enable-kvm -m 500 \
  -smp 1,sockets=1,cores=1,threads=1 \
  -name 1dhdpe3sb9ub2vxd \
  -uuid efcc8ab8-7193-da14-a72c-acbb82a1b975 \
  -nographic -nodefconfig -nodefaults \
  -chardev socket,id=charmonitor,path=/home/rjones/.libvirt/qemu/lib/1dhdpe3sb9ub2vxd.monitor,server,nowait \
  -mon chardev=charmonitor,id=monitor,mode=control \
  -rtc base=utc -no-shutdown -no-acpi \
  -kernel /home/rjones/d/libguestfs/.guestfs-500/kernel.3198 \
  -initrd /home/rjones/d/libguestfs/.guestfs-500/initrd.3198 \
  -append 'panic=1 console=ttyS0 udevtimeout=600 no_timer_check acpi=off printk.time=1 cgroup_disable=memory root=/dev/sdb selinux=0 guestfs_verbose=1 TERM=xterm' \
  -device virtio-scsi-pci,id=scsi0,bus=pci.0,addr=0x3 \
  -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 \
  -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 \
  -drive file=/home/rjones/d/libguestfs/test1.img,if=none,id=drive-scsi0-0-0-0,format=raw,cache=none \
  -device scsi-disk,bus=scsi0.0,channel=0,scsi-id=0,lun=0,drive=drive-scsi0-0-0-0,id=scsi0-0-0-0,bootindex=1 \
  -drive file=/home/rjones/d/libguestfs/.guestfs-500/root.3198,if=none,id=drive-scsi0-0-1-0,format=raw,cache=unsafe \
  -device scsi-disk,bus=scsi0.0,channel=0,scsi-id=1,lun=0,drive=drive-scsi0-0-1-0,id=scsi0-0-1-0 \
  -chardev socket,id=charchannel0,path=/home/rjones/d/libguestfs/libguestfsSSg3Kl/guestfsd.sock,server,nowait \
  -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.libguestfs.channel.0 \
  -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 \
  -set drive.drive-scsi0-0-1-0.snapshot=on

From: "Richard W.M. Jones" <rjones@redhat.com>

Since we will be calling guestfs___build_appliance from the libvirt
code in future, there's no point having two places where we have to
acquire the lock.  Push the lock down into this function instead.

Because "glthread/lock.h" was what previously pulled in <errno.h>, we
have to include <errno.h> in src/launch-appliance.c directly now.
---
 src/appliance.c        | 28 +++++++++++++++++++++++++---
 src/launch-appliance.c | 16 ++--------------
 2 files changed, 27 insertions(+), 17 deletions(-)

diff --git a/src/appliance.c b/src/appliance.c
index e42bec4..d206f3a 100644
--- a/src/appliance.c
+++ b/src/appliance.c
@@ -36,6 +36,8 @@
 #include <sys/types.h>
 #endif
 
+#include "glthread/lock.h"
+
 #include "guestfs.h"
 #include "guestfs-internal.h"
 #include "guestfs-internal-actions.h"
@@ -58,6 +60,13 @@
 static int hard_link_to_cached_appliance (guestfs_h *g, const char *cachedir, ch
 static int run_supermin_helper (guestfs_h *g, const char *supermin_path, const char *cachedir, size_t cdlen);
 static void print_febootstrap_command_line (guestfs_h *g, const char *argv[]);
 
+/* RHBZ#790721: It makes no sense to have multiple threads racing to
+ * build the appliance from within a single process, and the code
+ * isn't safe for that anyway.  Therefore put a thread lock around
+ * appliance building.
+ */
+gl_lock_define_initialized (static, building_lock);
+
 /* Locate or build the appliance.
  *
  * This function locates or builds the appliance as necessary,
@@ -136,11 +145,15 @@ guestfs___build_appliance (guestfs_h *g,
   int r;
   uid_t uid = geteuid ();
 
+  gl_lock_lock (building_lock);
+
   /* Step (1). */
   char *supermin_path;
   r = find_path (g, contains_supermin_appliance, NULL, &supermin_path);
-  if (r == -1)
+  if (r == -1) {
+    gl_lock_unlock (building_lock);
     return -1;
+  }
 
   if (r == 1) {
     /* Step (2): calculate checksum. */
@@ -152,6 +165,7 @@ guestfs___build_appliance (guestfs_h *g,
       if (r != 0) {
         free (supermin_path);
         free (checksum);
+        gl_lock_unlock (building_lock);
         return r == 1 ? 0 : -1;
       }
 
@@ -160,6 +174,7 @@ guestfs___build_appliance (guestfs_h *g,
                              kernel, initrd, appliance);
       free (supermin_path);
       free (checksum);
+      gl_lock_unlock (building_lock);
       return r;
     }
     free (supermin_path);
@@ -168,8 +183,10 @@ guestfs___build_appliance (guestfs_h *g,
   /* Step (5). */
   char *path;
   r = find_path (g, contains_fixed_appliance, NULL, &path);
-  if (r == -1)
+  if (r == -1) {
+    gl_lock_unlock (building_lock);
     return -1;
+  }
 
   if (r == 1) {
     size_t len = strlen (path);
@@ -181,13 +198,16 @@ guestfs___build_appliance (guestfs_h *g,
     sprintf (*appliance, "%s/root", path);
 
     free (path);
+    gl_lock_unlock (building_lock);
     return 0;
   }
 
   /* Step (6). */
   r = find_path (g, contains_old_style_appliance, NULL, &path);
-  if (r == -1)
+  if (r == -1) {
+    gl_lock_unlock (building_lock);
     return -1;
+  }
 
   if (r == 1) {
     size_t len = strlen (path);
@@ -198,11 +218,13 @@ guestfs___build_appliance (guestfs_h *g,
     *appliance = NULL;
 
     free (path);
+    gl_lock_unlock (building_lock);
     return 0;
   }
 
   error (g, _("cannot find any suitable libguestfs supermin, fixed or old-style appliance on LIBGUESTFS_PATH (search path: %s)"),
          g->path);
+  gl_lock_unlock (building_lock);
   return -1;
 }

diff --git a/src/launch-appliance.c b/src/launch-appliance.c
index 79eae34..f10801c 100644
--- a/src/launch-appliance.c
+++ b/src/launch-appliance.c
@@ -23,13 +23,12 @@
 #include <stdint.h>
 #include <inttypes.h>
 #include <unistd.h>
+#include <errno.h>
 #include <fcntl.h>
 #include <sys/types.h>
 #include <sys/wait.h>
 #include <signal.h>
 
-#include "glthread/lock.h"
-
 #include "guestfs.h"
 #include "guestfs-internal.h"
 #include "guestfs-internal-actions.h"
@@ -121,13 +120,6 @@ add_cmdline_shell_unquoted (guestfs_h *g, const char *options)
   }
 }
 
-/* RHBZ#790721: It makes no sense to have multiple threads racing to
- * build the appliance from within a single process, and the code
- * isn't safe for that anyway.  Therefore put a thread lock around
- * appliance building.
- */
-gl_lock_define_initialized (static, building_lock);
-
 static int
 launch_appliance (guestfs_h *g, const char *arg)
 {
@@ -150,12 +142,8 @@ launch_appliance (guestfs_h *g, const char *arg)
   /* Locate and/or build the appliance. */
   char *kernel = NULL, *initrd = NULL, *appliance = NULL;
 
-  gl_lock_lock (building_lock);
-  if (guestfs___build_appliance (g, &kernel, &initrd, &appliance) == -1) {
-    gl_lock_unlock (building_lock);
+  if (guestfs___build_appliance (g, &kernel, &initrd, &appliance) == -1)
     return -1;
-  }
-  gl_lock_unlock (building_lock);
 
   TRACE0 (launch_build_appliance_end);
-- 
1.7.10.4

From: "Richard W.M. Jones" <rjones@redhat.com>

This is just code motion.
---
 src/guestfs-internal.h | 1 +
 src/launch-appliance.c | 9 ++++-----
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/src/guestfs-internal.h b/src/guestfs-internal.h
index 7707165..f05cec2 100644
--- a/src/guestfs-internal.h
+++ b/src/guestfs-internal.h
@@ -446,6 +446,7 @@
 extern int guestfs___is_dir_nocase (guestfs_h *g, const char *);
 extern char *guestfs___download_to_tmp (guestfs_h *g, struct inspect_fs *fs, const char *filename, const char *basename, uint64_t max_size);
 extern char *guestfs___case_sensitive_path_silently (guestfs_h *g, const char *);
 extern struct inspect_fs *guestfs___search_for_root (guestfs_h *g, const char *root);
+extern char *guestfs___drive_name (size_t index, char *ret);
 
 #if defined(HAVE_HIVEX)
 extern int guestfs___check_for_filesystem_on (guestfs_h *g, const char *device, int is_block, int is_partnum);

diff --git a/src/launch-appliance.c b/src/launch-appliance.c
index f10801c..efc1284 100644
--- a/src/launch-appliance.c
+++ b/src/launch-appliance.c
@@ -40,7 +40,6 @@
 static int qemu_supports (guestfs_h *g, const char *option);
 static int qemu_supports_device (guestfs_h *g, const char *device_name);
 static int qemu_supports_virtio_scsi (guestfs_h *g);
 static char *qemu_drive_param (guestfs_h *g, const struct drive *drv, size_t index);
-static char *drive_name (size_t index, char *ret);
 
 /* Functions to build up the qemu command line.  These are only run
  * in the child process so no clean-up is required.
@@ -306,7 +305,7 @@ launch_appliance (guestfs_h *g, const char *arg)
     snprintf (appliance_root, sizeof appliance_root, "root=/dev/%cd",
               virtio_scsi ? 's' : 'v');
-    drive_name (drv_index, &appliance_root[12]);
+    guestfs___drive_name (drv_index, &appliance_root[12]);
   }
 
   if (STRNEQ (QEMU_OPTIONS, "")) {
@@ -953,11 +952,11 @@ qemu_drive_param (guestfs_h *g, const struct drive *drv, size_t index)
 }
 
 /* https://rwmj.wordpress.com/2011/01/09/how-are-linux-drives-named-beyond-driv... */
-static char *
-drive_name (size_t index, char *ret)
+char *
+guestfs___drive_name (size_t index, char *ret)
 {
   if (index >= 26)
-    ret = drive_name (index/26 - 1, ret);
+    ret = guestfs___drive_name (index/26 - 1, ret);
   index %= 26;
   *ret++ = 'a' + index;
   *ret = '\0';
-- 
1.7.10.4

From: "Richard W.M. Jones" <rjones@redhat.com>

With this commit, you can set the attach method to libvirt, but
calling launch will give an error.
---
 generator/generator_actions.ml |  7 +++++++
 src/guestfs-internal.h         |  6 +++++-
 src/guestfs.c                  | 17 +++++++++++++++++
 src/guestfs.pod                |  2 ++
 src/launch.c                   |  4 ++++
 5 files changed, 35 insertions(+), 1 deletion(-)

diff --git a/generator/generator_actions.ml b/generator/generator_actions.ml
index c25bda1..74f76bb 100644
--- a/generator/generator_actions.ml
+++ b/generator/generator_actions.ml
@@ -1576,6 +1576,13 @@ guestfsd daemon.  Possible methods are:
 Launch an appliance and connect to it.  This is the ordinary method
 and the default.
 
+=item C<libvirt>
+
+=item C<libvirt:I<URI>>
+
+Use libvirt to launch the appliance.  The optional I<URI> is the
+libvirt connection URI to use (see L<http://libvirt.org/uri.html>).
+
 =item C<unix:I<path>>
 
 Connect to the Unix domain socket I<path>.

diff --git a/src/guestfs-internal.h b/src/guestfs-internal.h
index f05cec2..8fbe2ec 100644
--- a/src/guestfs-internal.h
+++ b/src/guestfs-internal.h
@@ -122,7 +122,11 @@ enum state { CONFIG, LAUNCHING, READY, NO_HANDLE };
 
 /* Attach method. */
-enum attach_method { ATTACH_METHOD_APPLIANCE = 0, ATTACH_METHOD_UNIX };
+enum attach_method {
+  ATTACH_METHOD_APPLIANCE,
+  ATTACH_METHOD_LIBVIRT,
+  ATTACH_METHOD_UNIX,
+};
 
 /* Event. */
 struct event {

diff --git a/src/guestfs.c b/src/guestfs.c
index e848ff8..e13dd9f 100644
--- a/src/guestfs.c
+++ b/src/guestfs.c
@@ -746,6 +746,16 @@ guestfs__set_attach_method (guestfs_h *g, const char *method)
     free (g->attach_method_arg);
     g->attach_method_arg = NULL;
   }
+  else if (STREQ (method, "libvirt")) {
+    g->attach_method = ATTACH_METHOD_LIBVIRT;
+    free (g->attach_method_arg);
+    g->attach_method_arg = NULL;
+  }
+  else if (STRPREFIX (method, "libvirt:") && strlen (method) > 8) {
+    g->attach_method = ATTACH_METHOD_LIBVIRT;
+    free (g->attach_method_arg);
+    g->attach_method_arg = safe_strdup (g, method + 8);
+  }
   else if (STRPREFIX (method, "unix:") && strlen (method) > 5) {
     g->attach_method = ATTACH_METHOD_UNIX;
     free (g->attach_method_arg);
@@ -770,6 +780,13 @@ guestfs__get_attach_method (guestfs_h *g)
     ret = safe_strdup (g, "appliance");
     break;
 
+  case ATTACH_METHOD_LIBVIRT:
+    if (g->attach_method_arg == NULL)
+      ret = safe_strdup (g, "libvirt");
+    else
+      ret = safe_asprintf (g, "libvirt:%s", g->attach_method_arg);
+    break;
+
   case ATTACH_METHOD_UNIX:
     ret = safe_asprintf (g, "unix:%s", g->attach_method_arg);
     break;

diff --git a/src/guestfs.pod b/src/guestfs.pod
index 72a5506..92bdca0 100644
--- a/src/guestfs.pod
+++ b/src/guestfs.pod
@@ -1076,6 +1076,8 @@
 library connects to the C<guestfsd> daemon in L</guestfs_launch>
 The normal attach method is C<appliance>, where a small appliance
 is created containing the daemon, and then the library connects to this.
+C<libvirt> or C<libvirt:I<URI>> are alternatives that use libvirt to
+start the appliance.
 
 Setting attach method to C<unix:I<path>> (where I<path> is the path
 of a Unix domain socket) causes L</guestfs_launch> to connect to an

diff --git a/src/launch.c b/src/launch.c
index 93029e4..7c403ab 100644
--- a/src/launch.c
+++ b/src/launch.c
@@ -325,6 +325,10 @@ guestfs__launch (guestfs_h *g)
     g->attach_ops = &attach_ops_appliance;
     break;
 
+  case ATTACH_METHOD_LIBVIRT:
+    error (g, _("libvirt attach method is not yet supported"));
+    return -1;
+
   case ATTACH_METHOD_UNIX:
     g->attach_ops = &attach_ops_unix;
     break;
-- 
1.7.10.4

From: "Richard W.M. Jones" <rjones@redhat.com>

Complete the attach-method libvirt backend.  This backend uses libvirt
to create a transient KVM domain to run the appliance.

Note that this will still only work with local libvirt URIs, since the
<kernel>, <initrd> and appliance links in the libvirt XML refer to
local files, and virtio-serial only works locally (a limitation of
libvirt).  Remote support will be added later.
---
 configure.ac           |   4 +
 po/POTFILES            |   1 +
 src/Makefile.am        |   1 +
 src/guestfs-internal.h |   6 +
 src/launch-libvirt.c   | 868 ++++++++++++++++++++++++++++++++++++++++++++++++
 src/launch.c           |   4 +-
 6 files changed, 882 insertions(+), 2 deletions(-)
 create mode 100644 src/launch-libvirt.c

diff --git a/configure.ac b/configure.ac
index a79a992..e4207c5 100644
--- a/configure.ac
+++ b/configure.ac
@@ -678,6 +678,10 @@ PKG_CHECK_MODULES([LIBXML2], [libxml-2.0],
     [AC_SUBST([LIBXML2_CFLAGS])
     AC_SUBST([LIBXML2_LIBS])
     AC_DEFINE([HAVE_LIBXML2],[1],[libxml2 found at compile time.])
+    old_LIBS="$LIBS"
+    LIBS="$LIBS $LIBXML2_LIBS"
+    AC_CHECK_FUNCS([xmlBufferDetach])
+    LIBS="$old_LIBS"
     ],
     [AC_MSG_WARN([libxml2 not found, some core features will be disabled])])
 AM_CONDITIONAL([HAVE_LIBXML2],[test "x$LIBXML2_LIBS" != "x"])

diff --git a/po/POTFILES b/po/POTFILES
index 60f8d95..ad00cd4 100644
--- a/po/POTFILES
+++ b/po/POTFILES
@@ -209,6 +209,7 @@
 src/inspect-fs.c
 src/inspect-icon.c
 src/inspect.c
 src/launch-appliance.c
+src/launch-libvirt.c
 src/launch-unix.c
 src/launch.c
 src/libvirtdomain.c

diff --git a/src/Makefile.am b/src/Makefile.am
index 25daaea..95042f8 100644
--- a/src/Makefile.am
+++ b/src/Makefile.am
@@ -138,6 +138,7 @@ libguestfs_la_SOURCES = \
        inspect-icon.c \
        launch.c \
        launch-appliance.c \
+       launch-libvirt.c \
        launch-unix.c \
        libvirtdomain.c \
        listfs.c \

diff --git a/src/guestfs-internal.h b/src/guestfs-internal.h
index 8fbe2ec..fe275f0 100644
--- a/src/guestfs-internal.h
+++ b/src/guestfs-internal.h
@@ -167,6 +167,7 @@ struct attach_ops {
   int (*shutdown) (guestfs_h *g); /* Shutdown and cleanup. */
 };
 extern struct attach_ops attach_ops_appliance;
+extern struct attach_ops attach_ops_libvirt;
 extern struct attach_ops attach_ops_unix;
 
 struct guestfs_h
@@ -272,6 +273,11 @@ struct guestfs_h
     bool virtio_scsi;           /* See function qemu_supports_virtio_scsi */
   } app;
+
+  struct {                      /* Used only by src/launch-libvirt.c. */
+    void *connv;                /* libvirt connection (really virConnectPtr) */
+    void *domv;                 /* libvirt domain (really virDomainPtr) */
+  } virt;
 };
 
 /* Per-filesystem data stored for inspect_os. */

diff --git a/src/launch-libvirt.c b/src/launch-libvirt.c
new file mode 100644
index 0000000..3b735b5
--- /dev/null
+++ b/src/launch-libvirt.c
@@ -0,0 +1,868 @@
+/* libguestfs
+ * Copyright (C) 2009-2012 Red Hat Inc.
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301 USA
+ */
+
+/* To do (XXX):
+ *
+ * - Need to query libvirt to find out if virtio-scsi is supported.
+ *   This code assumes it.
+ *
+ * - Console, so we can see appliance messages and debugging.
+ *
+ * - Network.
+ *
+ * - Remote support.
+ */
+
+#include <config.h>
+
+#include <stdio.h>
+#include <stdlib.h>
+#include <stdarg.h>
+#include <unistd.h>
+#include <fcntl.h>
+#include <limits.h>
+#include <assert.h>
+
+#ifdef HAVE_LIBVIRT
+#include <libvirt/libvirt.h>
+#include <libvirt/virterror.h>
+#endif
+
+#ifdef HAVE_LIBXML2
+#include <libxml/xmlIO.h>
+#include <libxml/xmlwriter.h>
+#include <libxml/xpath.h>
+#include <libxml/parser.h>
+#include <libxml/tree.h>
+#include <libxml/xmlsave.h>
+#endif
+
+#include "glthread/lock.h"
+
+#include "guestfs.h"
+#include "guestfs-internal.h"
+#include "guestfs-internal-actions.h"
+#include "guestfs_protocol.h"
+
+#if defined(HAVE_LIBVIRT) && defined(HAVE_LIBXML2)
+
+#ifndef HAVE_XMLBUFFERDETACH
+/* Added in libxml2 2.8.0.  This is mostly a copy of the function from
+ * upstream libxml2, which is under a more permissive license.
+ */
+static xmlChar *
+xmlBufferDetach (xmlBufferPtr buf)
+{
+  xmlChar *ret;
+
+  if (buf == NULL)
+    return NULL;
+  if (buf->alloc == XML_BUFFER_ALLOC_IMMUTABLE)
+    return NULL;
+
+  ret = buf->content;
+  buf->content = NULL;
+  buf->size = 0;
+  buf->use = 0;
+
+  return ret;
+}
+#endif
+
+static xmlChar *construct_libvirt_xml (guestfs_h *g, const char *capabilities_xml, const char *kernel, const char *initrd, const char *appliance, const char *guestfsd_sock);
+
+static void libvirt_error (guestfs_h *g, const char *fs, ...);
+
+static int
+launch_libvirt (guestfs_h *g, const char *libvirt_uri)
+{
+  virConnectPtr conn = NULL;
+  virDomainPtr dom = NULL;
+  char *capabilities = NULL;
+  xmlChar *xml = NULL;
+  char *kernel = NULL, *initrd = NULL, *appliance = NULL;
+  char guestfsd_sock[256];
+  struct sockaddr_un addr;
+  int r;
+
+  /* At present you must add drives before starting the appliance.  In
+   * future when we enable hotplugging you won't need to do this.
+   */
+  if (!g->drives) {
+    error (g, _("you must call guestfs_add_drive before guestfs_launch"));
+    return -1;
+  }
+
+  guestfs___launch_send_progress (g, 0);
+  TRACE0 (launch_libvirt_start);
+
+  if (g->verbose)
+    guestfs___print_timestamped_message (g, "connect to libvirt");
+
+  /* Connect to libvirt, get capabilities. */
+  /* XXX Support libvirt authentication in the future. */
+  conn = virConnectOpen (libvirt_uri);
+  if (!conn) {
+    error (g, _("could not connect to libvirt: URI: %s"),
+           libvirt_uri ? : "NULL");
+    goto cleanup;
+  }
+
+  if (g->verbose)
+    guestfs___print_timestamped_message (g, "get libvirt capabilities");
+
+  capabilities = virConnectGetCapabilities (conn);
+  if (!capabilities) {
+    libvirt_error (g, _("could not get libvirt capabilities"));
+    goto cleanup;
+  }
+
+  /* Locate and/or build the appliance. */
+  TRACE0 (launch_build_libvirt_appliance_start);
+
+  if (g->verbose)
+    guestfs___print_timestamped_message (g, "build appliance");
+
+  if (guestfs___build_appliance (g, &kernel, &initrd, &appliance) == -1)
+    goto cleanup;
+
+  guestfs___launch_send_progress (g, 3);
+  TRACE0 (launch_build_libvirt_appliance_end);
+
+  /* Using virtio-serial, we need to create a local Unix domain socket
+   * for qemu to connect to.
+   */
+  snprintf (guestfsd_sock, sizeof guestfsd_sock, "%s/guestfsd.sock", g->tmpdir);
+  unlink (guestfsd_sock);
+
+  g->sock = socket (AF_UNIX, SOCK_STREAM|SOCK_CLOEXEC, 0);
+  if (g->sock == -1) {
+    perrorf (g, "socket");
+    goto cleanup;
+  }
+
+  if (fcntl (g->sock, F_SETFL, O_NONBLOCK) == -1) {
+    perrorf (g, "fcntl");
+    goto cleanup;
+  }
+
+  addr.sun_family = AF_UNIX;
+  strncpy (addr.sun_path, guestfsd_sock, UNIX_PATH_MAX);
+  addr.sun_path[UNIX_PATH_MAX-1] = '\0';
+
+  if (bind (g->sock, &addr, sizeof addr) == -1) {
+    perrorf (g, "bind");
+    goto cleanup;
+  }
+
+  if (listen (g->sock, 1) == -1) {
+    perrorf (g, "listen");
+    goto cleanup;
+  }
+
+  /* XXX CONSOLE XXX */
+
+  /* Construct the libvirt XML. */
+  if (g->verbose)
+    guestfs___print_timestamped_message (g, "create libvirt XML");
+
+  xml = construct_libvirt_xml (g, capabilities,
+                               kernel, initrd, appliance,
+                               guestfsd_sock);
+  if (!xml)
+    goto cleanup;
+
+  /* Launch the libvirt guest. */
+  if (g->verbose)
+    guestfs___print_timestamped_message (g, "launch libvirt guest");
+
+  dom = virDomainCreateXML (conn, (char *) xml, VIR_DOMAIN_START_AUTODESTROY);
+  if (!dom) {
+    libvirt_error (g, _("could not create appliance through libvirt"));
+    goto cleanup;
+  }
+
+  free (kernel);
+  kernel = NULL;
+  free (initrd);
+  initrd = NULL;
+  free (appliance);
+  appliance = NULL;
+  free (xml);
+  xml = NULL;
+  free (capabilities);
+  capabilities = NULL;
+
+  g->state = LAUNCHING;
+
+  /* Wait for libvirt domain to start and to connect back to us via
+   * virtio-serial and send the GUESTFS_LAUNCH_FLAG message.
+   */
+  r = guestfs___accept_from_daemon (g);
+  if (r == -1)
+    goto cleanup;
+
+  /* NB: We reach here just because qemu has opened the socket.  It
+   * does not mean the daemon is up until we read the
+   * GUESTFS_LAUNCH_FLAG below.  Failures in qemu startup can still
+   * happen even if we reach here, even early failures like not being
+   * able to open a drive.
+   */
+
+  /* Close the listening socket. */
+  if (close (g->sock) != 0) {
+    perrorf (g, "close: listening socket");
+    close (r);
+    g->sock = -1;
+    goto cleanup;
+  }
+  g->sock = r; /* This is the accepted data socket. */
+
+  if (fcntl (g->sock, F_SETFL, O_NONBLOCK) == -1) {
+    perrorf (g, "fcntl");
+    goto cleanup;
+  }
+
+  uint32_t size;
+  void *buf = NULL;
+  r = guestfs___recv_from_daemon (g, &size, &buf);
+  free (buf);
+
+  if (r == -1) {
+    error (g, _("guestfs_launch failed, see earlier error messages"));
+    goto cleanup;
+  }
+
+  if (size != GUESTFS_LAUNCH_FLAG) {
+    error (g, _("guestfs_launch failed, see earlier error messages"));
+    goto cleanup;
+  }
+
+  if (g->verbose)
+    guestfs___print_timestamped_message (g, "appliance is up");
+
+  /* This is possible in some really strange situations, such as
+   * guestfsd starts up OK but then qemu immediately exits.  Check for
+   * it because the caller is probably expecting to be able to send
+   * commands after this function returns.
+   */
+  if (g->state != READY) {
+    error (g, _("qemu launched and contacted daemon, but state != READY"));
+    goto cleanup;
+  }
+
+  TRACE0 (launch_libvirt_end);
+
+  guestfs___launch_send_progress (g, 12);
+
+  g->virt.connv = conn;
+  g->virt.domv = dom;
+
+  return 0;
+
+ cleanup:
+  if (g->sock >= 0) {
+    close (g->sock);
+    g->sock = -1;
+  }
+  g->state = CONFIG;
+  free (kernel);
+  free (initrd);
+  free (appliance);
+  free (capabilities);
+  free (xml);
+  if (dom) {
+    virDomainDestroy (dom);
+    virDomainFree (dom);
+  }
+  if (conn)
+    virConnectClose (conn);
+
+  return -1;
+}
+
+static int construct_libvirt_xml_name (guestfs_h *g, xmlTextWriterPtr xo);
+static int construct_libvirt_xml_cpu (guestfs_h *g, xmlTextWriterPtr xo);
+static int construct_libvirt_xml_boot (guestfs_h *g, xmlTextWriterPtr xo, const char *kernel, const char *initrd, size_t appliance_index);
+static int construct_libvirt_xml_devices (guestfs_h *g, xmlTextWriterPtr xo, const char *appliance, const char *guestfsd_sock, size_t appliance_index);
+static int construct_libvirt_xml_qemu_cmdline (guestfs_h *g, xmlTextWriterPtr xo, size_t appliance_index);
+static int construct_libvirt_xml_disk (guestfs_h *g, xmlTextWriterPtr xo, struct drive *drv, size_t drv_index);
+static int construct_libvirt_xml_appliance (guestfs_h *g, xmlTextWriterPtr xo, const char *appliance, size_t appliance_index);
+
+#define XMLERROR(code,e) do {                                           \
+    if ((e) == (code)) {                                                \
+      perrorf (g, _("error constructing libvirt XML at \"%s\""),        \
+               #e);                                                     \
+      goto err;                                                         \
+    }                                                                   \
+  } while (0)
+
+static xmlChar *
+construct_libvirt_xml (guestfs_h *g, const char *capabilities_xml,
+                       const char *kernel, const char *initrd,
+                       const char *appliance,
+                       const char *guestfsd_sock)
+{
+  xmlChar *ret = NULL;
+  xmlBufferPtr xb = NULL;
+  xmlOutputBufferPtr ob;
+  xmlTextWriterPtr xo = NULL;
+  struct drive *drv = g->drives;
+  size_t appliance_index = 0;
+
+  /* Count the number of disks added, in order to get the offset
+   * of the appliance disk.
+   */
+  while (drv != NULL) {
+    drv = drv->next;
+    appliance_index++;
+  }
+
+  XMLERROR (NULL, xb = xmlBufferCreate ());
+  XMLERROR (NULL, ob = xmlOutputBufferCreateBuffer (xb, NULL));
+  XMLERROR (NULL, xo = xmlNewTextWriter (ob));
+
+  XMLERROR (-1, xmlTextWriterSetIndent (xo, 1));
+  XMLERROR (-1, xmlTextWriterSetIndentString (xo, BAD_CAST "  "));
+  XMLERROR (-1, xmlTextWriterStartDocument (xo, NULL, NULL, NULL));
+
+  XMLERROR (-1, xmlTextWriterStartElement (xo, BAD_CAST "domain"));
+  XMLERROR (-1,
+            xmlTextWriterWriteAttribute (xo, BAD_CAST "type", BAD_CAST "kvm"));
+  XMLERROR (-1,
+            xmlTextWriterWriteAttributeNS (xo,
+                                           BAD_CAST "xmlns",
+                                           BAD_CAST "qemu",
+                                           NULL,
+                                           BAD_CAST "http://libvirt.org/schemas/domain/qemu/1.0"));
+
+  if (construct_libvirt_xml_name (g, xo) == -1)
+    goto err;
+  if (construct_libvirt_xml_cpu (g, xo) == -1)
+    goto err;
+  if (construct_libvirt_xml_boot (g, xo, kernel, initrd, appliance_index) == -1)
+    goto err;
+  if (construct_libvirt_xml_devices (g, xo, appliance, guestfsd_sock,
+                                     appliance_index) == -1)
+    goto err;
+  if (construct_libvirt_xml_qemu_cmdline (g, xo, appliance_index) == -1)
+    goto err;
+
+  XMLERROR (-1, xmlTextWriterEndElement (xo));
+
+  XMLERROR (-1, xmlTextWriterEndDocument (xo));
+
+  XMLERROR (NULL, ret = xmlBufferDetach (xb)); /* caller frees ret */
+
+  debug (g, "libvirt XML:\n%s", ret);
+
+ err:
+  if (xo)
+    xmlFreeTextWriter (xo); /* frees 'ob' too */
+  if (xb)
+    xmlBufferFree (xb);
+
+  return ret;
+}
+
+/* Construct a securely random name.  We don't need to save the name
+ * because if we ever needed it, it's available from libvirt.
+ */
+#define DOMAIN_NAME_LEN 16
+
+static int
+construct_libvirt_xml_name (guestfs_h *g, xmlTextWriterPtr xo)
+{
+  int fd;
+  char name[DOMAIN_NAME_LEN+1];
+  size_t i;
+  unsigned char c;
+
+  fd = open ("/dev/urandom", O_RDONLY|O_CLOEXEC);
+  if (fd == -1) {
+    perrorf (g, "/dev/urandom: open");
+    return -1;
+  }
+
+  for (i = 0; i < DOMAIN_NAME_LEN; ++i) {
+    if (read (fd, &c, 1) != 1) {
+      perrorf (g, "/dev/urandom: read");
+      close (fd);
+      return -1;
+    }
+    name[i] = "0123456789abcdefghijklmnopqrstuvwxyz"[c % 36];
+  }
+  name[DOMAIN_NAME_LEN] = '\0';
+
+  close (fd);
+
+  XMLERROR (-1, xmlTextWriterStartElement (xo, BAD_CAST "name"));
+  XMLERROR (-1, xmlTextWriterWriteString (xo, BAD_CAST name));
+  XMLERROR (-1, xmlTextWriterEndElement (xo));
+
+  return 0;
+
+ err:
+  return -1;
+}
+
+/* CPU and memory features. */
+static int
+construct_libvirt_xml_cpu (guestfs_h *g, xmlTextWriterPtr xo)
+{
+  XMLERROR (-1, xmlTextWriterStartElement (xo, BAD_CAST "memory"));
+  XMLERROR (-1,
+            xmlTextWriterWriteAttribute (xo, BAD_CAST "unit", BAD_CAST "MiB"));
+  XMLERROR (-1, xmlTextWriterWriteFormatString (xo, "%d", g->memsize));
+  XMLERROR (-1, xmlTextWriterEndElement (xo));
+
+  XMLERROR (-1, xmlTextWriterStartElement (xo, BAD_CAST "currentMemory"));
+  XMLERROR (-1,
+            xmlTextWriterWriteAttribute (xo, BAD_CAST "unit", BAD_CAST "MiB"));
+  XMLERROR (-1, xmlTextWriterWriteFormatString (xo, "%d", g->memsize));
+  XMLERROR (-1, xmlTextWriterEndElement (xo));
+
+  XMLERROR (-1, xmlTextWriterStartElement (xo, BAD_CAST "vcpu"));
+  XMLERROR (-1, xmlTextWriterWriteFormatString (xo, "%d", g->smp));
+  XMLERROR (-1, xmlTextWriterEndElement (xo));
+
+  XMLERROR (-1, xmlTextWriterStartElement (xo, BAD_CAST "clock"));
+  XMLERROR (-1,
+            xmlTextWriterWriteAttribute (xo, BAD_CAST "offset",
+                                         BAD_CAST "utc"));
+  XMLERROR (-1, xmlTextWriterEndElement (xo));
+
+  return 0;
+
+ err:
+  return -1;
+}
+
+/* Boot parameters. */
+static int
+construct_libvirt_xml_boot (guestfs_h *g, xmlTextWriterPtr xo,
+                            const char *kernel, const char *initrd,
+                            size_t appliance_index)
+{
+  char buf[256];
+  char appliance_root[64] = "";
+
+  /* XXX Lots of common code shared with src/launch-appliance.c */
+#if defined(__arm__)
+#define SERIAL_CONSOLE "ttyAMA0"
+#else
+#define SERIAL_CONSOLE "ttyS0"
+#endif
+
+#define LINUX_CMDLINE \
+    "panic=1 "         /* force kernel to panic if daemon exits */ \
+    "console=" SERIAL_CONSOLE " " /* serial console */ \
+    "udevtimeout=600 " /* good for very slow systems (RHBZ#480319) */ \
+    "no_timer_check "  /* fix for RHBZ#502058 */ \
+    "acpi=off "        /* we don't need ACPI, turn it off */ \
+    "printk.time=1 "   /* display timestamp before kernel messages */ \
+    "cgroup_disable=memory " /* saves us about 5 MB of RAM */
+
+  /* Linux kernel command line. */
+  guestfs___drive_name (appliance_index, appliance_root);
+
+  snprintf (buf, sizeof buf,
+            LINUX_CMDLINE
+            "root=/dev/sd%s "   /* (root) */
+            "%s "               /* (selinux) */
+            "%s "               /* (verbose) */
+            "TERM=%s "          /* (TERM environment variable) */
+            "%s",               /* (append) */
+            appliance_root,
+            g->selinux ? "selinux=1 enforcing=0" : "selinux=0",
+            g->verbose ? "guestfs_verbose=1" : "",
+            getenv ("TERM") ? : "linux",
+            g->append ? g->append : "");
+
+  XMLERROR (-1, xmlTextWriterStartElement (xo, BAD_CAST "os"));
+
+  XMLERROR (-1, xmlTextWriterStartElement (xo, BAD_CAST "type"));
+  XMLERROR (-1, xmlTextWriterWriteString (xo, BAD_CAST "hvm"));
+  XMLERROR (-1, xmlTextWriterEndElement (xo));
+
+  XMLERROR (-1, xmlTextWriterStartElement (xo, BAD_CAST "kernel"));
+  XMLERROR (-1, xmlTextWriterWriteString (xo, BAD_CAST kernel));
+  XMLERROR (-1, xmlTextWriterEndElement (xo));
+
+  XMLERROR (-1, xmlTextWriterStartElement (xo, BAD_CAST "initrd"));
+  XMLERROR (-1, xmlTextWriterWriteString (xo, BAD_CAST initrd));
+  XMLERROR (-1, xmlTextWriterEndElement (xo));
+
+  XMLERROR (-1, xmlTextWriterStartElement (xo, BAD_CAST "cmdline"));
+  XMLERROR (-1, xmlTextWriterWriteString (xo, BAD_CAST buf));
+  XMLERROR (-1, xmlTextWriterEndElement (xo));
+
+  XMLERROR (-1, xmlTextWriterEndElement (xo));
+
+  return 0;
+
+ err:
+  return -1;
+}
+
+/* Devices. */
+static int
+construct_libvirt_xml_devices (guestfs_h *g, xmlTextWriterPtr xo,
+                               const char *appliance, const char *guestfsd_sock,
+                               size_t appliance_index)
+{
+  struct drive *drv = g->drives;
+  size_t drv_index = 0;
+
+  XMLERROR (-1, xmlTextWriterStartElement (xo, BAD_CAST "devices"));
+
+  /* virtio-scsi controller. */
+  XMLERROR (-1, xmlTextWriterStartElement (xo, BAD_CAST "controller"));
+  XMLERROR (-1,
+            xmlTextWriterWriteAttribute (xo, BAD_CAST "type",
+                                         BAD_CAST "scsi"));
+  XMLERROR (-1,
+            xmlTextWriterWriteAttribute (xo, BAD_CAST "index",
+                                         BAD_CAST "0"));
+  XMLERROR (-1,
+            xmlTextWriterWriteAttribute (xo, BAD_CAST "model",
+                                         BAD_CAST "virtio-scsi"));
+  XMLERROR (-1, xmlTextWriterEndElement (xo));
+
+  /* Disks. */
+  while (drv != NULL) {
+    if (construct_libvirt_xml_disk (g, xo, drv, drv_index) == -1)
+      goto err;
+    drv = drv->next;
+    drv_index++;
+  }
+
+  /* Appliance disk. */
+  if (construct_libvirt_xml_appliance (g, xo, appliance, appliance_index) == -1)
+    goto err;
+
+  /* virtio-serial */
+  XMLERROR (-1, xmlTextWriterStartElement (xo, BAD_CAST "channel"));
+  XMLERROR (-1,
+            xmlTextWriterWriteAttribute (xo, BAD_CAST "type",
+                                         BAD_CAST "unix"));
+  XMLERROR (-1, xmlTextWriterStartElement (xo, BAD_CAST "source"));
+  XMLERROR (-1,
+            xmlTextWriterWriteAttribute (xo, BAD_CAST "mode",
+                                         BAD_CAST "bind"));
+  XMLERROR (-1,
+            xmlTextWriterWriteAttribute (xo, BAD_CAST "path",
+                                         BAD_CAST guestfsd_sock));
+  XMLERROR (-1, xmlTextWriterEndElement (xo));
+  XMLERROR (-1, xmlTextWriterStartElement (xo, BAD_CAST "target"));
+  XMLERROR (-1,
+            xmlTextWriterWriteAttribute (xo, BAD_CAST "type",
+                                         BAD_CAST "virtio"));
+  XMLERROR (-1,
+            xmlTextWriterWriteAttribute (xo, BAD_CAST "name",
+                                         BAD_CAST "org.libguestfs.channel.0"));
+  XMLERROR (-1, xmlTextWriterEndElement (xo));
+  XMLERROR (-1, xmlTextWriterEndElement (xo));
+
+  XMLERROR (-1, xmlTextWriterEndElement (xo));
+
+  return 0;
+
+ err:
+  return -1;
+}
+
+static int
+construct_libvirt_xml_disk (guestfs_h *g, xmlTextWriterPtr xo,
+                            struct drive *drv, size_t drv_index)
+{
+  char drive_name[64] = "sd";
+  char scsi_target[64];
+  char *path = NULL;
+
+  guestfs___drive_name (drv_index, &drive_name[2]);
+  snprintf (scsi_target, sizeof scsi_target, "%zu", drv_index);
+
+  /* Drive path must be absolute for libvirt. */
+  path = realpath (drv->path, NULL);
+  if (path == NULL) {
+    perrorf (g, "realpath: could not convert '%s' to absolute path", drv->path);
+    goto err;
+  }
+
+  XMLERROR (-1, xmlTextWriterStartElement (xo, BAD_CAST "disk"));
+  XMLERROR (-1,
+            xmlTextWriterWriteAttribute (xo, BAD_CAST "type",
+                                         BAD_CAST "file"));
+  XMLERROR (-1,
+            xmlTextWriterWriteAttribute (xo, BAD_CAST "device",
+                                         BAD_CAST "disk"));
+
+  XMLERROR (-1, xmlTextWriterStartElement (xo, BAD_CAST "source"));
+  XMLERROR (-1,
+            xmlTextWriterWriteAttribute (xo, BAD_CAST "file",
+                                         BAD_CAST path));
+  XMLERROR (-1, xmlTextWriterEndElement (xo));
+
+  XMLERROR (-1, xmlTextWriterStartElement (xo, BAD_CAST "target"));
+  XMLERROR (-1,
+            xmlTextWriterWriteAttribute (xo, BAD_CAST "dev",
+                                         BAD_CAST drive_name));
+  XMLERROR (-1,
+            xmlTextWriterWriteAttribute (xo, BAD_CAST "bus",
+                                         BAD_CAST "scsi"));
+  XMLERROR (-1, xmlTextWriterEndElement (xo));
+
+  XMLERROR (-1, xmlTextWriterStartElement (xo, BAD_CAST "driver"));
+  XMLERROR (-1,
+            xmlTextWriterWriteAttribute (xo, BAD_CAST "name",
+                                         BAD_CAST "qemu"));
+  if (drv->format) {
+    XMLERROR (-1,
+              xmlTextWriterWriteAttribute (xo, BAD_CAST "format",
+                                           BAD_CAST drv->format));
+  }
+  if (drv->use_cache_none) {
+    XMLERROR (-1,
+              xmlTextWriterWriteAttribute (xo, BAD_CAST "cache",
+                                           BAD_CAST "none"));
+  }
+  XMLERROR (-1, xmlTextWriterEndElement (xo));
+
+  XMLERROR (-1, xmlTextWriterStartElement (xo, BAD_CAST "address"));
+  XMLERROR (-1,
+            xmlTextWriterWriteAttribute (xo, BAD_CAST "type",
+                                         BAD_CAST "drive"));
+  XMLERROR (-1,
+            xmlTextWriterWriteAttribute (xo, BAD_CAST "controller",
+                                         BAD_CAST "0"));
+  XMLERROR (-1,
+            xmlTextWriterWriteAttribute (xo, BAD_CAST "bus",
+                                         BAD_CAST "0"));
+  XMLERROR (-1,
+            xmlTextWriterWriteAttribute (xo, BAD_CAST "target",
+                                         BAD_CAST scsi_target));
+  XMLERROR (-1,
+            xmlTextWriterWriteAttribute (xo, BAD_CAST "unit",
+                                         BAD_CAST "0"));
+  XMLERROR (-1, xmlTextWriterEndElement (xo));
+
+  if (drv->readonly) {
+    XMLERROR (-1, xmlTextWriterStartElement (xo,
BAD_CAST "readonly")); + XMLERROR (-1, xmlTextWriterEndElement (xo)); + } + + XMLERROR (-1, xmlTextWriterEndElement (xo)); + + free (path); + return 0; + + err: + free (path); + return -1; +} + +static int +construct_libvirt_xml_appliance (guestfs_h *g, xmlTextWriterPtr xo, + const char *appliance, size_t drv_index) +{ + char drive_name[64] = "sd"; + char scsi_target[64]; + + guestfs___drive_name (drv_index, &drive_name[2]); + snprintf (scsi_target, sizeof scsi_target, "%zu", drv_index); + + XMLERROR (-1, xmlTextWriterStartElement (xo, BAD_CAST "disk")); + XMLERROR (-1, + xmlTextWriterWriteAttribute (xo, BAD_CAST "type", + BAD_CAST "file")); + XMLERROR (-1, + xmlTextWriterWriteAttribute (xo, BAD_CAST "device", + BAD_CAST "disk")); + + XMLERROR (-1, xmlTextWriterStartElement (xo, BAD_CAST "source")); + XMLERROR (-1, + xmlTextWriterWriteAttribute (xo, BAD_CAST "file", + BAD_CAST appliance)); + XMLERROR (-1, xmlTextWriterEndElement (xo)); + + XMLERROR (-1, xmlTextWriterStartElement (xo, BAD_CAST "target")); + XMLERROR (-1, + xmlTextWriterWriteAttribute (xo, BAD_CAST "dev", + BAD_CAST drive_name)); + XMLERROR (-1, + xmlTextWriterWriteAttribute (xo, BAD_CAST "bus", + BAD_CAST "scsi")); + XMLERROR (-1, xmlTextWriterEndElement (xo)); + + XMLERROR (-1, xmlTextWriterStartElement (xo, BAD_CAST "driver")); + XMLERROR (-1, + xmlTextWriterWriteAttribute (xo, BAD_CAST "name", + BAD_CAST "qemu")); + XMLERROR (-1, + xmlTextWriterWriteAttribute (xo, BAD_CAST "format", + BAD_CAST "raw")); + XMLERROR (-1, + xmlTextWriterWriteAttribute (xo, BAD_CAST "cache", + BAD_CAST "unsafe")); + XMLERROR (-1, xmlTextWriterEndElement (xo)); + + XMLERROR (-1, xmlTextWriterStartElement (xo, BAD_CAST "address")); + XMLERROR (-1, + xmlTextWriterWriteAttribute (xo, BAD_CAST "type", + BAD_CAST "drive")); + XMLERROR (-1, + xmlTextWriterWriteAttribute (xo, BAD_CAST "controller", + BAD_CAST "0")); + XMLERROR (-1, + xmlTextWriterWriteAttribute (xo, BAD_CAST "bus", + BAD_CAST "0")); + XMLERROR (-1, + 
xmlTextWriterWriteAttribute (xo, BAD_CAST "target", + BAD_CAST scsi_target)); + XMLERROR (-1, + xmlTextWriterWriteAttribute (xo, BAD_CAST "unit", + BAD_CAST "0")); + XMLERROR (-1, xmlTextWriterEndElement (xo)); + + /* We'd like to do this, but it's not supported by libvirt. + * See construct_libvirt_xml_qemu_cmdline for the workaround. + * + * XMLERROR (-1, xmlTextWriterStartElement (xo, BAD_CAST "transient")); + * XMLERROR (-1, xmlTextWriterEndElement (xo)); + */ + + XMLERROR (-1, xmlTextWriterEndElement (xo)); + + return 0; + + err: + return -1; +} + +/* Workaround because libvirt can't do snapshot=on yet. Idea inspired + * by Stefan Hajnoczi's post here: + * http://blog.vmsplice.net/2011/04/how-to-pass-qemu-command-line-options.html + */ +static int +construct_libvirt_xml_qemu_cmdline (guestfs_h *g, xmlTextWriterPtr xo, + size_t appliance_index) +{ + char attr[256]; + + snprintf (attr, sizeof attr, + "drive.drive-scsi0-0-%zu-0.snapshot=on", appliance_index); + + XMLERROR (-1, xmlTextWriterStartElement (xo, BAD_CAST "qemu:commandline")); + + XMLERROR (-1, xmlTextWriterStartElement (xo, BAD_CAST "qemu:arg")); + XMLERROR (-1, + xmlTextWriterWriteAttribute (xo, BAD_CAST "value", + BAD_CAST "-set")); + XMLERROR (-1, xmlTextWriterEndElement (xo)); + + XMLERROR (-1, xmlTextWriterStartElement (xo, BAD_CAST "qemu:arg")); + XMLERROR (-1, + xmlTextWriterWriteAttribute (xo, BAD_CAST "value", + BAD_CAST attr)); + XMLERROR (-1, xmlTextWriterEndElement (xo)); + + XMLERROR (-1, xmlTextWriterEndElement (xo)); + + return 0; + + err: + return -1; +} + +static int +shutdown_libvirt (guestfs_h *g) +{ + virConnectPtr conn = g->virt.connv; + virDomainPtr dom = g->virt.domv; + int ret = 0; + + assert (conn != NULL); + assert (dom != NULL); + + /* XXX Need to be graceful? 
*/ + if (virDomainDestroyFlags (dom, 0) == -1) { + libvirt_error (g, _("could not destroy libvirt domain")); + ret = -1; + } + virDomainFree (dom); + virConnectClose (conn); + + g->virt.connv = g->virt.domv = NULL; + + return ret; +} + +/* Wrapper around error() which produces better errors for + * libvirt functions. + */ +static void +libvirt_error (guestfs_h *g, const char *fs, ...) +{ + va_list args; + char *msg; + int len; + virErrorPtr err; + + va_start (args, fs); + len = vasprintf (&msg, fs, args); + va_end (args); + + if (len < 0) + msg = safe_asprintf (g, + _("%s: internal error forming error message"), + __func__); + + /* In all recent libvirt, this retrieves the thread-local error. */ + err = virGetLastError (); + + error (g, "%s: %s [code=%d domain=%d]", + msg, err->message, err->code, err->domain); + + /* NB. 'err' must not be freed! */ + free (msg); +} + +#else /* no libvirt or libxml2 at compile time */ + +#define NOT_IMPL(r) \ + error (g, _("libvirt attach-method is not available since this version of libguestfs was compiled without libvirt or libxml2")); \ + return r + +static int +launch_libvirt (guestfs_h *g, const char *arg) +{ + NOT_IMPL (-1); +} + +static int +shutdown_libvirt (guestfs_h *g) +{ + NOT_IMPL (-1); +} + +#endif /* no libvirt or libxml2 at compile time */ + +struct attach_ops attach_ops_libvirt = { + .launch = launch_libvirt, + .shutdown = shutdown_libvirt, +}; diff --git a/src/launch.c b/src/launch.c index 7c403ab..02441db 100644 --- a/src/launch.c +++ b/src/launch.c @@ -326,8 +326,8 @@ guestfs__launch (guestfs_h *g) break; case ATTACH_METHOD_LIBVIRT: - error (g, _("libvirt attach method is not yet supported")); - return -1; + g->attach_ops = &attach_ops_libvirt; + break; case ATTACH_METHOD_UNIX: g->attach_ops = &attach_ops_unix; -- 1.7.10.4

On Sat, Jul 21, 2012 at 08:20:49PM +0100, Richard W.M. Jones wrote:
+
+  /* Connect to libvirt, get capabilities. */
+  /* XXX Support libvirt authentication in the future. */
+  conn = virConnectOpen (libvirt_uri);
+  if (!conn) {
+    error (g, _("could not connect to libvirt: URI: %s"),
+           libvirt_uri ? : "NULL");
+    goto cleanup;
+  }
Authentication is a tricky one. If your STDIN is a tty, then you could use OpenAuth() passing in virConnectAuthPtrDefault, to get the default text-mode auth prompt. If you are running in a GUI app though, there needs to be some way to graphically collect credentials, which means libguestfs would probably need to expose a callback API kinda like the libvirt auth callbacks.

Alternatively you could just say to hell with it, and require the application to pass in a pre-opened virConnectPtr that you use. This is actually quite desirable, since it will avoid the user having to authenticate multiple times when the app already has an open connection.

Daniel

-- 
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
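Daniel's first suggestion (text-mode prompting when stdin is a tty) can be sketched roughly as below. This is a minimal illustration of the libvirt virConnectOpenAuth() API with the built-in default auth handler, not code from the posted patch; the helper name and URI handling are assumptions.

```c
#include <stdio.h>
#include <unistd.h>
#include <libvirt/libvirt.h>

/* Hypothetical helper: open a libvirt connection, letting libvirt's
 * built-in console callback (virConnectAuthPtrDefault) prompt for any
 * credentials on the terminal.  A GUI application would instead pass
 * its own virConnectAuth with graphical credential callbacks.
 */
static virConnectPtr
connect_with_default_auth (const char *uri)
{
  if (isatty (STDIN_FILENO))
    return virConnectOpenAuth (uri, virConnectAuthPtrDefault, 0);
  /* No tty: fall back to an unauthenticated open attempt. */
  return virConnectOpen (uri);
}
```

The alternative Daniel prefers, accepting a pre-opened virConnectPtr from the caller, avoids this question entirely but runs into the binding-lifetime problems discussed later in the thread.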

On Sat, Jul 21, 2012 at 08:20:45PM +0100, Richard W.M. Jones wrote:
Some questions:
Another question ...
<channel type="unix">
  <source mode="connect" path="/home/rjones/d/libguestfs/libguestfsSSg3Kl/guestfsd.sock"/>
  <target type="virtio" name="org.libguestfs.channel.0"/>
</channel>
This clause doesn't work when libguestfs/qemu runs as root. As far as I can tell there is a combination of three factors working against it:

(1) libvirt (when run as root) runs qemu as qemu.qemu. Since this user didn't have write access to the socket, it fails. I fixed this by chowning the socket.

(2) Regular Unix permissions didn't give access to my home directory to non-root/non-me users. Fixed those permissions. This won't be a problem when we're using /tmp normally, but will break tests because we like to set $TMPDIR.

(3) SELinux/sVirt prevents qemu connecting to this socket. This one is a pain. You'd think that if a socket is specified in the libvirt XML then sVirt should allow access to it.

How to solve?

Rich.

-- 
Richard Jones, Virtualization Group, Red Hat http://people.redhat.com/~rjones
virt-p2v converts physical machines to virtual machines. Boot with a live CD or over the network (PXE) and turn machines into Xen guests. http://et.redhat.com/~rjones/virt-p2v

On Sat, Jul 21, 2012 at 09:43:45PM +0100, Richard W.M. Jones wrote:
(3) SELinux/sVirt prevents qemu connecting to this socket. This one is a pain. You'd think that if a socket is specified in the libvirt XML then sVirt should allow access to it.
The AVCs are:

type=AVC msg=audit(1342903120.938:9403): avc: denied { write } for pid=21757 comm="qemu-kvm" name="guestfsd.sock" dev="dm-4" ino=939761 scontext=system_u:system_r:svirt_t:s0:c411,c865 tcontext=unconfined_u:object_r:user_home_t:s0 tclass=sock_file

type=AVC msg=audit(1342903120.938:9403): avc: denied { connectto } for pid=21757 comm="qemu-kvm" path="/home/rjones/d/libguestfs/libguestfsDDwHEF/guestfsd.sock" scontext=system_u:system_r:svirt_t:s0:c411,c865 tcontext=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 tclass=unix_stream_socket

audit2allow suggests:

#============= svirt_t ==============
allow svirt_t unconfined_t:unix_stream_socket connectto;
allow svirt_t user_home_t:sock_file write;

I might be able to solve this by labelling the socket, but I'm not clear what label to use. Also that won't work if the main process is non-root but has permissions to access the global libvirtd - we'd really need libvirtd to do the labelling.

Rich.

On Sat, Jul 21, 2012 at 09:43:45PM +0100, Richard W.M. Jones wrote:
On Sat, Jul 21, 2012 at 08:20:45PM +0100, Richard W.M. Jones wrote:
Some questions:
Another question ...
<channel type="unix">
  <source mode="connect" path="/home/rjones/d/libguestfs/libguestfsSSg3Kl/guestfsd.sock"/>
  <target type="virtio" name="org.libguestfs.channel.0"/>
</channel>
This clause doesn't work when libguestfs/qemu runs as root. As far as I can tell there are a combination of three factors working against it:
(1) libvirt (when run as root) runs qemu as qemu.qemu. Since this user didn't have write access to the socket, it fails. I fixed this by chowning the socket.
What libvirt URI are you using? If libguestfs is running as non-root, then I expect you'd want to use qemu:///session. Thus all files would be owned by the matching user ID, and I'd suggest $HOME/.libguestfs/qemu for the directory to store the sockets in.

If libguestfs is running as root, then use qemu:///system and a socket under /var/lib/libguestfs/qemu/
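The URI policy Daniel describes boils down to a one-line decision on the effective user ID. A minimal sketch (the helper name is hypothetical; in libguestfs the result would be passed to virConnectOpen()):

```c
#include <assert.h>
#include <string.h>

/* Hypothetical helper: pick the libvirt URI Daniel suggests based on
 * the effective UID (pass geteuid() in real code).  Root drives the
 * privileged system daemon; everyone else gets a per-user session
 * daemon, so sockets and logs land under the user's own ownership.
 */
static const char *
choose_qemu_uri (int euid)
{
  return euid == 0 ? "qemu:///system" : "qemu:///session";
}
```

The point of making this explicit (rather than passing NULL) is picked up again later in the thread: NULL lets a local default URI override the choice.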
(2) Regular Unix permissions didn't give access to my home directory by non-root/non-me users. Fixed those permissions. This won't be a problem when we're using /tmp normally, but will break tests because we like to set $TMPDIR.
Again, see above.
(3) SELinux/sVirt prevents qemu connecting to this socket. This one is a pain. You'd think that if a socket is specified in the libvirt XML then sVirt should allow access to it.
You could either use the same directory that libvirt uses for the main QEMU monitor socket, or preferably define standard directories for libguestfs and have them added to the SELinux policy.

Regards,
Daniel

On Mon, Jul 23, 2012 at 10:45:21AM +0100, Daniel P. Berrange wrote:
On Sat, Jul 21, 2012 at 09:43:45PM +0100, Richard W.M. Jones wrote:
On Sat, Jul 21, 2012 at 08:20:45PM +0100, Richard W.M. Jones wrote:
Some questions:
Another question ...
<channel type="unix">
  <source mode="connect" path="/home/rjones/d/libguestfs/libguestfsSSg3Kl/guestfsd.sock"/>
  <target type="virtio" name="org.libguestfs.channel.0"/>
</channel>
This clause doesn't work when libguestfs/qemu runs as root. As far as I can tell there are a combination of three factors working against it:
(1) libvirt (when run as root) runs qemu as qemu.qemu. Since this user didn't have write access to the socket, it fails. I fixed this by chowning the socket.
What libvirt URI are you using ? If libguest is running as non-root, then I expect you'd want to use qemu:///session.
It's using NULL and expecting libvirt to choose the appropriate connection URI, which does appear to work.
Thus all files would be owned by the matching user ID, and I'd sugest $HOME/.libguestfs/qemu for the directory to store the sockets in.
If libguestfs is running as root, then use qemu:///system and a socket under /var/lib/libguestfs/qemu/
This is fairly sucky. We already make a temporary directory (a randomly named subdirectory of $TMPDIR) and that seems the appropriate place for small temporary files like sockets, especially since the temp cleaner will clean them up properly if we don't.
You could either use the same directory that libvirt uses for the main QEMU monitor socket, or preferrably define standard directories for libguestfs and have them added to the SELinux policy
So just so I'm completely clear about what's happening:

(1) SELinux labels are chosen based on the parent directory.

(2) By having a standard named parent directory (even $HOME/.libguestfs) SELinux will assign the right label to a socket in this directory, even if libguestfs is not running as root.

(3) libguestfs should not be setting labels on anything itself.

(4) If a non-root user has never run libguestfs before, then merely the act of libguestfs doing mkdir("$HOME/.libguestfs") [as non-root] will ensure that any sockets in this directory are labelled correctly.

Is this right?

Rich.

On Mon, Jul 23, 2012 at 11:02:41AM +0100, Richard W.M. Jones wrote:
On Mon, Jul 23, 2012 at 10:45:21AM +0100, Daniel P. Berrange wrote:
On Sat, Jul 21, 2012 at 09:43:45PM +0100, Richard W.M. Jones wrote:
On Sat, Jul 21, 2012 at 08:20:45PM +0100, Richard W.M. Jones wrote:
Some questions:
Another question ...
<channel type="unix"> <source mode="connect" path="/home/rjones/d/libguestfs/libguestfsSSg3Kl/guestfsd.sock"/> <target type="virtio" name="org.libguestfs.channel.0"/> </channel>
This clause doesn't work when libguestfs/qemu runs as root. As far as I can tell there are a combination of three factors working against it:
(1) libvirt (when run as root) runs qemu as qemu.qemu. Since this user didn't have write access to the socket, it fails. I fixed this by chowning the socket.
What libvirt URI are you using ? If libguest is running as non-root, then I expect you'd want to use qemu:///session.
It's using NULL and expecting libvirt to choose the appropriate connection URI, which does appear to work.
Apps should only rely on NULL if they are able to work with any possible hypervisor. If you have specific requirements for QEMU you should always request QEMU explicitly. A local sysadmin may well have set a different default URI using an env variable or $HOME/.libvirt/libvirt.conf, which will give you an unexpected choice.
Thus all files would be owned by the matching user ID, and I'd sugest $HOME/.libguestfs/qemu for the directory to store the sockets in.
If libguestfs is running as root, then use qemu:///system and a socket under /var/lib/libguestfs/qemu/
This is fairly sucky. We already make a temporary directory (a randomly named subdirectory of $TMPDIR) and that seems the appropriate place for small temporary files like sockets, especially since the temp cleaner will clean them up properly if we don't.
You could either use the same directory that libvirt uses for the main QEMU monitor socket, or preferrably define standard directories for libguestfs and have them added to the SELinux policy
So just so I'm completely clear about what's happening:
(1) SELinux labels are chosen based on the parent directory.
Yep
(2) By having a standard named parent directory (even $HOME/.libguestfs) SELinux will assign the right label to a socket in this directory, even if libguestfs is not running as root.
Yep, if that dir is listed in the policy.
(3) libguestfs should not be setting labels on anything itself.
Yes & no, see next answer
(4) If a non-root user has never run libguestfs before, then merely the act of libguestfs doing mkdir("$HOME/.libguestfs") [as non-root] will ensure that any sockets in this directory are labelled correctly.
For directories outside $HOME, the correct context is normally expected to be set by RPM during install. For $HOME I think you need to invoke "restorecon $HOME/.libguestfs" after creation, although IIRC this is no longer needed on rawhide.

Daniel

On Mon, Jul 23, 2012 at 11:21:37AM +0100, Daniel P. Berrange wrote:
On Mon, Jul 23, 2012 at 11:02:41AM +0100, Richard W.M. Jones wrote:
On Mon, Jul 23, 2012 at 10:45:21AM +0100, Daniel P. Berrange wrote:
On Sat, Jul 21, 2012 at 09:43:45PM +0100, Richard W.M. Jones wrote:
On Sat, Jul 21, 2012 at 08:20:45PM +0100, Richard W.M. Jones wrote:
Some questions:
Another question ...
<channel type="unix">
  <source mode="connect" path="/home/rjones/d/libguestfs/libguestfsSSg3Kl/guestfsd.sock"/>
  <target type="virtio" name="org.libguestfs.channel.0"/>
</channel>
This clause doesn't work when libguestfs/qemu runs as root. As far as I can tell there are a combination of three factors working against it:
(1) libvirt (when run as root) runs qemu as qemu.qemu. Since this user didn't have write access to the socket, it fails. I fixed this by chowning the socket.
What libvirt URI are you using ? If libguest is running as non-root, then I expect you'd want to use qemu:///session.
It's using NULL and expecting libvirt to choose the appropriate connection URI, which does appear to work.
Apps should only rely on NULL, if they are able to work with any possible hypervisor. If you have specific requirements for QEMU you should always request QEMU explicitly. A local sysadmin may well have set a different default URI using an env variable or $HOME/.libvirt/libvirt.conf which will give you an unexpected choice.
Thus all files would be owned by the matching user ID, and I'd sugest $HOME/.libguestfs/qemu for the directory to store the sockets in.
If libguestfs is running as root, then use qemu:///system and a socket under /var/lib/libguestfs/qemu/
This is fairly sucky. We already make a temporary directory (a randomly named subdirectory of $TMPDIR) and that seems the appropriate place for small temporary files like sockets, especially since the temp cleaner will clean them up properly if we don't.
You could either use the same directory that libvirt uses for the main QEMU monitor socket, or preferrably define standard directories for libguestfs and have them added to the SELinux policy
So just so I'm completely clear about what's happening:
(1) SELinux labels are chosen based on the parent directory.
Yep
(2) By having a standard named parent directory (even $HOME/.libguestfs) SELinux will assign the right label to a socket in this directory, even if libguestfs is not running as root.
Yep, if that dir is listed in the policy.
(3) libguestfs should not be setting labels on anything itself.
Yes & no, see next answer
(4) If a non-root user has never run libguestfs before, then merely the act of libguestfs doing mkdir("$HOME/.libguestfs") [as non-root] will ensure that any sockets in this directory are labelled correctly.
For directories outside $HOME, the correct context is normally expected to be set by RPM during install. For $HOME I think you need to invoke "restorecon $HOME/.libguestfs" after creation, although IIRC this is no longer needed on rawhide.
An alternative that might work is to have libguestfs run 'chcon()' on the temporary directory it creates to give it the 'qemu_var_run_t' type.

Daniel
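Programmatically, the chcon() idea maps to libselinux's setfilecon(3). The sketch below is an assumption about how libguestfs might do this, not code from the patch; the exact context string (here built around the 'qemu_var_run_t' type Daniel mentions) would need checking against the installed policy.

```c
#include <stdio.h>
#include <selinux/selinux.h>

/* Hypothetical helper: relabel the temporary socket directory so the
 * svirt-confined qemu process is allowed to connect to sockets inside
 * it.  setfilecon() is the programmatic equivalent of chcon(1).
 * Returns 0 on success, -1 on failure (e.g. SELinux disabled or the
 * type not defined in the loaded policy).
 */
static int
label_socket_dir (const char *dir)
{
  return setfilecon (dir, "system_u:object_r:qemu_var_run_t:s0");
}
```

As the thread notes, labelling done by an unprivileged libguestfs may not be permitted; having libvirtd (which already does sVirt labelling for disks) label the socket would be more robust.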

On Mon, Jul 23, 2012 at 11:21:37AM +0100, Daniel P. Berrange wrote:
On Mon, Jul 23, 2012 at 11:02:41AM +0100, Richard W.M. Jones wrote:
It's using NULL and expecting libvirt to choose the appropriate connection URI, which does appear to work.
Apps should only rely on NULL, if they are able to work with any possible hypervisor. If you have specific requirements for QEMU you should always request QEMU explicitly. A local sysadmin may well have set a different default URI using an env variable or $HOME/.libvirt/libvirt.conf which will give you an unexpected choice.
Currently the administrator can set the attach-method to one of:

"appliance" (default): run qemu directly
"libvirt": run libvirt with conn URI = NULL
"libvirt:URI": run libvirt with conn URI = "URI"

We could make "libvirt" mean "choose qemu:///session or qemu:///system". Then if they want NULL, we could have "libvirt:" (colon followed by empty string) or "libvirt:NULL" or something else.
Alternatively you could just say to hell with it, and require the application to pass in a pre-opened virConnectPtr that you use. This is actually quite desirable, since it will avoid the user having to authenticate multiple times, when the app already has an open connection
Unfortunately this is hard to implement. Specifically we cannot generally convert a language-specific object (eg. a Perl Sys::Virt object) into a virConnectPtr. This was discussed before a few years back when we wanted to pass virDomainPtr to a libguestfs call. There is even non-working support in the generator for this ...

In true garbage-collected languages it's even harder to get it right. How would you stop the connection object 'conn' from getting collected and closed too early in OCaml code such as:

  let conn = Libvirt.Connect.connect_readonly () in
  g#attach_libvirt conn;
  (* as far as OCaml is concerned, conn is unreferenced from here onwards *)
  g#launch ();
  g#do_lots_of_stuff ();
  g#close ()

The language bindings would have to model the lifetime of every object that could potentially be attached to the libguestfs handle.

Rich.

On Sat, Jul 21, 2012 at 08:20:45PM +0100, Richard W.M. Jones wrote:
Some questions:
How do you set the IP address for userspace networking? AFAICT from the code this is not possible. ie. the net= parameter:

  -netdev user,id=usernet,net=169.254.0.0/16 \
  -device virtio-net-pci,netdev=usernet

Rich.

On Sun, Jul 22, 2012 at 09:33:46AM +0100, Richard W.M. Jones wrote:
On Sat, Jul 21, 2012 at 08:20:45PM +0100, Richard W.M. Jones wrote:
Some questions:
How do you set the IP address for userspace networking? AFAICT from the code this is not possible.
ie: the net= parameter:
-netdev user,id=usernet,net=169.254.0.0/16 \
-device virtio-net-pci,netdev=usernet
Yeah, that's not expressed in the XML yet. This has been requested in the past by the OpenVZ developers too, and could potentially be useful to LXC. We should probably define some syntax for network config in the XML.

Regards,
Daniel

On Sat, Jul 21, 2012 at 08:20:45PM +0100, Richard W.M. Jones wrote:
This preliminary patch series adds a libvirt backend to libguestfs. It's for review only because although it launches the guest OK, there are some missing features that need to be implemented.
I did some appliance boot timings of libvirt vs direct qemu boot from libguestfs, and essentially libvirt makes no measurable difference, which is all good news.

I'm going to push my current libvirt backend upstream in libguestfs 1.19.23. It includes a number of fixes over the posted patches.

Rich.

On Sun, Jul 22, 2012 at 09:43:04AM +0100, Richard W.M. Jones wrote:
On Sat, Jul 21, 2012 at 08:20:45PM +0100, Richard W.M. Jones wrote:
This preliminary patch series adds a libvirt backend to libguestfs. It's for review only because although it launches the guest OK, there are some missing features that need to be implemented.
I did some appliance boot timings of libvirt vs direct qemu boot from libguestfs, and essentially libvirt makes no measurable difference, which is all good news.
Oh, I'm a little surprised at that. When I switched my KVM sandbox hacks from a quick proof of concept over to using libvirt, I saw about a 300-500ms overhead from libvirt in the startup process. My presumption is that this overhead is primarily from libvirt invoking 'qemu -help' several times to figure out supported options. When we switch to caching that info, though, I'd expect there to be little measurable impact from libvirt.

Daniel

On Mon, Jul 23, 2012 at 10:55:36AM +0100, Daniel P. Berrange wrote:
On Sun, Jul 22, 2012 at 09:43:04AM +0100, Richard W.M. Jones wrote:
On Sat, Jul 21, 2012 at 08:20:45PM +0100, Richard W.M. Jones wrote:
This preliminary patch series adds a libvirt backend to libguestfs. It's for review only because although it launches the guest OK, there are some missing features that need to be implemented.
I did some appliance boot timings of libvirt vs direct qemu boot from libguestfs, and essentially libvirt makes no measurable difference, which is all good news.
Oh, I'm a little surprised at that. When I switched my KVM sandbox hacks from a quick proof of concept, over to using libvirt, I saw about a 300-500ms overhead from libvirt in the startup process. My presumption is that this overhead is primarily from libvirt invoking qemu -help several times to figure out supported options. When we switch to caching that info though, I'd expect there to be little measurable impact from libvirt
libguestfs also runs 'qemu -help', so I guess there is no difference there :-)

Rich.

On Sat, Jul 21, 2012 at 08:20:45PM +0100, Richard W.M. Jones wrote:
This preliminary patch series adds a libvirt backend to libguestfs. It's for review only because although it launches the guest OK, there are some missing features that need to be implemented.
The meat of the patch is in part 4/4.
To save you the trouble of interpreting libxml2 fragments, an example of the generated XML and the corresponding qemu command line are attached below. Note the hack required to work around lack of support for '-drive [...]snapshot=on'
Some questions:
- I've tried to use the minimum set of XML possible to create the guest, leaving libvirt to fill out as much as possible. How does this XML look?
- The <name> is mandatory, and I generate one randomly. Is this a good idea? I notice that my $HOME/.libvirt directory fills up with random files. Really I'd like libvirt to generate a random name and just deal with the logfiles.
That's a good question - I have the same issue with libvirt-sandbox and filling up with log files.
- How do we query libvirt to find out if qemu supports virtio-scsi?
Information about supported devices is not available via the API. You'd have to attempt the create and handle VIR_ERR_CONFIG_UNSUPPORTED.
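A minimal C sketch of that try-and-fall-back approach (the two XML strings and the fallback policy are assumptions for illustration; real code would also need virSetErrorFunc etc.):

```c
#include <stdio.h>
#include <libvirt/libvirt.h>
#include <libvirt/virterror.h>

/* Attempt to create the domain with virtio-scsi in the XML.  If the
 * qemu binary doesn't support it, libvirt fails the create with
 * VIR_ERR_CONFIG_UNSUPPORTED, and we retry with a fallback config.
 */
static virDomainPtr
create_with_fallback (virConnectPtr conn,
                      const char *xml_virtio_scsi,
                      const char *xml_fallback)
{
  virDomainPtr dom = virDomainCreateXML (conn, xml_virtio_scsi, 0);
  if (dom == NULL) {
    virErrorPtr err = virGetLastError ();
    if (err && err->code == VIR_ERR_CONFIG_UNSUPPORTED) {
      fprintf (stderr, "virtio-scsi unsupported, falling back\n");
      dom = virDomainCreateXML (conn, xml_fallback, 0);
    }
  }
  return dom;
}
```

The obvious downside is that a failed create is relatively expensive, so you'd want to remember the answer for the lifetime of the handle.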
- Will <source file> work if the source is a host device?
You might be lucky, but for correctness you should use source type=block if the source is a host device. This will almost certainly make a difference to the way disk locking is performed in the future.
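For example, the first disk above, if backed by a host device, would look something like this (/dev/sdc is a stand-in path):

```xml
<disk type="block" device="disk">
  <source dev="/dev/sdc"/>
  <target dev="sda" bus="scsi"/>
  <driver name="qemu" format="raw" cache="none"/>
</disk>
```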
- Since when has <memory unit> attribute been available? For example, is it available in RHEL 6?
Per http://libvirt.org/formatdomain.html#elementsMemoryAllocation, "unit since 0.9.11", so no, not in RHEL 6.
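On libvirt older than 0.9.11 you'd have to express the value in the default unit, KiB, so the 500 MiB above would become:

```xml
<memory>512000</memory>
<currentMemory>512000</currentMemory>
```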
- I'm using type="kvm" and I've only tested this on baremetal, but I don't want to force KVM. If only software emulation is available, I'd like to use it.
You have to query the capabilities to see if KVM is present; otherwise use type=qemu.
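The relevant part of the capabilities XML (from virConnectGetCapabilities) is the <domain> element under each <guest>/<arch>; if it contains something like the following, KVM is available for that arch (the emulator path will vary):

```xml
<guest>
  <os_type>hvm</os_type>
  <arch name="x86_64">
    <domain type="qemu"/>
    <domain type="kvm">
      <emulator>/usr/bin/qemu-kvm</emulator>
    </domain>
  </arch>
</guest>
```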
- Is there an easy way to get -cpu host? Although I have the libvirt capabilities, I'd prefer not to have to parse it if I can avoid that, since libxml2 from C is so arcane.
Yes, <cpu mode='host-model'/> uses the host capabilities to specify a CPU that matches the host verbosely, while <cpu mode='host-passthrough'/> just uses '-cpu host'.
- <source mode> attribute is undocumented.
Rich.
----------------------------------------------------------------------
<?xml version="1.0"?>
<domain type="kvm" xmlns:qemu="http://libvirt.org/schemas/domain/qemu/1.0">
  <name>1dhdpe3sb9ub2vxd</name>
It would be nice to at least prefix this with 'guestfs-XXXX' so admins know what app is responsible for the guest.
  <memory unit="MiB">500</memory>
  <currentMemory unit="MiB">500</currentMemory>
  <vcpu>1</vcpu>
  <clock offset="utc"/>
  <os>
    <type>hvm</type>
    <kernel>/home/rjones/d/libguestfs/.guestfs-500/kernel.3198</kernel>
    <initrd>/home/rjones/d/libguestfs/.guestfs-500/initrd.3198</initrd>
    <cmdline>panic=1 console=ttyS0 udevtimeout=600 no_timer_check acpi=off printk.time=1 cgroup_disable=memory root=/dev/sdb selinux=0 guestfs_verbose=1 TERM=xterm </cmdline>
  </os>
  <devices>
    <controller type="scsi" index="0" model="virtio-scsi"/>
    <disk type="file" device="disk">
      <source file="/home/rjones/d/libguestfs/test1.img"/>
      <target dev="sda" bus="scsi"/>
      <driver name="qemu" format="raw" cache="none"/>
      <address type="drive" controller="0" bus="0" target="0" unit="0"/>
NB you don't need to specify <address> elements unless you actually want to have full control over the controller/bus/target/unit numbering
    </disk>
    <disk type="file" device="disk">
      <source file="/home/rjones/d/libguestfs/.guestfs-500/root.3198"/>
      <target dev="sdb" bus="scsi"/>
      <driver name="qemu" format="raw" cache="unsafe"/>
      <address type="drive" controller="0" bus="0" target="1" unit="0"/>
    </disk>
    <channel type="unix">
      <source mode="bind" path="/home/rjones/d/libguestfs/libguestfsSSg3Kl/guestfsd.sock"/>
NB, current SELinux policy will prevent QEMU creating a socket in this location. You probably want to ask the SELinux folks to add a rule to the policy to allow creation of sockets like $HOME/.libguestfs/qemu/$VMNAME.guestfsd
      <target type="virtio" name="org.libguestfs.channel.0"/>
    </channel>
  </devices>
  <qemu:commandline>
    <qemu:arg value="-set"/>
    <qemu:arg value="drive.drive-scsi0-0-1-0.snapshot=on"/>
  </qemu:commandline>
</domain>
Daniel

-- 
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
Participants (2):
- Daniel P. Berrange
- Richard W.M. Jones