[libvirt] [PATCH v7 0/8] Add basic driver for Parallels Virtuozzo Server

Parallels Cloud Server is a virtualization solution that allows users to simultaneously run multiple virtual machines and containers on the same physical server. More information can be found at http://www.parallels.com/products/pcs/, where a beta version of Parallels Cloud Server can also be downloaded.

Dmitry Guryanov (8):
  parallels: add driver skeleton
  parallels: add functions to list domains and get info
  parallels: implement functions for domain life cycle management
  parallels: get info about serial ports
  parallels: add support of VNC remote display
  parallels: implement virDomainDefineXML operation for existing domains
  parallels: add storage driver
  parallels: implement VM creation

 cfg.mk                            |    1 +
 configure.ac                      |   23 +
 docs/drvparallels.html.in         |   28 +
 include/libvirt/virterror.h       |    2 +-
 libvirt.spec.in                   |    9 +-
 mingw32-libvirt.spec.in           |    6 +
 po/POTFILES.in                    |    4 +
 src/Makefile.am                   |   15 +
 src/conf/domain_conf.c            |    3 +-
 src/conf/domain_conf.h            |    1 +
 src/driver.h                      |    1 +
 src/libvirt.c                     |    9 +
 src/parallels/parallels_driver.c  | 1309 +++++++++++++++++++++++++++++++++
 src/parallels/parallels_driver.h  |   75 ++
 src/parallels/parallels_storage.c | 1456 +++++++++++++++++++++++++++++++++++++
 src/parallels/parallels_utils.c   |  131 ++++
 src/util/virterror.c              |    3 +-
 17 files changed, 3072 insertions(+), 4 deletions(-)
 create mode 100644 docs/drvparallels.html.in
 create mode 100644 src/parallels/parallels_driver.c
 create mode 100644 src/parallels/parallels_driver.h
 create mode 100644 src/parallels/parallels_storage.c
 create mode 100644 src/parallels/parallels_utils.c

Parallels Cloud Server is a virtualization solution that allows users to simultaneously run multiple virtual machines and containers on the same physical server. More information can be found here: http://www.parallels.com/products/pcs/ Also beta version of Parallels Cloud Server can be downloaded there. This first patch adds driver, which can report node info only. Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com> --- changes in v7: * renamed to 'parallels', because first, the product renamed to "Parallels Cloud Server" and there are other products, such as Parallels Workstation for linux, which this driver will be able operate. changes in v6: * Add info about PVS to commit message * fixed issue with POTFILES.in * fixed check in configure * check for prlctl command existence moved to pvsRegister changes in v5: * add me to AUTHORS * fix indent in preprocessor directives in pvs_driver.h * remove unneded include * remove pvs_driver.c from po/POTFILES.in cfg.mk | 1 + configure.ac | 23 ++++ docs/drvparallels.html.in | 28 ++++ include/libvirt/virterror.h | 2 +- libvirt.spec.in | 9 +- mingw32-libvirt.spec.in | 6 + po/POTFILES.in | 1 + src/Makefile.am | 13 ++ src/conf/domain_conf.c | 3 +- src/conf/domain_conf.h | 1 + src/driver.h | 1 + src/libvirt.c | 9 ++ src/parallels/parallels_driver.c | 271 ++++++++++++++++++++++++++++++++++++++ src/parallels/parallels_driver.h | 51 +++++++ src/util/virterror.c | 3 +- 15 files changed, 418 insertions(+), 4 deletions(-) create mode 100644 docs/drvparallels.html.in create mode 100644 src/parallels/parallels_driver.c create mode 100644 src/parallels/parallels_driver.h diff --git a/cfg.mk b/cfg.mk index 7664d5d..7efe6a7 100644 --- a/cfg.mk +++ b/cfg.mk @@ -513,6 +513,7 @@ msg_gen_function += PHYP_ERROR msg_gen_function += VIR_ERROR msg_gen_function += VMX_ERROR msg_gen_function += XENXS_ERROR +msg_gen_function += PARALLELS_ERROR msg_gen_function += eventReportError msg_gen_function += ifaceError msg_gen_function += interfaceReportError diff --git a/configure.ac b/configure.ac index 3da1aa2..6ed7a2f 100644 --- a/configure.ac +++ b/configure.ac @@ -330,6 +330,8 @@ AC_ARG_WITH([esx], AC_HELP_STRING([--with-esx], [add ESX support @<:@default=check@:>@]),[],[with_esx=check]) AC_ARG_WITH([hyperv], AC_HELP_STRING([--with-hyperv], [add Hyper-V support @<:@default=check@:>@]),[],[with_hyperv=check]) +AC_ARG_WITH([parallels], + AC_HELP_STRING([--with-parallels], [add Parallels Virtuozzo Server support @<:@default=check@:>@]),[],[with_parallels=check]) AC_ARG_WITH([test], AC_HELP_STRING([--with-test], [add test driver support @<:@default=yes@:>@]),[],[with_test=yes]) AC_ARG_WITH([remote], @@ -788,6 +790,26 @@ fi AM_CONDITIONAL([WITH_LXC], [test "$with_lxc" = "yes"]) dnl +dnl Checks for the PARALLELS driver +dnl + +if test "$with_parallels" = "check"; then + with_parallels=$with_linux + if test ! $host_cpu = 'x86_64'; then + with_parallels=no + fi +fi + +if test "$with_parallels" = "yes" && test "$with_linux" = "no"; then + AC_MSG_ERROR([The PARALLELS driver can be enabled on Linux only.]) +fi + +if test "$with_parallels" = "yes"; then + AC_DEFINE_UNQUOTED([WITH_PARALLELS], 1, [whether PARALLELS driver is enabled]) +fi +AM_CONDITIONAL([WITH_PARALLELS], [test "$with_parallels" = "yes"]) + +dnl dnl check for shell that understands <> redirection without truncation, dnl needed by src/qemu/qemu_monitor_{text,json}.c. 
dnl @@ -2805,6 +2827,7 @@ AC_MSG_NOTICE([ LXC: $with_lxc]) AC_MSG_NOTICE([ PHYP: $with_phyp]) AC_MSG_NOTICE([ ESX: $with_esx]) AC_MSG_NOTICE([ Hyper-V: $with_hyperv]) +AC_MSG_NOTICE([ PARALLELS: $with_parallels]) AC_MSG_NOTICE([ Test: $with_test]) AC_MSG_NOTICE([ Remote: $with_remote]) AC_MSG_NOTICE([ Network: $with_network]) diff --git a/docs/drvparallels.html.in b/docs/drvparallels.html.in new file mode 100644 index 0000000..976dea1 --- /dev/null +++ b/docs/drvparallels.html.in @@ -0,0 +1,28 @@ +<html><body> + <h1>Parallels Virtuozzo Server driver</h1> + <ul id="toc"></ul> + <p> + The libvirt PARALLELS driver can manage Parallels Virtuozzo Server starting from 6.0 version. + </p> + + + <h2><a name="project">Project Links</a></h2> + <ul> + <li> + The <a href="http://www.parallels.com/products/server/baremetal/sp/">Parallels Virtuozzo Server</a> Virtualization Solution. + </li> + </ul> + + + <h2><a name="uri">Connections to the Parallels Virtuozzo Server driver</a></h2> + <p> + The libvirt PARALLELS driver is a single-instance privileged driver, with a driver name of 'parallels'. Some example connection URIs for the libvirt driver are: + </p> +<pre> +parallels:///default (local access) +parallels+unix:///default (local access) +parallels://example.com/default (remote access, TLS/x509) +parallels+tcp://example.com/default (remote access, SASl/Kerberos) +parallels+ssh://root@example.com/default (remote access, SSH tunnelled) +</pre> +</body></html> diff --git a/include/libvirt/virterror.h b/include/libvirt/virterror.h index 0e0bc9c..25e8d43 100644 --- a/include/libvirt/virterror.h +++ b/include/libvirt/virterror.h @@ -97,7 +97,7 @@ typedef enum { VIR_FROM_URI = 45, /* Error from URI handling */ VIR_FROM_AUTH = 46, /* Error from auth handling */ VIR_FROM_DBUS = 47, /* Error from DBus */ - + VIR_FROM_PARALLELS = 48, /* Error from PARALLELS */ # ifdef VIR_ENUM_SENTINELS VIR_ERR_DOMAIN_LAST # endif diff --git a/libvirt.spec.in b/libvirt.spec.in index 896ef51..20bdf3d 100644 --- a/libvirt.spec.in +++ b/libvirt.spec.in @@ -67,6 +67,7 @@ %define with_esx 0%{!?_without_esx:1} %define with_hyperv 0%{!?_without_hyperv:1} %define with_xenapi 0%{!?_without_xenapi:1} +%define with_parallels 0%{!?_without_parallels:1} # Then the secondary host drivers, which run inside libvirtd %define with_network 0%{!?_without_network:%{server_drivers}} @@ -131,6 +132,7 @@ %define with_xenapi 0 %define with_libxl 0 %define with_hyperv 0 +%define with_parallels 0 %endif # Fedora 17 / RHEL-7 are first where we use systemd. Although earlier @@ -1032,6 +1034,10 @@ of recent versions of Linux (and other OSes). %define _without_vmware --without-vmware %endif +%if ! %{with_parallels} +%define _without_parallels --without-parallels +%endif + %if ! 
%{with_polkit} %define _without_polkit --without-polkit %endif @@ -1170,6 +1176,7 @@ autoreconf -if %{?_without_esx} \ %{?_without_hyperv} \ %{?_without_vmware} \ + %{?_without_parallels} \ %{?_without_network} \ %{?_with_rhel5_api} \ %{?_without_storage_fs} \ @@ -1360,7 +1367,7 @@ fi /sbin/chkconfig --add libvirtd if [ "$1" -ge "1" ]; then - /sbin/service libvirtd condrestart > /dev/null 2>&1 + /sbin/service libvirtd condrestart > /dev/null 2>&1 fi %endif diff --git a/mingw32-libvirt.spec.in b/mingw32-libvirt.spec.in index 4d23c75..eaa0cb2 100644 --- a/mingw32-libvirt.spec.in +++ b/mingw32-libvirt.spec.in @@ -18,6 +18,7 @@ # missing libwsman, so can't build hyper-v %define with_hyperv 0%{!?_without_hyperv:0} %define with_xenapi 0%{!?_without_xenapi:1} +%define with_parallels 0%{!?_without_parallels:0} # RHEL ships ESX but not PowerHypervisor, HyperV, or libxenserver (xenapi) %if 0%{?rhel} @@ -92,6 +93,10 @@ MinGW Windows libvirt virtualization library. %define _without_xenapi --without-xenapi %endif +%if ! %{with_parallels} +%define _without_parallels --without-parallels +%endif + %if 0%{?enable_autotools} autoreconf -if %endif @@ -113,6 +118,7 @@ autoreconf -if %{?_without_esx} \ %{?_without_hyperv} \ --without-vmware \ + --without-parallels \ --without-netcf \ --without-audit \ --without-dtrace diff --git a/po/POTFILES.in b/po/POTFILES.in index 31246f7..1917899 100644 --- a/po/POTFILES.in +++ b/po/POTFILES.in @@ -63,6 +63,7 @@ src/openvz/openvz_conf.c src/openvz/openvz_driver.c src/openvz/openvz_util.c src/phyp/phyp_driver.c +src/parallels/parallels_driver.c src/qemu/qemu_agent.c src/qemu/qemu_bridge_filter.c src/qemu/qemu_capabilities.c diff --git a/src/Makefile.am b/src/Makefile.am index e40909b..128a1a4 100644 --- a/src/Makefile.am +++ b/src/Makefile.am @@ -479,6 +479,10 @@ HYPERV_DRIVER_EXTRA_DIST = \ hyperv/hyperv_wmi_generator.py \ $(HYPERV_DRIVER_GENERATED) +PARALLELS_DRIVER_SOURCES = \ + parallels/parallels_driver.h \ + parallels/parallels_driver.c + NETWORK_DRIVER_SOURCES = \ network/bridge_driver.h network/bridge_driver.c @@ -899,6 +903,14 @@ libvirt_driver_hyperv_la_LIBADD = $(OPENWSMAN_LIBS) libvirt_driver_hyperv_la_SOURCES = $(HYPERV_DRIVER_SOURCES) endif +if WITH_PARALLELS +noinst_LTLIBRARIES += libvirt_driver_parallels.la +libvirt_la_BUILT_LIBADD += libvirt_driver_parallels.la +libvirt_driver_parallels_la_CFLAGS = \ + -I$(top_srcdir)/src/conf $(AM_CFLAGS) +libvirt_driver_parallels_la_SOURCES = $(PARALLELS_DRIVER_SOURCES) +endif + if WITH_NETWORK noinst_LTLIBRARIES += libvirt_driver_network_impl.la libvirt_driver_network_la_SOURCES = @@ -1106,6 +1118,7 @@ EXTRA_DIST += \ $(ESX_DRIVER_EXTRA_DIST) \ $(HYPERV_DRIVER_SOURCES) \ $(HYPERV_DRIVER_EXTRA_DIST) \ + $(PARALLELS_DRIVER_SOURCES) \ $(NETWORK_DRIVER_SOURCES) \ $(INTERFACE_DRIVER_SOURCES) \ $(STORAGE_DRIVER_SOURCES) \ diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c index ef6077e..7cf2bb5 100644 --- a/src/conf/domain_conf.c +++ b/src/conf/domain_conf.c @@ -93,7 +93,8 @@ VIR_ENUM_IMPL(virDomainVirt, VIR_DOMAIN_VIRT_LAST, "vmware", "hyperv", "vbox", - "phyp") + "phyp", + "parallels") VIR_ENUM_IMPL(virDomainBoot, VIR_DOMAIN_BOOT_LAST, "fd", diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h index 44280ba..d48994c 100644 --- a/src/conf/domain_conf.h +++ b/src/conf/domain_conf.h @@ -160,6 +160,7 @@ enum virDomainVirtType { VIR_DOMAIN_VIRT_HYPERV, VIR_DOMAIN_VIRT_VBOX, VIR_DOMAIN_VIRT_PHYP, + VIR_DOMAIN_VIRT_PARALLELS, VIR_DOMAIN_VIRT_LAST, }; diff --git a/src/driver.h b/src/driver.h index 09a8adf..2c68e7d 
100644 --- a/src/driver.h +++ b/src/driver.h @@ -31,6 +31,7 @@ typedef enum { VIR_DRV_VMWARE = 13, VIR_DRV_LIBXL = 14, VIR_DRV_HYPERV = 15, + VIR_DRV_PARALLELS = 16, } virDrvNo; diff --git a/src/libvirt.c b/src/libvirt.c index 8eb390c..c155e2f 100644 --- a/src/libvirt.c +++ b/src/libvirt.c @@ -72,6 +72,9 @@ #ifdef WITH_XENAPI # include "xenapi/xenapi_driver.h" #endif +#ifdef WITH_PARALLELS +# include "parallels/parallels_driver.h" +#endif #define VIR_FROM_THIS VIR_FROM_NONE @@ -443,6 +446,9 @@ virInitialize(void) #ifdef WITH_XENAPI if (xenapiRegister() == -1) return -1; #endif +#ifdef WITH_PARALLELS + if (parallelsRegister() == -1) return -1; +#endif #ifdef WITH_REMOTE if (remoteRegister () == -1) return -1; #endif @@ -1144,6 +1150,9 @@ do_open (const char *name, #ifndef WITH_XENAPI STRCASEEQ(ret->uri->scheme, "xenapi") || #endif +#ifndef WITH_PARALLELS + STRCASEEQ(ret->uri->scheme, "parallels") || +#endif false)) { virReportErrorHelper(VIR_FROM_NONE, VIR_ERR_CONFIG_UNSUPPORTED, __FILE__, __FUNCTION__, __LINE__, diff --git a/src/parallels/parallels_driver.c b/src/parallels/parallels_driver.c new file mode 100644 index 0000000..4d83f9f --- /dev/null +++ b/src/parallels/parallels_driver.c @@ -0,0 +1,271 @@ +/* + * parallels_driver.c: core driver functions for managing + * Parallels Virtuozzo Server hosts + * + * Copyright (C) 2012 Parallels, Inc. + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. 
+ * + * You should have received a copy of the GNU Lesser General Public + * License along with this library; if not, write to the Free Software + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + * + */ + +#include <config.h> + +#include <sys/types.h> +#include <sys/poll.h> +#include <limits.h> +#include <string.h> +#include <stdio.h> +#include <stdarg.h> +#include <stdlib.h> +#include <unistd.h> +#include <errno.h> +#include <sys/utsname.h> +#include <sys/stat.h> +#include <fcntl.h> +#include <paths.h> +#include <pwd.h> +#include <stdio.h> +#include <sys/wait.h> +#include <sys/time.h> +#include <sys/statvfs.h> + +#include "datatypes.h" +#include "virterror_internal.h" +#include "memory.h" +#include "util.h" +#include "logging.h" +#include "command.h" +#include "configmake.h" +#include "storage_file.h" +#include "nodeinfo.h" +#include "json.h" + +#include "parallels_driver.h" + +#define VIR_FROM_THIS VIR_FROM_PARALLELS + +static virCapsPtr parallelsBuildCapabilities(void); +static int parallelsClose(virConnectPtr conn); + +static void +parallelsDriverLock(parallelsConnPtr driver) +{ + virMutexLock(&driver->lock); +} + +static void +parallelsDriverUnlock(parallelsConnPtr driver) +{ + virMutexUnlock(&driver->lock); +} + +static int +parallelsDefaultConsoleType(const char *ostype ATTRIBUTE_UNUSED) +{ + return VIR_DOMAIN_CHR_CONSOLE_TARGET_TYPE_SERIAL; +} + +static virCapsPtr +parallelsBuildCapabilities(void) +{ + virCapsPtr caps; + virCapsGuestPtr guest; + struct utsname utsname; + uname(&utsname); + + if ((caps = virCapabilitiesNew(utsname.machine, 0, 0)) == NULL) + goto no_memory; + + if (nodeCapsInitNUMA(caps) < 0) + goto no_memory; + + virCapabilitiesSetMacPrefix(caps, (unsigned char[]) { + 0x42, 0x1C, 0x00}); + + if ((guest = virCapabilitiesAddGuest(caps, "hvm", "x86_64", + 64, "parallels", + NULL, 0, NULL)) == NULL) + goto no_memory; + + if (virCapabilitiesAddGuestDomain(guest, + "parallels", NULL, NULL, 0, NULL) == NULL) + goto no_memory; + + caps->defaultConsoleTargetType = parallelsDefaultConsoleType; + return caps; + + no_memory: + virReportOOMError(); + virCapabilitiesFree(caps); + return NULL; +} + +static char * +parallelsGetCapabilities(virConnectPtr conn) +{ + parallelsConnPtr privconn = conn->privateData; + char *xml; + + parallelsDriverLock(privconn); + if ((xml = virCapabilitiesFormatXML(privconn->caps)) == NULL) + virReportOOMError(); + parallelsDriverUnlock(privconn); + return xml; +} + +static int +parallelsOpenDefault(virConnectPtr conn) +{ + parallelsConnPtr privconn; + + if (VIR_ALLOC(privconn) < 0) { + virReportOOMError(); + return VIR_DRV_OPEN_ERROR; + } + if (virMutexInit(&privconn->lock) < 0) { + parallelsError(VIR_ERR_INTERNAL_ERROR, + "%s", _("cannot initialize mutex")); + goto error; + } + + parallelsDriverLock(privconn); + conn->privateData = privconn; + parallelsDriverUnlock(privconn); + + if (!(privconn->caps = parallelsBuildCapabilities())) + goto error; + + if (virDomainObjListInit(&privconn->domains) < 0) + goto error; + + return VIR_DRV_OPEN_SUCCESS; + + error: + virDomainObjListDeinit(&privconn->domains); + virCapabilitiesFree(privconn->caps); + virStoragePoolObjListFree(&privconn->pools); + parallelsDriverUnlock(privconn); + conn->privateData = NULL; + VIR_FREE(privconn); + return VIR_DRV_OPEN_ERROR; +} + +static virDrvOpenStatus +parallelsOpen(virConnectPtr conn, + virConnectAuthPtr auth ATTRIBUTE_UNUSED, unsigned int flags) +{ + int ret; + parallelsConnPtr privconn; + virCheckFlags(VIR_CONNECT_RO, VIR_DRV_OPEN_ERROR); + + if 
(!conn->uri) + return VIR_DRV_OPEN_DECLINED; + + if (!conn->uri->scheme || STRNEQ(conn->uri->scheme, "parallels")) + return VIR_DRV_OPEN_DECLINED; + + /* Remote driver should handle these. */ + if (conn->uri->server) + return VIR_DRV_OPEN_DECLINED; + + /* From this point on, the connection is for us. */ + if (!conn->uri->path + || conn->uri->path[0] == '\0' + || (conn->uri->path[0] == '/' && conn->uri->path[1] == '\0')) { + parallelsError(VIR_ERR_INVALID_ARG, + "%s", _("parallelsOpen: supply a path or use parallels:///default")); + return VIR_DRV_OPEN_ERROR; + } + + if (STREQ(conn->uri->path, "/default")) + ret = parallelsOpenDefault(conn); + else + return VIR_DRV_OPEN_DECLINED; + + if (ret != VIR_DRV_OPEN_SUCCESS) + return ret; + + privconn = conn->privateData; + parallelsDriverLock(privconn); + privconn->domainEventState = virDomainEventStateNew(); + if (!privconn->domainEventState) { + parallelsDriverUnlock(privconn); + parallelsClose(conn); + return VIR_DRV_OPEN_ERROR; + } + + parallelsDriverUnlock(privconn); + return VIR_DRV_OPEN_SUCCESS; +} + +static int +parallelsClose(virConnectPtr conn) +{ + parallelsConnPtr privconn = conn->privateData; + + parallelsDriverLock(privconn); + virCapabilitiesFree(privconn->caps); + virDomainObjListDeinit(&privconn->domains); + virDomainEventStateFree(privconn->domainEventState); + conn->privateData = NULL; + + parallelsDriverUnlock(privconn); + virMutexDestroy(&privconn->lock); + + VIR_FREE(privconn); + return 0; +} + +static int +parallelsGetVersion(virConnectPtr conn ATTRIBUTE_UNUSED, unsigned long *hvVer) +{ + /* TODO */ + *hvVer = 6; + return 0; +} + +static virDriver parallelsDriver = { + .no = VIR_DRV_PARALLELS, + .name = "PARALLELS", + .open = parallelsOpen, /* 0.9.12 */ + .close = parallelsClose, /* 0.9.12 */ + .version = parallelsGetVersion, /* 0.9.12 */ + .getHostname = virGetHostname, /* 0.9.12 */ + .nodeGetInfo = nodeGetInfo, /* 0.9.12 */ + .getCapabilities = parallelsGetCapabilities, /* 0.9.12 */ +}; + +/** + * parallelsRegister: + * + * Registers the parallels driver + */ +int +parallelsRegister(void) +{ + char *prlctl_path; + + prlctl_path = virFindFileInPath(PRLCTL); + if (!prlctl_path) { + parallelsError(VIR_ERR_INTERNAL_ERROR, "%s", + _("Can't find prlctl command in the PATH env")); + return VIR_DRV_OPEN_ERROR; + } + + if (virRegisterDriver(¶llelsDriver) < 0) + return -1; + + return 0; +} diff --git a/src/parallels/parallels_driver.h b/src/parallels/parallels_driver.h new file mode 100644 index 0000000..c04db35 --- /dev/null +++ b/src/parallels/parallels_driver.h @@ -0,0 +1,51 @@ +/* + * parallels_driver.c: core driver functions for managing + * Parallels Virtuozzo Server hosts + * + * Copyright (C) 2012 Parallels, Inc. + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. 
+ * + * You should have received a copy of the GNU Lesser General Public + * License along with this library; if not, write to the Free Software + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + * + */ + +#ifndef PARALLELS_DRIVER_H +# define PARALLELS_DRIVER_H + + +# include "domain_conf.h" +# include "storage_conf.h" +# include "domain_event.h" + +# define parallelsError(code, ...) \ + virReportErrorHelper(VIR_FROM_TEST, code, __FILE__, \ + __FUNCTION__, __LINE__, __VA_ARGS__) +# define PRLCTL "prlctl" + + +struct _parallelsConn { + virMutex lock; + virDomainObjList domains; + virStoragePoolObjList pools; + virCapsPtr caps; + virDomainEventStatePtr domainEventState; +}; + +typedef struct _parallelsConn parallelsConn; + +typedef struct _parallelsConn *parallelsConnPtr; + +int parallelsRegister(void); + +#endif diff --git a/src/util/virterror.c b/src/util/virterror.c index cb37be0..7c0119f 100644 --- a/src/util/virterror.c +++ b/src/util/virterror.c @@ -99,7 +99,8 @@ VIR_ENUM_IMPL(virErrorDomain, VIR_ERR_DOMAIN_LAST, "URI Utils", /* 45 */ "Authentication Utils", - "DBus Utils" + "DBus Utils", + "Parallels Cloud Server" ) -- 1.7.1
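
To try this first patch out, here is a minimal client sketch (not itself part of the series) that opens the parallels:///default URI documented above and prints the node info and capabilities, which is all the skeleton implements so far. The file name and build line are only illustrative (e.g. gcc -o pcs-info pcs-info.c -lvirt); everything else uses the normal public libvirt API.

/* pcs-info.c: illustrative client for the parallels driver skeleton */
#include <stdio.h>
#include <stdlib.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn;
    virNodeInfo info;
    char *caps;

    /* read-only access is enough for nodeGetInfo/getCapabilities */
    if (!(conn = virConnectOpenReadOnly("parallels:///default"))) {
        fprintf(stderr, "failed to open parallels:///default\n");
        return EXIT_FAILURE;
    }

    if (virNodeGetInfo(conn, &info) == 0)
        printf("node: %s, %lu KiB RAM, %u CPUs\n",
               info.model, info.memory, info.cpus);

    if ((caps = virConnectGetCapabilities(conn))) {
        printf("%s", caps);
        free(caps);
    }

    virConnectClose(conn);
    return EXIT_SUCCESS;
}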

Parallels driver is 'stateless', like vmware or openvz drivers. It collects information about domains during startup using command-line utility prlctl. VMs in Parallels Cloud Server are identified by UUIDs or unique names, which can be used as respective fields in virDomainDef structure. Currently only basic info, like description, virtual cpus number and memory amount, is implemented. Querying devices information will be added in the next patches. Parallels Cloud Server doesn't support non-persistent domains - you can't run a domain having only disk image, it must always be registered in system. Functions for querying domain info have been just copied from test driver with some changes - they extract needed data from previously created list of virDomainObj objects. Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com> --- po/POTFILES.in | 2 + src/Makefile.am | 3 +- src/parallels/parallels_driver.c | 540 +++++++++++++++++++++++++++++++++++++- src/parallels/parallels_driver.h | 14 + src/parallels/parallels_utils.c | 89 +++++++ 5 files changed, 646 insertions(+), 2 deletions(-) create mode 100644 src/parallels/parallels_utils.c diff --git a/po/POTFILES.in b/po/POTFILES.in index 1917899..dcb0813 100644 --- a/po/POTFILES.in +++ b/po/POTFILES.in @@ -64,6 +64,8 @@ src/openvz/openvz_driver.c src/openvz/openvz_util.c src/phyp/phyp_driver.c src/parallels/parallels_driver.c +src/parallels/parallels_driver.h +src/parallels/parallels_utils.c src/qemu/qemu_agent.c src/qemu/qemu_bridge_filter.c src/qemu/qemu_capabilities.c diff --git a/src/Makefile.am b/src/Makefile.am index 128a1a4..a89585d 100644 --- a/src/Makefile.am +++ b/src/Makefile.am @@ -481,7 +481,8 @@ HYPERV_DRIVER_EXTRA_DIST = \ PARALLELS_DRIVER_SOURCES = \ parallels/parallels_driver.h \ - parallels/parallels_driver.c + parallels/parallels_driver.c \ + parallels/parallels_utils.c NETWORK_DRIVER_SOURCES = \ network/bridge_driver.h network/bridge_driver.c diff --git a/src/parallels/parallels_driver.c b/src/parallels/parallels_driver.c index 4d83f9f..69bc966 100644 --- a/src/parallels/parallels_driver.c +++ b/src/parallels/parallels_driver.c @@ -50,12 +50,13 @@ #include "configmake.h" #include "storage_file.h" #include "nodeinfo.h" -#include "json.h" +#include "domain_conf.h" #include "parallels_driver.h" #define VIR_FROM_THIS VIR_FROM_PARALLELS +static void parallelsFreeDomObj(void *p); static virCapsPtr parallelsBuildCapabilities(void); static int parallelsClose(virConnectPtr conn); @@ -77,6 +78,12 @@ parallelsDefaultConsoleType(const char *ostype ATTRIBUTE_UNUSED) return VIR_DOMAIN_CHR_CONSOLE_TARGET_TYPE_SERIAL; } +static void +parallelsFreeDomObj(void *p) +{ + VIR_FREE(p); +}; + static virCapsPtr parallelsBuildCapabilities(void) { @@ -125,6 +132,221 @@ parallelsGetCapabilities(virConnectPtr conn) return xml; } +/* + * Must be called with privconn->lock held + */ +static virDomainObjPtr +parallelsLoadDomain(parallelsConnPtr privconn, virJSONValuePtr jobj) +{ + virDomainObjPtr dom = NULL; + virDomainDefPtr def = NULL; + parallelsDomObjPtr pdom = NULL; + virJSONValuePtr jobj2, jobj3; + const char *tmp; + char *endptr; + unsigned long mem; + unsigned int x; + + if (VIR_ALLOC(def) < 0) + goto no_memory; + + def->virtType = VIR_DOMAIN_VIRT_PARALLELS; + def->id = -1; + + tmp = virJSONValueObjectGetString(jobj, "Name"); + if (!tmp) { + parallelsParseError(); + goto cleanup; + } + if (!(def->name = strdup(tmp))) + goto no_memory; + + tmp = virJSONValueObjectGetString(jobj, "ID"); + if (!tmp) { + parallelsParseError(); + goto cleanup; + } + + if 
(virUUIDParse(tmp, def->uuid) < 0) { + parallelsError(VIR_ERR_INTERNAL_ERROR, "%s", + _("UUID in config file malformed")); + goto cleanup; + } + + tmp = virJSONValueObjectGetString(jobj, "Description"); + if (!tmp) { + parallelsParseError(); + goto cleanup; + } + if (!(def->description = strdup(tmp))) + goto no_memory; + + jobj2 = virJSONValueObjectGet(jobj, "Hardware"); + if (!jobj2) { + parallelsParseError(); + goto cleanup; + } + + jobj3 = virJSONValueObjectGet(jobj2, "cpu"); + if (!jobj3) { + parallelsParseError(); + goto cleanup; + } + + if (virJSONValueObjectGetNumberUint(jobj3, "cpus", &x) < 0) { + parallelsParseError(); + goto cleanup; + } + def->vcpus = x; + def->maxvcpus = x; + + jobj3 = virJSONValueObjectGet(jobj2, "memory"); + if (!jobj3) { + parallelsParseError(); + goto cleanup; + } + + tmp = virJSONValueObjectGetString(jobj3, "size"); + + if (virStrToLong_ul(tmp, &endptr, 10, &mem) < 0) { + parallelsParseError(); + goto cleanup; + } + + if (!STREQ(endptr, "Mb")) { + parallelsParseError(); + goto cleanup; + } + + def->mem.max_balloon = mem; + def->mem.max_balloon <<= 10; + def->mem.cur_balloon = def->mem.max_balloon; + + if (!(def->os.type = strdup("hvm"))) + goto no_memory; + + if (!(def->os.init = strdup("/sbin/init"))) + goto no_memory; + + if (!(dom = virDomainAssignDef(privconn->caps, + &privconn->domains, def, false))) + goto cleanup; + /* dom is locked here */ + + if (VIR_ALLOC(pdom) < 0) + goto no_memory_unlock; + dom->privateDataFreeFunc = parallelsFreeDomObj; + dom->privateData = pdom; + + if (virJSONValueObjectGetNumberUint(jobj, "EnvID", &x) < 0) + goto cleanup_unlock; + pdom->id = x; + tmp = virJSONValueObjectGetString(jobj, "ID"); + if (!tmp) { + parallelsParseError(); + goto cleanup_unlock; + } + if (!(pdom->uuid = strdup(tmp))) + goto no_memory_unlock; + + tmp = virJSONValueObjectGetString(jobj, "OS"); + if (!tmp) + goto cleanup_unlock; + if (!(pdom->os = strdup(tmp))) + goto no_memory_unlock; + + dom->persistent = 1; + + tmp = virJSONValueObjectGetString(jobj, "State"); + if (!tmp) { + parallelsParseError(); + goto cleanup_unlock; + } + + /* TODO: handle all possible states */ + if (STREQ(tmp, "running")) { + virDomainObjSetState(dom, VIR_DOMAIN_RUNNING, + VIR_DOMAIN_RUNNING_BOOTED); + def->id = pdom->id; + } + + tmp = virJSONValueObjectGetString(jobj, "Autostart"); + if (!tmp) { + parallelsParseError(); + goto cleanup_unlock; + } + if (STREQ(tmp, "on")) + dom->autostart = 1; + else + dom->autostart = 0; + + virDomainObjUnlock(dom); + + return dom; + + no_memory_unlock: + virReportOOMError(); + cleanup_unlock: + virDomainObjUnlock(dom); + /* domain list was locked, so nobody could get 'dom'. It has only + * one reference and virDomainObjUnref return 0 here */ + if (virDomainObjUnref(dom)) + parallelsError(VIR_ERR_INTERNAL_ERROR, _("Can't free virDomainObj")); + return NULL; + no_memory: + virReportOOMError(); + cleanup: + virDomainDefFree(def); + return NULL; +} + +/* + * Must be called with privconn->lock held + * + * if domain_name is NULL - load information about all + * registered domains. 
+ */ +static int +parallelsLoadDomains(parallelsConnPtr privconn, const char *domain_name) +{ + int count, i; + virJSONValuePtr jobj; + virJSONValuePtr jobj2; + virDomainObjPtr dom = NULL; + int ret = -1; + + jobj = parallelsParseOutput(PRLCTL, "list", "-j", "-a", + "-i", "-H", "--vmtype", "vm", domain_name, NULL); + if (!jobj) { + parallelsParseError(); + goto cleanup; + } + + count = virJSONValueArraySize(jobj); + if (count < 1) { + parallelsParseError(); + goto cleanup; + } + + for (i = 0; i < count; i++) { + jobj2 = virJSONValueArrayGet(jobj, i); + if (!jobj2) { + parallelsParseError(); + goto cleanup; + } + + dom = parallelsLoadDomain(privconn, jobj2); + if (!dom) + goto cleanup; + } + + ret = 0; + + cleanup: + virJSONValueFree(jobj); + return ret; +} + static int parallelsOpenDefault(virConnectPtr conn) { @@ -150,6 +372,9 @@ parallelsOpenDefault(virConnectPtr conn) if (virDomainObjListInit(&privconn->domains) < 0) goto error; + if (parallelsLoadDomains(privconn, NULL)) + goto error; + return VIR_DRV_OPEN_SUCCESS; error: @@ -236,6 +461,306 @@ parallelsGetVersion(virConnectPtr conn ATTRIBUTE_UNUSED, unsigned long *hvVer) return 0; } +static int +parallelsListDomains(virConnectPtr conn, int *ids, int maxids) +{ + parallelsConnPtr privconn = conn->privateData; + int n; + + parallelsDriverLock(privconn); + n = virDomainObjListGetActiveIDs(&privconn->domains, ids, maxids); + parallelsDriverUnlock(privconn); + + return n; +} + +static int +parallelsNumOfDomains(virConnectPtr conn) +{ + parallelsConnPtr privconn = conn->privateData; + int count; + + parallelsDriverLock(privconn); + count = virDomainObjListNumOfDomains(&privconn->domains, 1); + parallelsDriverUnlock(privconn); + + return count; +} + +static int +parallelsListDefinedDomains(virConnectPtr conn, char **const names, int maxnames) +{ + parallelsConnPtr privconn = conn->privateData; + int n; + + parallelsDriverLock(privconn); + memset(names, 0, sizeof(*names) * maxnames); + n = virDomainObjListGetInactiveNames(&privconn->domains, names, + maxnames); + parallelsDriverUnlock(privconn); + + return n; +} + +static int +parallelsNumOfDefinedDomains(virConnectPtr conn) +{ + parallelsConnPtr privconn = conn->privateData; + int count; + + parallelsDriverLock(privconn); + count = virDomainObjListNumOfDomains(&privconn->domains, 0); + parallelsDriverUnlock(privconn); + + return count; +} + +static virDomainPtr +parallelsLookupDomainByID(virConnectPtr conn, int id) +{ + parallelsConnPtr privconn = conn->privateData; + virDomainPtr ret = NULL; + virDomainObjPtr dom; + + parallelsDriverLock(privconn); + dom = virDomainFindByID(&privconn->domains, id); + parallelsDriverUnlock(privconn); + + if (dom == NULL) { + parallelsError(VIR_ERR_NO_DOMAIN, NULL); + goto cleanup; + } + + ret = virGetDomain(conn, dom->def->name, dom->def->uuid); + if (ret) + ret->id = dom->def->id; + + cleanup: + if (dom) + virDomainObjUnlock(dom); + return ret; +} + +static virDomainPtr +parallelsLookupDomainByUUID(virConnectPtr conn, const unsigned char *uuid) +{ + parallelsConnPtr privconn = conn->privateData; + virDomainPtr ret = NULL; + virDomainObjPtr dom; + + parallelsDriverLock(privconn); + dom = virDomainFindByUUID(&privconn->domains, uuid); + parallelsDriverUnlock(privconn); + + if (dom == NULL) { + parallelsError(VIR_ERR_NO_DOMAIN, NULL); + goto cleanup; + } + + ret = virGetDomain(conn, dom->def->name, dom->def->uuid); + if (ret) + ret->id = dom->def->id; + + cleanup: + if (dom) + virDomainObjUnlock(dom); + return ret; +} + +static virDomainPtr 
+parallelsLookupDomainByName(virConnectPtr conn, const char *name) +{ + parallelsConnPtr privconn = conn->privateData; + virDomainPtr ret = NULL; + virDomainObjPtr dom; + + parallelsDriverLock(privconn); + dom = virDomainFindByName(&privconn->domains, name); + parallelsDriverUnlock(privconn); + + if (dom == NULL) { + parallelsError(VIR_ERR_NO_DOMAIN, NULL); + goto cleanup; + } + + ret = virGetDomain(conn, dom->def->name, dom->def->uuid); + if (ret) + ret->id = dom->def->id; + + cleanup: + if (dom) + virDomainObjUnlock(dom); + return ret; +} + +static int +parallelsGetDomainInfo(virDomainPtr domain, virDomainInfoPtr info) +{ + parallelsConnPtr privconn = domain->conn->privateData; + virDomainObjPtr privdom; + int ret = -1; + + parallelsDriverLock(privconn); + privdom = virDomainFindByName(&privconn->domains, domain->name); + parallelsDriverUnlock(privconn); + + if (privdom == NULL) { + parallelsError(VIR_ERR_NO_DOMAIN, + _("no domain with matching name '%s'"), domain->name); + goto cleanup; + } + + info->state = virDomainObjGetState(privdom, NULL); + info->memory = privdom->def->mem.cur_balloon; + info->maxMem = privdom->def->mem.max_balloon; + info->nrVirtCpu = privdom->def->vcpus; + info->cpuTime = 0; + ret = 0; + + cleanup: + if (privdom) + virDomainObjUnlock(privdom); + return ret; +} + +static char * +parallelsGetOSType(virDomainPtr dom) +{ + parallelsConnPtr privconn = dom->conn->privateData; + virDomainObjPtr privdom; + parallelsDomObjPtr pdom; + + char *ret = NULL; + + parallelsDriverLock(privconn); + privdom = virDomainFindByName(&privconn->domains, dom->name); + if (privdom == NULL) { + parallelsError(VIR_ERR_NO_DOMAIN, + _("no domain with matching name '%s'"), dom->name); + goto cleanup; + } + + pdom = privdom->privateData; + + if (!(ret = strdup(pdom->os))) + virReportOOMError(); + + cleanup: + if (privdom) + virDomainObjUnlock(privdom); + parallelsDriverUnlock(privconn); + return ret; +} + +static int +parallelsDomainIsPersistent(virDomainPtr dom) +{ + parallelsConnPtr privconn = dom->conn->privateData; + virDomainObjPtr privdom; + int ret = -1; + + parallelsDriverLock(privconn); + privdom = virDomainFindByName(&privconn->domains, dom->name); + if (privdom == NULL) { + parallelsError(VIR_ERR_NO_DOMAIN, + _("no domain with matching name '%s'"), dom->name); + goto cleanup; + } + + ret = 1; + + cleanup: + if (privdom) + virDomainObjUnlock(privdom); + parallelsDriverUnlock(privconn); + return ret; +} + +static int +parallelsDomainGetState(virDomainPtr domain, + int *state, int *reason, unsigned int flags) +{ + parallelsConnPtr privconn = domain->conn->privateData; + virDomainObjPtr privdom; + int ret = -1; + virCheckFlags(0, -1); + + parallelsDriverLock(privconn); + privdom = virDomainFindByName(&privconn->domains, domain->name); + parallelsDriverUnlock(privconn); + + if (privdom == NULL) { + parallelsError(VIR_ERR_NO_DOMAIN, + _("no domain with matching name '%s'"), domain->name); + goto cleanup; + } + + *state = virDomainObjGetState(privdom, reason); + ret = 0; + + cleanup: + if (privdom) + virDomainObjUnlock(privdom); + return ret; +} + +static char * +parallelsDomainGetXMLDesc(virDomainPtr domain, unsigned int flags) +{ + parallelsConnPtr privconn = domain->conn->privateData; + virDomainDefPtr def; + virDomainObjPtr privdom; + char *ret = NULL; + + /* Flags checked by virDomainDefFormat */ + + parallelsDriverLock(privconn); + privdom = virDomainFindByName(&privconn->domains, domain->name); + parallelsDriverUnlock(privconn); + + if (privdom == NULL) { + 
parallelsError(VIR_ERR_NO_DOMAIN, + _("no domain with matching name '%s'"), domain->name); + goto cleanup; + } + + def = (flags & VIR_DOMAIN_XML_INACTIVE) && + privdom->newDef ? privdom->newDef : privdom->def; + + ret = virDomainDefFormat(def, flags); + + cleanup: + if (privdom) + virDomainObjUnlock(privdom); + return ret; +} + +static int +parallelsDomainGetAutostart(virDomainPtr domain, int *autostart) +{ + parallelsConnPtr privconn = domain->conn->privateData; + virDomainObjPtr privdom; + int ret = -1; + + parallelsDriverLock(privconn); + privdom = virDomainFindByName(&privconn->domains, domain->name); + parallelsDriverUnlock(privconn); + + if (privdom == NULL) { + parallelsError(VIR_ERR_NO_DOMAIN, + _("no domain with matching name '%s'"), domain->name); + goto cleanup; + } + + *autostart = privdom->autostart; + ret = 0; + + cleanup: + if (privdom) + virDomainObjUnlock(privdom); + return ret; +} + static virDriver parallelsDriver = { .no = VIR_DRV_PARALLELS, .name = "PARALLELS", @@ -245,6 +770,19 @@ static virDriver parallelsDriver = { .getHostname = virGetHostname, /* 0.9.12 */ .nodeGetInfo = nodeGetInfo, /* 0.9.12 */ .getCapabilities = parallelsGetCapabilities, /* 0.9.12 */ + .listDomains = parallelsListDomains, /* 0.9.12 */ + .numOfDomains = parallelsNumOfDomains, /* 0.9.12 */ + .listDefinedDomains = parallelsListDefinedDomains, /* 0.9.12 */ + .numOfDefinedDomains = parallelsNumOfDefinedDomains, /* 0.9.12 */ + .domainLookupByID = parallelsLookupDomainByID, /* 0.9.12 */ + .domainLookupByUUID = parallelsLookupDomainByUUID, /* 0.9.12 */ + .domainLookupByName = parallelsLookupDomainByName, /* 0.9.12 */ + .domainGetOSType = parallelsGetOSType, /* 0.9.12 */ + .domainGetInfo = parallelsGetDomainInfo, /* 0.9.12 */ + .domainGetState = parallelsDomainGetState, /* 0.9.12 */ + .domainGetXMLDesc = parallelsDomainGetXMLDesc, /* 0.9.12 */ + .domainIsPersistent = parallelsDomainIsPersistent, /* 0.9.12 */ + .domainGetAutostart = parallelsDomainGetAutostart, /* 0.9.12 */ }; /** diff --git a/src/parallels/parallels_driver.h b/src/parallels/parallels_driver.h index c04db35..8398c02 100644 --- a/src/parallels/parallels_driver.h +++ b/src/parallels/parallels_driver.h @@ -28,11 +28,23 @@ # include "storage_conf.h" # include "domain_event.h" +# include "json.h" + # define parallelsError(code, ...) \ virReportErrorHelper(VIR_FROM_TEST, code, __FILE__, \ __FUNCTION__, __LINE__, __VA_ARGS__) # define PRLCTL "prlctl" +# define parallelsParseError() \ + virReportErrorHelper(VIR_FROM_TEST, VIR_ERR_OPERATION_FAILED, __FILE__, \ + __FUNCTION__, __LINE__, _("Can't parse prlctl output")) + +struct parallelsDomObj { + int id; + char *uuid; + char *os; +}; +typedef struct parallelsDomObj *parallelsDomObjPtr; struct _parallelsConn { virMutex lock; @@ -48,4 +60,6 @@ typedef struct _parallelsConn *parallelsConnPtr; int parallelsRegister(void); +virJSONValuePtr parallelsParseOutput(const char *binary, ...) ATTRIBUTE_NONNULL(1) ATTRIBUTE_SENTINEL; + #endif diff --git a/src/parallels/parallels_utils.c b/src/parallels/parallels_utils.c new file mode 100644 index 0000000..845adf4 --- /dev/null +++ b/src/parallels/parallels_utils.c @@ -0,0 +1,89 @@ +/* + * parallels_utils.c: core driver functions for managing + * Parallels Virtuozzo Server hosts + * + * Copyright (C) 2012 Parallels, Inc. 
+ * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with this library; if not, write to the Free Software + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + * + */ + +#include <config.h> + +#include <stdarg.h> + +#include "command.h" +#include "virterror_internal.h" +#include "memory.h" + +#include "parallels_driver.h" + +static int +parallelsDoCmdRun(char **outbuf, const char *binary, va_list list) +{ + virCommandPtr cmd = virCommandNew(binary); + const char *arg; + char *scmd = NULL; + int ret = -1; + + while ((arg = va_arg(list, const char *)) != NULL) + virCommandAddArg(cmd, arg); + + if (outbuf) + virCommandSetOutputBuffer(cmd, outbuf); + + scmd = virCommandToString(cmd); + if (!scmd) + goto cleanup; + + if (virCommandRun(cmd, NULL)) + goto cleanup; + + ret = 0; + + cleanup: + VIR_FREE(scmd); + virCommandFree(cmd); + if (ret) + VIR_FREE(*outbuf); + return ret; +} + +/* + * Run command and parse its JSON output, return + * pointer to virJSONValue or NULL in case of error. + */ +virJSONValuePtr +parallelsParseOutput(const char *binary, ...) +{ + char *outbuf; + virJSONValuePtr jobj = NULL; + va_list list; + int ret; + + va_start(list, binary); + ret = parallelsDoCmdRun(&outbuf, binary, list); + va_end(list); + if (ret) + return NULL; + + jobj = virJSONValueFromString(outbuf); + if (!jobj) + parallelsError(VIR_ERR_INTERNAL_ERROR, "%s: %s", + _("invalid output from prlctl"), outbuf); + + VIR_FREE(outbuf); + return jobj; +} -- 1.7.1
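
To exercise the new listing and info entry points from the public API side, a sketch along these lines should work (error handling is kept minimal and the array sizes are arbitrary; this is an illustration, not part of the patch):

/* list-doms.c: illustrative client for the listing/info entry points */
#include <stdio.h>
#include <stdlib.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn = virConnectOpenReadOnly("parallels:///default");
    char *names[64];
    int ids[64];
    int i, n;

    if (!conn)
        return EXIT_FAILURE;

    /* running domains are reported by numeric id */
    n = virConnectListDomains(conn, ids, 64);
    for (i = 0; i < n; i++) {
        virDomainPtr dom = virDomainLookupByID(conn, ids[i]);
        virDomainInfo info;

        if (dom && virDomainGetInfo(dom, &info) == 0)
            printf("%-20s %hu vcpus, %lu KiB\n",
                   virDomainGetName(dom), info.nrVirtCpu, info.memory);
        if (dom)
            virDomainFree(dom);
    }

    /* stopped (but always persistent) domains are reported by name */
    n = virConnectListDefinedDomains(conn, names, 64);
    for (i = 0; i < n; i++) {
        printf("inactive: %s\n", names[i]);
        free(names[i]);
    }

    virConnectClose(conn);
    return EXIT_SUCCESS;
}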

Add functions for create/shutdown/destroy and suspend/resume domain. Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com> --- src/parallels/parallels_driver.c | 149 ++++++++++++++++++++++++++++++++++++++ src/parallels/parallels_driver.h | 1 + src/parallels/parallels_utils.c | 18 +++++ 3 files changed, 168 insertions(+), 0 deletions(-) diff --git a/src/parallels/parallels_driver.c b/src/parallels/parallels_driver.c index 69bc966..8278071 100644 --- a/src/parallels/parallels_driver.c +++ b/src/parallels/parallels_driver.c @@ -59,6 +59,11 @@ static void parallelsFreeDomObj(void *p); static virCapsPtr parallelsBuildCapabilities(void); static int parallelsClose(virConnectPtr conn); +static int parallelsPause(virDomainObjPtr privdom); +static int parallelsResume(virDomainObjPtr privdom); +static int parallelsStart(virDomainObjPtr privdom); +static int parallelsKill(virDomainObjPtr privdom); +static int parallelsStop(virDomainObjPtr privdom); static void parallelsDriverLock(parallelsConnPtr driver) @@ -84,6 +89,12 @@ parallelsFreeDomObj(void *p) VIR_FREE(p); }; +static void +parallelsDomainEventQueue(parallelsConnPtr driver, virDomainEventPtr event) +{ + virDomainEventStateQueue(driver->domainEventState, event); +} + static virCapsPtr parallelsBuildCapabilities(void) { @@ -761,6 +772,139 @@ parallelsDomainGetAutostart(virDomainPtr domain, int *autostart) return ret; } +typedef int (*parallelsChangeState) (virDomainObjPtr privdom); +#define PARALLELS_UUID(x) (((parallelsDomObjPtr)(x->privateData))->uuid) + +static int +parallelsDomainChangeState(virDomainPtr domain, + virDomainState req_state, const char *req_state_name, + parallelsChangeState chstate, + virDomainState new_state, int reason, + int event_type, int event_detail) +{ + parallelsConnPtr privconn = domain->conn->privateData; + virDomainObjPtr privdom; + virDomainEventPtr event = NULL; + int state; + int ret = -1; + + parallelsDriverLock(privconn); + privdom = virDomainFindByName(&privconn->domains, domain->name); + parallelsDriverUnlock(privconn); + + if (privdom == NULL) { + parallelsError(VIR_ERR_NO_DOMAIN, + _("no domain with matching name '%s'"), domain->name); + goto cleanup; + } + + state = virDomainObjGetState(privdom, NULL); + if (state != req_state) { + parallelsError(VIR_ERR_INTERNAL_ERROR, _("domain '%s' not %s"), + privdom->def->name, req_state_name); + goto cleanup; + } + + if (chstate(privdom)) + goto cleanup; + + virDomainObjSetState(privdom, new_state, reason); + + event = virDomainEventNewFromObj(privdom, event_type, event_detail); + ret = 0; + + cleanup: + if (privdom) + virDomainObjUnlock(privdom); + + if (event) { + parallelsDriverLock(privconn); + parallelsDomainEventQueue(privconn, event); + parallelsDriverUnlock(privconn); + } + return ret; +} + +static int parallelsPause(virDomainObjPtr privdom) +{ + return parallelsCmdRun(PRLCTL, "pause", PARALLELS_UUID(privdom), NULL); +} + +static int +parallelsPauseDomain(virDomainPtr domain) +{ + return parallelsDomainChangeState(domain, + VIR_DOMAIN_RUNNING, "running", + parallelsPause, + VIR_DOMAIN_PAUSED, VIR_DOMAIN_PAUSED_USER, + VIR_DOMAIN_EVENT_SUSPENDED, + VIR_DOMAIN_EVENT_SUSPENDED_PAUSED); +} + +static int parallelsResume(virDomainObjPtr privdom) +{ + return parallelsCmdRun(PRLCTL, "resume", PARALLELS_UUID(privdom), NULL); +} + +static int +parallelsResumeDomain(virDomainPtr domain) +{ + return parallelsDomainChangeState(domain, + VIR_DOMAIN_PAUSED, "paused", + parallelsResume, + VIR_DOMAIN_RUNNING, VIR_DOMAIN_RUNNING_UNPAUSED, + VIR_DOMAIN_EVENT_RESUMED, + 
VIR_DOMAIN_EVENT_RESUMED_UNPAUSED); +} + +static int parallelsStart(virDomainObjPtr privdom) +{ + return parallelsCmdRun(PRLCTL, "start", PARALLELS_UUID(privdom), NULL); +} + +static int +parallelsDomainCreate(virDomainPtr domain) +{ + return parallelsDomainChangeState(domain, + VIR_DOMAIN_SHUTOFF, "stopped", + parallelsStart, + VIR_DOMAIN_RUNNING, VIR_DOMAIN_EVENT_STARTED_BOOTED, + VIR_DOMAIN_EVENT_STARTED, + VIR_DOMAIN_EVENT_STARTED_BOOTED); +} + +static int parallelsKill(virDomainObjPtr privdom) +{ + return parallelsCmdRun(PRLCTL, "stop", PARALLELS_UUID(privdom), "--kill", NULL); +} + +static int +parallelsDestroyDomain(virDomainPtr domain) +{ + return parallelsDomainChangeState(domain, + VIR_DOMAIN_RUNNING, "running", + parallelsKill, + VIR_DOMAIN_SHUTOFF, VIR_DOMAIN_SHUTOFF_DESTROYED, + VIR_DOMAIN_EVENT_STOPPED, + VIR_DOMAIN_EVENT_STOPPED_DESTROYED); +} + +static int parallelsStop(virDomainObjPtr privdom) +{ + return parallelsCmdRun(PRLCTL, "stop", PARALLELS_UUID(privdom), NULL); +} + +static int +parallelsShutdownDomain(virDomainPtr domain) +{ + return parallelsDomainChangeState(domain, + VIR_DOMAIN_RUNNING, "running", + parallelsStop, + VIR_DOMAIN_SHUTOFF, VIR_DOMAIN_SHUTOFF_SHUTDOWN, + VIR_DOMAIN_EVENT_STOPPED, + VIR_DOMAIN_EVENT_STOPPED_SHUTDOWN); +} + static virDriver parallelsDriver = { .no = VIR_DRV_PARALLELS, .name = "PARALLELS", @@ -783,6 +927,11 @@ static virDriver parallelsDriver = { .domainGetXMLDesc = parallelsDomainGetXMLDesc, /* 0.9.12 */ .domainIsPersistent = parallelsDomainIsPersistent, /* 0.9.12 */ .domainGetAutostart = parallelsDomainGetAutostart, /* 0.9.12 */ + .domainSuspend = parallelsPauseDomain, /* 0.9.12 */ + .domainResume = parallelsResumeDomain, /* 0.9.12 */ + .domainDestroy = parallelsDestroyDomain, /* 0.9.12 */ + .domainShutdown = parallelsShutdownDomain, /* 0.9.12 */ + .domainCreate = parallelsDomainCreate, /* 0.9.12 */ }; /** diff --git a/src/parallels/parallels_driver.h b/src/parallels/parallels_driver.h index 8398c02..2b3c956 100644 --- a/src/parallels/parallels_driver.h +++ b/src/parallels/parallels_driver.h @@ -61,5 +61,6 @@ typedef struct _parallelsConn *parallelsConnPtr; int parallelsRegister(void); virJSONValuePtr parallelsParseOutput(const char *binary, ...) ATTRIBUTE_NONNULL(1) ATTRIBUTE_SENTINEL; +int parallelsCmdRun(const char *binary, ...) ATTRIBUTE_NONNULL(1) ATTRIBUTE_SENTINEL; #endif diff --git a/src/parallels/parallels_utils.c b/src/parallels/parallels_utils.c index 845adf4..e4220e9 100644 --- a/src/parallels/parallels_utils.c +++ b/src/parallels/parallels_utils.c @@ -87,3 +87,21 @@ parallelsParseOutput(const char *binary, ...) VIR_FREE(outbuf); return jobj; } + +/* + * Run prlctl command and check for errors + * + * Return value is 0 in case of success, else - -1 + */ +int +parallelsCmdRun(const char *binary, ...) +{ + int ret; + va_list list; + + va_start(list, binary); + ret = parallelsDoCmdRun(NULL, binary, list); + va_end(list); + + return ret; +} -- 1.7.1
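
The new life-cycle callbacks map one-to-one onto prlctl commands, so from a client they behave like any other driver. A short sketch of driving them through the public API follows; "fedora-15" is just a placeholder domain name and the sketch is not part of the patch:

/* lifecycle.c: illustrative client for the start/pause/resume/stop callbacks */
#include <stdio.h>
#include <stdlib.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn = virConnectOpen("parallels:///default");
    virDomainPtr dom;

    if (!conn)
        return EXIT_FAILURE;

    if (!(dom = virDomainLookupByName(conn, "fedora-15"))) {
        virConnectClose(conn);
        return EXIT_FAILURE;
    }

    if (virDomainCreate(dom) < 0)     /* runs "prlctl start <uuid>"  */
        fprintf(stderr, "start failed\n");
    if (virDomainSuspend(dom) < 0)    /* runs "prlctl pause <uuid>"  */
        fprintf(stderr, "pause failed\n");
    if (virDomainResume(dom) < 0)     /* runs "prlctl resume <uuid>" */
        fprintf(stderr, "resume failed\n");
    if (virDomainShutdown(dom) < 0)   /* runs "prlctl stop <uuid>"   */
        fprintf(stderr, "shutdown failed\n");

    virDomainFree(dom);
    virConnectClose(conn);
    return EXIT_SUCCESS;
}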

Add support of collecting information about serial ports. This change is needed mostly as an example, support of other devices will be added later. Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com> --- src/parallels/parallels_driver.c | 119 ++++++++++++++++++++++++++++++++++++++ 1 files changed, 119 insertions(+), 0 deletions(-) diff --git a/src/parallels/parallels_driver.c b/src/parallels/parallels_driver.c index 8278071..31244db 100644 --- a/src/parallels/parallels_driver.c +++ b/src/parallels/parallels_driver.c @@ -143,6 +143,122 @@ parallelsGetCapabilities(virConnectPtr conn) return xml; } +static int +parallelsGetSerialInfo(virDomainChrDefPtr chr, + const char *name, virJSONValuePtr value) +{ + const char *tmp; + + chr->deviceType = VIR_DOMAIN_CHR_DEVICE_TYPE_SERIAL; + chr->targetType = VIR_DOMAIN_CHR_CONSOLE_TARGET_TYPE_SERIAL; + if (virStrToLong_i(name + strlen("serial"), + NULL, 10, &chr->target.port) < 0) { + parallelsParseError(); + return -1; + } + + if (virJSONValueObjectHasKey(value, "output")) { + chr->source.type = VIR_DOMAIN_CHR_TYPE_FILE; + + tmp = virJSONValueObjectGetString(value, "output"); + if (!tmp) { + parallelsParseError(); + return -1; + } + + if (!(chr->source.data.file.path = strdup(tmp))) + goto no_memory; + } else if (virJSONValueObjectHasKey(value, "socket")) { + chr->source.type = VIR_DOMAIN_CHR_TYPE_UNIX; + + tmp = virJSONValueObjectGetString(value, "socket"); + if (!tmp) { + parallelsParseError(); + return -1; + } + + if (!(chr->source.data.nix.path = strdup(tmp))) + goto no_memory; + chr->source.data.nix.listen = false; + } else if (virJSONValueObjectHasKey(value, "real")) { + chr->source.type = VIR_DOMAIN_CHR_TYPE_DEV; + + tmp = virJSONValueObjectGetString(value, "real"); + if (!tmp) { + parallelsParseError(); + return -1; + } + + if (!(chr->source.data.file.path = strdup(tmp))) + goto no_memory; + } else { + parallelsParseError(); + return -1; + } + + return 0; + + no_memory: + virReportOOMError(); + return -1; +} + +static int +parallelsAddSerialInfo(virDomainObjPtr dom, + const char *key, virJSONValuePtr value) +{ + virDomainDefPtr def = dom->def; + virDomainChrDefPtr chr = NULL; + + if (!(chr = virDomainChrDefNew())) + goto no_memory; + + if (parallelsGetSerialInfo(chr, key, value)) + goto cleanup; + + if (VIR_REALLOC_N(def->serials, def->nserials + 1) < 0) { + virDomainChrDefFree(chr); + goto no_memory; + } + + def->serials[def->nserials++] = chr; + + return 0; + + no_memory: + virReportOOMError(); + cleanup: + virDomainChrDefFree(chr); + return -1; +} + +static int +parallelsAddDomainHardware(virDomainObjPtr dom, virJSONValuePtr jobj) +{ + int n, i; + virJSONValuePtr value; + const char *key; + + n = virJSONValueObjectKeysNumber(jobj); + if (n < 1) + goto cleanup; + + for (i = 0; i < n; i++) { + key = virJSONValueObjectGetKey(jobj, i); + value = virJSONValueObjectGetValue(jobj, i); + + if (STRPREFIX(key, "serial")) { + if (parallelsAddSerialInfo(dom, key, value)) + goto cleanup; + } + } + + return 0; + + cleanup: + return -1; +} + /* * Must be called with privconn->lock held */ @@ -291,6 +407,9 @@ parallelsLoadDomain(parallelsConnPtr privconn, virJSONValuePtr jobj) else dom->autostart = 0; + if (parallelsAddDomainHardware(dom, jobj2) < 0) + goto cleanup_unlock; + virDomainObjUnlock(dom); return dom; -- 1.7.1

Add support for reading VNC parameters of the VM. Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com> --- src/parallels/parallels_driver.c | 65 ++++++++++++++++++++++++++++++++++++++ 1 files changed, 65 insertions(+), 0 deletions(-) diff --git a/src/parallels/parallels_driver.c b/src/parallels/parallels_driver.c index 31244db..46cb85c 100644 --- a/src/parallels/parallels_driver.c +++ b/src/parallels/parallels_driver.c @@ -259,6 +259,68 @@ parallelsAddDomainHardware(virDomainObjPtr dom, virJSONValuePtr jobj) return -1; } +static int +parallelsAddVNCInfo(virDomainObjPtr dom, virJSONValuePtr jobj_root) +{ + const char *tmp; + unsigned int port; + virJSONValuePtr jobj; + int ret = -1; + + virDomainDefPtr def = dom->def; + + virDomainGraphicsDefPtr gr = NULL; + + jobj = virJSONValueObjectGet(jobj_root, "Remote display"); + if (!jobj) { + parallelsParseError(); + goto cleanup; + } + + tmp = virJSONValueObjectGetString(jobj, "mode"); + if (!tmp) { + parallelsParseError(); + goto cleanup; + } + + if (STREQ(tmp, "off")) { + ret = 0; + goto cleanup; + } + + if (VIR_ALLOC(gr) < 0) + goto no_memory; + + if (virJSONValueObjectGetNumberUint(jobj, "port", &port) < 0) { + parallelsParseError(); + goto cleanup; + } + + /* TODO: handle non-auto vnc mode */ + gr->type = VIR_DOMAIN_GRAPHICS_TYPE_VNC; + gr->data.vnc.port = port; + gr->data.vnc.autoport = 0; + gr->data.vnc.keymap = NULL; + gr->data.vnc.socket = NULL; + gr->data.vnc.auth.passwd = NULL; + gr->data.vnc.auth.expires = 0; + gr->data.vnc.auth.connected = 0; + + if (VIR_REALLOC_N(def->graphics, def->ngraphics + 1) < 0) { + virDomainGraphicsDefFree(gr); + goto no_memory; + } + + def->graphics[def->ngraphics++] = gr; + return 0; + + no_memory: + virReportOOMError(); + cleanup: + VIR_FREE(gr); + return ret; +} + /* * Must be called with privconn->lock held */ @@ -410,6 +472,9 @@ parallelsLoadDomain(parallelsConnPtr privconn, virJSONValuePtr jobj) if (parallelsAddDomainHardware(dom, jobj2) < 0) goto cleanup_unlock; + if (parallelsAddVNCInfo(dom, jobj) < 0) + goto cleanup_unlock; + virDomainObjUnlock(dom); return dom; -- 1.7.1

Add parallelsDomainDefineXML function, it works only for existing domains for the present. It's too hard to convert libvirt's XML domain configuration into parallels's one, so I've decided to compare virDomainDef structures: current domain definition and the one created from XML, given to the function. And change only different parameters. Only description change implemetented, changing other parameters will be implemented later. Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com> --- src/parallels/parallels_driver.c | 89 ++++++++++++++++++++++++++++++++++++++ 1 files changed, 89 insertions(+), 0 deletions(-) diff --git a/src/parallels/parallels_driver.c b/src/parallels/parallels_driver.c index 46cb85c..1e6027c 100644 --- a/src/parallels/parallels_driver.c +++ b/src/parallels/parallels_driver.c @@ -1089,6 +1089,94 @@ parallelsShutdownDomain(virDomainPtr domain) VIR_DOMAIN_EVENT_STOPPED_SHUTDOWN); } +static int +parallelsSetDescription(virDomainObjPtr dom, const char *description) +{ + parallelsDomObjPtr parallelsdom; + + parallelsdom = dom->privateData; + if (parallelsCmdRun(PRLCTL, "set", parallelsdom->uuid, + "--description", description, NULL)) + return -1; + + return 0; +} + +static int +parallelsApplyChanges(virDomainObjPtr dom, virDomainDefPtr newdef) +{ + virDomainDefPtr olddef = dom->def; + + if (newdef->description && !STREQ(olddef->description, newdef->description)) { + if (parallelsSetDescription(dom, newdef->description)) + return -1; + } + + /* TODO: compare all other parameters */ + + return 0; +} + +static virDomainPtr +parallelsDomainDefineXML(virConnectPtr conn, const char *xml) +{ + parallelsConnPtr privconn = conn->privateData; + virDomainPtr ret = NULL; + virDomainDefPtr def; + virDomainObjPtr dom = NULL, olddom = NULL; + virDomainEventPtr event = NULL; + int dupVM; + + parallelsDriverLock(privconn); + if ((def = virDomainDefParseString(privconn->caps, xml, + 1 << VIR_DOMAIN_VIRT_PARALLELS, + VIR_DOMAIN_XML_INACTIVE)) == NULL) { + parallelsError(VIR_ERR_INVALID_ARG, _("Can't parse XML desc")); + goto cleanup; + } + + if ((dupVM = virDomainObjIsDuplicate(&privconn->domains, def, 0)) < 0) { + parallelsError(VIR_ERR_INVALID_ARG, _("Already exists")); + goto cleanup; + } + + if (dupVM == 1) { + olddom = virDomainFindByUUID(&privconn->domains, def->uuid); + parallelsApplyChanges(olddom, def); + virDomainObjUnlock(olddom); + + if (!(dom = virDomainAssignDef(privconn->caps, + &privconn->domains, def, false))) { + parallelsError(VIR_ERR_INTERNAL_ERROR, _("Can't allocate domobj")); + goto cleanup; + } + + def = NULL; + } else { + parallelsError(VIR_ERR_NO_SUPPORT, _("Not implemented yet")); + goto cleanup; + } + + event = virDomainEventNewFromObj(dom, + VIR_DOMAIN_EVENT_DEFINED, + !dupVM ? + VIR_DOMAIN_EVENT_DEFINED_ADDED : + VIR_DOMAIN_EVENT_DEFINED_UPDATED); + + ret = virGetDomain(conn, dom->def->name, dom->def->uuid); + if (ret) + ret->id = dom->def->id; + + cleanup: + virDomainDefFree(def); + if (dom) + virDomainObjUnlock(dom); + if (event) + parallelsDomainEventQueue(privconn, event); + parallelsDriverUnlock(privconn); + return ret; +} + static virDriver parallelsDriver = { .no = VIR_DRV_PARALLELS, .name = "PARALLELS", @@ -1116,6 +1204,7 @@ static virDriver parallelsDriver = { .domainDestroy = parallelsDestroyDomain, /* 0.9.12 */ .domainShutdown = parallelsShutdownDomain, /* 0.9.12 */ .domainCreate = parallelsDomainCreate, /* 0.9.12 */ + .domainDefineXML = parallelsDomainDefineXML, /* 0.9.12 */ }; /** -- 1.7.1
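
Since virDomainDefineXML now works for already-registered VMs (and only the <description> element is applied so far), the natural client flow is to dump the current XML, edit the description, and feed the result back. Below is a sketch of the "feed it back" side only, not part of the patch: read_file() is a local helper rather than a libvirt API, and the XML file path comes from argv.

/* redefine.c: illustrative client for the new domainDefineXML callback */
#include <stdio.h>
#include <stdlib.h>
#include <libvirt/libvirt.h>

/* slurp a whole file into a NUL-terminated malloc'd buffer */
static char *read_file(const char *path)
{
    FILE *fp = fopen(path, "r");
    char *buf = NULL;
    long len;

    if (!fp)
        return NULL;
    if (fseek(fp, 0, SEEK_END) == 0 && (len = ftell(fp)) >= 0 &&
        fseek(fp, 0, SEEK_SET) == 0 && (buf = malloc(len + 1)) != NULL) {
        if (fread(buf, 1, len, fp) == (size_t)len) {
            buf[len] = '\0';
        } else {
            free(buf);
            buf = NULL;
        }
    }
    fclose(fp);
    return buf;
}

int main(int argc, char **argv)
{
    virConnectPtr conn;
    virDomainPtr dom;
    char *xml;

    if (argc != 2) {
        fprintf(stderr, "usage: %s domain.xml\n", argv[0]);
        return EXIT_FAILURE;
    }
    if (!(xml = read_file(argv[1])))
        return EXIT_FAILURE;
    if (!(conn = virConnectOpen("parallels:///default"))) {
        free(xml);
        return EXIT_FAILURE;
    }

    if ((dom = virDomainDefineXML(conn, xml))) {
        printf("updated definition of %s\n", virDomainGetName(dom));
        virDomainFree(dom);
    }

    free(xml);
    virConnectClose(conn);
    return EXIT_SUCCESS;
}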

Parallels Cloud Server has one serious discrepancy with libvirt: libvirt stores domain configuration files in one place, and storage files in other places (with the API of storage pools and storage volumes). Parallels Cloud Server stores all domain data in a single directory, for example, you may have domain with name fedora-15, which will be located in '/var/parallels/fedora-15.pvm', and it's hard disk image will be in '/var/parallels/fedora-15.pvm/harddisk1.hdd'. I've decided to create storage driver, which produces pseudo-volumes (xml files with volume description), and they will be 'converted' to real disk images after attaching to a VM. So if someone creates VM with one hard disk using virt-manager, at first virt-manager creates a new volume, and then defines a domain. We can lookup a volume by path in XML domain definition and find out location of new domain and size of its hard disk. Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com> --- po/POTFILES.in | 1 + src/Makefile.am | 3 +- src/parallels/parallels_driver.c | 6 +- src/parallels/parallels_driver.h | 5 + src/parallels/parallels_storage.c | 1460 +++++++++++++++++++++++++++++++++++++ src/parallels/parallels_utils.c | 24 + 6 files changed, 1496 insertions(+), 3 deletions(-) create mode 100644 src/parallels/parallels_storage.c diff --git a/po/POTFILES.in b/po/POTFILES.in index dcb0813..240becb 100644 --- a/po/POTFILES.in +++ b/po/POTFILES.in @@ -66,6 +66,7 @@ src/phyp/phyp_driver.c src/parallels/parallels_driver.c src/parallels/parallels_driver.h src/parallels/parallels_utils.c +src/parallels/parallels_storage.c src/qemu/qemu_agent.c src/qemu/qemu_bridge_filter.c src/qemu/qemu_capabilities.c diff --git a/src/Makefile.am b/src/Makefile.am index a89585d..b390cc0 100644 --- a/src/Makefile.am +++ b/src/Makefile.am @@ -482,7 +482,8 @@ HYPERV_DRIVER_EXTRA_DIST = \ PARALLELS_DRIVER_SOURCES = \ parallels/parallels_driver.h \ parallels/parallels_driver.c \ - parallels/parallels_utils.c + parallels/parallels_utils.c \ + parallels/parallels_storage.c NETWORK_DRIVER_SOURCES = \ network/bridge_driver.h network/bridge_driver.c diff --git a/src/parallels/parallels_driver.c b/src/parallels/parallels_driver.c index 1e6027c..c415082 100644 --- a/src/parallels/parallels_driver.c +++ b/src/parallels/parallels_driver.c @@ -65,13 +65,13 @@ static int parallelsStart(virDomainObjPtr privdom); static int parallelsKill(virDomainObjPtr privdom); static int parallelsStop(virDomainObjPtr privdom); -static void +void parallelsDriverLock(parallelsConnPtr driver) { virMutexLock(&driver->lock); } -static void +void parallelsDriverUnlock(parallelsConnPtr driver) { virMutexUnlock(&driver->lock); @@ -1226,6 +1226,8 @@ parallelsRegister(void) if (virRegisterDriver(¶llelsDriver) < 0) return -1; + if (parallelsStorageRegister()) + return -1; return 0; } diff --git a/src/parallels/parallels_driver.h b/src/parallels/parallels_driver.h index 2b3c956..6f06ac8 100644 --- a/src/parallels/parallels_driver.h +++ b/src/parallels/parallels_driver.h @@ -26,6 +26,7 @@ # include "domain_conf.h" # include "storage_conf.h" +# include "driver.h" # include "domain_event.h" # include "json.h" @@ -59,8 +60,12 @@ typedef struct _parallelsConn parallelsConn; typedef struct _parallelsConn *parallelsConnPtr; int parallelsRegister(void); +int parallelsStorageRegister(void); virJSONValuePtr parallelsParseOutput(const char *binary, ...) ATTRIBUTE_NONNULL(1) ATTRIBUTE_SENTINEL; int parallelsCmdRun(const char *binary, ...) 
ATTRIBUTE_NONNULL(1) ATTRIBUTE_SENTINEL; +char * parallelsAddFileExt(const char *path, const char *ext); +void parallelsDriverLock(parallelsConnPtr driver); +void parallelsDriverUnlock(parallelsConnPtr driver); #endif diff --git a/src/parallels/parallels_storage.c b/src/parallels/parallels_storage.c new file mode 100644 index 0000000..05ac95d --- /dev/null +++ b/src/parallels/parallels_storage.c @@ -0,0 +1,1460 @@ +/* + * parallels_storage.c: core driver functions for managing + * Parallels Virtuozzo Server hosts + * + * Copyright (C) 2012 Parallels, Inc. + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with this library; if not, write to the Free Software + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + * + */ + +#include <config.h> + +#include <stdlib.h> +#include <dirent.h> +#include <sys/statvfs.h> + +#include "datatypes.h" +#include "memory.h" +#include "configmake.h" +#include "storage_file.h" +#include "virterror_internal.h" + +#include "parallels_driver.h" + +#define VIR_FROM_THIS VIR_FROM_PARALLELS + +static int parallelsStorageClose(virConnectPtr conn); +static virStorageVolDefPtr parallelsStorageVolumeDefine(virStoragePoolObjPtr pool, + const char *xmldesc, + const char *xmlfile, + bool is_new); +static virStorageVolPtr parallelsStorageVolumeLookupByPathLocked(virConnectPtr + conn, + const char + *path); +static virStorageVolPtr parallelsStorageVolumeLookupByPath(virConnectPtr conn, + const char *path); +static int parallelsStoragePoolGetAlloc(virStoragePoolDefPtr def); + +static void +parallelsStorageLock(virStorageDriverStatePtr driver) +{ + virMutexLock(&driver->lock); +} + +static void +parallelsStorageUnlock(virStorageDriverStatePtr driver) +{ + virMutexUnlock(&driver->lock); +} + +static int +parallelsFindVolumes(virStoragePoolObjPtr pool) +{ + DIR *dir; + struct dirent *ent; + char *path; + + if (!(dir = opendir(pool->def->target.path))) { + virReportSystemError(errno, + _("cannot open path '%s'"), + pool->def->target.path); + goto cleanup; + } + + while ((ent = readdir(dir)) != NULL) { + if (!virFileHasSuffix(ent->d_name, ".xml")) + continue; + + if (!(path = virFileBuildPath(pool->def->target.path, + ent->d_name, NULL))) + goto no_memory; + if (!parallelsStorageVolumeDefine(pool, NULL, path, false)) + goto cleanup; + VIR_FREE(path); + } + + return 0; + no_memory: + virReportOOMError(); + cleanup: + return -1; + +} + +static virDrvOpenStatus +parallelsStorageOpen(virConnectPtr conn, + virConnectAuthPtr auth ATTRIBUTE_UNUSED, unsigned int flags) +{ + char *base = NULL; + virStorageDriverStatePtr storageState; + int privileged = (geteuid() == 0); + parallelsConnPtr privconn = conn->privateData; + virCheckFlags(VIR_CONNECT_RO, VIR_DRV_OPEN_ERROR); + + if (STRNEQ(conn->driver->name, "PARALLELS")) + return VIR_DRV_OPEN_DECLINED; + + if (VIR_ALLOC(storageState) < 0) { + virReportOOMError(); + return VIR_DRV_OPEN_ERROR; + } + + if (virMutexInit(&storageState->lock) < 0) { + 
VIR_FREE(storageState); + return VIR_DRV_OPEN_ERROR; + } + parallelsStorageLock(storageState); + + if (privileged) { + if ((base = strdup(SYSCONFDIR "/libvirt")) == NULL) + goto out_of_memory; + } else { + char *userdir = virGetUserDirectory(); + + if (!userdir) + goto error; + + if (virAsprintf(&base, "%s/.libvirt", userdir) == -1) { + VIR_FREE(userdir); + goto out_of_memory; + } + VIR_FREE(userdir); + } + + /* Configuration paths are either ~/.libvirt/storage/... (session) or + * /etc/libvirt/storage/... (system). + */ + if (virAsprintf(&storageState->configDir, + "%s/parallels-storage", base) == -1) + goto out_of_memory; + + if (virAsprintf(&storageState->autostartDir, + "%s/parallels-storage/autostart", base) == -1) + goto out_of_memory; + + VIR_FREE(base); + + if (virStoragePoolLoadAllConfigs(&privconn->pools, + storageState->configDir, + storageState->autostartDir) < 0) { + parallelsError(VIR_ERR_INTERNAL_ERROR, _("Failed to load pool configs")); + goto error; + } + + for (int i = 0; i < privconn->pools.count; i++) { + virStoragePoolObjLock(privconn->pools.objs[i]); + virStoragePoolObjPtr pool; + + pool = privconn->pools.objs[i]; + pool->active = 1; + + if (parallelsStoragePoolGetAlloc(pool->def) < 0) + goto error; + + if (parallelsFindVolumes(pool) < 0) + goto error; + + virStoragePoolObjUnlock(privconn->pools.objs[i]); + } + + parallelsStorageUnlock(storageState); + + conn->storagePrivateData = storageState; + + return VIR_DRV_OPEN_SUCCESS; + + out_of_memory: + virReportOOMError(); + error: + VIR_FREE(base); + parallelsStorageUnlock(storageState); + parallelsStorageClose(conn); + return -1; +} + +static int +parallelsStorageClose(virConnectPtr conn) +{ + parallelsConnPtr privconn = conn->privateData; + + virStorageDriverStatePtr storageState = conn->storagePrivateData; + conn->storagePrivateData = NULL; + + parallelsStorageLock(storageState); + virStoragePoolObjListFree(&privconn->pools); + VIR_FREE(storageState->configDir); + VIR_FREE(storageState->autostartDir); + parallelsStorageUnlock(storageState); + virMutexDestroy(&storageState->lock); + VIR_FREE(storageState); + + return 0; +} + +static int +parallelsStorageNumPools(virConnectPtr conn) +{ + parallelsConnPtr privconn = conn->privateData; + int numActive = 0, i; + + parallelsDriverLock(privconn); + for (i = 0; i < privconn->pools.count; i++) + if (virStoragePoolObjIsActive(privconn->pools.objs[i])) + numActive++; + parallelsDriverUnlock(privconn); + + return numActive; +} + +static int +parallelsStorageListPools(virConnectPtr conn, char **const names, int nnames) +{ + parallelsConnPtr privconn = conn->privateData; + int n = 0, i; + + parallelsDriverLock(privconn); + memset(names, 0, sizeof(*names) * nnames); + for (i = 0; i < privconn->pools.count && n < nnames; i++) { + virStoragePoolObjLock(privconn->pools.objs[i]); + if (virStoragePoolObjIsActive(privconn->pools.objs[i]) && + !(names[n++] = strdup(privconn->pools.objs[i]->def->name))) { + virStoragePoolObjUnlock(privconn->pools.objs[i]); + goto no_memory; + } + virStoragePoolObjUnlock(privconn->pools.objs[i]); + } + parallelsDriverUnlock(privconn); + + return n; + + no_memory: + virReportOOMError(); + for (n = 0; n < nnames; n++) + VIR_FREE(names[n]); + parallelsDriverUnlock(privconn); + return -1; +} + +static int +parallelsStorageNumDefinedPools(virConnectPtr conn) +{ + parallelsConnPtr privconn = conn->privateData; + int numInactive = 0, i; + + parallelsDriverLock(privconn); + for (i = 0; i < privconn->pools.count; i++) { + 
virStoragePoolObjLock(privconn->pools.objs[i]); + if (!virStoragePoolObjIsActive(privconn->pools.objs[i])) + numInactive++; + virStoragePoolObjUnlock(privconn->pools.objs[i]); + } + parallelsDriverUnlock(privconn); + + return numInactive; +} + +static int +parallelsStorageListDefinedPools(virConnectPtr conn, + char **const names, int nnames) +{ + parallelsConnPtr privconn = conn->privateData; + int n = 0, i; + + parallelsDriverLock(privconn); + memset(names, 0, sizeof(*names) * nnames); + for (i = 0; i < privconn->pools.count && n < nnames; i++) { + virStoragePoolObjLock(privconn->pools.objs[i]); + if (!virStoragePoolObjIsActive(privconn->pools.objs[i]) && + !(names[n++] = strdup(privconn->pools.objs[i]->def->name))) { + virStoragePoolObjUnlock(privconn->pools.objs[i]); + goto no_memory; + } + virStoragePoolObjUnlock(privconn->pools.objs[i]); + } + parallelsDriverUnlock(privconn); + + return n; + + no_memory: + virReportOOMError(); + for (n = 0; n < nnames; n++) + VIR_FREE(names[n]); + parallelsDriverUnlock(privconn); + return -1; +} + + +static int +parallelsStoragePoolIsActive(virStoragePoolPtr pool) +{ + parallelsConnPtr privconn = pool->conn->privateData; + virStoragePoolObjPtr obj; + int ret = -1; + + parallelsDriverLock(privconn); + obj = virStoragePoolObjFindByUUID(&privconn->pools, pool->uuid); + parallelsDriverUnlock(privconn); + if (!obj) { + parallelsError(VIR_ERR_NO_STORAGE_POOL, NULL); + goto cleanup; + } + ret = virStoragePoolObjIsActive(obj); + + cleanup: + if (obj) + virStoragePoolObjUnlock(obj); + return ret; +} + +static int +parallelsStoragePoolIsPersistent(virStoragePoolPtr pool ATTRIBUTE_UNUSED) +{ + return 1; +} + +static char * +parallelsStorageFindPoolSources(virConnectPtr conn ATTRIBUTE_UNUSED, + const char *type ATTRIBUTE_UNUSED, + const char *srcSpec ATTRIBUTE_UNUSED, + unsigned int flags) +{ + virCheckFlags(0, NULL); + + return NULL; +} + +static virStoragePoolPtr +parallelsStoragePoolLookupByUUID(virConnectPtr conn, const unsigned char *uuid) +{ + parallelsConnPtr privconn = conn->privateData; + virStoragePoolObjPtr pool; + virStoragePoolPtr ret = NULL; + + parallelsDriverLock(privconn); + pool = virStoragePoolObjFindByUUID(&privconn->pools, uuid); + parallelsDriverUnlock(privconn); + + if (pool == NULL) { + parallelsError(VIR_ERR_NO_STORAGE_POOL, NULL); + goto cleanup; + } + + ret = virGetStoragePool(conn, pool->def->name, pool->def->uuid); + + cleanup: + if (pool) + virStoragePoolObjUnlock(pool); + return ret; +} + +static virStoragePoolPtr +parallelsStoragePoolLookupByName(virConnectPtr conn, const char *name) +{ + parallelsConnPtr privconn = conn->privateData; + virStoragePoolObjPtr pool; + virStoragePoolPtr ret = NULL; + + parallelsDriverLock(privconn); + pool = virStoragePoolObjFindByName(&privconn->pools, name); + parallelsDriverUnlock(privconn); + + if (pool == NULL) { + parallelsError(VIR_ERR_NO_STORAGE_POOL, NULL); + goto cleanup; + } + + ret = virGetStoragePool(conn, pool->def->name, pool->def->uuid); + + cleanup: + if (pool) + virStoragePoolObjUnlock(pool); + return ret; +} + +static virStoragePoolPtr +parallelsStoragePoolLookupByVolume(virStorageVolPtr vol) +{ + return parallelsStoragePoolLookupByName(vol->conn, vol->pool); +} + +/* + * Fill capacity, available and allocation + * fields in pool definition. 
+ */ +static int +parallelsStoragePoolGetAlloc(virStoragePoolDefPtr def) +{ + struct statvfs sb; + + if (statvfs(def->target.path, &sb) < 0) { + virReportSystemError(errno, + _("cannot statvfs path '%s'"), + def->target.path); + return -1; + } + + def->capacity = ((unsigned long long)sb.f_frsize * + (unsigned long long)sb.f_blocks); + def->available = ((unsigned long long)sb.f_bfree * + (unsigned long long)sb.f_bsize); + def->allocation = def->capacity - def->available; + + return 0; +} + +static virStoragePoolPtr +parallelsStoragePoolDefine(virConnectPtr conn, + const char *xml, unsigned int flags) +{ + parallelsConnPtr privconn = conn->privateData; + virStoragePoolDefPtr def; + virStoragePoolObjPtr pool = NULL; + virStoragePoolPtr ret = NULL; + + virCheckFlags(0, NULL); + + parallelsDriverLock(privconn); + if (!(def = virStoragePoolDefParseString(xml))) + goto cleanup; + + if (def->type != VIR_STORAGE_POOL_DIR) { + parallelsError(VIR_ERR_NO_SUPPORT, "%s", + _("Only local directories are supported")); + goto cleanup; + } + + if (virStoragePoolObjIsDuplicate(&privconn->pools, def, 0) < 0) + goto cleanup; + + if (virStoragePoolSourceFindDuplicate(&privconn->pools, def) < 0) + goto cleanup; + + if (parallelsStoragePoolGetAlloc(def)) + goto cleanup; + + if (!(pool = virStoragePoolObjAssignDef(&privconn->pools, def))) + goto cleanup; + + if (virStoragePoolObjSaveDef(conn->storagePrivateData, pool, def) < 0) { + virStoragePoolObjRemove(&privconn->pools, pool); + def = NULL; + goto cleanup; + } + def = NULL; + + pool->configFile = strdup("\0"); + if (!pool->configFile) { + virReportOOMError(); + goto cleanup; + } + + ret = virGetStoragePool(conn, pool->def->name, pool->def->uuid); + + cleanup: + virStoragePoolDefFree(def); + if (pool) + virStoragePoolObjUnlock(pool); + parallelsDriverUnlock(privconn); + return ret; +} + +static int +parallelsStoragePoolUndefine(virStoragePoolPtr pool) +{ + parallelsConnPtr privconn = pool->conn->privateData; + virStoragePoolObjPtr privpool; + int ret = -1; + + parallelsDriverLock(privconn); + privpool = virStoragePoolObjFindByName(&privconn->pools, pool->name); + + if (privpool == NULL) { + parallelsError(VIR_ERR_INVALID_ARG, __FUNCTION__); + goto cleanup; + } + + if (virStoragePoolObjIsActive(privpool)) { + parallelsError(VIR_ERR_OPERATION_INVALID, + _("storage pool '%s' is still active"), pool->name); + goto cleanup; + } + + if (virStoragePoolObjDeleteDef(privpool) < 0) + goto cleanup; + + VIR_FREE(privpool->configFile); + + virStoragePoolObjRemove(&privconn->pools, privpool); + ret = 0; + + cleanup: + if (privpool) + virStoragePoolObjUnlock(privpool); + parallelsDriverUnlock(privconn); + return ret; +} + +static int +parallelsStoragePoolBuild(virStoragePoolPtr pool, unsigned int flags) +{ + parallelsConnPtr privconn = pool->conn->privateData; + virStoragePoolObjPtr privpool; + int ret = -1; + + virCheckFlags(0, -1); + + parallelsDriverLock(privconn); + privpool = virStoragePoolObjFindByName(&privconn->pools, pool->name); + parallelsDriverUnlock(privconn); + + if (privpool == NULL) { + parallelsError(VIR_ERR_INVALID_ARG, __FUNCTION__); + goto cleanup; + } + + if (virStoragePoolObjIsActive(privpool)) { + parallelsError(VIR_ERR_OPERATION_INVALID, + _("storage pool '%s' is already active"), pool->name); + goto cleanup; + } + ret = 0; + + cleanup: + if (privpool) + virStoragePoolObjUnlock(privpool); + return ret; +} + +static int +parallelsStoragePoolStart(virStoragePoolPtr pool, unsigned int flags) +{ + parallelsConnPtr privconn = pool->conn->privateData; + 
virStoragePoolObjPtr privpool; + int ret = -1; + + virCheckFlags(0, -1); + + parallelsDriverLock(privconn); + privpool = virStoragePoolObjFindByName(&privconn->pools, pool->name); + parallelsDriverUnlock(privconn); + + if (privpool == NULL) { + parallelsError(VIR_ERR_INVALID_ARG, __FUNCTION__); + goto cleanup; + } + + if (virStoragePoolObjIsActive(privpool)) { + parallelsError(VIR_ERR_OPERATION_INVALID, + _("storage pool '%s' is already active"), pool->name); + goto cleanup; + } + + privpool->active = 1; + ret = 0; + + cleanup: + if (privpool) + virStoragePoolObjUnlock(privpool); + return ret; +} + +static int +parallelsStoragePoolDestroy(virStoragePoolPtr pool) +{ + parallelsConnPtr privconn = pool->conn->privateData; + virStoragePoolObjPtr privpool; + int ret = -1; + + parallelsDriverLock(privconn); + privpool = virStoragePoolObjFindByName(&privconn->pools, pool->name); + + if (privpool == NULL) { + parallelsError(VIR_ERR_INVALID_ARG, __FUNCTION__); + goto cleanup; + } + + if (!virStoragePoolObjIsActive(privpool)) { + parallelsError(VIR_ERR_OPERATION_INVALID, + _("storage pool '%s' is not active"), pool->name); + goto cleanup; + } + + if (privpool->configFile == NULL) { + virStoragePoolObjRemove(&privconn->pools, privpool); + privpool = NULL; + } + ret = 0; + + cleanup: + if (privpool) + virStoragePoolObjUnlock(privpool); + parallelsDriverUnlock(privconn); + return ret; +} + + +static int +parallelsStoragePoolDelete(virStoragePoolPtr pool, unsigned int flags) +{ + parallelsConnPtr privconn = pool->conn->privateData; + virStoragePoolObjPtr privpool; + int ret = -1; + + virCheckFlags(0, -1); + + parallelsDriverLock(privconn); + privpool = virStoragePoolObjFindByName(&privconn->pools, pool->name); + parallelsDriverUnlock(privconn); + + if (privpool == NULL) { + parallelsError(VIR_ERR_INVALID_ARG, __FUNCTION__); + goto cleanup; + } + + if (virStoragePoolObjIsActive(privpool)) { + parallelsError(VIR_ERR_OPERATION_INVALID, + _("storage pool '%s' is already active"), pool->name); + goto cleanup; + } + + ret = 0; + + cleanup: + if (privpool) + virStoragePoolObjUnlock(privpool); + return ret; +} + + +static int +parallelsStoragePoolRefresh(virStoragePoolPtr pool, unsigned int flags) +{ + parallelsConnPtr privconn = pool->conn->privateData; + virStoragePoolObjPtr privpool; + int ret = -1; + + virCheckFlags(0, -1); + + parallelsDriverLock(privconn); + privpool = virStoragePoolObjFindByName(&privconn->pools, pool->name); + parallelsDriverUnlock(privconn); + + if (privpool == NULL) { + parallelsError(VIR_ERR_INVALID_ARG, __FUNCTION__); + goto cleanup; + } + + if (!virStoragePoolObjIsActive(privpool)) { + parallelsError(VIR_ERR_OPERATION_INVALID, + _("storage pool '%s' is not active"), pool->name); + goto cleanup; + } + ret = 0; + + cleanup: + if (privpool) + virStoragePoolObjUnlock(privpool); + return ret; +} + + +static int +parallelsStoragePoolGetInfo(virStoragePoolPtr pool, virStoragePoolInfoPtr info) +{ + parallelsConnPtr privconn = pool->conn->privateData; + virStoragePoolObjPtr privpool; + int ret = -1; + + parallelsDriverLock(privconn); + privpool = virStoragePoolObjFindByName(&privconn->pools, pool->name); + parallelsDriverUnlock(privconn); + + if (privpool == NULL) { + parallelsError(VIR_ERR_INVALID_ARG, __FUNCTION__); + goto cleanup; + } + + memset(info, 0, sizeof(virStoragePoolInfo)); + info->state = VIR_STORAGE_POOL_RUNNING; + info->capacity = privpool->def->capacity; + info->allocation = privpool->def->allocation; + info->available = privpool->def->available; + ret = 0; + + cleanup: + if 
(privpool) + virStoragePoolObjUnlock(privpool); + return ret; +} + +static char * +parallelsStoragePoolGetXMLDesc(virStoragePoolPtr pool, unsigned int flags) +{ + parallelsConnPtr privconn = pool->conn->privateData; + virStoragePoolObjPtr privpool; + char *ret = NULL; + + virCheckFlags(0, NULL); + + parallelsDriverLock(privconn); + privpool = virStoragePoolObjFindByName(&privconn->pools, pool->name); + parallelsDriverUnlock(privconn); + + if (privpool == NULL) { + parallelsError(VIR_ERR_INVALID_ARG, __FUNCTION__); + goto cleanup; + } + + ret = virStoragePoolDefFormat(privpool->def); + + cleanup: + if (privpool) + virStoragePoolObjUnlock(privpool); + return ret; +} + +static int +parallelsStoragePoolGetAutostart(virStoragePoolPtr pool, int *autostart) +{ + parallelsConnPtr privconn = pool->conn->privateData; + virStoragePoolObjPtr privpool; + int ret = -1; + + parallelsDriverLock(privconn); + privpool = virStoragePoolObjFindByName(&privconn->pools, pool->name); + parallelsDriverUnlock(privconn); + + if (privpool == NULL) { + parallelsError(VIR_ERR_INVALID_ARG, __FUNCTION__); + goto cleanup; + } + + if (!privpool->configFile) { + *autostart = 0; + } else { + *autostart = privpool->autostart; + } + ret = 0; + + cleanup: + if (privpool) + virStoragePoolObjUnlock(privpool); + return ret; +} + +static int +parallelsStoragePoolSetAutostart(virStoragePoolPtr pool, int autostart) +{ + parallelsConnPtr privconn = pool->conn->privateData; + virStoragePoolObjPtr privpool; + int ret = -1; + + parallelsDriverLock(privconn); + privpool = virStoragePoolObjFindByName(&privconn->pools, pool->name); + parallelsDriverUnlock(privconn); + + if (privpool == NULL) { + parallelsError(VIR_ERR_INVALID_ARG, __FUNCTION__); + goto cleanup; + } + + if (!privpool->configFile) { + parallelsError(VIR_ERR_INVALID_ARG, "%s", _("pool has no config file")); + goto cleanup; + } + + autostart = (autostart != 0); + privpool->autostart = autostart; + ret = 0; + + cleanup: + if (privpool) + virStoragePoolObjUnlock(privpool); + return ret; +} + +static int +parallelsStoragePoolNumVolumes(virStoragePoolPtr pool) +{ + parallelsConnPtr privconn = pool->conn->privateData; + virStoragePoolObjPtr privpool; + int ret = -1; + + parallelsDriverLock(privconn); + privpool = virStoragePoolObjFindByName(&privconn->pools, pool->name); + parallelsDriverUnlock(privconn); + + if (privpool == NULL) { + parallelsError(VIR_ERR_INVALID_ARG, __FUNCTION__); + goto cleanup; + } + + if (!virStoragePoolObjIsActive(privpool)) { + parallelsError(VIR_ERR_OPERATION_INVALID, + _("storage pool '%s' is not active"), pool->name); + goto cleanup; + } + + ret = privpool->volumes.count; + + cleanup: + if (privpool) + virStoragePoolObjUnlock(privpool); + return ret; +} + +static int +parallelsStoragePoolListVolumes(virStoragePoolPtr pool, + char **const names, int maxnames) +{ + parallelsConnPtr privconn = pool->conn->privateData; + virStoragePoolObjPtr privpool; + int i = 0, n = 0; + + memset(names, 0, maxnames * sizeof(*names)); + + parallelsDriverLock(privconn); + privpool = virStoragePoolObjFindByName(&privconn->pools, pool->name); + parallelsDriverUnlock(privconn); + + if (privpool == NULL) { + parallelsError(VIR_ERR_INVALID_ARG, __FUNCTION__); + goto cleanup; + } + + + if (!virStoragePoolObjIsActive(privpool)) { + parallelsError(VIR_ERR_OPERATION_INVALID, + _("storage pool '%s' is not active"), pool->name); + goto cleanup; + } + + for (i = 0; i < privpool->volumes.count && n < maxnames; i++) { + if ((names[n++] = strdup(privpool->volumes.objs[i]->name)) == NULL) { 
+ virReportOOMError(); + goto cleanup; + } + } + + virStoragePoolObjUnlock(privpool); + return n; + + cleanup: + for (n = 0; n < maxnames; n++) + VIR_FREE(names[i]); + + memset(names, 0, maxnames * sizeof(*names)); + if (privpool) + virStoragePoolObjUnlock(privpool); + return -1; +} + +static virStorageVolPtr +parallelsStorageVolumeLookupByName(virStoragePoolPtr pool, + const char *name ATTRIBUTE_UNUSED) +{ + parallelsConnPtr privconn = pool->conn->privateData; + virStoragePoolObjPtr privpool; + virStorageVolDefPtr privvol; + virStorageVolPtr ret = NULL; + + parallelsDriverLock(privconn); + privpool = virStoragePoolObjFindByName(&privconn->pools, pool->name); + parallelsDriverUnlock(privconn); + + if (privpool == NULL) { + parallelsError(VIR_ERR_INVALID_ARG, __FUNCTION__); + goto cleanup; + } + + + if (!virStoragePoolObjIsActive(privpool)) { + parallelsError(VIR_ERR_OPERATION_INVALID, + _("storage pool '%s' is not active"), pool->name); + goto cleanup; + } + + privvol = virStorageVolDefFindByName(privpool, name); + + if (!privvol) { + parallelsError(VIR_ERR_NO_STORAGE_VOL, + _("no storage vol with matching name '%s'"), name); + goto cleanup; + } + + ret = virGetStorageVol(pool->conn, privpool->def->name, + privvol->name, privvol->key); + + cleanup: + if (privpool) + virStoragePoolObjUnlock(privpool); + return ret; +} + + +static virStorageVolPtr +parallelsStorageVolumeLookupByKey(virConnectPtr conn, const char *key) +{ + parallelsConnPtr privconn = conn->privateData; + unsigned int i; + virStorageVolPtr ret = NULL; + + parallelsDriverLock(privconn); + for (i = 0; i < privconn->pools.count; i++) { + virStoragePoolObjLock(privconn->pools.objs[i]); + if (virStoragePoolObjIsActive(privconn->pools.objs[i])) { + virStorageVolDefPtr privvol = + virStorageVolDefFindByKey(privconn->pools.objs[i], key); + + if (privvol) { + ret = virGetStorageVol(conn, + privconn->pools.objs[i]->def->name, + privvol->name, privvol->key); + virStoragePoolObjUnlock(privconn->pools.objs[i]); + break; + } + } + virStoragePoolObjUnlock(privconn->pools.objs[i]); + } + parallelsDriverUnlock(privconn); + + if (!ret) + parallelsError(VIR_ERR_NO_STORAGE_VOL, + _("no storage vol with matching key '%s'"), key); + + return ret; +} + +static virStorageVolPtr +parallelsStorageVolumeLookupByPathLocked(virConnectPtr conn, const char *path) +{ + parallelsConnPtr privconn = conn->privateData; + unsigned int i; + virStorageVolPtr ret = NULL; + + for (i = 0; i < privconn->pools.count; i++) { + virStoragePoolObjLock(privconn->pools.objs[i]); + if (virStoragePoolObjIsActive(privconn->pools.objs[i])) { + virStorageVolDefPtr privvol = + virStorageVolDefFindByPath(privconn->pools.objs[i], path); + + if (privvol) { + ret = virGetStorageVol(conn, + privconn->pools.objs[i]->def->name, + privvol->name, privvol->key); + virStoragePoolObjUnlock(privconn->pools.objs[i]); + break; + } + } + virStoragePoolObjUnlock(privconn->pools.objs[i]); + } + + if (!ret) + parallelsError(VIR_ERR_NO_STORAGE_VOL, + _("no storage vol with matching path '%s'"), path); + + return ret; +} + +static virStorageVolPtr +parallelsStorageVolumeLookupByPath(virConnectPtr conn, const char *path) +{ + parallelsConnPtr privconn = conn->privateData; + virStorageVolPtr ret = NULL; + + parallelsDriverLock(privconn); + ret = parallelsStorageVolumeLookupByPathLocked(conn, path); + parallelsDriverUnlock(privconn); + + return ret; +} + +static virStorageVolDefPtr +parallelsStorageVolumeDefine(virStoragePoolObjPtr pool, + const char *xmldesc, + const char *xmlfile, bool is_new) +{ + 
virStorageVolDefPtr privvol = NULL; + virStorageVolDefPtr ret = NULL; + char *xml_path = NULL; + + if (xmlfile) + privvol = virStorageVolDefParseFile(pool->def, xmlfile); + else + privvol = virStorageVolDefParseString(pool->def, xmldesc); + if (privvol == NULL) + goto cleanup; + + if (virStorageVolDefFindByName(pool, privvol->name)) { + parallelsError(VIR_ERR_OPERATION_FAILED, + "%s", _("storage vol already exists")); + goto cleanup; + } + + if (is_new) { + /* Make sure enough space */ + if ((pool->def->allocation + privvol->allocation) > + pool->def->capacity) { + parallelsError(VIR_ERR_INTERNAL_ERROR, + _("Not enough free space in pool for volume '%s'"), + privvol->name); + goto cleanup; + } + } + + if (VIR_REALLOC_N(pool->volumes.objs, pool->volumes.count + 1) < 0) { + virReportOOMError(); + goto cleanup; + } + + if (virAsprintf(&privvol->target.path, "%s/%s", + pool->def->target.path, privvol->name) < 0) { + virReportOOMError(); + goto cleanup; + } + + privvol->key = strdup(privvol->target.path); + if (privvol->key == NULL) { + virReportOOMError(); + goto cleanup; + } + + if (is_new) { + xml_path = parallelsAddFileExt(privvol->target.path, ".xml"); + if (!xml_path) + goto cleanup; + + if (virXMLSaveFile + (xml_path, privvol->name, "volume-create", xmldesc)) { + parallelsError(VIR_ERR_OPERATION_FAILED, + "Can't create file with volume description"); + goto cleanup; + } + + pool->def->allocation += privvol->allocation; + pool->def->available = (pool->def->capacity - + pool->def->allocation); + } + + pool->volumes.objs[pool->volumes.count++] = privvol; + + ret = privvol; + privvol = NULL; + + cleanup: + virStorageVolDefFree(privvol); + VIR_FREE(xml_path); + return ret; +} + +static virStorageVolPtr +parallelsStorageVolumeCreateXML(virStoragePoolPtr pool, + const char *xmldesc, unsigned int flags) +{ + parallelsConnPtr privconn = pool->conn->privateData; + virStoragePoolObjPtr privpool; + virStorageVolPtr ret = NULL; + virStorageVolDefPtr privvol = NULL; + + virCheckFlags(0, NULL); + + parallelsDriverLock(privconn); + privpool = virStoragePoolObjFindByName(&privconn->pools, pool->name); + parallelsDriverUnlock(privconn); + + if (privpool == NULL) { + parallelsError(VIR_ERR_INVALID_ARG, __FUNCTION__); + goto cleanup; + } + + if (!virStoragePoolObjIsActive(privpool)) { + parallelsError(VIR_ERR_OPERATION_INVALID, + _("storage pool '%s' is not active"), pool->name); + goto cleanup; + } + + privvol = parallelsStorageVolumeDefine(privpool, xmldesc, NULL, true); + if (!privvol) + goto cleanup; + + ret = virGetStorageVol(pool->conn, privpool->def->name, + privvol->name, privvol->key); + cleanup: + if (privpool) + virStoragePoolObjUnlock(privpool); + return ret; +} + +static virStorageVolPtr +parallelsStorageVolumeCreateXMLFrom(virStoragePoolPtr pool, + const char *xmldesc, + virStorageVolPtr clonevol, + unsigned int flags) +{ + parallelsConnPtr privconn = pool->conn->privateData; + virStoragePoolObjPtr privpool; + virStorageVolDefPtr privvol = NULL, origvol = NULL; + virStorageVolPtr ret = NULL; + + virCheckFlags(0, NULL); + + parallelsDriverLock(privconn); + privpool = virStoragePoolObjFindByName(&privconn->pools, pool->name); + parallelsDriverUnlock(privconn); + + if (privpool == NULL) { + parallelsError(VIR_ERR_INVALID_ARG, __FUNCTION__); + goto cleanup; + } + + if (!virStoragePoolObjIsActive(privpool)) { + parallelsError(VIR_ERR_OPERATION_INVALID, + _("storage pool '%s' is not active"), pool->name); + goto cleanup; + } + + privvol = virStorageVolDefParseString(privpool->def, xmldesc); + if 
(privvol == NULL) + goto cleanup; + + if (virStorageVolDefFindByName(privpool, privvol->name)) { + parallelsError(VIR_ERR_OPERATION_FAILED, + "%s", _("storage vol already exists")); + goto cleanup; + } + + origvol = virStorageVolDefFindByName(privpool, clonevol->name); + if (!origvol) { + parallelsError(VIR_ERR_NO_STORAGE_VOL, + _("no storage vol with matching name '%s'"), + clonevol->name); + goto cleanup; + } + + /* Make sure enough space */ + if ((privpool->def->allocation + privvol->allocation) > + privpool->def->capacity) { + parallelsError(VIR_ERR_INTERNAL_ERROR, + _("Not enough free space in pool for volume '%s'"), + privvol->name); + goto cleanup; + } + privpool->def->available = (privpool->def->capacity - + privpool->def->allocation); + + if (VIR_REALLOC_N(privpool->volumes.objs, + privpool->volumes.count + 1) < 0) { + virReportOOMError(); + goto cleanup; + } + + if (virAsprintf(&privvol->target.path, "%s/%s", + privpool->def->target.path, privvol->name) == -1) { + virReportOOMError(); + goto cleanup; + } + + privvol->key = strdup(privvol->target.path); + if (privvol->key == NULL) { + virReportOOMError(); + goto cleanup; + } + + privpool->def->allocation += privvol->allocation; + privpool->def->available = (privpool->def->capacity - + privpool->def->allocation); + + privpool->volumes.objs[privpool->volumes.count++] = privvol; + + ret = virGetStorageVol(pool->conn, privpool->def->name, + privvol->name, privvol->key); + privvol = NULL; + + cleanup: + virStorageVolDefFree(privvol); + if (privpool) + virStoragePoolObjUnlock(privpool); + return ret; +} + +static int +parallelsStorageVolumeDelete(virStorageVolPtr vol, unsigned int flags) +{ + parallelsConnPtr privconn = vol->conn->privateData; + virStoragePoolObjPtr privpool; + virStorageVolDefPtr privvol; + int i; + int ret = -1; + char *xml_path = NULL; + + virCheckFlags(0, -1); + + parallelsDriverLock(privconn); + privpool = virStoragePoolObjFindByName(&privconn->pools, vol->pool); + parallelsDriverUnlock(privconn); + + if (privpool == NULL) { + parallelsError(VIR_ERR_INVALID_ARG, __FUNCTION__); + goto cleanup; + } + + + privvol = virStorageVolDefFindByName(privpool, vol->name); + + if (privvol == NULL) { + parallelsError(VIR_ERR_NO_STORAGE_VOL, + _("no storage vol with matching name '%s'"), vol->name); + goto cleanup; + } + + if (!virStoragePoolObjIsActive(privpool)) { + parallelsError(VIR_ERR_OPERATION_INVALID, + _("storage pool '%s' is not active"), vol->pool); + goto cleanup; + } + + + privpool->def->allocation -= privvol->allocation; + privpool->def->available = (privpool->def->capacity - + privpool->def->allocation); + + for (i = 0; i < privpool->volumes.count; i++) { + if (privpool->volumes.objs[i] == privvol) { + xml_path = parallelsAddFileExt(privvol->target.path, ".xml"); + if (!xml_path) + goto cleanup; + + if (unlink(xml_path)) { + parallelsError(VIR_ERR_OPERATION_FAILED, + _("Can't remove file '%s'"), xml_path); + goto cleanup; + } + + virStorageVolDefFree(privvol); + + if (i < (privpool->volumes.count - 1)) + memmove(privpool->volumes.objs + i, + privpool->volumes.objs + i + 1, + sizeof(*(privpool->volumes.objs)) * + (privpool->volumes.count - (i + 1))); + + if (VIR_REALLOC_N(privpool->volumes.objs, + privpool->volumes.count - 1) < 0) { + ; /* Failure to reduce memory allocation isn't fatal */ + } + privpool->volumes.count--; + + break; + } + } + ret = 0; + + cleanup: + if (privpool) + virStoragePoolObjUnlock(privpool); + VIR_FREE(xml_path); + return ret; +} + + +static int +parallelsStorageVolumeTypeForPool(int 
pooltype) +{ + + switch (pooltype) { + case VIR_STORAGE_POOL_DIR: + case VIR_STORAGE_POOL_FS: + case VIR_STORAGE_POOL_NETFS: + return VIR_STORAGE_VOL_FILE; + default: + return VIR_STORAGE_VOL_BLOCK; + } +} + +static int +parallelsStorageVolumeGetInfo(virStorageVolPtr vol, virStorageVolInfoPtr info) +{ + parallelsConnPtr privconn = vol->conn->privateData; + virStoragePoolObjPtr privpool; + virStorageVolDefPtr privvol; + int ret = -1; + + parallelsDriverLock(privconn); + privpool = virStoragePoolObjFindByName(&privconn->pools, vol->pool); + parallelsDriverUnlock(privconn); + + if (privpool == NULL) { + parallelsError(VIR_ERR_INVALID_ARG, __FUNCTION__); + goto cleanup; + } + + privvol = virStorageVolDefFindByName(privpool, vol->name); + + if (privvol == NULL) { + parallelsError(VIR_ERR_NO_STORAGE_VOL, + _("no storage vol with matching name '%s'"), vol->name); + goto cleanup; + } + + if (!virStoragePoolObjIsActive(privpool)) { + parallelsError(VIR_ERR_OPERATION_INVALID, + _("storage pool '%s' is not active"), vol->pool); + goto cleanup; + } + + memset(info, 0, sizeof(*info)); + info->type = parallelsStorageVolumeTypeForPool(privpool->def->type); + info->capacity = privvol->capacity; + info->allocation = privvol->allocation; + ret = 0; + + cleanup: + if (privpool) + virStoragePoolObjUnlock(privpool); + return ret; +} + +static char * +parallelsStorageVolumeGetXMLDesc(virStorageVolPtr vol, unsigned int flags) +{ + parallelsConnPtr privconn = vol->conn->privateData; + virStoragePoolObjPtr privpool; + virStorageVolDefPtr privvol; + char *ret = NULL; + + virCheckFlags(0, NULL); + + parallelsDriverLock(privconn); + privpool = virStoragePoolObjFindByName(&privconn->pools, vol->pool); + parallelsDriverUnlock(privconn); + + if (privpool == NULL) { + parallelsError(VIR_ERR_INVALID_ARG, __FUNCTION__); + goto cleanup; + } + + privvol = virStorageVolDefFindByName(privpool, vol->name); + + if (privvol == NULL) { + parallelsError(VIR_ERR_NO_STORAGE_VOL, + _("no storage vol with matching name '%s'"), vol->name); + goto cleanup; + } + + if (!virStoragePoolObjIsActive(privpool)) { + parallelsError(VIR_ERR_OPERATION_INVALID, + _("storage pool '%s' is not active"), vol->pool); + goto cleanup; + } + + ret = virStorageVolDefFormat(privpool->def, privvol); + + cleanup: + if (privpool) + virStoragePoolObjUnlock(privpool); + return ret; +} + +static char * +parallelsStorageVolumeGetPath(virStorageVolPtr vol) +{ + parallelsConnPtr privconn = vol->conn->privateData; + virStoragePoolObjPtr privpool; + virStorageVolDefPtr privvol; + char *ret = NULL; + + parallelsDriverLock(privconn); + privpool = virStoragePoolObjFindByName(&privconn->pools, vol->pool); + parallelsDriverUnlock(privconn); + + if (privpool == NULL) { + parallelsError(VIR_ERR_INVALID_ARG, __FUNCTION__); + goto cleanup; + } + + privvol = virStorageVolDefFindByName(privpool, vol->name); + + if (privvol == NULL) { + parallelsError(VIR_ERR_NO_STORAGE_VOL, + _("no storage vol with matching name '%s'"), vol->name); + goto cleanup; + } + + if (!virStoragePoolObjIsActive(privpool)) { + parallelsError(VIR_ERR_OPERATION_INVALID, + _("storage pool '%s' is not active"), vol->pool); + goto cleanup; + } + + ret = strdup(privvol->target.path); + if (ret == NULL) + virReportOOMError(); + + cleanup: + if (privpool) + virStoragePoolObjUnlock(privpool); + return ret; +} + +static virStorageDriver parallelsStorageDriver = { + .name = "PARALLELS", + .open = parallelsStorageOpen, /* 0.9.12 */ + .close = parallelsStorageClose, /* 0.9.12 */ + + .numOfPools = 
parallelsStorageNumPools, /* 0.9.12 */ + .listPools = parallelsStorageListPools, /* 0.9.12 */ + .numOfDefinedPools = parallelsStorageNumDefinedPools, /* 0.9.12 */ + .listDefinedPools = parallelsStorageListDefinedPools, /* 0.9.12 */ + .findPoolSources = parallelsStorageFindPoolSources, /* 0.9.12 */ + .poolLookupByName = parallelsStoragePoolLookupByName, /* 0.9.12 */ + .poolLookupByUUID = parallelsStoragePoolLookupByUUID, /* 0.9.12 */ + .poolLookupByVolume = parallelsStoragePoolLookupByVolume, /* 0.9.12 */ + .poolDefineXML = parallelsStoragePoolDefine, /* 0.9.12 */ + .poolBuild = parallelsStoragePoolBuild, /* 0.9.12 */ + .poolUndefine = parallelsStoragePoolUndefine, /* 0.9.12 */ + .poolCreate = parallelsStoragePoolStart, /* 0.9.12 */ + .poolDestroy = parallelsStoragePoolDestroy, /* 0.9.12 */ + .poolDelete = parallelsStoragePoolDelete, /* 0.9.12 */ + .poolRefresh = parallelsStoragePoolRefresh, /* 0.9.12 */ + .poolGetInfo = parallelsStoragePoolGetInfo, /* 0.9.12 */ + .poolGetXMLDesc = parallelsStoragePoolGetXMLDesc, /* 0.9.12 */ + .poolGetAutostart = parallelsStoragePoolGetAutostart, /* 0.9.12 */ + .poolSetAutostart = parallelsStoragePoolSetAutostart, /* 0.9.12 */ + .poolNumOfVolumes = parallelsStoragePoolNumVolumes, /* 0.9.12 */ + .poolListVolumes = parallelsStoragePoolListVolumes, /* 0.9.12 */ + + .volLookupByName = parallelsStorageVolumeLookupByName, /* 0.9.12 */ + .volLookupByKey = parallelsStorageVolumeLookupByKey, /* 0.9.12 */ + .volLookupByPath = parallelsStorageVolumeLookupByPath, /* 0.9.12 */ + .volCreateXML = parallelsStorageVolumeCreateXML, /* 0.9.12 */ + .volCreateXMLFrom = parallelsStorageVolumeCreateXMLFrom, /* 0.9.12 */ + .volDelete = parallelsStorageVolumeDelete, /* 0.9.12 */ + .volGetInfo = parallelsStorageVolumeGetInfo, /* 0.9.12 */ + .volGetXMLDesc = parallelsStorageVolumeGetXMLDesc, /* 0.9.12 */ + .volGetPath = parallelsStorageVolumeGetPath, /* 0.9.12 */ + .poolIsActive = parallelsStoragePoolIsActive, /* 0.9.12 */ + .poolIsPersistent = parallelsStoragePoolIsPersistent, /* 0.9.12 */ +}; + +int +parallelsStorageRegister(void) +{ + if (virRegisterStorageDriver(¶llelsStorageDriver) < 0) + return -1; + + return 0; +} diff --git a/src/parallels/parallels_utils.c b/src/parallels/parallels_utils.c index e4220e9..72178d9 100644 --- a/src/parallels/parallels_utils.c +++ b/src/parallels/parallels_utils.c @@ -30,6 +30,8 @@ #include "parallels_driver.h" +#define VIR_FROM_THIS VIR_FROM_PARALLELS + static int parallelsDoCmdRun(char **outbuf, const char *binary, va_list list) { @@ -105,3 +107,25 @@ parallelsCmdRun(const char *binary, ...) return ret; } + +/* + * Return new file path in malloced string created by + * concatenating first and second function arguments. + */ +char * +parallelsAddFileExt(const char *path, const char *ext) +{ + char *new_path = NULL; + size_t len = strlen(path) + strlen(ext) + 1; + + if (VIR_ALLOC_N(new_path, len) < 0) { + virReportOOMError(); + return NULL; + } + + if (!virStrcpy(new_path, path, len)) + return NULL; + strcat(new_path, ext); + + return new_path; +} -- 1.7.1
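To make the intended usage flow concrete, here is an illustrative client-side sketch (not part of the patch): a management application first creates a pseudo-volume in a Parallels directory pool and then uses the returned path as the disk source when defining a domain. The pool name "parallels-pool" and the parallels:///system URI are assumptions made for the example.

#include <stdio.h>
#include <stdlib.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn = virConnectOpen("parallels:///system");
    if (conn == NULL)
        return 1;

    /* Assumed pre-defined directory pool, e.g. backed by /var/parallels. */
    virStoragePoolPtr pool = virStoragePoolLookupByName(conn, "parallels-pool");
    if (pool == NULL) {
        virConnectClose(conn);
        return 1;
    }

    /* The driver records this as an .xml pseudo-volume; the real disk
     * image is only created once the volume is attached to a VM. */
    const char *volxml =
        "<volume>"
        "  <name>fedora-15.pvm</name>"
        "  <capacity unit='G'>8</capacity>"
        "</volume>";

    virStorageVolPtr vol = virStorageVolCreateXML(pool, volxml, 0);
    if (vol != NULL) {
        char *path = virStorageVolGetPath(vol);
        /* This path goes into <source file='...'/> of the domain XML. */
        printf("volume path: %s\n", path);
        free(path);
        virStorageVolFree(vol);
    }

    virStoragePoolFree(pool);
    virConnectClose(conn);
    return 0;
}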

To create a new VM in Parallels Clud Server we should issue "prlctl create" command, and give path to the directory, where VM should be created. VM's storage will be in that directory later. So in this first version find out location of first VM's hard disk and create VM there. Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com> --- src/parallels/parallels_driver.c | 78 ++++++++++++++++++++++++++++++++++++- src/parallels/parallels_driver.h | 4 ++ src/parallels/parallels_storage.c | 6 +-- 3 files changed, 82 insertions(+), 6 deletions(-) diff --git a/src/parallels/parallels_driver.c b/src/parallels/parallels_driver.c index c415082..b097599 100644 --- a/src/parallels/parallels_driver.c +++ b/src/parallels/parallels_driver.c @@ -1117,6 +1117,74 @@ parallelsApplyChanges(virDomainObjPtr dom, virDomainDefPtr newdef) return 0; } +static int +parallelsCreateVm(virConnectPtr conn, virDomainDefPtr def) +{ + parallelsConnPtr privconn = conn->privateData; + int i; + virStorageVolDefPtr privvol = NULL; + virStoragePoolObjPtr pool = NULL; + virStorageVolPtr vol = NULL; + char uuidstr[VIR_UUID_STRING_BUFLEN]; + + for (i = 0; i < def->ndisks; i++) { + if (def->disks[i]->device != VIR_DOMAIN_DISK_DEVICE_DISK) + continue; + + vol = parallelsStorageVolumeLookupByPathLocked(conn, def->disks[i]->src); + if (!vol) { + parallelsError(VIR_ERR_INVALID_ARG, + _("Can't find volume with path '%s'"), + def->disks[i]->src); + return -1; + } + break; + } + + if (!vol) { + parallelsError(VIR_ERR_INVALID_ARG, + _("Can't create VM without hard disks")); + return -1; + } + + pool = virStoragePoolObjFindByName(&privconn->pools, vol->pool); + if (!pool) { + parallelsError(VIR_ERR_INVALID_ARG, + _("Can't find storage pool with name '%s'"), + vol->pool); + goto error; + } + + privvol = virStorageVolDefFindByPath(pool, def->disks[i]->src); + if (!privvol) { + parallelsError(VIR_ERR_INVALID_ARG, + _("Can't find storage volume definition for path '%s'"), + def->disks[i]->src); + goto error2; + } + + virUUIDFormat(def->uuid, uuidstr); + + if (parallelsCmdRun(PRLCTL, "create", def->name, "--dst", + pool->def->target.path, "--no-hdd", + "--uuid", uuidstr, NULL) < 0) + goto error2; + + if (parallelsCmdRun(PRLCTL, "set", def->name, "--vnc-mode", "auto", NULL) < 0) + goto error2; + + virStoragePoolObjUnlock(pool); + virUnrefStorageVol(vol); + + return 0; + + error2: + virStoragePoolObjUnlock(pool); + error: + virUnrefStorageVol(vol); + return -1; +} + static virDomainPtr parallelsDomainDefineXML(virConnectPtr conn, const char *xml) { @@ -1153,8 +1221,16 @@ parallelsDomainDefineXML(virConnectPtr conn, const char *xml) def = NULL; } else { - parallelsError(VIR_ERR_NO_SUPPORT, _("Not implemented yet")); + if (parallelsCreateVm(conn, def)) goto cleanup; + if (parallelsLoadDomains(privconn, def->name)) + goto cleanup; + dom = virDomainFindByName(&privconn->domains, def->name); + if (!dom) { + parallelsError(VIR_ERR_INTERNAL_ERROR, + _("Domain is not defined after creation")); + goto cleanup; + } } event = virDomainEventNewFromObj(dom, diff --git a/src/parallels/parallels_driver.h b/src/parallels/parallels_driver.h index 6f06ac8..e32ad55 100644 --- a/src/parallels/parallels_driver.h +++ b/src/parallels/parallels_driver.h @@ -67,5 +67,9 @@ int parallelsCmdRun(const char *binary, ...) 
ATTRIBUTE_NONNULL(1) ATTRIBUTE_SENT char * parallelsAddFileExt(const char *path, const char *ext); void parallelsDriverLock(parallelsConnPtr driver); void parallelsDriverUnlock(parallelsConnPtr driver); +virStorageVolPtr parallelsStorageVolumeLookupByPathLocked(virConnectPtr + conn, + const char + *path); #endif diff --git a/src/parallels/parallels_storage.c b/src/parallels/parallels_storage.c index 05ac95d..825afa7 100644 --- a/src/parallels/parallels_storage.c +++ b/src/parallels/parallels_storage.c @@ -41,10 +41,6 @@ static virStorageVolDefPtr parallelsStorageVolumeDefine(virStoragePoolObjPtr poo const char *xmldesc, const char *xmlfile, bool is_new); -static virStorageVolPtr parallelsStorageVolumeLookupByPathLocked(virConnectPtr - conn, - const char - *path); static virStorageVolPtr parallelsStorageVolumeLookupByPath(virConnectPtr conn, const char *path); static int parallelsStoragePoolGetAlloc(virStoragePoolDefPtr def); @@ -939,7 +935,7 @@ parallelsStorageVolumeLookupByKey(virConnectPtr conn, const char *key) return ret; } -static virStorageVolPtr +virStorageVolPtr parallelsStorageVolumeLookupByPathLocked(virConnectPtr conn, const char *path) { parallelsConnPtr privconn = conn->privateData; -- 1.7.1
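Continuing the sketch from the previous patch (again illustrative, not part of the series): once the pseudo-volume exists, defining a domain whose first hard disk source is the volume path is what lets the driver derive the --dst directory for "prlctl create". The helper below shows only the disk-related part of the XML; a complete definition would also need memory, vcpu and os elements acceptable to the XML parser.

#include <stdio.h>
#include <libvirt/libvirt.h>

/* Define a new Parallels VM whose first hard disk points at the
 * pseudo-volume path obtained from the storage driver. */
int define_new_vm(virConnectPtr conn, const char *volpath)
{
    char xml[2048];

    snprintf(xml, sizeof(xml),
             "<domain type='parallels'>"
             "  <name>fedora-15</name>"
             "  <devices>"
             "    <disk type='file' device='disk'>"
             "      <source file='%s'/>"
             "      <target dev='hda'/>"
             "    </disk>"
             "  </devices>"
             "</domain>", volpath);

    virDomainPtr dom = virDomainDefineXML(conn, xml);
    if (dom == NULL)
        return -1;

    virDomainFree(dom);
    return 0;
}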

On Mon, Jun 25, 2012 at 12:57:55PM +0400, Dmitry Guryanov wrote:
Parallels Cloud Server is a virtualization solution that allows users to simultaneously run multiple virtual machines and containers on the same physical server.
More information can be found here: http://www.parallels.com/products/pcs/ Also beta version of Parallels Cloud Server can be downloaded there.
Okay, basically the main obstacle from the last review, which was the unavailability of the hypervisor, is now fixed :-)
Dmitry Guryanov (8): parallels: add driver skeleton parallels: add functions to list domains and get info parallels: implement functions for domain life cycle management parallels: get info about serial ports parallels: add support of VNC remote display parallels: implement virDomainDefineXML operation for existing domains parallels: add storage driver parallels: implement VM creation
I would not rush to push it before the freeze; it's a big piece of new functionality and I would rather have a bit of time before a release with it. But starting next week we should be good to go committing the parts of the driver that get ACK'ed in the meantime. thanks ! Daniel -- Daniel Veillard | libxml Gnome XML XSLT toolkit http://xmlsoft.org/ daniel@veillard.com | Rpmfind RPM search engine http://rpmfind.net/ http://veillard.com/ | virtualization library http://libvirt.org/

On Mon, Jun 25, 2012 at 09:49:03PM +0800, Daniel Veillard wrote:
On Mon, Jun 25, 2012 at 12:57:55PM +0400, Dmitry Guryanov wrote:
Parallels Cloud Server is a virtualization solution that allows users to simultaneously run multiple virtual machines and containers on the same physical server.
More information can be found here: http://www.parallels.com/products/pcs/ Also beta version of Parallels Cloud Server can be downloaded there.
Okay, basically the main obstable on the last review which was the unavailability of the hypervisor is now fixed :-)
Dmitry Guryanov (8): parallels: add driver skeleton parallels: add functions to list domains and get info parallels: implement functions for domain life cycle management parallels: get info about serial ports parallels: add support of VNC remote display parallels: implement virDomainDefineXML operation for existing domains parallels: add storage driver parallels: implement VM creation
I woud not rush it to push it before the freeze, it's a big new functionality and i would rather have a bit of time before a release with it. But starting next week we should be good to go commiting the parts of the drivers which would get ACK'ed in the meantime.
I'd suggest when we merge the Parallels driver, we can change our version number to be 0.10.0, since we've been on 0.9.x for a long time now, and new hypervisor drivers have been our motivation for version number changes in the past :-) Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|

On Mon, Jun 25, 2012 at 02:52:15PM +0100, Daniel P. Berrange wrote:
On Mon, Jun 25, 2012 at 09:49:03PM +0800, Daniel Veillard wrote: [...]
I woud not rush it to push it before the freeze, it's a big new functionality and i would rather have a bit of time before a release with it. But starting next week we should be good to go commiting the parts of the drivers which would get ACK'ed in the meantime.
I'd suggest when we merge the Parallels driver, we can change our version number to be 0.10.0, since we've been on 0.9.x for a long time now, and new hypervisor drivers have been our motivation for version number changes in the past :-)
Good point, agreed :-) Daniel -- Daniel Veillard | libxml Gnome XML XSLT toolkit http://xmlsoft.org/ daniel@veillard.com | Rpmfind RPM search engine http://rpmfind.net/ http://veillard.com/ | virtualization library http://libvirt.org/

On Mon, Jun 25, 2012 at 02:52:15PM +0100, Daniel P. Berrange wrote:
On Mon, Jun 25, 2012 at 09:49:03PM +0800, Daniel Veillard wrote:
On Mon, Jun 25, 2012 at 12:57:55PM +0400, Dmitry Guryanov wrote:
Parallels Cloud Server is a virtualization solution that allows users to simultaneously run multiple virtual machines and containers on the same physical server.
More information can be found here: http://www.parallels.com/products/pcs/ Also beta version of Parallels Cloud Server can be downloaded there.
Okay, basically the main obstable on the last review which was the unavailability of the hypervisor is now fixed :-)
Dmitry Guryanov (8): parallels: add driver skeleton parallels: add functions to list domains and get info parallels: implement functions for domain life cycle management parallels: get info about serial ports parallels: add support of VNC remote display parallels: implement virDomainDefineXML operation for existing domains parallels: add storage driver parallels: implement VM creation
I woud not rush it to push it before the freeze, it's a big new functionality and i would rather have a bit of time before a release with it. But starting next week we should be good to go commiting the parts of the drivers which would get ACK'ed in the meantime.
I'd suggest when we merge the Parallels driver, we can change our version number to be 0.10.0, since we've been on 0.9.x for a long time now, and new hypervisor drivers have been our motivation for version number changes in the past :-)
Okay, this should affect patches 2, 3, 6 and 7, where 0.9.12 will have to be replaced by 0.10.0, but that can be done easily when this gets applied to the tree next week, Daniel -- Daniel Veillard | libxml Gnome XML XSLT toolkit http://xmlsoft.org/ daniel@veillard.com | Rpmfind RPM search engine http://rpmfind.net/ http://veillard.com/ | virtualization library http://libvirt.org/

On 06/26/2012 10:54 AM, Daniel Veillard wrote:
On Mon, Jun 25, 2012 at 02:52:15PM +0100, Daniel P. Berrange wrote:
On Mon, Jun 25, 2012 at 09:49:03PM +0800, Daniel Veillard wrote:
On Mon, Jun 25, 2012 at 12:57:55PM +0400, Dmitry Guryanov wrote:
Parallels Cloud Server is a virtualization solution that allows users to simultaneously run multiple virtual machines and containers on the same physical server.
More information can be found here: http://www.parallels.com/products/pcs/ Also beta version of Parallels Cloud Server can be downloaded there. Okay, basically the main obstable on the last review which was the unavailability of the hypervisor is now fixed :-)
Dmitry Guryanov (8): parallels: add driver skeleton parallels: add functions to list domains and get info parallels: implement functions for domain life cycle management parallels: get info about serial ports parallels: add support of VNC remote display parallels: implement virDomainDefineXML operation for existing domains parallels: add storage driver parallels: implement VM creation I woud not rush it to push it before the freeze, it's a big new functionality and i would rather have a bit of time before a release with it. But starting next week we should be good to go commiting the parts of the drivers which would get ACK'ed in the meantime. I'd suggest when we merge the Parallels driver, we can change our version number to be 0.10.0, since we've been on 0.9.x for a long time now, and new hypervisor drivers have been our motivation for version number changes in the past :-) Okay, this should affect patches 2,3,6 and 7 where 0.9.12 will have to be replaced by 0.10.0, but that can be done easilly when this gets applied to the tree next week,
Daniel Thanks a lot !
I think it's better for me to wait for comments until next week before writing new code :) -- Dmitry Guryanov

On 06/26/2012 10:54 AM, Daniel Veillard wrote:
On Mon, Jun 25, 2012 at 02:52:15PM +0100, Daniel P. Berrange wrote:
On Mon, Jun 25, 2012 at 09:49:03PM +0800, Daniel Veillard wrote:
On Mon, Jun 25, 2012 at 12:57:55PM +0400, Dmitry Guryanov wrote:
Parallels Cloud Server is a virtualization solution that allows users to simultaneously run multiple virtual machines and containers on the same physical server.
More information can be found here: http://www.parallels.com/products/pcs/ Also beta version of Parallels Cloud Server can be downloaded there. Okay, basically the main obstable on the last review which was the unavailability of the hypervisor is now fixed :-)
Dmitry Guryanov (8): parallels: add driver skeleton parallels: add functions to list domains and get info parallels: implement functions for domain life cycle management parallels: get info about serial ports parallels: add support of VNC remote display parallels: implement virDomainDefineXML operation for existing domains parallels: add storage driver parallels: implement VM creation I woud not rush it to push it before the freeze, it's a big new functionality and i would rather have a bit of time before a release with it. But starting next week we should be good to go commiting the parts of the drivers which would get ACK'ed in the meantime. I'd suggest when we merge the Parallels driver, we can change our version number to be 0.10.0, since we've been on 0.9.x for a long time now, and new hypervisor drivers have been our motivation for version number changes in the past :-) Okay, this should affect patches 2,3,6 and 7 where 0.9.12 will have to be replaced by 0.10.0, but that can be done easilly when this gets applied to the tree next week,
Daniel Hello,
There is a conflict with the first patch - the file mingw32-libvirt.spec.in has been removed from the tree, so I'll resend the patches. -- Dmitry Guryanov

On Wed, Jul 04, 2012 at 09:42:11PM +0400, Dmitry Guryanov wrote:
On 06/26/2012 10:54 AM, Daniel Veillard wrote:
On Mon, Jun 25, 2012 at 02:52:15PM +0100, Daniel P. Berrange wrote:
On Mon, Jun 25, 2012 at 09:49:03PM +0800, Daniel Veillard wrote:
On Mon, Jun 25, 2012 at 12:57:55PM +0400, Dmitry Guryanov wrote:
Parallels Cloud Server is a virtualization solution that allows users to simultaneously run multiple virtual machines and containers on the same physical server.
More information can be found here: http://www.parallels.com/products/pcs/ Also beta version of Parallels Cloud Server can be downloaded there. Okay, basically the main obstable on the last review which was the unavailability of the hypervisor is now fixed :-)
Dmitry Guryanov (8): parallels: add driver skeleton parallels: add functions to list domains and get info parallels: implement functions for domain life cycle management parallels: get info about serial ports parallels: add support of VNC remote display parallels: implement virDomainDefineXML operation for existing domains parallels: add storage driver parallels: implement VM creation I woud not rush it to push it before the freeze, it's a big new functionality and i would rather have a bit of time before a release with it. But starting next week we should be good to go commiting the parts of the drivers which would get ACK'ed in the meantime. I'd suggest when we merge the Parallels driver, we can change our version number to be 0.10.0, since we've been on 0.9.x for a long time now, and new hypervisor drivers have been our motivation for version number changes in the past :-) Okay, this should affect patches 2,3,6 and 7 where 0.9.12 will have to be replaced by 0.10.0, but that can be done easilly when this gets applied to the tree next week,
Daniel Hello,
There is a conflict with first patch - file mingw32-libvirt.spec.in removed from tree, I'll resend patches.
It was actually just renamed to mingw-libvirt.spec.in - and re-arranged to build for 32 & 64 bit Windows binaries. You ought to be able to adapt your change to apply to the new file without much trouble. Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|

On Wed, Jul 04, 2012 at 09:42:11PM +0400, Dmitry Guryanov wrote:
On 06/26/2012 10:54 AM, Daniel Veillard wrote:
On Mon, Jun 25, 2012 at 02:52:15PM +0100, Daniel P. Berrange wrote:
On Mon, Jun 25, 2012 at 09:49:03PM +0800, Daniel Veillard wrote:
On Mon, Jun 25, 2012 at 12:57:55PM +0400, Dmitry Guryanov wrote:
Parallels Cloud Server is a virtualization solution that allows users to simultaneously run multiple virtual machines and containers on the same physical server.
More information can be found here: http://www.parallels.com/products/pcs/ Also beta version of Parallels Cloud Server can be downloaded there. Okay, basically the main obstable on the last review which was the unavailability of the hypervisor is now fixed :-)
Dmitry Guryanov (8): parallels: add driver skeleton parallels: add functions to list domains and get info parallels: implement functions for domain life cycle management parallels: get info about serial ports parallels: add support of VNC remote display parallels: implement virDomainDefineXML operation for existing domains parallels: add storage driver parallels: implement VM creation I woud not rush it to push it before the freeze, it's a big new functionality and i would rather have a bit of time before a release with it. But starting next week we should be good to go commiting the parts of the drivers which would get ACK'ed in the meantime. I'd suggest when we merge the Parallels driver, we can change our version number to be 0.10.0, since we've been on 0.9.x for a long time now, and new hypervisor drivers have been our motivation for version number changes in the past :-) Okay, this should affect patches 2,3,6 and 7 where 0.9.12 will have to be replaced by 0.10.0, but that can be done easilly when this gets applied to the tree next week,
Daniel Hello,
There is a conflict with first patch - file mingw32-libvirt.spec.in removed from tree, I'll resend patches.
Okay, thanks ! Daniel -- Daniel Veillard | libxml Gnome XML XSLT toolkit http://xmlsoft.org/ daniel@veillard.com | Rpmfind RPM search engine http://rpmfind.net/ http://veillard.com/ | virtualization library http://libvirt.org/