[libvirt] [PATCH 0/9] Add basic driver for Parallels Virtuozzo Server

Parallels Virtuozzo Server is a cloud-ready virtualization solution that allows users to simultaneously run multiple virtual machines and containers on the same physical server. The current name of this product is Parallels Server Bare Metal; more information about it can be found at http://www.parallels.com/products/server/baremetal/sp/. This driver will work with PVS version 6.0, whose beta is scheduled for Q2 2012.

Dmitry Guryanov (9):
  pvs: add driver skeleton
  util: add functions for iterating over json object
  pvs: add functions to list domains and get info
  pvs: implement functions for domain life cycle management
  pvs: get info about serial ports
  pvs: add support of VNC remote display
  pvs: implement virDomainDefineXML operation for existing domains
  pvs: add storage driver
  pvs: implement VM creation

 cfg.mk                      |    1 +
 configure.ac                |   20 +
 docs/drvpvs.html.in         |   28 +
 include/libvirt/virterror.h |    1 +
 libvirt.spec.in             |    7 +
 mingw32-libvirt.spec.in     |    6 +
 po/POTFILES.in              |    1 +
 src/Makefile.am             |   23 +
 src/conf/domain_conf.c      |    3 +-
 src/conf/domain_conf.h      |    1 +
 src/driver.h                |    1 +
 src/libvirt.c               |   12 +
 src/pvs/pvs_driver.c        | 1259 +++++++++++++++++++++++++++++++++++++
 src/pvs/pvs_driver.h        |   76 +++
 src/pvs/pvs_storage.c       | 1460 +++++++++++++++++++++++++++++++++++++++++++
 src/pvs/pvs_utils.c         |  139 ++++
 src/util/json.c             |   30 +
 src/util/json.h             |    4 +
 src/util/virterror.c        |    3 +
 19 files changed, 3074 insertions(+), 1 deletions(-)
 create mode 100644 docs/drvpvs.html.in
 create mode 100644 src/pvs/pvs_driver.c
 create mode 100644 src/pvs/pvs_driver.h
 create mode 100644 src/pvs/pvs_storage.c
 create mode 100644 src/pvs/pvs_utils.c

Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
---
 cfg.mk                      |    1 +
 configure.ac                |   20 ++++
 docs/drvpvs.html.in         |   28 +++++
 include/libvirt/virterror.h |    1 +
 libvirt.spec.in             |    7 +
 mingw32-libvirt.spec.in     |    6 +
 po/POTFILES.in              |    1 +
 src/Makefile.am             |   21 ++++
 src/conf/domain_conf.c      |    3 +-
 src/conf/domain_conf.h      |    1 +
 src/driver.h                |    1 +
 src/libvirt.c               |   12 ++
 src/pvs/pvs_driver.c        |  263 +++++++++++++++++++++++++++++++++++++++
 src/pvs/pvs_driver.h        |   49 ++++++++
 src/util/virterror.c        |    3 +
 15 files changed, 416 insertions(+), 1 deletions(-)
 create mode 100644 docs/drvpvs.html.in
 create mode 100644 src/pvs/pvs_driver.c
 create mode 100644 src/pvs/pvs_driver.h

diff --git a/cfg.mk b/cfg.mk
index 71e9a1d..ca6c0a5 100644
--- a/cfg.mk
+++ b/cfg.mk
@@ -502,6 +502,7 @@ msg_gen_function += PHYP_ERROR
 msg_gen_function += VIR_ERROR
 msg_gen_function += VMX_ERROR
 msg_gen_function += XENXS_ERROR
+msg_gen_function += PVS_ERROR
 msg_gen_function += eventReportError
 msg_gen_function += ifaceError
 msg_gen_function += interfaceReportError
diff --git a/configure.ac b/configure.ac
index 3f5b3ff..214f450 100644
--- a/configure.ac
+++ b/configure.ac
@@ -330,6 +330,8 @@ AC_ARG_WITH([esx],
   AC_HELP_STRING([--with-esx], [add ESX support @<:@default=check@:>@]),[],[with_esx=check])
 AC_ARG_WITH([hyperv],
   AC_HELP_STRING([--with-hyperv], [add Hyper-V support @<:@default=check@:>@]),[],[with_hyperv=check])
+AC_ARG_WITH([pvs],
+  AC_HELP_STRING([--with-pvs], [add Parallels Virtuozzo Server support @<:@default=check@:>@]),[],[with_pvs=check])
 AC_ARG_WITH([test],
   AC_HELP_STRING([--with-test], [add test driver support @<:@default=yes@:>@]),[],[with_test=yes])
 AC_ARG_WITH([remote],
@@ -794,6 +796,23 @@ fi
 AM_CONDITIONAL([WITH_LXC], [test "$with_lxc" = "yes"])

 dnl
+dnl Checks for the PVS driver
+dnl
+
+if test "$with_pvs" = "check"; then
+    with_pvs=$with_linux
+fi
+
+if test "$with_pvs" = "yes" && test "$with_linux" = "no"; then
+    AC_MSG_ERROR([The PVS driver can be enabled on Linux only.])
+fi
+
+if test "$with_pvs" = "yes"; then
+    AC_DEFINE_UNQUOTED([WITH_PVS], 1, [whether PVS driver is enabled])
+fi
+AM_CONDITIONAL([WITH_PVS], [test "$with_pvs" = "yes"])
+
+dnl
 dnl check for shell that understands <> redirection without truncation,
 dnl needed by src/qemu/qemu_monitor_{text,json}.c.
 dnl
@@ -2680,6 +2699,7 @@ AC_MSG_NOTICE([      LXC: $with_lxc])
 AC_MSG_NOTICE([     PHYP: $with_phyp])
 AC_MSG_NOTICE([      ESX: $with_esx])
 AC_MSG_NOTICE([  Hyper-V: $with_hyperv])
+AC_MSG_NOTICE([      PVS: $with_pvs])
 AC_MSG_NOTICE([     Test: $with_test])
 AC_MSG_NOTICE([   Remote: $with_remote])
 AC_MSG_NOTICE([  Network: $with_network])
diff --git a/docs/drvpvs.html.in b/docs/drvpvs.html.in
new file mode 100644
index 0000000..dae3c77
--- /dev/null
+++ b/docs/drvpvs.html.in
@@ -0,0 +1,28 @@
+<html><body>
+    <h1>Parallels Virtuozzo Server driver</h1>
+    <ul id="toc"></ul>
+    <p>
+        The libvirt PVS driver can manage Parallels Virtuozzo Server starting from version 6.0.
+    </p>
+
+
+    <h2><a name="project">Project Links</a></h2>
+    <ul>
+      <li>
+        The <a href="http://www.parallels.com/products/server/baremetal/sp/">Parallels Virtuozzo Server</a> virtualization solution.
+      </li>
+    </ul>
+
+
+    <h2><a name="uri">Connections to the Parallels Virtuozzo Server driver</a></h2>
+    <p>
+        The libvirt PVS driver is a single-instance privileged driver, with a driver name of 'pvs'. Some example connection URIs for the libvirt driver are:
+    </p>
+<pre>
+pvs:///default                     (local access)
+pvs+unix:///default                (local access)
+pvs://example.com/default          (remote access, TLS/x509)
+pvs+tcp://example.com/default      (remote access, SASL/Kerberos)
+pvs+ssh://root@example.com/default (remote access, SSH tunnelled)
+</pre>
+</body></html>
diff --git a/include/libvirt/virterror.h b/include/libvirt/virterror.h
index e04d29e..8a7e8ba 100644
--- a/include/libvirt/virterror.h
+++ b/include/libvirt/virterror.h
@@ -87,6 +87,7 @@ typedef enum {
     VIR_FROM_CAPABILITIES = 44, /* Error from capabilities */
     VIR_FROM_URI = 45,          /* Error from URI handling */
     VIR_FROM_AUTH = 46,         /* Error from auth handling */
+    VIR_FROM_PVS = 47,          /* Error from PVS */

 } virErrorDomain;
diff --git a/libvirt.spec.in b/libvirt.spec.in
index e7e0a55..a4e2460 100644
--- a/libvirt.spec.in
+++ b/libvirt.spec.in
@@ -65,6 +65,7 @@
 %define with_esx           0%{!?_without_esx:1}
 %define with_hyperv        0%{!?_without_hyperv:1}
 %define with_xenapi        0%{!?_without_xenapi:1}
+%define with_pvs           0%{!?_without_pvs:1}

 # Then the secondary host drivers, which run inside libvirtd
 %define with_network       0%{!?_without_network:%{server_drivers}}
@@ -124,6 +125,7 @@
 %define with_xenapi 0
 %define with_libxl 0
 %define with_hyperv 0
+%define with_pvs 0
 %endif

 # Although earlier Fedora has systemd, libvirt still used sysvinit
@@ -790,6 +792,10 @@ of recent versions of Linux (and other OSes).
 %define _without_vmware --without-vmware
 %endif

+%if ! %{with_pvs}
+%define _without_pvs --without-pvs
+%endif
+
 %if ! %{with_polkit}
 %define _without_polkit --without-polkit
 %endif
@@ -920,6 +926,7 @@ autoreconf -if
            %{?_without_esx} \
            %{?_without_hyperv} \
            %{?_without_vmware} \
+           %{?_without_pvs} \
            %{?_without_network} \
            %{?_with_rhel5_api} \
            %{?_without_storage_fs} \
diff --git a/mingw32-libvirt.spec.in b/mingw32-libvirt.spec.in
index 4d23c75..7d82bc2 100644
--- a/mingw32-libvirt.spec.in
+++ b/mingw32-libvirt.spec.in
@@ -18,6 +18,7 @@
 # missing libwsman, so can't build hyper-v
 %define with_hyperv 0%{!?_without_hyperv:0}
 %define with_xenapi 0%{!?_without_xenapi:1}
+%define with_pvs 0%{!?_without_pvs:0}

 # RHEL ships ESX but not PowerHypervisor, HyperV, or libxenserver (xenapi)
 %if 0%{?rhel}
@@ -92,6 +93,10 @@ MinGW Windows libvirt virtualization library.
 %define _without_xenapi --without-xenapi
 %endif

+%if ! %{with_pvs}
+%define _without_pvs --without-pvs
+%endif
+
 %if 0%{?enable_autotools}
 autoreconf -if
 %endif
@@ -113,6 +118,7 @@ autoreconf -if
   %{?_without_esx} \
   %{?_without_hyperv} \
   --without-vmware \
+  --without-pvs \
   --without-netcf \
   --without-audit \
   --without-dtrace
diff --git a/po/POTFILES.in b/po/POTFILES.in
index 5d5739a..ddfd3e3 100644
--- a/po/POTFILES.in
+++ b/po/POTFILES.in
@@ -163,6 +163,7 @@ src/xenapi/xenapi_driver.c
 src/xenapi/xenapi_utils.c
 src/xenxs/xen_sxpr.c
 src/xenxs/xen_xm.c
+src/pvs/pvs_driver.c
 tools/console.c
 tools/libvirt-guests.init.sh
 tools/virsh.c
diff --git a/src/Makefile.am b/src/Makefile.am
index a2aae9d..3cbd385 100644
--- a/src/Makefile.am
+++ b/src/Makefile.am
@@ -465,6 +465,10 @@ HYPERV_DRIVER_EXTRA_DIST = \
 		hyperv/hyperv_wmi_generator.py \
 		$(HYPERV_DRIVER_GENERATED)

+PVS_DRIVER_SOURCES = \
+		pvs/pvs_driver.h \
+		pvs/pvs_driver.c
+
 NETWORK_DRIVER_SOURCES = \
 		network/bridge_driver.h network/bridge_driver.c
@@ -930,6 +934,22 @@ endif
 libvirt_driver_hyperv_la_SOURCES = $(HYPERV_DRIVER_SOURCES)
 endif

+if WITH_PVS
+if WITH_DRIVER_MODULES
+mod_LTLIBRARIES += libvirt_driver_pvs.la
+else
+noinst_LTLIBRARIES += libvirt_driver_pvs.la
+libvirt_la_BUILT_LIBADD += libvirt_driver_pvs.la
+endif
+libvirt_driver_pvs_la_CFLAGS = -I$(top_srcdir)/src/conf $(AM_CFLAGS)
+libvirt_driver_pvs_la_LDFLAGS = $(AM_LDFLAGS)
+if WITH_DRIVER_MODULES
+libvirt_driver_pvs_la_LIBADD = ../gnulib/lib/libgnu.la
+libvirt_driver_pvs_la_LDFLAGS += -module -avoid-version
+endif
+libvirt_driver_pvs_la_SOURCES = $(PVS_DRIVER_SOURCES)
+endif
+
 if WITH_NETWORK
 if WITH_DRIVER_MODULES
 mod_LTLIBRARIES += libvirt_driver_network.la
@@ -1130,6 +1150,7 @@ EXTRA_DIST += \
 		$(ESX_DRIVER_EXTRA_DIST) \
 		$(HYPERV_DRIVER_SOURCES) \
 		$(HYPERV_DRIVER_EXTRA_DIST) \
+		$(PVS_DRIVER_SOURCES) \
 		$(NETWORK_DRIVER_SOURCES) \
 		$(INTERFACE_DRIVER_SOURCES) \
 		$(STORAGE_DRIVER_SOURCES) \
diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index c6b97e1..4385fc7 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -93,7 +93,8 @@ VIR_ENUM_IMPL(virDomainVirt, VIR_DOMAIN_VIRT_LAST,
               "vmware",
               "hyperv",
               "vbox",
-              "phyp")
+              "phyp",
+              "pvs")

 VIR_ENUM_IMPL(virDomainBoot, VIR_DOMAIN_BOOT_LAST,
               "fd",
diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h
index 0eed60e..7983e11 100644
--- a/src/conf/domain_conf.h
+++ b/src/conf/domain_conf.h
@@ -157,6 +157,7 @@ enum virDomainVirtType {
     VIR_DOMAIN_VIRT_HYPERV,
     VIR_DOMAIN_VIRT_VBOX,
     VIR_DOMAIN_VIRT_PHYP,
+    VIR_DOMAIN_VIRT_PVS,

     VIR_DOMAIN_VIRT_LAST,
 };
diff --git a/src/driver.h b/src/driver.h
index 03d249b..1bcbd46 100644
--- a/src/driver.h
+++ b/src/driver.h
@@ -31,6 +31,7 @@ typedef enum {
     VIR_DRV_VMWARE = 13,
     VIR_DRV_LIBXL = 14,
     VIR_DRV_HYPERV = 15,
+    VIR_DRV_PVS = 16,
 } virDrvNo;
diff --git a/src/libvirt.c b/src/libvirt.c
index 16d1fd5..881dfb6 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -76,6 +76,9 @@
 # ifdef WITH_XENAPI
 #  include "xenapi/xenapi_driver.h"
 # endif
+# ifdef WITH_PVS
+#  include "pvs/pvs_driver.h"
+# endif
 #endif

 #define VIR_FROM_THIS VIR_FROM_NONE
@@ -454,6 +457,9 @@ virInitialize(void)
 # ifdef WITH_XENAPI
     virDriverLoadModule("xenapi");
 # endif
+# ifdef WITH_PVS
+    virDriverLoadModule("pvs");
+# endif
 # ifdef WITH_REMOTE
     virDriverLoadModule("remote");
 # endif
@@ -485,6 +491,9 @@ virInitialize(void)
 # ifdef WITH_XENAPI
     if (xenapiRegister() == -1) return -1;
 # endif
+# ifdef WITH_PVS
+    if (pvsRegister() == -1) return -1;
+# endif
 # ifdef WITH_REMOTE
     if (remoteRegister () == -1) return -1;
 # endif
@@ -1214,6 +1223,9 @@ do_open (const char *name,
 #ifndef WITH_XENAPI
              STRCASEEQ(ret->uri->scheme, "xenapi") ||
 #endif
+#ifndef WITH_PVS
+             STRCASEEQ(ret->uri->scheme, "pvs") ||
+#endif
              false)) {
             virReportErrorHelper(VIR_FROM_NONE, VIR_ERR_INVALID_ARG,
                                  __FILE__, __FUNCTION__, __LINE__,
diff --git a/src/pvs/pvs_driver.c b/src/pvs/pvs_driver.c
new file mode 100644
index 0000000..33bfa21
--- /dev/null
+++ b/src/pvs/pvs_driver.c
@@ -0,0 +1,263 @@
+/*
+ * pvs_driver.c: core driver functions for managing
+ * Parallels Virtuozzo Server hosts
+ *
+ * Copyright (C) 2012 Parallels, Inc.
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307  USA
+ *
+ */
+
+#include <config.h>
+
+#include <sys/types.h>
+#include <sys/poll.h>
+#include <limits.h>
+#include <string.h>
+#include <stdio.h>
+#include <stdarg.h>
+#include <stdlib.h>
+#include <unistd.h>
+#include <errno.h>
+#include <sys/utsname.h>
+#include <sys/stat.h>
+#include <fcntl.h>
+#include <paths.h>
+#include <pwd.h>
+#include <stdio.h>
+#include <sys/wait.h>
+#include <sys/time.h>
+#include <dirent.h>
+#include <sys/statvfs.h>
+
+#include "datatypes.h"
+#include "virterror_internal.h"
+#include "memory.h"
+#include "util.h"
+#include "logging.h"
+#include "command.h"
+#include "configmake.h"
+#include "storage_file.h"
+#include "nodeinfo.h"
+#include "json.h"
+
+#include "pvs_driver.h"
+
+#define VIR_FROM_THIS VIR_FROM_PVS
+
+static virCapsPtr pvsBuildCapabilities(void);
+static int pvsClose(virConnectPtr conn);
+
+static void
+pvsDriverLock(pvsConnPtr driver)
+{
+    virMutexLock(&driver->lock);
+}
+
+static void
+pvsDriverUnlock(pvsConnPtr driver)
+{
+    virMutexUnlock(&driver->lock);
+}
+
+static int
+pvsDefaultConsoleType(const char *ostype ATTRIBUTE_UNUSED)
+{
+    return VIR_DOMAIN_CHR_CONSOLE_TARGET_TYPE_SERIAL;
+}
+
+static virCapsPtr
+pvsBuildCapabilities(void)
+{
+    virCapsPtr caps;
+    virCapsGuestPtr guest;
+    struct utsname utsname;
+    uname(&utsname);
+
+    if ((caps = virCapabilitiesNew(utsname.machine, 0, 0)) == NULL)
+        goto no_memory;
+
+    if (nodeCapsInitNUMA(caps) < 0)
+        goto no_memory;
+
+    virCapabilitiesSetMacPrefix(caps, (unsigned char[]) {
+                                0x42, 0x1C, 0x00});
+
+    if ((guest = virCapabilitiesAddGuest(caps, "hvm", "x86_64",
+                                         64, "parallels",
+                                         NULL, 0, NULL)) == NULL)
+        goto no_memory;
+
+    if (virCapabilitiesAddGuestDomain(guest,
+                                      "pvs", NULL, NULL, 0, NULL) == NULL)
+        goto no_memory;
+
+    caps->defaultConsoleTargetType = pvsDefaultConsoleType;
+    return caps;
+
+  no_memory:
+    virReportOOMError();
+    virCapabilitiesFree(caps);
+    return NULL;
+}
+
+static char *
+pvsGetCapabilities(virConnectPtr conn)
+{
+    pvsConnPtr privconn = conn->privateData;
+    char *xml;
+
+    pvsDriverLock(privconn);
+    if ((xml = virCapabilitiesFormatXML(privconn->caps)) == NULL)
+        virReportOOMError();
+    pvsDriverUnlock(privconn);
+    return xml;
+}
+
+static int
+pvsOpenDefault(virConnectPtr conn)
+{
+    pvsConnPtr privconn;
+
+    if (VIR_ALLOC(privconn) < 0) {
+        virReportOOMError();
+        return VIR_DRV_OPEN_ERROR;
+    }
+    if (virMutexInit(&privconn->lock) < 0) {
+        pvsError(VIR_ERR_INTERNAL_ERROR,
+                 "%s", _("cannot initialize mutex"));
+        goto error;
+    }
+
+    pvsDriverLock(privconn);
+    conn->privateData = privconn;
+    pvsDriverUnlock(privconn);
+
+    if (!(privconn->caps = pvsBuildCapabilities()))
+        goto error;
+
+    if (virDomainObjListInit(&privconn->domains) < 0)
+        goto error;
+
+    return VIR_DRV_OPEN_SUCCESS;
+
+  error:
+    virDomainObjListDeinit(&privconn->domains);
+    virCapabilitiesFree(privconn->caps);
+    virStoragePoolObjListFree(&privconn->pools);
+    pvsDriverUnlock(privconn);
+    conn->privateData = NULL;
+    VIR_FREE(privconn);
+    return VIR_DRV_OPEN_ERROR;
+}
+
+static virDrvOpenStatus
+pvsOpen(virConnectPtr conn,
+        virConnectAuthPtr auth ATTRIBUTE_UNUSED, unsigned int flags)
+{
+    int ret;
+    pvsConnPtr privconn;
+
+    virCheckFlags(VIR_CONNECT_RO, VIR_DRV_OPEN_ERROR);
+
+    if (!conn->uri)
+        return VIR_DRV_OPEN_DECLINED;
+
+    if (!conn->uri->scheme || STRNEQ(conn->uri->scheme, "pvs"))
+        return VIR_DRV_OPEN_DECLINED;
+
+    /* Remote driver should handle these. */
+    if (conn->uri->server)
+        return VIR_DRV_OPEN_DECLINED;
+
+    /* From this point on, the connection is for us. */
+    if (!conn->uri->path
+        || conn->uri->path[0] == '\0'
+        || (conn->uri->path[0] == '/' && conn->uri->path[1] == '\0')) {
+        pvsError(VIR_ERR_INVALID_ARG,
+                 "%s", _("pvsOpen: supply a path or use pvs:///default"));
+        return VIR_DRV_OPEN_ERROR;
+    }
+
+    if (STREQ(conn->uri->path, "/default"))
+        ret = pvsOpenDefault(conn);
+    else
+        return VIR_DRV_OPEN_DECLINED;
+
+    if (ret != VIR_DRV_OPEN_SUCCESS)
+        return ret;
+
+    privconn = conn->privateData;
+    pvsDriverLock(privconn);
+    privconn->domainEventState = virDomainEventStateNew();
+    if (!privconn->domainEventState) {
+        pvsDriverUnlock(privconn);
+        pvsClose(conn);
+        return VIR_DRV_OPEN_ERROR;
+    }
+
+    pvsDriverUnlock(privconn);
+    return VIR_DRV_OPEN_SUCCESS;
+}
+
+static int
+pvsClose(virConnectPtr conn)
+{
+    pvsConnPtr privconn = conn->privateData;
+
+    pvsDriverLock(privconn);
+    virCapabilitiesFree(privconn->caps);
+    virDomainObjListDeinit(&privconn->domains);
+    virDomainEventStateFree(privconn->domainEventState);
+    conn->privateData = NULL;
+
+    pvsDriverUnlock(privconn);
+    virMutexDestroy(&privconn->lock);
+
+    VIR_FREE(privconn);
+    return 0;
+}
+
+static int
+pvsGetVersion(virConnectPtr conn ATTRIBUTE_UNUSED, unsigned long *hvVer)
+{
+    /* TODO */
+    *hvVer = 6;
+    return 0;
+}
+
+static virDriver pvsDriver = {
+    .no = VIR_DRV_PVS,
+    .name = "PVS",
+    .open = pvsOpen,            /* 0.9.11 */
+    .close = pvsClose,          /* 0.9.11 */
+    .version = pvsGetVersion,   /* 0.9.11 */
+    .getHostname = virGetHostname,      /* 0.9.11 */
+    .nodeGetInfo = nodeGetInfo, /* 0.9.11 */
+    .getCapabilities = pvsGetCapabilities,      /* 0.9.11 */
+};
+
+/**
+ * pvsRegister:
+ *
+ * Registers the pvs driver
+ */
+int
+pvsRegister(void)
+{
+    if (virRegisterDriver(&pvsDriver) < 0)
+        return -1;
+
+    return 0;
+}
diff --git a/src/pvs/pvs_driver.h b/src/pvs/pvs_driver.h
new file mode 100644
index 0000000..289cc28
--- /dev/null
+++ b/src/pvs/pvs_driver.h
@@ -0,0 +1,49 @@
+/*
+ * pvs_driver.h: core driver functions for managing
+ * Parallels Virtuozzo Server hosts
+ *
+ * Copyright (C) 2012 Parallels, Inc.
+ *
+ * This library is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU Lesser General Public
+ * License as published by the Free Software Foundation; either
+ * version 2.1 of the License, or (at your option) any later version.
+ *
+ * This library is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * Lesser General Public License for more details.
+ *
+ * You should have received a copy of the GNU Lesser General Public
+ * License along with this library; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307  USA
+ *
+ */
+
+#ifndef PVS_DRIVER_H
+# define PVS_DRIVER_H
+
+
+#include "domain_conf.h"
+#include "storage_conf.h"
+#include "domain_event.h"
+
+#define pvsError(code, ...) \
+    virReportErrorHelper(VIR_FROM_PVS, code, __FILE__, \
+                         __FUNCTION__, __LINE__, __VA_ARGS__)
+
+struct _pvsConn {
+    virMutex lock;
+    virDomainObjList domains;
+    virStoragePoolObjList pools;
+    virCapsPtr caps;
+    virDomainEventStatePtr domainEventState;
+};
+
+typedef struct _pvsConn pvsConn;
+
+typedef struct _pvsConn *pvsConnPtr;
+
+int pvsRegister(void);
+
+#endif
diff --git a/src/util/virterror.c b/src/util/virterror.c
index ff9a36f..4976c08 100644
--- a/src/util/virterror.c
+++ b/src/util/virterror.c
@@ -175,6 +175,9 @@ static const char *virErrorDomainName(virErrorDomain domain) {
         case VIR_FROM_HYPERV:
             dom = "Hyper-V ";
             break;
+        case VIR_FROM_PVS:
+            dom = "Parallels Virtuozzo Server ";
+            break;
         case VIR_FROM_CAPABILITIES:
             dom = "Capabilities ";
             break;
-- 
1.7.1

Add the functions virJSONValueObjectKeysNumber, virJSONValueObjectGetKey and virJSONValueObjectGetValue, which allow iterating over all fields of a JSON object: you can get the number of fields, and then look up each field's name and value by index.

Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
---
 src/util/json.c |   30 ++++++++++++++++++++++++++++++
 src/util/json.h |    4 ++++
 2 files changed, 34 insertions(+), 0 deletions(-)

diff --git a/src/util/json.c b/src/util/json.c
index a85f580..9109e06 100644
--- a/src/util/json.c
+++ b/src/util/json.c
@@ -428,6 +428,36 @@ virJSONValuePtr virJSONValueObjectGet(virJSONValuePtr object, const char *key)
     return NULL;
 }

+int virJSONValueObjectKeysNumber(virJSONValuePtr object)
+{
+    if (object->type != VIR_JSON_TYPE_OBJECT)
+        return -1;
+
+    return object->data.object.npairs;
+}
+
+const char *virJSONValueObjectGetKey(virJSONValuePtr object, unsigned int n)
+{
+    if (object->type != VIR_JSON_TYPE_OBJECT)
+        return NULL;
+
+    if (n >= object->data.object.npairs)
+        return NULL;
+
+    return object->data.object.pairs[n].key;
+}
+
+virJSONValuePtr virJSONValueObjectGetValue(virJSONValuePtr object, unsigned int n)
+{
+    if (object->type != VIR_JSON_TYPE_OBJECT)
+        return NULL;
+
+    if (n >= object->data.object.npairs)
+        return NULL;
+
+    return object->data.object.pairs[n].value;
+}
+
 int virJSONValueArraySize(virJSONValuePtr array)
 {
     if (array->type != VIR_JSON_TYPE_ARRAY)
diff --git a/src/util/json.h b/src/util/json.h
index 4572654..2677ffc 100644
--- a/src/util/json.h
+++ b/src/util/json.h
@@ -99,6 +99,10 @@ virJSONValuePtr virJSONValueObjectGet(virJSONValuePtr object, const char *key);
 int virJSONValueArraySize(virJSONValuePtr object);
 virJSONValuePtr virJSONValueArrayGet(virJSONValuePtr object, unsigned int element);

+int virJSONValueObjectKeysNumber(virJSONValuePtr object);
+const char *virJSONValueObjectGetKey(virJSONValuePtr object, unsigned int n);
+virJSONValuePtr virJSONValueObjectGetValue(virJSONValuePtr object, unsigned int n);
+
 const char *virJSONValueGetString(virJSONValuePtr object);
 int virJSONValueGetNumberInt(virJSONValuePtr object, int *value);
 int virJSONValueGetNumberUint(virJSONValuePtr object, unsigned int *value);
-- 
1.7.1

The PVS driver is 'stateless', like the vmware and openvz drivers. It collects information about domains during startup using the command-line utility prlctl.

VMs in PVS are identified by UUIDs or unique names, which map to the respective fields of the virDomainDef structure. Currently only basic info is implemented, such as the description, the number of virtual CPUs and the amount of memory. Querying device information will be added in the next patches.

PVS doesn't support non-persistent domains - you can't run a domain from only a disk image; it must always be registered in the system.

The functions for querying domain info have been copied from the test driver with some changes - they extract the needed data from the previously created list of virDomainObj objects.

Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com>
---
 src/Makefile.am      |    3 +-
 src/pvs/pvs_driver.c |  504 +++++++++++++++++++++++++++++++++++++++++++++++++-
 src/pvs/pvs_driver.h |   17 ++
 src/pvs/pvs_utils.c  |  101 ++++++++++
 4 files changed, 623 insertions(+), 2 deletions(-)
 create mode 100644 src/pvs/pvs_utils.c

diff --git a/src/Makefile.am b/src/Makefile.am
index 3cbd385..bc9efcf 100644
--- a/src/Makefile.am
+++ b/src/Makefile.am
@@ -467,7 +467,8 @@ HYPERV_DRIVER_EXTRA_DIST = \
 PVS_DRIVER_SOURCES = \
 		pvs/pvs_driver.h \
-		pvs/pvs_driver.c
+		pvs/pvs_driver.c \
+		pvs/pvs_utils.c

 NETWORK_DRIVER_SOURCES = \
 		network/bridge_driver.h network/bridge_driver.c
diff --git a/src/pvs/pvs_driver.c b/src/pvs/pvs_driver.c
index 33bfa21..c854664 100644
--- a/src/pvs/pvs_driver.c
+++ b/src/pvs/pvs_driver.c
@@ -51,12 +51,13 @@
 #include "configmake.h"
 #include "storage_file.h"
 #include "nodeinfo.h"
-#include "json.h"
+#include "domain_conf.h"

 #include "pvs_driver.h"

 #define VIR_FROM_THIS VIR_FROM_PVS

+void pvsFreeDomObj(void *p);
 static virCapsPtr pvsBuildCapabilities(void);
 static int pvsClose(virConnectPtr conn);

@@ -78,6 +79,14 @@ pvsDefaultConsoleType(const char *ostype ATTRIBUTE_UNUSED)
     return VIR_DOMAIN_CHR_CONSOLE_TARGET_TYPE_SERIAL;
 }

+void
+pvsFreeDomObj(void *p)
+{
+    pvsDomObjPtr pdom = (pvsDomObjPtr) p;
+
+    VIR_FREE(pdom);
+};
+
 static virCapsPtr
 pvsBuildCapabilities(void)
 {
@@ -126,6 +135,206 @@ pvsGetCapabilities(virConnectPtr conn)
     return xml;
 }

+/*
+ * Must be called with privconn->lock held
+ */
+static virDomainObjPtr
+pvsLoadDomain(pvsConnPtr privconn, virJSONValuePtr jobj)
+{
+    virDomainObjPtr dom = NULL;
+    virDomainDefPtr def = NULL;
+    pvsDomObjPtr pdom = NULL;
+    virJSONValuePtr jobj2, jobj3;
+    const char *tmp;
+    unsigned int x;
+
+    if (VIR_ALLOC(def) < 0)
+        goto cleanup;
+
+    def->virtType = VIR_DOMAIN_VIRT_PVS;
+    def->id = -1;
+
+    tmp = virJSONValueObjectGetString(jobj, "Name");
+    if (!tmp) {
+        pvsParseError();
+        goto cleanup;
+    }
+    if (!(def->name = strdup(tmp)))
+        goto no_memory;
+
+    tmp = virJSONValueObjectGetString(jobj, "ID");
+    if (!tmp) {
+        pvsParseError();
+        goto cleanup;
+    }
+
+    if (virUUIDParse(tmp, def->uuid) == -1) {
+        pvsError(VIR_ERR_INTERNAL_ERROR, "%s",
+                 _("UUID in config file malformed"));
+        goto cleanup;
+    }
+
+    tmp = virJSONValueObjectGetString(jobj, "Description");
+    if (!tmp) {
+        pvsParseError();
+        goto cleanup;
+    }
+    if (!(def->description = strdup(tmp)))
+        goto no_memory;
+
+    jobj2 = virJSONValueObjectGet(jobj, "Hardware");
+    if (!jobj2) {
+        pvsParseError();
+        goto cleanup;
+    }
+
+    jobj3 = virJSONValueObjectGet(jobj2, "cpu");
+    if (!jobj3) {
+        pvsParseError();
+        goto cleanup;
+    }
+
+    if (virJSONValueObjectGetNumberUint(jobj3, "cpus", &x) < 0) {
+        pvsParseError();
+        goto cleanup;
+    }
+    def->vcpus = x;
+    def->maxvcpus = x;
+
+    jobj3 = virJSONValueObjectGet(jobj2, "memory");
+    if (!jobj3) {
+        pvsParseError();
+        goto cleanup;
+    }
+
+    tmp = virJSONValueObjectGetString(jobj3, "size");
+
+    def->mem.max_balloon = atoi(tmp);
+    def->mem.max_balloon <<= 10;
+    def->mem.cur_balloon = def->mem.max_balloon;
+
+    if (!(def->os.type = strdup("hvm")))
+        goto no_memory;
+
+    if (!(def->os.init = strdup("/sbin/init")))
+        goto no_memory;
+
+    if (!(dom = virDomainAssignDef(privconn->caps,
+                                   &privconn->domains, def, false)))
+        goto cleanup;
+    /* dom is locked here */
+
+    if (VIR_ALLOC(pdom))
+        goto no_memory_unlock;
+    dom->privateDataFreeFunc = pvsFreeDomObj;
+    dom->privateData = pdom;
+
+    if (virJSONValueObjectGetNumberUint(jobj, "EnvID", &x) < 0)
+        goto cleanup_unlock;
+    pdom->id = x;
+    tmp = virJSONValueObjectGetString(jobj, "ID");
+    if (!tmp) {
+        pvsParseError();
+        goto cleanup_unlock;
+    }
+    if (!(pdom->uuid = strdup(tmp)))
+        goto no_memory_unlock;
+
+    tmp = virJSONValueObjectGetString(jobj, "OS");
+    if (!tmp)
+        goto cleanup_unlock;
+    if (!(pdom->os = strdup(tmp)))
+        goto no_memory_unlock;
+
+    dom->persistent = 1;
+
+    tmp = virJSONValueObjectGetString(jobj, "State");
+    if (!tmp) {
+        pvsParseError();
+        goto cleanup_unlock;
+    }
+
+    /* TODO: handle all possible states */
+    if (STREQ(tmp, "running")) {
+        virDomainObjSetState(dom, VIR_DOMAIN_RUNNING,
+                             VIR_DOMAIN_RUNNING_BOOTED);
+        def->id = pdom->id;
+    }
+
+    tmp = virJSONValueObjectGetString(jobj, "Autostart");
+    if (!tmp) {
+        pvsParseError();
+        goto cleanup_unlock;
+    }
+    if (STREQ(tmp, "on"))
+        dom->autostart = 1;
+    else
+        dom->autostart = 0;
+
+    virDomainObjUnlock(dom);
+
+    return dom;
+
+  no_memory_unlock:
+    virReportOOMError();
+  cleanup_unlock:
+    virDomainObjUnlock(dom);
+    /* domain list was locked, so nobody could get 'dom'. It has only
+     * one reference and virDomainObjUnref return 0 here */
+    if (virDomainObjUnref(dom))
+        pvsError(VIR_ERR_INTERNAL_ERROR, _("Can't free virDomainObj"));
+    return NULL;
+  no_memory:
+    virReportOOMError();
+  cleanup:
+    virDomainDefFree(def);
+    return NULL;
+}
+
+/*
+ * Must be called with privconn->lock held
+ */
+static int
+pvsLoadDomains(pvsConnPtr privconn, const char *domain_name)
+{
+    int count, i;
+    virJSONValuePtr jobj;
+    virJSONValuePtr jobj2;
+    virDomainObjPtr dom = NULL;
+    int ret = -1;
+
+    jobj = pvsParseOutput(PRLCTL, "list", "-j", "-a",
+                          "-i", "-H", domain_name, NULL);
+    if (!jobj) {
+        pvsParseError();
+        goto cleanup;
+    }
+
+    count = virJSONValueArraySize(jobj);
+    if (count < 1) {
+        pvsParseError();
+        goto cleanup;
+    }
+
+    for (i = 0; i < count; i++) {
+        jobj2 = virJSONValueArrayGet(jobj, i);
+        if (!jobj2) {
+            pvsParseError();
+            goto cleanup;
+        }
+
+        dom = pvsLoadDomain(privconn, jobj2);
+        if (!dom)
+            goto cleanup;
+    }
+
+    ret = 0;
+
+  cleanup:
+    virJSONValueFree(jobj);
+    return ret;
+}
+
 static int
 pvsOpenDefault(virConnectPtr conn)
 {
@@ -151,6 +360,9 @@ pvsOpenDefault(virConnectPtr conn)
     if (virDomainObjListInit(&privconn->domains) < 0)
         goto error;

+    if (pvsLoadDomains(privconn, NULL))
+        goto error;
+
     return VIR_DRV_OPEN_SUCCESS;

   error:
@@ -237,6 +449,283 @@ pvsGetVersion(virConnectPtr conn ATTRIBUTE_UNUSED, unsigned long *hvVer)
     return 0;
 }

+static int
+pvsListDomains(virConnectPtr conn, int *ids, int maxids)
+{
+    pvsConnPtr privconn = conn->privateData;
+    int n;
+
+    pvsDriverLock(privconn);
+    n = virDomainObjListGetActiveIDs(&privconn->domains, ids, maxids);
+    pvsDriverUnlock(privconn);
+
+    return n;
+}
+
+static int
+pvsNumOfDomains(virConnectPtr conn)
+{
+    pvsConnPtr privconn = conn->privateData;
+    int count;
+
+    pvsDriverLock(privconn);
+    count = virDomainObjListNumOfDomains(&privconn->domains, 1);
+    pvsDriverUnlock(privconn);
+
+    return count;
+}
+
+static int
+pvsListDefinedDomains(virConnectPtr conn, char **const names, int maxnames)
+{
+    pvsConnPtr privconn = conn->privateData;
+    int n;
+
+    pvsDriverLock(privconn);
+    memset(names, 0, sizeof(*names) * maxnames);
+    n = virDomainObjListGetInactiveNames(&privconn->domains, names,
+                                         maxnames);
+    pvsDriverUnlock(privconn);
+
+    return n;
+}
+
+static int
+pvsNumOfDefinedDomains(virConnectPtr conn)
+{
+    pvsConnPtr privconn = conn->privateData;
+    int count;
+
+    pvsDriverLock(privconn);
+    count = virDomainObjListNumOfDomains(&privconn->domains, 0);
+    pvsDriverUnlock(privconn);
+
+    return count;
+}
+
+static virDomainPtr
+pvsLookupDomainByID(virConnectPtr conn, int id)
+{
+    pvsConnPtr privconn = conn->privateData;
+    virDomainPtr ret = NULL;
+    virDomainObjPtr dom;
+
+    pvsDriverLock(privconn);
+    dom = virDomainFindByID(&privconn->domains, id);
+    pvsDriverUnlock(privconn);
+
+    if (dom == NULL) {
+        pvsError(VIR_ERR_NO_DOMAIN, NULL);
+        goto cleanup;
+    }
+
+    ret = virGetDomain(conn, dom->def->name, dom->def->uuid);
+    if (ret)
+        ret->id = dom->def->id;
+
+  cleanup:
+    if (dom)
+        virDomainObjUnlock(dom);
+    return ret;
+}
+
+static virDomainPtr
+pvsLookupDomainByUUID(virConnectPtr conn, const unsigned char *uuid)
+{
+    pvsConnPtr privconn = conn->privateData;
+    virDomainPtr ret = NULL;
+    virDomainObjPtr dom;
+
+    pvsDriverLock(privconn);
+    dom = virDomainFindByUUID(&privconn->domains, uuid);
+    pvsDriverUnlock(privconn);
+
+    if (dom == NULL) {
+        pvsError(VIR_ERR_NO_DOMAIN, NULL);
+        goto cleanup;
+    }
+
+    ret = virGetDomain(conn, dom->def->name, dom->def->uuid);
+    if (ret)
+        ret->id = dom->def->id;
+
+  cleanup:
+    if (dom)
+        virDomainObjUnlock(dom);
+    return ret;
+}
+
+static virDomainPtr
+pvsLookupDomainByName(virConnectPtr conn, const char *name)
+{
+    pvsConnPtr privconn = conn->privateData;
+    virDomainPtr ret = NULL;
+    virDomainObjPtr dom;
+
+    pvsDriverLock(privconn);
+    dom = virDomainFindByName(&privconn->domains, name);
+    pvsDriverUnlock(privconn);
+
+    if (dom == NULL) {
+        pvsError(VIR_ERR_NO_DOMAIN, NULL);
+        goto cleanup;
+    }
+
+    ret = virGetDomain(conn, dom->def->name, dom->def->uuid);
+    if (ret)
+        ret->id = dom->def->id;
+
+  cleanup:
+    if (dom)
+        virDomainObjUnlock(dom);
+    return ret;
+}
+
+static int
+pvsGetDomainInfo(virDomainPtr domain, virDomainInfoPtr info)
+{
+    pvsConnPtr privconn = domain->conn->privateData;
+    virDomainObjPtr privdom;
+    int ret = -1;
+
+    pvsDriverLock(privconn);
+    privdom = virDomainFindByName(&privconn->domains, domain->name);
+    pvsDriverUnlock(privconn);
+
+    if (privdom == NULL) {
+        pvsError(VIR_ERR_INVALID_ARG, __FUNCTION__);
+        goto cleanup;
+    }
+
+    info->state = virDomainObjGetState(privdom, NULL);
+    info->memory = privdom->def->mem.cur_balloon;
+    info->maxMem = privdom->def->mem.max_balloon;
+    info->nrVirtCpu = privdom->def->vcpus;
+    info->cpuTime = 0;
+    ret = 0;
+
+  cleanup:
+    if (privdom)
+        virDomainObjUnlock(privdom);
+    return ret;
+}
+
+static char *
+pvsGetOSType(virDomainPtr dom)
+{
+    pvsConnPtr privconn = dom->conn->privateData;
+    virDomainObjPtr privdom;
+    pvsDomObjPtr pdom;
+
+    char *ret = NULL;
+
+    pvsDriverLock(privconn);
+    privdom = virDomainFindByName(&privconn->domains, dom->name);
+    if (privdom == NULL) {
+        pvsError(VIR_ERR_INVALID_ARG, __FUNCTION__);
+        goto cleanup;
+    }
+
+    pdom = privdom->privateData;
+
+    if (!(ret = strdup(pdom->os)))
+        virReportOOMError();
+
+  cleanup:
+    if (privdom)
+        virDomainObjUnlock(privdom);
+    pvsDriverUnlock(privconn);
+    return ret;
+}
+
+static int
+pvsDomainIsPersistent(virDomainPtr dom ATTRIBUTE_UNUSED)
+{
+    return 1;
+}
+
+static int
+pvsDomainGetState(virDomainPtr domain,
+                  int *state, int *reason, unsigned int flags)
+{
+    pvsConnPtr privconn = domain->conn->privateData;
+    virDomainObjPtr privdom;
+    int ret = -1;
+
+    virCheckFlags(0, -1);
+
+    pvsDriverLock(privconn);
+    privdom = virDomainFindByName(&privconn->domains, domain->name);
+    pvsDriverUnlock(privconn);
+
+    if (privdom == NULL) {
+        pvsError(VIR_ERR_INVALID_ARG, __FUNCTION__);
+        goto cleanup;
+    }
+
+    *state = virDomainObjGetState(privdom, reason);
+    ret = 0;
+
+  cleanup:
+    if (privdom)
+        virDomainObjUnlock(privdom);
+    return ret;
+}
+
+static char *
+pvsDomainGetXMLDesc(virDomainPtr domain, unsigned int flags)
+{
+    pvsConnPtr privconn = domain->conn->privateData;
+    virDomainDefPtr def;
+    virDomainObjPtr privdom;
+    char *ret = NULL;
+
+    /* Flags checked by virDomainDefFormat */
+
+    pvsDriverLock(privconn);
+    privdom = virDomainFindByName(&privconn->domains, domain->name);
+    pvsDriverUnlock(privconn);
+
+    if (privdom == NULL) {
+        pvsError(VIR_ERR_INVALID_ARG, __FUNCTION__);
+        goto cleanup;
+    }
+
+    def = (flags & VIR_DOMAIN_XML_INACTIVE) &&
+        privdom->newDef ? privdom->newDef : privdom->def;
+
+    ret = virDomainDefFormat(def, flags);
+
+  cleanup:
+    if (privdom)
+        virDomainObjUnlock(privdom);
+    return ret;
+}
+
+static int
+pvsDomainGetAutostart(virDomainPtr domain, int *autostart)
+{
+    pvsConnPtr privconn = domain->conn->privateData;
+    virDomainObjPtr privdom;
+    int ret = -1;
+
+    pvsDriverLock(privconn);
+    privdom = virDomainFindByName(&privconn->domains, domain->name);
+    pvsDriverUnlock(privconn);
+
+    if (privdom == NULL) {
+        pvsError(VIR_ERR_INVALID_ARG, __FUNCTION__);
+        goto cleanup;
+    }
+
+    *autostart = privdom->autostart;
+    ret = 0;
+
+  cleanup:
+    if (privdom)
+        virDomainObjUnlock(privdom);
+    return ret;
+}
+
 static virDriver pvsDriver = {
     .no = VIR_DRV_PVS,
     .name = "PVS",
@@ -246,6 +735,19 @@ static virDriver pvsDriver = {
     .getHostname = virGetHostname,      /* 0.9.11 */
     .nodeGetInfo = nodeGetInfo, /* 0.9.11 */
     .getCapabilities = pvsGetCapabilities,      /* 0.9.11 */
+    .listDomains = pvsListDomains,      /* 0.9.11 */
+    .numOfDomains = pvsNumOfDomains,    /* 0.9.11 */
+    .listDefinedDomains = pvsListDefinedDomains,        /* 0.9.11 */
+    .numOfDefinedDomains = pvsNumOfDefinedDomains,      /* 0.9.11 */
+    .domainLookupByID = pvsLookupDomainByID,    /* 0.9.11 */
+    .domainLookupByUUID = pvsLookupDomainByUUID,        /* 0.9.11 */
+    .domainLookupByName = pvsLookupDomainByName,        /* 0.9.11 */
+    .domainGetOSType = pvsGetOSType,    /* 0.9.11 */
+    .domainGetInfo = pvsGetDomainInfo,  /* 0.9.11 */
*/ + .domainGetState = pvsDomainGetState, /* 0.9.11 */ + .domainGetXMLDesc = pvsDomainGetXMLDesc, /* 0.9.11 */ + .domainIsPersistent = pvsDomainIsPersistent, /* 0.9.11 */ + .domainGetAutostart = pvsDomainGetAutostart, /* 0.9.11 */ }; /** diff --git a/src/pvs/pvs_driver.h b/src/pvs/pvs_driver.h index 289cc28..3f84d66 100644 --- a/src/pvs/pvs_driver.h +++ b/src/pvs/pvs_driver.h @@ -23,15 +23,30 @@ #ifndef PVS_DRIVER_H # define PVS_DRIVER_H +#include "json.h" #include "domain_conf.h" #include "storage_conf.h" #include "domain_event.h" +#define PRLCTL "prlctl" + #define pvsError(code, ...) \ virReportErrorHelper(VIR_FROM_TEST, code, __FILE__, \ __FUNCTION__, __LINE__, __VA_ARGS__) +#define pvsParseError() \ + virReportErrorHelper(VIR_FROM_TEST, VIR_ERR_OPERATION_FAILED, __FILE__, \ + __FUNCTION__, __LINE__, "Can't parse prlctl output") + +struct pvsDomObj { + int id; + char *uuid; + char *os; +}; + +typedef struct pvsDomObj *pvsDomObjPtr; + struct _pvsConn { virMutex lock; virDomainObjList domains; @@ -46,4 +61,6 @@ typedef struct _pvsConn *pvsConnPtr; int pvsRegister(void); +virJSONValuePtr pvsParseOutput(const char *binary, ...); + #endif diff --git a/src/pvs/pvs_utils.c b/src/pvs/pvs_utils.c new file mode 100644 index 0000000..3842010 --- /dev/null +++ b/src/pvs/pvs_utils.c @@ -0,0 +1,101 @@ +/* + * pvs_utils.c: core driver functions for managing + * Parallels Virtuozzo Server hosts + * + * Copyright (C) 2012 Parallels, Inc. + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. 
+ * + * You should have received a copy of the GNU Lesser General Public + * License along with this library; if not, write to the Free Software + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + * + */ + +#include <config.h> + +#include <stdarg.h> + +#include "command.h" +#include "virterror_internal.h" +#include "memory.h" + +#include "pvs_driver.h" + +static int +pvsDoCmdRun(char **outbuf, const char *binary, va_list list) +{ + virCommandPtr cmd = virCommandNew(binary); + const char *arg; + int exitstatus; + char *scmd = NULL; + char *sstatus = NULL; + int ret = -1; + + while ((arg = va_arg(list, const char *)) != NULL) + virCommandAddArg(cmd, arg); + + if (outbuf) + virCommandSetOutputBuffer(cmd, outbuf); + + scmd = virCommandToString(cmd); + if (!scmd) + goto cleanup; + + if (virCommandRun(cmd, &exitstatus)) { + pvsError(VIR_ERR_INTERNAL_ERROR, + _("Failed to execute command '%s'"), scmd); + goto cleanup; + } + + if (exitstatus) { + sstatus = virCommandTranslateStatus(exitstatus); + pvsError(VIR_ERR_INTERNAL_ERROR, + _("Command '%s' finished with errors: %s"), scmd, sstatus); + VIR_FREE(sstatus); + goto cleanup; + } + + ret = 0; + + cleanup: + VIR_FREE(scmd); + virCommandFree(cmd); + if (ret && outbuf) + VIR_FREE(*outbuf); + return ret; +} + +/* + * Run a command and parse its JSON output; returns a + * pointer to a virJSONValue, or NULL in case of error. + */ +virJSONValuePtr +pvsParseOutput(const char *binary, ...) +{ + char *outbuf = NULL; + virJSONValuePtr jobj = NULL; + va_list list; + int ret; + + va_start(list, binary); + ret = pvsDoCmdRun(&outbuf, binary, list); + va_end(list); + if (ret) + return NULL; + + jobj = virJSONValueFromString(outbuf); + if (!jobj) + pvsError(VIR_ERR_INTERNAL_ERROR, "%s: %s", + _("invalid output from prlctl"), outbuf); + VIR_FREE(outbuf); + + return jobj; +} -- 1.7.1

Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com> --- src/pvs/pvs_driver.c | 148 ++++++++++++++++++++++++++++++++++++++++++++++++++ src/pvs/pvs_driver.h | 1 + src/pvs/pvs_utils.c | 18 ++++++ 3 files changed, 167 insertions(+), 0 deletions(-) diff --git a/src/pvs/pvs_driver.c b/src/pvs/pvs_driver.c index c854664..b0c9a20 100644 --- a/src/pvs/pvs_driver.c +++ b/src/pvs/pvs_driver.c @@ -60,6 +60,11 @@ void pvsFreeDomObj(void *p); static virCapsPtr pvsBuildCapabilities(void); static int pvsClose(virConnectPtr conn); +int pvsPause(virDomainObjPtr privdom); +int pvsResume(virDomainObjPtr privdom); +int pvsStart(virDomainObjPtr privdom); +int pvsKill(virDomainObjPtr privdom); +int pvsStop(virDomainObjPtr privdom); static void pvsDriverLock(pvsConnPtr driver) @@ -87,6 +92,12 @@ pvsFreeDomObj(void *p) VIR_FREE(pdom); }; +static void +pvsDomainEventQueue(pvsConnPtr driver, virDomainEventPtr event) +{ + virDomainEventStateQueue(driver->domainEventState, event); +} + static virCapsPtr pvsBuildCapabilities(void) { @@ -726,6 +737,138 @@ pvsDomainGetAutostart(virDomainPtr domain, int *autostart) return ret; } +typedef int (*pvsChangeState) (virDomainObjPtr privdom); +#define PVS_UUID(x) (((pvsDomObjPtr)(x->privateData))->uuid) + +static int +pvsDomainChangeState(virDomainPtr domain, + virDomainState req_state, const char * req_state_name, + pvsChangeState chstate, + virDomainState new_state, int reason, + int event_type, int event_detail) +{ + pvsConnPtr privconn = domain->conn->privateData; + virDomainObjPtr privdom; + virDomainEventPtr event = NULL; + int state; + int ret = -1; + + pvsDriverLock(privconn); + privdom = virDomainFindByName(&privconn->domains, domain->name); + pvsDriverUnlock(privconn); + + if (privdom == NULL) { + pvsError(VIR_ERR_INVALID_ARG, __FUNCTION__); + goto cleanup; + } + + state = virDomainObjGetState(privdom, NULL); + if (state != req_state) { + pvsError(VIR_ERR_INTERNAL_ERROR, _("domain '%s' not %s"), + privdom->def->name, req_state_name); + 
goto cleanup; + } + + if (chstate(privdom)) + goto cleanup; + + virDomainObjSetState(privdom, new_state, reason); + + event = virDomainEventNewFromObj(privdom, event_type, event_detail); + ret = 0; + + cleanup: + if (privdom) + virDomainObjUnlock(privdom); + + if (event) { + pvsDriverLock(privconn); + pvsDomainEventQueue(privconn, event); + pvsDriverUnlock(privconn); + } + return ret; +} + +int pvsPause(virDomainObjPtr privdom) +{ + return pvsCmdRun(PRLCTL, "pause", PVS_UUID(privdom), NULL); +} + +static int +pvsPauseDomain(virDomainPtr domain) +{ + return pvsDomainChangeState(domain, + VIR_DOMAIN_RUNNING, "running", + pvsPause, + VIR_DOMAIN_PAUSED, VIR_DOMAIN_PAUSED_USER, + VIR_DOMAIN_EVENT_SUSPENDED, + VIR_DOMAIN_EVENT_SUSPENDED_PAUSED); +} + +int pvsResume(virDomainObjPtr privdom) +{ + return pvsCmdRun(PRLCTL, "resume", PVS_UUID(privdom), NULL); +} + +static int +pvsResumeDomain(virDomainPtr domain) +{ + return pvsDomainChangeState(domain, + VIR_DOMAIN_PAUSED, "paused", + pvsResume, + VIR_DOMAIN_RUNNING, VIR_DOMAIN_RUNNING_UNPAUSED, + VIR_DOMAIN_EVENT_RESUMED, + VIR_DOMAIN_EVENT_RESUMED_UNPAUSED); +} + +int pvsStart(virDomainObjPtr privdom) +{ + return pvsCmdRun(PRLCTL, "start", PVS_UUID(privdom), NULL); +} + +static int +pvsDomainCreate(virDomainPtr domain) +{ + return pvsDomainChangeState(domain, + VIR_DOMAIN_SHUTOFF, "stopped", + pvsStart, + VIR_DOMAIN_RUNNING, VIR_DOMAIN_RUNNING_BOOTED, + VIR_DOMAIN_EVENT_STARTED, + VIR_DOMAIN_EVENT_STARTED_BOOTED); +} + +int pvsKill(virDomainObjPtr privdom) +{ + return pvsCmdRun(PRLCTL, "stop", PVS_UUID(privdom), "--kill", NULL); +} + +static int +pvsDestroyDomain(virDomainPtr domain) +{ + return pvsDomainChangeState(domain, + VIR_DOMAIN_RUNNING, "running", + pvsKill, + VIR_DOMAIN_SHUTOFF, VIR_DOMAIN_SHUTOFF_DESTROYED, + VIR_DOMAIN_EVENT_STOPPED, + VIR_DOMAIN_EVENT_STOPPED_DESTROYED); +} + +int pvsStop(virDomainObjPtr privdom) +{ + return pvsCmdRun(PRLCTL, "stop", PVS_UUID(privdom), NULL); +} + +static int
+pvsShutdownDomain(virDomainPtr domain) +{ + return pvsDomainChangeState(domain, + VIR_DOMAIN_RUNNING, "running", + pvsStop, + VIR_DOMAIN_SHUTOFF, VIR_DOMAIN_SHUTOFF_SHUTDOWN, + VIR_DOMAIN_EVENT_STOPPED, + VIR_DOMAIN_EVENT_STOPPED_SHUTDOWN); +} + static virDriver pvsDriver = { .no = VIR_DRV_PVS, .name = "PVS", @@ -748,6 +891,11 @@ static virDriver pvsDriver = { .domainGetXMLDesc = pvsDomainGetXMLDesc, /* 0.9.11 */ .domainIsPersistent = pvsDomainIsPersistent, /* 0.9.11 */ .domainGetAutostart = pvsDomainGetAutostart, /* 0.9.11 */ + .domainSuspend = pvsPauseDomain, /* 0.9.11 */ + .domainResume = pvsResumeDomain, /* 0.9.11 */ + .domainDestroy = pvsDestroyDomain, /* 0.9.11 */ + .domainShutdown = pvsShutdownDomain, /* 0.9.11 */ + .domainCreate = pvsDomainCreate, /* 0.9.11 */ }; /** diff --git a/src/pvs/pvs_driver.h b/src/pvs/pvs_driver.h index 3f84d66..d3cbca2 100644 --- a/src/pvs/pvs_driver.h +++ b/src/pvs/pvs_driver.h @@ -62,5 +62,6 @@ typedef struct _pvsConn *pvsConnPtr; int pvsRegister(void); virJSONValuePtr pvsParseOutput(const char *binary, ...); +int pvsCmdRun(const char *binary, ...); #endif diff --git a/src/pvs/pvs_utils.c b/src/pvs/pvs_utils.c index 3842010..3a548bd 100644 --- a/src/pvs/pvs_utils.c +++ b/src/pvs/pvs_utils.c @@ -99,3 +99,21 @@ pvsParseOutput(const char *binary, ...) return jobj; } + +/* + * Run prlctl command and check for errors + * + * Return value is 0 in case of success, else - -1 + */ +int +pvsCmdRun(const char *binary, ...) +{ + int ret; + va_list list; + + va_start(list, binary); + ret = pvsDoCmdRun(NULL, binary, list); + va_end(list); + + return ret; +} -- 1.7.1
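All the lifecycle callbacks above funnel through pvsDomainChangeState(), which is essentially a table-driven state machine: verify the required current state, run the prlctl command, then record the new state. A self-contained sketch of that pattern; the types and names here are illustrative stand-ins, not the libvirt API:

```c
#include <assert.h>
#include <stddef.h>

/* Simplified model of the pvsDomainChangeState() pattern: one generic
 * transition routine parameterized by the required current state, the
 * state-changing callback, and the resulting state. */
typedef enum { DOM_SHUTOFF, DOM_RUNNING, DOM_PAUSED } domState;

typedef struct { domState state; } dom;
typedef int (*changeFn)(dom *d);

static int
domChangeState(dom *d, domState required, changeFn fn, domState newState)
{
    if (d->state != required)
        return -1;          /* e.g. "domain 'x' not running" */
    if (fn && fn(d))
        return -1;          /* the prlctl command failed */
    d->state = newState;    /* only now record the transition */
    return 0;
}

/* Stand-in for a pvsCmdRun(PRLCTL, ...) wrapper that succeeds. */
static int noopCmd(dom *d) { (void)d; return 0; }
```

Each concrete operation (suspend, resume, destroy, shutdown, create) then reduces to one call with the right parameter tuple, which is exactly what keeps the five wrappers above so short.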

Add support of collecting information about serial ports. This change is needed mostly as an example, support of other devices will be added later. Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com> --- src/pvs/pvs_driver.c | 115 ++++++++++++++++++++++++++++++++++++++++++++++++++ 1 files changed, 115 insertions(+), 0 deletions(-) diff --git a/src/pvs/pvs_driver.c b/src/pvs/pvs_driver.c index b0c9a20..7f59e3b 100644 --- a/src/pvs/pvs_driver.c +++ b/src/pvs/pvs_driver.c @@ -146,6 +146,118 @@ pvsGetCapabilities(virConnectPtr conn) return xml; } +static int +pvsGetSerialInfo(virDomainChrDefPtr chr, + const char *name, virJSONValuePtr value) +{ + const char *tmp; + + chr->deviceType = VIR_DOMAIN_CHR_DEVICE_TYPE_SERIAL; + chr->targetType = VIR_DOMAIN_CHR_CONSOLE_TARGET_TYPE_SERIAL; + chr->target.port = atoi(name + strlen("serial")); + + if (virJSONValueObjectHasKey(value, "output")) { + chr->source.type = VIR_DOMAIN_CHR_TYPE_FILE; + + tmp = virJSONValueObjectGetString(value, "output"); + if (!tmp) { + pvsParseError(); + return -1; + } + + if (!(chr->source.data.file.path = strdup(tmp))) + goto no_memory; + } else if (virJSONValueObjectHasKey(value, "socket")) { + chr->source.type = VIR_DOMAIN_CHR_TYPE_UNIX; + + tmp = virJSONValueObjectGetString(value, "socket"); + if (!tmp) { + pvsParseError(); + return -1; + } + + if (!(chr->source.data.nix.path = strdup(tmp))) + goto no_memory; + chr->source.data.nix.listen = false; + } else if (virJSONValueObjectHasKey(value, "real")) { + chr->source.type = VIR_DOMAIN_CHR_TYPE_DEV; + + tmp = virJSONValueObjectGetString(value, "real"); + if (!tmp) { + pvsParseError(); + return -1; + } + + if (!(chr->source.data.file.path = strdup(tmp))) + goto no_memory; + } else { + pvsParseError(); + return -1; + } + + return 0; + + no_memory: + virReportOOMError(); + return -1; +} + +static int +pvsAddSerialInfo(virDomainObjPtr dom, + const char *key, virJSONValuePtr value) +{ + virDomainDefPtr def = dom->def; + virDomainChrDefPtr chr = NULL; 
+ + if (!(chr = virDomainChrDefNew())) + goto no_memory; + + if (pvsGetSerialInfo(chr, key, value)) + goto cleanup; + + if (VIR_REALLOC_N(def->serials, def->nserials + 1) < 0) + goto no_memory; + + def->serials[def->nserials++] = chr; + + return 0; + + no_memory: + virReportOOMError(); + cleanup: + virDomainChrDefFree(chr); + return -1; +} + +static int +pvsAddDomainHardware(virDomainObjPtr dom, virJSONValuePtr jobj) +{ + int n, i; + virJSONValuePtr value; + const char *key; + + n = virJSONValueObjectKeysNumber(jobj); + if (n < 1) + goto cleanup; + + for (i = 0; i < n; i++) { + key = virJSONValueObjectGetKey(jobj, i); + value = virJSONValueObjectGetValue(jobj, i); + + if (STRPREFIX(key, "serial")) { + if (pvsAddSerialInfo(dom, key, value)) + goto cleanup; + } + } + + return 0; + + cleanup: + return -1; +} + /* * Must be called with privconn->lock held */ @@ -282,6 +394,9 @@ pvsLoadDomain(pvsConnPtr privconn, virJSONValuePtr jobj) else dom->autostart = 0; + if (pvsAddDomainHardware(dom, jobj2)) + goto cleanup_unlock; + virDomainObjUnlock(dom); return dom; -- 1.7.1
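The pvsGetSerialInfo() parser above selects the character-device backend from whichever JSON key is present ("output" maps to a file, "socket" to a unix socket, "real" to a host device) and derives the port number from the "serialN" key name. A reduced sketch of both lookups; the enum names are illustrative, not libvirt's:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

typedef enum { CHR_FILE, CHR_UNIX, CHR_DEV, CHR_UNKNOWN } chrType;

/* Map the prlctl JSON key that is present to a backend type. */
static chrType
serialTypeFromKey(const char *key)
{
    if (strcmp(key, "output") == 0)
        return CHR_FILE;   /* output logged to a file */
    if (strcmp(key, "socket") == 0)
        return CHR_UNIX;   /* unix domain socket */
    if (strcmp(key, "real") == 0)
        return CHR_DEV;    /* host serial device */
    return CHR_UNKNOWN;
}

/* "serial0" -> 0, "serial2" -> 2, mirroring the
 * atoi(name + strlen("serial")) call in the patch. */
static int
serialPortFromName(const char *name)
{
    return atoi(name + strlen("serial"));
}
```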

Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com> --- src/pvs/pvs_driver.c | 65 ++++++++++++++++++++++++++++++++++++++++++++++++++ 1 files changed, 65 insertions(+), 0 deletions(-) diff --git a/src/pvs/pvs_driver.c b/src/pvs/pvs_driver.c index 7f59e3b..20243eb 100644 --- a/src/pvs/pvs_driver.c +++ b/src/pvs/pvs_driver.c @@ -258,6 +258,68 @@ pvsAddDomainHardware(virDomainObjPtr dom, virJSONValuePtr jobj) return -1; } +static int +pvsAddVNCInfo(virDomainObjPtr dom, virJSONValuePtr jobj_root) +{ + const char *tmp; + unsigned int port; + virJSONValuePtr jobj; + int ret = -1; + + virDomainDefPtr def = dom->def; + + virDomainGraphicsDefPtr gr = NULL; + + jobj = virJSONValueObjectGet(jobj_root, "Remote display"); + if (!jobj) { + pvsParseError(); + goto cleanup; + } + + tmp = virJSONValueObjectGetString(jobj, "mode"); + if (!tmp) { + pvsParseError(); + goto cleanup; + } + + if (STREQ(tmp, "off")) { + ret = 0; + goto cleanup; + } + + if (VIR_ALLOC(gr) < 0) + goto no_memory; + + if (virJSONValueObjectGetNumberUint(jobj, "port", &port) < 0) { + pvsParseError(); + goto cleanup; + } + + /* TODO: handle non-auto vnc mode */ + gr->type = VIR_DOMAIN_GRAPHICS_TYPE_VNC; + gr->data.vnc.port = port; + gr->data.vnc.autoport = 0; + gr->data.vnc.keymap = NULL; + gr->data.vnc.socket = NULL; + gr->data.vnc.auth.passwd = NULL; + gr->data.vnc.auth.expires = 0; + gr->data.vnc.auth.connected = 0; + + if (VIR_REALLOC_N(def->graphics, def->ngraphics + 1) < 0) + goto no_memory; + + def->graphics[def->ngraphics++] = gr; + return 0; + + no_memory: + virReportOOMError(); + cleanup: + virDomainGraphicsDefFree(gr); + return ret; +} + /* * Must be called with privconn->lock held */ @@ -397,6 +459,9 @@ pvsLoadDomain(pvsConnPtr privconn, virJSONValuePtr jobj) if (pvsAddDomainHardware(dom, jobj2)) goto cleanup_unlock; + if (pvsAddVNCInfo(dom, jobj)) + goto cleanup_unlock; + virDomainObjUnlock(dom); return dom; -- 1.7.1

Add the pvsDomainDefineXML function; for now it works only for existing domains. It's too hard to convert libvirt's XML domain configuration into PVS's format, so I've decided to compare virDomainDef structures: the current domain definition and the one created from the XML given to the function, and to change only the parameters that differ. Only description changes are implemented; changing other parameters will be implemented later. Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com> --- src/pvs/pvs_driver.c | 89 ++++++++++++++++++++++++++++++++++++++++++++++++++ 1 files changed, 89 insertions(+), 0 deletions(-) diff --git a/src/pvs/pvs_driver.c b/src/pvs/pvs_driver.c index 20243eb..ceca50d 100644 --- a/src/pvs/pvs_driver.c +++ b/src/pvs/pvs_driver.c @@ -1049,6 +1049,94 @@ pvsShutdownDomain(virDomainPtr domain) VIR_DOMAIN_EVENT_STOPPED_SHUTDOWN); } +static int +pvsSetDescription(virDomainObjPtr dom, const char *description) +{ + pvsDomObjPtr pvsdom; + + pvsdom = dom->privateData; + if (pvsCmdRun(PRLCTL, "set", pvsdom->uuid, + "--description", description, NULL)) + return -1; + + return 0; +} + +static int +pvsApplyChanges(virDomainObjPtr dom, virDomainDefPtr newdef) +{ + virDomainDefPtr olddef = dom->def; + + if (newdef->description && + STRNEQ_NULLABLE(olddef->description, newdef->description)) { + if (pvsSetDescription(dom, newdef->description)) + return -1; + } + + /* TODO: compare all other parameters */ + + return 0; +} + +static virDomainPtr +pvsDomainDefineXML(virConnectPtr conn, const char *xml) +{ + pvsConnPtr privconn = conn->privateData; + virDomainPtr ret = NULL; + virDomainDefPtr def; + virDomainObjPtr dom = NULL, olddom = NULL; + virDomainEventPtr event = NULL; + int dupVM; + + pvsDriverLock(privconn); + if ((def = virDomainDefParseString(privconn->caps, xml, + 1 << VIR_DOMAIN_VIRT_PVS, + VIR_DOMAIN_XML_INACTIVE)) == NULL) { + pvsError(VIR_ERR_INVALID_ARG, _("Can't parse XML desc")); + goto cleanup; + } + + if ((dupVM = virDomainObjIsDuplicate(&privconn->domains, def, 0)) < 0) { + pvsError(VIR_ERR_INVALID_ARG, _("Already exists")); + goto cleanup; + } + + if (dupVM == 1) { + olddom = virDomainFindByUUID(&privconn->domains, def->uuid); + pvsApplyChanges(olddom, def); + virDomainObjUnlock(olddom); + + if (!(dom = virDomainAssignDef(privconn->caps, + &privconn->domains, def, false))) { + pvsError(VIR_ERR_INTERNAL_ERROR, _("Can't allocate domobj")); + goto cleanup; + } + + def = NULL; + } else { + pvsError(VIR_ERR_NO_SUPPORT, _("Not implemented yet")); + goto cleanup; + } + + event = virDomainEventNewFromObj(dom, + VIR_DOMAIN_EVENT_DEFINED, + !dupVM ? + VIR_DOMAIN_EVENT_DEFINED_ADDED : + VIR_DOMAIN_EVENT_DEFINED_UPDATED); + + ret = virGetDomain(conn, dom->def->name, dom->def->uuid); + if (ret) + ret->id = dom->def->id; + + cleanup: + virDomainDefFree(def); + if (dom) + virDomainObjUnlock(dom); + if (event) + pvsDomainEventQueue(privconn, event); + pvsDriverUnlock(privconn); + return ret; +} + static virDriver pvsDriver = { .no = VIR_DRV_PVS, .name = "PVS", @@ -1076,6 +1164,7 @@ static virDriver pvsDriver = { .domainDestroy = pvsDestroyDomain, /* 0.9.11 */ .domainShutdown = pvsShutdownDomain, /* 0.9.11 */ .domainCreate = pvsDomainCreate, /* 0.9.11 */ + .domainDefineXML = pvsDomainDefineXML, /* 0.9.11 */ }; /** -- 1.7.1
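The comparison strategy in pvsApplyChanges() reduces to the following sketch: a NULL-safe string comparison (like libvirt's STRNEQ_NULLABLE) guards each field, and only fields that actually differ would trigger a prlctl "set" call. miniDef and countChanges() are hypothetical stand-ins for virDomainDef and the real apply logic:

```c
#include <assert.h>
#include <string.h>

/* Hypothetical pared-down definition; the real driver compares full
 * virDomainDef structures. */
typedef struct { const char *description; } miniDef;

/* NULL-safe inequality test, in the spirit of STRNEQ_NULLABLE(). */
static int
strneqNullable(const char *a, const char *b)
{
    if (a == NULL && b == NULL)
        return 0;
    if (a == NULL || b == NULL)
        return 1;
    return strcmp(a, b) != 0;
}

/* Return how many fields would need a prlctl "set" call. */
static int
countChanges(const miniDef *oldDef, const miniDef *newDef)
{
    int changes = 0;

    if (newDef->description &&
        strneqNullable(oldDef->description, newDef->description))
        changes++;

    /* further fields would be compared the same way */
    return changes;
}
```

The NULL-safe comparison matters here because a freshly created domain may have no description at all, and a plain strcmp() would dereference NULL in that case.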

PVS has one serious discrepancy with libvirt: libvirt always stores domain configuration files in one place and storage files in other places (managed through the storage pool and storage volume APIs). PVS stores all domain data in a single directory; for example, you may have a domain named fedora-15 located in '/var/parallels/fedora-15.pvm', with its hard disk image in '/var/parallels/fedora-15.pvm/harddisk1.hdd'. I've decided to create a storage driver which produces pseudo-volumes (XML files with a volume description) that are 'converted' to real disk images after being attached to a VM. So if someone creates a VM with one hard disk using virt-manager, virt-manager first creates a new volume and then defines a domain. We can look up a volume by the path given in the XML domain definition and find out the location of the new domain and the size of its hard disk. This code mostly duplicates code in libvirt's default storage driver, but I haven't found how functions from that driver can be reused. So if it is possible, I'd be very grateful for advice on how to do it.
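The lookup described above works on plain path strings: given the disk-image path from the domain XML (e.g. '/var/parallels/fedora-15.pvm/harddisk1.hdd'), the .pvm bundle directory is simply the parent component. A small sketch of that derivation; domainDirFromVolumePath() is illustrative only, the real driver walks its volume list instead:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Strip the last path component: the directory that will hold the
 * whole VM bundle.  Caller frees the result. */
static char *
domainDirFromVolumePath(const char *volpath)
{
    const char *slash = strrchr(volpath, '/');
    size_t len;
    char *dir;

    if (!slash)
        return NULL;        /* not an absolute/relative path */

    len = (size_t)(slash - volpath);
    if (!(dir = malloc(len + 1)))
        return NULL;
    memcpy(dir, volpath, len);
    dir[len] = '\0';
    return dir;
}
```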
Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com> --- src/Makefile.am | 3 +- src/pvs/pvs_driver.c | 6 +- src/pvs/pvs_driver.h | 5 + src/pvs/pvs_storage.c | 1464 +++++++++++++++++++++++++++++++++++++++++++++++++ src/pvs/pvs_utils.c | 20 + 5 files changed, 1495 insertions(+), 3 deletions(-) create mode 100644 src/pvs/pvs_storage.c diff --git a/src/Makefile.am b/src/Makefile.am index bc9efcf..0765aaf 100644 --- a/src/Makefile.am +++ b/src/Makefile.am @@ -468,7 +468,8 @@ HYPERV_DRIVER_EXTRA_DIST = \ PVS_DRIVER_SOURCES = \ pvs/pvs_driver.h \ pvs/pvs_driver.c \ - pvs/pvs_utils.c + pvs/pvs_utils.c \ + pvs/pvs_storage.c NETWORK_DRIVER_SOURCES = \ network/bridge_driver.h network/bridge_driver.c diff --git a/src/pvs/pvs_driver.c b/src/pvs/pvs_driver.c index ceca50d..5e9b691 100644 --- a/src/pvs/pvs_driver.c +++ b/src/pvs/pvs_driver.c @@ -66,13 +66,13 @@ int pvsStart(virDomainObjPtr privdom); int pvsKill(virDomainObjPtr privdom); int pvsStop(virDomainObjPtr privdom); -static void +void pvsDriverLock(pvsConnPtr driver) { virMutexLock(&driver->lock); } -static void +void pvsDriverUnlock(pvsConnPtr driver) { virMutexUnlock(&driver->lock); @@ -1177,6 +1177,8 @@ pvsRegister(void) { if (virRegisterDriver(&pvsDriver) < 0) return -1; + if (pvsStorageRegister()) + return -1; return 0; } diff --git a/src/pvs/pvs_driver.h b/src/pvs/pvs_driver.h index d3cbca2..7384eb1 100644 --- a/src/pvs/pvs_driver.h +++ b/src/pvs/pvs_driver.h @@ -27,6 +27,7 @@ #include "domain_conf.h" #include "storage_conf.h" +#include "driver.h" #include "domain_event.h" #define PRLCTL "prlctl" @@ -60,8 +61,12 @@ typedef struct _pvsConn pvsConn; typedef struct _pvsConn *pvsConnPtr; int pvsRegister(void); +int pvsStorageRegister(void); virJSONValuePtr pvsParseOutput(const char *binary, ...); int pvsCmdRun(const char *binary, ...); +char * pvsAddFileExt(const char *path, const char *ext); +void pvsDriverLock(pvsConnPtr driver); +void pvsDriverUnlock(pvsConnPtr driver); #endif diff --git a/src/pvs/pvs_storage.c 
b/src/pvs/pvs_storage.c new file mode 100644 index 0000000..95f1fde --- /dev/null +++ b/src/pvs/pvs_storage.c @@ -0,0 +1,1464 @@ +/* + * pvs_storage.c: core driver functions for managing + * Parallels Virtuozzo Server hosts + * + * Copyright (C) 2012 Parallels, Inc. + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with this library; if not, write to the Free Software + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA + * + */ + +#include <config.h> + +#include <stdlib.h> +#include <dirent.h> +#include <sys/statvfs.h> + +#include "datatypes.h" +#include "memory.h" +#include "configmake.h" +#include "storage_file.h" +#include "virterror_internal.h" + +#include "pvs_driver.h" + +#define VIR_FROM_THIS VIR_FROM_PVS + +static int pvsStorageClose(virConnectPtr conn); +static virStorageVolDefPtr pvsStorageVolumeDefine(virStoragePoolObjPtr pool, + const char *xmldesc, + const char *xmlfile, + bool is_new); +static virStorageVolPtr pvsStorageVolumeLookupByPathLocked(virConnectPtr + conn, + const char + *path); +static virStorageVolPtr pvsStorageVolumeLookupByPath(virConnectPtr conn, + const char *path); +static int pvsStoragePoolGetAlloc(virStoragePoolDefPtr def); + +static void +pvsStorageLock(virStorageDriverStatePtr driver) +{ + virMutexLock(&driver->lock); +} + +static void +pvsStorageUnlock(virStorageDriverStatePtr driver) +{ + virMutexUnlock(&driver->lock); +} + +static int 
+pvsFindVolumes(virStoragePoolObjPtr pool) +{ + int ret = -1; + DIR *dir = NULL; + struct dirent *ent; + char *path = NULL; + + if (!(dir = opendir(pool->def->target.path))) { + virReportSystemError(errno, + _("cannot open path '%s'"), + pool->def->target.path); + goto cleanup; + } + + while ((ent = readdir(dir)) != NULL) { + if (!virFileHasSuffix(ent->d_name, ".xml")) + continue; + + if (!(path = virFileBuildPath(pool->def->target.path, + ent->d_name, NULL))) + goto no_memory; + if (!pvsStorageVolumeDefine(pool, NULL, path, false)) + goto cleanup; + VIR_FREE(path); + } + + ret = 0; + goto cleanup; + + no_memory: + virReportOOMError(); + cleanup: + VIR_FREE(path); + if (dir) + closedir(dir); + return ret; +} + +static virDrvOpenStatus +pvsStorageOpen(virConnectPtr conn, + virConnectAuthPtr auth ATTRIBUTE_UNUSED, unsigned int flags) +{ + char *base = NULL; + virStorageDriverStatePtr storageState; + int privileged = (geteuid() == 0); + pvsConnPtr privconn = conn->privateData; + virCheckFlags(VIR_CONNECT_RO, VIR_DRV_OPEN_ERROR); + + if (STRNEQ(conn->driver->name, "PVS")) + return VIR_DRV_OPEN_DECLINED; + + if (VIR_ALLOC(storageState) < 0) + return VIR_DRV_OPEN_ERROR; + + if (virMutexInit(&storageState->lock) < 0) { + VIR_FREE(storageState); + return VIR_DRV_OPEN_ERROR; + } + pvsStorageLock(storageState); + + if (privileged) { + if ((base = strdup(SYSCONFDIR "/libvirt")) == NULL) + goto out_of_memory; + } else { + uid_t uid = geteuid(); + + char *userdir = virGetUserDirectory(uid); + + if (!userdir) + goto error; + + if (virAsprintf(&base, "%s/.libvirt", userdir) == -1) { + VIR_FREE(userdir); + goto out_of_memory; + } + VIR_FREE(userdir); + } + + /* Configuration paths are either ~/.libvirt/storage/... (session) or + * /etc/libvirt/storage/... (system).
+ */ + if (virAsprintf(&storageState->configDir, + "%s/pvs-storage", base) == -1) + goto out_of_memory; + + if (virAsprintf(&storageState->autostartDir, + "%s/pvs-storage/autostart", base) == -1) + goto out_of_memory; + + VIR_FREE(base); + + if (virStoragePoolLoadAllConfigs(&privconn->pools, + storageState->configDir, + storageState->autostartDir) < 0) { + pvsError(VIR_ERR_INTERNAL_ERROR, _("Failed to load pool configs")); + goto error; + } + + for (int i = 0; i < privconn->pools.count; i++) { + virStoragePoolObjLock(privconn->pools.objs[i]); + virStoragePoolObjPtr pool; + + pool = privconn->pools.objs[i]; + if (pool->autostart) + pool->active = 1; + pool->active = 1; + + if (pvsStoragePoolGetAlloc(pool->def)) + goto error; + + if (pvsFindVolumes(pool)) + goto error; + + virStoragePoolObjUnlock(privconn->pools.objs[i]); + } + + pvsStorageUnlock(storageState); + + conn->storagePrivateData = storageState; + + return VIR_DRV_OPEN_SUCCESS; + + out_of_memory: + virReportOOMError(); + error: + VIR_FREE(base); + pvsStorageUnlock(storageState); + pvsStorageClose(conn); + return -1; +} + +static int +pvsStorageClose(virConnectPtr conn) +{ + pvsConnPtr privconn = conn->privateData; + virStorageDriverStatePtr storageState = conn->storagePrivateData; + conn->storagePrivateData = NULL; + + pvsStorageLock(storageState); + virStoragePoolObjListFree(&privconn->pools); + VIR_FREE(storageState->configDir); + VIR_FREE(storageState->autostartDir); + pvsStorageUnlock(storageState); + virMutexDestroy(&storageState->lock); + VIR_FREE(storageState); + + return 0; +} + +static int +pvsStorageNumPools(virConnectPtr conn) +{ + pvsConnPtr privconn = conn->privateData; + int numActive = 0, i; + + pvsDriverLock(privconn); + for (i = 0; i < privconn->pools.count; i++) + if (virStoragePoolObjIsActive(privconn->pools.objs[i])) + numActive++; + pvsDriverUnlock(privconn); + + return numActive; +} + +static int +pvsStorageListPools(virConnectPtr conn, char **const names, int nnames) +{ + pvsConnPtr 
privconn = conn->privateData; + int n = 0, i; + + pvsDriverLock(privconn); + memset(names, 0, sizeof(*names) * nnames); + for (i = 0; i < privconn->pools.count && n < nnames; i++) { + virStoragePoolObjLock(privconn->pools.objs[i]); + if (virStoragePoolObjIsActive(privconn->pools.objs[i]) && + !(names[n++] = strdup(privconn->pools.objs[i]->def->name))) { + virStoragePoolObjUnlock(privconn->pools.objs[i]); + goto no_memory; + } + virStoragePoolObjUnlock(privconn->pools.objs[i]); + } + pvsDriverUnlock(privconn); + + return n; + + no_memory: + virReportOOMError(); + for (n = 0; n < nnames; n++) + VIR_FREE(names[n]); + pvsDriverUnlock(privconn); + return -1; +} + +static int +pvsStorageNumDefinedPools(virConnectPtr conn) +{ + pvsConnPtr privconn = conn->privateData; + int numInactive = 0, i; + + pvsDriverLock(privconn); + for (i = 0; i < privconn->pools.count; i++) { + virStoragePoolObjLock(privconn->pools.objs[i]); + if (!virStoragePoolObjIsActive(privconn->pools.objs[i])) + numInactive++; + virStoragePoolObjUnlock(privconn->pools.objs[i]); + } + pvsDriverUnlock(privconn); + + return numInactive; +} + +static int +pvsStorageListDefinedPools(virConnectPtr conn, + char **const names, int nnames) +{ + pvsConnPtr privconn = conn->privateData; + int n = 0, i; + + pvsDriverLock(privconn); + memset(names, 0, sizeof(*names) * nnames); + for (i = 0; i < privconn->pools.count && n < nnames; i++) { + virStoragePoolObjLock(privconn->pools.objs[i]); + if (!virStoragePoolObjIsActive(privconn->pools.objs[i]) && + !(names[n++] = strdup(privconn->pools.objs[i]->def->name))) { + virStoragePoolObjUnlock(privconn->pools.objs[i]); + goto no_memory; + } + virStoragePoolObjUnlock(privconn->pools.objs[i]); + } + pvsDriverUnlock(privconn); + + return n; + + no_memory: + virReportOOMError(); + for (n = 0; n < nnames; n++) + VIR_FREE(names[n]); + pvsDriverUnlock(privconn); + return -1; +} + + +static int +pvsStoragePoolIsActive(virStoragePoolPtr pool) +{ + pvsConnPtr privconn = 
pool->conn->privateData; + virStoragePoolObjPtr obj; + int ret = -1; + + pvsDriverLock(privconn); + obj = virStoragePoolObjFindByUUID(&privconn->pools, pool->uuid); + pvsDriverUnlock(privconn); + if (!obj) { + pvsError(VIR_ERR_NO_STORAGE_POOL, NULL); + goto cleanup; + } + ret = virStoragePoolObjIsActive(obj); + + cleanup: + if (obj) + virStoragePoolObjUnlock(obj); + return ret; +} + +static int +pvsStoragePoolIsPersistent(virStoragePoolPtr pool ATTRIBUTE_UNUSED) +{ + return 1; +} + +static char * +pvsStorageFindPoolSources(virConnectPtr conn ATTRIBUTE_UNUSED, + const char *type ATTRIBUTE_UNUSED, + const char *srcSpec ATTRIBUTE_UNUSED, + unsigned int flags ATTRIBUTE_UNUSED) +{ + return NULL; +} + +static virStoragePoolPtr +pvsStoragePoolLookupByUUID(virConnectPtr conn, const unsigned char *uuid) +{ + pvsConnPtr privconn = conn->privateData; + virStoragePoolObjPtr pool; + virStoragePoolPtr ret = NULL; + + pvsDriverLock(privconn); + pool = virStoragePoolObjFindByUUID(&privconn->pools, uuid); + pvsDriverUnlock(privconn); + + if (pool == NULL) { + pvsError(VIR_ERR_NO_STORAGE_POOL, NULL); + goto cleanup; + } + + ret = virGetStoragePool(conn, pool->def->name, pool->def->uuid); + + cleanup: + if (pool) + virStoragePoolObjUnlock(pool); + return ret; +} + +static virStoragePoolPtr +pvsStoragePoolLookupByName(virConnectPtr conn, const char *name) +{ + pvsConnPtr privconn = conn->privateData; + virStoragePoolObjPtr pool; + virStoragePoolPtr ret = NULL; + + pvsDriverLock(privconn); + pool = virStoragePoolObjFindByName(&privconn->pools, name); + pvsDriverUnlock(privconn); + + if (pool == NULL) { + pvsError(VIR_ERR_NO_STORAGE_POOL, NULL); + goto cleanup; + } + + ret = virGetStoragePool(conn, pool->def->name, pool->def->uuid); + + cleanup: + if (pool) + virStoragePoolObjUnlock(pool); + return ret; +} + +static virStoragePoolPtr +pvsStoragePoolLookupByVolume(virStorageVolPtr vol) +{ + return pvsStoragePoolLookupByName(vol->conn, vol->pool); +} + +/* + * Fill capacity, available and 
allocation + * fields in pool definition. + */ +static int +pvsStoragePoolGetAlloc(virStoragePoolDefPtr def) +{ + struct statvfs sb; + + if (statvfs(def->target.path, &sb) < 0) { + virReportSystemError(errno, + _("cannot statvfs path '%s'"), + def->target.path); + return -1; + } + + def->capacity = ((unsigned long long)sb.f_frsize * + (unsigned long long)sb.f_blocks); + def->available = ((unsigned long long)sb.f_bfree * + (unsigned long long)sb.f_bsize); + def->allocation = def->capacity - def->available; + + return 0; +} + +static virStoragePoolPtr +pvsStoragePoolDefine(virConnectPtr conn, + const char *xml, unsigned int flags) +{ + pvsConnPtr privconn = conn->privateData; + virStoragePoolDefPtr def; + virStoragePoolObjPtr pool = NULL; + virStoragePoolPtr ret = NULL; + + virCheckFlags(0, NULL); + + pvsDriverLock(privconn); + if (!(def = virStoragePoolDefParseString(xml))) + goto cleanup; + + if (def->type != VIR_STORAGE_POOL_DIR) { + pvsError(VIR_ERR_NO_SUPPORT, "%s", + _("Only local directories are supported")); + goto cleanup; + } + + if (virStoragePoolObjIsDuplicate(&privconn->pools, def, 0) < 0) + goto cleanup; + + if (virStoragePoolSourceFindDuplicate(&privconn->pools, def) < 0) + goto cleanup; + + if (pvsStoragePoolGetAlloc(def)) + goto cleanup; + + if (!(pool = virStoragePoolObjAssignDef(&privconn->pools, def))) + goto cleanup; + + if (virStoragePoolObjSaveDef(conn->storagePrivateData, pool, def) < 0) { + virStoragePoolObjRemove(&privconn->pools, pool); + def = NULL; + goto cleanup; + } + def = NULL; + + pool->configFile = strdup("\0"); + if (!pool->configFile) { + virReportOOMError(); + goto cleanup; + } + + ret = virGetStoragePool(conn, pool->def->name, pool->def->uuid); + + cleanup: + virStoragePoolDefFree(def); + if (pool) + virStoragePoolObjUnlock(pool); + pvsDriverUnlock(privconn); + return ret; +} + +static int +pvsStoragePoolUndefine(virStoragePoolPtr pool) +{ + pvsConnPtr privconn = pool->conn->privateData; + virStoragePoolObjPtr privpool; + int 
ret = -1; + + pvsDriverLock(privconn); + privpool = virStoragePoolObjFindByName(&privconn->pools, pool->name); + + if (privpool == NULL) { + pvsError(VIR_ERR_INVALID_ARG, __FUNCTION__); + goto cleanup; + } + + if (virStoragePoolObjIsActive(privpool)) { + pvsError(VIR_ERR_OPERATION_INVALID, + _("storage pool '%s' is still active"), pool->name); + goto cleanup; + } + + if (virStoragePoolObjDeleteDef(privpool) < 0) + goto cleanup; + + VIR_FREE(privpool->configFile); + + virStoragePoolObjRemove(&privconn->pools, privpool); + ret = 0; + + cleanup: + if (privpool) + virStoragePoolObjUnlock(privpool); + pvsDriverUnlock(privconn); + return ret; +} + +static int +pvsStoragePoolBuild(virStoragePoolPtr pool, unsigned int flags) +{ + pvsConnPtr privconn = pool->conn->privateData; + virStoragePoolObjPtr privpool; + int ret = -1; + + virCheckFlags(0, -1); + + pvsDriverLock(privconn); + privpool = virStoragePoolObjFindByName(&privconn->pools, pool->name); + pvsDriverUnlock(privconn); + + if (privpool == NULL) { + pvsError(VIR_ERR_INVALID_ARG, __FUNCTION__); + goto cleanup; + } + + if (virStoragePoolObjIsActive(privpool)) { + pvsError(VIR_ERR_OPERATION_INVALID, + _("storage pool '%s' is already active"), pool->name); + goto cleanup; + } + ret = 0; + + cleanup: + if (privpool) + virStoragePoolObjUnlock(privpool); + return ret; +} + +static int +pvsStoragePoolStart(virStoragePoolPtr pool, unsigned int flags) +{ + pvsConnPtr privconn = pool->conn->privateData; + virStoragePoolObjPtr privpool; + int ret = -1; + + virCheckFlags(0, -1); + + pvsDriverLock(privconn); + privpool = virStoragePoolObjFindByName(&privconn->pools, pool->name); + pvsDriverUnlock(privconn); + + if (privpool == NULL) { + pvsError(VIR_ERR_INVALID_ARG, __FUNCTION__); + goto cleanup; + } + + if (virStoragePoolObjIsActive(privpool)) { + pvsError(VIR_ERR_OPERATION_INVALID, + _("storage pool '%s' is already active"), pool->name); + goto cleanup; + } + + privpool->active = 1; + ret = 0; + + cleanup: + if (privpool) + 
virStoragePoolObjUnlock(privpool); + return ret; +} + +static int +pvsStoragePoolDestroy(virStoragePoolPtr pool) +{ + pvsConnPtr privconn = pool->conn->privateData; + virStoragePoolObjPtr privpool; + int ret = -1; + + pvsDriverLock(privconn); + privpool = virStoragePoolObjFindByName(&privconn->pools, pool->name); + + if (privpool == NULL) { + pvsError(VIR_ERR_INVALID_ARG, __FUNCTION__); + goto cleanup; + } + + if (!virStoragePoolObjIsActive(privpool)) { + pvsError(VIR_ERR_OPERATION_INVALID, + _("storage pool '%s' is not active"), pool->name); + goto cleanup; + } + + privpool->active = 0; + + if (privpool->configFile == NULL) { + virStoragePoolObjRemove(&privconn->pools, privpool); + privpool = NULL; + } + ret = 0; + + cleanup: + if (privpool) + virStoragePoolObjUnlock(privpool); + pvsDriverUnlock(privconn); + return ret; +} + + +static int +pvsStoragePoolDelete(virStoragePoolPtr pool, unsigned int flags) +{ + pvsConnPtr privconn = pool->conn->privateData; + virStoragePoolObjPtr privpool; + int ret = -1; + + virCheckFlags(0, -1); + + pvsDriverLock(privconn); + privpool = virStoragePoolObjFindByName(&privconn->pools, pool->name); + pvsDriverUnlock(privconn); + + if (privpool == NULL) { + pvsError(VIR_ERR_INVALID_ARG, __FUNCTION__); + goto cleanup; + } + + if (virStoragePoolObjIsActive(privpool)) { + pvsError(VIR_ERR_OPERATION_INVALID, + _("storage pool '%s' is already active"), pool->name); + goto cleanup; + } + + ret = 0; + + cleanup: + if (privpool) + virStoragePoolObjUnlock(privpool); + return ret; +} + + +static int +pvsStoragePoolRefresh(virStoragePoolPtr pool, unsigned int flags) +{ + pvsConnPtr privconn = pool->conn->privateData; + virStoragePoolObjPtr privpool; + int ret = -1; + + virCheckFlags(0, -1); + + pvsDriverLock(privconn); + privpool = virStoragePoolObjFindByName(&privconn->pools, pool->name); + pvsDriverUnlock(privconn); + + if (privpool == NULL) { + pvsError(VIR_ERR_INVALID_ARG, __FUNCTION__); + goto cleanup; + } + + if 
(!virStoragePoolObjIsActive(privpool)) { + pvsError(VIR_ERR_OPERATION_INVALID, + _("storage pool '%s' is not active"), pool->name); + goto cleanup; + } + ret = 0; + + cleanup: + if (privpool) + virStoragePoolObjUnlock(privpool); + return ret; +} + + +static int +pvsStoragePoolGetInfo(virStoragePoolPtr pool, virStoragePoolInfoPtr info) +{ + pvsConnPtr privconn = pool->conn->privateData; + virStoragePoolObjPtr privpool; + int ret = -1; + + pvsDriverLock(privconn); + privpool = virStoragePoolObjFindByName(&privconn->pools, pool->name); + pvsDriverUnlock(privconn); + + if (privpool == NULL) { + pvsError(VIR_ERR_INVALID_ARG, __FUNCTION__); + goto cleanup; + } + + memset(info, 0, sizeof(virStoragePoolInfo)); + info->state = VIR_STORAGE_POOL_RUNNING; + info->capacity = privpool->def->capacity; + info->allocation = privpool->def->allocation; + info->available = privpool->def->available; + ret = 0; + + cleanup: + if (privpool) + virStoragePoolObjUnlock(privpool); + return ret; +} + +static char * +pvsStoragePoolGetXMLDesc(virStoragePoolPtr pool, unsigned int flags) +{ + pvsConnPtr privconn = pool->conn->privateData; + virStoragePoolObjPtr privpool; + char *ret = NULL; + + virCheckFlags(0, NULL); + + pvsDriverLock(privconn); + privpool = virStoragePoolObjFindByName(&privconn->pools, pool->name); + pvsDriverUnlock(privconn); + + if (privpool == NULL) { + pvsError(VIR_ERR_INVALID_ARG, __FUNCTION__); + goto cleanup; + } + + ret = virStoragePoolDefFormat(privpool->def); + + cleanup: + if (privpool) + virStoragePoolObjUnlock(privpool); + return ret; +} + +static int +pvsStoragePoolGetAutostart(virStoragePoolPtr pool, int *autostart) +{ + pvsConnPtr privconn = pool->conn->privateData; + virStoragePoolObjPtr privpool; + int ret = -1; + + pvsDriverLock(privconn); + privpool = virStoragePoolObjFindByName(&privconn->pools, pool->name); + pvsDriverUnlock(privconn); + + if (privpool == NULL) { + pvsError(VIR_ERR_INVALID_ARG, __FUNCTION__); + goto cleanup; + } + + if 
(!privpool->configFile) { + *autostart = 0; + } else { + *autostart = privpool->autostart; + } + ret = 0; + + cleanup: + if (privpool) + virStoragePoolObjUnlock(privpool); + return ret; +} + +static int +pvsStoragePoolSetAutostart(virStoragePoolPtr pool, int autostart) +{ + pvsConnPtr privconn = pool->conn->privateData; + virStoragePoolObjPtr privpool; + int ret = -1; + + pvsDriverLock(privconn); + privpool = virStoragePoolObjFindByName(&privconn->pools, pool->name); + pvsDriverUnlock(privconn); + + if (privpool == NULL) { + pvsError(VIR_ERR_INVALID_ARG, __FUNCTION__); + goto cleanup; + } + + if (!privpool->configFile) { + pvsError(VIR_ERR_INVALID_ARG, "%s", _("pool has no config file")); + goto cleanup; + } + + autostart = (autostart != 0); + privpool->autostart = autostart; + ret = 0; + + cleanup: + if (privpool) + virStoragePoolObjUnlock(privpool); + return ret; +} + +static int +pvsStoragePoolNumVolumes(virStoragePoolPtr pool) +{ + pvsConnPtr privconn = pool->conn->privateData; + virStoragePoolObjPtr privpool; + int ret = -1; + + pvsDriverLock(privconn); + privpool = virStoragePoolObjFindByName(&privconn->pools, pool->name); + pvsDriverUnlock(privconn); + + if (privpool == NULL) { + pvsError(VIR_ERR_INVALID_ARG, __FUNCTION__); + goto cleanup; + } + + if (!virStoragePoolObjIsActive(privpool)) { + pvsError(VIR_ERR_OPERATION_INVALID, + _("storage pool '%s' is not active"), pool->name); + goto cleanup; + } + + ret = privpool->volumes.count; + + cleanup: + if (privpool) + virStoragePoolObjUnlock(privpool); + return ret; +} + +static int +pvsStoragePoolListVolumes(virStoragePoolPtr pool, + char **const names, int maxnames) +{ + pvsConnPtr privconn = pool->conn->privateData; + virStoragePoolObjPtr privpool; + int i = 0, n = 0; + + memset(names, 0, maxnames * sizeof(*names)); + + pvsDriverLock(privconn); + privpool = virStoragePoolObjFindByName(&privconn->pools, pool->name); + pvsDriverUnlock(privconn); + + if (privpool == NULL) { + pvsError(VIR_ERR_INVALID_ARG, 
__FUNCTION__); + goto cleanup; + } + + + if (!virStoragePoolObjIsActive(privpool)) { + pvsError(VIR_ERR_OPERATION_INVALID, + _("storage pool '%s' is not active"), pool->name); + goto cleanup; + } + + for (i = 0; i < privpool->volumes.count && n < maxnames; i++) { + if ((names[n++] = strdup(privpool->volumes.objs[i]->name)) == NULL) { + virReportOOMError(); + goto cleanup; + } + } + + virStoragePoolObjUnlock(privpool); + return n; + + cleanup: + for (n = 0; n < maxnames; n++) + VIR_FREE(names[i]); + + memset(names, 0, maxnames * sizeof(*names)); + if (privpool) + virStoragePoolObjUnlock(privpool); + return -1; +} + +static virStorageVolPtr +pvsStorageVolumeLookupByName(virStoragePoolPtr pool, + const char *name ATTRIBUTE_UNUSED) +{ + pvsConnPtr privconn = pool->conn->privateData; + virStoragePoolObjPtr privpool; + virStorageVolDefPtr privvol; + virStorageVolPtr ret = NULL; + + pvsDriverLock(privconn); + privpool = virStoragePoolObjFindByName(&privconn->pools, pool->name); + pvsDriverUnlock(privconn); + + if (privpool == NULL) { + pvsError(VIR_ERR_INVALID_ARG, __FUNCTION__); + goto cleanup; + } + + + if (!virStoragePoolObjIsActive(privpool)) { + pvsError(VIR_ERR_OPERATION_INVALID, + _("storage pool '%s' is not active"), pool->name); + goto cleanup; + } + + privvol = virStorageVolDefFindByName(privpool, name); + + if (!privvol) { + pvsError(VIR_ERR_NO_STORAGE_VOL, + _("no storage vol with matching name '%s'"), name); + goto cleanup; + } + + ret = virGetStorageVol(pool->conn, privpool->def->name, + privvol->name, privvol->key); + + cleanup: + if (privpool) + virStoragePoolObjUnlock(privpool); + return ret; +} + + +static virStorageVolPtr +pvsStorageVolumeLookupByKey(virConnectPtr conn, const char *key) +{ + pvsConnPtr privconn = conn->privateData; + unsigned int i; + virStorageVolPtr ret = NULL; + + pvsDriverLock(privconn); + for (i = 0; i < privconn->pools.count; i++) { + virStoragePoolObjLock(privconn->pools.objs[i]); + if 
(virStoragePoolObjIsActive(privconn->pools.objs[i])) { + virStorageVolDefPtr privvol = + virStorageVolDefFindByKey(privconn->pools.objs[i], key); + + if (privvol) { + ret = virGetStorageVol(conn, + privconn->pools.objs[i]->def->name, + privvol->name, privvol->key); + virStoragePoolObjUnlock(privconn->pools.objs[i]); + break; + } + } + virStoragePoolObjUnlock(privconn->pools.objs[i]); + } + pvsDriverUnlock(privconn); + + if (!ret) + pvsError(VIR_ERR_NO_STORAGE_VOL, + _("no storage vol with matching key '%s'"), key); + + return ret; +} + +static virStorageVolPtr +pvsStorageVolumeLookupByPathLocked(virConnectPtr conn, const char *path) +{ + pvsConnPtr privconn = conn->privateData; + unsigned int i; + virStorageVolPtr ret = NULL; + + for (i = 0; i < privconn->pools.count; i++) { + virStoragePoolObjLock(privconn->pools.objs[i]); + if (virStoragePoolObjIsActive(privconn->pools.objs[i])) { + virStorageVolDefPtr privvol = + virStorageVolDefFindByPath(privconn->pools.objs[i], path); + + if (privvol) { + ret = virGetStorageVol(conn, + privconn->pools.objs[i]->def->name, + privvol->name, privvol->key); + virStoragePoolObjUnlock(privconn->pools.objs[i]); + break; + } + } + virStoragePoolObjUnlock(privconn->pools.objs[i]); + } + + if (!ret) + pvsError(VIR_ERR_NO_STORAGE_VOL, + _("no storage vol with matching path '%s'"), path); + + return ret; +} + +static virStorageVolPtr +pvsStorageVolumeLookupByPath(virConnectPtr conn, const char *path) +{ + pvsConnPtr privconn = conn->privateData; + virStorageVolPtr ret = NULL; + + pvsDriverLock(privconn); + ret = pvsStorageVolumeLookupByPathLocked(conn, path); + pvsDriverUnlock(privconn); + + return ret; +} + +static virStorageVolDefPtr +pvsStorageVolumeDefine(virStoragePoolObjPtr pool, + const char *xmldesc, + const char *xmlfile, bool is_new) +{ + virStorageVolDefPtr privvol = NULL; + virStorageVolDefPtr ret = NULL; + char *xml_path = NULL; + + if (xmlfile) + privvol = virStorageVolDefParseFile(pool->def, xmlfile); + else + privvol = 
virStorageVolDefParseString(pool->def, xmldesc); + if (privvol == NULL) + goto cleanup; + + if (virStorageVolDefFindByName(pool, privvol->name)) { + pvsError(VIR_ERR_OPERATION_FAILED, + "%s", _("storage vol already exists")); + goto cleanup; + } + + if (is_new) { + /* Make sure enough space */ + if ((pool->def->allocation + privvol->allocation) > + pool->def->capacity) { + pvsError(VIR_ERR_INTERNAL_ERROR, + _("Not enough free space in pool for volume '%s'"), + privvol->name); + goto cleanup; + } + } + + if (VIR_REALLOC_N(pool->volumes.objs, pool->volumes.count + 1) < 0) { + virReportOOMError(); + goto cleanup; + } + + if (virAsprintf(&privvol->target.path, "%s/%s", + pool->def->target.path, privvol->name) == -1) { + virReportOOMError(); + goto cleanup; + } + + privvol->key = strdup(privvol->target.path); + if (privvol->key == NULL) { + virReportOOMError(); + goto cleanup; + } + + if (is_new) { + xml_path = pvsAddFileExt(privvol->target.path, ".xml"); + if (!xml_path) { + virReportOOMError(); + goto cleanup; + } + + if (virXMLSaveFile + (xml_path, privvol->name, "volume-create", xmldesc)) { + pvsError(VIR_ERR_OPERATION_FAILED, + "Can't create file with volume description"); + goto cleanup; + } + + pool->def->allocation += privvol->allocation; + pool->def->available = (pool->def->capacity - + pool->def->allocation); + } + + pool->volumes.objs[pool->volumes.count++] = privvol; + + ret = privvol; + privvol = NULL; + + cleanup: + virStorageVolDefFree(privvol); + VIR_FREE(xml_path); + return ret; +} + +static virStorageVolPtr +pvsStorageVolumeCreateXML(virStoragePoolPtr pool, + const char *xmldesc, unsigned int flags) +{ + pvsConnPtr privconn = pool->conn->privateData; + virStoragePoolObjPtr privpool; + virStorageVolPtr ret = NULL; + virStorageVolDefPtr privvol = NULL; + + virCheckFlags(0, NULL); + + pvsDriverLock(privconn); + privpool = virStoragePoolObjFindByName(&privconn->pools, pool->name); + pvsDriverUnlock(privconn); + + if (privpool == NULL) { + 
pvsError(VIR_ERR_INVALID_ARG, __FUNCTION__); + goto cleanup; + } + + if (!virStoragePoolObjIsActive(privpool)) { + pvsError(VIR_ERR_OPERATION_INVALID, + _("storage pool '%s' is not active"), pool->name); + goto cleanup; + } + + privvol = pvsStorageVolumeDefine(privpool, xmldesc, NULL, true); + + ret = virGetStorageVol(pool->conn, privpool->def->name, + privvol->name, privvol->key); + cleanup: + if (privpool) + virStoragePoolObjUnlock(privpool); + return ret; +} + +static virStorageVolPtr +pvsStorageVolumeCreateXMLFrom(virStoragePoolPtr pool, + const char *xmldesc, + virStorageVolPtr clonevol, + unsigned int flags) +{ + pvsConnPtr privconn = pool->conn->privateData; + virStoragePoolObjPtr privpool; + virStorageVolDefPtr privvol = NULL, origvol = NULL; + virStorageVolPtr ret = NULL; + + virCheckFlags(0, NULL); + + pvsDriverLock(privconn); + privpool = virStoragePoolObjFindByName(&privconn->pools, pool->name); + pvsDriverUnlock(privconn); + + if (privpool == NULL) { + pvsError(VIR_ERR_INVALID_ARG, __FUNCTION__); + goto cleanup; + } + + if (!virStoragePoolObjIsActive(privpool)) { + pvsError(VIR_ERR_OPERATION_INVALID, + _("storage pool '%s' is not active"), pool->name); + goto cleanup; + } + + privvol = virStorageVolDefParseString(privpool->def, xmldesc); + if (privvol == NULL) + goto cleanup; + + if (virStorageVolDefFindByName(privpool, privvol->name)) { + pvsError(VIR_ERR_OPERATION_FAILED, + "%s", _("storage vol already exists")); + goto cleanup; + } + + origvol = virStorageVolDefFindByName(privpool, clonevol->name); + if (!origvol) { + pvsError(VIR_ERR_NO_STORAGE_VOL, + _("no storage vol with matching name '%s'"), + clonevol->name); + goto cleanup; + } + + /* Make sure enough space */ + if ((privpool->def->allocation + privvol->allocation) > + privpool->def->capacity) { + pvsError(VIR_ERR_INTERNAL_ERROR, + _("Not enough free space in pool for volume '%s'"), + privvol->name); + goto cleanup; + } + privpool->def->available = (privpool->def->capacity - + 
privpool->def->allocation); + + if (VIR_REALLOC_N(privpool->volumes.objs, + privpool->volumes.count + 1) < 0) { + virReportOOMError(); + goto cleanup; + } + + if (virAsprintf(&privvol->target.path, "%s/%s", + privpool->def->target.path, privvol->name) == -1) { + virReportOOMError(); + goto cleanup; + } + + privvol->key = strdup(privvol->target.path); + if (privvol->key == NULL) { + virReportOOMError(); + goto cleanup; + } + + privpool->def->allocation += privvol->allocation; + privpool->def->available = (privpool->def->capacity - + privpool->def->allocation); + + privpool->volumes.objs[privpool->volumes.count++] = privvol; + + ret = virGetStorageVol(pool->conn, privpool->def->name, + privvol->name, privvol->key); + privvol = NULL; + + cleanup: + virStorageVolDefFree(privvol); + if (privpool) + virStoragePoolObjUnlock(privpool); + return ret; +} + +static int +pvsStorageVolumeDelete(virStorageVolPtr vol, unsigned int flags) +{ + pvsConnPtr privconn = vol->conn->privateData; + virStoragePoolObjPtr privpool; + virStorageVolDefPtr privvol; + int i; + int ret = -1; + char *xml_path = NULL; + + virCheckFlags(0, -1); + + pvsDriverLock(privconn); + privpool = virStoragePoolObjFindByName(&privconn->pools, vol->pool); + pvsDriverUnlock(privconn); + + if (privpool == NULL) { + pvsError(VIR_ERR_INVALID_ARG, __FUNCTION__); + goto cleanup; + } + + + privvol = virStorageVolDefFindByName(privpool, vol->name); + + if (privvol == NULL) { + pvsError(VIR_ERR_NO_STORAGE_VOL, + _("no storage vol with matching name '%s'"), vol->name); + goto cleanup; + } + + if (!virStoragePoolObjIsActive(privpool)) { + pvsError(VIR_ERR_OPERATION_INVALID, + _("storage pool '%s' is not active"), vol->pool); + goto cleanup; + } + + + privpool->def->allocation -= privvol->allocation; + privpool->def->available = (privpool->def->capacity - + privpool->def->allocation); + + for (i = 0; i < privpool->volumes.count; i++) { + if (privpool->volumes.objs[i] == privvol) { + xml_path = 
pvsAddFileExt(privvol->target.path, ".xml"); + if (!xml_path) { + virReportOOMError(); + goto cleanup; + } + + if (unlink(xml_path)) { + pvsError(VIR_ERR_OPERATION_FAILED, + _("Can't remove file '%s'"), xml_path); + goto cleanup; + } + + virStorageVolDefFree(privvol); + + if (i < (privpool->volumes.count - 1)) + memmove(privpool->volumes.objs + i, + privpool->volumes.objs + i + 1, + sizeof(*(privpool->volumes.objs)) * + (privpool->volumes.count - (i + 1))); + + if (VIR_REALLOC_N(privpool->volumes.objs, + privpool->volumes.count - 1) < 0) { + ; /* Failure to reduce memory allocation isn't fatal */ + } + privpool->volumes.count--; + + break; + } + } + ret = 0; + + cleanup: + if (privpool) + virStoragePoolObjUnlock(privpool); + VIR_FREE(xml_path); + return ret; +} + + +static int +pvsStorageVolumeTypeForPool(int pooltype) +{ + + switch (pooltype) { + case VIR_STORAGE_POOL_DIR: + case VIR_STORAGE_POOL_FS: + case VIR_STORAGE_POOL_NETFS: + return VIR_STORAGE_VOL_FILE; + default: + return VIR_STORAGE_VOL_BLOCK; + } +} + +static int +pvsStorageVolumeGetInfo(virStorageVolPtr vol, virStorageVolInfoPtr info) +{ + pvsConnPtr privconn = vol->conn->privateData; + virStoragePoolObjPtr privpool; + virStorageVolDefPtr privvol; + int ret = -1; + + pvsDriverLock(privconn); + privpool = virStoragePoolObjFindByName(&privconn->pools, vol->pool); + pvsDriverUnlock(privconn); + + if (privpool == NULL) { + pvsError(VIR_ERR_INVALID_ARG, __FUNCTION__); + goto cleanup; + } + + privvol = virStorageVolDefFindByName(privpool, vol->name); + + if (privvol == NULL) { + pvsError(VIR_ERR_NO_STORAGE_VOL, + _("no storage vol with matching name '%s'"), vol->name); + goto cleanup; + } + + if (!virStoragePoolObjIsActive(privpool)) { + pvsError(VIR_ERR_OPERATION_INVALID, + _("storage pool '%s' is not active"), vol->pool); + goto cleanup; + } + + memset(info, 0, sizeof(*info)); + info->type = pvsStorageVolumeTypeForPool(privpool->def->type); + info->capacity = privvol->capacity; + info->allocation = 
privvol->allocation; + ret = 0; + + cleanup: + if (privpool) + virStoragePoolObjUnlock(privpool); + return ret; +} + +static char * +pvsStorageVolumeGetXMLDesc(virStorageVolPtr vol, unsigned int flags) +{ + pvsConnPtr privconn = vol->conn->privateData; + virStoragePoolObjPtr privpool; + virStorageVolDefPtr privvol; + char *ret = NULL; + + virCheckFlags(0, NULL); + + pvsDriverLock(privconn); + privpool = virStoragePoolObjFindByName(&privconn->pools, vol->pool); + pvsDriverUnlock(privconn); + + if (privpool == NULL) { + pvsError(VIR_ERR_INVALID_ARG, __FUNCTION__); + goto cleanup; + } + + privvol = virStorageVolDefFindByName(privpool, vol->name); + + if (privvol == NULL) { + pvsError(VIR_ERR_NO_STORAGE_VOL, + _("no storage vol with matching name '%s'"), vol->name); + goto cleanup; + } + + if (!virStoragePoolObjIsActive(privpool)) { + pvsError(VIR_ERR_OPERATION_INVALID, + _("storage pool '%s' is not active"), vol->pool); + goto cleanup; + } + + ret = virStorageVolDefFormat(privpool->def, privvol); + + cleanup: + if (privpool) + virStoragePoolObjUnlock(privpool); + return ret; +} + +static char * +pvsStorageVolumeGetPath(virStorageVolPtr vol) +{ + pvsConnPtr privconn = vol->conn->privateData; + virStoragePoolObjPtr privpool; + virStorageVolDefPtr privvol; + char *ret = NULL; + + pvsDriverLock(privconn); + privpool = virStoragePoolObjFindByName(&privconn->pools, vol->pool); + pvsDriverUnlock(privconn); + + if (privpool == NULL) { + pvsError(VIR_ERR_INVALID_ARG, __FUNCTION__); + goto cleanup; + } + + privvol = virStorageVolDefFindByName(privpool, vol->name); + + if (privvol == NULL) { + pvsError(VIR_ERR_NO_STORAGE_VOL, + _("no storage vol with matching name '%s'"), vol->name); + goto cleanup; + } + + if (!virStoragePoolObjIsActive(privpool)) { + pvsError(VIR_ERR_OPERATION_INVALID, + _("storage pool '%s' is not active"), vol->pool); + goto cleanup; + } + + ret = strdup(privvol->target.path); + if (ret == NULL) + virReportOOMError(); + + cleanup: + if (privpool) + 
virStoragePoolObjUnlock(privpool); + return ret; +} + +static virStorageDriver pvsStorageDriver = { + .name = "PVS", + .open = pvsStorageOpen, /* 0.9.11 */ + .close = pvsStorageClose, /* 0.9.11 */ + + .numOfPools = pvsStorageNumPools, /* 0.9.11 */ + .listPools = pvsStorageListPools, /* 0.9.11 */ + .numOfDefinedPools = pvsStorageNumDefinedPools, /* 0.9.11 */ + .listDefinedPools = pvsStorageListDefinedPools, /* 0.9.11 */ + .findPoolSources = pvsStorageFindPoolSources, /* 0.9.11 */ + .poolLookupByName = pvsStoragePoolLookupByName, /* 0.9.11 */ + .poolLookupByUUID = pvsStoragePoolLookupByUUID, /* 0.9.11 */ + .poolLookupByVolume = pvsStoragePoolLookupByVolume, /* 0.9.11 */ + .poolDefineXML = pvsStoragePoolDefine, /* 0.9.11 */ + .poolBuild = pvsStoragePoolBuild, /* 0.9.11 */ + .poolUndefine = pvsStoragePoolUndefine, /* 0.9.11 */ + .poolCreate = pvsStoragePoolStart, /* 0.9.11 */ + .poolDestroy = pvsStoragePoolDestroy, /* 0.9.11 */ + .poolDelete = pvsStoragePoolDelete, /* 0.9.11 */ + .poolRefresh = pvsStoragePoolRefresh, /* 0.9.11 */ + .poolGetInfo = pvsStoragePoolGetInfo, /* 0.9.11 */ + .poolGetXMLDesc = pvsStoragePoolGetXMLDesc, /* 0.9.11 */ + .poolGetAutostart = pvsStoragePoolGetAutostart, /* 0.9.11 */ + .poolSetAutostart = pvsStoragePoolSetAutostart, /* 0.9.11 */ + .poolNumOfVolumes = pvsStoragePoolNumVolumes, /* 0.9.11 */ + .poolListVolumes = pvsStoragePoolListVolumes, /* 0.9.11 */ + + .volLookupByName = pvsStorageVolumeLookupByName, /* 0.9.11 */ + .volLookupByKey = pvsStorageVolumeLookupByKey, /* 0.9.11 */ + .volLookupByPath = pvsStorageVolumeLookupByPath, /* 0.9.11 */ + .volCreateXML = pvsStorageVolumeCreateXML, /* 0.9.11 */ + .volCreateXMLFrom = pvsStorageVolumeCreateXMLFrom, /* 0.9.11 */ + .volDelete = pvsStorageVolumeDelete, /* 0.9.11 */ + .volGetInfo = pvsStorageVolumeGetInfo, /* 0.9.11 */ + .volGetXMLDesc = pvsStorageVolumeGetXMLDesc, /* 0.9.11 */ + .volGetPath = pvsStorageVolumeGetPath, /* 0.9.11 */ + .poolIsActive = pvsStoragePoolIsActive, /* 0.9.11 */ + 
.poolIsPersistent = pvsStoragePoolIsPersistent, /* 0.9.11 */ +}; + +int +pvsStorageRegister(void) +{ + if (virRegisterStorageDriver(&pvsStorageDriver) < 0) + return -1; + + return 0; +} diff --git a/src/pvs/pvs_utils.c b/src/pvs/pvs_utils.c index 3a548bd..e5241ee 100644 --- a/src/pvs/pvs_utils.c +++ b/src/pvs/pvs_utils.c @@ -117,3 +117,23 @@ pvsCmdRun(const char *binary, ...) return ret; } + +/* + * Return new file path in malloced string created by + * concatenating first and second function arguments. + */ +char * +pvsAddFileExt(const char *path, const char *ext) +{ + char *new_path = NULL; + size_t len = strlen(path) + strlen(ext) + 1; + + if (VIR_ALLOC_N(new_path, len) < 0) + return NULL; + + if (!virStrcpy(new_path, path, len)) + return NULL; + strcat(new_path, ext); + + return new_path; +} -- 1.7.1
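The pvsAddFileExt() helper added at the end of this patch simply concatenates a path and an extension into a freshly allocated string (note that, as written, new_path leaks if virStrcpy() fails). A minimal plain-C sketch of the same logic, using only the standard library instead of libvirt's VIR_ALLOC_N/virStrcpy, could look like:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Plain-C sketch of pvsAddFileExt(): return a newly allocated string
 * holding "path" + "ext", or NULL on allocation failure.  The buffer
 * is exactly strlen(path) + strlen(ext) + 1 bytes, so snprintf()
 * cannot truncate. */
char *
add_file_ext(const char *path, const char *ext)
{
    size_t len = strlen(path) + strlen(ext) + 1;
    char *new_path = malloc(len);

    if (!new_path)
        return NULL;

    snprintf(new_path, len, "%s%s", path, ext);
    return new_path;
}
```

The caller frees the result; the patch uses this pattern to derive the ".xml" side file kept next to each volume path.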

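For reference, pvsStoragePoolGetAlloc() in the storage patch above fills the pool's capacity/available/allocation fields from statvfs(). The same arithmetic, isolated into a standalone function, is sketched below; note the patch multiplies f_bfree by f_bsize while capacity uses f_frsize, whereas this sketch uses f_frsize for both so the two figures share units (an assumption, not the patch's literal behavior):

```c
#include <stdio.h>
#include <sys/statvfs.h>

/* Standalone version of the pvsStoragePoolGetAlloc() arithmetic:
 * capacity   = frsize * blocks  (total size of the filesystem)
 * available  = frsize * bfree   (free space)
 * allocation = capacity - available
 * Returns 0 on success, -1 if statvfs() fails. */
int
pool_alloc(const char *path,
           unsigned long long *capacity,
           unsigned long long *available,
           unsigned long long *allocation)
{
    struct statvfs sb;

    if (statvfs(path, &sb) < 0)
        return -1;

    *capacity = (unsigned long long)sb.f_frsize * sb.f_blocks;
    *available = (unsigned long long)sb.f_frsize * sb.f_bfree;
    *allocation = *capacity - *available;
    return 0;
}
```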
To create a new VM in PVS we should issue "prlctl create" command, and give path to the directory, where VM should be created. VM's storage will be in that directory later. So in this first version find out location of first VM's hard disk and create VM there. Signed-off-by: Dmitry Guryanov <dguryanov@parallels.com> --- src/pvs/pvs_driver.c | 77 ++++++++++++++++++++++++++++++++++++++++++++++++- src/pvs/pvs_driver.h | 4 ++ src/pvs/pvs_storage.c | 6 +--- 3 files changed, 81 insertions(+), 6 deletions(-) diff --git a/src/pvs/pvs_driver.c b/src/pvs/pvs_driver.c index 5e9b691..7a072bf 100644 --- a/src/pvs/pvs_driver.c +++ b/src/pvs/pvs_driver.c @@ -1077,6 +1077,73 @@ pvsApplyChanges(virDomainObjPtr dom, virDomainDefPtr newdef) return 0; } +static int +pvsCreateVm(virConnectPtr conn, virDomainDefPtr def) +{ + pvsConnPtr privconn = conn->privateData; + int i; + virStorageVolDefPtr privvol = NULL; + virStoragePoolObjPtr pool = NULL; + virStorageVolPtr vol = NULL; + char uuidstr[VIR_UUID_STRING_BUFLEN]; + + for (i = 0; i < def->ndisks; i++) { + if (def->disks[i]->device != VIR_DOMAIN_DISK_DEVICE_DISK) + continue; + + vol = pvsStorageVolumeLookupByPathLocked(conn, def->disks[i]->src); + if (!vol) { + pvsError(VIR_ERR_INVALID_ARG, + _("Can't find volume with path '%s'"), + def->disks[i]->src); + return -1; + } + break; + } + + if (!vol) { + pvsError(VIR_ERR_INVALID_ARG, + _("Can't create VM without hard disks")); + return -1; + } + + pool = virStoragePoolObjFindByName(&privconn->pools, vol->pool); + if (!pool) { + pvsError(VIR_ERR_INVALID_ARG, + _("Can't find storage pool with name '%s'"), + vol->pool); + goto error; + } + + privvol = virStorageVolDefFindByPath(pool, def->disks[i]->src); + if (!privvol) { + pvsError(VIR_ERR_INVALID_ARG, + _("Can't find storage volume definition for path '%s'"), + def->disks[i]->src); + goto error2; + } + + virUUIDFormat(def->uuid, uuidstr); + + if (pvsCmdRun(PRLCTL, "create", def->name, "--dst", + pool->def->target.path, "--no-hdd", "--uuid", 
uuidstr, NULL)) + goto error2; + + if (pvsCmdRun(PRLCTL, "set", def->name, "--vnc-mode", "auto", NULL)) + goto error2; + + virStoragePoolObjUnlock(pool); + virUnrefStorageVol(vol); + + return 0; + + error2: + virStoragePoolObjUnlock(pool); + error: + virUnrefStorageVol(vol); + return -1; +} + static virDomainPtr pvsDomainDefineXML(virConnectPtr conn, const char *xml) { @@ -1113,8 +1180,16 @@ pvsDomainDefineXML(virConnectPtr conn, const char *xml) def = NULL; } else { - pvsError(VIR_ERR_NO_SUPPORT, _("Not implemented yet")); + if (pvsCreateVm(conn, def)) goto cleanup; + if (pvsLoadDomains(privconn, def->name)) + goto cleanup; + dom = virDomainFindByName(&privconn->domains, def->name); + if (!dom) { + pvsError(VIR_ERR_INTERNAL_ERROR, + _("Domain is not defined after creation")); + goto cleanup; + } } event = virDomainEventNewFromObj(dom, diff --git a/src/pvs/pvs_driver.h b/src/pvs/pvs_driver.h index 7384eb1..1d502f3 100644 --- a/src/pvs/pvs_driver.h +++ b/src/pvs/pvs_driver.h @@ -68,5 +68,9 @@ int pvsCmdRun(const char *binary, ...); char * pvsAddFileExt(const char *path, const char *ext); void pvsDriverLock(pvsConnPtr driver); void pvsDriverUnlock(pvsConnPtr driver); +virStorageVolPtr pvsStorageVolumeLookupByPathLocked(virConnectPtr + conn, + const char + *path); #endif diff --git a/src/pvs/pvs_storage.c b/src/pvs/pvs_storage.c index 95f1fde..c177ef3 100644 --- a/src/pvs/pvs_storage.c +++ b/src/pvs/pvs_storage.c @@ -41,10 +41,6 @@ static virStorageVolDefPtr pvsStorageVolumeDefine(virStoragePoolObjPtr pool, const char *xmldesc, const char *xmlfile, bool is_new); -static virStorageVolPtr pvsStorageVolumeLookupByPathLocked(virConnectPtr - conn, - const char - *path); static virStorageVolPtr pvsStorageVolumeLookupByPath(virConnectPtr conn, const char *path); static int pvsStoragePoolGetAlloc(virStoragePoolDefPtr def); @@ -941,7 +937,7 @@ pvsStorageVolumeLookupByKey(virConnectPtr conn, const char *key) return ret; } -static virStorageVolPtr +virStorageVolPtr 
pvsStorageVolumeLookupByPathLocked(virConnectPtr conn, const char *path) { pvsConnPtr privconn = conn->privateData; -- 1.7.1
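The disk-scanning loop in pvsCreateVm() above picks the first disk whose device type is DISK, and refuses to create a VM that has no hard disk at all. The control flow can be modeled standalone; the struct below is an illustrative stand-in, not libvirt's real virDomainDiskDefPtr:

```c
#include <stddef.h>
#include <string.h>

/* Toy model of the pvsCreateVm() disk scan: return the source path of
 * the first disk whose device type is DISK, skipping cdroms and
 * floppies, or NULL when no hard disk exists (the caller then rejects
 * the VM definition). */
enum disk_device { DEV_DISK, DEV_CDROM, DEV_FLOPPY };

struct disk {
    enum disk_device device;
    const char *src;
};

const char *
first_hdd_path(const struct disk *disks, size_t ndisks)
{
    size_t i;

    for (i = 0; i < ndisks; i++)
        if (disks[i].device == DEV_DISK)
            return disks[i].src;
    return NULL;
}
```

In the patch, the returned path is then resolved to a storage volume and its pool, and the pool's target path is handed to "prlctl create --dst".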

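Both pvsStorageVolumeDefine() and pvsStorageVolumeCreateXMLFrom() in the storage patch guard volume creation with the same "make sure enough space" test. Reduced to a pure function, the check is just:

```c
/* A new volume fits only while the pool's current allocation plus the
 * volume's requested allocation stays within the pool's capacity; this
 * is the comparison both storage-patch call sites perform before
 * bumping pool->def->allocation. */
int
volume_fits(unsigned long long capacity,
            unsigned long long allocation,
            unsigned long long vol_alloc)
{
    return allocation + vol_alloc <= capacity;
}
```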
On Fri, Apr 13, 2012 at 10:26:14PM +0400, Dmitry Guryanov wrote:
> Parallels Virtuozzo Server is a cloud-ready virtualization solution
> that allows users to simultaneously run multiple virtual machines and
> containers on the same physical server.
> Current name of this product is Parallels Server Bare Metal and more
> information about it can be found here -
> http://www.parallels.com/products/server/baremetal/sp/.
> This driver will work with PVS version 6.0; a beta is scheduled for
> Q2 2012.
Okay, I started to review this version 2 of the driver. IIRC the first
version was relying on an API which wasn't LGPL compatible, so that was
a no-go. What I understand from this second version is that the driver
now talks to the prlctl command to get information back (using JSON).
That looks acceptable, but before making further review can you fix the
following things I found out in quickly reviewing patches 1-3:

 - the driver open doesn't seem to make any check about the
   availability of the prlctl command; it should fail at Open() time
   if the command is not found.

 - all the entry points in the driver structure are marked as 0.9.11;
   since 0.9.11 is out already, this needs to be bumped to 0.9.12,
   assuming that will be the next version and the patches make it in
   time for that release (scheduled at the end of the month).

 - the configure check blindly assumes that if compiled on Linux the
   pvs driver should be activated. Since we rely on a command-line tool
   to provide the interface that's a relatively safe assumption, but if
   I understand correctly Parallels requires a modified kernel version,
   which is not upstream, right? To that extent you're at the same
   level as the OpenVZ driver (you rely on it underneath, right?).
   Do you support all Linux archs? If not, please improve the
   configure-time check to the architectures you actually support.

In general I wonder why make this a new driver instead of ramping up
the existing OpenVZ one: are the two really incompatible, or is the
existing OpenVZ driver not proper for current versions? Basically I
wonder if we really need two drivers, assuming the implementation of
the hypervisor is based on the same core for both.

thanks!

Daniel

--
Daniel Veillard      | libxml Gnome XML XSLT toolkit  http://xmlsoft.org/
daniel@veillard.com  | Rpmfind RPM search engine      http://rpmfind.net/
http://veillard.com/ | virtualization library         http://libvirt.org/
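Daniel's first point, failing at Open() time when prlctl is absent, amounts to a $PATH probe. libvirt already provides virFindFileInPath() for exactly this; the hand-rolled, self-contained sketch below only illustrates the idea, not how the driver would actually do it:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

/* Walk $PATH and test each candidate directory for an executable
 * named "tool" with access(X_OK).  Returns 1 if found, 0 otherwise.
 * (In the driver, pvsOpen() would run such a check and fail the open
 * when the tool is missing.) */
int
tool_in_path(const char *tool)
{
    const char *path = getenv("PATH");
    char candidate[4096];

    if (!path)
        return 0;

    while (*path) {
        size_t seg = strcspn(path, ":");

        snprintf(candidate, sizeof(candidate), "%.*s/%s",
                 (int)seg, path, tool);
        if (access(candidate, X_OK) == 0)
            return 1;

        path += seg;
        if (*path == ':')
            path++;
    }
    return 0;
}
```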

On 04/16/2012 06:13 AM, Daniel Veillard wrote:
> On Fri, Apr 13, 2012 at 10:26:14PM +0400, Dmitry Guryanov wrote:
>> Parallels Virtuozzo Server is a cloud-ready virtualization solution that allows users to simultaneously run multiple virtual machines and containers on the same physical server.
>> Current name of this product is Parallels Server Bare Metal and more information about it can be found here - http://www.parallels.com/products/server/baremetal/sp/.
>> This driver will work with PVS version 6.0, beta version scheduled at 2012 Q2.
>
> Okay, I started to review this version 2 of the driver. IIRC the first version was relying on an API which wasn't LGPL compatible, so that was a no-go. What I understand from this second version is that the driver now talks to the prlctl command to get information back (using JSON). That looks acceptable, but before making a further review, can you fix the following things I found while quickly reviewing patches 1-3:
> - the driver open doesn't seem to make any check about the availability of the prlctl command; it should fail to open if this is not found at Open() time.
> - all the entry points in the driver structure are marked as 0.9.11; since 0.9.11 is out already, this would need to be bumped to 0.9.12, assuming that will be the next version and the patches make it in time for that release (scheduled at the end of the month).

Ok, I'll correct these things.
> - the configure check blindly assumes that if compiled on Linux the pvs driver should be activated. Since we rely on a command-line tool to provide the interface, that's a relatively safe assumption, but if I understand correctly Parallels requires a modified kernel version, which is not upstream, right ?

PVS is a full distribution, based on Cloud Linux, and it has a modified kernel.

> To that extent you're at the same level as the OpenVZ driver (you rely on it underneath, right ?) Do you support all Linux archs ? If not, please improve the configure-time check to the architectures you actually support.

Ok, I'll add additional checks.
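[Editor's note: on the configure side, the check from patch 1 could be tightened along these lines. This is a sketch only — it assumes PVS 6.0 supports x86_64 hosts only, which would need to be confirmed; the actual supported architecture list has to come from Parallels.]

```
dnl Sketch: enable the PVS driver by default only on Linux/x86_64
if test "$with_pvs" = "check"; then
    with_pvs=$with_linux
    case $host_cpu in
        x86_64) ;;
        *) with_pvs=no ;;
    esac
fi

if test "$with_pvs" = "yes" && test "$with_linux" = "no"; then
    AC_MSG_ERROR([The PVS driver can be enabled on Linux only.])
fi

if test "$with_pvs" = "yes"; then
    AC_DEFINE_UNQUOTED([WITH_PVS], 1, [whether PVS driver is enabled])
fi
AM_CONDITIONAL([WITH_PVS], [test "$with_pvs" = "yes"])
```

This keeps the `check`/`yes`/`no` tri-state convention the other libvirt drivers use, and requires `AC_CANONICAL_HOST` so that `$host_cpu` is defined.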
> In general I wonder why make it a new driver instead of ramping up the existing OpenVZ one; are the 2 really incompatible, or is the existing OpenVZ driver not suitable for current versions ? Basically I wonder if we really need 2 drivers, assuming the implementation of the hypervisor is based on the same core for both.

First, PVS and OpenVZ are different products: PVS includes both full virtualization and OS-level virtualization, while OpenVZ supports only OS-level virtualization. In PVS both types of virtual environments can be managed with the single prlctl utility, while vzctl can handle only containers. This version of the pvs driver supports only virtual machines, but support for containers is planned too.
Second, prlctl and vzctl+vzlist have completely different output, and their command-line parameters also differ, especially for device and network parameters. I think a driver which supports two types of virtualization and works with two different utilities would be too complicated, and it's better to have two separate drivers. Also, the OpenVZ driver works directly with container config files in some places, which is not possible when using prlctl.

Thanks for your response!
> thanks !
> Daniel
-- 
Dmitry Guryanov

On Tue, Apr 17, 2012 at 10:01:50PM +0400, Dmitry Guryanov wrote:
> On 04/16/2012 06:13 AM, Daniel Veillard wrote:
>> On Fri, Apr 13, 2012 at 10:26:14PM +0400, Dmitry Guryanov wrote:
>>> Parallels Virtuozzo Server is a cloud-ready virtualization solution that allows users to simultaneously run multiple virtual machines and containers on the same physical server.
>>> [...]
>>
>> [...]
>> - the driver open doesn't seem to make any check about the availability of the prlctl command; it should fail to open if this is not found at Open() time.
>> - all the entry points in the driver structure are marked as 0.9.11; since 0.9.11 is out already, this would need to be bumped to 0.9.12, assuming that will be the next version and the patches make it in time for that release (scheduled at the end of the month).
>
> Ok, I'll correct these things.
>
>> - the configure check blindly assumes that if compiled on Linux the pvs driver should be activated. Since we rely on a command-line tool to provide the interface, that's a relatively safe assumption, but if I understand correctly Parallels requires a modified kernel version, which is not upstream, right ?
>
> PVS is a full distribution, based on Cloud Linux, and it has a modified kernel.
>
>> To that extent you're at the same level as the OpenVZ driver (you rely on it underneath, right ?) Do you support all Linux archs ? If not, please improve the configure-time check to the architectures you actually support.
>
> Ok, I'll add additional checks.
Okay, should be simple enough to fix those 3 bits
>> In general I wonder why make it a new driver instead of ramping up the existing OpenVZ one; are the 2 really incompatible, or is the existing OpenVZ driver not suitable for current versions ? Basically I wonder if we really need 2 drivers, assuming the implementation of the hypervisor is based on the same core for both.
>
> First, PVS and OpenVZ are different products: PVS includes both full virtualization and OS-level virtualization, while OpenVZ supports only OS-level virtualization. In PVS both types of virtual environments can be managed with the single prlctl utility, while vzctl can handle only containers. This version of the pvs driver supports only virtual machines, but support for containers is planned too.
>
> Second, prlctl and vzctl+vzlist have completely different output, and their command-line parameters also differ, especially for device and network parameters. I think a driver which supports two types of virtualization and works with two different utilities would be too complicated, and it's better to have two separate drivers. Also, the OpenVZ driver works directly with container config files in some places, which is not possible when using prlctl.

Okay, thanks for the explanations :-) The simple fact that PVS can do full virt support is IMHO a sufficient differentiator !

Daniel

-- 
Daniel Veillard      | libxml Gnome XML XSLT toolkit  http://xmlsoft.org/
daniel@veillard.com  | Rpmfind RPM search engine  http://rpmfind.net/
http://veillard.com/ | virtualization library  http://libvirt.org/