Re: [libvirt] [libvirt-users] virt-manager - how to add /dev/mapper as a storage pool
by Cole Robinson
On 08/09/2011 12:31 PM, Marc Haber wrote:
> On Tue, Aug 09, 2011 at 11:09:06AM -0400, Cole Robinson wrote:
>> On 08/08/2011 04:37 PM, Marc Haber wrote:
>>> I would like to be able to configure VMs running off dm-crypt devices
>>> that were unlocked in the host. Unlocked dm-crypt devices show up in
>>> /dev/mapper/devicename, with devicename being the second parameter
>>> given to cryptsetup luksOpen.
>>>
>>> The LVM storage pool type insists on searching in /dev/vgname and
>>> cannot be tricked into reading /dev/mapper by giving it a fake VG
>>> named mapper; the "dir" storage pool type mishandles
>>> /dev/mapper/control ("illegal seek").
>>>
>>> Is there a workaround to be able to use such devices in virt-manager
>>> without having to define a single storage pool for every device used?
>>
>> cc-ing virt-tool-list
>
> Bcc, or forgotten?
>
>> Latest virt-manager-0.9.0 allows adding a libvirt mpath pool which might
>> be what you're looking for. If you don't have that version you can try
>> configuring it on the command line with virsh.
>
> I have that version, but an mpath pool set to /dev/mapper stays empty.
> Googling and reading the available docs suggests that this feature
> only looks for /dev/mapper/mpath*
>
Forgotten, sorry.
Not really sure then, maybe this is something that libvirt should be
extended to handle. CCing libvirt devel list
- Cole
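For reference, the enumeration such a pool would need is small. Here is a sketch (hypothetical helper, not libvirt code) of listing unlocked dm-crypt devices under /dev/mapper while skipping the device-mapper control node that trips up the "dir" pool type:

```python
import os

def mapper_volume_names(names):
    """Filter a /dev/mapper directory listing down to usable block devices.

    The 'control' node is the device-mapper ioctl interface, not a volume;
    it is what the "dir" pool type chokes on ("illegal seek")."""
    return sorted(n for n in names if n != "control")

def list_mapper_volumes(directory="/dev/mapper"):
    # On a live system the remaining names mirror the second argument
    # given to `cryptsetup luksOpen`.
    return mapper_volume_names(os.listdir(directory))
```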
[libvirt] [PATCH] Fix memory leak while scanning snapshots
by Philipp Hahn
If a snapshot with the same name already exists, virDomainSnapshotAssignDef()
just returns NULL, in which case the snapshot definition is leaked.
Currently this leak is not a big problem, since qemuDomainSnapshotLoad()
is only called once during initial startup of libvirtd.
Signed-off-by: Philipp Hahn <hahn(a)univention.de>
---
src/qemu/qemu_driver.c | 6 +++++-
1 files changed, 5 insertions(+), 1 deletions(-)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index ce19be7..b815046 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -293,6 +293,7 @@ static void qemuDomainSnapshotLoad(void *payload,
int ret;
char *fullpath;
virDomainSnapshotDefPtr def = NULL;
+ virDomainSnapshotObjPtr snap = NULL;
char ebuf[1024];
virDomainObjLock(vm);
@@ -344,7 +345,10 @@ static void qemuDomainSnapshotLoad(void *payload,
continue;
}
- virDomainSnapshotAssignDef(&vm->snapshots, def);
+ snap = virDomainSnapshotAssignDef(&vm->snapshots, def);
+ if (snap == NULL) {
+ virDomainSnapshotDefFree(def);
+ }
VIR_FREE(fullpath);
VIR_FREE(xmlStr);
--
1.7.1
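The ownership rule this patch enforces is language-independent: when handing an object to a registry fails, the caller still owns it and must free it. A minimal Python sketch of the same pattern (hypothetical names, not the libvirt API):

```python
class SnapshotList:
    def __init__(self):
        self._by_name = {}

    def assign(self, name, definition):
        """Return the stored object, or None if the name already exists
        (mirroring virDomainSnapshotAssignDef returning NULL)."""
        if name in self._by_name:
            return None
        self._by_name[name] = definition
        return definition

def load_snapshot(snapshots, name, definition, freed):
    snap = snapshots.assign(name, definition)
    if snap is None:
        # Registry rejected it: the caller still owns `definition`,
        # so it must be released (the C code calls virDomainSnapshotDefFree).
        freed.append(definition)
    return snap
```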
[libvirt] [RFC v4] Export KVM Host Power Management capabilities
by Srivatsa S. Bhat
This patch exports KVM Host Power Management capabilities as XML so that
higher-level systems management software can make use of these features
available in the host.
The script "pm-is-supported" (from pm-utils package) is run to discover if
Suspend-to-RAM (S3) or Suspend-to-Disk (S4) is supported by the host.
If either of them is supported, then a new tag "<power_management>" is
introduced in the XML under the <host> tag.
E.g., when the host supports both S3 and S4, the XML looks like this:
<capabilities>
<host>
<uuid>dc699581-48a2-11cb-b8a8-9a0265a79bbe</uuid>
<cpu>
<arch>i686</arch>
<model>coreduo</model>
<vendor>Intel</vendor>
<topology sockets='1' cores='2' threads='1'/>
<feature name='xtpr'/>
<feature name='tm2'/>
<feature name='est'/>
<feature name='vmx'/>
<feature name='pbe'/>
<feature name='tm'/>
<feature name='ht'/>
<feature name='ss'/>
<feature name='acpi'/>
<feature name='ds'/>
</cpu>
<power_management> <<<=== New host power management features
<S3/>
<S4/>
</power_management>
<migration_features>
<live/>
<uri_transports>
<uri_transport>tcp</uri_transport>
</uri_transports>
</migration_features>
</host>
...
If the query for power management features succeeds but the host supports
none of them, the XML will contain an empty <power_management/> tag.
If the PM query itself fails, the XML will not contain a
"power_management" tag at all.
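A consumer of this XML therefore has to distinguish three states: tag with children, empty tag, and no tag. A sketch of that check in Python (standard library only; element names as in the example above):

```python
import xml.etree.ElementTree as ET

def host_pm_features(capabilities_xml):
    """Return None when <power_management> is absent (the PM query failed),
    an empty list when the tag is present but empty (no features),
    or the list of feature tags such as ['S3', 'S4']."""
    root = ET.fromstring(capabilities_xml)
    pm = root.find("./host/power_management")
    if pm is None:          # must use `is None`: an empty element is falsy
        return None
    return [child.tag for child in pm]
```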
Open issues:
-----------
1. Design new APIs in libvirt to exploit power management features
such as S3/S4. This was discussed in [3] and [4].
Please let me know your comments and feedback.
Changelog:
---------
v1: The idea of exporting host power management capabilities through
libvirt was discussed in [1].
v2: A working implementation was presented for review in [2].
v3: Omissions and improvements pointed out in v2 were taken care of in [5].
References:
----------
[1] Exporting KVM host power saving capabilities through libvirt
http://thread.gmane.org/gmane.comp.emulators.libvirt/40886
[2] http://www.redhat.com/archives/libvir-list/2011-August/msg00238.html
[3] http://www.redhat.com/archives/libvir-list/2011-August/msg00248.html
[4] http://www.redhat.com/archives/libvir-list/2011-August/msg00302.html
[5] http://www.redhat.com/archives/libvir-list/2011-August/msg00282.html
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat(a)linux.vnet.ibm.com>
---
docs/formatcaps.html.in | 19 +++++++---
docs/schemas/capability.rng | 18 +++++++++
include/libvirt/virterror.h | 1 +
libvirt.spec.in | 2 +
src/conf/capabilities.c | 27 +++++++++++++-
src/conf/capabilities.h | 4 ++
src/libvirt_private.syms | 1 +
src/qemu/qemu_capabilities.c | 18 +++++++++
src/util/util.c | 82 ++++++++++++++++++++++++++++++++++++++++++
src/util/util.h | 14 +++++++
src/util/virterror.c | 3 ++
11 files changed, 183 insertions(+), 6 deletions(-)
diff --git a/docs/formatcaps.html.in b/docs/formatcaps.html.in
index a4297ce..ce6f9a6 100644
--- a/docs/formatcaps.html.in
+++ b/docs/formatcaps.html.in
@@ -28,6 +28,10 @@ BIOS you will see</p>
<feature name='xtpr'/>
...
</cpu>
+ <power_management>
+ <S3/>
+ <S4/>
+ </power_management>
</host></span>
<!-- xen-3.0-x86_64 -->
@@ -61,11 +65,16 @@ BIOS you will see</p>
...
</capabilities></pre>
<p>The first block (in red) indicates the host hardware capabilities, currently
-it is limited to the CPU properties but other information may be available,
-it shows the CPU architecture, topology, model name, and additional features
-which are not included in the model but the CPU provides them. Features of the
-chip are shown within the feature block (the block is similar to what you will
-find in a Xen fully virtualized domain description).</p>
+it is limited to the CPU properties and the power management features of
+the host platform, but other information may be available, it shows the CPU architecture,
+topology, model name, and additional features which are not included in the model but the
+CPU provides them. Features of the chip are shown within the feature block (the block is
+similar to what you will find in a Xen fully virtualized domain description). Further,
+the power management features supported by the host are shown, such as Suspend-to-RAM (S3)
+and Suspend-to-Disk (S4). In case the query for power management features succeeded but the
+host does not support any such feature, then an empty <power_management/>
+tag will be shown. Otherwise, if the query itself failed, no such tag will
+be displayed (i.e., there will not be any power_management block or empty tag in the XML).</p>
<p>The second block (in blue) indicates the paravirtualization support of the
Xen support, you will see the os_type of xen to indicate a paravirtual
kernel, then architecture information and potential features.</p>
diff --git a/docs/schemas/capability.rng b/docs/schemas/capability.rng
index 99b4a9a..8238a37 100644
--- a/docs/schemas/capability.rng
+++ b/docs/schemas/capability.rng
@@ -35,6 +35,9 @@
</optional>
</element>
<optional>
+ <ref name='power_management'/>
+ </optional>
+ <optional>
<ref name='migration'/>
</optional>
<optional>
@@ -105,6 +108,21 @@
</zeroOrMore>
</define>
+ <define name='power_management'>
+ <element name='power_management'>
+ <optional>
+ <element name='S3'>
+ <empty/>
+ </element>
+ </optional>
+ <optional>
+ <element name='S4'>
+ <empty/>
+ </element>
+ </optional>
+ </element>
+ </define>
+
<define name='migration'>
<element name='migration_features'>
<optional>
diff --git a/include/libvirt/virterror.h b/include/libvirt/virterror.h
index 9cac437..a831c73 100644
--- a/include/libvirt/virterror.h
+++ b/include/libvirt/virterror.h
@@ -82,6 +82,7 @@ typedef enum {
VIR_FROM_EVENT = 40, /* Error from event loop impl */
VIR_FROM_LIBXL = 41, /* Error from libxenlight driver */
VIR_FROM_LOCKING = 42, /* Error from lock manager */
+ VIR_FROM_CAPABILITIES = 43, /* Error from capabilities */
} virErrorDomain;
diff --git a/libvirt.spec.in b/libvirt.spec.in
index e2b7f65..3193de3 100644
--- a/libvirt.spec.in
+++ b/libvirt.spec.in
@@ -482,6 +482,8 @@ Requires: nc
Requires: gettext
# Needed by virt-pki-validate script.
Requires: gnutls-utils
+# Needed for probing the power management features of the host.
+Requires: pm-utils
%if %{with_sasl}
Requires: cyrus-sasl
# Not technically required, but makes 'out-of-box' config
diff --git a/src/conf/capabilities.c b/src/conf/capabilities.c
index 2f243ae..e8ab599 100644
--- a/src/conf/capabilities.c
+++ b/src/conf/capabilities.c
@@ -29,6 +29,13 @@
#include "util.h"
#include "uuid.h"
#include "cpu_conf.h"
+#include "virterror_internal.h"
+
+
+#define VIR_FROM_THIS VIR_FROM_CAPABILITIES
+
+VIR_ENUM_IMPL(virHostPMCapability, VIR_HOST_PM_LAST,
+ "S3", "S4")
/**
* virCapabilitiesNew:
@@ -201,7 +208,6 @@ virCapabilitiesAddHostFeature(virCapsPtr caps,
return 0;
}
-
/**
* virCapabilitiesAddHostMigrateTransport:
* @caps: capabilities to extend
@@ -686,6 +692,25 @@ virCapabilitiesFormatXML(virCapsPtr caps)
virBufferAddLit(&xml, " </cpu>\n");
+ if(caps->host.powerMgmt_valid) {
+ /* The PM query was successful. */
+ if(caps->host.powerMgmt) {
+ /* The host supports some PM features. */
+ unsigned int pm = caps->host.powerMgmt;
+ virBufferAddLit(&xml, " <power_management>\n");
+ while(pm) {
+ int bit = ffs(pm) - 1;
+ virBufferAsprintf(&xml, " <%s/>\n",
+ virHostPMCapabilityTypeToString(bit));
+ pm &= ~(1U << bit);
+ }
+ virBufferAddLit(&xml, " </power_management>\n");
+ } else {
+ /* The host does not support any PM feature. */
+ virBufferAddLit(&xml, " <power_management/>\n");
+ }
+ }
+
if (caps->host.offlineMigrate) {
virBufferAddLit(&xml, " <migration_features>\n");
if (caps->host.liveMigrate)
diff --git a/src/conf/capabilities.h b/src/conf/capabilities.h
index e2fa1d6..c51f220 100644
--- a/src/conf/capabilities.h
+++ b/src/conf/capabilities.h
@@ -105,6 +105,10 @@ struct _virCapsHost {
size_t nfeatures;
size_t nfeatures_max;
char **features;
+ bool powerMgmt_valid;
+ unsigned int powerMgmt; /* Bitmask of the PM capabilities.
+ * See enum virHostPMCapability.
+ */
int offlineMigrate;
int liveMigrate;
size_t nmigrateTrans;
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index 830222b..40fc4d0 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -1058,6 +1058,7 @@ virFormatMacAddr;
virGenerateMacAddr;
virGetGroupID;
virGetHostname;
+virGetPMCapabilities;
virGetUserDirectory;
virGetUserID;
virGetUserName;
diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index 3f36212..581b80f 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -794,6 +794,8 @@ virCapsPtr qemuCapsInit(virCapsPtr old_caps)
struct utsname utsname;
virCapsPtr caps;
int i;
+ int status = -1;
+ unsigned int pmbitmask = 0;
char *xenner = NULL;
/* Really, this never fails - look at the man-page. */
@@ -824,6 +826,22 @@ virCapsPtr qemuCapsInit(virCapsPtr old_caps)
old_caps->host.cpu = NULL;
}
+ /* Add the power management features of the host */
+
+ status = virGetPMCapabilities(&pmbitmask);
+ if(status < 0) {
+ caps->host.powerMgmt_valid = false;
+ VIR_WARN("Failed to get host power management capabilities");
+ } else {
+ /* The PM query succeeded. */
+ caps->host.powerMgmt_valid = true;
+
+ /* The power management features supported by the host are
+ * represented as a bitmask by 'pmbitmask'.
+ */
+ caps->host.powerMgmt = pmbitmask;
+ }
+
virCapabilitiesAddHostMigrateTransport(caps,
"tcp");
diff --git a/src/util/util.c b/src/util/util.c
index 03a9e1a..b1a6434 100644
--- a/src/util/util.c
+++ b/src/util/util.c
@@ -2641,3 +2641,85 @@ or other application using the libvirt API.\n\
return 0;
}
+
+/**
+ * Get the Power Management Capabilities of the host system.
+ * The script 'pm-is-supported' (from the pm-utils package) is run
+ * to find out all the power management features supported by the host,
+ * such as Suspend-to-RAM (S3) and Suspend-to-Disk (S4).
+ *
+ * @bitmask: Pointer to the bitmask that must be set appropriately to
+ * indicate all the supported host power management features.
+ * This will be set to zero if the host does not support any
+ * power management feature.
+ *
+ * Return values:
+ * 0 if the query was successful.
+ * -1 on error (e.g., 'pm-is-supported' is not found).
+ */
+int
+virGetPMCapabilities(unsigned int * bitmask)
+{
+
+ char *path = NULL;
+ int status = -1;
+ int ret = -1;
+ virCommandPtr cmd;
+
+ *bitmask = 0;
+ if((path = virFindFileInPath("pm-is-supported")) == NULL) {
+ virUtilError(VIR_ERR_INTERNAL_ERROR,
+ "%s", _("Failed to get the path of pm-is-supported"));
+ return -1;
+ }
+
+ /* Check support for Suspend-to-RAM (S3) */
+ cmd = virCommandNew(path);
+ virCommandAddArg(cmd, "--suspend");
+ if(virCommandRun(cmd, &status) < 0) {
+ virUtilError(VIR_ERR_INTERNAL_ERROR,
+ "%s", _("Failed to run command "
+ "'pm-is-supported --suspend'"));
+ virCommandFree(cmd);
+ ret = -1;
+ goto cleanup;
+ } else {
+ ret = 0;
+
+ /* Check return code of command == 0 for success
+ * (i.e., the PM capability is supported)
+ */
+ if(status == 0)
+ *bitmask |= 1U << VIR_HOST_PM_S3;
+
+ virCommandFree(cmd);
+ }
+
+ /* Check support for Suspend-to-Disk (S4) */
+ cmd = virCommandNew(path);
+ virCommandAddArg(cmd, "--hibernate");
+ if(virCommandRun(cmd, &status) < 0) {
+ virUtilError(VIR_ERR_INTERNAL_ERROR,
+ "%s", _("Failed to run command "
+ "'pm-is-supported --hibernate'"));
+
+ virCommandFree(cmd);
+ ret = -1;
+ goto cleanup;
+ } else {
+ ret = 0;
+
+ /* Check return code of command == 0 for success
+ * (i.e., the PM capability is supported)
+ */
+ if(status == 0)
+ *bitmask |= 1U << VIR_HOST_PM_S4;
+
+ virCommandFree(cmd);
+ }
+
+cleanup:
+ VIR_FREE(path);
+ return ret;
+}
+
diff --git a/src/util/util.h b/src/util/util.h
index af8b15d..24a87ff 100644
--- a/src/util/util.h
+++ b/src/util/util.h
@@ -272,4 +272,18 @@ bool virIsDevMapperDevice(const char *devname) ATTRIBUTE_NONNULL(1);
int virEmitXMLWarning(int fd,
const char *name,
const char *cmd) ATTRIBUTE_NONNULL(2) ATTRIBUTE_NONNULL(3);
+
+/* Power Management Capabilities of the host system */
+
+enum virHostPMCapability {
+ VIR_HOST_PM_S3, /* Suspend-to-RAM */
+ VIR_HOST_PM_S4, /* Suspend-to-Disk */
+
+ VIR_HOST_PM_LAST
+};
+
+VIR_ENUM_DECL(virHostPMCapability)
+
+int virGetPMCapabilities(unsigned int *);
+
#endif /* __VIR_UTIL_H__ */
diff --git a/src/util/virterror.c b/src/util/virterror.c
index 9a27feb..e07de61 100644
--- a/src/util/virterror.c
+++ b/src/util/virterror.c
@@ -172,6 +172,9 @@ static const char *virErrorDomainName(virErrorDomain domain) {
case VIR_FROM_LOCKING:
dom = "Locking ";
break;
+ case VIR_FROM_CAPABILITIES:
+ dom = "Capabilities ";
+ break;
}
return(dom);
}
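For readers following the formatter hunk: it walks the bitmask with ffs(), clearing the lowest set bit each round. The same loop in Python, assuming the bit positions of the virHostPMCapability enum (S3 = bit 0, S4 = bit 1):

```python
PM_NAMES = ["S3", "S4"]  # index == bit position, as in virHostPMCapability

def pm_bitmask_to_tags(pm):
    """Emit one tag per set bit, lowest bit first, exactly like the
    ffs()-based loop in virCapabilitiesFormatXML."""
    tags = []
    while pm:
        bit = (pm & -pm).bit_length() - 1   # equivalent of ffs(pm) - 1
        tags.append(PM_NAMES[bit])
        pm &= ~(1 << bit)                   # clear the bit just handled
    return tags
```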
[libvirt] Allow to migrate to same host?
by Osier Yang
The request is from a PhD student in France. He is developing a
distributed system based on libvirt and, before deploying the code to real
instances, he wants to test it on his own box first.
Libvirt checks whether it is migrating to the same host by comparing hostnames.
I'm wondering if we can add some flag to switch that check on/off
(probably a new property in qemu.conf?), so that one can test migration
on the same host for development purposes.
Thoughts?
Regards
Osier
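The check under discussion is essentially a hostname comparison; the proposed switch could look like this sketch (the allow_same_host knob is hypothetical, not an existing libvirt option):

```python
def migration_allowed(src_hostname, dst_hostname, allow_same_host=False):
    """Reject same-host migration unless the (hypothetical)
    allow_same_host development switch is set."""
    if src_hostname == dst_hostname and not allow_same_host:
        return False
    return True
```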
[libvirt] [test-API][PATCH 1/2] Declare the hypervisor connection variable as global
by Guannan Ren
This solves the problem where a failure of open() leaves a null conn
variable: when close() is called later, it reports that the ConnectAPI
object has no 'conn' attribute.
Also remove a duplicated close() function.
---
lib/connectAPI.py | 19 ++++++-------------
1 files changed, 6 insertions(+), 13 deletions(-)
diff --git a/lib/connectAPI.py b/lib/connectAPI.py
index 5d5b94f..702a088 100644
--- a/lib/connectAPI.py
+++ b/lib/connectAPI.py
@@ -40,11 +40,11 @@ import exception
class ConnectAPI(object):
def __init__(self):
- pass
+ self.conn = None
def open(self, uri):
try:
- self.conn = libvirt.open(uri)
+ conn = libvirt.open(uri)
return self.conn
except libvirt.libvirtError, e:
message = e.get_error_message()
@@ -53,7 +53,7 @@ class ConnectAPI(object):
def open_read_only(self, uri):
try:
- self.conn = libvirt.openReadOnly(uri)
+ conn = libvirt.openReadOnly(uri)
return self.conn
except libvirt.libvirtError, e:
message = e.get_error_message()
@@ -62,21 +62,13 @@ class ConnectAPI(object):
def openAuth(self, uri, auth, flags = 0):
try:
- self.conn = libvirt.openAuth(uri, auth, flags)
+ conn = libvirt.openAuth(uri, auth, flags)
return self.conn
except libvirt.libvirtError, e:
message = e.get_error_message()
code = e.get_error_code()
raise exception.LibvirtAPI(message, code)
- def close(self):
- try:
- self.conn.close()
- except libvirt.libvirtError, e:
- message = e.get_error_message()
- code = e.get_error_code()
- raise exception.LibvirtAPI(message, code)
-
def get_caps(self):
try:
caps = self.conn.getCapabilities()
@@ -398,7 +390,8 @@ class ConnectAPI(object):
def close(self):
try:
- return self.conn.close()
+ if self.conn:
+ return self.conn.close()
except libvirt.libvirtError, e:
message = e.get_error_message()
code = e.get_error_code()
--
1.7.1
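The essence of the patch: initialize the handle in __init__ so close() is safe to call even when open() never succeeded. A standalone sketch of that guard (not the test-API code itself):

```python
class Conn:
    def __init__(self):
        self.conn = None   # always defined, so close() never hits AttributeError

    def open(self, ok=True):
        if not ok:
            raise RuntimeError("open failed")   # self.conn stays None
        self.conn = object()
        return self.conn

    def close(self):
        if self.conn:              # guard: no-op when open() never succeeded
            self.conn = None
            return 0
```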
[libvirt] [test-API][PATCH] Add new testcase for libvirtd connection with tcp socket and with SASL authentication
by Guannan Ren
---
repos/remoteAccess/tcp_setup.py | 242 +++++++++++++++++++++++++++++++++++++++
1 files changed, 242 insertions(+), 0 deletions(-)
create mode 100644 repos/remoteAccess/tcp_setup.py
diff --git a/repos/remoteAccess/tcp_setup.py b/repos/remoteAccess/tcp_setup.py
new file mode 100644
index 0000000..8f88810
--- /dev/null
+++ b/repos/remoteAccess/tcp_setup.py
@@ -0,0 +1,242 @@
+#!/usr/bin/env python
+""" Configure and test libvirt tcp connection
+ remoteAccess:tcp_setup
+ target_machine
+ xx.xx.xx.xx
+ username
+ root
+ password
+ xxxxxx
+ listen_tcp
+ enable|disable
+ auth_tcp
+ none|sasl
+"""
+
+__author__ = 'Guannan Ren: gren(a)redhat.com'
+__date__ = 'Sun Aug 7, 2011'
+__version__ = '0.1.0'
+__credits__ = 'Copyright (C) 2011 Red Hat, Inc.'
+__all__ = ['tcp_setup', 'tcp_libvirtd_set', 'hypervisor_connecting_test']
+
+import os
+import re
+import sys
+
+def append_path(path):
+ """Append root path of package"""
+ if path in sys.path:
+ pass
+ else:
+ sys.path.append(path)
+
+pwd = os.getcwd()
+result = re.search('(.*)libvirt-test-API', pwd)
+append_path(result.group(0))
+
+from lib import connectAPI
+from utils.Python import utils
+from exception import LibvirtAPI
+
+SASLPASSWD2 = "/usr/sbin/saslpasswd2"
+LIBVIRTD_CONF = "/etc/libvirt/libvirtd.conf"
+SYSCONFIG_LIBVIRTD = "/etc/sysconfig/libvirtd"
+
+def check_params(params):
+ """check out the arguments required for this testcase"""
+ logger = params['logger']
+ keys = ['target_machine', 'username', 'password', 'listen_tcp', 'auth_tcp']
+ for key in keys:
+ if key not in params:
+ logger.error("Argument %s is required" % key)
+ return 1
+ return 0
+
+def sasl_user_add(target_machine, username, password, util, logger):
+ """ execute saslpasswd2 to add sasl user """
+ logger.info("add sasl user on server side")
+ saslpasswd2_add = "echo %s | %s -a libvirt %s" % (password, SASLPASSWD2, username)
+ ret = util.remote_exec_pexpect(target_machine, username,
+ password, saslpasswd2_add)
+ if ret:
+ logger.error("failed to add sasl user")
+ return 1
+
+ return 0
+
+def tcp_libvirtd_set(target_machine, username, password,
+ listen_tcp, auth_tcp, util, logger):
+ """ configure libvirtd.conf on libvirt server """
+ logger.info("setting libvirtd.conf on libvirt server")
+ # open libvirtd --listen option
+ listen_open_cmd = "echo 'LIBVIRTD_ARGS=\"--listen\"' >> %s" % SYSCONFIG_LIBVIRTD
+ ret = util.remote_exec_pexpect(target_machine, username,
+ password, listen_open_cmd)
+ if ret:
+ logger.error("failed to uncomment --listen in %s" % SYSCONFIG_LIBVIRTD)
+ return 1
+
+ # set listen_tls
+ logger.info("set listen_tls to 0 in %s" % LIBVIRTD_CONF)
+ listen_tls_disable = "echo \"listen_tls = 0\" >> %s" % LIBVIRTD_CONF
+ ret = util.remote_exec_pexpect(target_machine, username,
+ password, listen_tls_disable)
+ if ret:
+ logger.error("failed to set listen_tls to 0 in %s" % LIBVIRTD_CONF)
+ return 1
+
+ # set listen_tcp
+ if listen_tcp == 'enable':
+ logger.info("enable listen_tcp = 1 in %s" % LIBVIRTD_CONF)
+ listen_tcp_set = "echo 'listen_tcp = 1' >> %s" % LIBVIRTD_CONF
+ ret = util.remote_exec_pexpect(target_machine, username,
+ password, listen_tcp_set)
+ if ret:
+ logger.error("failed to set listen_tcp in %s" % LIBVIRTD_CONF)
+ return 1
+
+ # set auth_tcp
+ logger.info("set auth_tcp to \"%s\" in %s" % (auth_tcp, LIBVIRTD_CONF))
+ auth_tcp_set = "echo 'auth_tcp = \"%s\"' >> %s" % (auth_tcp, LIBVIRTD_CONF)
+ ret = util.remote_exec_pexpect(target_machine, username,
+ password, auth_tcp_set)
+ if ret:
+ logger.error("failed to set auth_tcp in %s" % LIBVIRTD_CONF)
+ return 1
+
+ # restart remote libvirtd service
+ libvirtd_restart_cmd = "service libvirtd restart"
+ logger.info("libvirtd restart")
+ ret = util.remote_exec_pexpect(target_machine, username,
+ password, libvirtd_restart_cmd)
+ if ret:
+ logger.error("failed to restart libvirtd service")
+ return 1
+
+ logger.info("done with libvirtd configuration")
+ return 0
+
+def request_credentials(credentials, user_data):
+ for credential in credentials:
+ if credential[0] == connectAPI.VIR_CRED_AUTHNAME:
+ credential[4] = user_data[0]
+
+ if len(credential[4]) == 0:
+ credential[4] = credential[3]
+ elif credential[0] == connectAPI.VIR_CRED_PASSPHRASE:
+ credential[4] = user_data[1]
+ else:
+ return -1
+
+ return 0
+
+def hypervisor_connecting_test(uri, auth_tcp, username,
+ password, logger, expected_result):
+ """ connect remote server """
+ ret = 1
+ try:
+ conn = connectAPI.ConnectAPI()
+ if auth_tcp == 'none':
+ virconn = conn.open(uri)
+ elif auth_tcp == 'sasl':
+ user_data = [username, password]
+ auth = [[connectAPI.VIR_CRED_AUTHNAME, connectAPI.VIR_CRED_PASSPHRASE], request_credentials, user_data]
+ virconn = conn.openAuth(uri, auth, 0)
+
+ ret = 0
+ conn.close()
+ except LibvirtAPI, e:
+ logger.error("API error message: %s, error code is %s" % \
+ (e.response()['message'], e.response()['code']))
+
+ ret = 1
+ conn.close()
+
+ if ret == 0 and expected_result == 'success':
+ logger.info("tcp connection succeeded")
+ return 0
+ elif ret == 1 and expected_result == 'fail':
+ logger.info("tcp connection failed, but that is expected")
+ return 0
+ elif ret == 0 and expected_result == 'fail':
+ logger.error("tcp connection succeeded, but we expected it to fail")
+ return 1
+ elif ret == 1 and expected_result == 'success':
+ logger.error("tcp connection failed")
+ return 1
+
+ return 0
+
+def tcp_setup(params):
+ """ configure libvirt and connect to it through TCP socket"""
+ logger = params['logger']
+ params_check_result = check_params(params)
+ if params_check_result:
+ return 1
+
+ target_machine = params['target_machine']
+ username = params['username']
+ password = params['password']
+ listen_tcp = params['listen_tcp']
+ auth_tcp = params['auth_tcp']
+
+ uri = "qemu+tcp://%s/system" % target_machine
+
+ util = utils.Utils()
+
+ logger.info("the hostname of server is %s" % target_machine)
+ logger.info("the value of listen_tcp is %s" % listen_tcp)
+ logger.info("the value of auth_tcp is %s" % auth_tcp)
+
+ if not util.do_ping(target_machine, 0):
+ logger.error("failed to ping host %s" % target_machine)
+ return 1
+
+ if auth_tcp == 'sasl':
+ if sasl_user_add(target_machine, username, password, util, logger):
+ return 1
+
+ if tcp_libvirtd_set(target_machine, username, password,
+ listen_tcp, auth_tcp, util, logger):
+ return 1
+
+ if listen_tcp == 'disable':
+ if hypervisor_connecting_test(uri, auth_tcp, username,
+ password, logger, 'fail'):
+ return 1
+ elif listen_tcp == 'enable':
+ if hypervisor_connecting_test(uri, auth_tcp, username,
+ password, logger, 'success'):
+ return 1
+
+ return 0
+
+def tcp_setup_clean(params):
+ """cleanup testing environment"""
+
+ logger = params['logger']
+ target_machine = params['target_machine']
+ username = params['username']
+ password = params['password']
+ listen_tcp = params['listen_tcp']
+ auth_tcp = params['auth_tcp']
+
+ util = utils.Utils()
+
+ if auth_tcp == 'sasl':
+ saslpasswd2_delete = "%s -a libvirt -d %s" % (SASLPASSWD2, username)
+ ret = util.remote_exec_pexpect(target_machine, username,
+ password, saslpasswd2_delete)
+ if ret:
+ logger.error("failed to delete sasl user")
+ libvirtd_conf_restore = "sed -i -n '/^[ #]/p' %s" % LIBVIRTD_CONF
+ ret = util.remote_exec_pexpect(target_machine, username,
+ password, libvirtd_conf_restore)
+ if ret:
+ logger.error("failed to restore %s" % LIBVIRTD_CONF)
+
+ sysconfig_libvirtd_restore = "sed -i -n '/^[ #]/p' %s" % SYSCONFIG_LIBVIRTD
+ ret = util.remote_exec_pexpect(target_machine, username,
+ password, sysconfig_libvirtd_restore)
+ if ret:
+ logger.error("failed to restore %s" % SYSCONFIG_LIBVIRTD)
--
1.7.1
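The pass/fail matrix at the end of hypervisor_connecting_test can be read as a pure function; a sketch with the same semantics (connection outcome vs. expectation):

```python
def evaluate(ret, expected_result):
    """ret: 0 = connection succeeded, 1 = connection failed.
    Return 0 when the outcome matches the expectation, 1 otherwise —
    so an expected failure (e.g. listen_tcp disabled) still passes."""
    succeeded = (ret == 0)
    expected_success = (expected_result == 'success')
    return 0 if succeeded == expected_success else 1
```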
[libvirt] [test-API][PATCH] Add testcases for testing permission control and sasl authentication of unix socket
by Guannan Ren
add new testcases repos/remoteAccess/unix_perm_sasl.py
---
repos/remoteAccess/unix_perm_sasl.py | 234 ++++++++++++++++++++++++++++++++++
1 files changed, 234 insertions(+), 0 deletions(-)
create mode 100644 repos/remoteAccess/unix_perm_sasl.py
diff --git a/repos/remoteAccess/unix_perm_sasl.py b/repos/remoteAccess/unix_perm_sasl.py
new file mode 100644
index 0000000..9bb2600
--- /dev/null
+++ b/repos/remoteAccess/unix_perm_sasl.py
@@ -0,0 +1,234 @@
+#!/usr/bin/env python
+""" testing for permission and authentication of unix domain socket
+ remoteAccess:unix_perm_sasl
+ auth_unix_ro
+ none|sasl
+ auth_unix_rw
+ none|sasl
+ unix_sock_group(optional)
+ libvirt
+"""
+
+__author__ = 'Guannan Ren: gren(a)redhat.com'
+__date__ = 'Fri Aug 5, 2011'
+__version__ = '0.1.0'
+__credits__ = 'Copyright (C) 2011 Red Hat, Inc.'
+__all__ = ['unix_perm_sasl', 'group_sasl_set',
+ 'libvirt_configure', 'hypervisor_connecting_test']
+
+import os
+import re
+import sys
+import commands
+
+from pwd import getpwnam
+
+def append_path(path):
+ """Append root path of package"""
+ if path in sys.path:
+ pass
+ else:
+ sys.path.append(path)
+
+pwd = os.getcwd()
+result = re.search('(.*)libvirt-test-API', pwd)
+append_path(result.group(0))
+
+from lib import connectAPI
+from exception import LibvirtAPI
+
+TESTING_USER = 'testapi'
+LIBVIRTD_CONF = "/etc/libvirt/libvirtd.conf"
+SASLPASSWD2 = "/usr/sbin/saslpasswd2"
+
+def check_params(params):
+ """check out the arguments required for the testcase"""
+ logger = params['logger']
+ keys = ['auth_unix_ro', 'auth_unix_rw']
+ for key in keys:
+ if key not in params:
+ logger.error("Argument %s is required" % key)
+ return 1
+ return 0
+
+def get_output(command, flag, logger):
+ """execute shell command
+ """
+ status, ret = commands.getstatusoutput(command)
+ if not flag and status:
+ logger.error("executing "+ "\"" + command + "\"" + " failed")
+ logger.error(ret)
+ return status, ret
+
+def libvirt_configure(unix_sock_group, auth_unix_ro, auth_unix_rw, logger):
+ """configure libvirt.conf """
+ logger.info("configuring libvirt.conf")
+
+ # uncomment unix_sock_group
+ unix_group_add = "echo 'unix_sock_group = \"%s\"' >> %s" % \
+ (unix_sock_group, LIBVIRTD_CONF)
+ status, output = get_output(unix_group_add, 0, logger)
+ if status:
+ logger.error("setting unix_sock_group to %s failed" % unix_sock_group)
+ return 1
+
+ auth_unix_ro_add = "echo 'auth_unix_ro = \"%s\"' >> %s" % \
+ (auth_unix_ro, LIBVIRTD_CONF)
+ status, output = get_output(auth_unix_ro_add, 0, logger)
+ if status:
+ logger.error("setting auth_unix_ro to %s failed" % auth_unix_ro)
+ return 1
+
+ auth_unix_rw_add = "echo 'auth_unix_rw = \"%s\"' >> %s" % \
+ (auth_unix_rw, LIBVIRTD_CONF)
+ status, output = get_output(auth_unix_rw_add, 0, logger)
+ if status:
+ logger.error("setting auth_unix_rw to %s failed" % auth_unix_rw)
+ return 1
+
+ return 0
+
+def group_sasl_set(unix_sock_group, auth_unix_ro, auth_unix_rw, logger):
+ """add libvirt group and set sasl authentication if needed"""
+ logger.info("add unix socket group and sasl authentication if needed")
+
+ # add unix socket group
+ libvirt_group_add = "groupadd %s" % unix_sock_group
+ status, output = get_output(libvirt_group_add, 0, logger)
+ if status:
+ logger.error("failed to add %s group" % unix_sock_group)
+ return 1
+
+ # add "testapi" as the testing user
+ libvirt_user_add = "useradd -g %s %s" % (unix_sock_group, TESTING_USER)
+ status, output = get_output(libvirt_user_add, 0, logger)
+ if status:
+ logger.error("failed to add %s user into group %s" % \
+ (TESTING_USER, unix_sock_group))
+ return 1
+
+ # add sasl user
+ if auth_unix_ro == 'sasl' or auth_unix_rw == 'sasl':
+ saslpasswd2_add = "echo %s | %s -a libvirt %s" % \
+ (TESTING_USER, SASLPASSWD2, TESTING_USER)
+ status, output = get_output(saslpasswd2_add, 0, logger)
+ if status:
+ logger.error("failed to set sasl user %s" % TESTING_USER)
+ return 1
+
+ return 0
+
+def request_credentials(credentials, user_data):
+ for credential in credentials:
+ if credential[0] == connectAPI.VIR_CRED_AUTHNAME:
+ credential[4] = user_data[0]
+
+ if len(credential[4]) == 0:
+ credential[4] = credential[3]
+ elif credential[0] == connectAPI.VIR_CRED_PASSPHRASE:
+ credential[4] = user_data[1]
+ else:
+ return -1
+
+ return 0
+
+def hypervisor_connecting_test(uri, auth_unix_ro, auth_unix_rw, logger):
+ """connect to hypervisor"""
+ logger.info("connect to hypervisor")
+ orginal_user = os.geteuid()
+ testing_user_id = getpwnam(TESTING_USER)[2]
+ logger.info("the testing_user id is %d" % testing_user_id)
+
+ logger.info("set euid to %d" % testing_user_id)
+ os.seteuid(testing_user_id)
+
+ try:
+ conn = connectAPI.ConnectAPI()
+ if auth_unix_ro == 'none':
+ virconn = conn.open_read_only(uri)
+ elif auth_unix_ro == 'sasl':
+ user_data = [TESTING_USER, TESTING_USER]
+ auth = [[connectAPI.VIR_CRED_AUTHNAME, \
+ connectAPI.VIR_CRED_PASSPHRASE],
+ request_credentials, user_data]
+ virconn = conn.openAuth(uri, auth, 0)
+
+ if auth_unix_rw == 'none':
+ virconn = conn.open(uri)
+ elif auth_unix_rw == 'sasl':
+ user_data = [TESTING_USER, TESTING_USER]
+ auth = [[connectAPI.VIR_CRED_AUTHNAME, \
+ connectAPI.VIR_CRED_PASSPHRASE],
+ request_credentials, user_data]
+ virconn = conn.openAuth(uri, auth, 0)
+ conn.close()
+ except LibvirtAPI, e:
+ logger.error("API error message: %s, error code is %s" % \
+ (e.response()['message'], e.response()['code']))
+ logger.info("set euid back to %d" % orginal_user)
+ os.seteuid(orginal_user)
+ conn.close()
+ return 1
+
+ logger.info("set euid back to %d" % orginal_user)
+ os.seteuid(orginal_user)
+ return 0
+
+def unix_perm_sasl(params):
+    """test unix socket group function and sasl authentication"""
+    logger = params['logger']
+    params_check_result = check_params(params)
+    if params_check_result:
+        return 1
+
+    auth_unix_ro = params['auth_unix_ro']
+    auth_unix_rw = params['auth_unix_rw']
+
+    unix_sock_group = 'libvirt'
+    if params.has_key('unix_sock_group'):
+        unix_sock_group = params['unix_sock_group']
+
+    uri = "qemu:///system"
+
+    if group_sasl_set(unix_sock_group, auth_unix_ro, auth_unix_rw, logger):
+        return 1
+
+    if libvirt_configure(unix_sock_group, auth_unix_ro, auth_unix_rw, logger):
+        return 1
+
+    if hypervisor_connecting_test(uri, auth_unix_ro, auth_unix_rw, logger):
+        return 1
+
+    return 0
+
+def unix_perm_sasl_clean(params):
+    """clean testing environment"""
+    logger = params['logger']
+
+    auth_unix_ro = params['auth_unix_ro']
+    auth_unix_rw = params['auth_unix_rw']
+
+    unix_sock_group = 'libvirt'
+    if params.has_key('unix_sock_group'):
+        unix_sock_group = params['unix_sock_group']
+
+    # delete "testapi" user
+    libvirt_user_del = "userdel %s" % TESTING_USER
+    status, output = get_output(libvirt_user_del, 0, logger)
+    if status:
+        logger.error("failed to delete user %s" % TESTING_USER)
+
+    # delete unix socket group
+    libvirt_group_del = "groupdel %s" % unix_sock_group
+    status, output = get_output(libvirt_group_del, 0, logger)
+    if status:
+        logger.error("failed to delete group %s" % unix_sock_group)
+
+    # delete sasl user
+    if auth_unix_ro == 'sasl' or auth_unix_rw == 'sasl':
+        saslpasswd2_delete = "%s -a libvirt -d %s" % (SASLPASSWD2, TESTING_USER)
+        status, output = get_output(saslpasswd2_delete, 0, logger)
+        if status:
+            logger.error("failed to delete sasl user %s" % TESTING_USER)
+
--
1.7.1
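The save-and-restore euid pattern used in hypervisor_connecting_test above must run on both the success and the error path; a context manager makes that guarantee automatic. A hedged sketch (not part of the patch — the `setter`/`getter` parameters are an illustration-only addition so the pattern can be exercised without root privileges, since real `os.seteuid` calls require them):

```python
import os
from contextlib import contextmanager

@contextmanager
def temporary_euid(uid, setter=os.seteuid, getter=os.geteuid):
    """Switch the effective uid for the duration of a 'with' block and
    restore the saved uid on both the success and the error path."""
    saved = getter()
    setter(uid)
    try:
        yield
    finally:
        setter(saved)
```

With this, the body of hypervisor_connecting_test would no longer need to duplicate the `os.seteuid(original_user)` call in both the except branch and the normal return path.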
[libvirt] [Libvirt] [PATCH v2] Fix bug #611823 prohibit pools with duplicate storage
by Lei Li
Ensure that each storage pool is defined and created with a unique target directory, to avoid creating inconsistent views of the same volume pool.
Signed-off-by: Lei Li <lilei(a)linux.vnet.ibm.com>
---
src/conf/storage_conf.c | 36 ++++++++++++++++++++++++++++++++++++
src/conf/storage_conf.h | 4 ++++
src/libvirt_private.syms | 2 ++
src/storage/storage_driver.c | 6 ++++++
4 files changed, 48 insertions(+), 0 deletions(-)
diff --git a/src/conf/storage_conf.c b/src/conf/storage_conf.c
index 995f9a6..9078f78 100644
--- a/src/conf/storage_conf.c
+++ b/src/conf/storage_conf.c
@@ -1317,6 +1317,21 @@ virStoragePoolObjFindByName(virStoragePoolObjListPtr pools,
return NULL;
}
+virStoragePoolObjPtr
+virStoragePoolObjFindByPath(virStoragePoolObjListPtr pools,
+ const char *path) {
+ unsigned int i;
+
+ for (i = 0 ; i < pools->count ; i++) {
+ virStoragePoolObjLock(pools->objs[i]);
+ if (STREQ(pools->objs[i]->def->target.path, path))
+ return pools->objs[i];
+ virStoragePoolObjUnlock(pools->objs[i]);
+ }
+
+ return NULL;
+}
+
void
virStoragePoolObjClearVols(virStoragePoolObjPtr pool)
{
@@ -1707,6 +1722,27 @@ cleanup:
return ret;
}
+int virStoragePoolTargetDuplicate(virStoragePoolObjListPtr pools,
+ virStoragePoolDefPtr def)
+{
+ int ret = 1;
+ virStoragePoolObjPtr pool = NULL;
+
+ /* Check the pool list if defined target path already exist */
+ pool = virStoragePoolObjFindByPath(pools, def->target.path);
+ if (pool) {
+ virStorageReportError(VIR_ERR_OPERATION_FAILED,
+ _("target path '%s' is already in use"),
+ pool->def->target.path);
+ ret = -1;
+ goto cleanup;
+ }
+
+cleanup:
+ if (pool)
+ virStoragePoolObjUnlock(pool);
+ return ret;
+}
void virStoragePoolObjLock(virStoragePoolObjPtr obj)
{
diff --git a/src/conf/storage_conf.h b/src/conf/storage_conf.h
index 271441a..454c43d 100644
--- a/src/conf/storage_conf.h
+++ b/src/conf/storage_conf.h
@@ -335,6 +335,8 @@ virStoragePoolObjPtr virStoragePoolObjFindByUUID(virStoragePoolObjListPtr pools,
const unsigned char *uuid);
virStoragePoolObjPtr virStoragePoolObjFindByName(virStoragePoolObjListPtr pools,
const char *name);
+virStoragePoolObjPtr virStoragePoolObjFindByPath(virStoragePoolObjListPtr pools,
+ const char *path);
virStorageVolDefPtr virStorageVolDefFindByKey(virStoragePoolObjPtr pool,
const char *key);
@@ -387,6 +389,8 @@ char *virStoragePoolSourceListFormat(virStoragePoolSourceListPtr def);
int virStoragePoolObjIsDuplicate(virStoragePoolObjListPtr pools,
virStoragePoolDefPtr def,
unsigned int check_active);
+int virStoragePoolTargetDuplicate(virStoragePoolObjListPtr pools,
+ virStoragePoolDefPtr def);
void virStoragePoolObjLock(virStoragePoolObjPtr obj);
void virStoragePoolObjUnlock(virStoragePoolObjPtr obj);
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index 830222b..37afaf2 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -937,7 +937,9 @@ virStoragePoolObjClearVols;
virStoragePoolObjDeleteDef;
virStoragePoolObjFindByName;
virStoragePoolObjFindByUUID;
+virStoragePoolObjFindByPath;
virStoragePoolObjIsDuplicate;
+virStoragePoolTargetDuplicate;
virStoragePoolObjListFree;
virStoragePoolObjLock;
virStoragePoolObjRemove;
diff --git a/src/storage/storage_driver.c b/src/storage/storage_driver.c
index 9c353e3..b757911 100644
--- a/src/storage/storage_driver.c
+++ b/src/storage/storage_driver.c
@@ -536,6 +536,9 @@ storagePoolCreate(virConnectPtr conn,
if (virStoragePoolObjIsDuplicate(&driver->pools, def, 1) < 0)
goto cleanup;
+ if (virStoragePoolTargetDuplicate(&driver->pools, def) < 0)
+ goto cleanup;
+
if ((backend = virStorageBackendForType(def->type)) == NULL)
goto cleanup;
@@ -589,6 +592,9 @@ storagePoolDefine(virConnectPtr conn,
if (virStoragePoolObjIsDuplicate(&driver->pools, def, 0) < 0)
goto cleanup;
+ if (virStoragePoolTargetDuplicate(&driver->pools, def) < 0)
+ goto cleanup;
+
if (virStorageBackendForType(def->type) == NULL)
goto cleanup;
--
1.7.1
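The duplicate-target check added by the patch above (virStoragePoolObjFindByPath driving virStoragePoolTargetDuplicate) can be sketched in Python; this is an analogue for illustration, not libvirt code — the dict shape standing in for virStoragePoolObj is an assumption:

```python
def find_duplicate_target(pools, target_path):
    """Return the name of the first existing pool whose target path equals
    target_path, or None when no pool uses that path.  Each pool is a dict
    with 'name' and 'target' keys (a stand-in for virStoragePoolObj)."""
    for pool in pools:
        if pool['target'] == target_path:
            return pool['name']
    return None
```

In the C code the matching pool is additionally returned locked, and the caller reports VIR_ERR_OPERATION_FAILED and rejects the define/create when a match is found.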
[libvirt] How to avoid failure of migration/restoring/starting if cdrom is ejected inside guest?
by Osier Yang
Hello list,
Migration fails when a changeable medium has been ejected inside the
guest: qemu closes the block driver backend once the medium is ejected,
but it does not give libvirt any way to learn about that fact. Libvirt
therefore tries to migrate the guest as if the medium still existed,
and the migration fails because qemu has already closed the block
driver backend.
This can also break domain restoring and starting (if the domain has a
managed save image, supposing the medium was ejected before the save or
managed save).
Ideally qemu would provide an event so that libvirt could learn
immediately that the medium has changed, but the bad news is that qemu
upstream will not write a patch for this in the short term.
As an alternative solution, they proposed a patch to expose the status
of changeable media via the monitor command "info block":
http://lists.gnu.org/archive/html/qemu-devel/2011-08/msg00408.html
The output of the improved "info block" looks like below:
(qemu) info block
disk0: removable=0 file=/home/armbru/work/images/test.qcow2
backing_file=test.img ro=0 drv=qcow2 encrypted=0
cd: removable=1 locked=0 ejected file=x.iso ro=1 drv=raw encrypted=0
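A minimal sketch of how the libvirt side might parse such "info block" lines, assuming the field layout shown in the example output above (the exact format is not guaranteed by qemu, so this parser is illustrative only):

```python
def parse_info_block_line(line):
    """Parse one 'info block' line such as
    'cd: removable=1 locked=0 ejected file=x.iso ro=1 drv=raw encrypted=0'
    into (device, fields).  key=value tokens become string entries; bare
    flag tokens like 'ejected' are recorded as True."""
    device, _, rest = line.partition(':')
    fields = {}
    for token in rest.split():
        if '=' in token:
            key, _, value = token.partition('=')
            fields[key] = value
        else:
            fields[token] = True   # bare flag, e.g. 'ejected'
    return device.strip(), fields
```

A migration-time check would then look for `removable == "1"` together with the bare `ejected` flag before deciding whether the medium can be assumed present.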
With that qemu improvement, libvirt could check the medium status at
the time of migration, but this does not kill all the bugs: during a
live migration, one can eject the medium inside the guest while the
migration is in progress, which creates a race and causes the same
failure.
Moreover, it does not solve the problem of restoring and starting with
a managed save image, since the medium status cannot be queried while
the guest is inactive.
So I am hesitant to use "info block" to resolve these problems; it does
not address the root problem thoroughly.
Or have I missed a better idea? Any thoughts are welcome, thanks.
By the way, it may still be worth reporting the cdrom tray status via
the improved "info block", although qemu may keep extending the command
to output more information, such as the case where a medium is inserted
but the tray is still open (reporting the tray status as "closed"
whenever "info block" outputs "inserted" would then be wrong, and the
code would need to change).
Patches on the qemu side to improve the tray handling:
http://lists.nongnu.org/archive/html/qemu-devel/2011-06/msg00381.html
Regards
Osier
[libvirt] [RFC v3] Export KVM Host Power Management capabilities
by Srivatsa S. Bhat
This patch exports the KVM host power management capabilities as XML so
that higher-level systems management software can make use of the
features available on the host.
The script "pm-is-supported" (from the pm-utils package) is run to
discover whether Suspend-to-RAM (S3) or Suspend-to-Disk (S4) is
supported by the host.
If either of them is supported, a new tag "<power_management>" is
introduced in the XML under the <host> tag.
Eg: When the host supports both S3 and S4, the XML looks like this:
<capabilities>
<host>
<uuid>dc699581-48a2-11cb-b8a8-9a0265a79bbe</uuid>
<cpu>
<arch>i686</arch>
<model>coreduo</model>
<vendor>Intel</vendor>
<topology sockets='1' cores='2' threads='1'/>
<feature name='xtpr'/>
<feature name='tm2'/>
<feature name='est'/>
<feature name='vmx'/>
<feature name='pbe'/>
<feature name='tm'/>
<feature name='ht'/>
<feature name='ss'/>
<feature name='acpi'/>
<feature name='ds'/>
</cpu>
<power_management> <<<=== New host power management features
<S3/>
<S4/>
</power_management>
<migration_features>
<live/>
<uri_transports>
<uri_transport>tcp</uri_transport>
</uri_transports>
</migration_features>
</host>
.
.
.
However, if the query for power management features succeeded but the
host does not support any such feature, the XML will contain an empty
<power_management/> tag. If the PM query itself failed, the XML will
not contain any "power_management" tag.
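The three-way formatting rule just described (query failed, query succeeded with no features, query succeeded with features) mirrors what the patch implements in virCapabilitiesFormatXML; a Python sketch of that rule, for illustration only:

```python
def format_power_management(query_succeeded, features):
    """Return the <power_management> XML fragment per the rules above:
    failed query -> no tag at all; successful query with no features ->
    an empty <power_management/> tag; otherwise one child element per
    supported feature (e.g. 'S3', 'S4')."""
    if not query_succeeded:
        return ""
    if not features:
        return "<power_management/>"
    children = "".join("<%s/>" % f for f in features)
    return "<power_management>%s</power_management>" % children
```

The empty-tag case lets a consumer distinguish "host has no PM features" from "libvirt could not determine the PM features".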
Open issues:
-----------
1. Design new APIs in libvirt to actually exploit the host power management
features instead of relying on external programs. This was discussed in
[4].
2. Decide whether to include the "pm-utils" package in the libvirt.spec
file, considering that the package name (pm-utils) may differ from one
Linux distribution to another.
Please let me know your comments and feedback.
Changelog:
---------
v1: The idea of exporting host power management capabilities through
libvirt was discussed in [1]. The choice to name the new tag as
"power_management" was discussed in [2].
v2: A working implementation was presented for review in [3].
References:
----------
[1] Exporting KVM host power saving capabilities through libvirt
http://thread.gmane.org/gmane.comp.emulators.libvirt/40886
[2] http://article.gmane.org/gmane.comp.emulators.libvirt/41688
[3] http://www.redhat.com/archives/libvir-list/2011-August/msg00238.html
[4] http://www.redhat.com/archives/libvir-list/2011-August/msg00248.html
Signed-off-by: Srivatsa S. Bhat <srivatsa.bhat(a)linux.vnet.ibm.com>
---
docs/formatcaps.html.in | 19 ++++++++++---
docs/schemas/capability.rng | 23 ++++++++++++++++
include/libvirt/virterror.h | 1 +
src/conf/capabilities.c | 50 ++++++++++++++++++++++++++++++++++
src/conf/capabilities.h | 8 ++++++
src/libvirt_private.syms | 2 +
src/qemu/qemu_capabilities.c | 27 +++++++++++++++++++
src/util/util.c | 61 ++++++++++++++++++++++++++++++++++++++++++
src/util/util.h | 14 ++++++++++
src/util/virterror.c | 3 ++
10 files changed, 203 insertions(+), 5 deletions(-)
diff --git a/docs/formatcaps.html.in b/docs/formatcaps.html.in
index a4297ce..ce6f9a6 100644
--- a/docs/formatcaps.html.in
+++ b/docs/formatcaps.html.in
@@ -28,6 +28,10 @@ BIOS you will see</p>
<feature name='xtpr'/>
...
</cpu>
+ <power_management>
+ <S3/>
+ <S4/>
+ </power_management>
</host></span>
<!-- xen-3.0-x86_64 -->
@@ -61,11 +65,16 @@ BIOS you will see</p>
...
</capabilities></pre>
<p>The first block (in red) indicates the host hardware capabilities, currently
-it is limited to the CPU properties but other information may be available,
-it shows the CPU architecture, topology, model name, and additional features
-which are not included in the model but the CPU provides them. Features of the
-chip are shown within the feature block (the block is similar to what you will
-find in a Xen fully virtualized domain description).</p>
+it is limited to the CPU properties and the power management features of
+the host platform, but other information may be available, it shows the CPU architecture,
+topology, model name, and additional features which are not included in the model but the
+CPU provides them. Features of the chip are shown within the feature block (the block is
+similar to what you will find in a Xen fully virtualized domain description). Further,
+the power management features supported by the host are shown, such as Suspend-to-RAM (S3)
+and Suspend-to-Disk (S4). In case the query for power management features succeeded but the
+host does not support any such feature, then an empty <power_management/>
+tag will be shown. Otherwise, if the query itself failed, no such tag will
+be displayed (i.e., there will not be any power_management block or empty tag in the XML).</p>
<p>The second block (in blue) indicates the paravirtualization support of the
Xen support, you will see the os_type of xen to indicate a paravirtual
kernel, then architecture information and potential features.</p>
diff --git a/docs/schemas/capability.rng b/docs/schemas/capability.rng
index 99b4a9a..930374c 100644
--- a/docs/schemas/capability.rng
+++ b/docs/schemas/capability.rng
@@ -35,6 +35,9 @@
</optional>
</element>
<optional>
+ <ref name='power_management'/>
+ </optional>
+ <optional>
<ref name='migration'/>
</optional>
<optional>
@@ -105,6 +108,26 @@
</zeroOrMore>
</define>
+ <define name='power_management'>
+ <choice>
+ <element name='power_management'>
+ <optional>
+ <element name='S3'>
+ <empty/>
+ </element>
+ </optional>
+ <optional>
+ <element name='S4'>
+ <empty/>
+ </element>
+ </optional>
+ </element>
+ <element name='power_management'>
+ <empty/>
+ </element>
+ </choice>
+ </define>
+
<define name='migration'>
<element name='migration_features'>
<optional>
diff --git a/include/libvirt/virterror.h b/include/libvirt/virterror.h
index 9cac437..a831c73 100644
--- a/include/libvirt/virterror.h
+++ b/include/libvirt/virterror.h
@@ -82,6 +82,7 @@ typedef enum {
VIR_FROM_EVENT = 40, /* Error from event loop impl */
VIR_FROM_LIBXL = 41, /* Error from libxenlight driver */
VIR_FROM_LOCKING = 42, /* Error from lock manager */
+ VIR_FROM_CAPABILITIES = 43, /* Error from capabilities */
} virErrorDomain;
diff --git a/src/conf/capabilities.c b/src/conf/capabilities.c
index 2f243ae..d39a3f9 100644
--- a/src/conf/capabilities.c
+++ b/src/conf/capabilities.c
@@ -29,6 +29,13 @@
#include "util.h"
#include "uuid.h"
#include "cpu_conf.h"
+#include "virterror_internal.h"
+
+
+#define VIR_FROM_THIS VIR_FROM_CAPABILITIES
+
+VIR_ENUM_IMPL(virHostPMCapability, VIR_HOST_PM_LAST,
+ "S3", "S4")
/**
* virCapabilitiesNew:
@@ -166,6 +173,8 @@ virCapabilitiesFree(virCapsPtr caps) {
virCapabilitiesFreeNUMAInfo(caps);
+ VIR_FREE(caps->host.powerMgmt);
+
for (i = 0 ; i < caps->host.nmigrateTrans ; i++)
VIR_FREE(caps->host.migrateTrans[i]);
VIR_FREE(caps->host.migrateTrans);
@@ -201,6 +210,28 @@ virCapabilitiesAddHostFeature(virCapsPtr caps,
return 0;
}
+/**
+ * virCapabilitiesAddHostPowerManagement:
+ * @caps: capabilities to extend
+ * @feature: the power management feature to be added
+ *
+ * Registers a new host power management feature, eg: 'S3' or 'S4'
+ */
+int
+virCapabilitiesAddHostPowerManagement(virCapsPtr caps,
+ int feature)
+{
+ if(VIR_RESIZE_N(caps->host.powerMgmt, caps->host.npowerMgmt_max,
+ caps->host.npowerMgmt, 1) < 0) {
+ virReportOOMError();
+ return -1;
+ }
+
+ caps->host.powerMgmt[caps->host.npowerMgmt] = feature;
+ caps->host.npowerMgmt++;
+
+ return 0;
+}
/**
* virCapabilitiesAddHostMigrateTransport:
@@ -686,6 +717,25 @@ virCapabilitiesFormatXML(virCapsPtr caps)
virBufferAddLit(&xml, " </cpu>\n");
+ if(caps->host.isPMQuerySuccess) {
+ if(caps->host.npowerMgmt) {
+ /* The PM Query was successful and the host supports
+ * some PM features.
+ */
+ virBufferAddLit(&xml, " <power_management>\n");
+ for (i = 0; i < caps->host.npowerMgmt ; i++) {
+ virBufferAsprintf(&xml, " <%s/>\n",
+ virHostPMCapabilityTypeToString(caps->host.powerMgmt[i]));
+ }
+ virBufferAddLit(&xml, " </power_management>\n");
+ } else {
+ /* The PM Query was successful but the host does not
+ * support any PM feature.
+ */
+ virBufferAddLit(&xml, " <power_management/>\n");
+ }
+ }
+
if (caps->host.offlineMigrate) {
virBufferAddLit(&xml, " <migration_features>\n");
if (caps->host.liveMigrate)
diff --git a/src/conf/capabilities.h b/src/conf/capabilities.h
index e2fa1d6..afbf732 100644
--- a/src/conf/capabilities.h
+++ b/src/conf/capabilities.h
@@ -105,6 +105,10 @@ struct _virCapsHost {
size_t nfeatures;
size_t nfeatures_max;
char **features;
+ bool isPMQuerySuccess;
+ size_t npowerMgmt;
+ size_t npowerMgmt_max;
+ int *powerMgmt; /* enum virHostPMCapability */
int offlineMigrate;
int liveMigrate;
size_t nmigrateTrans;
@@ -186,6 +190,10 @@ virCapabilitiesAddHostFeature(virCapsPtr caps,
const char *name);
extern int
+virCapabilitiesAddHostPowerManagement(virCapsPtr caps,
+ int feature);
+
+extern int
virCapabilitiesAddHostMigrateTransport(virCapsPtr caps,
const char *name);
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index 830222b..5754fdd 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -41,6 +41,7 @@ virCapabilitiesAddGuestFeature;
virCapabilitiesAddHostFeature;
virCapabilitiesAddHostMigrateTransport;
virCapabilitiesAddHostNUMACell;
+virCapabilitiesAddHostPowerManagement;
virCapabilitiesAllocMachines;
virCapabilitiesDefaultGuestArch;
virCapabilitiesDefaultGuestEmulator;
@@ -1025,6 +1026,7 @@ safezero;
virArgvToString;
virAsprintf;
virBuildPathInternal;
+virCheckPMCapability;
virDirCreate;
virEmitXMLWarning;
virEnumFromString;
diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index 3f36212..f3d0c0a 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -794,6 +794,7 @@ virCapsPtr qemuCapsInit(virCapsPtr old_caps)
struct utsname utsname;
virCapsPtr caps;
int i;
+ int status = -1;
char *xenner = NULL;
/* Really, this never fails - look at the man-page. */
@@ -824,6 +825,32 @@ virCapsPtr qemuCapsInit(virCapsPtr old_caps)
old_caps->host.cpu = NULL;
}
+ /* Add the power management features of the host */
+
+ /* Check for Suspend-to-RAM support (S3) */
+ status = virCheckPMCapability(VIR_HOST_PM_S3);
+ if(status < 0) {
+ caps->host.isPMQuerySuccess = false;
+ VIR_WARN("Failed to get host power management features");
+ } else {
+ /* The PM Query succeeded */
+ caps->host.isPMQuerySuccess = true;
+ if(status == 1) /* S3 is supported */
+ virCapabilitiesAddHostPowerManagement(caps, VIR_HOST_PM_S3);
+ }
+
+ /* Check for Suspend-to-Disk support (S4) */
+ status = virCheckPMCapability(VIR_HOST_PM_S4);
+ if(status < 0) {
+ caps->host.isPMQuerySuccess = false;
+ VIR_WARN("Failed to get host power management features");
+ } else {
+ /* The PM Query succeeded */
+ caps->host.isPMQuerySuccess = true;
+ if(status == 1) /* S4 is supported */
+ virCapabilitiesAddHostPowerManagement(caps, VIR_HOST_PM_S4);
+ }
+
virCapabilitiesAddHostMigrateTransport(caps,
"tcp");
diff --git a/src/util/util.c b/src/util/util.c
index 03a9e1a..489c4d6 100644
--- a/src/util/util.c
+++ b/src/util/util.c
@@ -2641,3 +2641,64 @@ or other application using the libvirt API.\n\
return 0;
}
+
+/**
+ * Check the Power Management Capabilities of the host system.
+ * The script 'pm-is-supported' (from the pm-utils package) is run
+ * to find out if the capability is supported by the host.
+ *
+ * @capability: capability to check for
+ * VIR_HOST_PM_S3: Check for Suspend-to-RAM support
+ * VIR_HOST_PM_S4: Check for Suspend-to-Disk support
+ *
+ * Return values:
+ * 1 if the capability is supported.
+ * 0 if the query was successful but the capability is
+ * not supported by the host.
+ * -1 on error like 'pm-is-supported' is not found.
+ */
+int
+virCheckPMCapability(int capability)
+{
+
+ char *path = NULL;
+ int status = -1;
+ int ret = -1;
+ virCommandPtr cmd;
+
+ if((path = virFindFileInPath("pm-is-supported")) == NULL) {
+ virUtilError(VIR_ERR_INTERNAL_ERROR,
+ "%s", _("Failed to get the path of pm-is-supported"));
+ return -1;
+ }
+
+ cmd = virCommandNew(path);
+ switch(capability) {
+ case VIR_HOST_PM_S3:
+ /* Check support for suspend (S3) */
+ virCommandAddArg(cmd, "--suspend");
+ break;
+
+ case VIR_HOST_PM_S4:
+ /* Check support for hibernation (S4) */
+ virCommandAddArg(cmd, "--hibernate");
+ break;
+
+ default:
+ goto cleanup;
+ }
+
+ if(virCommandRun(cmd, &status) < 0)
+ goto cleanup;
+
+ /* Check return code of command == 0 for success
+ * (i.e., the PM capability is supported)
+ */
+ ret = (status == 0) ? 1 : 0;
+
+cleanup:
+ virCommandFree(cmd);
+ VIR_FREE(path);
+ return ret;
+}
+
diff --git a/src/util/util.h b/src/util/util.h
index af8b15d..dfb8c1a 100644
--- a/src/util/util.h
+++ b/src/util/util.h
@@ -272,4 +272,18 @@ bool virIsDevMapperDevice(const char *devname) ATTRIBUTE_NONNULL(1);
int virEmitXMLWarning(int fd,
const char *name,
const char *cmd) ATTRIBUTE_NONNULL(2) ATTRIBUTE_NONNULL(3);
+
+/* Power Management Capabilities of the host system */
+
+enum virHostPMCapability {
+ VIR_HOST_PM_S3, /* Suspend-to-RAM */
+ VIR_HOST_PM_S4, /* Suspend-to-Disk */
+
+ VIR_HOST_PM_LAST
+};
+
+VIR_ENUM_DECL(virHostPMCapability)
+
+int virCheckPMCapability(int capability);
+
#endif /* __VIR_UTIL_H__ */
diff --git a/src/util/virterror.c b/src/util/virterror.c
index 9a27feb..26d6011 100644
--- a/src/util/virterror.c
+++ b/src/util/virterror.c
@@ -148,6 +148,9 @@ static const char *virErrorDomainName(virErrorDomain domain) {
case VIR_FROM_CPU:
dom = "CPU ";
break;
+ case VIR_FROM_CAPABILITIES:
+ dom = "Capabilities ";
+ break;
case VIR_FROM_NWFILTER:
dom = "Network Filter ";
break;
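The 1/0/-1 contract of virCheckPMCapability above can be sketched in Python with an injected command runner; the runner parameter is a hypothetical helper for illustration (it stands in for virCommandRun), so the pm-is-supported binary is not actually executed:

```python
def check_pm_capability(capability, runner):
    """Return 1 when the capability is supported, 0 when the query ran
    but the capability is unsupported, and -1 on error, mirroring the
    contract of virCheckPMCapability.  'runner' executes the command
    line and returns its exit status; it raises OSError when the
    pm-is-supported binary cannot be found."""
    flags = {'S3': '--suspend', 'S4': '--hibernate'}
    if capability not in flags:
        return -1                       # unknown capability
    try:
        status = runner(['pm-is-supported', flags[capability]])
    except OSError:
        return -1                       # pm-is-supported not in PATH
    return 1 if status == 0 else 0      # exit status 0 means supported
```

Injecting the runner also makes the mapping testable without depending on whether the build host itself supports S3/S4.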