[libvirt] Globally Reserve Resources for Host
by Dusty Mabe
Hi,
I am interested in the capability to globally reserve resources (CPU and
memory) for a KVM host. I know you can configure memory limits for each
guest (http://libvirt.org/formatdomain.html#elementsMemoryTuning), but I
would like the ability to reserve host CPU and memory without having to
do it actively by modifying each guest's XML.
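For reference, the per-guest workaround today looks roughly like this (a
rough sketch using the libvirt Python bindings; the guest name and the
2 GiB hard limit are made-up example values):

import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('guest1')    # example guest name

# 'hard_limit' is given in KiB; the 2 GiB value is just an illustration
dom.setMemoryParameters({'hard_limit': 2 * 1024 * 1024},
                        libvirt.VIR_DOMAIN_AFFECT_CONFIG)

Doing that (or the equivalent XML edit) for every guest, and keeping it in
sync as guests come and go, is exactly what I would like to avoid.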
For clarity, what I mean by "reserve resources" is that there are certain
CPUs and a certain amount of memory that guests will never have access to.
This can be achieved using cgroups.
Does anyone think this functionality would be useful? It is primarily meant
to prevent the host from being starved when the guests' combined allocation
has the host overcommitted/oversubscribed.
Note: I think VMware has similar functionality, but I am not sure as I
don't really use VMware:
http://blogs.technet.com/b/virtualpfe/archive/2011/08/29/hyper-v-dynamic-...
Thanks for any thoughts,
Dusty
[libvirt] [PATCH] network: fix crash when portgroup has no name
by Laine Stump
This resolves: https://bugzilla.redhat.com/show_bug.cgi?id=879473
The name attribute is required for portgroup elements (yes, the RNG
specifies that), and there is code in libvirt that assumes it is
non-null. Unfortunately, the portgroup parsing function wasn't checking
for a missing name. One adverse result of this was that
attempts to update a network by adding a portgroup with no name would
cause libvirtd to segfault. For example:
virsh net-update default add portgroup "<portgroup default='yes'/>"
This patch causes virNetworkPortGroupParseXML to fail if no name is
specified, thus avoiding any later problems.
---
src/conf/network_conf.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/src/conf/network_conf.c b/src/conf/network_conf.c
index 228951d..6ce2e63 100644
--- a/src/conf/network_conf.c
+++ b/src/conf/network_conf.c
@@ -1175,6 +1175,12 @@ virNetworkPortGroupParseXML(virPortGroupDefPtr def,
     /* grab raw data from XML */
     def->name = virXPathString("string(./@name)", ctxt);
+    if (!def->name) {
+        virReportError(VIR_ERR_XML_ERROR, "%s",
+                       _("Missing required name attribute in portgroup"));
+        goto error;
+    }
+
     isDefault = virXPathString("string(./@default)", ctxt);
     def->isDefault = isDefault && STRCASEEQ(isDefault, "yes");
--
1.7.11.7
[libvirt] [PATCH] qemu: Remove full stop from error messages
by Jiri Denemark
---
Pushed as trivial.
src/qemu/qemu_agent.c | 2 +-
src/qemu/qemu_monitor.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/src/qemu/qemu_agent.c b/src/qemu/qemu_agent.c
index 7062d53..893f7f2 100644
--- a/src/qemu/qemu_agent.c
+++ b/src/qemu/qemu_agent.c
@@ -242,7 +242,7 @@ qemuAgentOpenUnix(const char *monitor, pid_t cpid, bool *inProgress)
if (ret != 0) {
virReportSystemError(errno, "%s",
- _("monitor socket did not show up."));
+ _("monitor socket did not show up"));
goto error;
}
diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c
index fe8424f..aef5044 100644
--- a/src/qemu/qemu_monitor.c
+++ b/src/qemu/qemu_monitor.c
@@ -297,7 +297,7 @@ qemuMonitorOpenUnix(const char *monitor, pid_t cpid)
if (ret != 0) {
virReportSystemError(errno, "%s",
- _("monitor socket did not show up."));
+ _("monitor socket did not show up"));
goto error;
}
--
1.8.0
[libvirt] [test-API][PATCH v3] Add test case of set vcpus with flags
by Wayne Sun
v2: break the case down into smaller cases with separate flags
* Use the setVcpusFlags API to set domain vcpus with flags
* 3 cases added, each dealing with only one flag value: config,
live or maximum
* cases are independent of domain state; the API will report an error
if the flag is not suitable for a given state
* the sample conf is only one scenario of hotplugging domain vcpus
v3: merge the maximum case into the config case
* the maximum flag only works when the domain is shut off, so merge it
into the config case to simplify the code
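For reference, the flag usage the cases exercise boils down to roughly the
following (an illustrative sketch only; the guest name is a placeholder and
the vcpu values mirror the sample conf):

import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('guest1')   # placeholder guest name

# domain shut off: the maximum flag is only valid here, which is why it
# now lives in the config case
dom.setVcpusFlags(8, libvirt.VIR_DOMAIN_VCPU_MAXIMUM)
dom.setVcpusFlags(1, libvirt.VIR_DOMAIN_AFFECT_CONFIG)

dom.create()   # start the guest

# domain running: the live flag changes the active vcpu count
dom.setVcpusFlags(3, libvirt.VIR_DOMAIN_VCPU_LIVE)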
Signed-off-by: Wayne Sun <gsun(a)redhat.com>
---
cases/set_vcpus_flags.conf | 67 +++++++++++++++++++++++++
repos/setVcpus/set_vcpus_config.py | 93 ++++++++++++++++++++++++++++++++++
repos/setVcpus/set_vcpus_live.py | 96 ++++++++++++++++++++++++++++++++++++
3 files changed, 256 insertions(+), 0 deletions(-)
create mode 100644 cases/set_vcpus_flags.conf
create mode 100644 repos/setVcpus/__init__.py
create mode 100644 repos/setVcpus/set_vcpus_config.py
create mode 100644 repos/setVcpus/set_vcpus_live.py
diff --git a/cases/set_vcpus_flags.conf b/cases/set_vcpus_flags.conf
new file mode 100644
index 0000000..6cf595f
--- /dev/null
+++ b/cases/set_vcpus_flags.conf
@@ -0,0 +1,67 @@
+domain:install_linux_cdrom
+ guestname
+ $defaultname
+ guestos
+ $defaultos
+ guestarch
+ $defaultarch
+ vcpu
+ $defaultvcpu
+ memory
+ $defaultmem
+ hddriver
+ $defaulthd
+ nicdriver
+ $defaultnic
+ imageformat
+ qcow2
+
+domain:destroy
+ guestname
+ $defaultname
+
+setVcpus:set_vcpus_config
+ guestname
+ $defaultname
+ vcpu
+ 1
+ maxvcpu
+ 8
+
+domain:start
+ guestname
+ $defaultname
+
+setVcpus:set_vcpus_live
+ guestname
+ $defaultname
+ vcpu
+ 3
+ username
+ $username
+ password
+ $password
+
+setVcpus:set_vcpus_config
+ guestname
+ $defaultname
+ vcpu
+ 5
+
+domain:destroy
+ guestname
+ $defaultname
+
+domain:start
+ guestname
+ $defaultname
+
+domain:destroy
+ guestname
+ $defaultname
+
+domain:undefine
+ guestname
+ $defaultname
+
+options cleanup=enable
diff --git a/repos/setVcpus/__init__.py b/repos/setVcpus/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/repos/setVcpus/set_vcpus_config.py b/repos/setVcpus/set_vcpus_config.py
new file mode 100644
index 0000000..08eb53f
--- /dev/null
+++ b/repos/setVcpus/set_vcpus_config.py
@@ -0,0 +1,93 @@
+#!/usr/bin/env python
+# Test set domain vcpu with flag VIR_DOMAIN_AFFECT_CONFIG, also set
+# and check max vcpu with flag VIR_DOMAIN_VCPU_MAXIMUM if maxvcpu
+# param is given
+
+from xml.dom import minidom
+
+import libvirt
+from libvirt import libvirtError
+
+from src import sharedmod
+
+required_params = ('guestname', 'vcpu', )
+optional_params = {'maxvcpu': 8,
+ }
+
+def get_vcpu_number(domobj):
+ """dump domain config xml description to get vcpu number, return
+ current vcpu and maximum vcpu number
+ """
+ try:
+ guestxml = domobj.XMLDesc(2)
+ logger.debug("domain %s xml is :\n%s" %(domobj.name(), guestxml))
+ xml = minidom.parseString(guestxml)
+ vcpu = xml.getElementsByTagName('vcpu')[0]
+ maxvcpu = int(vcpu.childNodes[0].data)
+ logger.info("domain max vcpu number is: %s" % maxvcpu)
+
+ if vcpu.hasAttribute('current'):
+ attr = vcpu.getAttributeNode('current')
+ current = int(attr.nodeValue)
+ else:
+ logger.info("no 'current' atrribute for element vcpu")
+ current = int(vcpu.childNodes[0].data)
+
+ logger.info("domain current vcpu number is: %s" % current)
+
+ except libvirtError, e:
+ logger.error("libvirt call failed: " + str(e))
+ return False
+
+ return current, maxvcpu
+
+def set_vcpus_config(params):
+ """set domain vcpu with config flag and check, also set and check
+ max vcpu with maximum flag if optional param maxvcpu is given
+ """
+ global logger
+ logger = params['logger']
+ params.pop('logger')
+ guestname = params['guestname']
+ vcpu = int(params['vcpu'])
+ maxvcpu = params.get('maxvcpu', None)
+
+ logger.info("the name of virtual machine is %s" % guestname)
+ logger.info("the given vcpu number is %s" % vcpu)
+
+ conn = sharedmod.libvirtobj['conn']
+
+ try:
+ domobj = conn.lookupByName(guestname)
+ logger.info("set domain vcpu as %s with flag: %s" %
+ (vcpu, libvirt.VIR_DOMAIN_AFFECT_CONFIG))
+ domobj.setVcpusFlags(vcpu, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
+ logger.info("set domain vcpu succeed")
+
+ if maxvcpu:
+ logger.info("the given max vcpu number is %s" % maxvcpu)
+ logger.info("set domain maximum vcpu as %s with flag: %s" %
+ (maxvcpu, libvirt.VIR_DOMAIN_VCPU_MAXIMUM))
+ domobj.setVcpusFlags(int(maxvcpu), libvirt.VIR_DOMAIN_VCPU_MAXIMUM)
+ logger.info("set domain vcpu succeed")
+
+ except libvirtError, e:
+ logger.error("libvirt call failed: " + str(e))
+ return 1
+
+ logger.info("check domain config xml to get vcpu number")
+ ret = get_vcpu_number(domobj)
+ if ret[0] == vcpu:
+ logger.info("domain current vcpu is equal as set")
+ if maxvcpu:
+ if ret[1] == int(maxvcpu):
+ logger.info("domain max vcpu is equal as set")
+ return 0
+ else:
+ logger.error("domain max vcpu is not equal as set")
+ return 1
+ else:
+ return 0
+ else:
+ logger.error("domain current vcpu is not equal as set")
+ return 1
diff --git a/repos/setVcpus/set_vcpus_live.py b/repos/setVcpus/set_vcpus_live.py
new file mode 100644
index 0000000..35a2976
--- /dev/null
+++ b/repos/setVcpus/set_vcpus_live.py
@@ -0,0 +1,96 @@
+#!/usr/bin/env python
+# Test setting domain vcpus with the flag VIR_DOMAIN_VCPU_LIVE. Check
+# the domain xml and inside the domain to get the current vcpu number.
+# The live flag only works on a running domain, so the test will fail
+# on a shutoff domain.
+
+from xml.dom import minidom
+
+import libvirt
+from libvirt import libvirtError
+
+from src import sharedmod
+from utils import utils
+
+required_params = ('guestname', 'vcpu', 'username', 'password', )
+optional_params = {}
+
+def get_current_vcpu(domobj, username, password):
+ """dump domain live xml description to get current vcpu number
+ and check in domain to confirm
+ """
+ try:
+ guestxml = domobj.XMLDesc(1)
+ guestname = domobj.name()
+ logger.debug("domain %s xml is :\n%s" %(guestname, guestxml))
+ xml = minidom.parseString(guestxml)
+ vcpu = xml.getElementsByTagName('vcpu')[0]
+
+ if vcpu.hasAttribute('current'):
+ attr = vcpu.getAttributeNode('current')
+ current = int(attr.nodeValue)
+ else:
+ logger.info("no 'current' atrribute for element vcpu")
+ current = int(vcpu.childNodes[0].data)
+
+ logger.info("domain current vcpu number in live xml is: %s" % current)
+
+ except libvirtError, e:
+ logger.error("libvirt call failed: " + str(e))
+ return False
+
+ logger.debug("get the mac address of vm %s" % guestname)
+ mac = utils.get_dom_mac_addr(guestname)
+ logger.debug("the mac address of vm %s is %s" % (guestname, mac))
+
+ logger.info("check cpu number in domain")
+ ip = utils.mac_to_ip(mac, 180)
+
+ cmd = "cat /proc/cpuinfo | grep processor | wc -l"
+ ret, output = utils.remote_exec_pexpect(ip, username, password, cmd)
+ if not ret:
+ logger.info("cpu number in domain is %s" % output)
+ if int(output) == current:
+ logger.info("cpu in domain is equal to current vcpu value")
+ else:
+ logger.error("current vcpu is not equal as check in domain")
+ return False
+ else:
+ logger.error("check in domain fail")
+ return False
+
+ return current
+
+def set_vcpus_live(params):
+ """set domain vcpu with live flag and check
+ """
+ global logger
+ logger = params['logger']
+ params.pop('logger')
+ guestname = params['guestname']
+ vcpu = int(params['vcpu'])
+ username = params['username']
+ password = params['password']
+
+ logger.info("the name of virtual machine is %s" % guestname)
+ logger.info("the given vcpu number is %s" % vcpu)
+
+ conn = sharedmod.libvirtobj['conn']
+
+ try:
+ domobj = conn.lookupByName(guestname)
+ logger.info("set domain vcpu as %s with flag: %s" %
+ (vcpu, libvirt.VIR_DOMAIN_VCPU_LIVE))
+ domobj.setVcpusFlags(vcpu, libvirt.VIR_DOMAIN_VCPU_LIVE)
+ except libvirtError, e:
+ logger.error("libvirt call failed: " + str(e))
+ return 1
+
+ logger.info("check domain vcpu")
+ ret = get_current_vcpu(domobj, username, password)
+ if ret == vcpu:
+ logger.info("domain vcpu is equal as set")
+ return 0
+ else:
+ logger.error("domain vcpu is not equal as set")
+ return 1
--
1.7.1
[libvirt] [PATCH 0/3] qemu: QMP Capability Probing Fixes
by Viktor Mihajlovski
QMP capability probing will fail if the QEMU process cannot create the
monitor socket file in /var/lib/libvirt/qemu. This is the case if the
configured QEMU user is not root, but QEMU is run under root to perform
the probing.
The suggested solution is to run QEMU as the qemu user for probing as well.
As it happens, this developed into a mini-series: it was necessary
to let libvirt handle the pid file, as it is stored in the root-owned
directory /var/run/libvirt/qemu. This in turn exposed a race condition
when opening the socket. Last but not least, caps->version was not filled
in by QMP probing.
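For illustration, the version computation added for QMP probing amounts to
roughly this (a Python sketch of the idea only, not the actual C code; the
query-version reply below is hand-written, and the packing follows libvirt's
usual major * 1,000,000 + minor * 1,000 + micro encoding):

# Example QMP query-version reply (hand-written sample data)
reply = {"return": {"qemu": {"major": 1, "minor": 2, "micro": 0},
                    "package": ""}}

ver = reply["return"]["qemu"]
version = ver["major"] * 1000000 + ver["minor"] * 1000 + ver["micro"]
print(version)   # 1002000 for QEMU 1.2.0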
Viktor Mihajlovski (3):
qemu: Wait for monitor socket even without pid
qemu: Fix QMP Capability Probing Failure
qemu: Add QEMU version computation to QMP probing
src/qemu/qemu_capabilities.c | 89 ++++++++++++++++++++++++++++++++++----------
src/qemu/qemu_capabilities.h | 7 +++-
src/qemu/qemu_driver.c | 4 +-
src/qemu/qemu_monitor.c | 2 +-
4 files changed, 78 insertions(+), 24 deletions(-)
--
1.7.12.4
[libvirt] [test-API][PATCH v2] Add test case of set vcpus with flags
by Wayne Sun
v2: break the case down into smaller cases with separate flags
* Use the setVcpusFlags API to set domain vcpus with flags
* 3 cases added, each dealing with only one flag value: config,
live or maximum
* cases are independent of domain state; the API will report an error
if the flag is not suitable for a given state
* the sample conf is only one scenario of hotplugging domain vcpus
Signed-off-by: Wayne Sun <gsun(a)redhat.com>
---
cases/set_vcpus_flags.conf | 64 +++++++++++++++++++++++
repos/setVcpus/set_vcpus_config.py | 69 +++++++++++++++++++++++++
repos/setVcpus/set_vcpus_live.py | 96 +++++++++++++++++++++++++++++++++++
repos/setVcpus/set_vcpus_maximum.py | 62 ++++++++++++++++++++++
4 files changed, 291 insertions(+), 0 deletions(-)
create mode 100644 cases/set_vcpus_flags.conf
create mode 100644 repos/setVcpus/__init__.py
create mode 100644 repos/setVcpus/set_vcpus_config.py
create mode 100644 repos/setVcpus/set_vcpus_live.py
create mode 100644 repos/setVcpus/set_vcpus_maximum.py
diff --git a/cases/set_vcpus_flags.conf b/cases/set_vcpus_flags.conf
new file mode 100644
index 0000000..d346735
--- /dev/null
+++ b/cases/set_vcpus_flags.conf
@@ -0,0 +1,64 @@
+domain:install_linux_cdrom
+ guestname
+ $defaultname
+ guestos
+ $defaultos
+ guestarch
+ $defaultarch
+ vcpu
+ $defaultvcpu
+ memory
+ $defaultmem
+ hddriver
+ $defaulthd
+ nicdriver
+ $defaultnic
+ imageformat
+ qcow2
+
+
+domain:destroy
+ guestname
+ $defaultname
+
+setVcpus:set_vcpus_maximum
+ guestname
+ $defaultname
+ vcpu
+ 4
+
+setVcpus:set_vcpus_config
+ guestname
+ $defaultname
+ vcpu
+ 1
+
+domain:start
+ guestname
+ $defaultname
+
+setVcpus:set_vcpus_live
+ guestname
+ $defaultname
+ vcpu
+ 3
+ username
+ $username
+ password
+ $password
+
+setVcpus:set_vcpus_config
+ guestname
+ $defaultname
+ vcpu
+ 2
+
+domain:destroy
+ guestname
+ $defaultname
+
+domain:undefine
+ guestname
+ $defaultname
+
+options cleanup=enable
diff --git a/repos/setVcpus/__init__.py b/repos/setVcpus/__init__.py
new file mode 100644
index 0000000..e69de29
diff --git a/repos/setVcpus/set_vcpus_config.py b/repos/setVcpus/set_vcpus_config.py
new file mode 100644
index 0000000..2b8f5e7
--- /dev/null
+++ b/repos/setVcpus/set_vcpus_config.py
@@ -0,0 +1,69 @@
+#!/usr/bin/env python
+# Test set domain vcpu with flag VIR_DOMAIN_AFFECT_CONFIG. Check
+# domain config xml to get 'current' vcpu number.
+
+from xml.dom import minidom
+
+import libvirt
+from libvirt import libvirtError
+
+from src import sharedmod
+
+required_params = ('guestname', 'vcpu', )
+optional_params = {}
+
+def get_current_vcpu(domobj):
+ """dump domain config xml description to get current vcpu number
+ """
+ try:
+ guestxml = domobj.XMLDesc(2)
+ logger.debug("domain %s xml is :\n%s" %(domobj.name(), guestxml))
+ xml = minidom.parseString(guestxml)
+ vcpu = xml.getElementsByTagName('vcpu')[0]
+
+ if vcpu.hasAttribute('current'):
+ attr = vcpu.getAttributeNode('current')
+ current = int(attr.nodeValue)
+ else:
+ logger.info("no 'current' atrribute for element vcpu")
+ current = int(vcpu.childNodes[0].data)
+
+ logger.info("domain current vcpu number is: %s" % current)
+
+ except libvirtError, e:
+ logger.error("libvirt call failed: " + str(e))
+ return False
+
+ return current
+
+def set_vcpus_config(params):
+ """set domain vcpu with config flag and check
+ """
+ global logger
+ logger = params['logger']
+ params.pop('logger')
+ guestname = params['guestname']
+ vcpu = int(params['vcpu'])
+
+ logger.info("the name of virtual machine is %s" % guestname)
+ logger.info("the given vcpu number is %s" % vcpu)
+
+ conn = sharedmod.libvirtobj['conn']
+
+ try:
+ domobj = conn.lookupByName(guestname)
+ logger.info("set domain vcpu as %s with flag: %s" %
+ (vcpu, libvirt.VIR_DOMAIN_AFFECT_CONFIG))
+ domobj.setVcpusFlags(vcpu, libvirt.VIR_DOMAIN_AFFECT_CONFIG)
+ except libvirtError, e:
+ logger.error("libvirt call failed: " + str(e))
+ return 1
+
+ logger.info("check domain config xml to get current vcpu")
+ ret = get_current_vcpu(domobj)
+ if ret == vcpu:
+ logger.info("domain current vcpu is equal as set")
+ return 0
+ else:
+ logger.error("domain current vcpu is not equal as set")
+ return 1
diff --git a/repos/setVcpus/set_vcpus_live.py b/repos/setVcpus/set_vcpus_live.py
new file mode 100644
index 0000000..35a2976
--- /dev/null
+++ b/repos/setVcpus/set_vcpus_live.py
@@ -0,0 +1,96 @@
+#!/usr/bin/env python
+# Test setting domain vcpus with the flag VIR_DOMAIN_VCPU_LIVE. Check
+# the domain xml and inside the domain to get the current vcpu number.
+# The live flag only works on a running domain, so the test will fail
+# on a shutoff domain.
+
+from xml.dom import minidom
+
+import libvirt
+from libvirt import libvirtError
+
+from src import sharedmod
+from utils import utils
+
+required_params = ('guestname', 'vcpu', 'username', 'password', )
+optional_params = {}
+
+def get_current_vcpu(domobj, username, password):
+ """dump domain live xml description to get current vcpu number
+ and check in domain to confirm
+ """
+ try:
+ guestxml = domobj.XMLDesc(1)
+ guestname = domobj.name()
+ logger.debug("domain %s xml is :\n%s" %(guestname, guestxml))
+ xml = minidom.parseString(guestxml)
+ vcpu = xml.getElementsByTagName('vcpu')[0]
+
+ if vcpu.hasAttribute('current'):
+ attr = vcpu.getAttributeNode('current')
+ current = int(attr.nodeValue)
+ else:
+ logger.info("no 'current' atrribute for element vcpu")
+ current = int(vcpu.childNodes[0].data)
+
+ logger.info("domain current vcpu number in live xml is: %s" % current)
+
+ except libvirtError, e:
+ logger.error("libvirt call failed: " + str(e))
+ return False
+
+ logger.debug("get the mac address of vm %s" % guestname)
+ mac = utils.get_dom_mac_addr(guestname)
+ logger.debug("the mac address of vm %s is %s" % (guestname, mac))
+
+ logger.info("check cpu number in domain")
+ ip = utils.mac_to_ip(mac, 180)
+
+ cmd = "cat /proc/cpuinfo | grep processor | wc -l"
+ ret, output = utils.remote_exec_pexpect(ip, username, password, cmd)
+ if not ret:
+ logger.info("cpu number in domain is %s" % output)
+ if int(output) == current:
+ logger.info("cpu in domain is equal to current vcpu value")
+ else:
+ logger.error("current vcpu is not equal as check in domain")
+ return False
+ else:
+ logger.error("check in domain fail")
+ return False
+
+ return current
+
+def set_vcpus_live(params):
+ """set domain vcpu with live flag and check
+ """
+ global logger
+ logger = params['logger']
+ params.pop('logger')
+ guestname = params['guestname']
+ vcpu = int(params['vcpu'])
+ username = params['username']
+ password = params['password']
+
+ logger.info("the name of virtual machine is %s" % guestname)
+ logger.info("the given vcpu number is %s" % vcpu)
+
+ conn = sharedmod.libvirtobj['conn']
+
+ try:
+ domobj = conn.lookupByName(guestname)
+ logger.info("set domain vcpu as %s with flag: %s" %
+ (vcpu, libvirt.VIR_DOMAIN_VCPU_LIVE))
+ domobj.setVcpusFlags(vcpu, libvirt.VIR_DOMAIN_VCPU_LIVE)
+ except libvirtError, e:
+ logger.error("libvirt call failed: " + str(e))
+ return 1
+
+ logger.info("check domain vcpu")
+ ret = get_current_vcpu(domobj, username, password)
+ if ret == vcpu:
+ logger.info("domain vcpu is equal as set")
+ return 0
+ else:
+ logger.error("domain vcpu is not equal as set")
+ return 1
diff --git a/repos/setVcpus/set_vcpus_maximum.py b/repos/setVcpus/set_vcpus_maximum.py
new file mode 100644
index 0000000..389a214
--- /dev/null
+++ b/repos/setVcpus/set_vcpus_maximum.py
@@ -0,0 +1,62 @@
+#!/usr/bin/env python
+# Test setting domain vcpus with the flag VIR_DOMAIN_VCPU_MAXIMUM. Check
+# the domain xml to get the max vcpu number. The maximum flag only works
+# on a shutoff domain, so the test will fail on a running domain.
+
+from xml.dom import minidom
+
+import libvirt
+from libvirt import libvirtError
+
+from src import sharedmod
+
+required_params = ('guestname', 'vcpu', )
+optional_params = {}
+
+def get_max_vcpu(domobj):
+ """dump domain xml description to get max vcpu number
+ """
+ try:
+ guestxml = domobj.XMLDesc(1)
+ logger.debug("domain %s xml is :\n%s" %(domobj.name(), guestxml))
+ xml = minidom.parseString(guestxml)
+ vcpu = xml.getElementsByTagName('vcpu')[0]
+ max = int(vcpu.childNodes[0].data)
+ logger.info("domain maximum vcpu number in xml is: %s" % max)
+ except libvirtError, e:
+ logger.error("libvirt call failed: " + str(e))
+ return False
+
+ return max
+
+def set_vcpus_maximum(params):
+ """set domain vcpu with maximum flag and check
+ """
+ global logger
+ logger = params['logger']
+ params.pop('logger')
+ guestname = params['guestname']
+ vcpu = int(params['vcpu'])
+
+ logger.info("the name of virtual machine is %s" % guestname)
+ logger.info("the given vcpu number is %s" % vcpu)
+
+ conn = sharedmod.libvirtobj['conn']
+
+ try:
+ domobj = conn.lookupByName(guestname)
+ logger.info("set domain maximum vcpu as %s with flag: %s" %
+ (vcpu, libvirt.VIR_DOMAIN_VCPU_MAXIMUM))
+ domobj.setVcpusFlags(vcpu, libvirt.VIR_DOMAIN_VCPU_MAXIMUM)
+ except libvirtError, e:
+ logger.error("libvirt call failed: " + str(e))
+ return 1
+
+ logger.info("check domain xml to get max vcpu")
+ ret = get_max_vcpu(domobj)
+ if ret == vcpu:
+ logger.info("domain max vcpu is equal as set")
+ return 0
+ else:
+ logger.error("domain max vcpu is not equal as set")
+ return 1
--
1.7.1
[libvirt] [PATCH v2 0/4] Introduce support for FITRIM within guest OS
by Michal Privoznik
https://bugzilla.redhat.com/show_bug.cgi?id=831159
diff to v1:
- Peter's review suggestions worked in
2/4 has been ACKed already.
Michal Privoznik (4):
Introduce virDomainFSTrim() public API
remote: Implement virDomainFSTrim
qemu: Implement virDomainFSTrim
virsh: Expose new virDomainFSTrim API
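For completeness, once the bindings pick up the new API, a call would look
roughly like this (a sketch only; it assumes the generated Python binding is
exposed as fsTrim, that passing None for the mount point means all mounted
filesystems, and that the guest has a working guest agent channel):

import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('guest1')      # example guest name

# None asks the agent to trim every mounted filesystem;
# minimum=0 trims all free ranges
dom.fsTrim(None, 0, 0)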
include/libvirt/libvirt.h.in | 4 ++
src/driver.h | 6 ++++
src/libvirt.c | 55 ++++++++++++++++++++++++++++++++++
src/libvirt_public.syms | 5 +++
src/qemu/qemu_agent.c | 25 +++++++++++++++
src/qemu/qemu_agent.h | 2 +
src/qemu/qemu_driver.c | 68 ++++++++++++++++++++++++++++++++++++++++++
src/remote/remote_driver.c | 1 +
src/remote/remote_protocol.x | 10 +++++-
src/remote_protocol-structs | 7 ++++
src/rpc/gendispatch.pl | 1 +
tools/virsh-domain.c | 47 +++++++++++++++++++++++++++++
tools/virsh.pod | 14 ++++++++
13 files changed, 244 insertions(+), 1 deletions(-)
--
1.7.8.6