[libvirt] [RFC] block I/O throttling: how to enable in libvirt
by Zhi Yong Wu
----- Forwarded message from Zhi Yong Wu <wuzhy@linux.vnet.ibm.com> -----
Date: Thu, 1 Sep 2011 11:55:17 +0800
From: Zhi Yong Wu <wuzhy@linux.vnet.ibm.com>
To: Stefan Hajnoczi <stefanha@gmail.com>
Cc: "Daniel P. Berrange" <berrange@redhat.com>, Stefan Hajnoczi
<stefanha@gmail.com>, Adam Litke <agl@us.ibm.com>, Zhi Yong Wu
<wuzhy@linux.vnet.ibm.com>, QEMU Developers <qemu-devel@nongnu.org>,
guijianfeng@cn.fujitsu.com, hutao@cn.fujitsu.com
Subject: [RFC] block I/O throttling: how to enable in libvirt
On Wed, Aug 31, 2011 at 08:18:19AM +0100, Stefan Hajnoczi wrote:
>Subject: Re: The design choice for how to enable block I/O throttling
> function in libvirt
>From: Stefan Hajnoczi <stefanha@gmail.com>
>To: Adam Litke <agl@us.ibm.com>
>Cc: libvir-list@redhat.com, "Daniel P. Berrange" <berrange@redhat.com>, Zhi
> Yong Wu <wuzhy@linux.vnet.ibm.com>, Zhi Yong Wu <zwu.kernel@gmail.com>
>
>On Tue, Aug 30, 2011 at 2:46 PM, Adam Litke <agl@us.ibm.com> wrote:
>> On Tue, Aug 30, 2011 at 09:53:33AM +0100, Stefan Hajnoczi wrote:
>>> On Tue, Aug 30, 2011 at 3:55 AM, Zhi Yong Wu <zwu.kernel@gmail.com> wrote:
>>> > I am trying to enable the block I/O throttling function in libvirt,
>>> > but I have run into some design questions and am not sure whether we
>>> > should extend blkiotune to support block I/O throttling or introduce
>>> > a new libvirt command "blkiothrottle" to cover it. If you
>>> > have a better idea, please don't hesitate to drop your comments.
>>>
>>> A little bit of context: this discussion is about adding libvirt
>>> support for QEMU disk I/O throttling.
>>
>> Thanks for the additional context, Stefan.
>>
>>> Today libvirt supports the cgroups blkio-controller, which handles
>>> proportional shares and throughput/iops limits on host block devices.
>>> blkio-controller does not support network file systems (NFS) or other
>>> QEMU remote block drivers (curl, Ceph/rbd, sheepdog) since they are
>>> not host block devices. QEMU I/O throttling works with all types of
>>> -drive and therefore complements blkio-controller.
>>
>> The first question that pops into my mind is: Should a user need to understand
>> when to use the cgroups blkio-controller vs. the QEMU I/O throttling method? In
>> my opinion, it would be nice if libvirt had a single interface for block I/O
>> throttling and libvirt would decide which mechanism to use based on the type of
>> device and the specific limits that need to be set.
>
>Yes, I agree it would be simplest to pick the right mechanism,
>depending on the type of throttling the user wants. More below.
>
>>> I/O throttling can be applied independently to each -drive attached to
>>> a guest and supports throughput/iops limits. For more information on
>>> this QEMU feature and a comparison with blkio-controller, see Ryan
>>> Harper's KVM Forum 2011 presentation:
>>
>>> http://www.linux-kvm.org/wiki/images/7/72/2011-forum-keep-a-limit-on-it-i...
>>
>> From the presentation, it seems that both the cgroups method and the QEMU
>> method offer comparable control (assuming a block device), so it might be
>> possible to apply either method from the same API in a transparent manner.
>> Am I correct, or are we suggesting that the QEMU throttling approach
>> should always be used for QEMU domains?
>
>QEMU I/O throttling does not provide a proportional share mechanism.
>So you cannot assign weights to VMs and let them receive a fraction of
>the available disk time. That is only supported by cgroups
>blkio-controller because it requires a global view which QEMU does not
>have.
>
>So I think the two are complementary:
>
>If proportional share should be used on a host block device, use
>cgroups blkio-controller.
>Otherwise use QEMU I/O throttling.
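The rule above can be sketched as a small selection helper. This is a hypothetical illustration, not actual libvirt code; the function and mechanism names are made up for the example:

```python
def pick_throttle_mechanism(wants_proportional_share, is_host_block_device):
    """Pick an I/O limiting mechanism following the rule above.

    Proportional share needs a global view of the disk, so it is only
    possible with the cgroups blkio-controller, and only for host block
    devices.  Per-drive throughput/iops caps (including NFS, rbd,
    sheepdog, curl backends) can use QEMU's own I/O throttling.
    """
    if wants_proportional_share:
        if not is_host_block_device:
            raise ValueError("proportional share requires a host block device")
        return "cgroups-blkio"
    return "qemu-io-throttling"
```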
Stefan,
Do you agree with introducing a new libvirt command blkiothrottle now?
If so, I will work on a code draft to make it work.
Daniel and other maintainers,
If you are available, could you give us some comments? :)
Regards,
Zhi Yong Wu
>
>Stefan
----- End forwarded message -----
[libvirt] [test-API][PATCH] Add libvirtd restart test case
by Wayne Sun
* A libvirtd restart should not affect running domains. This test
checks the libvirtd status before and after the restart, and
also checks the domain pid to confirm that the domain is not
affected.
---
repos/libvirtd/restart.py | 143 +++++++++++++++++++++++++++++++++++++++++++++
1 files changed, 143 insertions(+), 0 deletions(-)
create mode 100644 repos/libvirtd/restart.py
diff --git a/repos/libvirtd/restart.py b/repos/libvirtd/restart.py
new file mode 100644
index 0000000..15dd43c
--- /dev/null
+++ b/repos/libvirtd/restart.py
@@ -0,0 +1,143 @@
+#!/usr/bin/env python
+""" Restart libvirtd testing. A running guest is required by this test.
+ During the libvirtd restart, the guest remains running and is not
+ affected by the restart.
+ libvirtd:restart
+ guestname
+ #GUESTNAME#
+"""
+
+__author__ = 'Wayne Sun: gsun@redhat.com'
+__date__ = 'Thu Aug 4, 2011'
+__version__ = '0.1.0'
+__credits__ = 'Copyright (C) 2011 Red Hat, Inc.'
+__all__ = ['restart']
+
+import os
+import re
+import sys
+import time
+
+def append_path(path):
+ """Append root path of package"""
+ if path not in sys.path:
+ sys.path.append(path)
+
+pwd = os.getcwd()
+result = re.search('(.*)libvirt-test-API', pwd)
+append_path(result.group(0))
+
+from lib import connectAPI
+from lib import domainAPI
+from utils.Python import utils
+
+VIRSH_LIST = "virsh list --all"
+RESTART_CMD = "service libvirtd restart"
+
+def check_params(params):
+ """Verify the input parameter dictionary"""
+ logger = params['logger']
+ keys = ['guestname']
+ for key in keys:
+ if key not in params:
+ logger.error("%s is required" %key)
+ return 1
+ return 0
+
+def libvirtd_check(util, logger):
+ """check libvirtd status
+ """
+ cmd = "service libvirtd status"
+ ret, out = util.exec_cmd(cmd, shell=True)
+ if ret != 0:
+ logger.error("failed to get libvirtd status")
+ return 1
+ else:
+ logger.info(out[0])
+
+ logger.info(VIRSH_LIST)
+ ret, out = util.exec_cmd(VIRSH_LIST, shell=True)
+ if ret != 0:
+ logger.error("failed to get virsh list result")
+ return 1
+ else:
+ for i in range(len(out)):
+ logger.info(out[i])
+
+ return 0
+
+def get_domain_pid(util, logger, guestname):
+ """get the pid of running domain"""
+ logger.info("get the pid of running domain %s" % guestname)
+ get_pid_cmd = "cat /var/run/libvirt/qemu/%s.pid" % guestname
+ ret, pid = util.exec_cmd(get_pid_cmd, shell=True)
+ if ret:
+ logger.error("failed to get the pid of running domain %s" % \
+ guestname)
+ return 1, ""
+ else:
+ logger.info("the pid of domain %s is %s" % \
+ (guestname, pid[0]))
+ return 0, pid[0]
+
+def restart(params):
+ """restart libvirtd test"""
+ # Initiate and check parameters
+ params_check_result = check_params(params)
+ if params_check_result:
+ return 1
+
+ logger = params['logger']
+ guestname = params['guestname']
+ util = utils.Utils()
+ uri = util.get_uri('127.0.0.1')
+
+ conn = connectAPI.ConnectAPI()
+ virconn = conn.open(uri)
+ domobj = domainAPI.DomainAPI(virconn)
+ state = domobj.get_state(guestname)
+ conn.close()
+
+ if state == "shutoff":
+ logger.info("guest is shutoff; to run this case, the \
+ guest must be running")
+ return 1
+
+ logger.info("check the libvirtd status:")
+ result = libvirtd_check(util, logger)
+ if result:
+ return 1
+
+ ret, pid_before = get_domain_pid(util, logger, guestname)
+ if ret:
+ return 1
+
+ logger.info("restart libvirtd service:")
+ ret, out = util.exec_cmd(RESTART_CMD, shell=True)
+ if ret != 0:
+ logger.error("failed to restart libvirtd")
+ for i in range(len(out)):
+ logger.error(out[i])
+ return 1
+ else:
+ for i in range(len(out)):
+ logger.info(out[i])
+
+ logger.info("recheck libvirtd status:")
+ result = libvirtd_check(util, logger)
+ if result:
+ return 1
+
+ ret, pid_after = get_domain_pid(util, logger, guestname)
+ if ret:
+ return 1
+
+ if pid_before != pid_after:
+ logger.error("%s pid changed during libvirtd restart" % \
+ guestname)
+ return 1
+ else:
+ logger.info("domain pid did not change; %s kept running during \
+ libvirtd restart" % guestname)
+
+ return 0
--
1.7.1
[libvirt] [PATCH] API: Init conn in case it might be used uninitialized
by Osier Yang
There is a goto before "conn" is initialized.
---
src/libvirt.c | 3 ++-
1 files changed, 2 insertions(+), 1 deletions(-)
diff --git a/src/libvirt.c b/src/libvirt.c
index 4284954..eca919a 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -15225,12 +15225,13 @@ virDomainMigrateGetMaxSpeed(virDomainPtr domain,
return -1;
}
+ conn = domain->conn;
+
if (!bandwidth) {
virLibDomainError(VIR_ERR_INVALID_ARG, __FUNCTION__);
goto error;
}
- conn = domain->conn;
if (conn->flags & VIR_CONNECT_RO) {
virLibDomainError(VIR_ERR_OPERATION_DENIED, __FUNCTION__);
goto error;
--
1.7.6
[libvirt] [test-API][PATCH 1/2] Remove cases/migrate.conf testcases and create a set of migration testcases
by Guannan Ren
* cases/migrate.conf: remove it.
* cases/migration/*: create a new set of migration testcases with
tcp, tls and sasl combinations.
It's huge, so I am sending only the header of the commit here.
---
cases/migrate.conf | 97 ----
cases/migration/ssh_persistent_paused_no_dst.conf | 459 +++++++++++++++++++
.../migration/ssh_persistent_paused_with_dst.conf | 459 +++++++++++++++++++
cases/migration/ssh_persistent_running_no_dst.conf | 435 ++++++++++++++++++
.../migration/ssh_persistent_running_with_dst.conf | 435 ++++++++++++++++++
cases/migration/ssh_transient_paused_no_dst.conf | 403 +++++++++++++++++
cases/migration/ssh_transient_paused_with_dst.conf | 403 +++++++++++++++++
cases/migration/ssh_transient_running_no_dst.conf | 388 ++++++++++++++++
.../migration/ssh_transient_running_with_dst.conf | 382 ++++++++++++++++
cases/migration/tcp_persistent_paused_no_dst.conf | 471 ++++++++++++++++++++
.../migration/tcp_persistent_paused_with_dst.conf | 471 ++++++++++++++++++++
cases/migration/tcp_persistent_running_no_dst.conf | 447 +++++++++++++++++++
.../migration/tcp_persistent_running_with_dst.conf | 447 +++++++++++++++++++
.../tcp_sasl_persistent_paused_no_dst.conf | 167 +++++++
.../tcp_sasl_persistent_paused_with_dst.conf | 168 +++++++
.../tcp_sasl_persistent_running_no_dst.conf | 159 +++++++
.../tcp_sasl_persistent_running_with_dst.conf | 159 +++++++
.../tcp_sasl_transient_paused_no_dst.conf | 151 +++++++
.../tcp_sasl_transient_paused_with_dst.conf | 151 +++++++
.../tcp_sasl_transient_running_no_dst.conf | 143 ++++++
.../tcp_sasl_transient_running_with_dst.conf | 143 ++++++
cases/migration/tcp_transient_paused_no_dst.conf | 415 +++++++++++++++++
cases/migration/tcp_transient_paused_with_dst.conf | 415 +++++++++++++++++
cases/migration/tcp_transient_running_no_dst.conf | 400 +++++++++++++++++
.../migration/tcp_transient_running_with_dst.conf | 394 ++++++++++++++++
cases/migration/tls_persistent_paused_no_dst.conf | 471 ++++++++++++++++++++
.../migration/tls_persistent_paused_with_dst.conf | 471 ++++++++++++++++++++
cases/migration/tls_persistent_running_no_dst.conf | 447 +++++++++++++++++++
.../migration/tls_persistent_running_with_dst.conf | 447 +++++++++++++++++++
.../tls_sasl_persistent_paused_no_dst.conf | 167 +++++++
.../tls_sasl_persistent_paused_with_dst.conf | 167 +++++++
.../tls_sasl_persistent_running_no_dst.conf | 159 +++++++
.../tls_sasl_persistent_running_with_dst.conf | 159 +++++++
.../tls_sasl_transient_paused_no_dst.conf | 151 +++++++
.../tls_sasl_transient_paused_with_dst.conf | 151 +++++++
.../tls_sasl_transient_running_no_dst.conf | 143 ++++++
.../tls_sasl_transient_running_with_dst.conf | 143 ++++++
cases/migration/tls_transient_paused_no_dst.conf | 415 +++++++++++++++++
cases/migration/tls_transient_paused_with_dst.conf | 415 +++++++++++++++++
cases/migration/tls_transient_running_no_dst.conf | 400 +++++++++++++++++
.../migration/tls_transient_running_with_dst.conf | 394 ++++++++++++++++
41 files changed, 12765 insertions(+), 97 deletions(-)
delete mode 100644 cases/migrate.conf
create mode 100644 cases/migration/ssh_persistent_paused_no_dst.conf
create mode 100644 cases/migration/ssh_persistent_paused_with_dst.conf
create mode 100644 cases/migration/ssh_persistent_running_no_dst.conf
create mode 100644 cases/migration/ssh_persistent_running_with_dst.conf
create mode 100644 cases/migration/ssh_transient_paused_no_dst.conf
create mode 100644 cases/migration/ssh_transient_paused_with_dst.conf
create mode 100644 cases/migration/ssh_transient_running_no_dst.conf
create mode 100644 cases/migration/ssh_transient_running_with_dst.conf
create mode 100644 cases/migration/tcp_persistent_paused_no_dst.conf
create mode 100644 cases/migration/tcp_persistent_paused_with_dst.conf
create mode 100644 cases/migration/tcp_persistent_running_no_dst.conf
create mode 100644 cases/migration/tcp_persistent_running_with_dst.conf
create mode 100644 cases/migration/tcp_sasl_persistent_paused_no_dst.conf
create mode 100644 cases/migration/tcp_sasl_persistent_paused_with_dst.conf
create mode 100644 cases/migration/tcp_sasl_persistent_running_no_dst.conf
create mode 100644 cases/migration/tcp_sasl_persistent_running_with_dst.conf
create mode 100644 cases/migration/tcp_sasl_transient_paused_no_dst.conf
create mode 100644 cases/migration/tcp_sasl_transient_paused_with_dst.conf
create mode 100644 cases/migration/tcp_sasl_transient_running_no_dst.conf
create mode 100644 cases/migration/tcp_sasl_transient_running_with_dst.conf
create mode 100644 cases/migration/tcp_transient_paused_no_dst.conf
create mode 100644 cases/migration/tcp_transient_paused_with_dst.conf
create mode 100644 cases/migration/tcp_transient_running_no_dst.conf
create mode 100644 cases/migration/tcp_transient_running_with_dst.conf
create mode 100644 cases/migration/tls_persistent_paused_no_dst.conf
create mode 100644 cases/migration/tls_persistent_paused_with_dst.conf
create mode 100644 cases/migration/tls_persistent_running_no_dst.conf
create mode 100644 cases/migration/tls_persistent_running_with_dst.conf
create mode 100644 cases/migration/tls_sasl_persistent_paused_no_dst.conf
create mode 100644 cases/migration/tls_sasl_persistent_paused_with_dst.conf
create mode 100644 cases/migration/tls_sasl_persistent_running_no_dst.conf
create mode 100644 cases/migration/tls_sasl_persistent_running_with_dst.conf
create mode 100644 cases/migration/tls_sasl_transient_paused_no_dst.conf
create mode 100644 cases/migration/tls_sasl_transient_paused_with_dst.conf
create mode 100644 cases/migration/tls_sasl_transient_running_no_dst.conf
create mode 100644 cases/migration/tls_sasl_transient_running_with_dst.conf
create mode 100644 cases/migration/tls_transient_paused_no_dst.conf
create mode 100644 cases/migration/tls_transient_paused_with_dst.conf
create mode 100644 cases/migration/tls_transient_running_no_dst.conf
create mode 100644 cases/migration/tls_transient_running_with_dst.conf
[libvirt] [RFC] NUMA topology specification
by Bharata B Rao
Hi,
qemu supports specification of NUMA topology on command line using -numa option.
-numa node[,mem=size][,cpus=cpu[-cpu]][,nodeid=node]
I see that there is no way to specify such a NUMA topology in libvirt
XML. Are there plans to add support for NUMA topology specification?
Is anybody already working on this? If not, I would like to add this
support to libvirt.
Currently the topology specification available in libvirt (<topology
sockets='1' cores='2' threads='1'/>) translates to the "-smp
sockets=1,cores=2,threads=1" option of qemu. There is no equivalent
in libvirt that could generate the -numa command line option of qemu.
How about something like this ? (OPTION 1)
<cpu>
...
<numa nodeid='node' cpus='cpu[-cpu]' mem='size'/>
...
</cpu>
And we could specify multiple such lines, one for each node.
The -numa and -smp options in qemu do not work all that well together, since
they are parsed independently of each other, and one could specify a cpu set
with the -numa option that is incompatible with the sockets, cores and
threads specified in the -smp option. This should be fixed in qemu, but
given that such a problem has been observed, should libvirt tie the
specification of numa and smp (sockets, threads, cores) together so that one
is forced to specify only valid combinations of nodes and cpus in libvirt?
May be something like this: (OPTION 2)
<cpu>
...
<topology sockets='1' cores='2' threads='1' nodeid='0' cpus='0-1' mem='size'/>
<topology sockets='1' cores='2' threads='1' nodeid='1' cpus='2-3' mem='size'/>
...
</cpu>
This should result in a 2 node system with each node having 1 socket
with 2 cores.
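The translation OPTION 1 implies is mechanical: each proposed <numa> element maps to one -numa argument. A sketch in the style of the test-API code, with the caveat that the element and attribute names are only the proposal above, not an existing libvirt schema:

```python
import xml.etree.ElementTree as ET

def numa_args_from_xml(cpu_xml):
    """Translate OPTION 1 style <numa> elements into qemu -numa arguments.

    The <numa nodeid=... cpus=... mem=...> schema is hypothetical; it is
    the proposal above, not something libvirt parses today.
    """
    args = []
    for node in ET.fromstring(cpu_xml).findall("numa"):
        args += ["-numa", "node,nodeid=%s,cpus=%s,mem=%s" % (
            node.get("nodeid"), node.get("cpus"), node.get("mem"))]
    return args
```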
Comments, suggestions ?
Regards,
Bharata.
--
http://bharata.sulekha.com/blog/posts.htm, http://raobharata.wordpress.com/
[libvirt] [PATCH] start: allow discarding managed save
by Eric Blake
There have been several instances of people having problems with
a broken managed save file causing 'virsh start' to fail, and not
being aware that they could use 'virsh managedsave-remove dom' to
fix things. Making it possible to do this as part of starting a
domain makes the same functionality easier to find, and one less
API call.
* include/libvirt/libvirt.h.in (VIR_DOMAIN_START_FORCE_BOOT): New
flag.
* src/libvirt.c (virDomainCreateWithFlags): Document it.
* src/qemu/qemu_driver.c (qemuDomainObjStart): Alter signature.
(qemuAutostartDomain, qemuDomainStartWithFlags): Update callers.
* tools/virsh.c (cmdStart): Expose it in virsh.
* tools/virsh.pod (start): Document it.
---
include/libvirt/libvirt.h.in | 1 +
src/libvirt.c | 3 ++
src/qemu/qemu_driver.c | 50 +++++++++++++++++++++++++----------------
tools/virsh.c | 7 +++++-
tools/virsh.pod | 5 ++-
5 files changed, 43 insertions(+), 23 deletions(-)
diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index 53a2f7d..c51a5b9 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -236,6 +236,7 @@ typedef enum {
VIR_DOMAIN_START_PAUSED = 1 << 0, /* Launch guest in paused state */
VIR_DOMAIN_START_AUTODESTROY = 1 << 1, /* Automatically kill guest when virConnectPtr is closed */
VIR_DOMAIN_START_BYPASS_CACHE = 1 << 2, /* Avoid file system cache pollution */
+ VIR_DOMAIN_START_FORCE_BOOT = 1 << 3, /* Boot, discarding any managed save */
} virDomainCreateFlags;
diff --git a/src/libvirt.c b/src/libvirt.c
index 65a099b..80c8b7c 100644
--- a/src/libvirt.c
+++ b/src/libvirt.c
@@ -7081,6 +7081,9 @@ error:
* the file, or fail if it cannot do so for the given system; this can allow
* less pressure on file system cache, but also risks slowing loads from NFS.
*
+ * If the VIR_DOMAIN_START_FORCE_BOOT flag is set, then any managed save
+ * file for this domain is discarded, and the domain boots from scratch.
+ *
* Returns 0 in case of success, -1 in case of error
*/
int
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index f21122d..5033998 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -120,9 +120,7 @@ static int qemudShutdown(void);
static int qemuDomainObjStart(virConnectPtr conn,
struct qemud_driver *driver,
virDomainObjPtr vm,
- bool start_paused,
- bool autodestroy,
- bool bypass_cache);
+ unsigned int flags);
static int qemudDomainGetMaxVcpus(virDomainPtr dom);
@@ -135,11 +133,16 @@ struct qemuAutostartData {
};
static void
-qemuAutostartDomain(void *payload, const void *name ATTRIBUTE_UNUSED, void *opaque)
+qemuAutostartDomain(void *payload, const void *name ATTRIBUTE_UNUSED,
+ void *opaque)
{
virDomainObjPtr vm = payload;
struct qemuAutostartData *data = opaque;
virErrorPtr err;
+ int flags = 0;
+
+ if (data->driver->autoStartBypassCache)
+ flags |= VIR_DOMAIN_START_BYPASS_CACHE;
virDomainObjLock(vm);
virResetLastError();
@@ -152,9 +155,7 @@ qemuAutostartDomain(void *payload, const void *name ATTRIBUTE_UNUSED, void *opaq
} else {
if (vm->autostart &&
!virDomainObjIsActive(vm) &&
- qemuDomainObjStart(data->conn, data->driver, vm,
- false, false,
- data->driver->autoStartBypassCache) < 0) {
+ qemuDomainObjStart(data->conn, data->driver, vm, flags) < 0) {
err = virGetLastError();
VIR_ERROR(_("Failed to autostart VM '%s': %s"),
vm->def->name,
@@ -4441,12 +4442,14 @@ static int
qemuDomainObjStart(virConnectPtr conn,
struct qemud_driver *driver,
virDomainObjPtr vm,
- bool start_paused,
- bool autodestroy,
- bool bypass_cache)
+ unsigned int flags)
{
int ret = -1;
char *managed_save;
+ bool start_paused = (flags & VIR_DOMAIN_START_PAUSED) != 0;
+ bool autodestroy = (flags & VIR_DOMAIN_START_AUTODESTROY) != 0;
+ bool bypass_cache = (flags & VIR_DOMAIN_START_BYPASS_CACHE) != 0;
+ bool force_boot = (flags & VIR_DOMAIN_START_FORCE_BOOT) != 0;
/*
* If there is a managed saved state restore it instead of starting
@@ -4458,13 +4461,22 @@ qemuDomainObjStart(virConnectPtr conn,
goto cleanup;
if (virFileExists(managed_save)) {
- ret = qemuDomainObjRestore(conn, driver, vm, managed_save,
- bypass_cache);
+ if (force_boot) {
+ if (unlink(managed_save) < 0) {
+ virReportSystemError(errno,
+ _("cannot remove managed save file %s"),
+ managed_save);
+ goto cleanup;
+ }
+ } else {
+ ret = qemuDomainObjRestore(conn, driver, vm, managed_save,
+ bypass_cache);
- if ((ret == 0) && (unlink(managed_save) < 0))
- VIR_WARN("Failed to remove the managed state %s", managed_save);
+ if ((ret == 0) && (unlink(managed_save) < 0))
+ VIR_WARN("Failed to remove the managed state %s", managed_save);
- goto cleanup;
+ goto cleanup;
+ }
}
ret = qemuProcessStart(conn, driver, vm, NULL, start_paused,
@@ -4493,7 +4505,8 @@ qemuDomainStartWithFlags(virDomainPtr dom, unsigned int flags)
virCheckFlags(VIR_DOMAIN_START_PAUSED |
VIR_DOMAIN_START_AUTODESTROY |
- VIR_DOMAIN_START_BYPASS_CACHE, -1);
+ VIR_DOMAIN_START_BYPASS_CACHE |
+ VIR_DOMAIN_START_FORCE_BOOT, -1);
qemuDriverLock(driver);
vm = virDomainFindByUUID(&driver->domains, dom->uuid);
@@ -4515,10 +4528,7 @@ qemuDomainStartWithFlags(virDomainPtr dom, unsigned int flags)
goto endjob;
}
- if (qemuDomainObjStart(dom->conn, driver, vm,
- (flags & VIR_DOMAIN_START_PAUSED) != 0,
- (flags & VIR_DOMAIN_START_AUTODESTROY) != 0,
- (flags & VIR_DOMAIN_START_BYPASS_CACHE) != 0) < 0)
+ if (qemuDomainObjStart(dom->conn, driver, vm, flags) < 0)
goto endjob;
ret = 0;
diff --git a/tools/virsh.c b/tools/virsh.c
index 15b9bdd..49034ae 100644
--- a/tools/virsh.c
+++ b/tools/virsh.c
@@ -1537,9 +1537,12 @@ static const vshCmdOptDef opts_start[] = {
{"console", VSH_OT_BOOL, 0, N_("attach to console after creation")},
#endif
{"paused", VSH_OT_BOOL, 0, N_("leave the guest paused after creation")},
- {"autodestroy", VSH_OT_BOOL, 0, N_("automatically destroy the guest when virsh disconnects")},
+ {"autodestroy", VSH_OT_BOOL, 0,
+ N_("automatically destroy the guest when virsh disconnects")},
{"bypass-cache", VSH_OT_BOOL, 0,
N_("avoid file system cache when loading")},
+ {"force-boot", VSH_OT_BOOL, 0,
+ N_("force fresh boot by discarding any managed save")},
{NULL, 0, 0, NULL}
};
@@ -1572,6 +1575,8 @@ cmdStart(vshControl *ctl, const vshCmd *cmd)
flags |= VIR_DOMAIN_START_AUTODESTROY;
if (vshCommandOptBool(cmd, "bypass-cache"))
flags |= VIR_DOMAIN_START_BYPASS_CACHE;
+ if (vshCommandOptBool(cmd, "force-boot"))
+ flags |= VIR_DOMAIN_START_FORCE_BOOT;
/* Prefer older API unless we have to pass a flag. */
if ((flags ? virDomainCreateWithFlags(dom, flags)
diff --git a/tools/virsh.pod b/tools/virsh.pod
index 81d7a1e..2cd0f73 100644
--- a/tools/virsh.pod
+++ b/tools/virsh.pod
@@ -890,7 +890,7 @@ The exact behavior of a domain when it shuts down is set by the
I<on_shutdown> parameter in the domain's XML definition.
=item B<start> I<domain-name> [I<--console>] [I<--paused>] [I<--autodestroy>]
-[I<--bypass-cache>]
+[I<--bypass-cache>] [I<--force-boot>]
Start a (previously defined) inactive domain, either from the last
B<managedsave> state, or via a fresh boot if no managedsave state is
@@ -901,7 +901,8 @@ If I<--autodestroy> is requested, then the guest will be automatically
destroyed when virsh closes its connection to libvirt, or otherwise
exits. If I<--bypass-cache> is specified, and managedsave state exists,
the restore will avoid the file system cache, although this may slow
-down the operation.
+down the operation. If I<--force-boot> is specified, then any
+managedsave state is discarded and a fresh boot occurs.
=item B<suspend> I<domain-id>
--
1.7.4.4
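The managed-save handling in qemuDomainObjStart above boils down to a small decision tree. A Python sketch of that flow, where restore() and boot() are stand-ins for qemuDomainObjRestore() and qemuProcessStart() (names and structure are illustrative only):

```python
import os

def start_domain(managed_save_path, force_boot, restore, boot):
    """Mirror the qemuDomainObjStart flow from the patch above (sketch).

    If a managed save file exists: FORCE_BOOT discards it and boots
    fresh; otherwise the domain is restored from it and the file is
    removed on success.  With no save file, it is a plain boot.
    """
    if os.path.exists(managed_save_path):
        if force_boot:
            os.unlink(managed_save_path)      # discard the managed save
        else:
            ret = restore(managed_save_path)  # resume from managed save
            if ret == 0:
                os.unlink(managed_save_path)
            return ret
    return boot()                             # fresh boot
```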
[libvirt] [PATCH] reserve slot 1 on pci bus0
by Wen Congyang
After adding support for multi-function PCI devices, we only reserve function 1
on slot 1. The user could then use the other functions on slot 1 in the XML
config file. We should detect this wrong usage.
---
src/qemu/qemu_command.c | 12 +++++++++---
1 files changed, 9 insertions(+), 3 deletions(-)
diff --git a/src/qemu/qemu_command.c b/src/qemu/qemu_command.c
index 287ad8d..4b5734c 100644
--- a/src/qemu/qemu_command.c
+++ b/src/qemu/qemu_command.c
@@ -1072,6 +1072,7 @@ qemuAssignDevicePCISlots(virDomainDefPtr def, qemuDomainPCIAddressSetPtr addrs)
int i;
bool reservedIDE = false;
bool reservedVGA = false;
+ int function;
/* Host bridge */
if (qemuDomainPCIAddressReserveSlot(addrs, 0) < 0)
@@ -1107,9 +1108,14 @@ qemuAssignDevicePCISlots(virDomainDefPtr def, qemuDomainPCIAddressSetPtr addrs)
/* PIIX3 (ISA bridge, IDE controller, something else unknown, USB controller)
* hardcoded slot=1, multifunction device
*/
- if (!reservedIDE &&
- qemuDomainPCIAddressReserveSlot(addrs, 1) < 0)
- goto error;
+ for (function = 0; function <= QEMU_PCI_ADDRESS_LAST_FUNCTION; function++) {
+ if (function == 1 && reservedIDE)
+ /* we have reserved this pci address */
+ continue;
+
+ if (qemuDomainPCIAddressReserveFunction(addrs, 1, function) < 0)
+ goto error;
+ }
/* First VGA is hardcoded slot=2 */
if (def->nvideos > 0) {
--
1.7.1
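The effect of the loop in the patch is that every function of slot 1 becomes reserved, so a user-assigned device on that slot now collides explicitly instead of silently overlapping the PIIX3. A toy model of the reservation bookkeeping (purely illustrative; the real qemuDomainPCIAddressSet is more involved):

```python
class PCIAddressSet:
    """Toy model of per-function PCI address reservation."""
    FUNCTIONS_PER_SLOT = 8  # PCI defines functions 0-7 per slot

    def __init__(self):
        self.reserved = set()   # set of (slot, function) pairs

    def reserve_function(self, slot, function):
        if (slot, function) in self.reserved:
            raise ValueError("PCI slot %d function %d already in use"
                             % (slot, function))
        self.reserved.add((slot, function))

    def reserve_slot(self, slot):
        # Like the patch: claim every function of the slot, so a
        # user-placed device on any function of it is detected.
        for function in range(self.FUNCTIONS_PER_SLOT):
            self.reserve_function(slot, function)
```

With user addresses reserved first (as libvirt does), reserving the whole of slot 1 afterwards raises on any conflicting function, which is exactly the wrong usage the patch wants to detect.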
[libvirt] [PATCH] virsh: avoid memory leak on cmdVolCreateAs
by ajia@redhat.com
* tools/virsh.c: fix a memory leak in the cmdVolCreateAs function.
* Detected in valgrind run:
==4746==
==4746== 48 (40 direct, 8 indirect) bytes in 1 blocks are definitely lost in loss record 26 of 52
==4746== at 0x4A04A28: calloc (vg_replace_malloc.c:467)
==4746== by 0x4C76E51: virAlloc (memory.c:101)
==4746== by 0x4CD9418: virGetStoragePool (datatypes.c:592)
==4746== by 0x4D21367: remoteStoragePoolLookupByName (remote_driver.c:4126)
==4746== by 0x4CE42B0: virStoragePoolLookupByName (libvirt.c:10232)
==4746== by 0x40C276: vshCommandOptPoolBy (virsh.c:13660)
==4746== by 0x40CA37: cmdVolCreateAs (virsh.c:8094)
==4746== by 0x412AF2: vshCommandRun (virsh.c:13770)
==4746== by 0x422F11: main (virsh.c:15127)
==4746==
==4746== 1,011 bytes in 1 blocks are definitely lost in loss record 45 of 52
==4746== at 0x4A05FDE: malloc (vg_replace_malloc.c:236)
==4746== by 0x4A06167: realloc (vg_replace_malloc.c:525)
==4746== by 0x4C76ECB: virReallocN (memory.c:161)
==4746== by 0x4C60319: virBufferGrow (buf.c:72)
==4746== by 0x4C606AA: virBufferAdd (buf.c:106)
==4746== by 0x40CB37: cmdVolCreateAs (virsh.c:8118)
==4746== by 0x412AF2: vshCommandRun (virsh.c:13770)
==4746== by 0x422F11: main (virsh.c:15127)
==4746==
==4746== LEAK SUMMARY:
==4746== definitely lost: 1,051 bytes in 2 blocks
==4746== indirectly lost: 8 bytes in 1 blocks
==4746== possibly lost: 0 bytes in 0 blocks
==4746== still reachable: 390,767 bytes in 1,373 blocks
==4746== suppressed: 0 bytes in 0 blocks
* How to reproduce?
% valgrind -v --leak-check=full virsh vol-create-as default foo.img 10M \
--allocation 0 --format qcow2 --backing-vol bar.img
Note: bar.img does not exist.
Signed-off-by: Alex Jia <ajia@redhat.com>
---
tools/virsh.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/tools/virsh.c b/tools/virsh.c
index f9bcd2c..44c2f1c 100644
--- a/tools/virsh.c
+++ b/tools/virsh.c
@@ -8166,7 +8166,7 @@ cmdVolCreateAs(vshControl *ctl, const vshCmd *cmd)
}
if (snapVol == NULL) {
vshError(ctl, _("failed to get vol '%s'"), snapshotStrVol);
- return false;
+ goto cleanup;
}
char *snapshotStrVolPath;
--
1.7.1
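The bug class here, an early `return` that skips the function's shared teardown, is what C's `goto cleanup` idiom prevents. In Python the same shape is a try/finally; a sketch by analogy (not libvirt code, names invented for the example):

```python
def create_volume(get_pool, get_snapshot_vol):
    """Analogy to cmdVolCreateAs: every exit must pass through cleanup.

    get_pool() stands in for vshCommandOptPoolBy(); returning early on a
    missing snapshot volume must still release the pool, which is the
    point of the C patch's 'return false' -> 'goto cleanup' change.
    """
    pool = get_pool()
    try:
        snap = get_snapshot_vol()
        if snap is None:
            return False      # early exit still runs the finally block
        return True
    finally:
        pool.close()          # like the C 'cleanup:' label freeing pool
```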
[libvirt] [PATCH] lxc: do not require 'ifconfig' or 'ipconfig' in container
by Scott Moser
Currently, the lxc implementation invokes 'ip' and 'ifconfig' commands
inside a container using 'virRun'. That has the side effect of requiring
those commands to be present and to function in a manner consistent with
the usage. Some small roots (such as ttylinux) may not have 'ip' or
'ifconfig'.
This patch replaces the use of these commands with usage of
netdevice. The result is that lxc containers do not have to implement
those commands, and lxc in libvirt is only dependent on the netdevice
interface.
I've tested this patch locally against the Ubuntu libvirt version enough
to verify it's generally sane. I attempted to build upstream today, but
failed with:
/usr/bin/ld:
../src/.libs/libvirt_driver_qemu.a(libvirt_driver_qemu_la-qemu_domain.o):
undefined reference to symbol 'xmlXPathRegisterNs@@LIBXML2_2.4.30'
That's probably a local issue only, but I wanted to get this patch up and
see what others thought of it. This is Ubuntu bug
https://bugs.launchpad.net/ubuntu/+source/libvirt/+bug/828211 .
diff --git a/src/lxc/veth.c b/src/lxc/veth.c
index 34cb804..c24df91 100644
--- a/src/lxc/veth.c
+++ b/src/lxc/veth.c
@@ -12,8 +12,11 @@
#include <config.h>
+#include <linux/sockios.h>
+#include <net/if.h>
#include <string.h>
#include <stdio.h>
+#include <sys/ioctl.h>
#include <sys/types.h>
#include <sys/wait.h>
@@ -186,41 +189,49 @@ int vethDelete(const char *veth)
* @veth: name of veth device
* @upOrDown: 0 => down, 1 => up
*
- * Enables a veth device using the ifconfig command. A NULL inetAddress
- * will cause it to be left off the command line.
+ * Enables a veth device using SIOCSIFFLAGS
*
- * Returns 0 on success or -1 in case of error
+ * Returns 0 on success, -1 on failure, with errno set
*/
int vethInterfaceUpOrDown(const char* veth, int upOrDown)
{
- int rc;
- const char *argv[] = {"ifconfig", veth, NULL, NULL};
- int cmdResult = 0;
+ struct ifreq ifr;
+ int fd, ret;
- if (0 == upOrDown)
- argv[2] = "down";
- else
- argv[2] = "up";
+ if ((fd = socket(PF_PACKET, SOCK_DGRAM, 0)) == -1)
+ return(-1);
- rc = virRun(argv, &cmdResult);
+ memset(&ifr, 0, sizeof(struct ifreq));
- if (rc != 0 ||
- (WIFEXITED(cmdResult) && WEXITSTATUS(cmdResult) != 0)) {
- if (0 == upOrDown)
+ if (virStrcpyStatic(ifr.ifr_name, veth) == NULL) {
+ errno = EINVAL;
+ return -1;
+ }
+
+ if ((ret = ioctl(fd, SIOCGIFFLAGS, &ifr)) == 0) {
+ if (upOrDown)
+ ifr.ifr_flags |= IFF_UP;
+ else
+ ifr.ifr_flags &= ~(IFF_UP | IFF_RUNNING);
+
+ ret = ioctl(fd, SIOCSIFFLAGS, &ifr);
+ }
+
+ close(fd);
+ if (ret == -1)
+ if (upOrDown == 0)
/*
* Prevent overwriting an error log which may be set
* where an actual failure occurs.
*/
- VIR_DEBUG("Failed to disable '%s' (%d)",
- veth, WEXITSTATUS(cmdResult));
+ VIR_DEBUG("Failed to disable '%s'", veth);
else
vethError(VIR_ERR_INTERNAL_ERROR,
- _("Failed to enable '%s' (%d)"),
- veth, WEXITSTATUS(cmdResult));
- rc = -1;
- }
+ _("Failed to enable '%s'"), veth);
+ else
+ ret = 0;
- return rc;
+ return(ret);
}
/**
@@ -279,17 +290,29 @@ int setMacAddr(const char* iface, const char* macaddr)
* @iface: name of device
* @new: new name of @iface
*
- * Changes the name of the given device with the
- * given new name using this command:
- * ip link set @iface name @new
+ * Changes the name of the given device.
*
- * Returns 0 on success or -1 in case of error
+ * Returns 0 on success, -1 on failure with errno set.
*/
int setInterfaceName(const char* iface, const char* new)
{
- const char *argv[] = {
- "ip", "link", "set", iface, "name", new, NULL
- };
+ struct ifreq ifr;
+ int fd = socket(PF_PACKET, SOCK_DGRAM, 0);
- return virRun(argv, NULL);
+ memset(&ifr, 0, sizeof(struct ifreq));
+
+ if (virStrcpyStatic(ifr.ifr_name, iface) == NULL) {
+ errno = EINVAL;
+ return -1;
+ }
+
+ if (virStrcpyStatic(ifr.ifr_newname, new) == NULL) {
+ errno = EINVAL;
+ return -1;
+ }
+
+ if (ioctl(fd, SIOCSIFNAME, &ifr))
+ return -1;
+
+ return 0;
}
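The same netdevice interface the patch uses can be exercised from Python, which makes the ifreq layout concrete. A sketch assuming the x86-64 Linux layout (16-byte name plus 24-byte union, ifr_flags a short at offset 16); the ioctl itself needs CAP_NET_ADMIN and a real interface, so only the struct packing is exercised here:

```python
import fcntl
import socket
import struct

IFNAMSIZ = 16            # from <net/if.h>
SIOCGIFFLAGS = 0x8913    # from <linux/sockios.h>
SIOCSIFFLAGS = 0x8914
IFF_UP = 0x1

def pack_ifreq(ifname):
    """Build a zeroed 40-byte struct ifreq (assumed x86-64 layout) with
    the interface name filled in, mirroring the memset+virStrcpyStatic
    steps in the C patch."""
    name = ifname.encode()
    if len(name) >= IFNAMSIZ:
        raise ValueError("interface name too long")
    return struct.pack("16s24x", name)

def set_link_up(ifname):
    """Rough equivalent of vethInterfaceUpOrDown(veth, 1).

    Requires CAP_NET_ADMIN; shown for illustration, not run in tests.
    """
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        ifr = fcntl.ioctl(s, SIOCGIFFLAGS, pack_ifreq(ifname))
        flags = struct.unpack_from("h", ifr, IFNAMSIZ)[0] | IFF_UP
        fcntl.ioctl(s, SIOCSIFFLAGS,
                    struct.pack("16sh22x", ifname.encode(), flags))
```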