[libvirt] Supporting vhost-net and macvtap in libvirt for QEMU
by Anthony Liguori
Disclaimer: I am neither an SR-IOV nor a vhost-net expert, but I've CC'd
people who are, so they can throw tomatoes at me for getting bits wrong :-)
I wanted to start a discussion about supporting vhost-net in libvirt.
vhost-net has not yet been merged into qemu but I expect it will be soon
so it's a good time to start this discussion.
There are two modes worth supporting for vhost-net in libvirt. The
first mode is where vhost-net backs to a tun/tap device. This behaves
in very much the same way that -net tap behaves in qemu today.
Basically, the difference is that the virtio backend is in the kernel
instead of in qemu so there should be some performance improvement.
Currently, libvirt invokes qemu with -net tap,fd=X where X is an already
open fd to a tun/tap device. I suspect that after we merge vhost-net,
libvirt could support vhost-net in this mode by just doing -net
vhost,fd=X. I think the only real question for libvirt is whether to
provide a user visible switch to use vhost or to just always use vhost
when it's available and it makes sense. Personally, I think the latter
makes sense.
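For illustration, the difference on the qemu command line might look
roughly like this (the vhost option name and syntax are only my
assumption, since the final syntax hasn't been settled, and fd 42 is
just a placeholder):

  # today: libvirt hands qemu an already-open tap fd
  qemu ... -net nic,model=virtio -net tap,fd=42

  # hypothetical vhost-net variant: same fd, virtio backend in the kernel
  qemu ... -net nic,model=virtio -net vhost,fd=42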
The more interesting invocation of vhost-net though is one where the
vhost-net device backs directly to a physical network card. In this
mode, vhost should get considerably better performance than the current
implementation. I don't know the syntax yet, but I think it's
reasonable to assume that it will look something like -net
tap,dev=eth0. The effect will be that eth0 is dedicated to the guest.
On most modern systems, there is a small number of network devices so
this model is not all that useful except when dealing with SR-IOV
adapters. In that case, each physical device can expose a number of
virtual functions (VFs). There are a few restrictions here, though. The
biggest is that currently, you can only change the number of VFs by
reloading a kernel module so it's really a parameter that must be set at
startup time.
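As a concrete example (the driver and module parameter vary per NIC;
ixgbe's max_vfs is just one case), changing the VF count today means
something like:

  # rmmod ixgbe
  # modprobe ixgbe max_vfs=7

so the number of VFs is effectively fixed for the lifetime of the driver.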
I think there are a few ways libvirt could support vhost-net in this
second mode. The simplest would be to introduce a new tag similar to
<source network='br0'>. In fact, if you probed the device type for the
network parameter, you could probably do something like <source
network='eth0'> and have it Just Work.
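Purely as a sketch, reusing the existing interface XML and only adding
the network='eth0' usage proposed above:

  <interface type='network'>
    <source network='eth0'/>
    <model type='virtio'/>
  </interface>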
Another model would be to have libvirt see an SR-IOV adapter as a
network pool, with libvirt handling all of the VF management. Considering
how inflexible SR-IOV is today, I'm not sure whether this is the best model.
Has anyone put any more thought into this problem or how this should be
modeled in libvirt? Michael, could you share your current thinking for
-net syntax?
--
Regards,
Anthony Liguori
[libvirt] Libvirt multi queue support
by Naor Shlomo
Hello experts,
Could anyone please tell me whether multi-queue is fully supported in libvirt and, if so, which version contains it?
Thanks,
Naor
[libvirt] [libvirt-python PATCH] setup: Make libvirt API XML path configurable
by Martin Kletzander
Adding support for the LIBVIRT_API_PATH environment variable, which can
control where the script should look for the 'libvirt-api.xml' file.
This allows building libvirt-python against a different libvirt than the
one installed in the system. This may be used for example in autotest
or by packagers without the need to install libvirt into the system.
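For example, a hypothetical invocation (the path is made up; it only
needs to point at the libvirt-api.xml of the libvirt build you want to
target):

  $ LIBVIRT_API_PATH=/path/to/libvirt/docs/libvirt-api.xml python setup.py build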
Signed-off-by: Martin Kletzander <mkletzan(a)redhat.com>
---
setup.py | 12 +++++++++++-
1 file changed, 11 insertions(+), 1 deletion(-)
diff --git a/setup.py b/setup.py
index 17b4722..566c210 100755
--- a/setup.py
+++ b/setup.py
@@ -109,7 +109,17 @@ class my_build(build):
"""Check with pkg-config that libvirt is present and extract
the API XML file paths we need from it"""
- libvirt_api = get_pkgconfig_data(["--variable", "libvirt_api"], "libvirt")
+ libvirt_api = os.getenv("LIBVIRT_API_PATH")
+
+ if libvirt_api:
+ if not libvirt_api.endswith("-api.xml"):
+ raise ValueError("Invalid path '%s' for API XML" % libvirt_api)
+ if not os.path.exists(libvirt_api):
+ raise ValueError("API XML '%s' does not exist, "
+ "have you built libvirt?" % libvirt_api)
+ else:
+ libvirt_api = get_pkgconfig_data(["--variable", "libvirt_api"],
+ "libvirt")
offset = libvirt_api.index("-api.xml")
libvirt_qemu_api = libvirt_api[0:offset] + "-qemu-api.xml"
--
1.8.4.3
[libvirt] [PATCH] qemu: Fix seamless SPICE migration
by Martin Kletzander
Since the wait is done during migration (still inside
QEMU_ASYNC_JOB_MIGRATION_OUT), the code should enter the monitor as such
in order to prohibit all other jobs from interfering in the meantime.
This patch fixes bug #1009886, in which qemuDomainGetBlockInfo was
waiting on the monitor condition; after GetSpiceMigrationStatus
mangled its internal data, the daemon crashed.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1009886
Signed-off-by: Martin Kletzander <mkletzan(a)redhat.com>
---
src/qemu/qemu_migration.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index d7b89fc..3a1aab7 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -1595,7 +1595,10 @@ qemuMigrationWaitForSpice(virQEMUDriverPtr driver,
/* Poll every 50ms for progress & to allow cancellation */
struct timespec ts = { .tv_sec = 0, .tv_nsec = 50 * 1000 * 1000ull };
- qemuDomainObjEnterMonitor(driver, vm);
+ if (qemuDomainObjEnterMonitorAsync(driver, vm,
+ QEMU_ASYNC_JOB_MIGRATION_OUT) < 0)
+ return -1;
+
if (qemuMonitorGetSpiceMigrationStatus(priv->mon,
&spice_migrated) < 0) {
qemuDomainObjExitMonitor(driver, vm);
--
1.8.3.2
[libvirt] [PATCH] qemu: add PCI-multibus support for ppc
by Olivia Yin
Signed-off-by: Olivia Yin <hong-hua.yin(a)freescale.com>
---
src/qemu/qemu_capabilities.c | 10 ++++++++++
1 files changed, 10 insertions(+), 0 deletions(-)
diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index 7bc1ebc..7d7791d 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -2209,6 +2209,11 @@ virQEMUCapsInitHelp(virQEMUCapsPtr qemuCaps, uid_t runUid, gid_t runGid)
virQEMUCapsClear(qemuCaps, QEMU_CAPS_NO_ACPI);
}
+ /* ppc supports PCI-multibus */
+ if (qemuCaps->arch == VIR_ARCH_PPC) {
+ virQEMUCapsSet(qemuCaps, QEMU_CAPS_PCI_MULTIBUS);
+ }
+
/* virQEMUCapsExtractDeviceStr will only set additional caps if qemu
* understands the 0.13.0+ notion of "-device driver,". */
if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_DEVICE) &&
@@ -2450,6 +2455,11 @@ virQEMUCapsInitQMP(virQEMUCapsPtr qemuCaps,
virQEMUCapsSet(qemuCaps, QEMU_CAPS_NO_ACPI);
}
+ /* ppc supports PCI-multibus */
+ if (qemuCaps->arch == VIR_ARCH_PPC) {
+ virQEMUCapsSet(qemuCaps, QEMU_CAPS_PCI_MULTIBUS);
+ }
+
if (virQEMUCapsProbeQMPCommands(qemuCaps, mon) < 0)
goto cleanup;
if (virQEMUCapsProbeQMPEvents(qemuCaps, mon) < 0)
--
1.6.4
[libvirt] [PATCH 0/4] Expose FSFreeze/FSThaw within the guest as commands
by Tomoki Sekiyama
Currently FSFreeze and FSThaw are supported by the qemu guest agent and are
used internally by the snapshot-create command with the --quiesce option.
However, when users want to utilize the native snapshot feature of storage
devices (such as LVM over iSCSI, various enterprise storage systems, etc.),
they need to issue the fsfreeze command separately from libvirt-driven snapshots.
(OpenStack Cinder exposes the snapshot feature of these storage systems, but it
cannot quiesce the guest filesystems automatically for now.)
Although the virDomainQemuGuestAgent() API could be used for this purpose, it
depends too much on the specific hypervisor implementation.
This patchset adds virDomainFSFreeze()/virDomainFSThaw() APIs and virsh
domfsfreeze/domfsthaw commands to enable the users to freeze and thaw
domain's filesystems cleanly.
The APIs take mountPoint and flags arguments, currently unsupported and
reserved for future extension, like the virDomainFSTrim() API.
A duplicated FSFreeze results in an error from the qemu guest agent.
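A rough usage sketch with the new virsh commands (the domain name and
the storage-side snapshot step are made up for illustration):

  $ virsh domfsfreeze mydomain
  ... take the storage/array-side snapshot here ...
  $ virsh domfsthaw mydomain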
---
Tomoki Sekiyama (4):
Introduce virDomainFSFreeze() public API
remote: Implement virDomainFSFreeze and virDomainFSThaw
qemu: Implement virDomainFSFreeze
virsh: Expose new virDomainFSFreeze and virDomainFSThaw API
include/libvirt/libvirt.h.in | 8 ++
src/access/viraccessperm.c | 2 -
src/access/viraccessperm.h | 6 ++
src/driver.h | 12 ++++
src/libvirt.c | 92 +++++++++++++++++++++++++++
src/libvirt_public.syms | 6 ++
src/qemu/qemu_driver.c | 142 ++++++++++++++++++++++++++++++++++++++++++
src/remote/remote_driver.c | 2 +
src/remote/remote_protocol.x | 26 +++++++-
src/remote_protocol-structs | 12 ++++
src/rpc/gendispatch.pl | 2 +
tools/virsh-domain.c | 108 ++++++++++++++++++++++++++++++++
tools/virsh.pod | 17 +++++
13 files changed, 433 insertions(+), 2 deletions(-)
[libvirt] [RFC PATCH] libvirt support to force convergence of live guest migration
by Chegu Vinod
Busy enterprise workloads hosted on large VMs tend to dirty
memory faster than the transfer rate achieved via live guest migration.
Despite some good recent improvements (and the use of dedicated 10GbE NICs
between hosts), the live migration may not converge.
Support was recently added in qemu (version 1.6) to let a user force
convergence of their migration via a new migration capability:
"auto-converge". This feature allows qemu to auto-detect a lack of
convergence and trigger a throttle-down of the VCPUs.
This RFC patch includes the libvirt support needed to trigger this
feature. (Testing is still in progress)
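As an illustration, the new flag would be requested from virsh roughly
like this (the domain name and destination URI are made up):

  $ virsh migrate --live --auto-converge mydomain qemu+ssh://desthost/system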
Signed-off-by: Chegu Vinod <chegu_vinod(a)hp.com>
---
include/libvirt/libvirt.h.in | 1 +
src/qemu/qemu_migration.c | 44 ++++++++++++++++++++++++++++++++++++++++++
src/qemu/qemu_migration.h | 1 +
src/qemu/qemu_monitor.c | 2 +-
src/qemu/qemu_monitor.h | 1 +
tools/virsh-domain.c | 7 ++++++
6 files changed, 55 insertions(+), 1 deletions(-)
diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index 146a59b..13b0bfc 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -1192,6 +1192,7 @@ typedef enum {
VIR_MIGRATE_OFFLINE = (1 << 10), /* offline migrate */
VIR_MIGRATE_COMPRESSED = (1 << 11), /* compress data during migration */
VIR_MIGRATE_ABORT_ON_ERROR = (1 << 12), /* abort migration on I/O errors happened during migration */
+ VIR_MIGRATE_AUTO_CONVERGE = (1 << 13), /* force auto-convergence during migration */
} virDomainMigrateFlags;
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index e87ea85..8cc0c56 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -1565,6 +1565,40 @@ cleanup:
}
static int
+qemuMigrationSetAutoConverge(virQEMUDriverPtr driver,
+ virDomainObjPtr vm,
+ enum qemuDomainAsyncJob job)
+{
+ qemuDomainObjPrivatePtr priv = vm->privateData;
+ int ret;
+
+ if (qemuDomainObjEnterMonitorAsync(driver, vm, job) < 0)
+ return -1;
+
+ ret = qemuMonitorGetMigrationCapability(
+ priv->mon,
+ QEMU_MONITOR_MIGRATION_CAPS_AUTO_CONVERGE);
+
+ if (ret < 0) {
+ goto cleanup;
+ } else if (ret == 0) {
+ virReportError(VIR_ERR_ARGUMENT_UNSUPPORTED, "%s",
+ _("Auto-Converge migration is not supported by "
+ "QEMU binary"));
+ ret = -1;
+ goto cleanup;
+ }
+
+ ret = qemuMonitorSetMigrationCapability(
+ priv->mon,
+ QEMU_MONITOR_MIGRATION_CAPS_AUTO_CONVERGE);
+
+cleanup:
+ qemuDomainObjExitMonitor(driver, vm);
+ return ret;
+}
+
+static int
qemuMigrationWaitForSpice(virQEMUDriverPtr driver,
virDomainObjPtr vm)
{
@@ -2389,6 +2423,11 @@ qemuMigrationPrepareAny(virQEMUDriverPtr driver,
QEMU_ASYNC_JOB_MIGRATION_IN) < 0)
goto stop;
+ if (flags & VIR_MIGRATE_AUTO_CONVERGE &&
+ qemuMigrationSetAutoConverge(driver, vm,
+ QEMU_ASYNC_JOB_MIGRATION_IN) < 0)
+ goto stop;
+
if (mig->lockState) {
VIR_DEBUG("Received lockstate %s", mig->lockState);
VIR_FREE(priv->lockState);
@@ -3181,6 +3220,11 @@ qemuMigrationRun(virQEMUDriverPtr driver,
QEMU_ASYNC_JOB_MIGRATION_OUT) < 0)
goto cleanup;
+ if (flags & VIR_MIGRATE_AUTO_CONVERGE &&
+ qemuMigrationSetAutoConverge(driver, vm,
+ QEMU_ASYNC_JOB_MIGRATION_OUT) < 0)
+ goto cleanup;
+
if (qemuDomainObjEnterMonitorAsync(driver, vm,
QEMU_ASYNC_JOB_MIGRATION_OUT) < 0)
goto cleanup;
diff --git a/src/qemu/qemu_migration.h b/src/qemu/qemu_migration.h
index cafa2a2..c4258a1 100644
--- a/src/qemu/qemu_migration.h
+++ b/src/qemu/qemu_migration.h
@@ -39,6 +39,7 @@
VIR_MIGRATE_UNSAFE | \
VIR_MIGRATE_OFFLINE | \
VIR_MIGRATE_COMPRESSED | \
+ VIR_MIGRATE_AUTO_CONVERGE | \
VIR_MIGRATE_ABORT_ON_ERROR)
/* All supported migration parameters and their types. */
diff --git a/src/qemu/qemu_monitor.c b/src/qemu/qemu_monitor.c
index 1514715..780a29a 100644
--- a/src/qemu/qemu_monitor.c
+++ b/src/qemu/qemu_monitor.c
@@ -118,7 +118,7 @@ VIR_ENUM_IMPL(qemuMonitorMigrationStatus,
VIR_ENUM_IMPL(qemuMonitorMigrationCaps,
QEMU_MONITOR_MIGRATION_CAPS_LAST,
- "xbzrle")
+ "xbzrle", "auto-converge")
VIR_ENUM_IMPL(qemuMonitorVMStatus,
QEMU_MONITOR_VM_STATUS_LAST,
diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h
index eabf000..95e70ab 100644
--- a/src/qemu/qemu_monitor.h
+++ b/src/qemu/qemu_monitor.h
@@ -440,6 +440,7 @@ int qemuMonitorGetSpiceMigrationStatus(qemuMonitorPtr mon,
typedef enum {
QEMU_MONITOR_MIGRATION_CAPS_XBZRLE,
+ QEMU_MONITOR_MIGRATION_CAPS_AUTO_CONVERGE,
QEMU_MONITOR_MIGRATION_CAPS_LAST
} qemuMonitorMigrationCaps;
diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c
index 1fe138c..d94a81b 100644
--- a/tools/virsh-domain.c
+++ b/tools/virsh-domain.c
@@ -8532,6 +8532,10 @@ static const vshCmdOptDef opts_migrate[] = {
.type = VSH_OT_BOOL,
.help = N_("compress repeated pages during live migration")
},
+ {.name = "auto-converge",
+ .type = VSH_OT_BOOL,
+ .help = N_("force auto convergence during live migration")
+ },
{.name = "abort-on-error",
.type = VSH_OT_BOOL,
.help = N_("abort on soft errors during migration")
@@ -8676,6 +8680,9 @@ doMigrate(void *opaque)
if (vshCommandOptBool(cmd, "compressed"))
flags |= VIR_MIGRATE_COMPRESSED;
+ if (vshCommandOptBool(cmd, "auto-converge"))
+ flags |= VIR_MIGRATE_AUTO_CONVERGE;
+
if (vshCommandOptBool(cmd, "offline")) {
flags |= VIR_MIGRATE_OFFLINE;
}
--
1.7.1