[libvirt] Supporting vhost-net and macvtap in libvirt for QEMU
by Anthony Liguori
Disclaimer: I am neither an SR-IOV nor a vhost-net expert, but I've CC'd
people that are, who can throw tomatoes at me for getting bits wrong :-)
I wanted to start a discussion about supporting vhost-net in libvirt.
vhost-net has not yet been merged into qemu but I expect it will be soon
so it's a good time to start this discussion.
There are two modes worth supporting for vhost-net in libvirt. The
first mode is where vhost-net backs to a tun/tap device. This behaves
in very much the same way that -net tap behaves in qemu today.
Basically, the difference is that the virtio backend is in the kernel
instead of in qemu so there should be some performance improvement.
Currently, libvirt invokes qemu with -net tap,fd=X where X is an already
open fd to a tun/tap device. I suspect that after we merge vhost-net,
libvirt could support vhost-net in this mode by just doing -net
vhost,fd=X. I think the only real question for libvirt is whether to
provide a user-visible switch to use vhost or to just always use vhost
when it's available and it makes sense. Personally, I think the latter
makes sense.
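To make that concrete, the qemu invocation would change roughly like this
(the vhost syntax here is just my guess, since nothing is merged yet):

  # what libvirt effectively does today
  qemu -net nic,model=virtio -net tap,fd=X

  # hypothetical vhost-net equivalent; final syntax TBD
  qemu -net nic,model=virtio -net vhost,fd=X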
The more interesting invocation of vhost-net though is one where the
vhost-net device backs directly to a physical network card. In this
mode, vhost should get considerably better performance than the current
implementation. I don't know the syntax yet, but I think it's
reasonable to assume that it will look something like -net
tap,dev=eth0. The effect will be that eth0 is dedicated to the guest.
On most modern systems, there are only a small number of network devices,
so this model is not all that useful except when dealing with SR-IOV
adapters. In that case, each physical device can be exposed as many
virtual functions (VFs). There are a few restrictions here, though. The
biggest is that currently you can only change the number of VFs by
reloading a kernel module, so it's really a parameter that must be set at
startup time.
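For illustration, with some of today's drivers the VF count is a module
parameter, so changing it looks roughly like this (driver and parameter
names vary by vendor; igb/max_vfs is just one example):

  rmmod igb
  modprobe igb max_vfs=7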
I think there are a few ways libvirt could support vhost-net in this
second mode. The simplest would be to introduce a new tag similar to
<source network='br0'>. In fact, if you probed the device type for the
network parameter, you could probably do something like <source
network='eth0'> and have it Just Work.
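Concretely, the guest XML might end up looking something like this (purely
illustrative; the exact element and attribute names are up for discussion):

  <interface type='network'>
    <source network='eth0'/>
    <model type='virtio'/>
  </interface>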
Another model would be to have libvirt see an SR-IOV adapter as a
network pool, with libvirt handling all of the VF management. Considering
how inflexible SR-IOV is today, I'm not sure whether this is the best model.
Has anyone put any more thought into this problem or how this should be
modeled in libvirt? Michael, could you share your current thinking for
-net syntax?
--
Regards,
Anthony Liguori
[libvirt] Libvirt multi queue support
by Naor Shlomo
Hello experts,
Could anyone please tell me if multi-queue is fully supported in Libvirt and, if so, which version contains it?
Thanks,
Naor
[libvirt] [PATCH] qemu: Fix seamless SPICE migration
by Martin Kletzander
Since the wait is done during migration (still inside
QEMU_ASYNC_JOB_MIGRATION_OUT), the code should enter the monitor as such
in order to prohibit all other jobs from interfering in the meantime.
This patch fixes bug #1009886, in which qemuDomainGetBlockInfo was
waiting on the monitor condition; after GetSpiceMigrationStatus
mangled its internal data, the daemon crashed.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1009886
Signed-off-by: Martin Kletzander <mkletzan(a)redhat.com>
---
src/qemu/qemu_migration.c | 5 ++++-
1 file changed, 4 insertions(+), 1 deletion(-)
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index d7b89fc..3a1aab7 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -1595,7 +1595,10 @@ qemuMigrationWaitForSpice(virQEMUDriverPtr driver,
     /* Poll every 50ms for progress & to allow cancellation */
     struct timespec ts = { .tv_sec = 0, .tv_nsec = 50 * 1000 * 1000ull };
 
-    qemuDomainObjEnterMonitor(driver, vm);
+    if (qemuDomainObjEnterMonitorAsync(driver, vm,
+                                       QEMU_ASYNC_JOB_MIGRATION_OUT) < 0)
+        return -1;
+
     if (qemuMonitorGetSpiceMigrationStatus(priv->mon,
                                            &spice_migrated) < 0) {
         qemuDomainObjExitMonitor(driver, vm);
--
1.8.3.2
[libvirt] [PATCH] qemu: add PCI-multibus support for ppc
by Olivia Yin
Signed-off-by: Olivia Yin <hong-hua.yin(a)freescale.com>
---
src/qemu/qemu_capabilities.c | 10 ++++++++++
1 files changed, 10 insertions(+), 0 deletions(-)
diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c
index 7bc1ebc..7d7791d 100644
--- a/src/qemu/qemu_capabilities.c
+++ b/src/qemu/qemu_capabilities.c
@@ -2209,6 +2209,11 @@ virQEMUCapsInitHelp(virQEMUCapsPtr qemuCaps, uid_t runUid, gid_t runGid)
         virQEMUCapsClear(qemuCaps, QEMU_CAPS_NO_ACPI);
     }
 
+    /* ppc support PCI-multibus */
+    if (qemuCaps->arch == VIR_ARCH_PPC) {
+        virQEMUCapsSet(qemuCaps, QEMU_CAPS_PCI_MULTIBUS);
+    }
+
     /* virQEMUCapsExtractDeviceStr will only set additional caps if qemu
      * understands the 0.13.0+ notion of "-device driver,". */
     if (virQEMUCapsGet(qemuCaps, QEMU_CAPS_DEVICE) &&
@@ -2450,6 +2455,11 @@ virQEMUCapsInitQMP(virQEMUCapsPtr qemuCaps,
         virQEMUCapsSet(qemuCaps, QEMU_CAPS_NO_ACPI);
     }
 
+    /* ppc support PCI-multibus */
+    if (qemuCaps->arch == VIR_ARCH_PPC) {
+        virQEMUCapsSet(qemuCaps, QEMU_CAPS_PCI_MULTIBUS);
+    }
+
     if (virQEMUCapsProbeQMPCommands(qemuCaps, mon) < 0)
         goto cleanup;
     if (virQEMUCapsProbeQMPEvents(qemuCaps, mon) < 0)
--
1.6.4
[libvirt] Add patches to allow users to join running containers.
by dwalsh@redhat.com
[PATCH 1/2] Add virGetUserDirectoryByUID to retrieve users homedir
[PATCH 2/2] virt-login-shell joins users into lxc container.
This patch implements most of the changes suggested by Dan Berrange and
Eric Blake.
Some replies to suggested changes.
Removed the mingw-libvirt.spec.in changes since virt lxc probably cannot be
supported on Windows. Not sure if I need to make changes so my code will not
build on that platform.
Did not make the changes to install virt-login-shell as 4755 automatically.
I guess I want a firmer 'make that change' request...
I did not make a helper function to parse a list of strings out of the conf file.
The getuid and getgid calls return the user that executed the program; when the app is setuid, geteuid and getegid return 0. I believe getuid and getgid are correct.
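As a minimal sketch of the standard setuid semantics I'm relying on
(illustration only, not code from the patch):

  #include <stdio.h>
  #include <unistd.h>

  /* In a setuid binary, getuid()/getgid() report the invoking (real) user,
   * while geteuid()/getegid() report the effective IDs, i.e. 0 for setuid root. */
  int main(void)
  {
      printf("real      uid=%d gid=%d\n", (int)getuid(), (int)getgid());
      printf("effective uid=%d gid=%d\n", (int)geteuid(), (int)getegid());
      return 0;
  }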
Added virt-login-shell --help; I'm not sure what --program would do.
The program is hard-coded to LXC because there is no way that I know of for a
process to join a running qemu instance.
I have heard back from one security reviewer, Miloslav Trmac, who had similar comments to Eric's.
[libvirt] JNA Error Callback could cause core dump.
by Benjamin Wang (gendwang)
Hi,
When I changed the code as follows:
public class Connect {
    // Load the native part
    static {
        Libvirt.INSTANCE.virInitialize();
        try {
            ErrorHandler.processError(Libvirt.INSTANCE);
        } catch (Exception e) {
            e.printStackTrace();
        }
+       Libvirt.INSTANCE.virSetErrorFunc(null, new ErrorCallback());
    }
The server will generate the following core dump:
Program terminated with signal 6, Aborted.
#0 0x0000003f9b030265 in raise () from /lib64/libc.so.6
(gdb) where
#0 0x0000003f9b030265 in raise () from /lib64/libc.so.6
#1 0x0000003f9b031d10 in abort () from /lib64/libc.so.6
#2 0x0000003f9b06a84b in __libc_message () from /lib64/libc.so.6
#3 0x0000003f9b07230f in _int_free () from /lib64/libc.so.6
#4 0x0000003f9b07276b in free () from /lib64/libc.so.6
#5 0x00002aaaacf46868 in ?? ()
#6 0x0000000000000000 in ?? ()
The problem is that when JNA calls setErrorFunc, it creates an ErrorCallback object, but when GC runs, that object is collected. Even if I change the code as follows,
the callback object can be moved when GC runs, and then the C side can no longer find it. Both scenarios cause a core dump. It seems that JNA should not provide the ErrorCallback class,
because nobody can use it safely.
Please correct me.
public class Connect {
+   private static final ErrorCallback callback = new ErrorCallback();
    // Load the native part
    static {
        Libvirt.INSTANCE.virInitialize();
        try {
            ErrorHandler.processError(Libvirt.INSTANCE);
        } catch (Exception e) {
            e.printStackTrace();
        }
+       Libvirt.INSTANCE.virSetErrorFunc(null, callback);
    }
B.R.
Benjamin Wang
[libvirt] [PATCH 0/4] Support for integrating cgroups with systemd
by Daniel P. Berrange
From: "Daniel P. Berrange" <berrange(a)redhat.com>
This is a much changed / expanded version of my previous work to
create cgroups via systemd. The difference is that this time it
actually works :-)
I'm not proposing this for merge until after the 1.1.1 release.
Daniel P. Berrange (4):
Add APIs for formatting systemd slice/scope names
Add support for systemd cgroup mount
Cope with races while killing processes
Enable support for systemd-machined in cgroups creation
src/libvirt_private.syms | 2 +
src/lxc/lxc_process.c | 10 +-
src/qemu/qemu_cgroup.c | 1 +
src/util/vircgroup.c | 270 +++++++++++++++++++++++++++++++++++++++++------
src/util/vircgroup.h | 2 +
src/util/virsystemd.c | 96 ++++++++++++++++-
src/util/virsystemd.h | 5 +
tests/vircgrouptest.c | 9 ++
tests/virsystemdtest.c | 48 +++++++++
9 files changed, 403 insertions(+), 40 deletions(-)
--
1.8.1.4