[libvirt] [PATCH] Implement LVM volume resize via lvresize
by apolyakov@beget.ru
From: Alexander Polyakov <apolyakov(a)beget.com>
Signed-off-by: Alexander Polyakov <apolyakov(a)beget.com>
---
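A quick usage sketch (hypothetical pool and volume names) showing how the
new backend callback gets exercised:

    # grow an LVM-backed volume to 20 GiB via the logical pool
    virsh vol-resize --pool lvm_pool guest_lv 20G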
m4/virt-storage-lvm.m4 | 4 ++++
src/storage/storage_backend_logical.c | 22 ++++++++++++++++++++++
2 files changed, 26 insertions(+)
diff --git a/m4/virt-storage-lvm.m4 b/m4/virt-storage-lvm.m4
index a0ccca7a0..0932995b4 100644
--- a/m4/virt-storage-lvm.m4
+++ b/m4/virt-storage-lvm.m4
@@ -30,6 +30,7 @@ AC_DEFUN([LIBVIRT_STORAGE_CHECK_LVM], [
AC_PATH_PROG([VGREMOVE], [vgremove], [], [$LIBVIRT_SBIN_PATH])
AC_PATH_PROG([LVREMOVE], [lvremove], [], [$LIBVIRT_SBIN_PATH])
AC_PATH_PROG([LVCHANGE], [lvchange], [], [$LIBVIRT_SBIN_PATH])
+ AC_PATH_PROG([LVRESIZE], [lvresize], [], [$LIBVIRT_SBIN_PATH])
AC_PATH_PROG([VGCHANGE], [vgchange], [], [$LIBVIRT_SBIN_PATH])
AC_PATH_PROG([VGSCAN], [vgscan], [], [$LIBVIRT_SBIN_PATH])
AC_PATH_PROG([PVS], [pvs], [], [$LIBVIRT_SBIN_PATH])
@@ -44,6 +45,7 @@ AC_DEFUN([LIBVIRT_STORAGE_CHECK_LVM], [
if test -z "$VGREMOVE" ; then AC_MSG_ERROR([We need vgremove for LVM storage driver]) ; fi
if test -z "$LVREMOVE" ; then AC_MSG_ERROR([We need lvremove for LVM storage driver]) ; fi
if test -z "$LVCHANGE" ; then AC_MSG_ERROR([We need lvchange for LVM storage driver]) ; fi
+ if test -z "$LVRESIZE" ; then AC_MSG_ERROR([We need lvresize for LVM storage driver]) ; fi
if test -z "$VGCHANGE" ; then AC_MSG_ERROR([We need vgchange for LVM storage driver]) ; fi
if test -z "$VGSCAN" ; then AC_MSG_ERROR([We need vgscan for LVM storage driver]) ; fi
if test -z "$PVS" ; then AC_MSG_ERROR([We need pvs for LVM storage driver]) ; fi
@@ -57,6 +59,7 @@ AC_DEFUN([LIBVIRT_STORAGE_CHECK_LVM], [
if test -z "$VGREMOVE" ; then with_storage_lvm=no ; fi
if test -z "$LVREMOVE" ; then with_storage_lvm=no ; fi
if test -z "$LVCHANGE" ; then with_storage_lvm=no ; fi
+ if test -z "$LVRESIZE" ; then with_storage_lvm=no ; fi
if test -z "$VGCHANGE" ; then with_storage_lvm=no ; fi
if test -z "$VGSCAN" ; then with_storage_lvm=no ; fi
if test -z "$PVS" ; then with_storage_lvm=no ; fi
@@ -75,6 +78,7 @@ AC_DEFUN([LIBVIRT_STORAGE_CHECK_LVM], [
AC_DEFINE_UNQUOTED([VGREMOVE],["$VGREMOVE"],[Location of vgremove program])
AC_DEFINE_UNQUOTED([LVREMOVE],["$LVREMOVE"],[Location of lvremove program])
AC_DEFINE_UNQUOTED([LVCHANGE],["$LVCHANGE"],[Location of lvchange program])
+ AC_DEFINE_UNQUOTED([LVRESIZE],["$LVRESIZE"],[Location of lvresize program])
AC_DEFINE_UNQUOTED([VGCHANGE],["$VGCHANGE"],[Location of vgchange program])
AC_DEFINE_UNQUOTED([VGSCAN],["$VGSCAN"],[Location of vgscan program])
AC_DEFINE_UNQUOTED([PVS],["$PVS"],[Location of pvs program])
diff --git a/src/storage/storage_backend_logical.c b/src/storage/storage_backend_logical.c
index 67f70e551..810e4ee88 100644
--- a/src/storage/storage_backend_logical.c
+++ b/src/storage/storage_backend_logical.c
@@ -1072,6 +1072,27 @@ virStorageBackendLogicalVolWipe(virConnectPtr conn,
return -1;
}
+static int
+virStorageBackendLogicalResizeVol(virConnectPtr conn ATTRIBUTE_UNUSED,
+                                  virStoragePoolObjPtr pool ATTRIBUTE_UNUSED,
+                                  virStorageVolDefPtr vol,
+                                  unsigned long long capacity,
+                                  unsigned int flags)
+{
+    virCommandPtr cmd = NULL;
+    int ret = -1;
+
+    virCheckFlags(0, -1);
+
+    /* Build "lvresize -L <capacity>K <path>", rounding capacity up to KiB */
+    cmd = virCommandNewArgList(LVRESIZE, "-L", NULL);
+    virCommandAddArgFormat(cmd, "%lluK", VIR_DIV_UP(capacity, 1024));
+    virCommandAddArg(cmd, vol->target.path);
+    ret = virCommandRun(cmd, NULL);
+
+    virCommandFree(cmd);
+    return ret;
+}
+
virStorageBackend virStorageBackendLogical = {
.type = VIR_STORAGE_POOL_LOGICAL,
@@ -1089,6 +1110,7 @@ virStorageBackend virStorageBackendLogical = {
.uploadVol = virStorageBackendVolUploadLocal,
.downloadVol = virStorageBackendVolDownloadLocal,
.wipeVol = virStorageBackendLogicalVolWipe,
+ .resizeVol = virStorageBackendLogicalResizeVol,
};
--
2.13.5
[libvirt] [PATCH] Add entry for ZStack to apps page
by Shuang He
Signed-off-by: Shuang He <shuang.he(a)zstack.io>
---
docs/apps.html.in | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/docs/apps.html.in b/docs/apps.html.in
index 1ced03c..06bf8a2 100644
--- a/docs/apps.html.in
+++ b/docs/apps.html.in
@@ -286,6 +286,17 @@
perfect for setting up low-end servers in a cloud or a
cloud where you want the most bang for the bucks.
</dd>
+
+ <dt><a href="http://en.zstack.io/">ZStack</a></dt>
+ <dd>
+        ZStack is open source IaaS software that aims to automate
+        datacenters, managing compute, storage, and networking
+        resources entirely through APIs. Users can set up a ZStack
+        environment in a download-and-run manner, spending 5 minutes
+        building a POC environment on a single Linux machine, or
+        30 minutes building a multi-node production environment that
+        can scale to hundreds of thousands of physical servers.
+ </dd>
</dl>
<h2><a id="libraries">Libraries</a></h2>
--
1.8.3.1
[libvirt] [PATCH] qemu: Forbid rx/tx_queue_size change explicitly
by Michal Privoznik
https://bugzilla.redhat.com/show_bug.cgi?id=1484230
When updating a virtio-enabled vNIC and trying to change either
rx_queue_size or tx_queue_size, success is reported although no
operation is actually performed. Moreover, there is no way to
change these on the fly. This is due to the way we check for
changes: explicitly for each struct member, which makes it easy
to miss one.
Signed-off-by: Michal Privoznik <mprivozn(a)redhat.com>
---
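For reference, the struct members being compared correspond to the
rx_queue_size/tx_queue_size attributes of the interface's virtio driver
element; a minimal sketch with hypothetical values:

    <interface type='network'>
      <source network='default'/>
      <model type='virtio'/>
      <driver name='vhost' queues='4' rx_queue_size='512' tx_queue_size='512'/>
    </interface>

With this patch, attempting to change either size on a live device fails
explicitly instead of being silently ignored.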
src/qemu/qemu_hotplug.c | 2 ++
1 file changed, 2 insertions(+)
diff --git a/src/qemu/qemu_hotplug.c b/src/qemu/qemu_hotplug.c
index 4be0f546c..9611df517 100644
--- a/src/qemu/qemu_hotplug.c
+++ b/src/qemu/qemu_hotplug.c
@@ -3067,6 +3067,8 @@ qemuDomainChangeNet(virQEMUDriverPtr driver,
olddev->driver.virtio.ioeventfd != newdev->driver.virtio.ioeventfd ||
olddev->driver.virtio.event_idx != newdev->driver.virtio.event_idx ||
olddev->driver.virtio.queues != newdev->driver.virtio.queues ||
+ olddev->driver.virtio.rx_queue_size != newdev->driver.virtio.rx_queue_size ||
+ olddev->driver.virtio.tx_queue_size != newdev->driver.virtio.tx_queue_size ||
olddev->driver.virtio.host.csum != newdev->driver.virtio.host.csum ||
olddev->driver.virtio.host.gso != newdev->driver.virtio.host.gso ||
olddev->driver.virtio.host.tso4 != newdev->driver.virtio.host.tso4 ||
--
2.13.5
[libvirt] [PATCH] news: add an entry for chardev reconnect feature
by Pavel Hrdina
Signed-off-by: Pavel Hrdina <phrdina(a)redhat.com>
---
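For reference, the XML this entry covers looks roughly like this (a
sketch with hypothetical host/port; timeout is in seconds):

    <serial type='tcp'>
      <source mode='connect' host='127.0.0.1' service='4567'>
        <reconnect enabled='yes' timeout='10'/>
      </source>
      <target port='0'/>
    </serial>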
docs/news.xml | 9 +++++++++
1 file changed, 9 insertions(+)
diff --git a/docs/news.xml b/docs/news.xml
index 088966d61d..4b48f0fb3a 100644
--- a/docs/news.xml
+++ b/docs/news.xml
@@ -60,6 +60,15 @@
qemu: Added support for setting heads of virtio GPU
</summary>
</change>
+ <change>
+ <summary>
+ qemu: Added support to configure reconnect timeout for chardev devices
+ </summary>
+ <description>
+          When a TCP or UNIX chardev device is connected somewhere, a reconnect
+          timeout can be configured to re-establish the connection if it closes.
+ </description>
+ </change>
</section>
<section title="Improvements">
</section>
--
2.13.5
[libvirt] [PATCH v6 00/13] Add support for Veritas HyperScale (VxHS) block device protocol
by John Ferlan
Here's the reworked v5 series I promised:
https://www.redhat.com/archives/libvir-list/2017-August/thread.html
Each patch lists the changes I recall making in that area. I may
have missed a few... and I may have missed something from my own
review - so hopefully, Ashish, you can keep me honest, and since
you have the environment, please check/test that things actually
work.
I've done quite a bit of reordering and splitting so that the XML
changes land in one patch and the qemu changes in a subsequent
one. Not too little change, but not too excessive.
I think we do need to think about the default TLS environment and
whether we really want to fail in the event that cfg->vxhsTLS = 0
and src->haveTLS = yes.
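For illustration, the combination in question is roughly the following
(hypothetical vdisk UUID and host; vxhs_tls comes from the qemu.conf
additions in this series):

    # qemu.conf on the destination host
    vxhs_tls = 0

    <!-- domain XML requesting TLS for the VxHS disk anyway -->
    <source protocol='vxhs' name='eb90327c-8302-4725-9e1b-4e85ed4dc251'
            tls='yes'>
      <host name='192.168.0.1' port='9999'/>
    </source>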
Ashish Mittal (10):
storage: Introduce VIR_STORAGE_NET_PROTOCOL_VXHS
docs: Add schema and docs for Veritas HyperScale (VxHS)
util: storage: Add JSON backing volume parse for VxHS
qemu: Add qemu command line generation for a VxHS block device
conf: Introduce TLS options for VxHS block device clients
util: Add haveTLS to virStorageSource
util: Add virstoragetest to parse/format a tls='yes'
qemu: Add TLS support for Veritas HyperScale (VxHS)
tests: Add test for failure when vxhs_tls=0
tests: Add a test case for multiple VxHS disk configuration
John Ferlan (3):
qemu: Add QEMU 2.10 x86_64 the generated capabilities
qemu: Detect support for vxhs
qemu: Introduce qemuDomainPrepareDiskSource
docs/formatdomain.html.in | 46 +-
docs/schemas/domaincommon.rng | 18 +
src/conf/domain_conf.c | 19 +
src/libxl/libxl_conf.c | 1 +
src/qemu/libvirtd_qemu.aug | 4 +
src/qemu/qemu.conf | 33 +
src/qemu/qemu_block.c | 70 +-
src/qemu/qemu_block.h | 4 +-
src/qemu/qemu_capabilities.c | 4 +
src/qemu/qemu_capabilities.h | 3 +
src/qemu/qemu_command.c | 41 +-
src/qemu/qemu_conf.c | 16 +
src/qemu/qemu_conf.h | 3 +
src/qemu/qemu_domain.c | 58 +
src/qemu/qemu_domain.h | 5 +
src/qemu/qemu_driver.c | 3 +
src/qemu/qemu_parse_command.c | 15 +
src/qemu/qemu_process.c | 4 +
src/qemu/test_libvirtd_qemu.aug.in | 2 +
src/util/virstoragefile.c | 54 +-
src/util/virstoragefile.h | 4 +
src/xenconfig/xen_xl.c | 1 +
.../caps_2.10.0.x86_64.replies | 17994 +++++++++++++++++++
tests/qemucapabilitiesdata/caps_2.10.0.x86_64.xml | 792 +
tests/qemucapabilitiestest.c | 1 +
...ml2argv-disk-drive-network-tlsx509-err-vxhs.xml | 34 +
...-disk-drive-network-tlsx509-multidisk-vxhs.args | 43 +
...v-disk-drive-network-tlsx509-multidisk-vxhs.xml | 50 +
...muxml2argv-disk-drive-network-tlsx509-vxhs.args | 30 +
...emuxml2argv-disk-drive-network-tlsx509-vxhs.xml | 32 +
.../qemuxml2argv-disk-drive-network-vxhs.args | 27 +
.../qemuxml2argv-disk-drive-network-vxhs.xml | 32 +
tests/qemuxml2argvtest.c | 10 +
...uxml2xmlout-disk-drive-network-tlsx509-vxhs.xml | 34 +
.../qemuxml2xmlout-disk-drive-network-vxhs.xml | 34 +
tests/qemuxml2xmltest.c | 2 +
tests/virstoragetest.c | 23 +
37 files changed, 19534 insertions(+), 12 deletions(-)
create mode 100644 tests/qemucapabilitiesdata/caps_2.10.0.x86_64.replies
create mode 100644 tests/qemucapabilitiesdata/caps_2.10.0.x86_64.xml
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-tlsx509-err-vxhs.xml
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-tlsx509-multidisk-vxhs.args
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-tlsx509-multidisk-vxhs.xml
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-tlsx509-vxhs.args
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-tlsx509-vxhs.xml
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.args
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.xml
create mode 100644 tests/qemuxml2xmloutdata/qemuxml2xmlout-disk-drive-network-tlsx509-vxhs.xml
create mode 100644 tests/qemuxml2xmloutdata/qemuxml2xmlout-disk-drive-network-vxhs.xml
--
2.9.5
[libvirt] [PATCH] keycodes: fix for 'make dist'
by Nikolay Shirokovskiy
'make dist' currently fails with this error:
make[2]: Entering directory `/root/dev/libvirt/src'
make[2]: *** No rule to make target `linux', needed by `distdir'. Stop
It turns out that KEYTABLES is not expanded correctly in the
am__libvirt_util_la_SOURCES_DIST variable: 'linux' stays 'linux'
instead of becoming util/virkeycodetable_linux.h.
We do not need the generated headers in the distribution anyway,
and dropping them avoids the error as well.
---
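For reference, the automake behaviour being relied on, as a minimal
sketch with hypothetical names: sources listed in a nodist_*_SOURCES
variable are compiled as usual but never copied into the tarball by
'make dist', which is exactly what generated headers need:

    bin_PROGRAMS = demo
    demo_SOURCES = main.c                # shipped by 'make dist'
    nodist_demo_SOURCES = generated.h    # built, never distributed
    BUILT_SOURCES = generated.h
    CLEANFILES = generated.h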
src/Makefile.am | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/Makefile.am b/src/Makefile.am
index 0ed4331..94ca528 100644
--- a/src/Makefile.am
+++ b/src/Makefile.am
@@ -273,7 +273,6 @@ KEYMANS = $(KEYPODS:%.pod=%.7)
man7_MANS = $(KEYMANS)
-UTIL_SOURCES += $(KEYTABLES)
BUILT_SOURCES += $(KEYTABLES)
MAINTAINERCLEANFILES += $(KEYTABLES)
CLEANFILES += $(KEYMANS) $(KEYPODS)
@@ -1224,6 +1223,7 @@ libvirt_la_LIBADD = $(libvirt_la_BUILT_LIBADD)
libvirt_la_BUILT_LIBADD = libvirt_util.la
libvirt_util_la_SOURCES = \
$(UTIL_SOURCES)
+nodist_libvirt_util_la_SOURCES = $(KEYTABLES)
libvirt_util_la_CFLAGS = $(CAPNG_CFLAGS) $(YAJL_CFLAGS) $(LIBNL_CFLAGS) \
$(AM_CFLAGS) $(AUDIT_CFLAGS) $(DEVMAPPER_CFLAGS) \
$(DBUS_CFLAGS) $(LDEXP_LIBM) $(NUMACTL_CFLAGS) \
--
1.8.3.1
[libvirt] [PATCH 0/3] Fix docs/news.xml template structure and add new features
by Kothapally Madhu Pavan
This patchset fixes the docs/news.xml template structure, which was
broken by commit b7e779c1a51793, and adds news entries for the new
managedsave-define, managedsave-edit and managedsave-dumpxml commands.
Kothapally Madhu Pavan (3):
doc: Fix docs/news.xml template structure
doc: Document managedsave-define and managedsave-edit support
doc: Document managedsave-dumpxml support
docs/news.xml | 23 +++++++++++++++++++++--
1 file changed, 21 insertions(+), 2 deletions(-)
--
1.8.3.1
[libvirt] [PATCH v5 0/9] Add support for Veritas HyperScale (VxHS) block device protocol
by Ashish Mittal
QEMU changes for VxHS (including TLS support) are already upstream.
This series of patches adds support for VxHS block devices in libvirt.
Patch 1 adds the base functionality for supporting the VxHS protocol.
Patches 2 and 3 add test cases for the base functionality.
Patch 4 adds two new configuration options in qemu.conf to enable TLS
for VxHS devices.
Patch 5 implements the main TLS functionality.
Patches 6 through 9 add test cases for the TLS functionality.
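For reference, a disk using the base VxHS functionality looks roughly
like this (hypothetical server address and vdisk UUID):

    <disk type='network' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source protocol='vxhs' name='eb90327c-8302-4725-9e1b-4e85ed4dc251'>
        <host name='192.168.0.1' port='9999'/>
      </source>
      <target dev='vda' bus='virtio'/>
    </disk>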
This series has the following changes:
(1) Rebased to latest master.
(2) Most of the review comments for patch 1 have been incorporated.
(3) Patches have been broken into smaller chunks.
TODO:
Changes in response to review comments on the TLS functionality are still
pending and will be addressed next.
Ashish Mittal (9):
Add support for Veritas HyperScale (VxHS) block device protocol
Add a test case to verify generation of qemu command line args for a
VxHS disk
Add a test case to verify parsing of VxHS backing storage.
conf: Introduce TLS options for VxHS block device clients
Add TLS support for Veritas HyperScale (VxHS) block device protocol
Add a test case to verify TLS arguments are added for VxHS disk
Add a test case to verify TLS arguments are parsed correctly for a
VxHS disk
Add a test case to verify setting vxhs_tls=0 disables TLS for VxHS
disks
Add a test case to verify different TLS combinations for a VxHS disk
docs/formatdomain.html.in | 31 ++++++++-
docs/schemas/domaincommon.rng | 18 +++++
src/conf/domain_conf.c | 19 ++++++
src/libxl/libxl_conf.c | 1 +
src/qemu/libvirtd_qemu.aug | 4 ++
src/qemu/qemu.conf | 23 +++++++
src/qemu/qemu_block.c | 78 ++++++++++++++++++++++
src/qemu/qemu_command.c | 76 +++++++++++++++++++++
src/qemu/qemu_conf.c | 7 ++
src/qemu/qemu_conf.h | 3 +
src/qemu/qemu_driver.c | 3 +
src/qemu/qemu_parse_command.c | 15 +++++
src/qemu/test_libvirtd_qemu.aug.in | 2 +
src/util/virstoragefile.c | 53 ++++++++++++++-
src/util/virstoragefile.h | 10 +++
src/xenconfig/xen_xl.c | 1 +
...ml2argv-disk-drive-network-tlsx509-err-vxhs.xml | 34 ++++++++++
...-disk-drive-network-tlsx509-multidisk-vxhs.args | 43 ++++++++++++
...v-disk-drive-network-tlsx509-multidisk-vxhs.xml | 56 ++++++++++++++++
...muxml2argv-disk-drive-network-tlsx509-vxhs.args | 30 +++++++++
...emuxml2argv-disk-drive-network-tlsx509-vxhs.xml | 34 ++++++++++
.../qemuxml2argv-disk-drive-network-vxhs.args | 27 ++++++++
.../qemuxml2argv-disk-drive-network-vxhs.xml | 34 ++++++++++
tests/qemuxml2argvtest.c | 10 +++
tests/virstoragetest.c | 21 ++++++
25 files changed, 629 insertions(+), 4 deletions(-)
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-tlsx509-err-vxhs.xml
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-tlsx509-multidisk-vxhs.args
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-tlsx509-multidisk-vxhs.xml
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-tlsx509-vxhs.args
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-tlsx509-vxhs.xml
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.args
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-vxhs.xml
--
2.5.5
Re: [libvirt] [RFC v2] [OVS/NOVA] Vhost-user backends cross-version migration support
by Eduardo Habkost
I'm CCing libvir-list and qemu-devel because I would like to get
feedback from libvirt and QEMU developers too.
On Tue, Aug 08, 2017 at 10:49:21PM +0300, Michael S. Tsirkin wrote:
> On Tue, Jul 18, 2017 at 03:42:08PM +0200, Maxime Coquelin wrote:
> > This is a revival of a thread I initiated earlier this year [0], which
> > I had to postpone due to other priorities.
> >
> > First, I'd like to thank the reviewers of my first proposal; this new
> > version tries to address the comments made:
> > 1. This is Nova's role and not Libvirt's: to query hosts' supported
> > compatibility modes and to select one, since Nova adds the vhost-user
> > ports and has visibility on other hosts. Hence I remove libvirt ML and
> > add Openstack one in the recipient list.
> > 2. By default, the compatibility version selected is the most recent
> > one, except if the admin selects an older compat version.
> >
> > The goal of this thread is to draft a solution based on the outcomes
> > of discussions with contributors of the different parties (DPDK/OVS
> > /Nova/...).
> >
> > I'm really interested in feedback from OVS & Nova contributors,
> > as my experience with these projects is rather limited.
> >
> > Problem statement:
> > ==================
> >
> > When migrating a VM from one host to another, the interfaces exposed by
> > QEMU must stay unchanged in order to guarantee a successful migration.
> > In the case of vhost-user interface, parameters like supported Virtio
> > feature set, max number of queues, max vring sizes,... must remain
> > compatible. Indeed, since the frontend is not re-initialized, no
> > re-negotiation happens at migration time.
> >
> > For example, we have a VM that runs on host A, which has its vhost-user
> > backend advertising the VIRTIO_F_RING_INDIRECT_DESC feature. Since the Guest
> > also supports this feature, it is successfully negotiated, and the guest
> > transmits packets using indirect descriptor tables, which the backend
> > knows how to handle.
> >
> > At some point, the VM is being migrated to host B, which runs an older
> > version of the backend not supporting this VIRTIO_F_RING_INDIRECT_DESC
> > feature. The migration would break, because the Guest still has the
> > VIRTIO_F_RING_INDIRECT_DESC bit set, and the virtqueue contains some
> > descriptors pointing to indirect tables, which backend B doesn't know
> > how to handle.
> > This is just an example of Virtio feature compatibility, but other
> > backend implementation details could cause other failures (e.g.
> > configurable queue sizes).
> >
> > What we need is to be able to query the destination host's backend to
> > ensure migration is possible before it is initiated.
>
> This reminded me strongly of the issues around the virtual CPU modeling
> in KVM, see
> https://wiki.qemu.org/index.php/Features/CPUModels#Querying_host_capabili...
>
> QEMU recently gained query-cpu-model-expansion to allow capability queries.
>
> Cc Eduardo accordingly. Eduardo, could you please take a look -
> how is the problem solved on the KVM/VCPU side? Do the above
> problem and solution for vhost look similar?
(Sorry for taking so long to reply)
CPU configuration in QEMU has the additional problem of features
depending on host hardware and kernel capabilities (not just QEMU
software capabilities). Do you have vhost-user features that
depend on the host kernel or hardware too, or do all of them
depend only on the vhost-user backend software?
If they depend only on software, a solution similar to how
machine-types work in QEMU sounds sufficient. If features depend on
host kernel or host hardware too, it is a bit more complex: it
means you need an interface to find out if each configurable
feature/version is really available on the host.
(In the case of CPU models, we started with an interface that
reported which CPU models were runnable on the host. But as
libvirt allows enabling/disabling individual CPU features, the
interface had to be extended to report which CPU features were
available/unavailable on the host.)
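For the curious, that query looks like this on the QMP monitor (the
reply enumerates the expanded feature properties and is host-dependent):

  { "execute": "query-cpu-model-expansion",
    "arguments": { "type": "static", "model": { "name": "host" } } }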
* * *
Now, there's one thing that seems very different here: the
guest-visible interface is not defined only by QEMU, but also by
the vhost-user backend. Is that correct?
This means QEMU won't fully control the resulting guest ABI
anymore. I would really prefer if we could keep libvirt+QEMU in
control of the guest ABI as usual, making QEMU configure all the
guest-visible vhost-user features. But I understand this would
require additional interfaces between QEMU and libvirt, and
extending the libvirt APIs.
So, if QEMU is really not going to control the resulting guest
ABI completely, can we at least provide a mechanism QEMU can use
to ask vhost-user for guest ABI details, and block migration if
vhost-user was misconfigured on the destination host?
>
> > The below proposal has been drafted based on how Qemu manages machine types:
> >
> > Proposal
> > ========
> >
> > The idea is to have a table of supported version strings in OVS,
> > associated to key/value pairs. Nova or any other management tool could
> > query OVS for the list of supported versions strings for each hosts.
> > By default, the latest compatibility version will be selected, but the
> > admin can select manually an older compatibility mode in order to ensure
> > successful migration to an older destination host.
> >
> > Then, Nova would add OVS's vhost-user port with adding the selected
> > version (compatibility mode) as an extra parameter.
> >
> > Before starting the VM migration, Nova will ensure both source and
> > destination hosts' vhost-user interfaces run in the same compatibility
> > modes, and will prevent it if this is not the case.
> >
> > For example host A runs OVS-2.7, and host B OVS-2.6.
> > Host A's OVS-2.7 has an OVS-2.6 compatibility mode (e.g. with indirect
> > descriptors disabled), which should be selected at vhost-user port add
> > time to ensure migration will succeed to host B.
> >
> > The advantage of doing so is that Nova does not need any update if new keys
> > are introduced (i.e. it does not need to know how the new keys have to
> > be handled), all these checks remain in OVS's vhost-user implementation.
> >
> > Ideally, we would support per vhost-user interface compatibility mode,
> > which may have an impact also on DPDK API, as the Virtio feature update
> > API is global, and not per port.
> >
> > - Implementation:
> > -----------------
> >
> > Goal here is just to illustrate this proposal, I'm sure you will have
> > good suggestion to improve it.
> > In OVS vhost-user library, we would introduce a new structure, for
> > example (neither compiled nor tested):
> >
> > struct vhostuser_compat {
> > char *version;
> > uint64_t virtio_features;
> > uint32_t max_rx_queue_sz;
> > uint32_t max_nr_queues;
> > };
> >
> > *version* field is the compatibility version string. It could be
> > something like: "upstream.ovs-dpdk.v2.6". In case for example Fedora
> > adds some more patches to its package that would break migration to
> > upstream version, it could have a dedicated compatibility string:
> > "fc26.ovs-dpdk.v2.6". In case OVS-v2.7 does not break compatibility with
> > previous OVS-v2.6 version, then no need to create a new entry, just keep
> > v2.6 one.
> >
> > *virtio_features* field is the Virtio features set for a given
> > compatibility version. When an OVS tag is to be created, it would be
> > associated to a DPDK version. The Virtio features for these version
> > would be stored in this field. It would allow to upgrade the DPDK
> > package for example from v16.07 to v16.11 without breaking migration.
> > In case the distribution wants to benefit from the latest Virtio
> > features, it would have to create a new entry to ensure migration
> > won't be broken.
> >
> > *max_rx_queue_sz*
> > *max_nr_queues* fields are just here for example, don't think this is
> > needed today. I just want to illustrate that we have to anticipate
> > other parameters than the Virtio feature set, even if not necessary
> > at the moment.
> >
> > We create a table with different compatibility versions in OVS
> > vhost-user lib:
> >
> > static struct vhostuser_compat vu_compat[] = {
> > {
> > .version = "upstream.ovs-dpdk.v2.7",
> > .virtio_features = 0x12045694,
> > .max_rx_queue_sz = 512,
> > },
> > {
> > .version = "upstream.ovs-dpdk.v2.6",
> > .virtio_features = 0x10045694,
> > .max_rx_queue_sz = 1024,
> > },
> > };
> >
> > At some time during installation, or system init, the table would be
> > parsed, and compatibility version strings would be stored into the OVS
> > database, or a new tool would be created to list these strings, or a
> > config file packaged with OVS would store the list of compatibility versions.
> >
> > Before launching the VM, Nova will query the version strings for the
> > host so that the admin can select an older compatibility mode. If none is
> > selected by the admin, the most recent one will be used by default,
> > and passed to the OVS's add-port command as parameter. Note that if no
> > compatibility mode is passed to the add-port command, the most recent
> > one is selected by OVS as default.
> >
> > When the vhost-user connection is initiated, OVS would know in which
> > compatibility mode to init the interface, for example by restricting the
> > support Virtio features of the interface.
> >
> > Cheers,
> > Maxime
> >
> > [0]:
> > https://mail.openvswitch.org/pipermail/ovs-dev/2017-February/328257.html
> > <b2a5501c-7df7-ad2a-002f-d731c445a502 at redhat.com>
--
Eduardo