[libvirt PATCH] NEWS: Fix vertical spacing between sections
by Andrea Bolognani
Looking at the entire repository reveals we're not too consistent
about this, but at least in this specific document we mostly have
two blank lines between sections, so let's stick with that.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
---
Pushed as trivial.
NEWS.rst | 2 ++
1 file changed, 2 insertions(+)
diff --git a/NEWS.rst b/NEWS.rst
index 6b7735b9a0..3587bc2c13 100644
--- a/NEWS.rst
+++ b/NEWS.rst
@@ -46,6 +46,7 @@ v6.9.0 (unreleased)
Relying on the "Description" field caused queries to fail on non-"en-US"
systems. The queries have been updated to avoid using localized strings.
+
v6.8.0 (2020-10-01)
===================
@@ -143,6 +144,7 @@ v6.8.0 (2020-10-01)
in libvirt. udev backend is used on Linux OSes and devd can be eventually
implemented as replacement for FreeBSD.
+
v6.7.0 (2020-09-01)
===================
--
2.26.2
[PATCH v5 06/12] nbd: Update qapi to support exporting multiple bitmaps
by Eric Blake
Since 'nbd-server-add' is deprecated, and 'block-export-add' is new to
5.2, we can still tweak the interface. Allowing 'bitmaps':['str'] is
nicer than 'bitmap':'str'. This wires up the qapi and qemu-nbd
changes to permit passing multiple bitmaps as distinct metadata
contexts that the NBD client may request, but the actual support for
more than one will require a further patch to the server.
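For illustration, the new list form could be driven over QMP roughly like
this (a sketch with hypothetical node and bitmap names; until the follow-up
server patch, the list may hold only one element):

    { "execute": "block-export-add",
      "arguments": { "type": "nbd",
                     "id": "export0",
                     "node-name": "drive0",
                     "bitmaps": [ "bitmap0" ] } }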
Signed-off-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
---
docs/system/deprecated.rst | 4 +++-
qapi/block-export.json | 18 ++++++++++++------
blockdev-nbd.c | 13 +++++++++++++
nbd/server.c | 19 +++++++++++++------
qemu-nbd.c | 10 +++++-----
5 files changed, 46 insertions(+), 18 deletions(-)
diff --git a/docs/system/deprecated.rst b/docs/system/deprecated.rst
index 905628f3a0cb..d6cd027ac740 100644
--- a/docs/system/deprecated.rst
+++ b/docs/system/deprecated.rst
@@ -268,7 +268,9 @@ the 'wait' field, which is only applicable to sockets in server mode
''''''''''''''''''''''''''''''''''''''''''''''''''''''''
Use the more generic commands ``block-export-add`` and ``block-export-del``
-instead.
+instead. As part of this deprecation, it is now preferred to export a
+list of dirty bitmaps via ``bitmaps``, rather than a single bitmap via
+``bitmap``.
Human Monitor Protocol (HMP) commands
-------------------------------------
diff --git a/qapi/block-export.json b/qapi/block-export.json
index 893d5cde5dfe..c7c749d61097 100644
--- a/qapi/block-export.json
+++ b/qapi/block-export.json
@@ -74,10 +74,10 @@
# @description: Free-form description of the export, up to 4096 bytes.
# (Since 5.0)
#
-# @bitmap: Also export the dirty bitmap reachable from @device, so the
-# NBD client can use NBD_OPT_SET_META_CONTEXT with the
-# metadata context name "qemu:dirty-bitmap:NAME" to inspect the
-# bitmap. (since 4.0)
+# @bitmaps: Also export each of the named dirty bitmaps reachable from
+# @device, so the NBD client can use NBD_OPT_SET_META_CONTEXT with
+# the metadata context name "qemu:dirty-bitmap:BITMAP" to inspect
+# each bitmap. (since 5.2)
#
# @allocation-depth: Also export the allocation depth map for @device, so
# the NBD client can use NBD_OPT_SET_META_CONTEXT with
@@ -88,7 +88,7 @@
##
{ 'struct': 'BlockExportOptionsNbd',
'data': { '*name': 'str', '*description': 'str',
- '*bitmap': 'str', '*allocation-depth': 'bool' } }
+ '*bitmaps': ['str'], '*allocation-depth': 'bool' } }
##
# @NbdServerAddOptions:
@@ -100,12 +100,18 @@
# @writable: Whether clients should be able to write to the device via the
# NBD connection (default false).
#
+# @bitmap: Also export a single dirty bitmap reachable from @device, so the
+# NBD client can use NBD_OPT_SET_META_CONTEXT with the metadata
+# context name "qemu:dirty-bitmap:BITMAP" to inspect the bitmap
+# (since 4.0). Mutually exclusive with @bitmaps, and newer
+# clients should use that instead.
+#
# Since: 5.0
##
{ 'struct': 'NbdServerAddOptions',
'base': 'BlockExportOptionsNbd',
'data': { 'device': 'str',
- '*writable': 'bool' } }
+ '*writable': 'bool', '*bitmap': 'str' } }
##
# @nbd-server-add:
diff --git a/blockdev-nbd.c b/blockdev-nbd.c
index cee9134b12eb..cfd46223bf4d 100644
--- a/blockdev-nbd.c
+++ b/blockdev-nbd.c
@@ -192,6 +192,19 @@ void qmp_nbd_server_add(NbdServerAddOptions *arg, Error **errp)
return;
}
+ /*
+ * New code should use the list 'bitmaps'; but until this code is
+ * gone, we must support the older single 'bitmap'. Use only one.
+ */
+ if (arg->has_bitmap) {
+ if (arg->has_bitmaps) {
+ error_setg(errp, "Can't mix 'bitmap' and 'bitmaps'");
+ return;
+ }
+ arg->has_bitmaps = true;
+ QAPI_LIST_ADD(arg->bitmaps, g_strdup(arg->bitmap));
+ }
+
/*
* block-export-add would default to the node-name, but we may have to use
* the device name as a default here for compatibility.
diff --git a/nbd/server.c b/nbd/server.c
index 30cfe0eee467..884ffa00f1bd 100644
--- a/nbd/server.c
+++ b/nbd/server.c
@@ -1495,6 +1495,7 @@ static int nbd_export_create(BlockExport *blk_exp, BlockExportOptions *exp_args,
uint64_t perm, shared_perm;
bool readonly = !exp_args->writable;
bool shared = !exp_args->writable;
+ strList *bitmaps;
int ret;
assert(exp_args->type == BLOCK_EXPORT_TYPE_NBD);
@@ -1556,12 +1557,18 @@ static int nbd_export_create(BlockExport *blk_exp, BlockExportOptions *exp_args,
}
exp->size = QEMU_ALIGN_DOWN(size, BDRV_SECTOR_SIZE);
- if (arg->bitmap) {
+ /* XXX Allow more than one bitmap */
+ if (arg->bitmaps && arg->bitmaps->next) {
+ error_setg(errp, "multiple bitmaps per export not supported yet");
+ return -EOPNOTSUPP;
+ }
+ for (bitmaps = arg->bitmaps; bitmaps; bitmaps = bitmaps->next) {
+ const char *bitmap = bitmaps->value;
BlockDriverState *bs = blk_bs(blk);
BdrvDirtyBitmap *bm = NULL;
while (bs) {
- bm = bdrv_find_dirty_bitmap(bs, arg->bitmap);
+ bm = bdrv_find_dirty_bitmap(bs, bitmap);
if (bm != NULL) {
break;
}
@@ -1571,7 +1578,7 @@ static int nbd_export_create(BlockExport *blk_exp, BlockExportOptions *exp_args,
if (bm == NULL) {
ret = -ENOENT;
- error_setg(errp, "Bitmap '%s' is not found", arg->bitmap);
+ error_setg(errp, "Bitmap '%s' is not found", bitmap);
goto fail;
}
@@ -1585,15 +1592,15 @@ static int nbd_export_create(BlockExport *blk_exp, BlockExportOptions *exp_args,
ret = -EINVAL;
error_setg(errp,
"Enabled bitmap '%s' incompatible with readonly export",
- arg->bitmap);
+ bitmap);
goto fail;
}
bdrv_dirty_bitmap_set_busy(bm, true);
exp->export_bitmap = bm;
- assert(strlen(arg->bitmap) <= BDRV_BITMAP_MAX_NAME_SIZE);
+ assert(strlen(bitmap) <= BDRV_BITMAP_MAX_NAME_SIZE);
exp->export_bitmap_context = g_strdup_printf("qemu:dirty-bitmap:%s",
- arg->bitmap);
+ bitmap);
assert(strlen(exp->export_bitmap_context) < NBD_MAX_STRING_SIZE);
}
diff --git a/qemu-nbd.c b/qemu-nbd.c
index 847fde435a7f..5473821216f7 100644
--- a/qemu-nbd.c
+++ b/qemu-nbd.c
@@ -572,7 +572,7 @@ int main(int argc, char **argv)
const char *export_name = NULL; /* defaults to "" later for server mode */
const char *export_description = NULL;
bool alloc_depth = false;
- const char *bitmap = NULL;
+ strList *bitmaps = NULL;
const char *tlscredsid = NULL;
bool imageOpts = false;
bool writethrough = true;
@@ -701,7 +701,7 @@ int main(int argc, char **argv)
alloc_depth = true;
break;
case 'B':
- bitmap = optarg;
+ QAPI_LIST_ADD(bitmaps, g_strdup(optarg));
break;
case 'k':
sockpath = optarg;
@@ -798,7 +798,7 @@ int main(int argc, char **argv)
}
if (export_name || export_description || dev_offset ||
device || disconnect || fmt || sn_id_or_name || alloc_depth ||
- bitmap || seen_aio || seen_discard || seen_cache) {
+ bitmaps || seen_aio || seen_discard || seen_cache) {
error_report("List mode is incompatible with per-device settings");
exit(EXIT_FAILURE);
}
@@ -1082,8 +1082,8 @@ int main(int argc, char **argv)
.name = g_strdup(export_name),
.has_description = !!export_description,
.description = g_strdup(export_description),
- .has_bitmap = !!bitmap,
- .bitmap = g_strdup(bitmap),
+ .has_bitmaps = !!bitmaps,
+ .bitmaps = bitmaps,
.has_allocation_depth = alloc_depth,
.allocation_depth = alloc_depth,
},
--
2.29.0
[libvirt PATCH v5 0/6] Add support for vDPA network devices
by Jonathon Jongsma
vDPA network devices allow high-performance networking in a virtual machine by
providing a wire-speed data path. These devices require a vendor-specific host
driver but the data path follows the virtio specification.
Support for vDPA devices was recently added to qemu, which allows
libvirt to support these devices as well. This patchset requires that the
device is configured on the host with the appropriate vendor-specific
driver. This will create a chardev on the host at e.g. /dev/vhost-vdpa-0.
That chardev path can then be used to define a new interface with
type='vdpa'.
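For example, such an interface could be defined with a small XML fragment
along these lines (a sketch of what e.g. the vdpa.xml used below might
contain; the exact schema is added in patch 1):

    <interface type='vdpa'>
      <source dev='/dev/vhost-vdpa-0'/>
    </interface>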
Note that in order for hot-unplug to work properly, you may need to apply a
qemu patch[1] for now. Without the patch, qemu will not close the fd properly
and any subsequent attempts to use the vdpa chardev will fail like this:
virsh # attach-device guest1 vdpa.xml
error: Failed to attach device from vdpa.xml
error: Unable to open '/dev/vhost-vdpa-0' for vdpa device: Device or resource busy
[1] https://lists.nongnu.org/archive/html/qemu-devel/2020-09/msg06374.html
Changes in v5:
- rebased to latest master
- fixed a case where qemuDomainObjExitMonitor() was not called on an error path
- Improved the nodedev xml. It now includes the path to the chardev in /dev
- also updated the nodedev xml schema
- added sample nodedev-dumpxml output to the commit message of patch #6
Jonathon Jongsma (6):
conf: Add support for vDPA network devices
qemu: add vhost-vdpa capability
qemu: add vdpa support
qemu: add monitor functions for handling file descriptors
qemu: support hotplug of vdpa devices
Include vdpa devices in node device list
docs/formatdomain.rst | 24 +++
docs/formatnode.html.in | 9 +
docs/schemas/domaincommon.rng | 15 ++
docs/schemas/nodedev.rng | 10 +
include/libvirt/libvirt-nodedev.h | 1 +
src/conf/domain_conf.c | 31 ++++
src/conf/domain_conf.h | 4 +
src/conf/netdev_bandwidth_conf.c | 1 +
src/conf/node_device_conf.c | 14 ++
src/conf/node_device_conf.h | 11 +-
src/conf/virnodedeviceobj.c | 4 +-
src/libxl/libxl_conf.c | 1 +
src/libxl/xen_common.c | 1 +
src/lxc/lxc_controller.c | 1 +
src/lxc/lxc_driver.c | 3 +
src/lxc/lxc_process.c | 1 +
src/node_device/node_device_udev.c | 53 ++++++
src/qemu/qemu_capabilities.c | 2 +
src/qemu/qemu_capabilities.h | 1 +
src/qemu/qemu_command.c | 36 +++-
src/qemu/qemu_command.h | 3 +-
src/qemu/qemu_domain.c | 6 +-
src/qemu/qemu_hotplug.c | 75 +++++++-
src/qemu/qemu_interface.c | 25 +++
src/qemu/qemu_interface.h | 2 +
src/qemu/qemu_migration.c | 10 +-
src/qemu/qemu_monitor.c | 93 ++++++++++
src/qemu/qemu_monitor.h | 41 +++++
src/qemu/qemu_monitor_json.c | 173 ++++++++++++++++++
src/qemu/qemu_monitor_json.h | 12 ++
src/qemu/qemu_process.c | 2 +
src/qemu/qemu_validate.c | 15 ++
src/vmx/vmx.c | 1 +
.../caps_5.1.0.x86_64.xml | 1 +
.../caps_5.2.0.x86_64.xml | 1 +
tests/qemuhotplugmock.c | 9 +
tests/qemuhotplugtest.c | 16 ++
.../qemuhotplug-interface-vdpa.xml | 4 +
.../qemuhotplug-base-live+interface-vdpa.xml | 57 ++++++
.../net-vdpa.x86_64-latest.args | 38 ++++
tests/qemuxml2argvdata/net-vdpa.xml | 28 +++
tests/qemuxml2argvmock.c | 11 +-
tests/qemuxml2argvtest.c | 1 +
tests/qemuxml2xmloutdata/net-vdpa.xml | 34 ++++
tests/qemuxml2xmltest.c | 1 +
tools/virsh-domain.c | 1 +
tools/virsh-nodedev.c | 3 +
47 files changed, 870 insertions(+), 16 deletions(-)
create mode 100644 tests/qemuhotplugtestdevices/qemuhotplug-interface-vdpa.xml
create mode 100644 tests/qemuhotplugtestdomains/qemuhotplug-base-live+interface-vdpa.xml
create mode 100644 tests/qemuxml2argvdata/net-vdpa.x86_64-latest.args
create mode 100644 tests/qemuxml2argvdata/net-vdpa.xml
create mode 100644 tests/qemuxml2xmloutdata/net-vdpa.xml
--
2.26.2
[PATCH v2] news: introduce memory failure event
by zhenwei pi
Signed-off-by: zhenwei pi <pizhenwei@bytedance.com>
---
NEWS.rst | 10 ++++++++++
1 file changed, 10 insertions(+)
diff --git a/NEWS.rst b/NEWS.rst
index d0454b7840..428928e80b 100644
--- a/NEWS.rst
+++ b/NEWS.rst
@@ -13,6 +13,16 @@ v6.9.0 (unreleased)
* **New features**
+ * Introduce memory failure event
+
+ Libvirt can now handle a domain's memory failure event. Drivers need to
+ implement their own method.
+
+ * qemu: Implement memory failure event
+
+ A new event is implemented that is emitted whenever a guest encounters a
+ memory failure.
+
* qemu: Implement support for ``<transient/>`` disks
VMs based on the QEMU hypervisor now can use ``<transient/>`` option for
--
2.11.0
[PATCH v2 0/7] migration/dirtyrate: Introduce APIs for getting domain memory dirty rate
by Hao Wang
V1 -> V2:
replace QEMU_JOB_ASYNC with QEMU_JOB_QUERY
Sometimes a user wants to know a domain's memory dirty rate in order to
decide whether it is suitable to be migrated out or not.
We have already completed the QEMU part of the capability:
https://patchew.org/QEMU/1600237327-33618-1-git-send-email-zhengchuan@hua...
This series of patches introduces the corresponding LIBVIRT part --
the DomainGetDirtyRateInfo API and a corresponding virsh command, "getdirtyrate".
Instructions:
bash# virsh getdirtyrate --help
NAME
getdirtyrate - Get a vm's memory dirty rate
SYNOPSIS
getdirtyrate <domain> [--seconds <number>] [--calculate] [--query]
DESCRIPTION
Get memory dirty rate of a domain in order to decide whether it's proper to be migrated out or not.
OPTIONS
[--domain] <string> domain name, id or uuid
--seconds <number> calculate memory dirty rate within specified seconds, a valid range of values is [1, 60], and would default to 1s.
--calculate calculate dirty rate only, can be used together with --query, either or both is expected, otherwise would default to both.
--query query dirty rate only, can be used together with --calculate, either or both is expected, otherwise would default to both.
example:
bash# virsh getdirtyrate --calculate --query --domain vm0 --seconds 1
status: measured
startTime: 820148
calcTime: 1 s
dirtyRate: 6 MB/s
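Under the hood this is driven by QEMU's dirty rate commands from the series
linked above; the QMP exchange behind the virsh output would look roughly
like this (a sketch; field names follow the QEMU QAPI from that series):

    { "execute": "calc-dirty-rate", "arguments": { "calc-time": 1 } }
    { "execute": "query-dirty-rate" }
    { "return": { "status": "measured", "start-time": 820148,
                  "calc-time": 1, "dirty-rate": 6 } }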
Hao Wang (7):
migration/dirtyrate: Introduce virDomainDirtyRateInfo structure
migration/dirtyrate: Implement qemuMonitorJSONExtractDirtyRateInfo
migration/dirtyrate: Implement qemuDomainQueryDirtyRate
migration/dirtyrate: Implement qemuDomainCalculateDirtyRate
migration/dirtyrate: Introduce virDomainDirtyRateFlags
migration/dirtyrate: Introduce DomainGetDirtyRateInfo API
migration/dirtyrate: Introduce getdirtyrate virsh api
include/libvirt/libvirt-domain.h | 64 ++++++++++++++++++
src/driver-hypervisor.h | 7 ++
src/libvirt-domain.c | 46 +++++++++++++
src/libvirt_public.syms | 5 ++
src/qemu/qemu_driver.c | 70 +++++++++++++++++++
src/qemu/qemu_migration.c | 59 ++++++++++++++++
src/qemu/qemu_migration.h | 10 +++
src/qemu/qemu_monitor.c | 24 +++++++
src/qemu/qemu_monitor.h | 8 +++
src/qemu/qemu_monitor_json.c | 97 ++++++++++++++++++++++++++
src/qemu/qemu_monitor_json.h | 8 +++
src/remote/remote_driver.c | 1 +
src/remote/remote_protocol.x | 21 +++++-
tools/virsh-domain.c | 112 +++++++++++++++++++++++++++++++
14 files changed, 531 insertions(+), 1 deletion(-)
--
2.23.0
Re: [PATCH] pci: Refuse to hotplug PCI Devices when the Guest OS is not ready
by Igor Mammedov
On Fri, 23 Oct 2020 11:54:40 -0400
"Michael S. Tsirkin" <mst(a)redhat.com> wrote:
> On Fri, Oct 23, 2020 at 09:47:14AM +0300, Marcel Apfelbaum wrote:
> > Hi David,
> >
> > On Fri, Oct 23, 2020 at 6:49 AM David Gibson <dgibson@redhat.com> wrote:
> >
> > On Thu, 22 Oct 2020 11:01:04 -0400
> > "Michael S. Tsirkin" <mst(a)redhat.com> wrote:
> >
> > > On Thu, Oct 22, 2020 at 05:50:51PM +0300, Marcel Apfelbaum wrote:
> > > [...]
> > >
> > > Right. After detecting, just failing unconditionally is a bit too
> > > simplistic IMHO.
> >
> > There's also another factor here, which I thought I'd mentioned
> > already, but looks like I didn't: I think we're still missing some
> > details in what's going on.
> >
> > The premise for this patch is that plugging while the indicator is in
> > transition state is allowed to fail in any way on the guest side. I
> > don't think that's a reasonable interpretation, because it's unworkable
> > for physical hotplug. If the indicator starts blinking while you're in
> > the middle of shoving a card in, you'd be in trouble.
> >
> > So, what I'm assuming here is that while "don't plug while blinking" is
> > the instruction for the operator to obey as best they can, on the guest
> > side the rule has to be "start blinking, wait a while and by the time
> > you leave blinking state again, you can be confident any plugs or
> > unplugs have completed". Obviously still racy in the strict computer
> > science sense, but about the best you can do with slow humans in the
> > mix.
> >
> > So, qemu should of course endeavour to follow that rule as though it
> > was a human operator on a physical machine and not plug when the
> > indicator is blinking. *But* the qemu plug will in practice be fast
> > enough that if we're hitting real problems here, it suggests the guest
> > is still doing something wrong.
> >
> >
> > I personally think there is a little bit of over-engineering here.
> > Let's start with the spec:
> >
> >   Power Indicator Blinking
> >   A blinking Power Indicator indicates that the slot is powering up or
> > powering down and that
> >   insertion or removal of the adapter is not permitted.
> >
> > What exactly is an interpretation here?
> > As you stated, the races are theoretical, the whole point of the indicator
> > is to let the operator know he can't plug the device just yet.
> >
> > I understand it would be more user friendly if the QEMU would wait internally
> > for the
> > blinking to end, but the whole point of the indicator is to let the operator
> > (human or machine)
> > know they can't plug the device at a specific time.
> > Should QEMU take the responsibility of the operator? Is it even correct?
> >
> > Even if we would want such a feature, how is it related to this patch?
> > The patch simply refuses to start a hotplug operation when it knows it will not
> > succeed.
> >
> > Another way that would make sense to me would be is a new QEMU interface other
> > than
> > "add_device", let's say "adding_device_allowed", that would return true if the
> > hotplug is allowed
> > at this point of time. (I am aware of the theoretical races)
>
> Rather than adding_device_allowed, something like "query slot"
> might be helpful for debugging. That would help user figure out
> e.g. why isn't device visible without any races.
Would a new command be useful though? What we end up with is a broken guest
(if I read the commit message right) and a user who has no idea whether
device_add was successful or not.
So what should the user do in this case:
- wait till it explodes?
- can user remove it or it would be stuck there forever?
- poll the slot before hotplug, manually? (see the sketch below)
(if this is the case then failing device_add cleanly doesn't sound bad,
it looks similar to another error we have "/* Check if hot-plug is disabled on the slot */"
in pcie_cap_slot_pre_plug_cb)
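A purely hypothetical shape for such a slot query, just to make the idea
concrete (no such QMP command exists today; all names below are
illustrative only):

    { "execute": "x-query-pci-slot",
      "arguments": { "bus": "pcie.1", "slot": 0 } }
    { "return": { "power-indicator": "blinking", "hotplug-allowed": false } }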
CCing libvirt, as it concerns not only QEMU.
>
> > The above will at least mimic the mechanics of the physical world. The operator
> > looks at the indicator,
> > the management software checks if adding the device is allowed.
> > Since it is a corner case I would prefer the device_add to fail rather than
> > introducing a new interface,
> > but that's just me.
> >
> > Thanks,
> > Marcel
> >
>
> I think we want QEMU management interface to be reasonably
> abstract and agnostic if possible. Pushing knowledge of hardware
> detail to management will just lead to pain IMHO.
> We supported device_add which practically never fails for years,
For CPUs and RAM, device_add can fail, so maybe management is also
prepared to handle errors on the PCI hotplug path.
> at this point it's easier to keep supporting it than
> change all users ...
>
>
> >
> > --
> > David Gibson <dgibson@redhat.com>
> > Principal Software Engineer, Virtualization, Red Hat
> >
>
>
[libvirt PATCH 1/2] qemu: fix memory leak reported by coverity
by Jonathon Jongsma
Let g_autoptr clean up on early return.
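The idiom at work, reduced to a minimal standalone sketch (plain GLib;
names are illustrative -- the patch applies the same pattern with
g_autoptr(virJSONValue)):

    #include <glib.h>
    #include <string.h>

    /* g_autofree releases the allocation on every exit path, including
     * early returns; g_steal_pointer() hands ownership to the caller, so
     * the automatic cleanup then sees NULL and does nothing. */
    static char *
    make_label(const char *name)
    {
        g_autofree char *label = g_strdup_printf("export:%s", name);

        if (strlen(label) > 64)
            return NULL;                 /* label is freed automatically */

        return g_steal_pointer(&label);  /* caller now owns the string */
    }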
Signed-off-by: Jonathon Jongsma <jjongsma@redhat.com>
---
src/qemu/qemu_monitor_json.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c
index cba9ec7b19..2491cbf9b8 100644
--- a/src/qemu/qemu_monitor_json.c
+++ b/src/qemu/qemu_monitor_json.c
@@ -4013,7 +4013,7 @@ int qemuMonitorJSONAddFileHandleToSet(qemuMonitorPtr mon,
const char *opaque,
qemuMonitorAddFdInfoPtr fdinfo)
{
- virJSONValuePtr args = NULL;
+ g_autoptr(virJSONValue) args = NULL;
g_autoptr(virJSONValue) reply = NULL;
g_autoptr(virJSONValue) cmd = NULL;
@@ -4024,7 +4024,8 @@ int qemuMonitorJSONAddFileHandleToSet(qemuMonitorPtr mon,
if (virJSONValueObjectAdd(args, "j:fdset-id", fdset, NULL) < 0)
return -1;
- if (!(cmd = qemuMonitorJSONMakeCommandInternal("add-fd", args)))
+ if (!(cmd = qemuMonitorJSONMakeCommandInternal("add-fd",
+ g_steal_pointer(&args))))
return -1;
if (qemuMonitorJSONCommandWithFd(mon, cmd, fd, &reply) < 0)
--
2.26.2
[PATCH 0/6] qemu: Switch from 'nbd-server-add' to 'block-export-add'
by Peter Krempa
QEMU deprecated nbd-server-add in favor of block-export-add recently
(and didn't let us know via the standard means). Adapt to this change.
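For reference, the switch roughly amounts to replacing the first QMP form
below with the second (a sketch; node and export names are hypothetical):

    { "execute": "nbd-server-add",
      "arguments": { "device": "drive0", "name": "exp0", "writable": true } }

    { "execute": "block-export-add",
      "arguments": { "type": "nbd", "id": "exp0",
                     "node-name": "drive0", "writable": true } }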
Peter Krempa (6):
qemu: block: Extract code for adding NBD exports to
'qemuBlockExportAddNBD'
qemumonitorjsontest: Allow deprecation of 'nbd-server-add' QMP command
tests: qemucapabilities: Update capabilities for qemu-5.2 dev cycle
qemu: capabilities: Add QEMU_CAPS_BLOCK_EXPORT_ADD
qemu: Add infrastructure for 'block-export-add' to export NBD
qemuBlockExportAddNBD: Use 'block-export-add' when available
src/qemu/qemu_backup.c | 11 +-
src/qemu/qemu_block.c | 75 +
src/qemu/qemu_block.h | 15 +
src/qemu/qemu_capabilities.c | 2 +
src/qemu/qemu_capabilities.h | 1 +
src/qemu/qemu_migration.c | 12 +-
src/qemu/qemu_monitor.c | 10 +
src/qemu/qemu_monitor.h | 4 +
src/qemu/qemu_monitor_json.c | 21 +
src/qemu/qemu_monitor_json.h | 4 +
.../caps_5.2.0.x86_64.replies | 5090 +++++++++--------
.../caps_5.2.0.x86_64.xml | 6 +-
tests/qemumonitorjsontest.c | 25 +-
13 files changed, 2890 insertions(+), 2386 deletions(-)
--
2.26.2
[PATCH] qemu: Don't pass mode when opening domain log file for reading
by Michal Privoznik
In qemuDomainLogContextNew() the domain log file is opened
twice: the first time for writing, and the second time for
reading (if required by the caller). When opening the log file
for reading, a mode is provided. This does no harm, but is
unnecessary. Drop the mode.
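For context, open(2) consults its mode argument only when the call may
create the file (O_CREAT or O_TMPFILE); a minimal sketch (the path is
illustrative):

    #include <fcntl.h>
    #include <sys/stat.h>
    #include <unistd.h>

    static void demo(void)
    {
        /* The mode argument matters only when the file may be created... */
        int wfd = open("/tmp/demo.log", O_WRONLY | O_CREAT,
                       S_IRUSR | S_IWUSR);
        /* ...and is ignored for a read-only open of an existing file,
         * so it can simply be dropped there. */
        int rfd = open("/tmp/demo.log", O_RDONLY);

        if (wfd >= 0)
            close(wfd);
        if (rfd >= 0)
            close(rfd);
    }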
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
---
src/qemu/qemu_domain.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 161b369712..d7dbca487a 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -6251,7 +6251,7 @@ qemuDomainLogContextPtr qemuDomainLogContextNew(virQEMUDriverPtr driver,
}
if (mode == QEMU_DOMAIN_LOG_CONTEXT_MODE_START) {
- if ((ctxt->readfd = open(ctxt->path, O_RDONLY, S_IRUSR | S_IWUSR)) < 0) {
+ if ((ctxt->readfd = open(ctxt->path, O_RDONLY)) < 0) {
virReportSystemError(errno, _("failed to open logfile %s"),
ctxt->path);
goto error;
--
2.26.2