Summary on new backup interfaces in QEMU
by Vladimir Sementsov-Ogievskiy
Hi all!
Here I want to summarize new interfaces and use cases for backup in QEMU.
TODO for me: convert this into good rst documentation in docs/.
OK, let's begin.
First, note that the drive-backup QMP command is deprecated.
Next, some terminology:
push backup: the whole process happens inside the QEMU process; this may also be called "internal backup"
pull backup: QEMU only exports a kind of snapshot (for example over NBD), and third-party software reads this export and stores it somehow; also called "external backup"
copy-before-write operations: We usually back up an active disk: the guest is running and may write to the disk during the backup process. When the guest wants to rewrite a data region that is not backed up yet, we must stall that guest write and copy the original data somewhere else before letting the write continue. That's a copy-before-write operation.
image-fleecing: the technique that allows exporting a "snapshotted" state of the active disk with the help of copy-before-write operations. We create a temporary image - the target for copy-before-write operations - and provide an interface for the user to read the "snapshotted" state. On read, data that has already been changed in the original active disk is read from the temporary image, while unchanged data is read directly from the active disk. The temporary image itself is also called a "reverse delta" or "reversed delta".
== Simple push backup ==
Just use blockdev-backup, nothing new here. I'll just note some technical details that are relatively new:
1. First, the backup job inserts a copy-before-write filter above the source disk, to do copy-before-write operations.
2. The created copy-before-write filter shares internal block-copy state with the backup job, so they work in collaboration and don't copy the same things twice.
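For reference, a minimal sketch of a full push backup (node names and the target filename are placeholders; the target image must be created in advance, e.g. with qemu-img):
qmp: blockdev-add {node-name: backup-protocol, driver: file, filename: backup.qcow2}
qmp: blockdev-add {node-name: backup-target, driver: qcow2, file: backup-protocol}
qmp: blockdev-backup {job-id: backup0, device: disk0, target: backup-target, sync: full}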
== Full pull backup ==
Assume we are going to do incremental backups in the future, so we also need to create a dirty bitmap, to track the dirtiness of the active disk since the full backup.
1. Create an empty temporary image for fleecing. It must be the same size as the active disk. It doesn't have to be qcow2, and if it is qcow2, you shouldn't make the original active disk a backing file of the new temporary image (that was only necessary in the old fleecing scheme).
Example:
qemu-img create -f qcow2 temp.qcow2 64G
2. Initialize the fleecing scheme and create a dirty bitmap for future incremental backups.
Assume disk0 is the active disk to be backed up, attached as qdev-id sda.
qmp: transaction [
block-dirty-bitmap-add {node: disk0, name: bitmap0, persistent: true}
blockdev-add* {node-name: tmp-protocol, driver: file, filename: temp.qcow2}
blockdev-add {node-name: tmp, driver: qcow2, file: tmp-protocol}
blockdev-add {node-name: cbw, driver: copy-before-write, file: disk0, target: tmp}
blockdev-replace** {parent-type: qdev, qdev-id: sda, new-child: cbw}
blockdev-add {node-name: acc, driver: snapshot-access, file: cbw}
]
qmp: nbd-server-start {...}
qmp: nbd-server-add {device: acc, ...}
This way we create the following block-graph:
[guest]                       [NBD export]
   |                               |
   | root                          | root
   v                     file      v
[copy-before-write]<----------[snapshot-access]
   |              |
   | file         | target
   v              v
[active-disk]   [temp.qcow2]
* "[PATCH 0/2] blockdev-add transaction" series needed for this
** "[PATCH v3 00/11] blockdev-replace" series needed for this
Note additional useful options for the copy-before-write filter:
"[PATCH 0/3] block: copy-before-write: on-cbw-error behavior" provides the option on-cbw-error=break-snapshot, which means that on failure of a CBW operation we will not fail the guest write; instead, all further reads by the NBD client will fail. Formally that breaks the backup process, not the guest write.
"[PATCH 0/4] block: copy-before-write: cbw-timeout" provides the option cbw-timeout, to set a timeout for CBW operations. That's very useful to avoid the guest getting stuck.
3. Now the third-party backup tool can read data from the NBD export.
The NBD_CMD_TRIM (discard) operation is supported on the export; it has the following effects:
1. discard this data from the temp image, if it is stored there
2. avoid further copy-before-write operations (the guest is free to rewrite the corresponding data with no extra latency)
3. all further read requests of discarded areas by the NBD client will fail
So, the NBD client may discard regions that are already backed up, to avoid extra latency for guest writes and to free disk space on the host.
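For illustration, a client using libnbd's nbdsh could copy a chunk and immediately discard it like this (a sketch; the URI, offset and chunk size are placeholders):
nbdsh -u nbd://localhost/acc -c '
buf = h.pread(64 * 1024, 0)   # read a chunk of the snapshotted state
# ... store buf into the backup ...
h.trim(64 * 1024, 0)          # discard it: frees temp space, stops further CBW
'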
A possible TODO here is an NBD protocol extension that allows READ & DISCARD in one command. That would avoid an extra command on the wire, but lose the possibility of retrying the READ operation if it failed.
4. After backup is complete, we should destroy the fleecing scheme:
qmp: nbd-server-stop
qmp: blockdev-del {node-name: acc}
qmp: blockdev-replace {parent-type: qdev, qdev-id: sda, new-child: disk0}
qmp: blockdev-del {node-name: cbw}
qmp: blockdev-del {node-name: tmp}
qmp: blockdev-del {node-name: tmp-protocol}
5. If the backup failed, we should remove the created dirty bitmap:
qmp: block-dirty-bitmap-remove {node: disk0, name: bitmap0}
== Incremental pull backup ==
OK, now we have a bitmap called bitmap0, and want to do an incremental backup according to that bitmap. In short, we want to:
- create a new bitmap to continue dirty tracking for the next incremental backup
- export the "snapshotted" state of disk0 through NBD
- export the "frozen" bitmap, so that the external tool knows what to copy
Mostly, all points remain the same; let's go through them:
1. Create an empty temporary image for fleecing -- same as for full backup, no difference.
2. Initialize the fleecing scheme and create a dirty bitmap for the next incremental backup.
qmp: transaction [
block-dirty-bitmap-add {node: disk0, name: bitmap1, persistent: true}
block-dirty-bitmap-disable {node: disk0, name: bitmap0}
blockdev-add {node-name: tmp-protocol, driver: file, filename: temp.qcow2}
blockdev-add {node-name: tmp, driver: qcow2, file: tmp-protocol}
blockdev-add {node-name: cbw, driver: copy-before-write, file: disk0, target: tmp, bitmap: {node: disk0, name: bitmap0}}
blockdev-replace {parent-type: qdev, qdev-id: sda, new-child: cbw}
blockdev-add {node-name: acc, driver: snapshot-access, file: cbw}
]
qmp: nbd-server-start {...}
qmp: block-export-add {type: nbd, node-name: acc, bitmaps: [{node: disk0, name: bitmap0}]}
3. Now the third-party backup tool can read data from the NBD export
- The client may negotiate meta contexts, to query the exported dirty bitmap with the NBD_BLOCK_STATUS command (see the nbdinfo sketch after this list)
- If the client reads areas that are "not dirty" (per bitmap0), it gets an error.
- NBD_CMD_TRIM (discard) works as for full backup, no difference
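For example, with libnbd's nbdinfo the exported bitmap can be inspected like this (a sketch; adjust the URI to your export):
nbdinfo --map=qemu:dirty-bitmap:bitmap0 nbd://localhost/acc
Each reported extent is flagged as clean or dirty; only the dirty ranges need to be read and stored.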
4. After backup is complete, we should destroy the fleecing scheme:
- Same as for full backup
5. Next, we should handle dirty bitmaps:
5.1 Failure path
Merge bitmap1 back into bitmap0 and continue tracking in bitmap0:
qmp: transaction [
block-dirty-bitmap-enable {node: disk0, name: bitmap0}
block-dirty-bitmap-merge {node: disk0, target: bitmap0, bitmaps: ['bitmap1']}
block-dirty-bitmap-remove {node: disk0, name: bitmap1}
]
5.2 Success path
We have two possible user scenarios on success:
5.2.1 Continue tracking for next incremental backup in bitmap1
In this case, just remove bitmap0:
qmp: block-dirty-bitmap-remove {node: disk0, name: bitmap0}
Or you may keep bitmap0, disabled, instead of deleting it, to reuse it in the future for differential backups (see below).
5.2.2 Continue tracking for the next incremental backup in bitmap0 (assume we always work with one bitmap, don't want any kind of differential backups, and don't associate the bitmap name with stored backups)
In this case, enable and clear bitmap0, merge bitmap1 to bitmap0 and remove bitmap1:
qmp: transaction [
block-dirty-bitmap-enable {node: disk0, name: bitmap0}
block-dirty-bitmap-clear {node: disk0, name: bitmap0}
block-dirty-bitmap-merge {node: disk0, target: bitmap0, bitmaps: ['bitmap1']}
block-dirty-bitmap-remove {node: disk0, name: bitmap1}
]
== Push backup with fleecing full/incremental ==
Reasoning: the main problem with simple push backup is that guest writes may be seriously affected by copy-before-write operations when the backup target is slow. To solve this problem, we use the same scheme as for pull backup: we create a local temporary image as the target for copy-before-write operations, but instead of exporting the "snapshot-access" node we start an internal backup from it to the target.
So, the scheme and commands look exactly the same as for full and incremental pull backup. The only difference is that we don't start an NBD export; instead we add the target node to QEMU and start an internal backup. The good thing is that this may be done in the same transaction that initializes the fleecing scheme:
qmp: transaction [
... initialize fleecing scheme for full or incremental backup ...
# Add target node. Here is qcow2 added, but it may be nbd node or something else
blockdev-add {node-name: target-protocol, driver: file, filename: target.qcow2}
blockdev-add {node-name: target, driver: qcow2, file: target-protocol}
# Start backup
blockdev-backup {device: acc, target: target, ...}
]
If it is an incremental backup, also pass the bitmap parameter:
blockdev-backup {..., bitmap: bitmap0, sync: incremental, bitmap-mode: never}
Note bitmap-mode=never: this means that the backup will do nothing with bitmap0, so we have the same scheme as for pull backups (handle the bitmaps by hand after the backup). Still, the push-backup scheme may be adapted to use other bitmap modes.
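For comparison, with bitmap-mode=on-success the job itself would clear the successfully copied bits from the bitmap (a sketch, not a drop-in replacement for the bitmap handling described above):
blockdev-backup {device: acc, target: target, bitmap: bitmap0, sync: incremental, bitmap-mode: on-success}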
What we lack here is discarding in the 'acc' node after a block has been successfully copied to the target, to save disk space and avoid extra copy-before-write operations. It's a TODO; it should be implemented as something like a discard-source parameter for blockdev-backup.
== Differential backups ==
I'm not a fan of this idea, but I think it should be described.
Assume we already have a chain of incremental backups (represented as a qcow2 chain on a backup storage server, for example). They correspond to some points in time: T0, T1, T2, T3. Assume T3 is the last backup.
If we created a usual incremental backup, it would be the diff between T3 and the current time (which becomes T4).
A differential backup says: I want a backup starting from T1 up to the current time. What's that for? Maybe T2 and T3 were removed or somehow damaged.
How to do that in QEMU: on each incremental backup you start a new bitmap and _keep_ the old one, disabled.
This way we have bitmap0 (which represents the diff between T0 and T1), bitmap1 (diff T1-T2), bitmap2 (diff T2-T3), and bitmap3, which holds the diff from T3 up to the current time. bitmap3 is the only enabled bitmap; the others are disabled.
So, to make a differential backup, use the block-dirty-bitmap-merge command to merge all the bitmaps you need into one, and then use it in any backup scheme.
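For example, to back up everything changed since T1, merge bitmap1, bitmap2 and bitmap3 into a temporary bitmap and use that (a sketch; the name tmp-diff is arbitrary):
qmp: block-dirty-bitmap-add {node: disk0, name: tmp-diff, disabled: true}
qmp: block-dirty-bitmap-merge {node: disk0, target: tmp-diff, bitmaps: ['bitmap1', 'bitmap2', 'bitmap3']}
... use tmp-diff as the bitmap in any pull or push scheme above ...
qmp: block-dirty-bitmap-remove {node: disk0, name: tmp-diff}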
The drawback is that all these disabled bitmaps eat RAM. A possible solution is to not keep them in RAM: it's OK to keep them in the qcow2 file and load them only on demand. That's not implemented yet; it's a TODO for those who want differential backups.
--
Best regards,
Vladimir
Proposal: renaming 'master' branch to 'main'
by Daniel P. Berrangé
I don't recall exactly when it first came up, but it has been a few years
now since the idea of moving away from 'master' as the git default branch
name became a topic in OSS communities. Far from the first mention:
https://sfconservancy.org/news/2020/jun/23/gitbranchname/
Both gitlab.com and github.com now default to 'main' as the default
branch when creating new repositories:
https://about.gitlab.com/blog/2021/03/10/new-git-default-branch-name/
https://github.blog/changelog/2020-10-01-the-default-branch-for-newly-cre...
Some notable large open source projects have done (or at least started)
the rename of the default branch in their existing projects too, e.g.
GNOME: https://gitlab.gnome.org/GNOME/glib/-/issues/2348
Fedora: https://fedoraproject.org/wiki/Changes/GitRepos-master-to-main
For libosinfo we did the rename last year. There was a little disruption
but nothing too terrible; IIRC, missing the weblate translation branch
update was our main mistake.
I'd suggest it is time for libvirt to get on this train and rename our
default branch to 'main' in all repositories.
There are essentially two options:
* Rename 'master' to 'main'
With this, anyone pulling from an existing checkout will get an
error telling them that 'master' does not exist. It won't tell
them about 'main', but at least it gives them a sign that something
in their checkout probably needs changing.
Downside is that any URLs pointing to source files / commits with
a branch name in the URL will become 404s.
* Clone 'master' to 'main'
With this, anyone pulling from an existing checkout will get no
updates. It is very easy for people to not realize that they are
tracking a branch which is no longer used.
Downside is also that the undesirable term 'master' remains
present in the repo, even if unused. We might also miss places
which still refer to 'master', which will end up outdated.
Ideally, we would rename 'master' to 'main', while the git server
adds 'symbolic-ref' to effectively create a symlink to 'main'. This
would mean anyone pulling from 'master' would get content from 'main'.
AFAICT, this is not supported by gitlab.com though. I had held off
suggesting the rename, hoping such support might arrive, but I'm
doubting it will happen on a timescale to be useful, if at all.
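On a plain git server that would be a one-liner per bare repo, e.g.
  $ git symbolic-ref refs/heads/master refs/heads/main
but as noted, gitlab.com gives us no way to run it.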
In terms of impact:
* Developers need to checkout 'main' and delete their now stale
'master' branch (see the git sketch after this list)
* Any open merge requests need updating, hitting 'Edit' and choose
the new 'main' as the target
* User repo forks may wish to delete their 'master' and push
'main', but that's entirely optional, since many people never
touch/look at the default branch in their forks.
* CI mostly shouldn't be impacted since we use CI_DEFAULT_BRANCH
in most places instead of hardcoding 'master'. A few exceptions:
- Links to artifacts include a branch name
- integration tests mention branch name
- References to check-doc/cirrus-run jobs use 'master'
as the docker tag
* GitHub mirroring won't track the rename, it'll just add 'main'
without removing 'master', so needs manual fixup.
* Need to update 'protected branch' and 'default branch' fields
in gitlab repo settings for each repo
* Weblate needs updating to translate from 'main' instead of
'master'
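For the developer checkout switch mentioned above, something roughly
like this should do (assuming the remote is called 'origin'):
  $ git fetch origin
  $ git checkout -b main origin/main
  $ git branch -D master
  $ git remote set-head origin --auto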
So the rename isn't free of cost, but it should all be one-time
only costs, which I think is something we can live with. Given
our never-ceasing development stream there's not really any
'right' time to do such a change. Just after a release is probably
as good as it gets, and January is marginally better since we skip
Feb and have a 6 week gap until the March 1st release.
With regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
[PATCH v3] cpu_s390: Implement getVendorForModel for IBM Z
by Thomas Huth
When running "virsh domcapabilities" on a s390x host, all the CPU
models show up with vendor='unknown' - which sounds kind of weird
since the vendor of these mainframe CPUs is well known: IBM.
All CPUs starting with either "z" or "gen" match a real mainframe
CPU by IBM, so let's return the string "IBM" for those now.
The only remaining ones are now the artificial "qemu" and "max"
models from QEMU itself, so it should be OK to get an "unknown"
vendor for those two.
Reviewed-by: Jiri Denemark <jdenemar(a)redhat.com>
Signed-off-by: Boris Fiuczynski <fiuczy(a)linux.ibm.com>
Signed-off-by: Thomas Huth <thuth(a)redhat.com>
---
v3: Use STRPREFIX() for the checks (thanks to Jiri for the hint!)
src/cpu/cpu_s390.c | 11 ++
tests/domaincapsdata/qemu_4.2.0.s390x.xml | 144 +++++++++++-----------
tests/domaincapsdata/qemu_5.2.0.s390x.xml | 144 +++++++++++-----------
tests/domaincapsdata/qemu_6.0.0.s390x.xml | 144 +++++++++++-----------
4 files changed, 227 insertions(+), 216 deletions(-)
diff --git a/src/cpu/cpu_s390.c b/src/cpu/cpu_s390.c
index d908a83928..81a1513ecb 100644
--- a/src/cpu/cpu_s390.c
+++ b/src/cpu/cpu_s390.c
@@ -109,6 +109,16 @@ virCPUs390ValidateFeatures(virCPUDef *cpu)
}
+static const char *
+virCPUs390GetVendorForModel(const char *modelName)
+{
+ if (STRPREFIX(modelName, "z") || STRPREFIX(modelName, "gen"))
+ return "IBM";
+
+ return NULL;
+}
+
+
struct cpuArchDriver cpuDriverS390 = {
.name = "s390",
.arch = archs,
@@ -119,4 +129,5 @@ struct cpuArchDriver cpuDriverS390 = {
.baseline = NULL,
.update = virCPUs390Update,
.validateFeatures = virCPUs390ValidateFeatures,
+ .getVendorForModel = virCPUs390GetVendorForModel,
};
diff --git a/tests/domaincapsdata/qemu_4.2.0.s390x.xml b/tests/domaincapsdata/qemu_4.2.0.s390x.xml
index 4f176e2d37..66841881a1 100644
--- a/tests/domaincapsdata/qemu_4.2.0.s390x.xml
+++ b/tests/domaincapsdata/qemu_4.2.0.s390x.xml
@@ -83,79 +83,79 @@
<feature policy='require' name='cmm'/>
</mode>
<mode name='custom' supported='yes'>
- <model usable='yes' vendor='unknown'>z800-base</model>
- <model usable='yes' vendor='unknown'>z890.2-base</model>
- <model usable='yes' vendor='unknown'>z9EC.2</model>
- <model usable='yes' vendor='unknown'>z13.2</model>
- <model usable='yes' vendor='unknown'>z9BC-base</model>
- <model usable='yes' vendor='unknown'>z990.5-base</model>
- <model usable='yes' vendor='unknown'>z890.2</model>
- <model usable='yes' vendor='unknown'>z890</model>
- <model usable='yes' vendor='unknown'>z9BC</model>
- <model usable='yes' vendor='unknown'>z13</model>
- <model usable='yes' vendor='unknown'>z196</model>
- <model usable='yes' vendor='unknown'>z13s</model>
- <model usable='yes' vendor='unknown'>z990.3</model>
- <model usable='yes' vendor='unknown'>z13s-base</model>
- <model usable='yes' vendor='unknown'>z9EC</model>
- <model usable='yes' vendor='unknown'>gen15a</model>
- <model usable='yes' vendor='unknown'>z14ZR1-base</model>
- <model usable='yes' vendor='unknown'>z14.2-base</model>
- <model usable='yes' vendor='unknown'>z900.3-base</model>
- <model usable='yes' vendor='unknown'>z13.2-base</model>
- <model usable='yes' vendor='unknown'>z196.2-base</model>
- <model usable='yes' vendor='unknown'>zBC12-base</model>
- <model usable='yes' vendor='unknown'>z9BC.2-base</model>
- <model usable='yes' vendor='unknown'>z900.2-base</model>
- <model usable='yes' vendor='unknown'>z9EC.3</model>
- <model usable='yes' vendor='unknown'>zEC12</model>
- <model usable='yes' vendor='unknown'>z900</model>
- <model usable='yes' vendor='unknown'>z114-base</model>
- <model usable='yes' vendor='unknown'>zEC12-base</model>
- <model usable='yes' vendor='unknown'>z10EC.2</model>
- <model usable='yes' vendor='unknown'>z10EC-base</model>
- <model usable='yes' vendor='unknown'>z900.3</model>
- <model usable='yes' vendor='unknown'>z14ZR1</model>
- <model usable='yes' vendor='unknown'>z10BC</model>
- <model usable='yes' vendor='unknown'>z10BC.2-base</model>
- <model usable='yes' vendor='unknown'>z990.2</model>
- <model usable='yes' vendor='unknown'>z9BC.2</model>
- <model usable='yes' vendor='unknown'>z990</model>
- <model usable='yes' vendor='unknown'>z14</model>
- <model usable='yes' vendor='unknown'>gen15b-base</model>
- <model usable='yes' vendor='unknown'>z990.4</model>
+ <model usable='yes' vendor='IBM'>z800-base</model>
+ <model usable='yes' vendor='IBM'>z890.2-base</model>
+ <model usable='yes' vendor='IBM'>z9EC.2</model>
+ <model usable='yes' vendor='IBM'>z13.2</model>
+ <model usable='yes' vendor='IBM'>z9BC-base</model>
+ <model usable='yes' vendor='IBM'>z990.5-base</model>
+ <model usable='yes' vendor='IBM'>z890.2</model>
+ <model usable='yes' vendor='IBM'>z890</model>
+ <model usable='yes' vendor='IBM'>z9BC</model>
+ <model usable='yes' vendor='IBM'>z13</model>
+ <model usable='yes' vendor='IBM'>z196</model>
+ <model usable='yes' vendor='IBM'>z13s</model>
+ <model usable='yes' vendor='IBM'>z990.3</model>
+ <model usable='yes' vendor='IBM'>z13s-base</model>
+ <model usable='yes' vendor='IBM'>z9EC</model>
+ <model usable='yes' vendor='IBM'>gen15a</model>
+ <model usable='yes' vendor='IBM'>z14ZR1-base</model>
+ <model usable='yes' vendor='IBM'>z14.2-base</model>
+ <model usable='yes' vendor='IBM'>z900.3-base</model>
+ <model usable='yes' vendor='IBM'>z13.2-base</model>
+ <model usable='yes' vendor='IBM'>z196.2-base</model>
+ <model usable='yes' vendor='IBM'>zBC12-base</model>
+ <model usable='yes' vendor='IBM'>z9BC.2-base</model>
+ <model usable='yes' vendor='IBM'>z900.2-base</model>
+ <model usable='yes' vendor='IBM'>z9EC.3</model>
+ <model usable='yes' vendor='IBM'>zEC12</model>
+ <model usable='yes' vendor='IBM'>z900</model>
+ <model usable='yes' vendor='IBM'>z114-base</model>
+ <model usable='yes' vendor='IBM'>zEC12-base</model>
+ <model usable='yes' vendor='IBM'>z10EC.2</model>
+ <model usable='yes' vendor='IBM'>z10EC-base</model>
+ <model usable='yes' vendor='IBM'>z900.3</model>
+ <model usable='yes' vendor='IBM'>z14ZR1</model>
+ <model usable='yes' vendor='IBM'>z10BC</model>
+ <model usable='yes' vendor='IBM'>z10BC.2-base</model>
+ <model usable='yes' vendor='IBM'>z990.2</model>
+ <model usable='yes' vendor='IBM'>z9BC.2</model>
+ <model usable='yes' vendor='IBM'>z990</model>
+ <model usable='yes' vendor='IBM'>z14</model>
+ <model usable='yes' vendor='IBM'>gen15b-base</model>
+ <model usable='yes' vendor='IBM'>z990.4</model>
<model usable='yes' vendor='unknown'>max</model>
- <model usable='yes' vendor='unknown'>z10EC.2-base</model>
- <model usable='yes' vendor='unknown'>gen15a-base</model>
- <model usable='yes' vendor='unknown'>z800</model>
- <model usable='yes' vendor='unknown'>zEC12.2</model>
- <model usable='yes' vendor='unknown'>z10EC</model>
- <model usable='yes' vendor='unknown'>z990.2-base</model>
- <model usable='yes' vendor='unknown'>z900-base</model>
- <model usable='yes' vendor='unknown'>z10BC.2</model>
- <model usable='yes' vendor='unknown'>z9EC-base</model>
- <model usable='yes' vendor='unknown'>z9EC.3-base</model>
- <model usable='yes' vendor='unknown'>z114</model>
- <model usable='yes' vendor='unknown'>z890.3</model>
- <model usable='yes' vendor='unknown'>z196-base</model>
- <model usable='yes' vendor='unknown'>z9EC.2-base</model>
- <model usable='yes' vendor='unknown'>z196.2</model>
- <model usable='yes' vendor='unknown'>z14.2</model>
- <model usable='yes' vendor='unknown'>z990-base</model>
- <model usable='yes' vendor='unknown'>z900.2</model>
- <model usable='yes' vendor='unknown'>z890-base</model>
- <model usable='yes' vendor='unknown'>z10EC.3</model>
- <model usable='yes' vendor='unknown'>z14-base</model>
- <model usable='yes' vendor='unknown'>z990.4-base</model>
- <model usable='yes' vendor='unknown'>z10EC.3-base</model>
- <model usable='yes' vendor='unknown'>z10BC-base</model>
- <model usable='yes' vendor='unknown'>z13-base</model>
- <model usable='yes' vendor='unknown'>z990.3-base</model>
- <model usable='yes' vendor='unknown'>zEC12.2-base</model>
- <model usable='yes' vendor='unknown'>zBC12</model>
- <model usable='yes' vendor='unknown'>z890.3-base</model>
- <model usable='yes' vendor='unknown'>z990.5</model>
- <model usable='yes' vendor='unknown'>gen15b</model>
+ <model usable='yes' vendor='IBM'>z10EC.2-base</model>
+ <model usable='yes' vendor='IBM'>gen15a-base</model>
+ <model usable='yes' vendor='IBM'>z800</model>
+ <model usable='yes' vendor='IBM'>zEC12.2</model>
+ <model usable='yes' vendor='IBM'>z10EC</model>
+ <model usable='yes' vendor='IBM'>z990.2-base</model>
+ <model usable='yes' vendor='IBM'>z900-base</model>
+ <model usable='yes' vendor='IBM'>z10BC.2</model>
+ <model usable='yes' vendor='IBM'>z9EC-base</model>
+ <model usable='yes' vendor='IBM'>z9EC.3-base</model>
+ <model usable='yes' vendor='IBM'>z114</model>
+ <model usable='yes' vendor='IBM'>z890.3</model>
+ <model usable='yes' vendor='IBM'>z196-base</model>
+ <model usable='yes' vendor='IBM'>z9EC.2-base</model>
+ <model usable='yes' vendor='IBM'>z196.2</model>
+ <model usable='yes' vendor='IBM'>z14.2</model>
+ <model usable='yes' vendor='IBM'>z990-base</model>
+ <model usable='yes' vendor='IBM'>z900.2</model>
+ <model usable='yes' vendor='IBM'>z890-base</model>
+ <model usable='yes' vendor='IBM'>z10EC.3</model>
+ <model usable='yes' vendor='IBM'>z14-base</model>
+ <model usable='yes' vendor='IBM'>z990.4-base</model>
+ <model usable='yes' vendor='IBM'>z10EC.3-base</model>
+ <model usable='yes' vendor='IBM'>z10BC-base</model>
+ <model usable='yes' vendor='IBM'>z13-base</model>
+ <model usable='yes' vendor='IBM'>z990.3-base</model>
+ <model usable='yes' vendor='IBM'>zEC12.2-base</model>
+ <model usable='yes' vendor='IBM'>zBC12</model>
+ <model usable='yes' vendor='IBM'>z890.3-base</model>
+ <model usable='yes' vendor='IBM'>z990.5</model>
+ <model usable='yes' vendor='IBM'>gen15b</model>
<model usable='no' vendor='unknown'>qemu</model>
</mode>
</cpu>
diff --git a/tests/domaincapsdata/qemu_5.2.0.s390x.xml b/tests/domaincapsdata/qemu_5.2.0.s390x.xml
index 760f514d7b..31ddbfbc75 100644
--- a/tests/domaincapsdata/qemu_5.2.0.s390x.xml
+++ b/tests/domaincapsdata/qemu_5.2.0.s390x.xml
@@ -85,79 +85,79 @@
<feature policy='require' name='cmm'/>
</mode>
<mode name='custom' supported='yes'>
- <model usable='yes' vendor='unknown'>z800-base</model>
- <model usable='yes' vendor='unknown'>z890.2-base</model>
- <model usable='yes' vendor='unknown'>z9EC.2</model>
- <model usable='yes' vendor='unknown'>z13.2</model>
- <model usable='yes' vendor='unknown'>z990.5-base</model>
- <model usable='yes' vendor='unknown'>z9BC-base</model>
- <model usable='yes' vendor='unknown'>z890.2</model>
- <model usable='yes' vendor='unknown'>z890</model>
- <model usable='yes' vendor='unknown'>z9BC</model>
- <model usable='yes' vendor='unknown'>z13</model>
- <model usable='yes' vendor='unknown'>z196</model>
- <model usable='yes' vendor='unknown'>z13s</model>
- <model usable='yes' vendor='unknown'>z990.3</model>
- <model usable='yes' vendor='unknown'>z13s-base</model>
- <model usable='yes' vendor='unknown'>z9EC</model>
- <model usable='yes' vendor='unknown'>gen15a</model>
- <model usable='yes' vendor='unknown'>z14ZR1-base</model>
- <model usable='yes' vendor='unknown'>z14.2-base</model>
- <model usable='yes' vendor='unknown'>z900.3-base</model>
- <model usable='yes' vendor='unknown'>z13.2-base</model>
- <model usable='yes' vendor='unknown'>z196.2-base</model>
- <model usable='yes' vendor='unknown'>zBC12-base</model>
- <model usable='yes' vendor='unknown'>z9BC.2-base</model>
- <model usable='yes' vendor='unknown'>z900.2-base</model>
- <model usable='yes' vendor='unknown'>z9EC.3</model>
- <model usable='yes' vendor='unknown'>zEC12</model>
- <model usable='yes' vendor='unknown'>z900</model>
- <model usable='yes' vendor='unknown'>z114-base</model>
- <model usable='yes' vendor='unknown'>zEC12-base</model>
- <model usable='yes' vendor='unknown'>z10EC.2</model>
- <model usable='yes' vendor='unknown'>z10EC-base</model>
- <model usable='yes' vendor='unknown'>z900.3</model>
- <model usable='yes' vendor='unknown'>z14ZR1</model>
- <model usable='yes' vendor='unknown'>z10BC</model>
- <model usable='yes' vendor='unknown'>z10BC.2-base</model>
- <model usable='yes' vendor='unknown'>z9BC.2</model>
- <model usable='yes' vendor='unknown'>z990</model>
- <model usable='yes' vendor='unknown'>z990.2</model>
- <model usable='yes' vendor='unknown'>z14</model>
- <model usable='yes' vendor='unknown'>gen15b-base</model>
- <model usable='yes' vendor='unknown'>z990.4</model>
+ <model usable='yes' vendor='IBM'>z800-base</model>
+ <model usable='yes' vendor='IBM'>z890.2-base</model>
+ <model usable='yes' vendor='IBM'>z9EC.2</model>
+ <model usable='yes' vendor='IBM'>z13.2</model>
+ <model usable='yes' vendor='IBM'>z990.5-base</model>
+ <model usable='yes' vendor='IBM'>z9BC-base</model>
+ <model usable='yes' vendor='IBM'>z890.2</model>
+ <model usable='yes' vendor='IBM'>z890</model>
+ <model usable='yes' vendor='IBM'>z9BC</model>
+ <model usable='yes' vendor='IBM'>z13</model>
+ <model usable='yes' vendor='IBM'>z196</model>
+ <model usable='yes' vendor='IBM'>z13s</model>
+ <model usable='yes' vendor='IBM'>z990.3</model>
+ <model usable='yes' vendor='IBM'>z13s-base</model>
+ <model usable='yes' vendor='IBM'>z9EC</model>
+ <model usable='yes' vendor='IBM'>gen15a</model>
+ <model usable='yes' vendor='IBM'>z14ZR1-base</model>
+ <model usable='yes' vendor='IBM'>z14.2-base</model>
+ <model usable='yes' vendor='IBM'>z900.3-base</model>
+ <model usable='yes' vendor='IBM'>z13.2-base</model>
+ <model usable='yes' vendor='IBM'>z196.2-base</model>
+ <model usable='yes' vendor='IBM'>zBC12-base</model>
+ <model usable='yes' vendor='IBM'>z9BC.2-base</model>
+ <model usable='yes' vendor='IBM'>z900.2-base</model>
+ <model usable='yes' vendor='IBM'>z9EC.3</model>
+ <model usable='yes' vendor='IBM'>zEC12</model>
+ <model usable='yes' vendor='IBM'>z900</model>
+ <model usable='yes' vendor='IBM'>z114-base</model>
+ <model usable='yes' vendor='IBM'>zEC12-base</model>
+ <model usable='yes' vendor='IBM'>z10EC.2</model>
+ <model usable='yes' vendor='IBM'>z10EC-base</model>
+ <model usable='yes' vendor='IBM'>z900.3</model>
+ <model usable='yes' vendor='IBM'>z14ZR1</model>
+ <model usable='yes' vendor='IBM'>z10BC</model>
+ <model usable='yes' vendor='IBM'>z10BC.2-base</model>
+ <model usable='yes' vendor='IBM'>z9BC.2</model>
+ <model usable='yes' vendor='IBM'>z990</model>
+ <model usable='yes' vendor='IBM'>z990.2</model>
+ <model usable='yes' vendor='IBM'>z14</model>
+ <model usable='yes' vendor='IBM'>gen15b-base</model>
+ <model usable='yes' vendor='IBM'>z990.4</model>
<model usable='yes' vendor='unknown'>max</model>
- <model usable='yes' vendor='unknown'>z990.2-base</model>
- <model usable='yes' vendor='unknown'>z10EC.2-base</model>
- <model usable='yes' vendor='unknown'>gen15a-base</model>
- <model usable='yes' vendor='unknown'>z800</model>
- <model usable='yes' vendor='unknown'>z10EC</model>
- <model usable='yes' vendor='unknown'>zEC12.2</model>
- <model usable='yes' vendor='unknown'>z900-base</model>
- <model usable='yes' vendor='unknown'>z10BC.2</model>
- <model usable='yes' vendor='unknown'>z9EC-base</model>
- <model usable='yes' vendor='unknown'>z9EC.3-base</model>
- <model usable='yes' vendor='unknown'>z114</model>
- <model usable='yes' vendor='unknown'>z890.3</model>
- <model usable='yes' vendor='unknown'>z196-base</model>
- <model usable='yes' vendor='unknown'>z9EC.2-base</model>
- <model usable='yes' vendor='unknown'>z196.2</model>
- <model usable='yes' vendor='unknown'>z14.2</model>
- <model usable='yes' vendor='unknown'>z990-base</model>
- <model usable='yes' vendor='unknown'>z900.2</model>
- <model usable='yes' vendor='unknown'>z10EC.3</model>
- <model usable='yes' vendor='unknown'>z890-base</model>
- <model usable='yes' vendor='unknown'>z14-base</model>
- <model usable='yes' vendor='unknown'>z990.4-base</model>
- <model usable='yes' vendor='unknown'>z10EC.3-base</model>
- <model usable='yes' vendor='unknown'>z10BC-base</model>
- <model usable='yes' vendor='unknown'>z13-base</model>
- <model usable='yes' vendor='unknown'>z990.3-base</model>
- <model usable='yes' vendor='unknown'>zEC12.2-base</model>
- <model usable='yes' vendor='unknown'>zBC12</model>
- <model usable='yes' vendor='unknown'>z890.3-base</model>
- <model usable='yes' vendor='unknown'>z990.5</model>
- <model usable='yes' vendor='unknown'>gen15b</model>
+ <model usable='yes' vendor='IBM'>z990.2-base</model>
+ <model usable='yes' vendor='IBM'>z10EC.2-base</model>
+ <model usable='yes' vendor='IBM'>gen15a-base</model>
+ <model usable='yes' vendor='IBM'>z800</model>
+ <model usable='yes' vendor='IBM'>z10EC</model>
+ <model usable='yes' vendor='IBM'>zEC12.2</model>
+ <model usable='yes' vendor='IBM'>z900-base</model>
+ <model usable='yes' vendor='IBM'>z10BC.2</model>
+ <model usable='yes' vendor='IBM'>z9EC-base</model>
+ <model usable='yes' vendor='IBM'>z9EC.3-base</model>
+ <model usable='yes' vendor='IBM'>z114</model>
+ <model usable='yes' vendor='IBM'>z890.3</model>
+ <model usable='yes' vendor='IBM'>z196-base</model>
+ <model usable='yes' vendor='IBM'>z9EC.2-base</model>
+ <model usable='yes' vendor='IBM'>z196.2</model>
+ <model usable='yes' vendor='IBM'>z14.2</model>
+ <model usable='yes' vendor='IBM'>z990-base</model>
+ <model usable='yes' vendor='IBM'>z900.2</model>
+ <model usable='yes' vendor='IBM'>z10EC.3</model>
+ <model usable='yes' vendor='IBM'>z890-base</model>
+ <model usable='yes' vendor='IBM'>z14-base</model>
+ <model usable='yes' vendor='IBM'>z990.4-base</model>
+ <model usable='yes' vendor='IBM'>z10EC.3-base</model>
+ <model usable='yes' vendor='IBM'>z10BC-base</model>
+ <model usable='yes' vendor='IBM'>z13-base</model>
+ <model usable='yes' vendor='IBM'>z990.3-base</model>
+ <model usable='yes' vendor='IBM'>zEC12.2-base</model>
+ <model usable='yes' vendor='IBM'>zBC12</model>
+ <model usable='yes' vendor='IBM'>z890.3-base</model>
+ <model usable='yes' vendor='IBM'>z990.5</model>
+ <model usable='yes' vendor='IBM'>gen15b</model>
<model usable='yes' vendor='unknown'>qemu</model>
</mode>
</cpu>
diff --git a/tests/domaincapsdata/qemu_6.0.0.s390x.xml b/tests/domaincapsdata/qemu_6.0.0.s390x.xml
index b1968668db..1cb19e051b 100644
--- a/tests/domaincapsdata/qemu_6.0.0.s390x.xml
+++ b/tests/domaincapsdata/qemu_6.0.0.s390x.xml
@@ -86,79 +86,79 @@
<feature policy='require' name='cmm'/>
</mode>
<mode name='custom' supported='yes'>
- <model usable='yes' vendor='unknown'>z800-base</model>
- <model usable='yes' vendor='unknown'>z890.2-base</model>
- <model usable='yes' vendor='unknown'>z9EC.2</model>
- <model usable='yes' vendor='unknown'>z13.2</model>
- <model usable='yes' vendor='unknown'>z990.5-base</model>
- <model usable='yes' vendor='unknown'>z9BC-base</model>
- <model usable='yes' vendor='unknown'>z890.2</model>
- <model usable='yes' vendor='unknown'>z890</model>
- <model usable='yes' vendor='unknown'>z9BC</model>
- <model usable='yes' vendor='unknown'>z13</model>
- <model usable='yes' vendor='unknown'>z196</model>
- <model usable='yes' vendor='unknown'>z13s</model>
- <model usable='yes' vendor='unknown'>z990.3</model>
- <model usable='yes' vendor='unknown'>z13s-base</model>
- <model usable='yes' vendor='unknown'>z9EC</model>
- <model usable='yes' vendor='unknown'>gen15a</model>
- <model usable='yes' vendor='unknown'>z14ZR1-base</model>
- <model usable='yes' vendor='unknown'>z14.2-base</model>
- <model usable='yes' vendor='unknown'>z900.3-base</model>
- <model usable='yes' vendor='unknown'>z13.2-base</model>
- <model usable='yes' vendor='unknown'>z196.2-base</model>
- <model usable='yes' vendor='unknown'>zBC12-base</model>
- <model usable='yes' vendor='unknown'>z9BC.2-base</model>
- <model usable='yes' vendor='unknown'>z900.2-base</model>
- <model usable='yes' vendor='unknown'>z9EC.3</model>
- <model usable='yes' vendor='unknown'>zEC12</model>
- <model usable='yes' vendor='unknown'>z900</model>
- <model usable='yes' vendor='unknown'>z114-base</model>
- <model usable='yes' vendor='unknown'>zEC12-base</model>
- <model usable='yes' vendor='unknown'>z10EC.2</model>
- <model usable='yes' vendor='unknown'>z10EC-base</model>
- <model usable='yes' vendor='unknown'>z900.3</model>
- <model usable='yes' vendor='unknown'>z14ZR1</model>
- <model usable='yes' vendor='unknown'>z10BC</model>
- <model usable='yes' vendor='unknown'>z10BC.2-base</model>
- <model usable='yes' vendor='unknown'>z9BC.2</model>
- <model usable='yes' vendor='unknown'>z990</model>
- <model usable='yes' vendor='unknown'>z990.2</model>
- <model usable='yes' vendor='unknown'>z14</model>
- <model usable='yes' vendor='unknown'>gen15b-base</model>
- <model usable='yes' vendor='unknown'>z990.4</model>
+ <model usable='yes' vendor='IBM'>z800-base</model>
+ <model usable='yes' vendor='IBM'>z890.2-base</model>
+ <model usable='yes' vendor='IBM'>z9EC.2</model>
+ <model usable='yes' vendor='IBM'>z13.2</model>
+ <model usable='yes' vendor='IBM'>z990.5-base</model>
+ <model usable='yes' vendor='IBM'>z9BC-base</model>
+ <model usable='yes' vendor='IBM'>z890.2</model>
+ <model usable='yes' vendor='IBM'>z890</model>
+ <model usable='yes' vendor='IBM'>z9BC</model>
+ <model usable='yes' vendor='IBM'>z13</model>
+ <model usable='yes' vendor='IBM'>z196</model>
+ <model usable='yes' vendor='IBM'>z13s</model>
+ <model usable='yes' vendor='IBM'>z990.3</model>
+ <model usable='yes' vendor='IBM'>z13s-base</model>
+ <model usable='yes' vendor='IBM'>z9EC</model>
+ <model usable='yes' vendor='IBM'>gen15a</model>
+ <model usable='yes' vendor='IBM'>z14ZR1-base</model>
+ <model usable='yes' vendor='IBM'>z14.2-base</model>
+ <model usable='yes' vendor='IBM'>z900.3-base</model>
+ <model usable='yes' vendor='IBM'>z13.2-base</model>
+ <model usable='yes' vendor='IBM'>z196.2-base</model>
+ <model usable='yes' vendor='IBM'>zBC12-base</model>
+ <model usable='yes' vendor='IBM'>z9BC.2-base</model>
+ <model usable='yes' vendor='IBM'>z900.2-base</model>
+ <model usable='yes' vendor='IBM'>z9EC.3</model>
+ <model usable='yes' vendor='IBM'>zEC12</model>
+ <model usable='yes' vendor='IBM'>z900</model>
+ <model usable='yes' vendor='IBM'>z114-base</model>
+ <model usable='yes' vendor='IBM'>zEC12-base</model>
+ <model usable='yes' vendor='IBM'>z10EC.2</model>
+ <model usable='yes' vendor='IBM'>z10EC-base</model>
+ <model usable='yes' vendor='IBM'>z900.3</model>
+ <model usable='yes' vendor='IBM'>z14ZR1</model>
+ <model usable='yes' vendor='IBM'>z10BC</model>
+ <model usable='yes' vendor='IBM'>z10BC.2-base</model>
+ <model usable='yes' vendor='IBM'>z9BC.2</model>
+ <model usable='yes' vendor='IBM'>z990</model>
+ <model usable='yes' vendor='IBM'>z990.2</model>
+ <model usable='yes' vendor='IBM'>z14</model>
+ <model usable='yes' vendor='IBM'>gen15b-base</model>
+ <model usable='yes' vendor='IBM'>z990.4</model>
<model usable='yes' vendor='unknown'>max</model>
- <model usable='yes' vendor='unknown'>z10EC.2-base</model>
- <model usable='yes' vendor='unknown'>gen15a-base</model>
- <model usable='yes' vendor='unknown'>z800</model>
- <model usable='yes' vendor='unknown'>z10EC</model>
- <model usable='yes' vendor='unknown'>zEC12.2</model>
- <model usable='yes' vendor='unknown'>z990.2-base</model>
- <model usable='yes' vendor='unknown'>z900-base</model>
- <model usable='yes' vendor='unknown'>z10BC.2</model>
- <model usable='yes' vendor='unknown'>z9EC-base</model>
- <model usable='yes' vendor='unknown'>z9EC.3-base</model>
- <model usable='yes' vendor='unknown'>z114</model>
- <model usable='yes' vendor='unknown'>z890.3</model>
- <model usable='yes' vendor='unknown'>z196-base</model>
- <model usable='yes' vendor='unknown'>z9EC.2-base</model>
- <model usable='yes' vendor='unknown'>z196.2</model>
- <model usable='yes' vendor='unknown'>z14.2</model>
- <model usable='yes' vendor='unknown'>z990-base</model>
- <model usable='yes' vendor='unknown'>z900.2</model>
- <model usable='yes' vendor='unknown'>z890-base</model>
- <model usable='yes' vendor='unknown'>z10EC.3</model>
- <model usable='yes' vendor='unknown'>z14-base</model>
- <model usable='yes' vendor='unknown'>z990.4-base</model>
- <model usable='yes' vendor='unknown'>z10EC.3-base</model>
- <model usable='yes' vendor='unknown'>z10BC-base</model>
- <model usable='yes' vendor='unknown'>z13-base</model>
- <model usable='yes' vendor='unknown'>z990.3-base</model>
- <model usable='yes' vendor='unknown'>zEC12.2-base</model>
- <model usable='yes' vendor='unknown'>zBC12</model>
- <model usable='yes' vendor='unknown'>z890.3-base</model>
- <model usable='yes' vendor='unknown'>z990.5</model>
- <model usable='yes' vendor='unknown'>gen15b</model>
+ <model usable='yes' vendor='IBM'>z10EC.2-base</model>
+ <model usable='yes' vendor='IBM'>gen15a-base</model>
+ <model usable='yes' vendor='IBM'>z800</model>
+ <model usable='yes' vendor='IBM'>z10EC</model>
+ <model usable='yes' vendor='IBM'>zEC12.2</model>
+ <model usable='yes' vendor='IBM'>z990.2-base</model>
+ <model usable='yes' vendor='IBM'>z900-base</model>
+ <model usable='yes' vendor='IBM'>z10BC.2</model>
+ <model usable='yes' vendor='IBM'>z9EC-base</model>
+ <model usable='yes' vendor='IBM'>z9EC.3-base</model>
+ <model usable='yes' vendor='IBM'>z114</model>
+ <model usable='yes' vendor='IBM'>z890.3</model>
+ <model usable='yes' vendor='IBM'>z196-base</model>
+ <model usable='yes' vendor='IBM'>z9EC.2-base</model>
+ <model usable='yes' vendor='IBM'>z196.2</model>
+ <model usable='yes' vendor='IBM'>z14.2</model>
+ <model usable='yes' vendor='IBM'>z990-base</model>
+ <model usable='yes' vendor='IBM'>z900.2</model>
+ <model usable='yes' vendor='IBM'>z890-base</model>
+ <model usable='yes' vendor='IBM'>z10EC.3</model>
+ <model usable='yes' vendor='IBM'>z14-base</model>
+ <model usable='yes' vendor='IBM'>z990.4-base</model>
+ <model usable='yes' vendor='IBM'>z10EC.3-base</model>
+ <model usable='yes' vendor='IBM'>z10BC-base</model>
+ <model usable='yes' vendor='IBM'>z13-base</model>
+ <model usable='yes' vendor='IBM'>z990.3-base</model>
+ <model usable='yes' vendor='IBM'>zEC12.2-base</model>
+ <model usable='yes' vendor='IBM'>zBC12</model>
+ <model usable='yes' vendor='IBM'>z890.3-base</model>
+ <model usable='yes' vendor='IBM'>z990.5</model>
+ <model usable='yes' vendor='IBM'>gen15b</model>
<model usable='yes' vendor='unknown'>qemu</model>
</mode>
</cpu>
--
2.31.1
[PATCH V2 0/4] migration: add qemu parallel migration options
by Jiang Jiacheng
Add QEMU parallel migration options to set multifd-compression,
multifd-zlib-level and multifd-zstd-level. These parameters have
been supported by QEMU since 5.0.
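Under the hood these correspond to QEMU's migration parameters,
e.g. (QMP sketch, values arbitrary):
  migrate-set-parameters {multifd-compression: zstd, multifd-zstd-level: 10}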
v2 of:
https://listman.redhat.com/archives/libvir-list/2023-January/237088.html
diff to v1:
* remove VIR_MIGRATE_PARAM_PARALLEL_COMPRESSION and related message
* support reusing VIR_MIGRATE_PARAM_COMPRESSION to set the parallel
migration compression method
* update commit message
Jiang Jiacheng (4):
Add public API for parallel compression method
qemu: Add qemu parallel migration parameters
qemu: support set parallel migration compression method
virsh: Add migrate options to set parallel compress level
docs/manpages/virsh.rst | 22 ++++---
include/libvirt/libvirt-domain.h | 24 +++++++-
src/qemu/qemu_migration.h | 2 +
src/qemu/qemu_migration_params.c | 100 ++++++++++++++++++++++++++++++-
src/qemu/qemu_migration_params.h | 3 +
tools/virsh-domain.c | 26 ++++++++
6 files changed, 166 insertions(+), 11 deletions(-)
--
2.33.0
1 year, 9 months
[libvirt PATCH v4 00/31] Use nbdkit for http/ftp/ssh network drives in libvirt
by Jonathon Jongsma
This is the fourth version of this patch series. See
https://bugzilla.redhat.com/show_bug.cgi?id=2016527 for more information about
the goal, but the summary is that RHEL does not want to ship the qemu storage
plugins for curl and ssh. Handling them outside of the qemu process provides
several advantages such as reduced attack surface and stability.
See previous series for more info:
https://listman.redhat.com/archives/libvir-list/2022-October/235052.html
Note that gitlab CI will not work for this series without changes to the ci
definitions due to the addition of the libnbd dependency.
Changes in v4:
- Added new schema that makes ssh disks actually usable with nbdkit.
- supports authentication with password or ssh key
- enable both http and https protocols together
- improve logging and error reporting
- adds a dependency on libnbd to validate the storage before launching qemu
- nbdkit output logged to a separate file
- add missing support for hotplug
- lots of smaller changes from Peter's review
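Conceptually, the pre-launch validation amounts to a libnbd probe of
nbdkit's socket, something like this sketch (not the series' actual
code; the socket path is a placeholder):
  nbdsh -u 'nbd+unix:///?socket=/run/nbdkit-disk0.sock' -c 'print(h.get_size())'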
Jonathon Jongsma (31):
schema: allow 'ssh' as a protocol for network disks
qemu: Add functions for determining nbdkit availability
qemu: expand nbdkit capabilities
util: Allow virFileCache data to be any GObject
qemu: implement basic virFileCache for nbdkit caps
qemu: implement persistent file cache for nbdkit caps
qemu: use file cache for nbdkit caps
qemu: Add qemuNbdkitProcess
qemu: query nbdkit module dir from binary
qemu: add functions to start and stop nbdkit
qemu: remove unused 'mode' param from qemuDomainLogContextNew()
Generalize qemuDomainLogContextNew()
qemu: Extract qemuDomainLogContext into a new file
qemu: move qemuProcessReadLog() to qemuLogContext
qemu: log error output from nbdkit
tests: add ability to test various nbdkit capabilities
qemu: split qemuDomainSecretStorageSourcePrepare
qemu: include nbdkit state in private xml
qemu: pass sensitive data to nbdkit via pipe
qemu: use nbdkit to serve network disks if available
util: make virCommandSetSendBuffer testable
tests: add tests for nbdkit invocation
qemu: add test for authenticating a https network disk
qemu: Monitor nbdkit process for exit
qemu: try to connect to nbdkit early to detect errors
schema: add password configuration for ssh disk
qemu: implement password auth for ssh disks with nbdkit
schema: add configuration for host verification of ssh disks
qemu: implement knownHosts for ssh disks with nbdkit
schema: add keyfile configuration for ssh disks
qemu: implement keyfile auth for ssh disk with nbdkit
build-aux/syntax-check.mk | 2 +-
docs/formatdomain.rst | 41 +-
meson.build | 14 +
meson_options.txt | 1 +
po/POTFILES | 2 +
src/conf/domain_conf.c | 32 +
src/conf/schemas/domaincommon.rng | 53 +
src/conf/storage_source_conf.c | 3 +
src/conf/storage_source_conf.h | 6 +-
src/libvirt_private.syms | 1 +
src/qemu/meson.build | 3 +
src/qemu/qemu_block.c | 162 +-
src/qemu/qemu_conf.c | 22 +
src/qemu/qemu_conf.h | 6 +
src/qemu/qemu_domain.c | 415 ++----
src/qemu/qemu_domain.h | 39 +-
src/qemu/qemu_driver.c | 3 +
src/qemu/qemu_extdevice.c | 56 +
src/qemu/qemu_hotplug.c | 7 +
src/qemu/qemu_logcontext.c | 329 ++++
src/qemu/qemu_logcontext.h | 41 +
src/qemu/qemu_nbdkit.c | 1326 +++++++++++++++++
src/qemu/qemu_nbdkit.h | 116 ++
src/qemu/qemu_nbdkitpriv.h | 31 +
src/qemu/qemu_process.c | 119 +-
src/util/vircommand.c | 17 +-
src/util/vircommand.h | 8 +
src/util/vircommandpriv.h | 4 +
src/util/virfilecache.c | 14 +-
src/util/virfilecache.h | 2 +-
tests/meson.build | 1 +
tests/qemublocktest.c | 2 +-
...w2-invalid.json => network-ssh-qcow2.json} | 0
...cow2-invalid.xml => network-ssh-qcow2.xml} | 0
.../disk-cdrom-network.args.disk0 | 6 +
.../disk-cdrom-network.args.disk1 | 8 +
.../disk-cdrom-network.args.disk1.pipe.778 | 1 +
.../disk-cdrom-network.args.disk2 | 8 +
.../disk-cdrom-network.args.disk2.pipe.780 | 1 +
.../disk-network-http.args.disk0 | 6 +
.../disk-network-http.args.disk1 | 5 +
.../disk-network-http.args.disk2 | 6 +
.../disk-network-http.args.disk2.pipe.778 | 1 +
.../disk-network-http.args.disk3 | 7 +
.../disk-network-http.args.disk3.pipe.780 | 1 +
...work-source-curl-nbdkit-backing.args.disk0 | 7 +
...ce-curl-nbdkit-backing.args.disk0.pipe.778 | 1 +
.../disk-network-source-curl.args.disk0 | 7 +
...sk-network-source-curl.args.disk0.pipe.778 | 1 +
.../disk-network-source-curl.args.disk1 | 9 +
...sk-network-source-curl.args.disk1.pipe.780 | 1 +
...sk-network-source-curl.args.disk1.pipe.782 | 1 +
.../disk-network-source-curl.args.disk2 | 7 +
...sk-network-source-curl.args.disk2.pipe.782 | 1 +
...sk-network-source-curl.args.disk2.pipe.784 | 1 +
.../disk-network-source-curl.args.disk3 | 6 +
.../disk-network-source-curl.args.disk4 | 6 +
.../disk-network-ssh-key.args.disk0 | 10 +
.../disk-network-ssh-password.args.disk0 | 9 +
...k-network-ssh-password.args.disk0.pipe.778 | 1 +
.../disk-network-ssh.args.disk0 | 7 +
.../disk-network-ssh.args.disk1 | 8 +
.../disk-network-ssh.args.disk1.pipe.778 | 1 +
.../disk-network-ssh.args.disk2 | 9 +
tests/qemunbdkittest.c | 302 ++++
tests/qemustatusxml2xmldata/modern-in.xml | 4 +
...sk-cdrom-network-nbdkit.x86_64-latest.args | 42 +
.../disk-cdrom-network-nbdkit.xml | 1 +
...isk-network-http-nbdkit.x86_64-latest.args | 45 +
.../disk-network-http-nbdkit.xml | 1 +
...rce-curl-nbdkit-backing.x86_64-latest.args | 38 +
...isk-network-source-curl-nbdkit-backing.xml | 45 +
...work-source-curl-nbdkit.x86_64-latest.args | 50 +
.../disk-network-source-curl-nbdkit.xml | 1 +
...isk-network-source-curl.x86_64-latest.args | 54 +
.../disk-network-source-curl.xml | 74 +
.../qemuxml2argvdata/disk-network-ssh-key.xml | 33 +
...disk-network-ssh-nbdkit.x86_64-latest.args | 36 +
.../disk-network-ssh-nbdkit.xml | 1 +
...sk-network-ssh-password.x86_64-latest.args | 36 +
.../disk-network-ssh-password.xml | 35 +
.../disk-network-ssh.x86_64-latest.args | 36 +
tests/qemuxml2argvdata/disk-network-ssh.xml | 32 +
tests/qemuxml2argvtest.c | 19 +
tests/testutilsqemu.c | 27 +
tests/testutilsqemu.h | 5 +
86 files changed, 3463 insertions(+), 475 deletions(-)
create mode 100644 src/qemu/qemu_logcontext.c
create mode 100644 src/qemu/qemu_logcontext.h
create mode 100644 src/qemu/qemu_nbdkit.c
create mode 100644 src/qemu/qemu_nbdkit.h
create mode 100644 src/qemu/qemu_nbdkitpriv.h
rename tests/qemublocktestdata/imagecreate/{network-ssh-qcow2-invalid.json => network-ssh-qcow2.json} (100%)
rename tests/qemublocktestdata/imagecreate/{network-ssh-qcow2-invalid.xml => network-ssh-qcow2.xml} (100%)
create mode 100644 tests/qemunbdkitdata/disk-cdrom-network.args.disk0
create mode 100644 tests/qemunbdkitdata/disk-cdrom-network.args.disk1
create mode 100644 tests/qemunbdkitdata/disk-cdrom-network.args.disk1.pipe.778
create mode 100644 tests/qemunbdkitdata/disk-cdrom-network.args.disk2
create mode 100644 tests/qemunbdkitdata/disk-cdrom-network.args.disk2.pipe.780
create mode 100644 tests/qemunbdkitdata/disk-network-http.args.disk0
create mode 100644 tests/qemunbdkitdata/disk-network-http.args.disk1
create mode 100644 tests/qemunbdkitdata/disk-network-http.args.disk2
create mode 100644 tests/qemunbdkitdata/disk-network-http.args.disk2.pipe.778
create mode 100644 tests/qemunbdkitdata/disk-network-http.args.disk3
create mode 100644 tests/qemunbdkitdata/disk-network-http.args.disk3.pipe.780
create mode 100644 tests/qemunbdkitdata/disk-network-source-curl-nbdkit-backing.args.disk0
create mode 100644 tests/qemunbdkitdata/disk-network-source-curl-nbdkit-backing.args.disk0.pipe.778
create mode 100644 tests/qemunbdkitdata/disk-network-source-curl.args.disk0
create mode 100644 tests/qemunbdkitdata/disk-network-source-curl.args.disk0.pipe.778
create mode 100644 tests/qemunbdkitdata/disk-network-source-curl.args.disk1
create mode 100644 tests/qemunbdkitdata/disk-network-source-curl.args.disk1.pipe.780
create mode 100644 tests/qemunbdkitdata/disk-network-source-curl.args.disk1.pipe.782
create mode 100644 tests/qemunbdkitdata/disk-network-source-curl.args.disk2
create mode 100644 tests/qemunbdkitdata/disk-network-source-curl.args.disk2.pipe.782
create mode 100644 tests/qemunbdkitdata/disk-network-source-curl.args.disk2.pipe.784
create mode 100644 tests/qemunbdkitdata/disk-network-source-curl.args.disk3
create mode 100644 tests/qemunbdkitdata/disk-network-source-curl.args.disk4
create mode 100644 tests/qemunbdkitdata/disk-network-ssh-key.args.disk0
create mode 100644 tests/qemunbdkitdata/disk-network-ssh-password.args.disk0
create mode 100644 tests/qemunbdkitdata/disk-network-ssh-password.args.disk0.pipe.778
create mode 100644 tests/qemunbdkitdata/disk-network-ssh.args.disk0
create mode 100644 tests/qemunbdkitdata/disk-network-ssh.args.disk1
create mode 100644 tests/qemunbdkitdata/disk-network-ssh.args.disk1.pipe.778
create mode 100644 tests/qemunbdkitdata/disk-network-ssh.args.disk2
create mode 100644 tests/qemunbdkittest.c
create mode 100644 tests/qemuxml2argvdata/disk-cdrom-network-nbdkit.x86_64-latest.args
create mode 120000 tests/qemuxml2argvdata/disk-cdrom-network-nbdkit.xml
create mode 100644 tests/qemuxml2argvdata/disk-network-http-nbdkit.x86_64-latest.args
create mode 120000 tests/qemuxml2argvdata/disk-network-http-nbdkit.xml
create mode 100644 tests/qemuxml2argvdata/disk-network-source-curl-nbdkit-backing.x86_64-latest.args
create mode 100644 tests/qemuxml2argvdata/disk-network-source-curl-nbdkit-backing.xml
create mode 100644 tests/qemuxml2argvdata/disk-network-source-curl-nbdkit.x86_64-latest.args
create mode 120000 tests/qemuxml2argvdata/disk-network-source-curl-nbdkit.xml
create mode 100644 tests/qemuxml2argvdata/disk-network-source-curl.x86_64-latest.args
create mode 100644 tests/qemuxml2argvdata/disk-network-source-curl.xml
create mode 100644 tests/qemuxml2argvdata/disk-network-ssh-key.xml
create mode 100644 tests/qemuxml2argvdata/disk-network-ssh-nbdkit.x86_64-latest.args
create mode 120000 tests/qemuxml2argvdata/disk-network-ssh-nbdkit.xml
create mode 100644 tests/qemuxml2argvdata/disk-network-ssh-password.x86_64-latest.args
create mode 100644 tests/qemuxml2argvdata/disk-network-ssh-password.xml
create mode 100644 tests/qemuxml2argvdata/disk-network-ssh.x86_64-latest.args
create mode 100644 tests/qemuxml2argvdata/disk-network-ssh.xml
--
2.39.0
[PATCH v3 0/5] logging: add log cleanup for obsolete domains
by Oleg Vasilev
Presently, logs from deleted domains remain forever. The particular motivation
comes from a case where libguestfs repeatedly created transient VMs,
which in turn created plenty of logs. This takes up space, and the large
number of files troubles filesystem navigation.
More motivation in [1]. Patch solving same problem in [2].
Changes in v3: codestyle cleanup, minor fixes
Changes in v2: substantial rework according to Martin Kletzander's comments
v1: https://www.mail-archive.com/libvir-list@redhat.com/msg233754.html
v2: https://www.spinics.net/linux/fedora/libvir/msg236081.html
[1]: https://listman.redhat.com/archives/libvir-list/2022-February/228149.html
[2]: https://listman.redhat.com/archives/libvir-list/2022-February/msg00865.html
CC: Martin Kletzander <mkletzan(a)redhat.com>
Oleg Vasilev (5):
logging: refactor to store config inside log handler
logging: move virLogHandler to header
logging: add configuration for future log cleaner
logging: add log cleanup for obsolete domains
logging: use the log cleaner
po/POTFILES | 1 +
src/logging/log_cleaner.c | 268 +++++++++++++++++++++++++++++++
src/logging/log_cleaner.h | 29 ++++
src/logging/log_daemon.c | 6 +-
src/logging/log_daemon_config.c | 9 ++
src/logging/log_daemon_config.h | 3 +
src/logging/log_handler.c | 64 +++-----
src/logging/log_handler.h | 50 ++++--
src/logging/meson.build | 1 +
src/logging/test_virtlogd.aug.in | 2 +
src/logging/virtlogd.aug | 2 +
src/logging/virtlogd.conf | 14 ++
12 files changed, 391 insertions(+), 58 deletions(-)
create mode 100644 src/logging/log_cleaner.c
create mode 100644 src/logging/log_cleaner.h
--
2.39.1
[PATCH 0/3] qemu: Fix setting TPM state seclabels wrt save/restore
by Michal Privoznik
*** BLURB HERE ***
Michal Prívozník (3):
qemuProcessStop: Fix detection of outgoing migration for external
devices
qemuExtTPMStop: Restore TPM state label more often
qemuProcessLaunch: Tighten rules for external devices wrt incoming
migration
src/qemu/qemu_process.c | 11 +++++++++--
src/qemu/qemu_tpm.c | 2 +-
2 files changed, 10 insertions(+), 3 deletions(-)
--
2.39.1
[libvirt PATCH v2 0/8] Extract the integration job commands to a shell script
by Erik Skultety
Using shell scripts rather than inlining shell commands into YAML feels more
natural, more readable, and will keep all the different variations of execution
consistent. Essentially the only disadvantage is that we won't see each command
listed one-by-one in gitlab's log output (unless we set -x, that is), but given
that the shell would complain if something was wrong with the script, it's
fairly easy to identify the problem.
Here's a test pipeline after the change:
https://gitlab.com/eskultety/libvirt/-/pipelines/759277200
Since v1:
- 3/7 - reworded commit message as requested
- 4/7 was dropped
- point the SCRATCH_DIR to /var/tmp instead of /tmp to not be limited by the
size of ramdisk mounted in there
Erik Skultety (8):
syntax-check: Drop the shell's 'check for minus' rule
ci: Move the SCRATCH_DIR from /tmp
ci: integration: Extract several hidden job definitions to a script
ci: integration: Drop the 'install-deps' hidden job and reference
ci: integration-template: Drop the '-lt Fedora 35' check
ci: integration.sh: Add/Rewrite/Reformat commentaries
ci: integration.sh: Replace 'test' with '[' operator
ci: integration.sh: Define the SCRATCH_DIR variable for local
execution
build-aux/syntax-check.mk | 9 --------
ci/integration-template.yml | 44 +++--------------------------------
ci/integration.sh | 46 +++++++++++++++++++++++++++++++++++++
3 files changed, 49 insertions(+), 50 deletions(-)
create mode 100644 ci/integration.sh
--
2.39.1
[PATCH 0/7] various cleanups
by Peter Krempa
A collection of cleanup patches from my previous series attempting to
remove secrets clearing, refactored so that they don't depend on the
patches which were not accepted.
Peter Krempa (7):
virCryptoEncryptDataAESgnutls: Restructure control flow
virStorageBackendISCSISetAuth: Refactor cleanup
virStorageBackendISCSISetAuth: Use g_strndup to '\0' terminate data
virStorageBackendISCSIDirectSetAuth: Refactor cleanup
virStorageBackendISCSIDirectSetAuth: Use 'g_strndup' to '\0' terminate
data
libxlMakeNetworkDiskSrc: Refactor cleanup
storageBackendCreateQemuImgSecretPath: Refactor cleanup
src/libxl/libxl_conf.c | 17 ++++------
src/storage/storage_backend_iscsi.c | 17 ++++------
src/storage/storage_backend_iscsi_direct.c | 23 ++++++--------
src/storage/storage_util.c | 36 ++++++++++------------
src/util/vircrypto.c | 28 ++++++++---------
5 files changed, 49 insertions(+), 72 deletions(-)
--
2.39.1
[PATCH 0/7] virDomainNetDefFormat: Modernize XML formatting
by Michal Privoznik
A bit. After these, the function is still long and could be broken into
smaller ones, but let's leave that as an exercise for future us.
Michal Prívozník (7):
qemuxml2xmloutdata: Turn net-mtu.xml into a symlink
virDomainNetDefFormat: Rename @attrBuf to @targetAttrBuf
virDomainNetDefFormat: Modernize <tune/> formatting
virDomainNetDefFormat: Modernize <guest/> formatting
virDomainNetDefFormat: Modernize <source/> formatting
virDomainNetDefFormat: Simplify @sourceAttrBuf handling for some types
of VIR_DOMAIN_NET
virDomainNetDefFormat: Modernize <mac/> formatting
src/conf/domain_conf.c | 151 ++++++++++-----------------
tests/qemuxml2argvdata/net-mtu.xml | 26 +++--
tests/qemuxml2xmloutdata/net-mtu.xml | 72 +------------
3 files changed, 74 insertions(+), 175 deletions(-)
mode change 100644 => 120000 tests/qemuxml2xmloutdata/net-mtu.xml
--
2.39.1