On Fri, Jul 12, 2019 at 06:05:46PM +0200, Peter Krempa wrote:
Block jobs currently belong to disks only, so we can look up the block
job data in the corresponding disk. This won't be the case when using
blockdev, as certain jobs don't correspond to a disk at all and most of
them can run on a part of the backing chain.

Add a global table of block jobs which can be used to look up the job
data when job events need to be processed.

The table is a hash table organized by job name and holds a reference
to the job. New and running jobs will later be added to this table.

Reference counting will allow synchronous callers to reap the job state.
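To illustrate how event processing could consume such a table, a rough
sketch follows. This is only an illustration of the intended pattern:
qemuBlockJobEventProcessSketch, the use of qemuBlockJobDataPtr here and
the elided state update are placeholders for what later patches
introduce; only virHashLookup, virObjectRef and virObjectUnref are
existing libvirt helpers.

/* Sketch only: look up the job for an incoming QEMU event by name and
 * hold a temporary reference on it while processing the event. */
static void
qemuBlockJobEventProcessSketch(qemuDomainObjPrivatePtr priv,
                               const char *jobname)
{
    qemuBlockJobDataPtr job = virHashLookup(priv->blockjobs, jobname);

    if (!job)
        return;  /* not a job tracked in the table */

    virObjectRef(job);   /* keep the job alive while we use it */
    /* ... update the job's state from the event data ... */
    virObjectUnref(job); /* drop our temporary reference */
}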
Signed-off-by: Peter Krempa <pkrempa@redhat.com>
---
src/qemu/qemu_domain.c | 7 +++++++
src/qemu/qemu_domain.h | 3 +++
2 files changed, 10 insertions(+)
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 073c9744d3..5af8f3b30c 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -1982,6 +1982,9 @@ qemuDomainObjPrivateAlloc(void *opaque)
     if (!(priv->devs = virChrdevAlloc()))
         goto error;

+    if (!(priv->blockjobs = virHashCreate(5, virObjectFreeHashData)))

A prime choice for the size.

+        goto error;
+
     priv->migMaxBandwidth = QEMU_DOMAIN_MIG_BANDWIDTH_MAX;
     priv->driver = opaque;
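As a side note on virObjectFreeHashData: using it as the data-free
callback means the table owns one reference per entry, released again
when the entry is removed or the table is freed. Registering a job
would then presumably look roughly like the sketch below;
qemuBlockJobRegisterSketch is a made-up name and the real registration
helper comes in later patches, while virObjectRef, virObjectUnref and
virHashAddEntry are existing libvirt APIs.

/* Hypothetical registration helper: grab a reference for the table;
 * virObjectFreeHashData releases it when the entry goes away. */
static int
qemuBlockJobRegisterSketch(qemuDomainObjPrivatePtr priv,
                           qemuBlockJobDataPtr job,
                           const char *name)
{
    virObjectRef(job);
    if (virHashAddEntry(priv->blockjobs, name, job) < 0) {
        virObjectUnref(job);
        return -1;
    }
    return 0;
}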
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Jano