[libvirt RFCv9 00/31] multifd save restore prototype

This is v9 of the multifd save prototype, which splits things up more in order
to hopefully offer more upstreamable code, and adds DIRECT I/O APIs and changes
to the save/restore format to be more block friendly, potentially removing the
need of an iohelper completely.

This is demonstrated in this case using the multifd streams supplied by the
multifd helper, but could also work with block-aware QEMU streams if ever
available in the future.

KNOWN ISSUES:

a) still applies only to save/restore (no managed save etc)
b) multifd: saves to multiple files instead of just one file

---

changes from v8:

* rebased on master
* reordered patches to add more upstreamable content at the start
* split introduction of virQEMUSaveFd, so the first part is multifd-free
* new virQEMUSaveDataRead as a mirror of virQEMUSaveDataWrite
* introduced virFileDirect API, using it in virFileDisk operations and for
  virQEMUSaveRead and virQEMUSaveWrite

---

changes from v7:

* [ base params API and iohelper refactoring upstreamed ]
* extended the QEMU save image format more, to record the nr of multifd
  channels on save. Made the data header struct packed.
* removed --parallel-connections from the restore command, as now it is
  useless due to QEMU save image format extension.
* separate out patches to expose migration_params APIs to saveimage,
  including qemuMigrationParamsSetString, SetCap, SetInt.
* fixed bugs in the ImageOpen patch (missing saveFd init), removed some
  whitespace, and fixed some convoluted code paths for return value -3.

---

changes from v6:

* improved error path handling, with error messages and especially
  cancellation of qemu process on error during restore.
* split patches more and reordered them to keep general refactoring at the
  beginning before the --parallel stuff is introduced.
* improved multifd compression support, including adding an enum and
  extending the QEMU save image format to record the compression used on
  save, and pick it up automatically on restore.

---

changes from v4:

* runIO renamed to virFileDiskCopy and rethought arguments
* renamed new APIs from ...ParametersFlags to ...Params
* introduce the new virDomainSaveParams and virDomainRestoreParams without
  any additional parameters, so they can be upstreamed first.
* solved the issue in the gendispatch.pl script generating code that was
  missing the conn parameter.

---

changes from v3:

* reordered series to have all helper-related change at the start
* solved all reported issues from ninja test, including documentation
* fixed most broken migration capabilities code (still imperfect likely)
* added G_GNUC_UNUSED as needed
* after multifd restore, added what I think were the missing operations:
  qemuProcessRefreshState(), qemuProcessStartCPUs() - most importantly,
  virDomainObjSave()
  The domain now starts running after restore without further encouragement
* removed the sleep(10) from the multifd-helper

---

changes from v2:

* added ability to restore the VM from disk using multifd
* fixed the multifd-helper to work in both directions, assuming the need to
  listen for save, and connect for restore.
* fixed a large number of bugs, and probably introduced some :-)

---

Claudio Fontana (31):
  virfile: introduce virFileDirect APIs
  virfile: use virFileDirect API in runIOCopy
  qemu: saveimage: rework save format and read/write to be O_DIRECT friendly
  virfile: virFileDiskCopy: prepare for O_DIRECT files without wrapper
  qemu: saveimage: introduce virQEMUSaveFd
  qemu: saveimage: convert qemuSaveImageCreate to use virQEMUSaveFd
  qemu: saveimage: convert qemuSaveImageOpen to use virQEMUSaveFd
  tools: prepare doSave to use parameters
  tools: prepare cmdRestore to use parameters
  libvirt: add new VIR_DOMAIN_SAVE_PARALLEL flag and parameter
  qemu: add stub support for VIR_DOMAIN_SAVE_PARALLEL in save
  qemu: add stub support for VIR_DOMAIN_SAVE_PARALLEL in restore
  multifd-helper: new helper for parallel save/restore
  qemu: saveimage: add virQEMUSaveFd APIs for multifd
  qemu: saveimage: wire up saveimage code with the multifd helper
  qemu: capabilities: add multifd to the probed migration capabilities
  qemu: saveimage: add multifd related fields to save format
  qemu: migration_params: add APIs to set Int and Cap
  qemu: migration: implement qemuMigrationSrcToFilesMultiFd for save
  qemu: add parameter to qemuMigrationDstRun to skip waiting
  qemu: implement qemuSaveImageLoadMultiFd for restore
  tools: add parallel parameter to virsh save command
  tools: add parallel parameter to virsh restore command
  qemu: add migration parameter multifd-compression
  libvirt: add new VIR_DOMAIN_SAVE_PARAM_PARALLEL_COMPRESSION
  qemu: saveimage: add parallel compression argument to ImageCreate
  qemu: saveimage: add stub support for multifd compression parameter
  qemu: migration: expose qemuMigrationParamsSetString
  qemu: saveimage: implement multifd-compression in parallel save
  qemu: saveimage: restore compressed parallel images
  tools: add parallel-compression parameter to virsh save command

 docs/manpages/virsh.rst                       |  39 +-
 include/libvirt/libvirt-domain.h              |  29 +
 po/POTFILES.in                                |   1 +
 src/libvirt_private.syms                      |   7 +
 src/qemu/qemu_capabilities.c                  |   4 +
 src/qemu/qemu_capabilities.h                  |   2 +
 src/qemu/qemu_driver.c                        | 146 ++--
 src/qemu/qemu_migration.c                     | 160 ++--
 src/qemu/qemu_migration.h                     |  16 +-
 src/qemu/qemu_migration_params.c              |  71 +-
 src/qemu/qemu_migration_params.h              |  15 +
 src/qemu/qemu_process.c                       |   3 +-
 src/qemu/qemu_process.h                       |   5 +-
 src/qemu/qemu_saveimage.c                     | 753 +++++++++++++-----
 src/qemu/qemu_saveimage.h                     |  73 +-
 src/qemu/qemu_snapshot.c                      |   6 +-
 src/util/meson.build                          |  16 +
 src/util/multifd-helper.c                     | 247 ++++++
 src/util/virfile.c                            | 316 +++++++-
 src/util/virfile.h                            |  10 +
 src/util/virthread.c                          |   5 +
 src/util/virthread.h                          |   1 +
 .../caps_4.0.0.aarch64.xml                    |   1 +
 .../qemucapabilitiesdata/caps_4.0.0.ppc64.xml |   1 +
 .../caps_4.0.0.riscv32.xml                    |   1 +
 .../caps_4.0.0.riscv64.xml                    |   1 +
 .../qemucapabilitiesdata/caps_4.0.0.s390x.xml |   1 +
 .../caps_4.0.0.x86_64.xml                     |   1 +
 .../caps_4.1.0.x86_64.xml                     |   1 +
 .../caps_4.2.0.aarch64.xml                    |   1 +
 .../qemucapabilitiesdata/caps_4.2.0.ppc64.xml |   1 +
 .../qemucapabilitiesdata/caps_4.2.0.s390x.xml |   1 +
 .../caps_4.2.0.x86_64.xml                     |   1 +
 .../caps_5.0.0.aarch64.xml                    |   2 +
 .../qemucapabilitiesdata/caps_5.0.0.ppc64.xml |   2 +
 .../caps_5.0.0.riscv64.xml                    |   2 +
 .../caps_5.0.0.x86_64.xml                     |   2 +
 .../qemucapabilitiesdata/caps_5.1.0.sparc.xml |   2 +
 .../caps_5.1.0.x86_64.xml                     |   2 +
 .../caps_5.2.0.aarch64.xml                    |   2 +
 .../qemucapabilitiesdata/caps_5.2.0.ppc64.xml |   2 +
 .../caps_5.2.0.riscv64.xml                    |   2 +
 .../qemucapabilitiesdata/caps_5.2.0.s390x.xml |   2 +
 .../caps_5.2.0.x86_64.xml                     |   2 +
 .../caps_6.0.0.aarch64.xml                    |   2 +
 .../qemucapabilitiesdata/caps_6.0.0.s390x.xml |   2 +
 .../caps_6.0.0.x86_64.xml                     |   2 +
 .../caps_6.1.0.x86_64.xml                     |   2 +
 .../caps_6.2.0.aarch64.xml                    |   2 +
 .../qemucapabilitiesdata/caps_6.2.0.ppc64.xml |   2 +
 .../caps_6.2.0.x86_64.xml                     |   2 +
 .../caps_7.0.0.aarch64.xml                    |   2 +
 .../qemucapabilitiesdata/caps_7.0.0.ppc64.xml |   2 +
 .../caps_7.0.0.x86_64.xml                     |   2 +
 .../caps_7.1.0.x86_64.xml                     |   2 +
 tools/virsh-domain.c                          | 101 ++-
 56 files changed, 1675 insertions(+), 406 deletions(-)
 create mode 100644 src/util/multifd-helper.c

--
2.35.3

these functions help with allocating buffers, aligning, reading and writing files opened with O_DIRECT. Signed-off-by: Claudio Fontana <cfontana@suse.de> --- src/libvirt_private.syms | 6 + src/util/virfile.c | 249 +++++++++++++++++++++++++++++++++++++++ src/util/virfile.h | 10 ++ 3 files changed, 265 insertions(+) diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms index 9309259751..48ed75aa16 100644 --- a/src/libvirt_private.syms +++ b/src/libvirt_private.syms @@ -2236,7 +2236,13 @@ virFileComparePaths; virFileCopyACLs; virFileDataSync; virFileDeleteTree; +virFileDirectBufferNew; +virFileDirectCopyBuf; virFileDirectFdFlag; +virFileDirectRead; +virFileDirectReadCopy; +virFileDirectReadLim; +virFileDirectWrite; virFileExists; virFileFclose; virFileFdopen; diff --git a/src/util/virfile.c b/src/util/virfile.c index e4522b5f67..3714ca6be4 100644 --- a/src/util/virfile.c +++ b/src/util/virfile.c @@ -192,6 +192,210 @@ virFileDirectFdFlag(void) return O_DIRECT ? O_DIRECT : -1; } +/** + * virFileDirectAlign: align a value, which may include a pointer. + * + * @value: the value that will be incremented to reach alignment + * + * Returns the aligned value. + */ +static inline uintptr_t +virFileDirectAlign(uintptr_t value) +{ + return (value + VIR_FILE_DIRECT_ALIGN_MASK) & ~VIR_FILE_DIRECT_ALIGN_MASK; +} + +/** + * virFileDirectRead: perform a single aligned read. + * @fd: O_DIRECT file descriptor + * @buf: aligned buffer to read into + * @count: the desired read size. Note that if unaligned, + * extra bytes will be read based on alignment. + * + * Note that buf should be able to contain count plus the alignment! + * + * Returns < 0 and errno set on error, or the number of bytes read, + * which may be smaller or even greater than count. + */ +ssize_t +virFileDirectRead(int fd, void *buf, size_t count) +{ + size_t aligned_count = virFileDirectAlign(count); + while (count > 0) { + ssize_t r = read(fd, buf, aligned_count); + if (r < 0 && errno == EINTR) + continue; + return r; + } + return 0; +} + +/** + * virFileDirectReadLim: perform multiple aligned reads up to limit + * @fd: O_DIRECT file descriptor + * @buf: aligned buffer to read into + * @limit: the desired limit to read into buffer. + * Note that if unaligned, extra bytes will be read based on alignment. + * + * Note that buf should be able to contain limit plus the alignment! + * + * Compared with virFileDirectRead, this function reads potentially + * multiple times to fill up to buffer with up to limit bytes plus alignment, + * so on success it does not return a number of bytes smaller than limit. + * + * Returns < 0 and errno set on error, + * and on success the number of bytes read, which may be greater than limit + * due to alignment. + */ +ssize_t +virFileDirectReadLim(int fd, void *buf, size_t lim) +{ + ssize_t nread = 0; + + while (lim > 0) { + ssize_t r = virFileDirectRead(fd, buf, lim); + if (r < 0) + return r; + if (r == 0) + return nread; + buf = (char *)buf + r; + nread += r; + if (lim < r) + break; + lim -= r; + } + return nread; +} + +/** + * virFileDirectCopyBuf: copy a buffer contents to destination in memory + * @buf: aligned buffer to copy from + * @count: amount of data to copy + * @dst: the destination in memory + * @dst_len: the maximum the destination can hold. + * + * copies up to count bytes from buf into dst, but not more than dst_len. + * increments buf pointer and dst pointer, as well as decrementing the + * maximum the destination can hold (dst_len). + * + * Returns the amount copied. 
+ */ +size_t +virFileDirectCopyBuf(void **buf, size_t count, void **dst, size_t *dst_len) +{ + size_t to_copy; + + to_copy = count > *dst_len ? *dst_len : count; + memcpy(*dst, *buf, to_copy); + *dst_len -= to_copy; + *(char **)dst += to_copy; + *(char **)buf += to_copy; + return to_copy; +} + +/** + * virFileDirectReadCopy: read an fd and copy contents to memory + * @fd: O_DIRECT file descriptor + * @buf: aligned buffer to read the fd into, and then copy from + * @buflen: size of the buffer + * @dst: the destination in memory + * @dst_len: the maximum the destination can hold. + * + * reads data from the fd file descriptor into the buffer, + * and then copy to the destination, filling it up to dst_len. + * + * Returns < 0 and errno set on error, + * or the number of bytes read, which may be past the requested dst_len, + * or may be smaller if the fd does not contain enough data. + * + * The buf pointer is updated to point to eventual exccess data in the buffer. + */ +ssize_t +virFileDirectReadCopy(int fd, void **buf, size_t buflen, void *dst, size_t dst_len) +{ + ssize_t nread = 0; + void *d = dst; + char *s = *buf; + + while (dst_len > 0) { + ssize_t rv; + *buf = s; + rv = virFileDirectReadLim(fd, s, dst_len < buflen ? dst_len : buflen); + if (rv < 0) + return rv; + if (rv == 0) + return nread; /* not enough data to fulfill request */ + + nread += rv; /* note, we might read past the requested len */ + virFileDirectCopyBuf(buf, rv, &d, &dst_len); + } + return nread; +} + +/** + * virFileDirectWrite: perform a single aligned write. + * @fd: O_DIRECT file descriptor to write to + * @buf: aligned buffer to write from + * @count: the desired write size. Note that if unaligned, + * extra 0 bytes will be written based on alignment. + * + * Returns < 0 and errno set on error, or the number of bytes written, + * which may be smaller or even greater than count. + */ +ssize_t +virFileDirectWrite(int fd, void *buf, size_t count) +{ + size_t aligned_count = virFileDirectAlign(count); + if (aligned_count > count) { + memset((char *)buf + count, 0, aligned_count - count); + } + while (count > 0) { + ssize_t r = write(fd, buf, aligned_count); /* sc_avoid_write */ + if (r < 0 && errno == EINTR) + continue; + return r; + } + return 0; +} + +/** + * virFileDirectWriteLim: perform multiple aligned writes up to limit + * @fd: O_DIRECT file descriptor + * @buf: aligned buffer to write from + * @limit: the desired limit for the total write size + * Note that if unaligned, extra bytes will be written based on alignment. + * + * Note that buf should be able to contain limit plus the alignment! + * + * Compared with virFileDirectWrite, this function writes potentially + * multiple times to drain the buffer up to the limit bytes plus alignment, + * so on success it does not return a number of bytes smaller than limit. + * + * Returns < 0 and errno set on error, + * and on success the number of bytes written, which may be greater than limit + * due to alignment. + */ + +ssize_t +virFileDirectWriteLim(int fd, void *buf, size_t lim) +{ + ssize_t nwritten = 0; + + while (lim > 0) { + ssize_t r = virFileDirectWrite(fd, buf, lim); + if (r < 0) + return r; + if (r == 0) + return nwritten; + buf = (char *)buf + r; + nwritten += r; + if (lim < r) + break; + lim -= r; + } + return nwritten; +} + /* Opaque type for managing a wrapper around a fd. For now, * read-write is not supported, just a single direction. 
*/ struct _virFileWrapperFd { @@ -202,6 +406,41 @@ struct _virFileWrapperFd { #ifndef WIN32 +/** + * virFileDirectBufferNew: allocate a buffer and return the first + * block-aligned address in it. + * + * @alloc_base: pointer to the to-be-allocated memory buffer. + * @buflen: desired length, which should be greater than alignment. + * + * Allocate a memory area large enough to accommodate an aligned + * buffer of size buflen. + * + * On success, *alloc_base is set to the newly allocated memory, + * and the aligned buffer within it is returned. + * + * On failure, *alloc_base is set to NULL and the function + * returns NULL. + */ +void * +virFileDirectBufferNew(void **alloc_base, size_t buflen) +{ + void *buf; + buflen = virFileDirectAlign(buflen); + +# if WITH_POSIX_MEMALIGN + if (posix_memalign(alloc_base, VIR_FILE_DIRECT_ALIGN_MASK + 1, buflen)) { + *alloc_base = NULL; + return NULL; + } + buf = *alloc_base; +# else + *alloc_base = g_malloc(buflen + VIR_FILE_DIRECT_ALIGN_MASK); + buf = virFileDirectAlign((uintptr_t)*alloc_base); +# endif + return buf; +} + # ifdef __linux__ /** @@ -372,6 +611,16 @@ virFileWrapperFdNew(int *fd, const char *name, unsigned int flags) return NULL; } #else /* WIN32 */ + +void * +virFileDirectBufferNew(void **alloc_base G_GNUC_UNUSED, + size_t buflen G_GNUC_UNUSED) +{ + virReportError(VIR_ERR_INTERNAL_ERROR, "%s", + _("virFileDirectBufferNew unsupported on this platform")); + return NULL; +} + virFileWrapperFd * virFileWrapperFdNew(int *fd G_GNUC_UNUSED, const char *name G_GNUC_UNUSED, diff --git a/src/util/virfile.h b/src/util/virfile.h index 8e378efe30..5031f99f7c 100644 --- a/src/util/virfile.h +++ b/src/util/virfile.h @@ -104,6 +104,16 @@ typedef struct _virFileWrapperFd virFileWrapperFd; int virFileDirectFdFlag(void); +#define VIR_FILE_DIRECT_ALIGN_MASK ((64 * 1024) - 1) + +void *virFileDirectBufferNew(void **alloc_base, size_t buflen); +ssize_t virFileDirectRead(int fd, void *buf, size_t count); +ssize_t virFileDirectWrite(int fd, void *buf, size_t count); +ssize_t virFileDirectReadLim(int fd, void *buf, size_t lim); +ssize_t virFileDirectWriteLim(int fd, void *buf, size_t lim); +size_t virFileDirectCopyBuf(void **buf, size_t count, void **dst, size_t *dst_len); +ssize_t virFileDirectReadCopy(int fd, void **buf, size_t buflen, void *dst, size_t dst_len); + typedef enum { VIR_FILE_WRAPPER_BYPASS_CACHE = (1 << 0), VIR_FILE_WRAPPER_NON_BLOCKING = (1 << 1), -- 2.35.3
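
To make the intended usage of these helpers concrete, here is a minimal sketch
(not part of the patch) of writing an unaligned-length payload to an O_DIRECT
fd with the signatures introduced above; error reporting is omitted and the
caller-supplied fd/payload are hypothetical:

#include <string.h>
#include "virfile.h"

static int
exampleWriteDirect(int fd, const void *payload, size_t len)
{
    g_autofree void *base = NULL;
    void *buf = virFileDirectBufferNew(&base, len);

    if (!buf)
        return -1;

    memcpy(buf, payload, len);
    /* virFileDirectWrite rounds the write size up to the 64 KiB alignment
     * and zero-fills the tail of the buffer, so the destination must be
     * able to tolerate the extra padding bytes. */
    if (virFileDirectWrite(fd, buf, len) < 0)
        return -1;

    return 0;
}

Since virFileDirectBufferNew rounds the requested length up to the alignment
internally, the aligned buffer is always large enough for the padded write.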

Signed-off-by: Claudio Fontana <cfontana@suse.de> --- src/util/virfile.c | 28 ++++++++-------------------- 1 file changed, 8 insertions(+), 20 deletions(-) diff --git a/src/util/virfile.c b/src/util/virfile.c index 3714ca6be4..be5418c70f 100644 --- a/src/util/virfile.c +++ b/src/util/virfile.c @@ -4851,20 +4851,14 @@ static off_t runIOCopy(const struct runIOParams p) { g_autofree void *base = NULL; /* Location to be freed */ - char *buf = NULL; /* Aligned location within base */ - size_t buflen = 1024*1024; - intptr_t alignMask = 64*1024 - 1; off_t total = 0; + size_t buflen = 1024*1024; + char *buf = virFileDirectBufferNew(&base, buflen); -# if WITH_POSIX_MEMALIGN - if (posix_memalign(&base, alignMask + 1, buflen)) - abort(); - buf = base; -# else - buf = g_new0(char, buflen + alignMask); - base = buf; - buf = (char *) (((intptr_t) base + alignMask) & ~alignMask); -# endif + if (!buf) { + virReportSystemError(errno, _("Failed to allocate aligned memory in function %s"), __FUNCTION__); + return -5; + } while (1) { ssize_t got; @@ -4876,9 +4870,7 @@ runIOCopy(const struct runIOParams p) * In other cases using saferead reduces number of syscalls. */ if (!p.isWrite && p.isDirect) { - if ((got = read(p.fdin, buf, buflen)) < 0 && - errno == EINTR) - continue; + got = virFileDirectRead(p.fdin, buf, buflen); } else { got = saferead(p.fdin, buf, buflen); } @@ -4894,11 +4886,7 @@ runIOCopy(const struct runIOParams p) /* handle last write size align in direct case */ if (got < buflen && p.isDirect && p.isWrite) { - ssize_t aligned_got = (got + alignMask) & ~alignMask; - - memset(buf + got, 0, aligned_got - got); - - if (safewrite(p.fdout, buf, aligned_got) < 0) { + if (virFileDirectWriteLim(p.fdout, buf, got) < 0) { virReportSystemError(errno, _("Unable to write %s"), p.fdoutname); return -3; } -- 2.35.3

change the saveimage format to: 1) ensure that the header struct fields are packed, so we can be sure no padding will ruin the day 2) finish the libvirt header (header + xml + cookie) with zero padding, in order to ensure that the QEMU VM (QEVM Magic) is aligned. Adapt the read and write of the libvirt header accordingly. Signed-off-by: Claudio Fontana <cfontana@suse.de> --- src/qemu/qemu_saveimage.c | 229 ++++++++++++++++++++++---------------- src/qemu/qemu_saveimage.h | 22 ++-- 2 files changed, 143 insertions(+), 108 deletions(-) diff --git a/src/qemu/qemu_saveimage.c b/src/qemu/qemu_saveimage.c index 4fd4c5cfcd..7db54f11e1 100644 --- a/src/qemu/qemu_saveimage.c +++ b/src/qemu/qemu_saveimage.c @@ -139,12 +139,12 @@ virQEMUSaveDataWrite(virQEMUSaveData *data, int fd, const char *path) { + g_autofree void *base = NULL; + char *buf, *cur; virQEMUSaveHeader *header = &data->header; size_t len; size_t xml_len; size_t cookie_len = 0; - size_t zerosLen = 0; - g_autofree char *zeros = NULL; xml_len = strlen(data->xml) + 1; if (data->cookie) @@ -165,42 +165,148 @@ virQEMUSaveDataWrite(virQEMUSaveData *data, return -1; } } - - zerosLen = header->data_len - len; - zeros = g_new0(char, zerosLen); + buf = virFileDirectBufferNew(&base, sizeof(*header) + header->data_len); + cur = buf; if (data->cookie) header->cookieOffset = xml_len; - if (safewrite(fd, header, sizeof(*header)) != sizeof(*header)) { - virReportSystemError(errno, - _("failed to write header to domain save file '%s'"), - path); - return -1; + memcpy(cur, header, sizeof(*header)); + cur += sizeof(*header); + memcpy(cur, data->xml, xml_len); + cur += xml_len; + if (data->cookie) { + memcpy(cur, data->cookie, cookie_len); + cur += cookie_len; } - if (safewrite(fd, data->xml, xml_len) != xml_len) { + if (virFileDirectWrite(fd, buf, sizeof(*header) + header->data_len) < 0) { virReportSystemError(errno, - _("failed to write domain xml to '%s'"), + _("failed to write libvirt header of domain save file '%s'"), path); return -1; } - if (data->cookie && - safewrite(fd, data->cookie, cookie_len) != cookie_len) { + return 0; +} + +/* virQEMUSaveDataRead: + * + * Reads libvirt's header (including domain XML) from a saved image. + * + * Returns -1 on generic failure, -3 on a corrupted image, or 0 on success. 
+ */ +int +virQEMUSaveDataRead(virQEMUSaveData *data, + int fd, + const char *path) +{ + g_autofree void *base = NULL; + virQEMUSaveHeader *header = &data->header; + size_t xml_len; + size_t cookie_len; + ssize_t rv; + size_t buflen = 1024 * 1024; + void *dst; + char *buf = virFileDirectBufferNew(&base, buflen); + void *src = buf; + + header = &data->header; + rv = virFileDirectReadCopy(fd, &src, buflen, header, sizeof(*header)); + if (rv < 0) { virReportSystemError(errno, - _("failed to write cookie to '%s'"), + _("failed to read libvirt header of domain save file '%s'"), path); return -1; } + if (rv < sizeof(*header)) { + virReportError(VIR_ERR_OPERATION_FAILED, + _("domain save file '%s' libvirt header appears truncated"), + path); + return -3; + } + rv -= sizeof(*header); - if (safewrite(fd, zeros, zerosLen) != zerosLen) { - virReportSystemError(errno, - _("failed to write padding to '%s'"), - path); + if (memcmp(header->magic, QEMU_SAVE_MAGIC, sizeof(header->magic)) != 0) { + if (memcmp(header->magic, QEMU_SAVE_PARTIAL, sizeof(header->magic)) == 0) { + virReportError(VIR_ERR_OPERATION_FAILED, + _("domain save file '%s' seems incomplete"), + path); + return -3; + } + virReportError(VIR_ERR_OPERATION_FAILED, "%s", + _("image magic is incorrect")); + return -1; + } + if (header->version > QEMU_SAVE_VERSION) { + /* convert endianness and try again */ + qemuSaveImageBswapHeader(header); + } + if (header->version > QEMU_SAVE_VERSION) { + virReportError(VIR_ERR_OPERATION_FAILED, + _("image version is not supported (%d > %d)"), + header->version, QEMU_SAVE_VERSION); return -1; } + if (header->cookieOffset) + xml_len = header->cookieOffset; + else + xml_len = header->data_len; + if (xml_len <= 0) { + virReportError(VIR_ERR_OPERATION_FAILED, + _("invalid xml length: %lu"), xml_len); + return -1; + } + if (header->data_len < xml_len) { + virReportError(VIR_ERR_OPERATION_FAILED, + _("invalid cookieOffset: %u"), header->cookieOffset); + return -1; + } + cookie_len = header->data_len - xml_len; + data->xml = g_new0(char, xml_len); + dst = data->xml; + if (rv > 0) { + rv -= virFileDirectCopyBuf(&src, rv, &dst, &xml_len); + } + if (xml_len > 0) { + rv = virFileDirectReadCopy(fd, &src, buflen, dst, xml_len); + if (rv < 0) { + virReportSystemError(errno, + _("failed to read libvirt xml in domain save file '%s'"), + path); + return -1; + } + if (rv < xml_len) { + virReportError(VIR_ERR_OPERATION_FAILED, + _("domain save file '%s' xml seems incomplete"), + path); + return -3; + } + } + if (cookie_len > 0) { + data->cookie = g_new0(char, cookie_len); + dst = data->cookie; + if (rv > 0) { + rv -= virFileDirectCopyBuf(&src, rv, &dst, &cookie_len); + } + if (cookie_len > 0) { + rv = virFileDirectReadCopy(fd, &src, buflen, dst, cookie_len); + if (rv < 0) { + virReportSystemError(errno, + _("failed to read libvirt cookie in domain save file '%s'"), + path); + return -1; + } + if (rv < cookie_len) { + virReportError(VIR_ERR_OPERATION_FAILED, + _("domain save file '%s' cookie seems incomplete"), + path); + return -3; + } + } + } + /* we should now be aligned and ready to read the QEVM */ return 0; } @@ -444,11 +550,8 @@ qemuSaveImageOpen(virQEMUDriver *driver, VIR_AUTOCLOSE fd = -1; int ret = -1; g_autoptr(virQEMUSaveData) data = NULL; - virQEMUSaveHeader *header; g_autoptr(virDomainDef) def = NULL; int oflags = open_write ? 
O_RDWR : O_RDONLY; - size_t xml_len; - size_t cookie_len; if (bypass_cache) { int directFlag = virFileDirectFdFlag(); @@ -469,89 +572,17 @@ qemuSaveImageOpen(virQEMUDriver *driver, return -1; data = g_new0(virQEMUSaveData, 1); - - header = &data->header; - if (saferead(fd, header, sizeof(*header)) != sizeof(*header)) { - if (unlink_corrupt) { + ret = virQEMUSaveDataRead(data, fd, path); + if (ret < 0) { + if (unlink_corrupt && ret == -3) { if (unlink(path) < 0) { virReportSystemError(errno, _("cannot remove corrupt file: %s"), path); return -1; - } else { - return -3; - } - } - - virReportError(VIR_ERR_OPERATION_FAILED, - "%s", _("failed to read qemu header")); - return -1; - } - - if (memcmp(header->magic, QEMU_SAVE_MAGIC, sizeof(header->magic)) != 0) { - if (memcmp(header->magic, QEMU_SAVE_PARTIAL, sizeof(header->magic)) == 0) { - if (unlink_corrupt) { - if (unlink(path) < 0) { - virReportSystemError(errno, - _("cannot remove corrupt file: %s"), - path); - return -1; - } else { - return -3; - } } - - virReportError(VIR_ERR_OPERATION_FAILED, "%s", - _("save image is incomplete")); - return -1; - } - - virReportError(VIR_ERR_OPERATION_FAILED, "%s", - _("image magic is incorrect")); - return -1; - } - - if (header->version > QEMU_SAVE_VERSION) { - /* convert endianness and try again */ - qemuSaveImageBswapHeader(header); - } - - if (header->version > QEMU_SAVE_VERSION) { - virReportError(VIR_ERR_OPERATION_FAILED, - _("image version is not supported (%d > %d)"), - header->version, QEMU_SAVE_VERSION); - return -1; - } - - if (header->data_len <= 0) { - virReportError(VIR_ERR_OPERATION_FAILED, - _("invalid header data length: %d"), header->data_len); - return -1; - } - - if (header->cookieOffset) - xml_len = header->cookieOffset; - else - xml_len = header->data_len; - - cookie_len = header->data_len - xml_len; - - data->xml = g_new0(char, xml_len); - - if (saferead(fd, data->xml, xml_len) != xml_len) { - virReportError(VIR_ERR_OPERATION_FAILED, - "%s", _("failed to read domain XML")); - return -1; - } - - if (cookie_len > 0) { - data->cookie = g_new0(char, cookie_len); - - if (saferead(fd, data->cookie, cookie_len) != cookie_len) { - virReportError(VIR_ERR_OPERATION_FAILED, "%s", - _("failed to read cookie")); - return -1; } + return ret; } /* Create a domain from this XML */ @@ -601,7 +632,7 @@ qemuSaveImageStartVM(virConnectPtr conn, virDomainXMLOptionGetSaveCookie(driver->xmlopt)) < 0) goto cleanup; - if ((header->version == 2) && + if ((header->version >= 2) && (header->compressed != QEMU_SAVE_FORMAT_RAW)) { if (!(cmd = qemuSaveImageGetCompressionCommand(header->compressed))) goto cleanup; diff --git a/src/qemu/qemu_saveimage.h b/src/qemu/qemu_saveimage.h index 391cd55ed0..58d0949b9c 100644 --- a/src/qemu/qemu_saveimage.h +++ b/src/qemu/qemu_saveimage.h @@ -30,20 +30,20 @@ */ #define QEMU_SAVE_MAGIC "LibvirtQemudSave" #define QEMU_SAVE_PARTIAL "LibvirtQemudPart" -#define QEMU_SAVE_VERSION 2 +#define QEMU_SAVE_VERSION 3 G_STATIC_ASSERT(sizeof(QEMU_SAVE_MAGIC) == sizeof(QEMU_SAVE_PARTIAL)); typedef struct _virQEMUSaveHeader virQEMUSaveHeader; struct _virQEMUSaveHeader { - char magic[sizeof(QEMU_SAVE_MAGIC)-1]; - uint32_t version; - uint32_t data_len; - uint32_t was_running; - uint32_t compressed; - uint32_t cookieOffset; - uint32_t unused[14]; -}; + char magic[sizeof(QEMU_SAVE_MAGIC)-1]; /* 16 bytes */ + uint32_t version; /* 4 bytes */ + uint32_t data_len; /* 4 bytes */ + uint32_t was_running; /* 4 bytes */ + uint32_t compressed; /* 4 bytes */ + uint32_t cookieOffset; /* 4 bytes */ + 
uint32_t unused[14]; /* 56 bytes */ +} ATTRIBUTE_PACKED; /* = 92 bytes */ typedef struct _virQEMUSaveData virQEMUSaveData; @@ -103,6 +103,10 @@ int virQEMUSaveDataWrite(virQEMUSaveData *data, int fd, const char *path); +int +virQEMUSaveDataRead(virQEMUSaveData *data, + int fd, + const char *path); virQEMUSaveData * virQEMUSaveDataNew(char *domXML, -- 2.35.3
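
For readers following the format change, this is my reading of the resulting
v3 on-disk layout, plus a hypothetical helper showing how data_len could be
sized so that the QEMU stream starts on an aligned boundary (the alignment
constant comes from virfile.h; the actual computation elsewhere in the series
may differ):

/*
 *  [  0 ..  91]            packed virQEMUSaveHeader (92 bytes)
 *  [ 92 .. 92+data_len-1]  domain XML (NUL-terminated), then the optional
 *                          cookie at data offset cookieOffset, then zero
 *                          padding up to an aligned boundary
 *  [ 92+data_len .. ]      QEMU migration stream ("QEVM" magic)
 */
static size_t
exampleDataLen(size_t xml_len, size_t cookie_len)  /* hypothetical helper */
{
    size_t raw = sizeof(virQEMUSaveHeader) + xml_len + cookie_len;
    size_t aligned = (raw + VIR_FILE_DIRECT_ALIGN_MASK) &
                     ~(size_t)VIR_FILE_DIRECT_ALIGN_MASK;

    return aligned - sizeof(virQEMUSaveHeader);
}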

On 5/14/22 5:52 PM, Claudio Fontana wrote:
change the saveimage format to:
1) ensure that the header struct fields are packed, so we can be sure no padding will ruin the day
2) finish the libvirt header (header + xml + cookie) with zero padding, in order to ensure that the QEMU VM (QEVM Magic) is aligned.
Adapt the read and write of the libvirt header accordingly.
Signed-off-by: Claudio Fontana <cfontana@suse.de> --- src/qemu/qemu_saveimage.c | 229 ++++++++++++++++++++++---------------- src/qemu/qemu_saveimage.h | 22 ++-- 2 files changed, 143 insertions(+), 108 deletions(-)
diff --git a/src/qemu/qemu_saveimage.c b/src/qemu/qemu_saveimage.c index 4fd4c5cfcd..7db54f11e1 100644 --- a/src/qemu/qemu_saveimage.c +++ b/src/qemu/qemu_saveimage.c @@ -139,12 +139,12 @@ virQEMUSaveDataWrite(virQEMUSaveData *data, int fd, const char *path) { + g_autofree void *base = NULL; + char *buf, *cur; virQEMUSaveHeader *header = &data->header; size_t len; size_t xml_len; size_t cookie_len = 0; - size_t zerosLen = 0; - g_autofree char *zeros = NULL;
xml_len = strlen(data->xml) + 1; if (data->cookie) @@ -165,42 +165,148 @@ virQEMUSaveDataWrite(virQEMUSaveData *data, return -1; } } - - zerosLen = header->data_len - len; - zeros = g_new0(char, zerosLen); + buf = virFileDirectBufferNew(&base, sizeof(*header) + header->data_len); + cur = buf;
if (data->cookie) header->cookieOffset = xml_len;
- if (safewrite(fd, header, sizeof(*header)) != sizeof(*header)) { - virReportSystemError(errno, - _("failed to write header to domain save file '%s'"), - path); - return -1; + memcpy(cur, header, sizeof(*header)); + cur += sizeof(*header); + memcpy(cur, data->xml, xml_len); + cur += xml_len; + if (data->cookie) { + memcpy(cur, data->cookie, cookie_len); + cur += cookie_len; }
- if (safewrite(fd, data->xml, xml_len) != xml_len) { + if (virFileDirectWrite(fd, buf, sizeof(*header) + header->data_len) < 0) { virReportSystemError(errno, - _("failed to write domain xml to '%s'"), + _("failed to write libvirt header of domain save file '%s'"), path); return -1; }
- if (data->cookie && - safewrite(fd, data->cookie, cookie_len) != cookie_len) { + return 0; +} + +/* virQEMUSaveDataRead: + * + * Reads libvirt's header (including domain XML) from a saved image. + * + * Returns -1 on generic failure, -3 on a corrupted image, or 0 on success. + */ +int +virQEMUSaveDataRead(virQEMUSaveData *data, + int fd, + const char *path) +{ + g_autofree void *base = NULL; + virQEMUSaveHeader *header = &data->header; + size_t xml_len; + size_t cookie_len; + ssize_t rv; + size_t buflen = 1024 * 1024; + void *dst; + char *buf = virFileDirectBufferNew(&base, buflen); + void *src = buf; + + header = &data->header; + rv = virFileDirectReadCopy(fd, &src, buflen, header, sizeof(*header)); + if (rv < 0) { virReportSystemError(errno, - _("failed to write cookie to '%s'"), + _("failed to read libvirt header of domain save file '%s'"), path); return -1; } + if (rv < sizeof(*header)) { + virReportError(VIR_ERR_OPERATION_FAILED, + _("domain save file '%s' libvirt header appears truncated"), + path); + return -3; + } + rv -= sizeof(*header);
- if (safewrite(fd, zeros, zerosLen) != zerosLen) { - virReportSystemError(errno, - _("failed to write padding to '%s'"), - path); + if (memcmp(header->magic, QEMU_SAVE_MAGIC, sizeof(header->magic)) != 0) { + if (memcmp(header->magic, QEMU_SAVE_PARTIAL, sizeof(header->magic)) == 0) { + virReportError(VIR_ERR_OPERATION_FAILED, + _("domain save file '%s' seems incomplete"), + path); + return -3; + } + virReportError(VIR_ERR_OPERATION_FAILED, "%s", + _("image magic is incorrect")); + return -1; + } + if (header->version > QEMU_SAVE_VERSION) { + /* convert endianness and try again */ + qemuSaveImageBswapHeader(header); + } + if (header->version > QEMU_SAVE_VERSION) { + virReportError(VIR_ERR_OPERATION_FAILED, + _("image version is not supported (%d > %d)"), + header->version, QEMU_SAVE_VERSION); return -1; } + if (header->cookieOffset) + xml_len = header->cookieOffset; + else + xml_len = header->data_len;
+ if (xml_len <= 0) { + virReportError(VIR_ERR_OPERATION_FAILED, + _("invalid xml length: %lu"), xml_len); + return -1; + } + if (header->data_len < xml_len) { + virReportError(VIR_ERR_OPERATION_FAILED, + _("invalid cookieOffset: %u"), header->cookieOffset); + return -1; + } + cookie_len = header->data_len - xml_len; + data->xml = g_new0(char, xml_len); + dst = data->xml; + if (rv > 0) { + rv -= virFileDirectCopyBuf(&src, rv, &dst, &xml_len); + } + if (xml_len > 0) { + rv = virFileDirectReadCopy(fd, &src, buflen, dst, xml_len); + if (rv < 0) { + virReportSystemError(errno, + _("failed to read libvirt xml in domain save file '%s'"), + path); + return -1; + } + if (rv < xml_len) { + virReportError(VIR_ERR_OPERATION_FAILED, + _("domain save file '%s' xml seems incomplete"), + path); + return -3; + } + } + if (cookie_len > 0) { + data->cookie = g_new0(char, cookie_len); + dst = data->cookie; + if (rv > 0) { + rv -= virFileDirectCopyBuf(&src, rv, &dst, &cookie_len); + } + if (cookie_len > 0) { + rv = virFileDirectReadCopy(fd, &src, buflen, dst, cookie_len); + if (rv < 0) { + virReportSystemError(errno, + _("failed to read libvirt cookie in domain save file '%s'"), + path); + return -1; + } + if (rv < cookie_len) { + virReportError(VIR_ERR_OPERATION_FAILED, + _("domain save file '%s' cookie seems incomplete"), + path); + return -3; + } + } + } + /* we should now be aligned and ready to read the QEVM */ return 0; }
@@ -444,11 +550,8 @@ qemuSaveImageOpen(virQEMUDriver *driver, VIR_AUTOCLOSE fd = -1; int ret = -1; g_autoptr(virQEMUSaveData) data = NULL; - virQEMUSaveHeader *header; g_autoptr(virDomainDef) def = NULL; int oflags = open_write ? O_RDWR : O_RDONLY; - size_t xml_len; - size_t cookie_len;
if (bypass_cache) { int directFlag = virFileDirectFdFlag(); @@ -469,89 +572,17 @@ qemuSaveImageOpen(virQEMUDriver *driver, return -1;
data = g_new0(virQEMUSaveData, 1); - - header = &data->header; - if (saferead(fd, header, sizeof(*header)) != sizeof(*header)) { - if (unlink_corrupt) { + ret = virQEMUSaveDataRead(data, fd, path); + if (ret < 0) { + if (unlink_corrupt && ret == -3) { if (unlink(path) < 0) { virReportSystemError(errno, _("cannot remove corrupt file: %s"), path); return -1; - } else { - return -3; - } - } - - virReportError(VIR_ERR_OPERATION_FAILED, - "%s", _("failed to read qemu header")); - return -1; - } - - if (memcmp(header->magic, QEMU_SAVE_MAGIC, sizeof(header->magic)) != 0) { - if (memcmp(header->magic, QEMU_SAVE_PARTIAL, sizeof(header->magic)) == 0) { - if (unlink_corrupt) { - if (unlink(path) < 0) { - virReportSystemError(errno, - _("cannot remove corrupt file: %s"), - path); - return -1; - } else { - return -3; - } } - - virReportError(VIR_ERR_OPERATION_FAILED, "%s", - _("save image is incomplete")); - return -1; - } - - virReportError(VIR_ERR_OPERATION_FAILED, "%s", - _("image magic is incorrect")); - return -1; - } - - if (header->version > QEMU_SAVE_VERSION) { - /* convert endianness and try again */ - qemuSaveImageBswapHeader(header); - } - - if (header->version > QEMU_SAVE_VERSION) { - virReportError(VIR_ERR_OPERATION_FAILED, - _("image version is not supported (%d > %d)"), - header->version, QEMU_SAVE_VERSION); - return -1; - } - - if (header->data_len <= 0) { - virReportError(VIR_ERR_OPERATION_FAILED, - _("invalid header data length: %d"), header->data_len); - return -1; - } - - if (header->cookieOffset) - xml_len = header->cookieOffset; - else - xml_len = header->data_len; - - cookie_len = header->data_len - xml_len; - - data->xml = g_new0(char, xml_len); - - if (saferead(fd, data->xml, xml_len) != xml_len) { - virReportError(VIR_ERR_OPERATION_FAILED, - "%s", _("failed to read domain XML")); - return -1; - } - - if (cookie_len > 0) { - data->cookie = g_new0(char, cookie_len); - - if (saferead(fd, data->cookie, cookie_len) != cookie_len) { - virReportError(VIR_ERR_OPERATION_FAILED, "%s", - _("failed to read cookie")); - return -1; } + return ret; }
/* Create a domain from this XML */ @@ -601,7 +632,7 @@ qemuSaveImageStartVM(virConnectPtr conn, virDomainXMLOptionGetSaveCookie(driver->xmlopt)) < 0) goto cleanup;
- if ((header->version == 2) && + if ((header->version >= 2) && (header->compressed != QEMU_SAVE_FORMAT_RAW)) { if (!(cmd = qemuSaveImageGetCompressionCommand(header->compressed))) goto cleanup; diff --git a/src/qemu/qemu_saveimage.h b/src/qemu/qemu_saveimage.h index 391cd55ed0..58d0949b9c 100644 --- a/src/qemu/qemu_saveimage.h +++ b/src/qemu/qemu_saveimage.h @@ -30,20 +30,20 @@ */ #define QEMU_SAVE_MAGIC "LibvirtQemudSave" #define QEMU_SAVE_PARTIAL "LibvirtQemudPart" -#define QEMU_SAVE_VERSION 2 +#define QEMU_SAVE_VERSION 3
Introducing this incompatibility is not necessary; I'll respin momentarily with a version that allows old images to keep working with libvirt after this commit, and new images to also work with older libvirt.

C

we will allow to use already open fds that are not empty for both read and write, as long as they are properly aligned. Adapt the truncation to take into account the initial offset. Signed-off-by: Claudio Fontana <cfontana@suse.de> --- src/util/virfile.c | 39 +++++++++++++++++---------------------- 1 file changed, 17 insertions(+), 22 deletions(-) diff --git a/src/util/virfile.c b/src/util/virfile.c index be5418c70f..745e8e5836 100644 --- a/src/util/virfile.c +++ b/src/util/virfile.c @@ -4890,12 +4890,6 @@ runIOCopy(const struct runIOParams p) virReportSystemError(errno, _("Unable to write %s"), p.fdoutname); return -3; } - - if (!p.isBlockDev && ftruncate(p.fdout, total) < 0) { - virReportSystemError(errno, _("Unable to truncate %s"), p.fdoutname); - return -4; - } - break; } @@ -4930,6 +4924,7 @@ off_t virFileDiskCopy(int disk_fd, const char *disk_path, int remote_fd, const char *remote_path) { int ret = -1; + off_t off = 0; off_t total = 0; struct stat sb; struct runIOParams p; @@ -4973,23 +4968,17 @@ virFileDiskCopy(int disk_fd, const char *disk_path, int remote_fd, const char *r (oflags & O_ACCMODE)); goto cleanup; } - /* To make the implementation simpler, we give up on any - * attempt to use O_DIRECT in a non-trivial manner. */ if (!p.isBlockDev && p.isDirect) { - off_t off; - if (p.isWrite) { - /* - * note: for write we do not only check that disk_fd is seekable, - * we also want to know that the file is empty, so we need SEEK_END. - */ - if ((off = lseek(disk_fd, 0, SEEK_END)) != 0) { - virReportSystemError(off < 0 ? errno : EINVAL, "%s", - _("O_DIRECT write needs empty seekable file")); - goto cleanup; - } - } else if ((off = lseek(disk_fd, 0, SEEK_CUR)) != 0) { - virReportSystemError(off < 0 ? errno : EINVAL, "%s", - _("O_DIRECT read needs entire seekable file")); + off = lseek(disk_fd, 0, SEEK_CUR); + + /* Detect wrong uses of O_DIRECT. */ + if (off < 0) { + virReportSystemError(errno, "%s", _("O_DIRECT needs a seekable file")); + goto cleanup; + } + if (virFileDirectAlign(off) != off) { + /* we could write some zeroes, but maybe it is safer to just fail */ + virReportSystemError(EINVAL, "%s", _("O_DIRECT attempted on an open fd that is not aligned")); goto cleanup; } } @@ -4997,6 +4986,12 @@ virFileDiskCopy(int disk_fd, const char *disk_path, int remote_fd, const char *r if (total < 0) goto cleanup; + if (!p.isBlockDev && p.isDirect && p.isWrite) { + if (ftruncate(p.fdout, off + total) < 0) { + virReportSystemError(errno, _("Unable to truncate %s"), p.fdoutname); + goto cleanup; + } + } /* Ensure all data is written */ if (virFileDataSync(p.fdout) < 0) { if (errno != EINVAL && errno != EROFS) { -- 2.35.3
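
A minimal sketch (not part of the patch) of the precondition this change
enforces before copying to or from an fd that is not positioned at offset 0;
the helper name is hypothetical, virFileDiskCopy performs the equivalent check
internally:

static int
exampleCheckDirectOffset(int fd)
{
    off_t off = lseek(fd, 0, SEEK_CUR);

    if (off < 0)
        return -1;  /* not seekable (pipe, socket, ...) */

    /* refuse a starting offset that is not block-aligned,
     * mirroring the virFileDirectAlign(off) != off check above */
    if ((off & VIR_FILE_DIRECT_ALIGN_MASK) != 0)
        return -1;

    return 0;
}

On the write side, the final ftruncate() then uses off + total, so data already
present before the starting offset is preserved.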

use this data type to encapsulate the pathname, file descriptor, wrapper, and need to unlink. This will make management of the resources associated with an FD used for QEMU save/restore much easier, reducing the amount of explicit cleanup required. Signed-off-by: Claudio Fontana <cfontana@suse.de> --- src/qemu/qemu_saveimage.c | 112 ++++++++++++++++++++++++++++++++++++++ src/qemu/qemu_saveimage.h | 18 ++++++ 2 files changed, 130 insertions(+) diff --git a/src/qemu/qemu_saveimage.c b/src/qemu/qemu_saveimage.c index 7db54f11e1..63c3116407 100644 --- a/src/qemu/qemu_saveimage.c +++ b/src/qemu/qemu_saveimage.c @@ -354,6 +354,118 @@ qemuSaveImageGetCompressionCommand(virQEMUSaveFormat compression) return ret; } +/* + * virQEMUSaveFdInit: initialize a virQEMUSaveFd + * + * @saveFd: the structure to initialize + * @base: the file name + * @oflags the file descriptor open flags + * @cfg: the driver config + * + * Returns -1 on error, 0 on success, + * and in both cases virQEMUSaveFdFini must be called to free resources. + */ +int virQEMUSaveFdInit(virQEMUSaveFd *saveFd, const char *base, + int oflags, virQEMUDriverConfig *cfg) +{ + unsigned int wrapperFlags = VIR_FILE_WRAPPER_NON_BLOCKING; + bool isCreat = oflags & O_CREAT; + bool isDirect = O_DIRECT && (oflags & O_DIRECT); + + if (isDirect) + wrapperFlags |= VIR_FILE_WRAPPER_BYPASS_CACHE; + + saveFd->path = g_strdup(base); + saveFd->wrapper = NULL; + if (isCreat) { + saveFd->fd = virQEMUFileOpenAs(cfg->user, cfg->group, false, saveFd->path, + oflags, &saveFd->need_unlink); + } else { + saveFd->fd = qemuDomainOpenFile(cfg, NULL, saveFd->path, oflags, NULL); + } + if (saveFd->fd < 0) + return -1; + /* + * For O_CREAT, we always add the wrapper, + * and for !O_CREAT, we only add the wrapper if using O_DIRECT. + */ + if (isDirect || isCreat) { + saveFd->wrapper = virFileWrapperFdNew(&saveFd->fd, saveFd->path, wrapperFlags); + if (!saveFd->wrapper) + return -1; + } + return 0; +} + +/* + * virQEMUSaveFdClose: close a virQEMUSaveFd descriptor with normal close. + * + * @saveFd: the saveFd structure with the file descriptors to close. + * @vm: the virDomainObj (necessary to release lock), or NULL. + * + * If saveFd is NULL, the function will return success. + * + * Returns -1 on error, 0 on success. + */ +int virQEMUSaveFdClose(virQEMUSaveFd *saveFd, virDomainObj *vm) +{ + if (!saveFd) + return 0; + + if (VIR_CLOSE(saveFd->fd) < 0) { + virReportSystemError(errno, _("unable to close %s"), saveFd->path); + return -1; + } + if (vm) { + if (qemuDomainFileWrapperFDClose(vm, saveFd->wrapper) < 0) + return -1; + } else { + if (virFileWrapperFdClose(saveFd->wrapper) < 0) + return -1; + } + return 0; +} + +/* + * virQEMUSaveFdFini: finalize a virQEMUSaveFd + * + * @saveFd: the saveFd structure containing the resources to free. + * @vm: the virDomainObj (necessary to release lock for long close ops), or NULL. + * @ret: the current operation result (< 0 is failure) + * + * If saveFd is NULL, the return value will be unchanged. + * + * Returns ret, or -1 if an error is detected. 
+ */ +int virQEMUSaveFdFini(virQEMUSaveFd *saveFd, virDomainObj *vm, int ret) +{ + if (!saveFd) + return ret; + VIR_FORCE_CLOSE(saveFd->fd); + if (vm) { + if (qemuDomainFileWrapperFDClose(vm, saveFd->wrapper) < 0) + ret = -1; + } else { + if (virFileWrapperFdClose(saveFd->wrapper) < 0) + ret = -1; + } + + if (ret < 0 && saveFd->need_unlink && saveFd->path) { + if (unlink(saveFd->path) < 0) { + virReportSystemError(errno, _("cannot remove file: %s"), + saveFd->path); + } + } + if (saveFd->wrapper) { + virFileWrapperFdFree(saveFd->wrapper); + saveFd->wrapper = NULL; + } + + g_free(saveFd->path); + saveFd->path = NULL; + return ret; +} + /* Helper function to execute a migration to file with a correct save header * the caller needs to make sure that the processors are stopped and do all other diff --git a/src/qemu/qemu_saveimage.h b/src/qemu/qemu_saveimage.h index 58d0949b9c..21cb1dc78d 100644 --- a/src/qemu/qemu_saveimage.h +++ b/src/qemu/qemu_saveimage.h @@ -54,6 +54,24 @@ struct _virQEMUSaveData { }; +typedef struct _virQEMUSaveFd virQEMUSaveFd; +struct _virQEMUSaveFd { + char *path; + int fd; + bool need_unlink; + virFileWrapperFd *wrapper; +}; + +#define QEMU_SAVEFD_INVALID (virQEMUSaveFd) { .path = NULL, .fd = -1, .need_unlink = false, .wrapper = NULL } + +int virQEMUSaveFdInit(virQEMUSaveFd *saveFd, const char *base, + int oflags, virQEMUDriverConfig *cfg) + ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(4); + +int virQEMUSaveFdClose(virQEMUSaveFd *saveFd, virDomainObj *vm); + +int virQEMUSaveFdFini(virQEMUSaveFd *saveFd, virDomainObj *vm, int ret); + virDomainDef * qemuSaveImageUpdateDef(virQEMUDriver *driver, virDomainDef *def, -- 2.35.3
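
To illustrate the intended lifecycle, here is a rough caller-side sketch (not
part of the patch; the payload-writing step is left as a placeholder):

static int
exampleSaveToPath(virQEMUDriver *driver, virDomainObj *vm, const char *path)
{
    g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver);
    virQEMUSaveFd saveFd = QEMU_SAVEFD_INVALID;
    int ret = -1;

    if (virQEMUSaveFdInit(&saveFd, path, O_WRONLY | O_CREAT | O_TRUNC, cfg) < 0)
        goto cleanup;

    /* ... write the image payload to saveFd.fd here ... */

    if (virQEMUSaveFdClose(&saveFd, vm) < 0)
        goto cleanup;

    ret = 0;
 cleanup:
    /* Fini force-closes anything still open, frees the wrapper and path,
     * and unlinks a file it created when ret < 0. */
    return virQEMUSaveFdFini(&saveFd, vm, ret);
}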

now that we introduced virQEMUSaveFd, use it in the creation of a new save image. Signed-off-by: Claudio Fontana <cfontana@suse.de> --- src/qemu/qemu_saveimage.c | 54 +++++++++++---------------------------- 1 file changed, 15 insertions(+), 39 deletions(-) diff --git a/src/qemu/qemu_saveimage.c b/src/qemu/qemu_saveimage.c index 63c3116407..3e1089412e 100644 --- a/src/qemu/qemu_saveimage.c +++ b/src/qemu/qemu_saveimage.c @@ -480,41 +480,31 @@ qemuSaveImageCreate(virQEMUDriver *driver, virDomainAsyncJob asyncJob) { g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver); - bool needUnlink = false; + virQEMUSaveFd saveFd = QEMU_SAVEFD_INVALID; + unsigned int oflags = O_WRONLY | O_TRUNC | O_CREAT; int ret = -1; - int fd = -1; - int directFlag = 0; - virFileWrapperFd *wrapperFd = NULL; - unsigned int wrapperFlags = VIR_FILE_WRAPPER_NON_BLOCKING; - /* Obtain the file handle. */ - if ((flags & VIR_DOMAIN_SAVE_BYPASS_CACHE)) { - wrapperFlags |= VIR_FILE_WRAPPER_BYPASS_CACHE; - directFlag = virFileDirectFdFlag(); - if (directFlag < 0) { + if (flags & VIR_DOMAIN_SAVE_BYPASS_CACHE) { + if (virFileDirectFdFlag() < 0) { virReportError(VIR_ERR_OPERATION_FAILED, "%s", _("bypass cache unsupported by this system")); - goto cleanup; + return -1; } + oflags |= O_DIRECT; } - fd = virQEMUFileOpenAs(cfg->user, cfg->group, false, path, - O_WRONLY | O_TRUNC | O_CREAT | directFlag, - &needUnlink); - if (fd < 0) + if (virQEMUSaveFdInit(&saveFd, path, oflags, cfg) < 0) goto cleanup; - - if (qemuSecuritySetImageFDLabel(driver->securityManager, vm->def, fd) < 0) + if (qemuSecuritySetImageFDLabel(driver->securityManager, vm->def, saveFd.fd) < 0) goto cleanup; - - if (!(wrapperFd = virFileWrapperFdNew(&fd, path, wrapperFlags))) + if (virQEMUSaveDataWrite(data, saveFd.fd, saveFd.path) < 0) goto cleanup; - if (virQEMUSaveDataWrite(data, fd, path) < 0) + /* Perform the migration */ + if (qemuMigrationSrcToFile(driver, vm, saveFd.fd, compressor, asyncJob) < 0) goto cleanup; - /* Perform the migration */ - if (qemuMigrationSrcToFile(driver, vm, fd, compressor, asyncJob) < 0) + if (virQEMUSaveFdClose(&saveFd, vm) < 0) goto cleanup; /* Touch up file header to mark image complete. */ @@ -523,29 +513,15 @@ qemuSaveImageCreate(virQEMUDriver *driver, * up to seek backwards on wrapperFd. The reopened fd will * trigger a single page of file system cache pollution, but * that's acceptable. */ - if (VIR_CLOSE(fd) < 0) { - virReportSystemError(errno, _("unable to close %s"), path); - goto cleanup; - } - if (qemuDomainFileWrapperFDClose(vm, wrapperFd) < 0) - goto cleanup; - - if ((fd = qemuDomainOpenFile(cfg, vm->def, path, O_WRONLY, NULL)) < 0 || - virQEMUSaveDataFinish(data, &fd, path) < 0) + if ((saveFd.fd = qemuDomainOpenFile(cfg, vm->def, saveFd.path, O_WRONLY, NULL)) < 0 || + virQEMUSaveDataFinish(data, &saveFd.fd, saveFd.path) < 0) goto cleanup; ret = 0; cleanup: - VIR_FORCE_CLOSE(fd); - if (qemuDomainFileWrapperFDClose(vm, wrapperFd) < 0) - ret = -1; - virFileWrapperFdFree(wrapperFd); - - if (ret < 0 && needUnlink) - unlink(path); - + ret = virQEMUSaveFdFini(&saveFd, vm, ret); return ret; } -- 2.35.3

all the logic to open a fd, create a wrapper etc, is boilerplate code that is best reused, so change the Open function to take an existing already initialized virQEMUSaveFd. Adapt all callers of qemuSaveImageOpen. Signed-off-by: Claudio Fontana <cfontana@suse.de> --- src/qemu/qemu_driver.c | 101 ++++++++++++++++++++++---------------- src/qemu/qemu_saveimage.c | 56 +++++---------------- src/qemu/qemu_saveimage.h | 9 ++-- 3 files changed, 73 insertions(+), 93 deletions(-) diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 702fd0239c..b6e7e74367 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -5838,12 +5838,13 @@ qemuDomainRestoreInternal(virConnectPtr conn, virDomainObj *vm = NULL; g_autofree char *xmlout = NULL; const char *newxml = dxml; - int fd = -1; int ret = -1; virQEMUSaveData *data = NULL; - virFileWrapperFd *wrapperFd = NULL; + virQEMUSaveFd saveFd = QEMU_SAVEFD_INVALID; bool hook_taint = false; bool reset_nvram = false; + g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver); + int oflags = O_RDONLY; virCheckFlags(VIR_DOMAIN_SAVE_BYPASS_CACHE | VIR_DOMAIN_SAVE_RUNNING | @@ -5853,10 +5854,17 @@ qemuDomainRestoreInternal(virConnectPtr conn, if (flags & VIR_DOMAIN_SAVE_RESET_NVRAM) reset_nvram = true; - fd = qemuSaveImageOpen(driver, NULL, path, &def, &data, - (flags & VIR_DOMAIN_SAVE_BYPASS_CACHE) != 0, - &wrapperFd, false, false); - if (fd < 0) + if (flags & VIR_DOMAIN_SAVE_BYPASS_CACHE) { + if (virFileDirectFdFlag() < 0) { + virReportError(VIR_ERR_OPERATION_FAILED, "%s", + _("bypass cache unsupported by this system")); + return -1; + } + oflags |= O_DIRECT; + } + if (virQEMUSaveFdInit(&saveFd, path, oflags, cfg) < 0) + return -1; + if (qemuSaveImageOpen(driver, NULL, &def, &data, false, &saveFd) < 0) goto cleanup; if (ensureACL(conn, def) < 0) @@ -5910,16 +5918,13 @@ qemuDomainRestoreInternal(virConnectPtr conn, flags) < 0) goto cleanup; - ret = qemuSaveImageStartVM(conn, driver, vm, &fd, data, path, + ret = qemuSaveImageStartVM(conn, driver, vm, &saveFd.fd, data, path, false, reset_nvram, VIR_ASYNC_JOB_START); qemuProcessEndJob(vm); cleanup: - VIR_FORCE_CLOSE(fd); - if (virFileWrapperFdClose(wrapperFd) < 0) - ret = -1; - virFileWrapperFdFree(wrapperFd); + ret = virQEMUSaveFdFini(&saveFd, vm, ret); virQEMUSaveDataFree(data); if (vm && ret < 0) qemuDomainRemoveInactive(driver, vm); @@ -5985,15 +5990,15 @@ qemuDomainSaveImageGetXMLDesc(virConnectPtr conn, const char *path, virQEMUDriver *driver = conn->privateData; char *ret = NULL; g_autoptr(virDomainDef) def = NULL; - int fd = -1; virQEMUSaveData *data = NULL; + g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver); + virQEMUSaveFd saveFd = QEMU_SAVEFD_INVALID; virCheckFlags(VIR_DOMAIN_SAVE_IMAGE_XML_SECURE, NULL); - fd = qemuSaveImageOpen(driver, NULL, path, &def, &data, - false, NULL, false, false); - - if (fd < 0) + if (virQEMUSaveFdInit(&saveFd, path, O_RDONLY, cfg) < 0) + return NULL; + if (qemuSaveImageOpen(driver, NULL, &def, &data, false, &saveFd) < 0) goto cleanup; if (virDomainSaveImageGetXMLDescEnsureACL(conn, def) < 0) @@ -6003,7 +6008,8 @@ qemuDomainSaveImageGetXMLDesc(virConnectPtr conn, const char *path, cleanup: virQEMUSaveDataFree(data); - VIR_FORCE_CLOSE(fd); + if (virQEMUSaveFdFini(&saveFd, NULL, ret ? 
0 : -1) < 0) + ret = NULL; return ret; } @@ -6015,8 +6021,9 @@ qemuDomainSaveImageDefineXML(virConnectPtr conn, const char *path, int ret = -1; g_autoptr(virDomainDef) def = NULL; g_autoptr(virDomainDef) newdef = NULL; - int fd = -1; virQEMUSaveData *data = NULL; + virQEMUSaveFd saveFd = QEMU_SAVEFD_INVALID; + g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver); int state = -1; virCheckFlags(VIR_DOMAIN_SAVE_RUNNING | @@ -6027,10 +6034,9 @@ qemuDomainSaveImageDefineXML(virConnectPtr conn, const char *path, else if (flags & VIR_DOMAIN_SAVE_PAUSED) state = 0; - fd = qemuSaveImageOpen(driver, NULL, path, &def, &data, - false, NULL, true, false); - - if (fd < 0) + if (virQEMUSaveFdInit(&saveFd, path, O_RDWR, cfg) < 0) + return -1; + if (qemuSaveImageOpen(driver, NULL, &def, &data, false, &saveFd) < 0) goto cleanup; if (virDomainSaveImageDefineXMLEnsureACL(conn, def) < 0) @@ -6057,15 +6063,15 @@ qemuDomainSaveImageDefineXML(virConnectPtr conn, const char *path, VIR_DOMAIN_XML_MIGRATABLE))) goto cleanup; - if (lseek(fd, 0, SEEK_SET) != 0) { + if (lseek(saveFd.fd, 0, SEEK_SET) != 0) { virReportSystemError(errno, _("cannot seek in '%s'"), path); goto cleanup; } - if (virQEMUSaveDataWrite(data, fd, path) < 0) + if (virQEMUSaveDataWrite(data, saveFd.fd, path) < 0) goto cleanup; - if (VIR_CLOSE(fd) < 0) { + if (virQEMUSaveFdClose(&saveFd, NULL) < 0) { virReportSystemError(errno, _("failed to write header data to '%s'"), path); goto cleanup; } @@ -6073,8 +6079,8 @@ qemuDomainSaveImageDefineXML(virConnectPtr conn, const char *path, ret = 0; cleanup: - VIR_FORCE_CLOSE(fd); virQEMUSaveDataFree(data); + ret = virQEMUSaveFdFini(&saveFd, NULL, ret); return ret; } @@ -6086,8 +6092,9 @@ qemuDomainManagedSaveGetXMLDesc(virDomainPtr dom, unsigned int flags) g_autofree char *path = NULL; char *ret = NULL; g_autoptr(virDomainDef) def = NULL; - int fd = -1; virQEMUSaveData *data = NULL; + virQEMUSaveFd saveFd = QEMU_SAVEFD_INVALID; + g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver); qemuDomainObjPrivate *priv; virCheckFlags(VIR_DOMAIN_SAVE_IMAGE_XML_SECURE, NULL); @@ -6108,15 +6115,18 @@ qemuDomainManagedSaveGetXMLDesc(virDomainPtr dom, unsigned int flags) goto cleanup; } - if ((fd = qemuSaveImageOpen(driver, priv->qemuCaps, path, &def, &data, - false, NULL, false, false)) < 0) + if (virQEMUSaveFdInit(&saveFd, path, O_RDONLY, cfg) < 0) + goto cleanup; + if (qemuSaveImageOpen(driver, priv->qemuCaps, &def, &data, false, + &saveFd) < 0) goto cleanup; ret = qemuDomainDefFormatXML(driver, priv->qemuCaps, def, flags); cleanup: virQEMUSaveDataFree(data); - VIR_FORCE_CLOSE(fd); + if (virQEMUSaveFdFini(&saveFd, vm, ret ? 
0 : -1) < 0) + ret = NULL; virDomainObjEndAPI(&vm); return ret; } @@ -6166,20 +6176,30 @@ qemuDomainObjRestore(virConnectPtr conn, { g_autoptr(virDomainDef) def = NULL; qemuDomainObjPrivate *priv = vm->privateData; - int fd = -1; int ret = -1; g_autofree char *xmlout = NULL; virQEMUSaveData *data = NULL; - virFileWrapperFd *wrapperFd = NULL; + virQEMUSaveFd saveFd = QEMU_SAVEFD_INVALID; + int oflags = O_RDONLY; + g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver); - fd = qemuSaveImageOpen(driver, NULL, path, &def, &data, - bypass_cache, &wrapperFd, false, true); - if (fd < 0) { - if (fd == -3) + if (bypass_cache) { + if (virFileDirectFdFlag() < 0) { + virReportError(VIR_ERR_OPERATION_FAILED, "%s", + _("bypass cache unsupported by this system")); + return -1; + } + oflags |= O_DIRECT; + } + if (virQEMUSaveFdInit(&saveFd, path, oflags, cfg) < 0) + goto cleanup; + ret = qemuSaveImageOpen(driver, NULL, &def, &data, true, &saveFd); + if (ret < 0) { + if (ret == -3) ret = 1; goto cleanup; } - + ret = -1; if (virHookPresent(VIR_HOOK_DRIVER_QEMU)) { int hookret; @@ -6219,15 +6239,12 @@ qemuDomainObjRestore(virConnectPtr conn, virDomainObjAssignDef(vm, &def, true, NULL); - ret = qemuSaveImageStartVM(conn, driver, vm, &fd, data, path, + ret = qemuSaveImageStartVM(conn, driver, vm, &saveFd.fd, data, path, start_paused, reset_nvram, asyncJob); cleanup: virQEMUSaveDataFree(data); - VIR_FORCE_CLOSE(fd); - if (virFileWrapperFdClose(wrapperFd) < 0) - ret = -1; - virFileWrapperFdFree(wrapperFd); + ret = virQEMUSaveFdFini(&saveFd, vm, ret); return ret; } diff --git a/src/qemu/qemu_saveimage.c b/src/qemu/qemu_saveimage.c index 3e1089412e..9259257a07 100644 --- a/src/qemu/qemu_saveimage.c +++ b/src/qemu/qemu_saveimage.c @@ -614,61 +614,30 @@ qemuSaveImageGetCompressionProgram(const char *imageFormat, * @path: path of the save image * @ret_def: returns domain definition created from the XML stored in the image * @ret_data: returns structure filled with data from the image header - * @bypass_cache: bypass cache when opening the file - * @wrapperFd: returns the file wrapper structure - * @open_write: open the file for writing (for updates) - * @unlink_corrupt: remove the image file if it is corrupted + * @unlink_corrupt: mark the image file for removal if it is corrupted + * @saveFd: the save file * - * Returns the opened fd of the save image file and fills the appropriate fields - * on success. On error returns -1 on most failures, -3 if corrupt image was - * unlinked (no error raised). + * Returns 0 on success or -1 on failure. + * -3 is a special failure in which the saveFd has been marked for unlinking. + * On success, the appropriate fields are filled. */ int qemuSaveImageOpen(virQEMUDriver *driver, virQEMUCaps *qemuCaps, - const char *path, virDomainDef **ret_def, virQEMUSaveData **ret_data, - bool bypass_cache, - virFileWrapperFd **wrapperFd, - bool open_write, - bool unlink_corrupt) + bool unlink_corrupt, + virQEMUSaveFd *saveFd) { - g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver); - VIR_AUTOCLOSE fd = -1; - int ret = -1; + int ret; g_autoptr(virQEMUSaveData) data = NULL; g_autoptr(virDomainDef) def = NULL; - int oflags = open_write ? 
O_RDWR : O_RDONLY; - - if (bypass_cache) { - int directFlag = virFileDirectFdFlag(); - if (directFlag < 0) { - virReportError(VIR_ERR_OPERATION_FAILED, "%s", - _("bypass cache unsupported by this system")); - return -1; - } - oflags |= directFlag; - } - - if ((fd = qemuDomainOpenFile(cfg, NULL, path, oflags, NULL)) < 0) - return -1; - - if (bypass_cache && - !(*wrapperFd = virFileWrapperFdNew(&fd, path, - VIR_FILE_WRAPPER_BYPASS_CACHE))) - return -1; data = g_new0(virQEMUSaveData, 1); - ret = virQEMUSaveDataRead(data, fd, path); + ret = virQEMUSaveDataRead(data, saveFd->fd, saveFd->path); if (ret < 0) { if (unlink_corrupt && ret == -3) { - if (unlink(path) < 0) { - virReportSystemError(errno, - _("cannot remove corrupt file: %s"), - path); - return -1; - } + saveFd->need_unlink = true; } return ret; } @@ -682,10 +651,7 @@ qemuSaveImageOpen(virQEMUDriver *driver, *ret_def = g_steal_pointer(&def); *ret_data = g_steal_pointer(&data); - ret = fd; - fd = -1; - - return ret; + return 0; } int diff --git a/src/qemu/qemu_saveimage.h b/src/qemu/qemu_saveimage.h index 21cb1dc78d..c7ee851b92 100644 --- a/src/qemu/qemu_saveimage.h +++ b/src/qemu/qemu_saveimage.h @@ -92,14 +92,11 @@ qemuSaveImageStartVM(virConnectPtr conn, int qemuSaveImageOpen(virQEMUDriver *driver, virQEMUCaps *qemuCaps, - const char *path, virDomainDef **ret_def, virQEMUSaveData **ret_data, - bool bypass_cache, - virFileWrapperFd **wrapperFd, - bool open_write, - bool unlink_corrupt) - ATTRIBUTE_NONNULL(3) ATTRIBUTE_NONNULL(4); + bool unlink_corrupt, + virQEMUSaveFd *saveFd) + ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(3) ATTRIBUTE_NONNULL(6); int qemuSaveImageGetCompressionProgram(const char *imageFormat, -- 2.35.3

Signed-off-by: Claudio Fontana <cfontana@suse.de> --- tools/virsh-domain.c | 23 +++++++++++++++++------ 1 file changed, 17 insertions(+), 6 deletions(-) diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c index ba492e807e..8204d44dcd 100644 --- a/tools/virsh-domain.c +++ b/tools/virsh-domain.c @@ -4203,6 +4203,9 @@ doSave(void *opaque) g_autoptr(virshDomain) dom = NULL; const char *name = NULL; const char *to = NULL; + virTypedParameterPtr params = NULL; + int nparams = 0; + int maxparams = 0; unsigned int flags = 0; const char *xmlfile = NULL; g_autofree char *xml = NULL; @@ -4216,9 +4219,12 @@ doSave(void *opaque) goto out_sig; #endif /* !WIN32 */ - if (vshCommandOptStringReq(ctl, cmd, "file", &to) < 0) + if (vshCommandOptStringReq(ctl, cmd, "file", &to) < 0) { goto out; - + } else if (virTypedParamsAddString(¶ms, &nparams, &maxparams, + VIR_DOMAIN_SAVE_PARAM_FILE, to) < 0) { + goto out; + } if (vshCommandOptBool(cmd, "bypass-cache")) flags |= VIR_DOMAIN_SAVE_BYPASS_CACHE; if (vshCommandOptBool(cmd, "running")) @@ -4232,10 +4238,14 @@ doSave(void *opaque) if (!(dom = virshCommandOptDomain(ctl, cmd, &name))) goto out; - if (xmlfile && - virFileReadAll(xmlfile, VSH_MAX_XML_FILE, &xml) < 0) { - vshReportError(ctl); - goto out; + if (xmlfile) { + if (virFileReadAll(xmlfile, VSH_MAX_XML_FILE, &xml) < 0) { + vshReportError(ctl); + goto out; + } else if (virTypedParamsAddString(¶ms, &nparams, &maxparams, + VIR_DOMAIN_SAVE_PARAM_DXML, xml) < 0) { + goto out; + } } if (flags || xml) { @@ -4252,6 +4262,7 @@ doSave(void *opaque) data->ret = 0; out: + virTypedParamsFree(params, nparams); #ifndef WIN32 pthread_sigmask(SIG_SETMASK, &oldsigmask, NULL); out_sig: -- 2.35.3

Signed-off-by: Claudio Fontana <cfontana@suse.de> --- tools/virsh-domain.c | 32 ++++++++++++++++++++++---------- 1 file changed, 22 insertions(+), 10 deletions(-) diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c index 8204d44dcd..8a3c9d53d4 100644 --- a/tools/virsh-domain.c +++ b/tools/virsh-domain.c @@ -5316,15 +5316,21 @@ static bool cmdRestore(vshControl *ctl, const vshCmd *cmd) { const char *from = NULL; + virTypedParameterPtr params = NULL; + int nparams = 0; + int maxparams = 0; unsigned int flags = 0; const char *xmlfile = NULL; g_autofree char *xml = NULL; virshControl *priv = ctl->privData; - int rc; - - if (vshCommandOptStringReq(ctl, cmd, "file", &from) < 0) - return false; + int rc = -1; + if (vshCommandOptStringReq(ctl, cmd, "file", &from) < 0) { + goto out; + } else if (virTypedParamsAddString(¶ms, &nparams, &maxparams, + VIR_DOMAIN_SAVE_PARAM_FILE, from) < 0) { + goto out; + } if (vshCommandOptBool(cmd, "bypass-cache")) flags |= VIR_DOMAIN_SAVE_BYPASS_CACHE; if (vshCommandOptBool(cmd, "running")) @@ -5335,11 +5341,15 @@ cmdRestore(vshControl *ctl, const vshCmd *cmd) flags |= VIR_DOMAIN_SAVE_RESET_NVRAM; if (vshCommandOptStringReq(ctl, cmd, "xml", &xmlfile) < 0) - return false; + goto out; - if (xmlfile && - virFileReadAll(xmlfile, VSH_MAX_XML_FILE, &xml) < 0) - return false; + if (xmlfile) { + if (virFileReadAll(xmlfile, VSH_MAX_XML_FILE, &xml) < 0) + goto out; + else if (virTypedParamsAddString(¶ms, &nparams, &maxparams, + VIR_DOMAIN_SAVE_PARAM_DXML, xml) < 0) + goto out; + } if (flags || xml) { rc = virDomainRestoreFlags(priv->conn, from, xml, flags); @@ -5349,11 +5359,13 @@ cmdRestore(vshControl *ctl, const vshCmd *cmd) if (rc < 0) { vshError(ctl, _("Failed to restore domain from %s"), from); - return false; + goto out; } vshPrintExtra(ctl, _("Domain restored from %s\n"), from); - return true; + out: + virTypedParamsFree(params, nparams); + return rc >= 0; } /* -- 2.35.3
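To show where these typed parameters are headed, here is a hedged sketch of a client doing the restore through the params API (virDomainRestoreParams was introduced earlier in this series); the save image path is only an example value:

#include <libvirt/libvirt.h>

static int
restoreFromParams(virConnectPtr conn)
{
    virTypedParameterPtr params = NULL;
    int nparams = 0;
    int maxparams = 0;
    int ret = -1;

    if (virTypedParamsAddString(&params, &nparams, &maxparams,
                                VIR_DOMAIN_SAVE_PARAM_FILE,
                                "/var/tmp/dom.sav") < 0)
        goto cleanup;

    ret = virDomainRestoreParams(conn, params, nparams, 0);

 cleanup:
    virTypedParamsFree(params, nparams);
    return ret;
}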

in order to enable parallel save functionality, we will need an opportune new flag and a parameter to specify the number of extra connections to use. Signed-off-by: Claudio Fontana <cfontana@suse.de> --- include/libvirt/libvirt-domain.h | 18 ++++++++++++++++++ 1 file changed, 18 insertions(+) diff --git a/include/libvirt/libvirt-domain.h b/include/libvirt/libvirt-domain.h index 24846046aa..766e4d116e 100644 --- a/include/libvirt/libvirt-domain.h +++ b/include/libvirt/libvirt-domain.h @@ -1555,6 +1555,7 @@ typedef enum { VIR_DOMAIN_SAVE_RUNNING = 1 << 1, /* Favor running over paused (Since: 0.9.5) */ VIR_DOMAIN_SAVE_PAUSED = 1 << 2, /* Favor paused over running (Since: 0.9.5) */ VIR_DOMAIN_SAVE_RESET_NVRAM = 1 << 3, /* Re-initialize NVRAM from template (Since: 8.1.0) */ + VIR_DOMAIN_SAVE_PARALLEL = 1 << 4, /* Parallel Save/Restore to multiple files (Since: 8.4.0) */ } virDomainSaveRestoreFlags; int virDomainSave (virDomainPtr domain, @@ -1582,6 +1583,8 @@ int virDomainRestoreParams (virConnectPtr conn, * VIR_DOMAIN_SAVE_PARAM_FILE: * * the parameter used to specify the savestate file to save to or restore from. + * For parallel saves, this is the main file, with the extra connections adding suffix + * .1 .2 .3 ... up to VIR_DOMAIN_SAVE_PARAM_PARALLEL_CONNECTIONS. * * Since: 8.4.0 */ @@ -1600,6 +1603,21 @@ int virDomainRestoreParams (virConnectPtr conn, */ # define VIR_DOMAIN_SAVE_PARAM_DXML "dxml" +/** + * VIR_DOMAIN_SAVE_PARAM_PARALLEL_CONNECTIONS: + * + * this optional parameter mirrors the migration parameter + * VIR_MIGRATE_PARAM_PARALLEL_CONNECTIONS. + * + * This parameter is used when saving state files in parallel + * using the flag VIR_DOMAIN_SAVE_PARALLEL. + * It specifies the number of extra files to save to using parallel + * connections. + * + * Since: 8.4.0 + */ +# define VIR_DOMAIN_SAVE_PARAM_PARALLEL_CONNECTIONS "parallel.connections" + /* See below for virDomainSaveImageXMLFlags */ char * virDomainSaveImageGetXMLDesc (virConnectPtr conn, const char *file, -- 2.35.3
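As an illustration of the new flag and parameter from the client side (not part of the patch; 'dom' and the path are example values), a parallel save over two extra connections could look like the sketch below, producing /var/tmp/dom.sav plus the /var/tmp/dom.sav.1 and .2 channel files:

#include <libvirt/libvirt.h>

static int
saveParallel(virDomainPtr dom)
{
    virTypedParameterPtr params = NULL;
    int nparams = 0;
    int maxparams = 0;
    int ret = -1;

    if (virTypedParamsAddString(&params, &nparams, &maxparams,
                                VIR_DOMAIN_SAVE_PARAM_FILE,
                                "/var/tmp/dom.sav") < 0 ||
        virTypedParamsAddInt(&params, &nparams, &maxparams,
                             VIR_DOMAIN_SAVE_PARAM_PARALLEL_CONNECTIONS, 2) < 0)
        goto cleanup;

    ret = virDomainSaveParams(dom, params, nparams, VIR_DOMAIN_SAVE_PARALLEL);

 cleanup:
    virTypedParamsFree(params, nparams);
    return ret;
}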

and its companion param VIR_SAVE_PARAM_PARALLEL_CONNECTIONS Signed-off-by: Claudio Fontana <cfontana@suse.de> --- src/qemu/qemu_driver.c | 18 ++++++++++++------ src/qemu/qemu_saveimage.c | 1 + src/qemu/qemu_saveimage.h | 1 + src/qemu/qemu_snapshot.c | 2 +- 4 files changed, 15 insertions(+), 7 deletions(-) diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index b6e7e74367..4114d8919b 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -2641,7 +2641,7 @@ static int qemuDomainSaveInternal(virQEMUDriver *driver, virDomainObj *vm, const char *path, int compressed, virCommand *compressor, - const char *xmlin, unsigned int flags) + const char *xmlin, int nconn, unsigned int flags) { g_autofree char *xml = NULL; bool was_running = false; @@ -2722,7 +2722,7 @@ qemuDomainSaveInternal(virQEMUDriver *driver, xml = NULL; ret = qemuSaveImageCreate(driver, vm, path, data, compressor, - flags, VIR_ASYNC_JOB_SAVE); + nconn, flags, VIR_ASYNC_JOB_SAVE); if (ret < 0) goto endjob; @@ -2800,7 +2800,7 @@ qemuDomainManagedSaveHelper(virQEMUDriver *driver, VIR_INFO("Saving state of domain '%s' to '%s'", vm->def->name, path); if (qemuDomainSaveInternal(driver, vm, path, compressed, - compressor, dxml, flags) < 0) + compressor, dxml, -1, flags) < 0) return -1; vm->hasManagedSave = true; @@ -2839,7 +2839,7 @@ qemuDomainSaveFlags(virDomainPtr dom, const char *path, const char *dxml, goto cleanup; ret = qemuDomainSaveInternal(driver, vm, path, compressed, - compressor, dxml, flags); + compressor, dxml, -1, flags); cleanup: virDomainObjEndAPI(&vm); @@ -2866,16 +2866,20 @@ qemuDomainSaveParams(virDomainPtr dom, const char *dxml = NULL; int compressed; int ret = -1; + int nconn = 2; virCheckFlags(VIR_DOMAIN_SAVE_BYPASS_CACHE | VIR_DOMAIN_SAVE_RUNNING | - VIR_DOMAIN_SAVE_PAUSED, -1); + VIR_DOMAIN_SAVE_PAUSED | + VIR_DOMAIN_SAVE_PARALLEL, -1); if (virTypedParamsValidate(params, nparams, VIR_DOMAIN_SAVE_PARAM_FILE, VIR_TYPED_PARAM_STRING, VIR_DOMAIN_SAVE_PARAM_DXML, VIR_TYPED_PARAM_STRING, + VIR_DOMAIN_SAVE_PARAM_PARALLEL_CONNECTIONS, + VIR_TYPED_PARAM_INT, NULL) < 0) return -1; @@ -2885,6 +2889,8 @@ qemuDomainSaveParams(virDomainPtr dom, if (virTypedParamsGetString(params, nparams, VIR_DOMAIN_SAVE_PARAM_DXML, &dxml) < 0) return -1; + if (virTypedParamsGetInt(params, nparams, VIR_DOMAIN_SAVE_PARAM_PARALLEL_CONNECTIONS, &nconn) < 0) + return -1; if (!(vm = qemuDomainObjFromDomain(dom))) goto cleanup; @@ -2907,7 +2913,7 @@ qemuDomainSaveParams(virDomainPtr dom, goto cleanup; ret = qemuDomainSaveInternal(driver, vm, to, compressed, - compressor, dxml, flags); + compressor, dxml, nconn, flags); cleanup: virDomainObjEndAPI(&vm); diff --git a/src/qemu/qemu_saveimage.c b/src/qemu/qemu_saveimage.c index 9259257a07..df2fc6e879 100644 --- a/src/qemu/qemu_saveimage.c +++ b/src/qemu/qemu_saveimage.c @@ -476,6 +476,7 @@ qemuSaveImageCreate(virQEMUDriver *driver, const char *path, virQEMUSaveData *data, virCommand *compressor, + int nconn G_GNUC_UNUSED, unsigned int flags, virDomainAsyncJob asyncJob) { diff --git a/src/qemu/qemu_saveimage.h b/src/qemu/qemu_saveimage.h index c7ee851b92..7fc1ad278f 100644 --- a/src/qemu/qemu_saveimage.h +++ b/src/qemu/qemu_saveimage.h @@ -111,6 +111,7 @@ qemuSaveImageCreate(virQEMUDriver *driver, const char *path, virQEMUSaveData *data, virCommand *compressor, + int nconn, unsigned int flags, virDomainAsyncJob asyncJob); diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c index b62fab7bb3..2e445e8296 100644 --- a/src/qemu/qemu_snapshot.c +++ b/src/qemu/qemu_snapshot.c 
@@ -1457,7 +1457,7 @@ qemuSnapshotCreateActiveExternal(virQEMUDriver *driver, memory_existing = virFileExists(snapdef->memorysnapshotfile); if ((ret = qemuSaveImageCreate(driver, vm, snapdef->memorysnapshotfile, - data, compressor, 0, + data, compressor, -1, 0, VIR_ASYNC_JOB_SNAPSHOT)) < 0) goto cleanup; -- 2.35.3

Signed-off-by: Claudio Fontana <cfontana@suse.de> --- src/qemu/qemu_driver.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 4114d8919b..d071df1c81 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -5855,7 +5855,8 @@ qemuDomainRestoreInternal(virConnectPtr conn, virCheckFlags(VIR_DOMAIN_SAVE_BYPASS_CACHE | VIR_DOMAIN_SAVE_RUNNING | VIR_DOMAIN_SAVE_PAUSED | - VIR_DOMAIN_SAVE_RESET_NVRAM, -1); + VIR_DOMAIN_SAVE_RESET_NVRAM | + VIR_DOMAIN_SAVE_PARALLEL, -1); if (flags & VIR_DOMAIN_SAVE_RESET_NVRAM) reset_nvram = true; -- 2.35.3

For the save direction, this helper listens on a unix socket which QEMU connects to for multifd migration to files. For the restore direction, this helper connects to a unix socket QEMU listens at for multifd migration from files. The file descriptors are passed as command line parameters. Signed-off-by: Claudio Fontana <cfontana@suse.de> --- po/POTFILES.in | 1 + src/libvirt_private.syms | 1 + src/util/meson.build | 16 +++ src/util/multifd-helper.c | 247 ++++++++++++++++++++++++++++++++++++++ src/util/virthread.c | 5 + src/util/virthread.h | 1 + 6 files changed, 271 insertions(+) create mode 100644 src/util/multifd-helper.c diff --git a/po/POTFILES.in b/po/POTFILES.in index 0d9adb0758..4efb330262 100644 --- a/po/POTFILES.in +++ b/po/POTFILES.in @@ -241,6 +241,7 @@ @SRCDIR@src/storage_file/storage_source_backingstore.c @SRCDIR@src/test/test_driver.c @SRCDIR@src/util/iohelper.c +@SRCDIR@src/util/multifd-helper.c @SRCDIR@src/util/viralloc.c @SRCDIR@src/util/virarptable.c @SRCDIR@src/util/viraudit.c diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms index 48ed75aa16..d925692d9f 100644 --- a/src/libvirt_private.syms +++ b/src/libvirt_private.syms @@ -3434,6 +3434,7 @@ virThreadCreateFull; virThreadID; virThreadIsSelf; virThreadJoin; +virThreadJoinRet; virThreadMaxName; virThreadSelf; virThreadSelfID; diff --git a/src/util/meson.build b/src/util/meson.build index 17755373c8..337e454137 100644 --- a/src/util/meson.build +++ b/src/util/meson.build @@ -178,6 +178,11 @@ io_helper_sources = [ 'virfile.c', ] +multifd_helper_sources = [ + 'multifd-helper.c', + 'virfile.c', +] + virt_util_lib = static_library( 'virt_util', [ @@ -219,6 +224,17 @@ if conf.has('WITH_LIBVIRTD') libutil_dep, ], } + virt_helpers += { + 'name': 'libvirt_multifd_helper', + 'sources': [ + files(multifd_helper_sources), + dtrace_gen_headers, + ], + 'deps': [ + acl_dep, + libutil_dep, + ], + } endif util_inc_dir = include_directories('.') diff --git a/src/util/multifd-helper.c b/src/util/multifd-helper.c new file mode 100644 index 0000000000..ad1bac06be --- /dev/null +++ b/src/util/multifd-helper.c @@ -0,0 +1,247 @@ +/* + * multifd-helper.c: listens on Unix socket to perform I/O to multiple files + * + * Copyright (C) 2022 SUSE LLC + * + * This library is free software; you can redistribute it and/or + * modify it under the terms of the GNU Lesser General Public + * License as published by the Free Software Foundation; either + * version 2.1 of the License, or (at your option) any later version. + * + * This library is distributed in the hope that it will be useful, + * but WITHOUT ANY WARRANTY; without even the implied warranty of + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU + * Lesser General Public License for more details. + * + * You should have received a copy of the GNU Lesser General Public + * License along with this library. If not, see + * <http://www.gnu.org/licenses/>. + * + * This has been written to support QEMU multifd migration to file, + * allowing better use of cpu resources to speed up the save/restore. 
+ */ + +#include <config.h> + +#include <unistd.h> +#include <fcntl.h> +#include <stdlib.h> +#include <sys/types.h> +#include <sys/stat.h> +#include <sys/socket.h> +#include <sys/un.h> + +#include "virthread.h" +#include "virfile.h" +#include "virerror.h" +#include "virstring.h" +#include "virgettext.h" + +#define VIR_FROM_THIS VIR_FROM_STORAGE + +typedef struct _multiFdConnData multiFdConnData; +struct _multiFdConnData { + int clientfd; + int filefd; + int oflags; + const char *path; + virThread tid; + + off_t total; +}; + +typedef struct _multiFdThreadArgs multiFdThreadArgs; +struct _multiFdThreadArgs { + int nchannels; + multiFdConnData *conn; /* contains main fd + nchannels */ + const char *sun_path; /* unix socket name to use for the server */ + struct sockaddr_un serv_addr; + + off_t total; +}; + +static void clientThreadFunc(void *a) +{ + multiFdConnData *c = a; + c->total = virFileDiskCopy(c->filefd, c->path, c->clientfd, "socket"); +} + +static off_t waitClientThreads(multiFdConnData *conn, int n) +{ + int idx; + off_t total = 0; + for (idx = 0; idx < n; idx++) { + multiFdConnData *c = &conn[idx]; + if (virThreadJoinRet(&c->tid) < 0) { + total = -1; + } else if (total >= 0) { + total += c->total; + } + if (VIR_CLOSE(c->clientfd) < 0) { + total = -1; + } + } + return total; +} + +static void loadThreadFunc(void *a) +{ + multiFdThreadArgs *args = a; + int idx; + args->total = -1; + + for (idx = 0; idx < args->nchannels + 1; idx++) { + /* Perform outgoing connections */ + multiFdConnData *c = &args->conn[idx]; + c->clientfd = socket(AF_UNIX, SOCK_STREAM, 0); + if (c->clientfd < 0) { + virReportSystemError(errno, "%s", _("loadThread: socket() failed")); + goto cleanup; + } + if (connect(c->clientfd, (const struct sockaddr *)&args->serv_addr, + sizeof(struct sockaddr_un)) < 0) { + virReportSystemError(errno, "%s", _("loadThread: connect() failed")); + goto cleanup; + } + if (virThreadCreate(&c->tid, true, &clientThreadFunc, c) < 0) { + virReportSystemError(errno, "%s", _("loadThread: client thread creation failed")); + goto cleanup; + } + } + args->total = waitClientThreads(args->conn, args->nchannels + 1); + + cleanup: + for (idx = 0; idx < args->nchannels + 1; idx++) { + multiFdConnData *c = &args->conn[idx]; + VIR_FORCE_CLOSE(c->clientfd); + } +} + +static void saveThreadFunc(void *a) +{ + multiFdThreadArgs *args = a; + int idx; + const char buf[1] = {'R'}; + int sockfd; + + if ((sockfd = socket(AF_UNIX, SOCK_STREAM, 0)) < 0) { + virReportSystemError(errno, "%s", _("saveThread: socket() failed")); + return; + } + unlink(args->sun_path); + if (bind(sockfd, (struct sockaddr *)&args->serv_addr, sizeof(args->serv_addr)) < 0) { + virReportSystemError(errno, "%s", _("saveThread: bind() failed")); + goto cleanup; + } + if (listen(sockfd, args->nchannels + 1) < 0) { + virReportSystemError(errno, "%s", _("saveThread: listen() failed")); + goto cleanup; + } + + /* signal that the server is ready */ + if (safewrite(STDOUT_FILENO, &buf, 1) != 1) { + virReportSystemError(errno, "%s", _("saveThread: safewrite failed")); + goto cleanup; + } + + for (idx = 0; idx < args->nchannels + 1; idx++) { + /* Wait for incoming connection. 
*/ + multiFdConnData *c = &args->conn[idx]; + if ((c->clientfd = accept(sockfd, NULL, NULL)) < 0) { + virReportSystemError(errno, "%s", _("saveThread: accept() failed")); + goto cleanup; + } + if (virThreadCreate(&c->tid, true, &clientThreadFunc, c) < 0) { + virReportSystemError(errno, "%s", _("saveThread: client thread creation failed")); + goto cleanup; + } + } + + args->total = waitClientThreads(args->conn, args->nchannels + 1); + + cleanup: + for (idx = 0; idx < args->nchannels + 1; idx++) { + multiFdConnData *c = &args->conn[idx]; + VIR_FORCE_CLOSE(c->clientfd); + } + if (VIR_CLOSE(sockfd) < 0) + args->total = -1; +} + +static const char *program_name; + +G_GNUC_NORETURN static void +usage(int status) +{ + if (status) { + fprintf(stderr, _("%s: try --help for more details"), program_name); + } else { + fprintf(stderr, _("Usage: %s UNIX_SOCNAME N MAINFD FD0 FD1 ... FDn"), program_name); + } + exit(status); +} + +int +main(int argc, char **argv) +{ + virThread tid; + virThreadFunc func = saveThreadFunc; + multiFdThreadArgs args = { 0 }; + int idx; + + program_name = argv[0]; + + if (virGettextInitialize() < 0 || + virErrorInitialize() < 0) { + fprintf(stderr, _("%s: initialization failed"), program_name); + exit(EXIT_FAILURE); + } + + if (argc > 1 && STREQ(argv[1], "--help")) + usage(EXIT_SUCCESS); + if (argc < 4) + usage(EXIT_FAILURE); + + args.sun_path = argv[1]; + if (virStrToLong_i(argv[2], NULL, 10, &args.nchannels) < 0) + fprintf(stderr, _("%s: malformed number of channels N %s"), program_name, argv[2]); + + if (argc < 4 + args.nchannels) + usage(EXIT_FAILURE); + + args.conn = g_new0(multiFdConnData, args.nchannels + 1); + + for (idx = 3; idx < 3 + args.nchannels + 1; idx++) { + multiFdConnData *c = &args.conn[idx - 3]; + + if (virStrToLong_i(argv[idx], NULL, 10, &c->filefd) < 0) { + fprintf(stderr, _("%s: malformed FD %s"), program_name, argv[idx]); + usage(EXIT_FAILURE); + } +#ifndef F_GETFL +#error "multifd-helper requires F_GETFL parameter of fcntl" +#endif + c->oflags = fcntl(c->filefd, F_GETFL); + if ((c->oflags & O_ACCMODE) == O_RDONLY) { + func = loadThreadFunc; + } + } + + /* initialize server address structure */ + memset(&args.serv_addr, 0, sizeof(args.serv_addr)); + args.serv_addr.sun_family = AF_UNIX; + virStrcpyStatic(args.serv_addr.sun_path, args.sun_path); + + if (virThreadCreate(&tid, true, func, &args) < 0) { + virReportSystemError(errno, _("%s: failed to create server thread"), program_name); + exit(EXIT_FAILURE); + } + + if (virThreadJoinRet(&tid) < 0) + exit(EXIT_FAILURE); + + if (args.total < 0) + exit(EXIT_FAILURE); + + exit(EXIT_SUCCESS); +} diff --git a/src/util/virthread.c b/src/util/virthread.c index 5422bb74fd..0f6c6a68fa 100644 --- a/src/util/virthread.c +++ b/src/util/virthread.c @@ -348,6 +348,11 @@ void virThreadJoin(virThread *thread) pthread_join(thread->thread, NULL); } +int virThreadJoinRet(virThread *thread) +{ + return pthread_join(thread->thread, NULL); +} + void virThreadCancel(virThread *thread) { pthread_cancel(thread->thread); diff --git a/src/util/virthread.h b/src/util/virthread.h index 23abe0b6c9..5cecb9bd8a 100644 --- a/src/util/virthread.h +++ b/src/util/virthread.h @@ -89,6 +89,7 @@ int virThreadCreateFull(virThread *thread, void virThreadSelf(virThread *thread); bool virThreadIsSelf(virThread *thread); void virThreadJoin(virThread *thread); +int virThreadJoinRet(virThread *thread); size_t virThreadMaxName(void); -- 2.35.3
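To make the helper's command line concrete, here is a hedged sketch of driving it in the restore direction, where the passed fds are O_RDONLY, so the helper picks loadThreadFunc and connect()s to the socket QEMU listens on. The actual restore wiring only lands later in the series; the helper name, socket path, fd values and error handling here are illustrative only:

g_autoptr(virCommand) cmd = NULL;
int mainfd = open("/var/tmp/dom.sav", O_RDONLY);    /* error checks elided */
int fd1 = open("/var/tmp/dom.sav.1", O_RDONLY);
int fd2 = open("/var/tmp/dom.sav.2", O_RDONLY);

/* argv layout: UNIX_SOCNAME N MAINFD FD0 FD1 */
cmd = virCommandNewArgList("libvirt_multifd_helper",
                           "/run/libvirt/qemu/restore-multifd.sock",
                           "2", NULL);
virCommandAddArgFormat(cmd, "%d", mainfd);
virCommandAddArgFormat(cmd, "%d", fd1);
virCommandAddArgFormat(cmd, "%d", fd2);
virCommandPassFD(cmd, mainfd, 0);
virCommandPassFD(cmd, fd1, 0);
virCommandPassFD(cmd, fd2, 0);

if (virCommandRunAsync(cmd, NULL) < 0)
    return -1;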

add APIs to Create, Close and Free MultiFD files to associate with multifd channels. Adapt virQEMUSaveFdInit to consider multifd. Signed-off-by: Claudio Fontana <cfontana@suse.de> --- src/qemu/qemu_driver.c | 10 ++-- src/qemu/qemu_saveimage.c | 117 +++++++++++++++++++++++++++++++++++--- src/qemu/qemu_saveimage.h | 17 +++++- 3 files changed, 129 insertions(+), 15 deletions(-) diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index d071df1c81..a03ead960b 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -5869,7 +5869,7 @@ qemuDomainRestoreInternal(virConnectPtr conn, } oflags |= O_DIRECT; } - if (virQEMUSaveFdInit(&saveFd, path, oflags, cfg) < 0) + if (virQEMUSaveFdInit(&saveFd, path, 0, oflags, cfg, false) < 0) return -1; if (qemuSaveImageOpen(driver, NULL, &def, &data, false, &saveFd) < 0) goto cleanup; @@ -6003,7 +6003,7 @@ qemuDomainSaveImageGetXMLDesc(virConnectPtr conn, const char *path, virCheckFlags(VIR_DOMAIN_SAVE_IMAGE_XML_SECURE, NULL); - if (virQEMUSaveFdInit(&saveFd, path, O_RDONLY, cfg) < 0) + if (virQEMUSaveFdInit(&saveFd, path, 0, O_RDONLY, cfg, false) < 0) return NULL; if (qemuSaveImageOpen(driver, NULL, &def, &data, false, &saveFd) < 0) goto cleanup; @@ -6041,7 +6041,7 @@ qemuDomainSaveImageDefineXML(virConnectPtr conn, const char *path, else if (flags & VIR_DOMAIN_SAVE_PAUSED) state = 0; - if (virQEMUSaveFdInit(&saveFd, path, O_RDWR, cfg) < 0) + if (virQEMUSaveFdInit(&saveFd, path, 0, O_RDWR, cfg, false) < 0) return -1; if (qemuSaveImageOpen(driver, NULL, &def, &data, false, &saveFd) < 0) goto cleanup; @@ -6122,7 +6122,7 @@ qemuDomainManagedSaveGetXMLDesc(virDomainPtr dom, unsigned int flags) goto cleanup; } - if (virQEMUSaveFdInit(&saveFd, path, O_RDONLY, cfg) < 0) + if (virQEMUSaveFdInit(&saveFd, path, 0, O_RDONLY, cfg, false) < 0) goto cleanup; if (qemuSaveImageOpen(driver, priv->qemuCaps, &def, &data, false, &saveFd) < 0) @@ -6198,7 +6198,7 @@ qemuDomainObjRestore(virConnectPtr conn, } oflags |= O_DIRECT; } - if (virQEMUSaveFdInit(&saveFd, path, oflags, cfg) < 0) + if (virQEMUSaveFdInit(&saveFd, path, 0, oflags, cfg, false) < 0) goto cleanup; ret = qemuSaveImageOpen(driver, NULL, &def, &data, true, &saveFd); if (ret < 0) { diff --git a/src/qemu/qemu_saveimage.c b/src/qemu/qemu_saveimage.c index df2fc6e879..9fe51b6f13 100644 --- a/src/qemu/qemu_saveimage.c +++ b/src/qemu/qemu_saveimage.c @@ -358,15 +358,17 @@ qemuSaveImageGetCompressionCommand(virQEMUSaveFormat compression) * virQEMUSaveFdInit: initialize a virQEMUSaveFd * * @saveFd: the structure to initialize - * @base: the file name + * @base: the main file name + * @idx: 0 for the main file, > 0 for the multifd channels. * @oflags the file descriptor open flags * @cfg: the driver config + * @parallel: whether parallel save is enabled * * Returns -1 on error, 0 on success, * and in both cases virQEMUSaveFdFini must be called to free resources. 
*/ -int virQEMUSaveFdInit(virQEMUSaveFd *saveFd, const char *base, - int oflags, virQEMUDriverConfig *cfg) +int virQEMUSaveFdInit(virQEMUSaveFd *saveFd, const char *base, int idx, + int oflags, virQEMUDriverConfig *cfg, bool parallel) { unsigned int wrapperFlags = VIR_FILE_WRAPPER_NON_BLOCKING; bool isCreat = oflags & O_CREAT; @@ -374,8 +376,11 @@ int virQEMUSaveFdInit(virQEMUSaveFd *saveFd, const char *base, if (isDirect) wrapperFlags |= VIR_FILE_WRAPPER_BYPASS_CACHE; - - saveFd->path = g_strdup(base); + if (idx > 0) { + saveFd->path = g_strdup_printf("%s.%d", base, idx); + } else { + saveFd->path = g_strdup(base); + } saveFd->wrapper = NULL; if (isCreat) { saveFd->fd = virQEMUFileOpenAs(cfg->user, cfg->group, false, saveFd->path, @@ -386,10 +391,11 @@ int virQEMUSaveFdInit(virQEMUSaveFd *saveFd, const char *base, if (saveFd->fd < 0) return -1; /* + * iohelper Wrapper is never required for multifd parallel save. * For O_CREAT, we always add the wrapper, * and for !O_CREAT, we only add the wrapper if using O_DIRECT. */ - if (isDirect || isCreat) { + if (!parallel && (isDirect || isCreat)) { saveFd->wrapper = virFileWrapperFdNew(&saveFd->fd, saveFd->path, wrapperFlags); if (!saveFd->wrapper) return -1; @@ -466,6 +472,103 @@ int virQEMUSaveFdFini(virQEMUSaveFd *saveFd, virDomainObj *vm, int ret) return ret; } +/* + * qemuSaveImageFreeMultiFd: free all multifd virQEMUSaveFds. + * @multiFd: the array of saveFds + * @vm: the virDomainObj, to release lock + * @nconn: number of multifd channels + * @ret: the current operation result (< 0 is failure) + * + * If multiFd is NULL, the return value will be unchanged. + * + * Returns ret, or -1 if an error is detected. + */ +int qemuSaveImageFreeMultiFd(virQEMUSaveFd *multiFd, virDomainObj *vm, int nconn, int ret) +{ + int idx; + + if (!multiFd) + return ret; + + for (idx = 0; idx < nconn; idx++) { + ret = virQEMUSaveFdFini(&multiFd[idx], vm, ret); + } + /* + * do it again to unlink all in the error case, + * if error happened in the middle of previous loop. + */ + for (idx = 0; idx < nconn; idx++) { + ret = virQEMUSaveFdFini(&multiFd[idx], vm, ret); + } + g_free(multiFd); + return ret; +} + +/* + * qemuSaveImageCloseMultiFd: perform normal close on all multifd virQEMUSaveFds. + * + * @multiFd: the array of saveFds + * @nconn: number of multifd channels + * @vm: the virDomainObj, to release lock + * + * If multiFd is NULL, the function will return success. + * Returns -1 on error, 0 on success. + */ +int qemuSaveImageCloseMultiFd(virQEMUSaveFd *multiFd, int nconn, virDomainObj *vm) +{ + int idx; + + if (!multiFd) + return 0; + + for (idx = 0; idx < nconn; idx++) { + if (virQEMUSaveFdClose(&multiFd[idx], vm) < 0) { + return -1; + } + } + return 0; +} + +/* + * qemuSaveImageCreateMultiFd: allocate and initialize all multifd virQEMUSaveFds. + * + * @driver: qemu driver data + * @vm: the virDomainObj + * @cmd: the existing multifd helper command, to pass each fd as argument. + * @path: pathname of the main file. + * @oflags: the open flags desired, to be passed to virQEMUSaveFdInit. + * @cfg: the driver config + * @nconn: number of channel files to create or open, depending on oflags. + * + * Returns the new array of virQEMUSaveFds, or NULL on error. 
+ */ +virQEMUSaveFd * +qemuSaveImageCreateMultiFd(virQEMUDriver *driver, virDomainObj *vm, + virCommand *cmd, const char *path, + int oflags, virQEMUDriverConfig *cfg, + int nconn) +{ + virQEMUSaveFd *multiFd = g_new0(virQEMUSaveFd, nconn); + int idx; + + for (idx = 0; idx < nconn; idx++) { + virQEMUSaveFd *m = &multiFd[idx]; + if (virQEMUSaveFdInit(m, path, idx + 1, oflags, cfg, true) < 0 || + qemuSecuritySetImageFDLabel(driver->securityManager, vm->def, m->fd) < 0) { + + virQEMUSaveFdFini(m, vm, -1); + goto error; + } + virCommandAddArgFormat(cmd, "%d", m->fd); + virCommandPassFD(cmd, m->fd, 0); + } + return multiFd; + + error: + qemuSaveImageFreeMultiFd(multiFd, vm, nconn, -1); + return NULL; +} + /* Helper function to execute a migration to file with a correct save header * the caller needs to make sure that the processors are stopped and do all other @@ -494,7 +597,7 @@ qemuSaveImageCreate(virQEMUDriver *driver, oflags |= O_DIRECT; } - if (virQEMUSaveFdInit(&saveFd, path, oflags, cfg) < 0) + if (virQEMUSaveFdInit(&saveFd, path, 0, oflags, cfg, false) < 0) goto cleanup; if (qemuSecuritySetImageFDLabel(driver->securityManager, vm->def, saveFd.fd) < 0) goto cleanup; diff --git a/src/qemu/qemu_saveimage.h b/src/qemu/qemu_saveimage.h index 7fc1ad278f..9d66eb40bb 100644 --- a/src/qemu/qemu_saveimage.h +++ b/src/qemu/qemu_saveimage.h @@ -64,14 +64,25 @@ struct _virQEMUSaveFd { #define QEMU_SAVEFD_INVALID (virQEMUSaveFd) { .path = NULL, .fd = -1, .need_unlink = false, .wrapper = NULL } -int virQEMUSaveFdInit(virQEMUSaveFd *saveFd, const char *base, - int oflags, virQEMUDriverConfig *cfg) - ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(4); +int virQEMUSaveFdInit(virQEMUSaveFd *saveFd, const char *base, int idx, + int oflags, virQEMUDriverConfig *cfg, bool parallel) + ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(5); int virQEMUSaveFdClose(virQEMUSaveFd *saveFd, virDomainObj *vm); int virQEMUSaveFdFini(virQEMUSaveFd *saveFd, virDomainObj *vm, int ret); +virQEMUSaveFd * +qemuSaveImageCreateMultiFd(virQEMUDriver *driver, virDomainObj *vm, + virCommand *cmd, const char *path, + int oflags, virQEMUDriverConfig *cfg, + int nconn) + ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2) ATTRIBUTE_NONNULL(3) ATTRIBUTE_NONNULL(4) ATTRIBUTE_NONNULL(6); + +int qemuSaveImageCloseMultiFd(virQEMUSaveFd *multiFd, int nconn, virDomainObj *vm); + +int qemuSaveImageFreeMultiFd(virQEMUSaveFd *multiFd, virDomainObj *vm, int nconn, int ret); + virDomainDef * qemuSaveImageUpdateDef(virQEMUDriver *driver, virDomainDef *def, -- 2.35.3
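Condensed from the API documentation above, the intended lifecycle of these helpers (the real caller is wired up in the next patch; 'cmd', 'path', 'oflags', 'cfg' and 'nconn' are as in qemuSaveImageCreate) is roughly:

virQEMUSaveFd *multiFd = NULL;
int ret = -1;

if (!(multiFd = qemuSaveImageCreateMultiFd(driver, vm, cmd, path,
                                           oflags, cfg, nconn)))
    goto cleanup;

/* ... run the helper and the multifd migration ... */

if (qemuSaveImageCloseMultiFd(multiFd, nconn, vm) < 0)
    goto cleanup;
ret = 0;

 cleanup:
/* on failure this also unlinks the partially written channel files */
ret = qemuSaveImageFreeMultiFd(multiFd, vm, nconn, ret);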

use the multifd helper and the new virQEMUSaveFd APIs for multifd. Signed-off-by: Claudio Fontana <cfontana@suse.de> --- src/qemu/qemu_saveimage.c | 42 ++++++++++++++++++++++++++++++++++++--- 1 file changed, 39 insertions(+), 3 deletions(-) diff --git a/src/qemu/qemu_saveimage.c b/src/qemu/qemu_saveimage.c index 9fe51b6f13..2b56214f90 100644 --- a/src/qemu/qemu_saveimage.c +++ b/src/qemu/qemu_saveimage.c @@ -17,6 +17,7 @@ */ #include <config.h> +#include <configmake.h> #include "qemu_saveimage.h" #include "qemu_domain.h" @@ -585,6 +586,7 @@ qemuSaveImageCreate(virQEMUDriver *driver, { g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver); virQEMUSaveFd saveFd = QEMU_SAVEFD_INVALID; + virQEMUSaveFd *multiFd = NULL; unsigned int oflags = O_WRONLY | O_TRUNC | O_CREAT; int ret = -1; @@ -604,9 +606,42 @@ qemuSaveImageCreate(virQEMUDriver *driver, if (virQEMUSaveDataWrite(data, saveFd.fd, saveFd.path) < 0) goto cleanup; - /* Perform the migration */ - if (qemuMigrationSrcToFile(driver, vm, saveFd.fd, compressor, asyncJob) < 0) - goto cleanup; + if (flags & VIR_DOMAIN_SAVE_PARALLEL) { + g_autoptr(virCommand) cmd = NULL; + g_autofree char *helper_path = NULL; + qemuDomainObjPrivate *priv = vm->privateData; + g_autofree char *sun_path = g_strdup_printf("%s/save-multifd.sock", priv->libDir); + char buf[1]; + int helper_out = -1; + if (!(helper_path = virFileFindResource("libvirt_multifd_helper", + abs_top_builddir "/src", + LIBEXECDIR))) + goto cleanup; + cmd = virCommandNewArgList(helper_path, sun_path, NULL); + virCommandAddArgFormat(cmd, "%d", nconn); + virCommandAddArgFormat(cmd, "%d", saveFd.fd); + virCommandPassFD(cmd, saveFd.fd, 0); + virCommandSetOutputFD(cmd, &helper_out); /* should create pipe automagically */ + + /* Perform parallel multifd migration to files (main fd + channels) */ + if (!(multiFd = qemuSaveImageCreateMultiFd(driver, vm, cmd, saveFd.path, oflags, cfg, nconn))) + goto cleanup; + if (virCommandRunAsync(cmd, NULL) < 0) + goto cleanup; + if (saferead(helper_out, &buf, 1) != 1 || buf[0] != 'R') + goto cleanup; + if (chown(sun_path, cfg->user, cfg->group) < 0) + goto cleanup; + /* still using single fd migration for now */ + if (qemuMigrationSrcToFile(driver, vm, saveFd.fd, compressor, asyncJob) < 0) + goto cleanup; + if (qemuSaveImageCloseMultiFd(multiFd, nconn, vm) < 0) + goto cleanup; + } else { + /* Perform non-parallel migration to file */ + if (qemuMigrationSrcToFile(driver, vm, saveFd.fd, compressor, asyncJob) < 0) + goto cleanup; + } if (virQEMUSaveFdClose(&saveFd, vm) < 0) goto cleanup; @@ -625,6 +660,7 @@ qemuSaveImageCreate(virQEMUDriver *driver, ret = 0; cleanup: + ret = qemuSaveImageFreeMultiFd(multiFd, vm, nconn, ret); ret = virQEMUSaveFdFini(&saveFd, vm, ret); return ret; } -- 2.35.3

Signed-off-by: Claudio Fontana <cfontana@suse.de> Reviewed-by: Ani Sinha <ani@anisinha.ca> --- src/qemu/qemu_capabilities.c | 2 ++ src/qemu/qemu_capabilities.h | 1 + tests/qemucapabilitiesdata/caps_4.0.0.aarch64.xml | 1 + tests/qemucapabilitiesdata/caps_4.0.0.ppc64.xml | 1 + tests/qemucapabilitiesdata/caps_4.0.0.riscv32.xml | 1 + tests/qemucapabilitiesdata/caps_4.0.0.riscv64.xml | 1 + tests/qemucapabilitiesdata/caps_4.0.0.s390x.xml | 1 + tests/qemucapabilitiesdata/caps_4.0.0.x86_64.xml | 1 + tests/qemucapabilitiesdata/caps_4.1.0.x86_64.xml | 1 + tests/qemucapabilitiesdata/caps_4.2.0.aarch64.xml | 1 + tests/qemucapabilitiesdata/caps_4.2.0.ppc64.xml | 1 + tests/qemucapabilitiesdata/caps_4.2.0.s390x.xml | 1 + tests/qemucapabilitiesdata/caps_4.2.0.x86_64.xml | 1 + tests/qemucapabilitiesdata/caps_5.0.0.aarch64.xml | 1 + tests/qemucapabilitiesdata/caps_5.0.0.ppc64.xml | 1 + tests/qemucapabilitiesdata/caps_5.0.0.riscv64.xml | 1 + tests/qemucapabilitiesdata/caps_5.0.0.x86_64.xml | 1 + tests/qemucapabilitiesdata/caps_5.1.0.sparc.xml | 1 + tests/qemucapabilitiesdata/caps_5.1.0.x86_64.xml | 1 + tests/qemucapabilitiesdata/caps_5.2.0.aarch64.xml | 1 + tests/qemucapabilitiesdata/caps_5.2.0.ppc64.xml | 1 + tests/qemucapabilitiesdata/caps_5.2.0.riscv64.xml | 1 + tests/qemucapabilitiesdata/caps_5.2.0.s390x.xml | 1 + tests/qemucapabilitiesdata/caps_5.2.0.x86_64.xml | 1 + tests/qemucapabilitiesdata/caps_6.0.0.aarch64.xml | 1 + tests/qemucapabilitiesdata/caps_6.0.0.s390x.xml | 1 + tests/qemucapabilitiesdata/caps_6.0.0.x86_64.xml | 1 + tests/qemucapabilitiesdata/caps_6.1.0.x86_64.xml | 1 + tests/qemucapabilitiesdata/caps_6.2.0.aarch64.xml | 1 + tests/qemucapabilitiesdata/caps_6.2.0.ppc64.xml | 1 + tests/qemucapabilitiesdata/caps_6.2.0.x86_64.xml | 1 + tests/qemucapabilitiesdata/caps_7.0.0.aarch64.xml | 1 + tests/qemucapabilitiesdata/caps_7.0.0.ppc64.xml | 1 + tests/qemucapabilitiesdata/caps_7.0.0.x86_64.xml | 1 + tests/qemucapabilitiesdata/caps_7.1.0.x86_64.xml | 1 + 35 files changed, 36 insertions(+) diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c index a59d839d85..584a223b9f 100644 --- a/src/qemu/qemu_capabilities.c +++ b/src/qemu/qemu_capabilities.c @@ -675,6 +675,7 @@ VIR_ENUM_IMPL(virQEMUCaps, /* 430 */ "chardev.qemu-vdagent", /* QEMU_CAPS_CHARDEV_QEMU_VDAGENT */ + "migrate-multifd", /* QEMU_CAPS_MIGRATE_MULTIFD */ ); @@ -1233,6 +1234,7 @@ struct virQEMUCapsStringFlags virQEMUCapsCommands[] = { struct virQEMUCapsStringFlags virQEMUCapsMigration[] = { { "rdma-pin-all", QEMU_CAPS_MIGRATE_RDMA }, + { "multifd", QEMU_CAPS_MIGRATE_MULTIFD }, }; /* Use virQEMUCapsQMPSchemaQueries for querying parameters of events */ diff --git a/src/qemu/qemu_capabilities.h b/src/qemu/qemu_capabilities.h index 59c09903f3..69fccc3fce 100644 --- a/src/qemu/qemu_capabilities.h +++ b/src/qemu/qemu_capabilities.h @@ -650,6 +650,7 @@ typedef enum { /* virQEMUCapsFlags grouping marker for syntax-check */ /* 430 */ QEMU_CAPS_CHARDEV_QEMU_VDAGENT, /* -chardev qemu-vdagent */ + QEMU_CAPS_MIGRATE_MULTIFD, /* migrate can set multifd parameter */ QEMU_CAPS_LAST /* this must always be the last item */ } virQEMUCapsFlags; diff --git a/tests/qemucapabilitiesdata/caps_4.0.0.aarch64.xml b/tests/qemucapabilitiesdata/caps_4.0.0.aarch64.xml index 7e0b8fbddf..c3df1b403d 100644 --- a/tests/qemucapabilitiesdata/caps_4.0.0.aarch64.xml +++ b/tests/qemucapabilitiesdata/caps_4.0.0.aarch64.xml @@ -147,6 +147,7 @@ <flag name='input-linux'/> <flag name='query-display-options'/> <flag name='virtio-blk.queue-size'/> + <flag 
name='migrate-multifd'/> <version>4000000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>61700240</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_4.0.0.ppc64.xml b/tests/qemucapabilitiesdata/caps_4.0.0.ppc64.xml index 19bbbd1de3..17b1c6dc29 100644 --- a/tests/qemucapabilitiesdata/caps_4.0.0.ppc64.xml +++ b/tests/qemucapabilitiesdata/caps_4.0.0.ppc64.xml @@ -152,6 +152,7 @@ <flag name='input-linux'/> <flag name='query-display-options'/> <flag name='virtio-blk.queue-size'/> + <flag name='migrate-multifd'/> <version>4000000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>42900240</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_4.0.0.riscv32.xml b/tests/qemucapabilitiesdata/caps_4.0.0.riscv32.xml index ef6f04f54b..8c0ca3cd92 100644 --- a/tests/qemucapabilitiesdata/caps_4.0.0.riscv32.xml +++ b/tests/qemucapabilitiesdata/caps_4.0.0.riscv32.xml @@ -144,6 +144,7 @@ <flag name='input-linux'/> <flag name='query-display-options'/> <flag name='virtio-blk.queue-size'/> + <flag name='migrate-multifd'/> <version>4000000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>0</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_4.0.0.riscv64.xml b/tests/qemucapabilitiesdata/caps_4.0.0.riscv64.xml index 7c65aff290..08b1f87f08 100644 --- a/tests/qemucapabilitiesdata/caps_4.0.0.riscv64.xml +++ b/tests/qemucapabilitiesdata/caps_4.0.0.riscv64.xml @@ -144,6 +144,7 @@ <flag name='input-linux'/> <flag name='query-display-options'/> <flag name='virtio-blk.queue-size'/> + <flag name='migrate-multifd'/> <version>4000000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>0</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_4.0.0.s390x.xml b/tests/qemucapabilitiesdata/caps_4.0.0.s390x.xml index 9b5ed96ba3..a024383b72 100644 --- a/tests/qemucapabilitiesdata/caps_4.0.0.s390x.xml +++ b/tests/qemucapabilitiesdata/caps_4.0.0.s390x.xml @@ -114,6 +114,7 @@ <flag name='input-linux'/> <flag name='query-display-options'/> <flag name='virtio-blk.queue-size'/> + <flag name='migrate-multifd'/> <version>4000000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>39100240</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_4.0.0.x86_64.xml b/tests/qemucapabilitiesdata/caps_4.0.0.x86_64.xml index 3cf6a66389..d110bd0eaf 100644 --- a/tests/qemucapabilitiesdata/caps_4.0.0.x86_64.xml +++ b/tests/qemucapabilitiesdata/caps_4.0.0.x86_64.xml @@ -187,6 +187,7 @@ <flag name='input-linux'/> <flag name='query-display-options'/> <flag name='virtio-blk.queue-size'/> + <flag name='migrate-multifd'/> <version>4000000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>43100240</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_4.1.0.x86_64.xml b/tests/qemucapabilitiesdata/caps_4.1.0.x86_64.xml index 5daa7bda75..50e80aab9f 100644 --- a/tests/qemucapabilitiesdata/caps_4.1.0.x86_64.xml +++ b/tests/qemucapabilitiesdata/caps_4.1.0.x86_64.xml @@ -194,6 +194,7 @@ <flag name='input-linux'/> <flag name='query-display-options'/> <flag name='virtio-blk.queue-size'/> + <flag name='migrate-multifd'/> <version>4001000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>43100241</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_4.2.0.aarch64.xml b/tests/qemucapabilitiesdata/caps_4.2.0.aarch64.xml index 833bf955f9..250bf8a433 100644 --- a/tests/qemucapabilitiesdata/caps_4.2.0.aarch64.xml +++ b/tests/qemucapabilitiesdata/caps_4.2.0.aarch64.xml @@ -162,6 +162,7 @@ <flag name='input-linux'/> <flag name='query-display-options'/> <flag 
name='virtio-blk.queue-size'/> + <flag name='migrate-multifd'/> <version>4001050</version> <kvmVersion>0</kvmVersion> <microcodeVersion>61700242</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_4.2.0.ppc64.xml b/tests/qemucapabilitiesdata/caps_4.2.0.ppc64.xml index 1586f28ca5..06432c25bd 100644 --- a/tests/qemucapabilitiesdata/caps_4.2.0.ppc64.xml +++ b/tests/qemucapabilitiesdata/caps_4.2.0.ppc64.xml @@ -159,6 +159,7 @@ <flag name='input-linux'/> <flag name='query-display-options'/> <flag name='virtio-blk.queue-size'/> + <flag name='migrate-multifd'/> <version>4001050</version> <kvmVersion>0</kvmVersion> <microcodeVersion>42900242</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_4.2.0.s390x.xml b/tests/qemucapabilitiesdata/caps_4.2.0.s390x.xml index ce5b782afd..1bd6f44f16 100644 --- a/tests/qemucapabilitiesdata/caps_4.2.0.s390x.xml +++ b/tests/qemucapabilitiesdata/caps_4.2.0.s390x.xml @@ -127,6 +127,7 @@ <flag name='input-linux'/> <flag name='query-display-options'/> <flag name='virtio-blk.queue-size'/> + <flag name='migrate-multifd'/> <version>4002000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>39100242</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_4.2.0.x86_64.xml b/tests/qemucapabilitiesdata/caps_4.2.0.x86_64.xml index 9ee4c0534d..d96f13b650 100644 --- a/tests/qemucapabilitiesdata/caps_4.2.0.x86_64.xml +++ b/tests/qemucapabilitiesdata/caps_4.2.0.x86_64.xml @@ -205,6 +205,7 @@ <flag name='input-linux'/> <flag name='query-display-options'/> <flag name='virtio-blk.queue-size'/> + <flag name='migrate-multifd'/> <version>4002000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>43100242</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_5.0.0.aarch64.xml b/tests/qemucapabilitiesdata/caps_5.0.0.aarch64.xml index 29ee31473f..2a7f5074bc 100644 --- a/tests/qemucapabilitiesdata/caps_5.0.0.aarch64.xml +++ b/tests/qemucapabilitiesdata/caps_5.0.0.aarch64.xml @@ -174,6 +174,7 @@ <flag name='virtio-blk.queue-size'/> <flag name='memory-backend-file.prealloc-threads'/> <flag name='virtio-iommu-pci'/> + <flag name='migrate-multifd'/> <version>5000000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>61700241</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_5.0.0.ppc64.xml b/tests/qemucapabilitiesdata/caps_5.0.0.ppc64.xml index 868b3b0d0a..3cdb173b01 100644 --- a/tests/qemucapabilitiesdata/caps_5.0.0.ppc64.xml +++ b/tests/qemucapabilitiesdata/caps_5.0.0.ppc64.xml @@ -180,6 +180,7 @@ <flag name='virtio-blk.queue-size'/> <flag name='memory-backend-file.prealloc-threads'/> <flag name='virtio-iommu-pci'/> + <flag name='migrate-multifd'/> <version>5000000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>42900241</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_5.0.0.riscv64.xml b/tests/qemucapabilitiesdata/caps_5.0.0.riscv64.xml index 3b58e7fece..933b4870cd 100644 --- a/tests/qemucapabilitiesdata/caps_5.0.0.riscv64.xml +++ b/tests/qemucapabilitiesdata/caps_5.0.0.riscv64.xml @@ -166,6 +166,7 @@ <flag name='virtio-blk.queue-size'/> <flag name='memory-backend-file.prealloc-threads'/> <flag name='virtio-iommu-pci'/> + <flag name='migrate-multifd'/> <version>5000000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>0</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_5.0.0.x86_64.xml b/tests/qemucapabilitiesdata/caps_5.0.0.x86_64.xml index bee5a84cf9..9bb48e1982 100644 --- a/tests/qemucapabilitiesdata/caps_5.0.0.x86_64.xml +++ b/tests/qemucapabilitiesdata/caps_5.0.0.x86_64.xml @@ 
-214,6 +214,7 @@ <flag name='virtio-blk.queue-size'/> <flag name='memory-backend-file.prealloc-threads'/> <flag name='virtio-iommu-pci'/> + <flag name='migrate-multifd'/> <version>5000000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>43100241</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_5.1.0.sparc.xml b/tests/qemucapabilitiesdata/caps_5.1.0.sparc.xml index e6a3ed5ec0..f01cd25278 100644 --- a/tests/qemucapabilitiesdata/caps_5.1.0.sparc.xml +++ b/tests/qemucapabilitiesdata/caps_5.1.0.sparc.xml @@ -86,6 +86,7 @@ <flag name='input-linux'/> <flag name='query-display-options'/> <flag name='memory-backend-file.prealloc-threads'/> + <flag name='migrate-multifd'/> <version>5001000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>0</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_5.1.0.x86_64.xml b/tests/qemucapabilitiesdata/caps_5.1.0.x86_64.xml index 070f64cb1c..f4446674a4 100644 --- a/tests/qemucapabilitiesdata/caps_5.1.0.x86_64.xml +++ b/tests/qemucapabilitiesdata/caps_5.1.0.x86_64.xml @@ -218,6 +218,7 @@ <flag name='memory-backend-file.prealloc-threads'/> <flag name='virtio-iommu-pci'/> <flag name='virtio-net.rss'/> + <flag name='migrate-multifd'/> <version>5001000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>43100242</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_5.2.0.aarch64.xml b/tests/qemucapabilitiesdata/caps_5.2.0.aarch64.xml index 8e17532f3a..f3e3640657 100644 --- a/tests/qemucapabilitiesdata/caps_5.2.0.aarch64.xml +++ b/tests/qemucapabilitiesdata/caps_5.2.0.aarch64.xml @@ -181,6 +181,7 @@ <flag name='memory-backend-file.prealloc-threads'/> <flag name='virtio-iommu-pci'/> <flag name='virtio-net.rss'/> + <flag name='migrate-multifd'/> <version>5002000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>61700243</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_5.2.0.ppc64.xml b/tests/qemucapabilitiesdata/caps_5.2.0.ppc64.xml index b0b5fe3271..691ffd2caa 100644 --- a/tests/qemucapabilitiesdata/caps_5.2.0.ppc64.xml +++ b/tests/qemucapabilitiesdata/caps_5.2.0.ppc64.xml @@ -185,6 +185,7 @@ <flag name='memory-backend-file.prealloc-threads'/> <flag name='virtio-iommu-pci'/> <flag name='virtio-net.rss'/> + <flag name='migrate-multifd'/> <version>5002000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>42900243</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_5.2.0.riscv64.xml b/tests/qemucapabilitiesdata/caps_5.2.0.riscv64.xml index 7cb4383693..2079257759 100644 --- a/tests/qemucapabilitiesdata/caps_5.2.0.riscv64.xml +++ b/tests/qemucapabilitiesdata/caps_5.2.0.riscv64.xml @@ -171,6 +171,7 @@ <flag name='memory-backend-file.prealloc-threads'/> <flag name='virtio-iommu-pci'/> <flag name='virtio-net.rss'/> + <flag name='migrate-multifd'/> <version>5002000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>0</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_5.2.0.s390x.xml b/tests/qemucapabilitiesdata/caps_5.2.0.s390x.xml index 30d10236e9..d800e35096 100644 --- a/tests/qemucapabilitiesdata/caps_5.2.0.s390x.xml +++ b/tests/qemucapabilitiesdata/caps_5.2.0.s390x.xml @@ -139,6 +139,7 @@ <flag name='memory-backend-file.prealloc-threads'/> <flag name='virtio-iommu-pci'/> <flag name='virtio-net.rss'/> + <flag name='migrate-multifd'/> <version>5002000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>39100243</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_5.2.0.x86_64.xml b/tests/qemucapabilitiesdata/caps_5.2.0.x86_64.xml index cad4ed40e6..f189e299c3 
100644 --- a/tests/qemucapabilitiesdata/caps_5.2.0.x86_64.xml +++ b/tests/qemucapabilitiesdata/caps_5.2.0.x86_64.xml @@ -222,6 +222,7 @@ <flag name='memory-backend-file.prealloc-threads'/> <flag name='virtio-iommu-pci'/> <flag name='virtio-net.rss'/> + <flag name='migrate-multifd'/> <version>5002000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>43100243</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_6.0.0.aarch64.xml b/tests/qemucapabilitiesdata/caps_6.0.0.aarch64.xml index 4b4cc2d3aa..e86a2cf5d6 100644 --- a/tests/qemucapabilitiesdata/caps_6.0.0.aarch64.xml +++ b/tests/qemucapabilitiesdata/caps_6.0.0.aarch64.xml @@ -189,6 +189,7 @@ <flag name='memory-backend-file.prealloc-threads'/> <flag name='virtio-iommu-pci'/> <flag name='virtio-net.rss'/> + <flag name='migrate-multifd'/> <version>6000000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>61700242</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_6.0.0.s390x.xml b/tests/qemucapabilitiesdata/caps_6.0.0.s390x.xml index 06543071aa..bbc5825def 100644 --- a/tests/qemucapabilitiesdata/caps_6.0.0.s390x.xml +++ b/tests/qemucapabilitiesdata/caps_6.0.0.s390x.xml @@ -147,6 +147,7 @@ <flag name='memory-backend-file.prealloc-threads'/> <flag name='virtio-iommu-pci'/> <flag name='virtio-net.rss'/> + <flag name='migrate-multifd'/> <version>6000000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>39100242</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_6.0.0.x86_64.xml b/tests/qemucapabilitiesdata/caps_6.0.0.x86_64.xml index 8c61bf8a84..496dd5564c 100644 --- a/tests/qemucapabilitiesdata/caps_6.0.0.x86_64.xml +++ b/tests/qemucapabilitiesdata/caps_6.0.0.x86_64.xml @@ -231,6 +231,7 @@ <flag name='memory-backend-file.prealloc-threads'/> <flag name='virtio-iommu-pci'/> <flag name='virtio-net.rss'/> + <flag name='migrate-multifd'/> <version>6000000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>43100242</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_6.1.0.x86_64.xml b/tests/qemucapabilitiesdata/caps_6.1.0.x86_64.xml index afd8f606eb..fab5e40e35 100644 --- a/tests/qemucapabilitiesdata/caps_6.1.0.x86_64.xml +++ b/tests/qemucapabilitiesdata/caps_6.1.0.x86_64.xml @@ -236,6 +236,7 @@ <flag name='virtio-iommu-pci'/> <flag name='virtio-net.rss'/> <flag name='chardev.qemu-vdagent'/> + <flag name='migrate-multifd'/> <version>6001000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>43100243</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_6.2.0.aarch64.xml b/tests/qemucapabilitiesdata/caps_6.2.0.aarch64.xml index 86fc46918f..a83b8c8d77 100644 --- a/tests/qemucapabilitiesdata/caps_6.2.0.aarch64.xml +++ b/tests/qemucapabilitiesdata/caps_6.2.0.aarch64.xml @@ -201,6 +201,7 @@ <flag name='virtio-iommu-pci'/> <flag name='virtio-net.rss'/> <flag name='chardev.qemu-vdagent'/> + <flag name='migrate-multifd'/> <version>6001050</version> <kvmVersion>0</kvmVersion> <microcodeVersion>61700244</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_6.2.0.ppc64.xml b/tests/qemucapabilitiesdata/caps_6.2.0.ppc64.xml index d5a1663c15..b8f4aa2744 100644 --- a/tests/qemucapabilitiesdata/caps_6.2.0.ppc64.xml +++ b/tests/qemucapabilitiesdata/caps_6.2.0.ppc64.xml @@ -196,6 +196,7 @@ <flag name='memory-backend-file.prealloc-threads'/> <flag name='virtio-iommu-pci'/> <flag name='virtio-net.rss'/> + <flag name='migrate-multifd'/> <version>6002000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>42900244</microcodeVersion> diff --git 
a/tests/qemucapabilitiesdata/caps_6.2.0.x86_64.xml b/tests/qemucapabilitiesdata/caps_6.2.0.x86_64.xml index 19605d93ae..1293cd1bf1 100644 --- a/tests/qemucapabilitiesdata/caps_6.2.0.x86_64.xml +++ b/tests/qemucapabilitiesdata/caps_6.2.0.x86_64.xml @@ -238,6 +238,7 @@ <flag name='virtio-iommu-pci'/> <flag name='virtio-net.rss'/> <flag name='chardev.qemu-vdagent'/> + <flag name='migrate-multifd'/> <version>6002000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>43100244</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_7.0.0.aarch64.xml b/tests/qemucapabilitiesdata/caps_7.0.0.aarch64.xml index e24e2235fb..16626a1342 100644 --- a/tests/qemucapabilitiesdata/caps_7.0.0.aarch64.xml +++ b/tests/qemucapabilitiesdata/caps_7.0.0.aarch64.xml @@ -209,6 +209,7 @@ <flag name='virtio-iommu.boot-bypass'/> <flag name='virtio-net.rss'/> <flag name='chardev.qemu-vdagent'/> + <flag name='migrate-multifd'/> <version>6002092</version> <kvmVersion>0</kvmVersion> <microcodeVersion>61700243</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_7.0.0.ppc64.xml b/tests/qemucapabilitiesdata/caps_7.0.0.ppc64.xml index 6c51e27f46..9274b6f1d5 100644 --- a/tests/qemucapabilitiesdata/caps_7.0.0.ppc64.xml +++ b/tests/qemucapabilitiesdata/caps_7.0.0.ppc64.xml @@ -213,6 +213,7 @@ <flag name='virtio-iommu.boot-bypass'/> <flag name='virtio-net.rss'/> <flag name='chardev.qemu-vdagent'/> + <flag name='migrate-multifd'/> <version>7000000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>42900243</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_7.0.0.x86_64.xml b/tests/qemucapabilitiesdata/caps_7.0.0.x86_64.xml index 7523b92e6b..5f1f837473 100644 --- a/tests/qemucapabilitiesdata/caps_7.0.0.x86_64.xml +++ b/tests/qemucapabilitiesdata/caps_7.0.0.x86_64.xml @@ -242,6 +242,7 @@ <flag name='virtio-iommu.boot-bypass'/> <flag name='virtio-net.rss'/> <flag name='chardev.qemu-vdagent'/> + <flag name='migrate-multifd'/> <version>7000000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>43100243</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_7.1.0.x86_64.xml b/tests/qemucapabilitiesdata/caps_7.1.0.x86_64.xml index f598950cc3..19684ccaf1 100644 --- a/tests/qemucapabilitiesdata/caps_7.1.0.x86_64.xml +++ b/tests/qemucapabilitiesdata/caps_7.1.0.x86_64.xml @@ -242,6 +242,7 @@ <flag name='virtio-iommu.boot-bypass'/> <flag name='virtio-net.rss'/> <flag name='chardev.qemu-vdagent'/> + <flag name='migrate-multifd'/> <version>7000050</version> <kvmVersion>0</kvmVersion> <microcodeVersion>43100244</microcodeVersion> -- 2.35.3

add both multifd compression and number of multifd channels Signed-off-by: Claudio Fontana <cfontana@suse.de> --- src/qemu/qemu_saveimage.c | 17 +++++++++++++++++ src/qemu/qemu_saveimage.h | 4 +++- 2 files changed, 20 insertions(+), 1 deletion(-) diff --git a/src/qemu/qemu_saveimage.c b/src/qemu/qemu_saveimage.c index 2b56214f90..e2cca4a417 100644 --- a/src/qemu/qemu_saveimage.c +++ b/src/qemu/qemu_saveimage.c @@ -67,6 +67,23 @@ VIR_ENUM_IMPL(qemuSaveCompression, "lzop", ); +typedef enum { + QEMU_SAVE_MULTIFD_COMP_NONE = 0, + QEMU_SAVE_MULTIFD_COMP_ZLIB = 1, + QEMU_SAVE_MULTIFD_COMP_ZSTD = 2, + + /* used for the on-disk format, do not change/re-use numbers */ + QEMU_SAVE_MULTIFD_COMP_LAST +} virQEMUSaveMultiFdComp; + +VIR_ENUM_DECL(qemuSaveMultiFdComp); +VIR_ENUM_IMPL(qemuSaveMultiFdComp, + QEMU_SAVE_MULTIFD_COMP_LAST, + "none", + "zlib", + "zstd", +); + static inline void qemuSaveImageBswapHeader(virQEMUSaveHeader *hdr) { diff --git a/src/qemu/qemu_saveimage.h b/src/qemu/qemu_saveimage.h index 9d66eb40bb..21e335b530 100644 --- a/src/qemu/qemu_saveimage.h +++ b/src/qemu/qemu_saveimage.h @@ -42,7 +42,9 @@ struct _virQEMUSaveHeader { uint32_t was_running; /* 4 bytes */ uint32_t compressed; /* 4 bytes */ uint32_t cookieOffset; /* 4 bytes */ - uint32_t unused[14]; /* 56 bytes */ + uint16_t multifd_channels; /* 2 bytes */ + uint16_t multifd_comp; /* 2 bytes */ + uint32_t unused[13]; /* 52 bytes */ } ATTRIBUTE_PACKED; /* = 92 bytes */ -- 2.35.3
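A short sketch of how the new header fields are meant to be used: the save side records them in a later patch of this series, the restore side reads them back without extra user input, and the compression wiring follows in the multifd-compression patches. 'data' is the virQEMUSaveData used by qemuSaveImageCreate:

/* save side: record channel count and compression in the image header */
data->header.multifd_channels = nconn;
data->header.multifd_comp = qemuSaveMultiFdCompTypeFromString("zstd");

/* restore side: recover them from the header on open */
int nchannels = data->header.multifd_channels;
const char *comp = qemuSaveMultiFdCompTypeToString(data->header.multifd_comp);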

similarly to qemuMigrationParamsSetULL, we need to be able to set fields from qemu_saveimage. Signed-off-by: Claudio Fontana <cfontana@suse.de> --- src/qemu/qemu_migration_params.c | 22 ++++++++++++++++++++++ src/qemu/qemu_migration_params.h | 9 +++++++++ 2 files changed, 31 insertions(+) diff --git a/src/qemu/qemu_migration_params.c b/src/qemu/qemu_migration_params.c index 95fd773645..079a9f844d 100644 --- a/src/qemu/qemu_migration_params.c +++ b/src/qemu/qemu_migration_params.c @@ -1109,6 +1109,28 @@ qemuMigrationParamsFetch(virQEMUDriver *driver, } +void +qemuMigrationParamsSetCap(qemuMigrationParams *migParams, + virQEMUCapsFlags flag) +{ + ignore_value(virBitmapSetBit(migParams->caps, flag)); +} + + +int +qemuMigrationParamsSetInt(qemuMigrationParams *migParams, + qemuMigrationParam param, + int value) +{ + if (qemuMigrationParamsCheckType(param, QEMU_MIGRATION_PARAM_TYPE_INT) < 0) + return -1; + + migParams->params[param].value.i = value; + migParams->params[param].set = true; + return 0; +} + + int qemuMigrationParamsSetULL(qemuMigrationParams *migParams, qemuMigrationParam param, diff --git a/src/qemu/qemu_migration_params.h b/src/qemu/qemu_migration_params.h index a0909b9f3d..271d65c338 100644 --- a/src/qemu/qemu_migration_params.h +++ b/src/qemu/qemu_migration_params.h @@ -123,6 +123,15 @@ qemuMigrationParamsFetch(virQEMUDriver *driver, int asyncJob, qemuMigrationParams **migParams); +void +qemuMigrationParamsSetCap(qemuMigrationParams *migParams, + virQEMUCapsFlags flag); + +int +qemuMigrationParamsSetInt(qemuMigrationParams *migParams, + qemuMigrationParam param, + int value); + int qemuMigrationParamsSetULL(qemuMigrationParams *migParams, qemuMigrationParam param, -- 2.35.3

implement a function similar to qemuMigrationSrcToFile that migrates to multiple files using QEMU multifd, and use it for VIR_DOMAIN_SAVE_PARALLEL saves. Signed-off-by: Claudio Fontana <cfontana@suse.de> --- src/qemu/qemu_migration.c | 131 +++++++++++++++++++++++++------------- src/qemu/qemu_migration.h | 7 ++ src/qemu/qemu_saveimage.c | 11 ++-- 3 files changed, 100 insertions(+), 49 deletions(-) diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c index 25af291dc6..019e7bd299 100644 --- a/src/qemu/qemu_migration.c +++ b/src/qemu/qemu_migration.c @@ -5875,13 +5875,14 @@ qemuMigrationDstFinish(virQEMUDriver *driver, return dom; } - /* Helper function called while vm is active. */ -int -qemuMigrationSrcToFile(virQEMUDriver *driver, virDomainObj *vm, - int fd, - virCommand *compressor, - virDomainAsyncJob asyncJob) +static int +qemuMigrationSrcToFileAux(virQEMUDriver *driver, virDomainObj *vm, + int fd, + virCommand *compressor, + virDomainAsyncJob asyncJob, + const char *sun_path, + int nchannels) { qemuDomainObjPrivate *priv = vm->privateData; bool bwParam = virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_MIGRATION_PARAM_BANDWIDTH); @@ -5892,24 +5893,26 @@ qemuMigrationSrcToFile(virQEMUDriver *driver, virDomainObj *vm, char *errbuf = NULL; virErrorPtr orig_err = NULL; g_autoptr(qemuMigrationParams) migParams = NULL; + bool needParams = (bwParam || sun_path); + if (sun_path && !virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_MIGRATE_MULTIFD)) { + virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s", + _("QEMU does not seem to support multifd migration, required for parallel migration to files")); + return -1; + } if (qemuMigrationSetDBusVMState(driver, vm) < 0) return -1; /* Increase migration bandwidth to unlimited since target is a file. * Failure to change migration speed is not fatal. */ - if (bwParam) { - if (!(migParams = qemuMigrationParamsNew())) - return -1; + if (needParams && !((migParams = qemuMigrationParamsNew()))) + return -1; + if (bwParam) { if (qemuMigrationParamsSetULL(migParams, QEMU_MIGRATION_PARAM_MAX_BANDWIDTH, QEMU_DOMAIN_MIG_BANDWIDTH_MAX * 1024 * 1024) < 0) return -1; - - if (qemuMigrationParamsApply(driver, vm, asyncJob, migParams) < 0) - return -1; - priv->migMaxBandwidth = QEMU_DOMAIN_MIG_BANDWIDTH_MAX; } else { if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) == 0) { @@ -5920,6 +5923,17 @@ qemuMigrationSrcToFile(virQEMUDriver *driver, virDomainObj *vm, } } + if (sun_path) { + qemuMigrationParamsSetCap(migParams, QEMU_MIGRATION_CAP_MULTIFD); + if (qemuMigrationParamsSetInt(migParams, + QEMU_MIGRATION_PARAM_MULTIFD_CHANNELS, + nchannels) < 0) + return -1; + } + + if (needParams && qemuMigrationParamsApply(driver, vm, asyncJob, migParams) < 0) + return -1; + if (!virDomainObjIsActive(vm)) { virReportError(VIR_ERR_INTERNAL_ERROR, "%s", _("guest unexpectedly quit")); @@ -5927,45 +5941,53 @@ qemuMigrationSrcToFile(virQEMUDriver *driver, virDomainObj *vm, return -1; } - if (compressor && virPipe(pipeFD) < 0) + if (!sun_path && compressor && virPipe(pipeFD) < 0) return -1; - /* All right! We can use fd migration, which means that qemu - * doesn't have to open() the file, so while we still have to - * grant SELinux access, we can do it on fd and avoid cleanup - * later, as well as skip futzing with cgroup. */ - if (qemuSecuritySetImageFDLabel(driver->securityManager, vm->def, - compressor ? 
pipeFD[1] : fd) < 0) - goto cleanup; - if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) < 0) goto cleanup; - if (!compressor) { - rc = qemuMonitorMigrateToFd(priv->mon, - QEMU_MONITOR_MIGRATE_BACKGROUND, - fd); + if (sun_path) { + rc = qemuMonitorMigrateToSocket(priv->mon, + QEMU_MONITOR_MIGRATE_BACKGROUND, + sun_path); } else { - virCommandSetInputFD(compressor, pipeFD[0]); - virCommandSetOutputFD(compressor, &fd); - virCommandSetErrorBuffer(compressor, &errbuf); - virCommandDoAsyncIO(compressor); - if (virSetCloseExec(pipeFD[1]) < 0) { - virReportSystemError(errno, "%s", - _("Unable to set cloexec flag")); - qemuDomainObjExitMonitor(vm); - goto cleanup; - } - if (virCommandRunAsync(compressor, NULL) < 0) { - qemuDomainObjExitMonitor(vm); + /* + * All right! We can use fd migration, which means that qemu + * doesn't have to open() the file, so while we still have to + * grant SELinux access, we can do it on fd and avoid cleanup + * later, as well as skip futzing with cgroup. + */ + if (qemuSecuritySetImageFDLabel(driver->securityManager, vm->def, + compressor ? pipeFD[1] : fd) < 0) goto cleanup; + + if (!compressor) { + rc = qemuMonitorMigrateToFd(priv->mon, + QEMU_MONITOR_MIGRATE_BACKGROUND, + fd); + } else { + virCommandSetInputFD(compressor, pipeFD[0]); + virCommandSetOutputFD(compressor, &fd); + virCommandSetErrorBuffer(compressor, &errbuf); + virCommandDoAsyncIO(compressor); + if (virSetCloseExec(pipeFD[1]) < 0) { + virReportSystemError(errno, "%s", + _("Unable to set cloexec flag")); + qemuDomainObjExitMonitor(vm); + goto cleanup; + } + if (virCommandRunAsync(compressor, NULL) < 0) { + qemuDomainObjExitMonitor(vm); + goto cleanup; + } + rc = qemuMonitorMigrateToFd(priv->mon, + QEMU_MONITOR_MIGRATE_BACKGROUND, + pipeFD[1]); + if (VIR_CLOSE(pipeFD[0]) < 0 || + VIR_CLOSE(pipeFD[1]) < 0) + VIR_WARN("failed to close intermediate pipe"); } - rc = qemuMonitorMigrateToFd(priv->mon, - QEMU_MONITOR_MIGRATE_BACKGROUND, - pipeFD[1]); - if (VIR_CLOSE(pipeFD[0]) < 0 || - VIR_CLOSE(pipeFD[1]) < 0) - VIR_WARN("failed to close intermediate pipe"); } qemuDomainObjExitMonitor(vm); if (rc < 0) @@ -5986,7 +6008,7 @@ qemuMigrationSrcToFile(virQEMUDriver *driver, virDomainObj *vm, goto cleanup; } - if (compressor && virCommandWait(compressor, NULL) < 0) + if (!sun_path && compressor && virCommandWait(compressor, NULL) < 0) goto cleanup; qemuDomainEventEmitJobCompleted(driver, vm); @@ -6025,6 +6047,25 @@ qemuMigrationSrcToFile(virQEMUDriver *driver, virDomainObj *vm, return ret; } +int +qemuMigrationSrcToFile(virQEMUDriver *driver, virDomainObj *vm, + int fd, + virCommand *compressor, + virDomainAsyncJob asyncJob) +{ + return qemuMigrationSrcToFileAux(driver, vm, fd, compressor, + asyncJob, NULL, -1); +} + +int +qemuMigrationSrcToFilesMultiFd(virQEMUDriver *driver, virDomainObj *vm, + virDomainAsyncJob asyncJob, + const char *sun_path, + int nchannels) +{ + return qemuMigrationSrcToFileAux(driver, vm, -1, NULL, + asyncJob, sun_path, nchannels); +} int qemuMigrationSrcCancel(virQEMUDriver *driver, diff --git a/src/qemu/qemu_migration.h b/src/qemu/qemu_migration.h index a8afa66119..ddc8e65489 100644 --- a/src/qemu/qemu_migration.h +++ b/src/qemu/qemu_migration.h @@ -213,6 +213,13 @@ qemuMigrationSrcToFile(virQEMUDriver *driver, virDomainAsyncJob asyncJob) ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2) G_GNUC_WARN_UNUSED_RESULT; +int +qemuMigrationSrcToFilesMultiFd(virQEMUDriver *driver, virDomainObj *vm, + virDomainAsyncJob asyncJob, + const char *sun_path, + int nchannels) + ATTRIBUTE_NONNULL(1) 
ATTRIBUTE_NONNULL(2) G_GNUC_WARN_UNUSED_RESULT; + int qemuMigrationSrcCancel(virQEMUDriver *driver, virDomainObj *vm); diff --git a/src/qemu/qemu_saveimage.c b/src/qemu/qemu_saveimage.c index e2cca4a417..92f619a5f1 100644 --- a/src/qemu/qemu_saveimage.c +++ b/src/qemu/qemu_saveimage.c @@ -597,7 +597,7 @@ qemuSaveImageCreate(virQEMUDriver *driver, const char *path, virQEMUSaveData *data, virCommand *compressor, - int nconn G_GNUC_UNUSED, + int nconn, unsigned int flags, virDomainAsyncJob asyncJob) { @@ -616,10 +616,14 @@ qemuSaveImageCreate(virQEMUDriver *driver, oflags |= O_DIRECT; } - if (virQEMUSaveFdInit(&saveFd, path, 0, oflags, cfg, false) < 0) + if (virQEMUSaveFdInit(&saveFd, path, 0, oflags, cfg, flags & VIR_DOMAIN_SAVE_PARALLEL) < 0) goto cleanup; if (qemuSecuritySetImageFDLabel(driver->securityManager, vm->def, saveFd.fd) < 0) goto cleanup; + + if (nconn > 0) + data->header.multifd_channels = nconn; + if (virQEMUSaveDataWrite(data, saveFd.fd, saveFd.path) < 0) goto cleanup; @@ -649,8 +653,7 @@ qemuSaveImageCreate(virQEMUDriver *driver, goto cleanup; if (chown(sun_path, cfg->user, cfg->group) < 0) goto cleanup; - /* still using single fd migration for now */ - if (qemuMigrationSrcToFile(driver, vm, saveFd.fd, compressor, asyncJob) < 0) + if (qemuMigrationSrcToFilesMultiFd(driver, vm, asyncJob, sun_path, nconn) < 0) goto cleanup; if (qemuSaveImageCloseMultiFd(multiFd, nconn, vm) < 0) goto cleanup; -- 2.35.3

The distinction on whether to wait for the migration completion or not was made on the async job type, but with the future addition of multifd migration from files, we need a way to avoid waiting, so we can prepare multifd migration parameters before starting the transfers. Adapt all callers. Signed-off-by: Claudio Fontana <cfontana@suse.de> --- src/qemu/qemu_driver.c | 8 ++++---- src/qemu/qemu_migration.c | 18 ++++++++++-------- src/qemu/qemu_migration.h | 3 ++- src/qemu/qemu_process.c | 3 ++- src/qemu/qemu_process.h | 5 +++-- src/qemu/qemu_saveimage.c | 4 +++- src/qemu/qemu_saveimage.h | 1 + src/qemu/qemu_snapshot.c | 4 ++-- 8 files changed, 27 insertions(+), 19 deletions(-) diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index a03ead960b..4dc106a621 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -1630,7 +1630,7 @@ static virDomainPtr qemuDomainCreateXML(virConnectPtr conn, } if (qemuProcessStart(conn, driver, vm, NULL, VIR_ASYNC_JOB_START, - NULL, -1, NULL, NULL, + NULL, -1, NULL, false, NULL, VIR_NETDEV_VPORT_PROFILE_OP_CREATE, start_flags) < 0) { virDomainAuditStart(vm, "booted", false); @@ -5926,7 +5926,7 @@ qemuDomainRestoreInternal(virConnectPtr conn, goto cleanup; ret = qemuSaveImageStartVM(conn, driver, vm, &saveFd.fd, data, path, - false, reset_nvram, VIR_ASYNC_JOB_START); + false, reset_nvram, true, VIR_ASYNC_JOB_START); qemuProcessEndJob(vm); @@ -6247,7 +6247,7 @@ qemuDomainObjRestore(virConnectPtr conn, virDomainObjAssignDef(vm, &def, true, NULL); ret = qemuSaveImageStartVM(conn, driver, vm, &saveFd.fd, data, path, - start_paused, reset_nvram, asyncJob); + start_paused, reset_nvram, true, asyncJob); cleanup: virQEMUSaveDataFree(data); @@ -6507,7 +6507,7 @@ qemuDomainObjStart(virConnectPtr conn, } ret = qemuProcessStart(conn, driver, vm, NULL, asyncJob, - NULL, -1, NULL, NULL, + NULL, -1, NULL, false, NULL, VIR_NETDEV_VPORT_PROFILE_OP_CREATE, start_flags); virDomainAuditStart(vm, "booted", ret >= 0); if (ret >= 0) { diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c index 019e7bd299..6250b707b3 100644 --- a/src/qemu/qemu_migration.c +++ b/src/qemu/qemu_migration.c @@ -2122,7 +2122,8 @@ int qemuMigrationDstRun(virQEMUDriver *driver, virDomainObj *vm, const char *uri, - virDomainAsyncJob asyncJob) + virDomainAsyncJob asyncJob, + bool wait) { qemuDomainObjPrivate *priv = vm->privateData; int rv; @@ -2143,14 +2144,15 @@ qemuMigrationDstRun(virQEMUDriver *driver, if (rv < 0) return -1; - if (asyncJob == VIR_ASYNC_JOB_MIGRATION_IN) { - /* qemuMigrationDstWaitForCompletion is called from the Finish phase */ - return 0; + if (wait) { + /* + * the Migration Finish phase, as well as the multifd load from files, + * need to call qemuMigrationDstWaitForCompletion separately, not here. 
+ */ + if (qemuMigrationDstWaitForCompletion(driver, vm, asyncJob, false) < 0) + return -1; } - if (qemuMigrationDstWaitForCompletion(driver, vm, asyncJob, false) < 0) - return -1; - return 0; } @@ -3024,7 +3026,7 @@ qemuMigrationDstPrepareAny(virQEMUDriver *driver, } if (qemuMigrationDstRun(driver, vm, incoming->uri, - VIR_ASYNC_JOB_MIGRATION_IN) < 0) + VIR_ASYNC_JOB_MIGRATION_IN, false) < 0) goto stopjob; if (qemuProcessFinishStartup(driver, vm, VIR_ASYNC_JOB_MIGRATION_IN, diff --git a/src/qemu/qemu_migration.h b/src/qemu/qemu_migration.h index ddc8e65489..c3c48c19c0 100644 --- a/src/qemu/qemu_migration.h +++ b/src/qemu/qemu_migration.h @@ -255,7 +255,8 @@ int qemuMigrationDstRun(virQEMUDriver *driver, virDomainObj *vm, const char *uri, - virDomainAsyncJob asyncJob); + virDomainAsyncJob asyncJob, + bool wait); void qemuMigrationAnyPostcopyFailed(virQEMUDriver *driver, diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c index fd4db43a42..7f3bfbdbbd 100644 --- a/src/qemu/qemu_process.c +++ b/src/qemu/qemu_process.c @@ -7793,6 +7793,7 @@ qemuProcessStart(virConnectPtr conn, const char *migrateFrom, int migrateFd, const char *migratePath, + bool wait_incoming, virDomainMomentObj *snapshot, virNetDevVPortProfileOp vmop, unsigned int flags) @@ -7855,7 +7856,7 @@ qemuProcessStart(virConnectPtr conn, relabel = true; if (incoming) { - if (qemuMigrationDstRun(driver, vm, incoming->uri, asyncJob) < 0) + if (qemuMigrationDstRun(driver, vm, incoming->uri, asyncJob, wait_incoming) < 0) goto stop; } else { /* Refresh state of devices from QEMU. During migration this happens diff --git a/src/qemu/qemu_process.h b/src/qemu/qemu_process.h index f81bfd930a..5a1d005cb0 100644 --- a/src/qemu/qemu_process.h +++ b/src/qemu/qemu_process.h @@ -86,8 +86,9 @@ int qemuProcessStart(virConnectPtr conn, virCPUDef *updatedCPU, virDomainAsyncJob asyncJob, const char *migrateFrom, - int stdin_fd, - const char *stdin_path, + int fd, + const char *migratePath, + bool wait_incoming, virDomainMomentObj *snapshot, virNetDevVPortProfileOp vmop, unsigned int flags); diff --git a/src/qemu/qemu_saveimage.c b/src/qemu/qemu_saveimage.c index 92f619a5f1..c652293a02 100644 --- a/src/qemu/qemu_saveimage.c +++ b/src/qemu/qemu_saveimage.c @@ -823,6 +823,7 @@ qemuSaveImageStartVM(virConnectPtr conn, const char *path, bool start_paused, bool reset_nvram, + bool wait_incoming, virDomainAsyncJob asyncJob) { qemuDomainObjPrivate *priv = vm->privateData; @@ -877,7 +878,8 @@ qemuSaveImageStartVM(virConnectPtr conn, priv->disableSlirp = true; if (qemuProcessStart(conn, driver, vm, cookie ? cookie->cpu : NULL, - asyncJob, "stdio", *fd, path, NULL, + asyncJob, "stdio", *fd, path, wait_incoming, + NULL, VIR_NETDEV_VPORT_PROFILE_OP_RESTORE, start_flags) == 0) started = true; diff --git a/src/qemu/qemu_saveimage.h b/src/qemu/qemu_saveimage.h index 21e335b530..952c5cd58a 100644 --- a/src/qemu/qemu_saveimage.h +++ b/src/qemu/qemu_saveimage.h @@ -99,6 +99,7 @@ qemuSaveImageStartVM(virConnectPtr conn, const char *path, bool start_paused, bool reset_nvram, + bool wait_incoming, virDomainAsyncJob asyncJob) ATTRIBUTE_NONNULL(4) ATTRIBUTE_NONNULL(5) ATTRIBUTE_NONNULL(6); diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c index 2e445e8296..626a5a14b9 100644 --- a/src/qemu/qemu_snapshot.c +++ b/src/qemu/qemu_snapshot.c @@ -2092,7 +2092,7 @@ qemuSnapshotRevertActive(virDomainObj *vm, rc = qemuProcessStart(snapshot->domain->conn, driver, vm, cookie ? 
cookie->cpu : NULL, - VIR_ASYNC_JOB_START, NULL, -1, NULL, snap, + VIR_ASYNC_JOB_START, NULL, -1, NULL, false, snap, VIR_NETDEV_VPORT_PROFILE_OP_CREATE, start_flags); virDomainAuditStart(vm, "from-snapshot", rc >= 0); @@ -2215,7 +2215,7 @@ qemuSnapshotRevertInactive(virDomainObj *vm, start_flags |= paused ? VIR_QEMU_PROCESS_START_PAUSED : 0; rc = qemuProcessStart(snapshot->domain->conn, driver, vm, NULL, - VIR_ASYNC_JOB_START, NULL, -1, NULL, NULL, + VIR_ASYNC_JOB_START, NULL, -1, NULL, false, NULL, VIR_NETDEV_VPORT_PROFILE_OP_CREATE, start_flags); virDomainAuditStart(vm, "from-snapshot", rc >= 0); -- 2.35.3

use multifd to restore parallel images, if VIR_DOMAIN_SAVE_PARALLEL is enabled. Signed-off-by: Claudio Fontana <cfontana@suse.de> --- src/qemu/qemu_driver.c | 16 ++++- src/qemu/qemu_migration.c | 2 +- src/qemu/qemu_migration.h | 6 ++ src/qemu/qemu_saveimage.c | 119 +++++++++++++++++++++++++++++++++++++- src/qemu/qemu_saveimage.h | 8 ++- 5 files changed, 144 insertions(+), 7 deletions(-) diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 4dc106a621..d1dbf8f7ab 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -5869,7 +5869,7 @@ qemuDomainRestoreInternal(virConnectPtr conn, } oflags |= O_DIRECT; } - if (virQEMUSaveFdInit(&saveFd, path, 0, oflags, cfg, false) < 0) + if (virQEMUSaveFdInit(&saveFd, path, 0, oflags, cfg, flags & VIR_DOMAIN_SAVE_PARALLEL) < 0) return -1; if (qemuSaveImageOpen(driver, NULL, &def, &data, false, &saveFd) < 0) goto cleanup; @@ -5925,8 +5925,18 @@ qemuDomainRestoreInternal(virConnectPtr conn, flags) < 0) goto cleanup; - ret = qemuSaveImageStartVM(conn, driver, vm, &saveFd.fd, data, path, - false, reset_nvram, true, VIR_ASYNC_JOB_START); + if (flags & VIR_DOMAIN_SAVE_PARALLEL) { + ret = qemuSaveImageLoadMultiFd(conn, vm, oflags, data, reset_nvram, + &saveFd, VIR_ASYNC_JOB_START); + + } else if (data->header.multifd_channels != 0) { + virReportError(VIR_ERR_OPERATION_FAILED, "%s", + _("save file seems to contain multifd channels information, and restore is not flagged as 'parallel'")); + ret = -1; + } else { + ret = qemuSaveImageStartVM(conn, driver, vm, &saveFd.fd, data, path, + false, reset_nvram, true, VIR_ASYNC_JOB_START); + } qemuProcessEndJob(vm); diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c index 6250b707b3..c4e1837419 100644 --- a/src/qemu/qemu_migration.c +++ b/src/qemu/qemu_migration.c @@ -1920,7 +1920,7 @@ qemuMigrationSrcWaitForCompletion(virQEMUDriver *driver, } -static int +int qemuMigrationDstWaitForCompletion(virQEMUDriver *driver, virDomainObj *vm, virDomainAsyncJob asyncJob, diff --git a/src/qemu/qemu_migration.h b/src/qemu/qemu_migration.h index c3c48c19c0..38f4877cf0 100644 --- a/src/qemu/qemu_migration.h +++ b/src/qemu/qemu_migration.h @@ -191,6 +191,12 @@ qemuMigrationDstFinish(virQEMUDriver *driver, int retcode, bool v3proto); +int +qemuMigrationDstWaitForCompletion(virQEMUDriver *driver, + virDomainObj *vm, + virDomainAsyncJob asyncJob, + bool postcopy); + int qemuMigrationSrcConfirm(virQEMUDriver *driver, virDomainObj *vm, diff --git a/src/qemu/qemu_saveimage.c b/src/qemu/qemu_saveimage.c index c652293a02..7becaa5c25 100644 --- a/src/qemu/qemu_saveimage.c +++ b/src/qemu/qemu_saveimage.c @@ -686,6 +686,114 @@ qemuSaveImageCreate(virQEMUDriver *driver, } +int qemuSaveImageLoadMultiFd(virConnectPtr conn, virDomainObj *vm, int oflags, + virQEMUSaveData *data, bool reset_nvram, + virQEMUSaveFd *saveFd, virDomainAsyncJob asyncJob) +{ + virQEMUDriver *driver = conn->privateData; + qemuDomainObjPrivate *priv = vm->privateData; + virQEMUSaveFd *multiFd = NULL; + g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver); + g_autoptr(virCommand) cmd = NULL; + g_autofree char *helper_path = NULL; + g_autofree char *sun_path = g_strdup_printf("%s/restore-multifd.sock", cfg->saveDir); + bool qemu_started = false; + int ret = -1; + int nchannels = data->header.multifd_channels; + + if (!(helper_path = virFileFindResource("libvirt_multifd_helper", + abs_top_builddir "/src", + LIBEXECDIR))) + goto cleanup; + cmd = virCommandNewArgList(helper_path, sun_path, NULL); + virCommandAddArgFormat(cmd, 
"%d", nchannels); + virCommandAddArgFormat(cmd, "%d", saveFd->fd); + virCommandPassFD(cmd, saveFd->fd, 0); + + /* Perform parallel multifd migration from files (main fd + channels) */ + if (!(multiFd = qemuSaveImageCreateMultiFd(driver, vm, cmd, saveFd->path, oflags, cfg, nchannels))) + goto cleanup; + if (qemuSaveImageStartVM(conn, driver, vm, NULL, data, sun_path, + false, reset_nvram, false, asyncJob) < 0) + goto cleanup; + + qemu_started = true; + + if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_MIGRATE_MULTIFD)) { + virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s", + _("QEMU does not seem to support multifd migration, required for parallel migration from files")); + goto cleanup; + } else { + g_autoptr(qemuMigrationParams) migParams = qemuMigrationParamsNew(); + bool bwParam = virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_MIGRATION_PARAM_BANDWIDTH); + + if (bwParam) { + if (qemuMigrationParamsSetULL(migParams, + QEMU_MIGRATION_PARAM_MAX_BANDWIDTH, + QEMU_DOMAIN_MIG_BANDWIDTH_MAX * 1024 * 1024) < 0) + goto cleanup; + priv->migMaxBandwidth = QEMU_DOMAIN_MIG_BANDWIDTH_MAX; + } else { + if (qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob) == 0) { + qemuMonitorSetMigrationSpeed(priv->mon, + QEMU_DOMAIN_MIG_BANDWIDTH_MAX); + priv->migMaxBandwidth = QEMU_DOMAIN_MIG_BANDWIDTH_MAX; + qemuDomainObjExitMonitor(vm); + } + } + qemuMigrationParamsSetCap(migParams, QEMU_MIGRATION_CAP_MULTIFD); + if (qemuMigrationParamsSetInt(migParams, + QEMU_MIGRATION_PARAM_MULTIFD_CHANNELS, + nchannels) < 0) + goto cleanup; + if (qemuMigrationParamsApply(driver, vm, asyncJob, migParams) < 0) + goto cleanup; + + if (!virDomainObjIsActive(vm)) { + virReportError(VIR_ERR_INTERNAL_ERROR, "%s", + _("guest unexpectedly quit")); + goto cleanup; + } + /* multifd helper can now connect, then wait for migration to complete */ + if (virCommandRunAsync(cmd, NULL) < 0) + goto cleanup; + + if (qemuMigrationDstWaitForCompletion(driver, vm, asyncJob, false) < 0) + goto cleanup; + + if (qemuSaveImageCloseMultiFd(multiFd, nchannels, vm) < 0) + goto cleanup; + + if (qemuProcessRefreshState(driver, vm, asyncJob) < 0) + goto cleanup; + + /* run 'cont' on the destination */ + if (qemuProcessStartCPUs(driver, vm, + VIR_DOMAIN_RUNNING_RESTORED, + asyncJob) < 0) { + if (virGetLastErrorCode() == VIR_ERR_OK) + virReportError(VIR_ERR_OPERATION_FAILED, + "%s", _("failed to resume domain")); + goto cleanup; + } + if (virDomainObjSave(vm, driver->xmlopt, cfg->stateDir) < 0) { + VIR_WARN("Failed to save status on vm %s", vm->def->name); + goto cleanup; + } + } + qemuDomainEventEmitJobCompleted(driver, vm); + ret = 0; + + cleanup: + if (ret < 0 && qemu_started) { + qemuProcessStop(driver, vm, VIR_DOMAIN_SHUTOFF_FAILED, + asyncJob, VIR_QEMU_PROCESS_STOP_MIGRATED); + } + ret = qemuSaveImageFreeMultiFd(multiFd, vm, nchannels, ret); + return ret; +} + + /* qemuSaveImageGetCompressionProgram: * @imageFormat: String representation from qemu.conf for the compression * image format being used (dump, save, or snapshot). 
@@ -831,6 +939,7 @@ qemuSaveImageStartVM(virConnectPtr conn, bool started = false; virObjectEvent *event; VIR_AUTOCLOSE intermediatefd = -1; + g_autofree char *migrate_from = NULL; g_autoptr(virCommand) cmd = NULL; g_autofree char *errbuf = NULL; g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver); @@ -877,8 +986,14 @@ qemuSaveImageStartVM(virConnectPtr conn, if (cookie && !cookie->slirpHelper) priv->disableSlirp = true; + if (fd) { + migrate_from = g_strdup("stdio"); + } else { + migrate_from = g_strdup_printf("unix://%s", path); + } + if (qemuProcessStart(conn, driver, vm, cookie ? cookie->cpu : NULL, - asyncJob, "stdio", *fd, path, wait_incoming, + asyncJob, migrate_from, fd ? *fd : -1, path, wait_incoming, NULL, VIR_NETDEV_VPORT_PROFILE_OP_RESTORE, start_flags) == 0) @@ -902,7 +1017,7 @@ qemuSaveImageStartVM(virConnectPtr conn, VIR_DEBUG("Decompression binary stderr: %s", NULLSTR(errbuf)); virErrorRestore(&orig_err); } - if (VIR_CLOSE(*fd) < 0) { + if (fd && VIR_CLOSE(*fd) < 0) { virReportSystemError(errno, _("cannot close file: %s"), path); rc = -1; } diff --git a/src/qemu/qemu_saveimage.h b/src/qemu/qemu_saveimage.h index 952c5cd58a..99cc9a81a9 100644 --- a/src/qemu/qemu_saveimage.h +++ b/src/qemu/qemu_saveimage.h @@ -101,7 +101,7 @@ qemuSaveImageStartVM(virConnectPtr conn, bool reset_nvram, bool wait_incoming, virDomainAsyncJob asyncJob) - ATTRIBUTE_NONNULL(4) ATTRIBUTE_NONNULL(5) ATTRIBUTE_NONNULL(6); + ATTRIBUTE_NONNULL(5) ATTRIBUTE_NONNULL(6); int qemuSaveImageOpen(virQEMUDriver *driver, @@ -119,6 +119,12 @@ qemuSaveImageGetCompressionProgram(const char *imageFormat, bool use_raw_on_fail) ATTRIBUTE_NONNULL(2); +int qemuSaveImageLoadMultiFd(virConnectPtr conn, virDomainObj *vm, int oflags, + virQEMUSaveData *data, bool reset_nvram, + virQEMUSaveFd *saveFd, virDomainAsyncJob asyncJob) + ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2) ATTRIBUTE_NONNULL(4) + ATTRIBUTE_NONNULL(6) G_GNUC_WARN_UNUSED_RESULT; + int qemuSaveImageCreate(virQEMUDriver *driver, virDomainObj *vm, -- 2.35.3

Signed-off-by: Claudio Fontana <cfontana@suse.de> --- docs/manpages/virsh.rst | 23 +++++++++++++++++------ tools/virsh-domain.c | 24 ++++++++++++++++++++++-- 2 files changed, 39 insertions(+), 8 deletions(-) diff --git a/docs/manpages/virsh.rst b/docs/manpages/virsh.rst index e73e590754..e9012b85d1 100644 --- a/docs/manpages/virsh.rst +++ b/docs/manpages/virsh.rst @@ -3803,15 +3803,18 @@ save :: save domain state-file [--bypass-cache] [--xml file] + [--parallel] [--parallel-connections connections] [{--running | --paused}] [--verbose] -Saves a running domain (RAM, but not disk state) to a state file so that -it can be restored -later. Once saved, the domain will no longer be running on the -system, thus the memory allocated for the domain will be free for -other domains to use. ``virsh restore`` restores from this state file. +Saves a paused or running domain (RAM, but not disk state) to one or more +state files, so that it can be restored later. +Once saved, the domain will no longer be running on the system, +thus the memory allocated for the domain will be free for +other domains to use. ``virsh restore`` restores from state file/s. + If *--bypass-cache* is specified, the save will avoid the file system -cache, although this may slow down the operation. +cache; depending on the specific scenario this may slow down or speed up +the operation. The progress may be monitored using ``domjobinfo`` virsh command and canceled with ``domjobabort`` command (sent by another virsh instance). Another option @@ -3833,6 +3836,14 @@ based on the state the domain was in when the save was done; passing either the *--running* or *--paused* flag will allow overriding which state the ``restore`` should use. +*--parallel* option will cause the save data to be sent over multiple +parallel connections to multiple files. The main save file is specified +with ``state-file``, and a number of additional connections can be +set using *--parallel-connections*, which will save to files named +``state-file``.1 , ``state-file``.2 ... up to ``connections``. + +Parallel connections may help in speeding up the save operation. + Domain saved state files assume that disk images will be unchanged between the creation and restore point. 
For a more complete system restore point, where the disk state is saved alongside the memory diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c index 8a3c9d53d4..85d18c99a8 100644 --- a/tools/virsh-domain.c +++ b/tools/virsh-domain.c @@ -4174,6 +4174,14 @@ static const vshCmdOptDef opts_save[] = { .type = VSH_OT_BOOL, .help = N_("avoid file system cache when saving") }, + {.name = "parallel", + .type = VSH_OT_BOOL, + .help = N_("enable parallel save to files") + }, + {.name = "parallel-connections", + .type = VSH_OT_INT, + .help = N_("number of connections/files for parallel save") + }, {.name = "xml", .type = VSH_OT_STRING, .completer = virshCompletePathLocalExisting, @@ -4206,6 +4214,8 @@ doSave(void *opaque) virTypedParameterPtr params = NULL; int nparams = 0; int maxparams = 0; + int intOpt = 0; + int rv = -1; unsigned int flags = 0; const char *xmlfile = NULL; g_autofree char *xml = NULL; @@ -4227,6 +4237,15 @@ doSave(void *opaque) } if (vshCommandOptBool(cmd, "bypass-cache")) flags |= VIR_DOMAIN_SAVE_BYPASS_CACHE; + if (vshCommandOptBool(cmd, "parallel")) + flags |= VIR_DOMAIN_SAVE_PARALLEL; + if ((rv = vshCommandOptInt(ctl, cmd, "parallel-connections", &intOpt)) < 0) { + goto out; + } else if (rv > 0) { + if (virTypedParamsAddInt(¶ms, &nparams, &maxparams, + VIR_DOMAIN_SAVE_PARAM_PARALLEL_CONNECTIONS, intOpt) < 0) + goto out; + } if (vshCommandOptBool(cmd, "running")) flags |= VIR_DOMAIN_SAVE_RUNNING; if (vshCommandOptBool(cmd, "paused")) @@ -4247,8 +4266,9 @@ doSave(void *opaque) goto out; } } - - if (flags || xml) { + if (flags & VIR_DOMAIN_SAVE_PARALLEL) { + rc = virDomainSaveParams(dom, params, nparams, flags); + } else if (flags || xml) { rc = virDomainSaveFlags(dom, to, xml, flags); } else { rc = virDomainSave(dom, to); -- 2.35.3
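
As a usage illustration (domain name and path are arbitrary examples), a parallel save with 4 channels using the options added here would be:

    virsh save guest1 /var/lib/libvirt/images/guest1.sav --parallel --parallel-connections 4

Per the documentation above, this writes the main image to guest1.sav and the additional channel data to guest1.sav.1 through guest1.sav.4.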

Signed-off-by: Claudio Fontana <cfontana@suse.de> --- docs/manpages/virsh.rst | 12 ++++++++++-- tools/virsh-domain.c | 10 +++++++++- 2 files changed, 19 insertions(+), 3 deletions(-) diff --git a/docs/manpages/virsh.rst b/docs/manpages/virsh.rst index e9012b85d1..dee748d870 100644 --- a/docs/manpages/virsh.rst +++ b/docs/manpages/virsh.rst @@ -3754,12 +3754,13 @@ restore :: restore state-file [--bypass-cache] [--xml file] - [{--running | --paused}] [--reset-nvram] + [{--running | --paused}] [--reset-nvram] [--parallel] Restores a domain from a ``virsh save`` state file. See *save* for more info. If *--bypass-cache* is specified, the restore will avoid the file system -cache, although this may slow down the operation. +cache; depending on the specific scenario this may slow down or speed up +the operation. *--xml* ``file`` is usually omitted, but can be used to supply an alternative XML file for use on the restored guest with changes only @@ -3775,6 +3776,13 @@ domain should be started in. If *--reset-nvram* is specified, any existing NVRAM file will be deleted and re-initialized from its pristine template. +*--parallel* option will cause the save data to be loaded from multiple +state files over parallel connections. The main save file is specified +with ``state-file``, and the state file itself contains the number of +additional channels (files) to load. + +Parallel connections may help in speeding up the restore operation. + ``Note``: To avoid corrupting file system contents within the domain, you should not reuse the saved state file for a second ``restore`` unless you have also reverted all storage volumes back to the same contents as when diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c index 85d18c99a8..9103d6ed65 100644 --- a/tools/virsh-domain.c +++ b/tools/virsh-domain.c @@ -5312,6 +5312,10 @@ static const vshCmdOptDef opts_restore[] = { .type = VSH_OT_BOOL, .help = N_("avoid file system cache when restoring") }, + {.name = "parallel", + .type = VSH_OT_BOOL, + .help = N_("enable parallel restore") + }, {.name = "xml", .type = VSH_OT_STRING, .completer = virshCompletePathLocalExisting, @@ -5353,6 +5357,8 @@ cmdRestore(vshControl *ctl, const vshCmd *cmd) } if (vshCommandOptBool(cmd, "bypass-cache")) flags |= VIR_DOMAIN_SAVE_BYPASS_CACHE; + if (vshCommandOptBool(cmd, "parallel")) + flags |= VIR_DOMAIN_SAVE_PARALLEL; if (vshCommandOptBool(cmd, "running")) flags |= VIR_DOMAIN_SAVE_RUNNING; if (vshCommandOptBool(cmd, "paused")) @@ -5371,7 +5377,9 @@ cmdRestore(vshControl *ctl, const vshCmd *cmd) goto out; } - if (flags || xml) { + if (flags & VIR_DOMAIN_SAVE_PARALLEL) { + rc = virDomainRestoreParams(priv->conn, params, nparams, flags); + } else if (flags || xml) { rc = virDomainRestoreFlags(priv->conn, from, xml, flags); } else { rc = virDomainRestore(priv->conn, from); -- 2.35.3
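
Correspondingly, restoring the image produced in the previous example (same hypothetical path) would be:

    virsh restore /var/lib/libvirt/images/guest1.sav --parallel

No --parallel-connections option is needed on restore, since the number of additional channel files is recorded in the main save file header and read back from there.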

add it to both capabilities and migration parameters Signed-off-by: Claudio Fontana <cfontana@suse.de> --- src/qemu/qemu_capabilities.c | 2 ++ src/qemu/qemu_capabilities.h | 1 + src/qemu/qemu_migration_params.c | 2 ++ src/qemu/qemu_migration_params.h | 1 + tests/qemucapabilitiesdata/caps_5.0.0.aarch64.xml | 1 + tests/qemucapabilitiesdata/caps_5.0.0.ppc64.xml | 1 + tests/qemucapabilitiesdata/caps_5.0.0.riscv64.xml | 1 + tests/qemucapabilitiesdata/caps_5.0.0.x86_64.xml | 1 + tests/qemucapabilitiesdata/caps_5.1.0.sparc.xml | 1 + tests/qemucapabilitiesdata/caps_5.1.0.x86_64.xml | 1 + tests/qemucapabilitiesdata/caps_5.2.0.aarch64.xml | 1 + tests/qemucapabilitiesdata/caps_5.2.0.ppc64.xml | 1 + tests/qemucapabilitiesdata/caps_5.2.0.riscv64.xml | 1 + tests/qemucapabilitiesdata/caps_5.2.0.s390x.xml | 1 + tests/qemucapabilitiesdata/caps_5.2.0.x86_64.xml | 1 + tests/qemucapabilitiesdata/caps_6.0.0.aarch64.xml | 1 + tests/qemucapabilitiesdata/caps_6.0.0.s390x.xml | 1 + tests/qemucapabilitiesdata/caps_6.0.0.x86_64.xml | 1 + tests/qemucapabilitiesdata/caps_6.1.0.x86_64.xml | 1 + tests/qemucapabilitiesdata/caps_6.2.0.aarch64.xml | 1 + tests/qemucapabilitiesdata/caps_6.2.0.ppc64.xml | 1 + tests/qemucapabilitiesdata/caps_6.2.0.x86_64.xml | 1 + tests/qemucapabilitiesdata/caps_7.0.0.aarch64.xml | 1 + tests/qemucapabilitiesdata/caps_7.0.0.ppc64.xml | 1 + tests/qemucapabilitiesdata/caps_7.0.0.x86_64.xml | 1 + tests/qemucapabilitiesdata/caps_7.1.0.x86_64.xml | 1 + 26 files changed, 28 insertions(+) diff --git a/src/qemu/qemu_capabilities.c b/src/qemu/qemu_capabilities.c index 584a223b9f..746fc36ee2 100644 --- a/src/qemu/qemu_capabilities.c +++ b/src/qemu/qemu_capabilities.c @@ -676,6 +676,7 @@ VIR_ENUM_IMPL(virQEMUCaps, /* 430 */ "chardev.qemu-vdagent", /* QEMU_CAPS_CHARDEV_QEMU_VDAGENT */ "migrate-multifd", /* QEMU_CAPS_MIGRATE_MULTIFD */ + "migration-param.multifd-compression", /* QEMU_CAPS_MIGRATION_PARAM_MULTIFD_COMPRESSION */ ); @@ -1612,6 +1613,7 @@ static struct virQEMUCapsStringFlags virQEMUCapsQMPSchemaQueries[] = { { "migrate-set-parameters/arg-type/downtime-limit", QEMU_CAPS_MIGRATION_PARAM_DOWNTIME }, { "migrate-set-parameters/arg-type/xbzrle-cache-size", QEMU_CAPS_MIGRATION_PARAM_XBZRLE_CACHE_SIZE }, { "migrate-set-parameters/arg-type/block-bitmap-mapping/bitmaps/transform", QEMU_CAPS_MIGRATION_PARAM_BLOCK_BITMAP_MAPPING }, + { "migrate-set-parameters/arg-type/multifd-compression", QEMU_CAPS_MIGRATION_PARAM_MULTIFD_COMPRESSION }, { "nbd-server-start/arg-type/tls-creds", QEMU_CAPS_NBD_TLS }, { "nbd-server-add/arg-type/bitmap", QEMU_CAPS_NBD_BITMAP }, { "netdev_add/arg-type/+vhost-vdpa", QEMU_CAPS_NETDEV_VHOST_VDPA }, diff --git a/src/qemu/qemu_capabilities.h b/src/qemu/qemu_capabilities.h index 69fccc3fce..d6a4993071 100644 --- a/src/qemu/qemu_capabilities.h +++ b/src/qemu/qemu_capabilities.h @@ -651,6 +651,7 @@ typedef enum { /* virQEMUCapsFlags grouping marker for syntax-check */ /* 430 */ QEMU_CAPS_CHARDEV_QEMU_VDAGENT, /* -chardev qemu-vdagent */ QEMU_CAPS_MIGRATE_MULTIFD, /* migrate can set multifd parameter */ + QEMU_CAPS_MIGRATION_PARAM_MULTIFD_COMPRESSION, /* multifd-compression in migrate-set-parameters */ QEMU_CAPS_LAST /* this must always be the last item */ } virQEMUCapsFlags; diff --git a/src/qemu/qemu_migration_params.c b/src/qemu/qemu_migration_params.c index 079a9f844d..e107681ba7 100644 --- a/src/qemu/qemu_migration_params.c +++ b/src/qemu/qemu_migration_params.c @@ -115,6 +115,7 @@ VIR_ENUM_IMPL(qemuMigrationParam, "xbzrle-cache-size", "max-postcopy-bandwidth", "multifd-channels", 
+ "multifd-compression", ); typedef struct _qemuMigrationParamsAlwaysOnItem qemuMigrationParamsAlwaysOnItem; @@ -234,6 +235,7 @@ static const qemuMigrationParamType qemuMigrationParamTypes[] = { [QEMU_MIGRATION_PARAM_XBZRLE_CACHE_SIZE] = QEMU_MIGRATION_PARAM_TYPE_ULL, [QEMU_MIGRATION_PARAM_MAX_POSTCOPY_BANDWIDTH] = QEMU_MIGRATION_PARAM_TYPE_ULL, [QEMU_MIGRATION_PARAM_MULTIFD_CHANNELS] = QEMU_MIGRATION_PARAM_TYPE_INT, + [QEMU_MIGRATION_PARAM_MULTIFD_COMPRESSION] = QEMU_MIGRATION_PARAM_TYPE_STRING, }; G_STATIC_ASSERT(G_N_ELEMENTS(qemuMigrationParamTypes) == QEMU_MIGRATION_PARAM_LAST); diff --git a/src/qemu/qemu_migration_params.h b/src/qemu/qemu_migration_params.h index 271d65c338..f2e0a0d9f2 100644 --- a/src/qemu/qemu_migration_params.h +++ b/src/qemu/qemu_migration_params.h @@ -60,6 +60,7 @@ typedef enum { QEMU_MIGRATION_PARAM_XBZRLE_CACHE_SIZE, QEMU_MIGRATION_PARAM_MAX_POSTCOPY_BANDWIDTH, QEMU_MIGRATION_PARAM_MULTIFD_CHANNELS, + QEMU_MIGRATION_PARAM_MULTIFD_COMPRESSION, QEMU_MIGRATION_PARAM_LAST } qemuMigrationParam; diff --git a/tests/qemucapabilitiesdata/caps_5.0.0.aarch64.xml b/tests/qemucapabilitiesdata/caps_5.0.0.aarch64.xml index 2a7f5074bc..de995c4434 100644 --- a/tests/qemucapabilitiesdata/caps_5.0.0.aarch64.xml +++ b/tests/qemucapabilitiesdata/caps_5.0.0.aarch64.xml @@ -175,6 +175,7 @@ <flag name='memory-backend-file.prealloc-threads'/> <flag name='virtio-iommu-pci'/> <flag name='migrate-multifd'/> + <flag name='migration-param.multifd-compression'/> <version>5000000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>61700241</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_5.0.0.ppc64.xml b/tests/qemucapabilitiesdata/caps_5.0.0.ppc64.xml index 3cdb173b01..a594095926 100644 --- a/tests/qemucapabilitiesdata/caps_5.0.0.ppc64.xml +++ b/tests/qemucapabilitiesdata/caps_5.0.0.ppc64.xml @@ -181,6 +181,7 @@ <flag name='memory-backend-file.prealloc-threads'/> <flag name='virtio-iommu-pci'/> <flag name='migrate-multifd'/> + <flag name='migration-param.multifd-compression'/> <version>5000000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>42900241</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_5.0.0.riscv64.xml b/tests/qemucapabilitiesdata/caps_5.0.0.riscv64.xml index 933b4870cd..c350c1b9a9 100644 --- a/tests/qemucapabilitiesdata/caps_5.0.0.riscv64.xml +++ b/tests/qemucapabilitiesdata/caps_5.0.0.riscv64.xml @@ -167,6 +167,7 @@ <flag name='memory-backend-file.prealloc-threads'/> <flag name='virtio-iommu-pci'/> <flag name='migrate-multifd'/> + <flag name='migration-param.multifd-compression'/> <version>5000000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>0</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_5.0.0.x86_64.xml b/tests/qemucapabilitiesdata/caps_5.0.0.x86_64.xml index 9bb48e1982..938775e7bb 100644 --- a/tests/qemucapabilitiesdata/caps_5.0.0.x86_64.xml +++ b/tests/qemucapabilitiesdata/caps_5.0.0.x86_64.xml @@ -215,6 +215,7 @@ <flag name='memory-backend-file.prealloc-threads'/> <flag name='virtio-iommu-pci'/> <flag name='migrate-multifd'/> + <flag name='migration-param.multifd-compression'/> <version>5000000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>43100241</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_5.1.0.sparc.xml b/tests/qemucapabilitiesdata/caps_5.1.0.sparc.xml index f01cd25278..cef1e418ba 100644 --- a/tests/qemucapabilitiesdata/caps_5.1.0.sparc.xml +++ b/tests/qemucapabilitiesdata/caps_5.1.0.sparc.xml @@ -87,6 +87,7 @@ <flag name='query-display-options'/> <flag 
name='memory-backend-file.prealloc-threads'/> <flag name='migrate-multifd'/> + <flag name='migration-param.multifd-compression'/> <version>5001000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>0</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_5.1.0.x86_64.xml b/tests/qemucapabilitiesdata/caps_5.1.0.x86_64.xml index f4446674a4..c78c078b24 100644 --- a/tests/qemucapabilitiesdata/caps_5.1.0.x86_64.xml +++ b/tests/qemucapabilitiesdata/caps_5.1.0.x86_64.xml @@ -219,6 +219,7 @@ <flag name='virtio-iommu-pci'/> <flag name='virtio-net.rss'/> <flag name='migrate-multifd'/> + <flag name='migration-param.multifd-compression'/> <version>5001000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>43100242</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_5.2.0.aarch64.xml b/tests/qemucapabilitiesdata/caps_5.2.0.aarch64.xml index f3e3640657..9426ef386c 100644 --- a/tests/qemucapabilitiesdata/caps_5.2.0.aarch64.xml +++ b/tests/qemucapabilitiesdata/caps_5.2.0.aarch64.xml @@ -182,6 +182,7 @@ <flag name='virtio-iommu-pci'/> <flag name='virtio-net.rss'/> <flag name='migrate-multifd'/> + <flag name='migration-param.multifd-compression'/> <version>5002000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>61700243</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_5.2.0.ppc64.xml b/tests/qemucapabilitiesdata/caps_5.2.0.ppc64.xml index 691ffd2caa..7e0b8167fa 100644 --- a/tests/qemucapabilitiesdata/caps_5.2.0.ppc64.xml +++ b/tests/qemucapabilitiesdata/caps_5.2.0.ppc64.xml @@ -186,6 +186,7 @@ <flag name='virtio-iommu-pci'/> <flag name='virtio-net.rss'/> <flag name='migrate-multifd'/> + <flag name='migration-param.multifd-compression'/> <version>5002000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>42900243</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_5.2.0.riscv64.xml b/tests/qemucapabilitiesdata/caps_5.2.0.riscv64.xml index 2079257759..59742d607f 100644 --- a/tests/qemucapabilitiesdata/caps_5.2.0.riscv64.xml +++ b/tests/qemucapabilitiesdata/caps_5.2.0.riscv64.xml @@ -172,6 +172,7 @@ <flag name='virtio-iommu-pci'/> <flag name='virtio-net.rss'/> <flag name='migrate-multifd'/> + <flag name='migration-param.multifd-compression'/> <version>5002000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>0</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_5.2.0.s390x.xml b/tests/qemucapabilitiesdata/caps_5.2.0.s390x.xml index d800e35096..bc7e701f1a 100644 --- a/tests/qemucapabilitiesdata/caps_5.2.0.s390x.xml +++ b/tests/qemucapabilitiesdata/caps_5.2.0.s390x.xml @@ -140,6 +140,7 @@ <flag name='virtio-iommu-pci'/> <flag name='virtio-net.rss'/> <flag name='migrate-multifd'/> + <flag name='migration-param.multifd-compression'/> <version>5002000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>39100243</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_5.2.0.x86_64.xml b/tests/qemucapabilitiesdata/caps_5.2.0.x86_64.xml index f189e299c3..b1aff5f7b9 100644 --- a/tests/qemucapabilitiesdata/caps_5.2.0.x86_64.xml +++ b/tests/qemucapabilitiesdata/caps_5.2.0.x86_64.xml @@ -223,6 +223,7 @@ <flag name='virtio-iommu-pci'/> <flag name='virtio-net.rss'/> <flag name='migrate-multifd'/> + <flag name='migration-param.multifd-compression'/> <version>5002000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>43100243</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_6.0.0.aarch64.xml b/tests/qemucapabilitiesdata/caps_6.0.0.aarch64.xml index e86a2cf5d6..c3dae8c8db 100644 --- 
a/tests/qemucapabilitiesdata/caps_6.0.0.aarch64.xml +++ b/tests/qemucapabilitiesdata/caps_6.0.0.aarch64.xml @@ -190,6 +190,7 @@ <flag name='virtio-iommu-pci'/> <flag name='virtio-net.rss'/> <flag name='migrate-multifd'/> + <flag name='migration-param.multifd-compression'/> <version>6000000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>61700242</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_6.0.0.s390x.xml b/tests/qemucapabilitiesdata/caps_6.0.0.s390x.xml index bbc5825def..654eb92d4f 100644 --- a/tests/qemucapabilitiesdata/caps_6.0.0.s390x.xml +++ b/tests/qemucapabilitiesdata/caps_6.0.0.s390x.xml @@ -148,6 +148,7 @@ <flag name='virtio-iommu-pci'/> <flag name='virtio-net.rss'/> <flag name='migrate-multifd'/> + <flag name='migration-param.multifd-compression'/> <version>6000000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>39100242</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_6.0.0.x86_64.xml b/tests/qemucapabilitiesdata/caps_6.0.0.x86_64.xml index 496dd5564c..2350d61f93 100644 --- a/tests/qemucapabilitiesdata/caps_6.0.0.x86_64.xml +++ b/tests/qemucapabilitiesdata/caps_6.0.0.x86_64.xml @@ -232,6 +232,7 @@ <flag name='virtio-iommu-pci'/> <flag name='virtio-net.rss'/> <flag name='migrate-multifd'/> + <flag name='migration-param.multifd-compression'/> <version>6000000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>43100242</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_6.1.0.x86_64.xml b/tests/qemucapabilitiesdata/caps_6.1.0.x86_64.xml index fab5e40e35..2e682ae4ba 100644 --- a/tests/qemucapabilitiesdata/caps_6.1.0.x86_64.xml +++ b/tests/qemucapabilitiesdata/caps_6.1.0.x86_64.xml @@ -237,6 +237,7 @@ <flag name='virtio-net.rss'/> <flag name='chardev.qemu-vdagent'/> <flag name='migrate-multifd'/> + <flag name='migration-param.multifd-compression'/> <version>6001000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>43100243</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_6.2.0.aarch64.xml b/tests/qemucapabilitiesdata/caps_6.2.0.aarch64.xml index a83b8c8d77..ee90c76c30 100644 --- a/tests/qemucapabilitiesdata/caps_6.2.0.aarch64.xml +++ b/tests/qemucapabilitiesdata/caps_6.2.0.aarch64.xml @@ -202,6 +202,7 @@ <flag name='virtio-net.rss'/> <flag name='chardev.qemu-vdagent'/> <flag name='migrate-multifd'/> + <flag name='migration-param.multifd-compression'/> <version>6001050</version> <kvmVersion>0</kvmVersion> <microcodeVersion>61700244</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_6.2.0.ppc64.xml b/tests/qemucapabilitiesdata/caps_6.2.0.ppc64.xml index b8f4aa2744..59e63acca0 100644 --- a/tests/qemucapabilitiesdata/caps_6.2.0.ppc64.xml +++ b/tests/qemucapabilitiesdata/caps_6.2.0.ppc64.xml @@ -197,6 +197,7 @@ <flag name='virtio-iommu-pci'/> <flag name='virtio-net.rss'/> <flag name='migrate-multifd'/> + <flag name='migration-param.multifd-compression'/> <version>6002000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>42900244</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_6.2.0.x86_64.xml b/tests/qemucapabilitiesdata/caps_6.2.0.x86_64.xml index 1293cd1bf1..8597367f20 100644 --- a/tests/qemucapabilitiesdata/caps_6.2.0.x86_64.xml +++ b/tests/qemucapabilitiesdata/caps_6.2.0.x86_64.xml @@ -239,6 +239,7 @@ <flag name='virtio-net.rss'/> <flag name='chardev.qemu-vdagent'/> <flag name='migrate-multifd'/> + <flag name='migration-param.multifd-compression'/> <version>6002000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>43100244</microcodeVersion> diff --git 
a/tests/qemucapabilitiesdata/caps_7.0.0.aarch64.xml b/tests/qemucapabilitiesdata/caps_7.0.0.aarch64.xml index 16626a1342..b15510d7d1 100644 --- a/tests/qemucapabilitiesdata/caps_7.0.0.aarch64.xml +++ b/tests/qemucapabilitiesdata/caps_7.0.0.aarch64.xml @@ -210,6 +210,7 @@ <flag name='virtio-net.rss'/> <flag name='chardev.qemu-vdagent'/> <flag name='migrate-multifd'/> + <flag name='migration-param.multifd-compression'/> <version>6002092</version> <kvmVersion>0</kvmVersion> <microcodeVersion>61700243</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_7.0.0.ppc64.xml b/tests/qemucapabilitiesdata/caps_7.0.0.ppc64.xml index 9274b6f1d5..5a8206d19d 100644 --- a/tests/qemucapabilitiesdata/caps_7.0.0.ppc64.xml +++ b/tests/qemucapabilitiesdata/caps_7.0.0.ppc64.xml @@ -214,6 +214,7 @@ <flag name='virtio-net.rss'/> <flag name='chardev.qemu-vdagent'/> <flag name='migrate-multifd'/> + <flag name='migration-param.multifd-compression'/> <version>7000000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>42900243</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_7.0.0.x86_64.xml b/tests/qemucapabilitiesdata/caps_7.0.0.x86_64.xml index 5f1f837473..c9718c6e36 100644 --- a/tests/qemucapabilitiesdata/caps_7.0.0.x86_64.xml +++ b/tests/qemucapabilitiesdata/caps_7.0.0.x86_64.xml @@ -243,6 +243,7 @@ <flag name='virtio-net.rss'/> <flag name='chardev.qemu-vdagent'/> <flag name='migrate-multifd'/> + <flag name='migration-param.multifd-compression'/> <version>7000000</version> <kvmVersion>0</kvmVersion> <microcodeVersion>43100243</microcodeVersion> diff --git a/tests/qemucapabilitiesdata/caps_7.1.0.x86_64.xml b/tests/qemucapabilitiesdata/caps_7.1.0.x86_64.xml index 19684ccaf1..0418ff2980 100644 --- a/tests/qemucapabilitiesdata/caps_7.1.0.x86_64.xml +++ b/tests/qemucapabilitiesdata/caps_7.1.0.x86_64.xml @@ -243,6 +243,7 @@ <flag name='virtio-net.rss'/> <flag name='chardev.qemu-vdagent'/> <flag name='migrate-multifd'/> + <flag name='migration-param.multifd-compression'/> <version>7000050</version> <kvmVersion>0</kvmVersion> <microcodeVersion>43100244</microcodeVersion> -- 2.35.3

Signed-off-by: Claudio Fontana <cfontana@suse.de> --- include/libvirt/libvirt-domain.h | 11 +++++++++++ 1 file changed, 11 insertions(+) diff --git a/include/libvirt/libvirt-domain.h b/include/libvirt/libvirt-domain.h index 766e4d116e..c79a3c85e9 100644 --- a/include/libvirt/libvirt-domain.h +++ b/include/libvirt/libvirt-domain.h @@ -1618,6 +1618,17 @@ int virDomainRestoreParams (virConnectPtr conn, */ # define VIR_DOMAIN_SAVE_PARAM_PARALLEL_CONNECTIONS "parallel.connections" +/** + * VIR_DOMAIN_SAVE_PARAM_PARALLEL_COMPRESSION: + * + * this optional parameter is used in conjunction with the flag + * VIR_DOMAIN_SAVE_PARALLEL during save to ask the hypervisor for + * compressed channels to be used using this algorithm. + * + * Since: 8.4.0 + */ +# define VIR_DOMAIN_SAVE_PARAM_PARALLEL_COMPRESSION "parallel.compression" + /* See below for virDomainSaveImageXMLFlags */ char * virDomainSaveImageGetXMLDesc (virConnectPtr conn, const char *file, -- 2.35.3
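
To show how a management application would consume the new parameter together with the other parallel save parameters, here is a minimal sketch against the public API. The domain name, file path, channel count and compression algorithm are arbitrary examples (the algorithm must be one accepted by QEMU's multifd-compression parameter, e.g. "zstd" if the QEMU binary supports it); VIR_DOMAIN_SAVE_PARAM_FILE comes from the already-merged base save-params API; error handling is reduced to the bare minimum; and the program obviously needs a libvirt build containing this series.

    #include <libvirt/libvirt.h>

    int main(void)
    {
        virConnectPtr conn = virConnectOpen("qemu:///system");
        virDomainPtr dom = conn ? virDomainLookupByName(conn, "guest1") : NULL;
        virTypedParameterPtr params = NULL;
        int nparams = 0, maxparams = 0;
        int ret = -1;

        if (!dom)
            goto out;

        /* main save file; additional channels go to <file>.1, <file>.2, ... */
        if (virTypedParamsAddString(&params, &nparams, &maxparams,
                                    VIR_DOMAIN_SAVE_PARAM_FILE,
                                    "/var/lib/libvirt/images/guest1.sav") < 0)
            goto out;
        if (virTypedParamsAddInt(&params, &nparams, &maxparams,
                                 VIR_DOMAIN_SAVE_PARAM_PARALLEL_CONNECTIONS, 4) < 0)
            goto out;
        if (virTypedParamsAddString(&params, &nparams, &maxparams,
                                    VIR_DOMAIN_SAVE_PARAM_PARALLEL_COMPRESSION,
                                    "zstd") < 0)
            goto out;

        ret = virDomainSaveParams(dom, params, nparams, VIR_DOMAIN_SAVE_PARALLEL);

     out:
        virTypedParamsFree(params, nparams);
        if (dom)
            virDomainFree(dom);
        if (conn)
            virConnectClose(conn);
        return ret == 0 ? 0 : 1;
    }

Build with the usual pkg-config flags, e.g. cc example.c $(pkg-config --cflags --libs libvirt).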

Signed-off-by: Claudio Fontana <cfontana@suse.de> --- src/qemu/qemu_driver.c | 11 ++++++----- src/qemu/qemu_saveimage.c | 1 + src/qemu/qemu_saveimage.h | 1 + src/qemu/qemu_snapshot.c | 2 +- 4 files changed, 9 insertions(+), 6 deletions(-) diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index d1dbf8f7ab..6ea23ee187 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -2641,7 +2641,8 @@ static int qemuDomainSaveInternal(virQEMUDriver *driver, virDomainObj *vm, const char *path, int compressed, virCommand *compressor, - const char *xmlin, int nconn, unsigned int flags) + const char *xmlin, int nconn, const char *pcomp, + unsigned int flags) { g_autofree char *xml = NULL; bool was_running = false; @@ -2722,7 +2723,7 @@ qemuDomainSaveInternal(virQEMUDriver *driver, xml = NULL; ret = qemuSaveImageCreate(driver, vm, path, data, compressor, - nconn, flags, VIR_ASYNC_JOB_SAVE); + nconn, pcomp, flags, VIR_ASYNC_JOB_SAVE); if (ret < 0) goto endjob; @@ -2800,7 +2801,7 @@ qemuDomainManagedSaveHelper(virQEMUDriver *driver, VIR_INFO("Saving state of domain '%s' to '%s'", vm->def->name, path); if (qemuDomainSaveInternal(driver, vm, path, compressed, - compressor, dxml, -1, flags) < 0) + compressor, dxml, -1, NULL, flags) < 0) return -1; vm->hasManagedSave = true; @@ -2839,7 +2840,7 @@ qemuDomainSaveFlags(virDomainPtr dom, const char *path, const char *dxml, goto cleanup; ret = qemuDomainSaveInternal(driver, vm, path, compressed, - compressor, dxml, -1, flags); + compressor, dxml, -1, NULL, flags); cleanup: virDomainObjEndAPI(&vm); @@ -2913,7 +2914,7 @@ qemuDomainSaveParams(virDomainPtr dom, goto cleanup; ret = qemuDomainSaveInternal(driver, vm, to, compressed, - compressor, dxml, nconn, flags); + compressor, dxml, nconn, NULL, flags); cleanup: virDomainObjEndAPI(&vm); diff --git a/src/qemu/qemu_saveimage.c b/src/qemu/qemu_saveimage.c index 7becaa5c25..784bd7e647 100644 --- a/src/qemu/qemu_saveimage.c +++ b/src/qemu/qemu_saveimage.c @@ -598,6 +598,7 @@ qemuSaveImageCreate(virQEMUDriver *driver, virQEMUSaveData *data, virCommand *compressor, int nconn, + const char *pcomp G_GNUC_UNUSED, unsigned int flags, virDomainAsyncJob asyncJob) { diff --git a/src/qemu/qemu_saveimage.h b/src/qemu/qemu_saveimage.h index 99cc9a81a9..aada193f34 100644 --- a/src/qemu/qemu_saveimage.h +++ b/src/qemu/qemu_saveimage.h @@ -132,6 +132,7 @@ qemuSaveImageCreate(virQEMUDriver *driver, virQEMUSaveData *data, virCommand *compressor, int nconn, + const char *pcomp, unsigned int flags, virDomainAsyncJob asyncJob); diff --git a/src/qemu/qemu_snapshot.c b/src/qemu/qemu_snapshot.c index 626a5a14b9..daa72983b3 100644 --- a/src/qemu/qemu_snapshot.c +++ b/src/qemu/qemu_snapshot.c @@ -1457,7 +1457,7 @@ qemuSnapshotCreateActiveExternal(virQEMUDriver *driver, memory_existing = virFileExists(snapdef->memorysnapshotfile); if ((ret = qemuSaveImageCreate(driver, vm, snapdef->memorysnapshotfile, - data, compressor, -1, 0, + data, compressor, -1, NULL, 0, VIR_ASYNC_JOB_SNAPSHOT)) < 0) goto cleanup; -- 2.35.3

Signed-off-by: Claudio Fontana <cfontana@suse.de> --- src/qemu/qemu_driver.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c index 6ea23ee187..faaa6d4243 100644 --- a/src/qemu/qemu_driver.c +++ b/src/qemu/qemu_driver.c @@ -2865,6 +2865,7 @@ qemuDomainSaveParams(virDomainPtr dom, g_autoptr(virCommand) compressor = NULL; const char *to = NULL; const char *dxml = NULL; + const char *pcomp = NULL; int compressed; int ret = -1; int nconn = 2; @@ -2881,6 +2882,8 @@ qemuDomainSaveParams(virDomainPtr dom, VIR_TYPED_PARAM_STRING, VIR_DOMAIN_SAVE_PARAM_PARALLEL_CONNECTIONS, VIR_TYPED_PARAM_INT, + VIR_DOMAIN_SAVE_PARAM_PARALLEL_COMPRESSION, + VIR_TYPED_PARAM_STRING, NULL) < 0) return -1; @@ -2892,6 +2895,8 @@ qemuDomainSaveParams(virDomainPtr dom, return -1; if (virTypedParamsGetInt(params, nparams, VIR_DOMAIN_SAVE_PARAM_PARALLEL_CONNECTIONS, &nconn) < 0) return -1; + if (virTypedParamsGetString(params, nparams, VIR_DOMAIN_SAVE_PARAM_PARALLEL_COMPRESSION, &pcomp) < 0) + return -1; if (!(vm = qemuDomainObjFromDomain(dom))) goto cleanup; @@ -2914,7 +2919,7 @@ qemuDomainSaveParams(virDomainPtr dom, goto cleanup; ret = qemuDomainSaveInternal(driver, vm, to, compressed, - compressor, dxml, nconn, NULL, flags); + compressor, dxml, nconn, pcomp, flags); cleanup: virDomainObjEndAPI(&vm); -- 2.35.3

change from static to external linkage, and move the function close to the other similar ones, near qemuMigrationParamsSetULL. Signed-off-by: Claudio Fontana <cfontana@suse.de> --- src/qemu/qemu_migration_params.c | 47 +++++++++++++++----------------- src/qemu/qemu_migration_params.h | 5 ++++ 2 files changed, 27 insertions(+), 25 deletions(-) diff --git a/src/qemu/qemu_migration_params.c b/src/qemu/qemu_migration_params.c index e107681ba7..ee2cdf9461 100644 --- a/src/qemu/qemu_migration_params.c +++ b/src/qemu/qemu_migration_params.c @@ -900,31 +900,6 @@ qemuMigrationParamsApply(virQEMUDriver *driver, } -/** - * qemuMigrationParamsSetString: - * @migrParams: migration parameter object - * @param: parameter to set - * @value: new value - * - * Enables and sets the migration parameter @param in @migrParams. Returns 0 on - * success and -1 on error. Libvirt error is reported. - */ -static int -qemuMigrationParamsSetString(qemuMigrationParams *migParams, - qemuMigrationParam param, - const char *value) -{ - if (qemuMigrationParamsCheckType(param, QEMU_MIGRATION_PARAM_TYPE_STRING) < 0) - return -1; - - migParams->params[param].value.s = g_strdup(value); - - migParams->params[param].set = true; - - return 0; -} - - /* qemuMigrationParamsEnableTLS * @driver: pointer to qemu driver * @vm: domain object @@ -1146,6 +1121,28 @@ qemuMigrationParamsSetULL(qemuMigrationParams *migParams, return 0; } +/** + * qemuMigrationParamsSetString: + * @migrParams: migration parameter object + * @param: parameter to set + * @value: new value + * + * Enables and sets the migration parameter @param in @migrParams. Returns 0 on + * success and -1 on error. Libvirt error is reported. + */ +int +qemuMigrationParamsSetString(qemuMigrationParams *migParams, + qemuMigrationParam param, + const char *value) +{ + if (qemuMigrationParamsCheckType(param, QEMU_MIGRATION_PARAM_TYPE_STRING) < 0) + return -1; + + migParams->params[param].value.s = g_strdup(value); + migParams->params[param].set = true; + return 0; +} + /** * Returns -1 on error, diff --git a/src/qemu/qemu_migration_params.h b/src/qemu/qemu_migration_params.h index f2e0a0d9f2..647b8602dd 100644 --- a/src/qemu/qemu_migration_params.h +++ b/src/qemu/qemu_migration_params.h @@ -138,6 +138,11 @@ qemuMigrationParamsSetULL(qemuMigrationParams *migParams, qemuMigrationParam param, unsigned long long value); +int +qemuMigrationParamsSetString(qemuMigrationParams *migParams, + qemuMigrationParam param, + const char *value); + int qemuMigrationParamsGetULL(qemuMigrationParams *migParams, qemuMigrationParam param, -- 2.35.3

Signed-off-by: Claudio Fontana <cfontana@suse.de> --- src/qemu/qemu_migration.c | 17 +++++++++++++---- src/qemu/qemu_migration.h | 2 +- src/qemu/qemu_saveimage.c | 19 ++++++++++++++----- 3 files changed, 28 insertions(+), 10 deletions(-) diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c index c4e1837419..f16b4976bc 100644 --- a/src/qemu/qemu_migration.c +++ b/src/qemu/qemu_migration.c @@ -5884,7 +5884,7 @@ qemuMigrationSrcToFileAux(virQEMUDriver *driver, virDomainObj *vm, virCommand *compressor, virDomainAsyncJob asyncJob, const char *sun_path, - int nchannels) + int nchannels, const char *pcomp) { qemuDomainObjPrivate *priv = vm->privateData; bool bwParam = virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_MIGRATION_PARAM_BANDWIDTH); @@ -5931,6 +5931,15 @@ qemuMigrationSrcToFileAux(virQEMUDriver *driver, virDomainObj *vm, QEMU_MIGRATION_PARAM_MULTIFD_CHANNELS, nchannels) < 0) return -1; + if (virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_MIGRATION_PARAM_MULTIFD_COMPRESSION)) { + if (qemuMigrationParamsSetString(migParams, + QEMU_MIGRATION_PARAM_MULTIFD_COMPRESSION, pcomp) < 0) + return -1; + } else { + virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s", + _("QEMU does not seem to support multifd compression")); + return -1; + } } if (needParams && qemuMigrationParamsApply(driver, vm, asyncJob, migParams) < 0) @@ -6056,17 +6065,17 @@ qemuMigrationSrcToFile(virQEMUDriver *driver, virDomainObj *vm, virDomainAsyncJob asyncJob) { return qemuMigrationSrcToFileAux(driver, vm, fd, compressor, - asyncJob, NULL, -1); + asyncJob, NULL, -1, NULL); } int qemuMigrationSrcToFilesMultiFd(virQEMUDriver *driver, virDomainObj *vm, virDomainAsyncJob asyncJob, const char *sun_path, - int nchannels) + int nchannels, const char *pcomp) { return qemuMigrationSrcToFileAux(driver, vm, -1, NULL, - asyncJob, sun_path, nchannels); + asyncJob, sun_path, nchannels, pcomp); } int diff --git a/src/qemu/qemu_migration.h b/src/qemu/qemu_migration.h index 38f4877cf0..d6185770b2 100644 --- a/src/qemu/qemu_migration.h +++ b/src/qemu/qemu_migration.h @@ -223,7 +223,7 @@ int qemuMigrationSrcToFilesMultiFd(virQEMUDriver *driver, virDomainObj *vm, virDomainAsyncJob asyncJob, const char *sun_path, - int nchannels) + int nchannels, const char *pcomp) ATTRIBUTE_NONNULL(1) ATTRIBUTE_NONNULL(2) G_GNUC_WARN_UNUSED_RESULT; int diff --git a/src/qemu/qemu_saveimage.c b/src/qemu/qemu_saveimage.c index 784bd7e647..2598927eeb 100644 --- a/src/qemu/qemu_saveimage.c +++ b/src/qemu/qemu_saveimage.c @@ -598,13 +598,14 @@ qemuSaveImageCreate(virQEMUDriver *driver, virQEMUSaveData *data, virCommand *compressor, int nconn, - const char *pcomp G_GNUC_UNUSED, + const char *pcomp, unsigned int flags, virDomainAsyncJob asyncJob) { g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver); virQEMUSaveFd saveFd = QEMU_SAVEFD_INVALID; virQEMUSaveFd *multiFd = NULL; + virQEMUSaveMultiFdComp multiComp = QEMU_SAVE_MULTIFD_COMP_NONE; unsigned int oflags = O_WRONLY | O_TRUNC | O_CREAT; int ret = -1; @@ -616,15 +617,23 @@ qemuSaveImageCreate(virQEMUDriver *driver, } oflags |= O_DIRECT; } - + if (!pcomp || !pcomp[0]) { + pcomp = qemuSaveMultiFdCompTypeToString(QEMU_SAVE_MULTIFD_COMP_NONE); + } if (virQEMUSaveFdInit(&saveFd, path, 0, oflags, cfg, flags & VIR_DOMAIN_SAVE_PARALLEL) < 0) goto cleanup; if (qemuSecuritySetImageFDLabel(driver->securityManager, vm->def, saveFd.fd) < 0) goto cleanup; - if (nconn > 0) + if (nconn > 0) { data->header.multifd_channels = nconn; - + if ((multiComp = qemuSaveMultiFdCompTypeFromString(pcomp)) < 0) { + 
virReportError(VIR_ERR_OPERATION_FAILED, + _("Invalid %s multifd compression format specified"), pcomp); + goto cleanup; + } + data->header.multifd_comp = multiComp; + } if (virQEMUSaveDataWrite(data, saveFd.fd, saveFd.path) < 0) goto cleanup; @@ -654,7 +663,7 @@ qemuSaveImageCreate(virQEMUDriver *driver, goto cleanup; if (chown(sun_path, cfg->user, cfg->group) < 0) goto cleanup; - if (qemuMigrationSrcToFilesMultiFd(driver, vm, asyncJob, sun_path, nconn) < 0) + if (qemuMigrationSrcToFilesMultiFd(driver, vm, asyncJob, sun_path, nconn, pcomp) < 0) goto cleanup; if (qemuSaveImageCloseMultiFd(multiFd, nconn, vm) < 0) goto cleanup; -- 2.35.3

Signed-off-by: Claudio Fontana <cfontana@suse.de> --- src/qemu/qemu_saveimage.c | 17 +++++++++++++++++ 1 file changed, 17 insertions(+) diff --git a/src/qemu/qemu_saveimage.c b/src/qemu/qemu_saveimage.c index 2598927eeb..4fd2485fc5 100644 --- a/src/qemu/qemu_saveimage.c +++ b/src/qemu/qemu_saveimage.c @@ -756,6 +756,23 @@ int qemuSaveImageLoadMultiFd(virConnectPtr conn, virDomainObj *vm, int oflags, QEMU_MIGRATION_PARAM_MULTIFD_CHANNELS, nchannels) < 0) goto cleanup; + if (virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_MIGRATION_PARAM_MULTIFD_COMPRESSION)) { + const char *pcomp = qemuSaveMultiFdCompTypeToString(data->header.multifd_comp); + if (!pcomp) { + virReportError(VIR_ERR_OPERATION_UNSUPPORTED, + _("libvirt does not support parallel compression type %u"), + data->header.multifd_comp); + goto cleanup; + } + if (qemuMigrationParamsSetString(migParams, + QEMU_MIGRATION_PARAM_MULTIFD_COMPRESSION, + pcomp) < 0) + goto cleanup; + } else if (data->header.multifd_comp != QEMU_SAVE_MULTIFD_COMP_NONE) { + virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s", + _("QEMU does not seem to support multifd compression")); + goto cleanup; + } if (qemuMigrationParamsApply(driver, vm, asyncJob, migParams) < 0) goto cleanup; -- 2.35.3

this completes the save side of the parallel compression support. Signed-off-by: Claudio Fontana <cfontana@suse.de> --- docs/manpages/virsh.rst | 4 ++++ tools/virsh-domain.c | 12 ++++++++++++ 2 files changed, 16 insertions(+) diff --git a/docs/manpages/virsh.rst b/docs/manpages/virsh.rst index dee748d870..5518e78160 100644 --- a/docs/manpages/virsh.rst +++ b/docs/manpages/virsh.rst @@ -3812,6 +3812,7 @@ save save domain state-file [--bypass-cache] [--xml file] [--parallel] [--parallel-connections connections] + [--parallel-compression algo] [{--running | --paused}] [--verbose] Saves a paused or running domain (RAM, but not disk state) to one or more @@ -3852,6 +3853,9 @@ set using *--parallel-connections*, which will save to files named Parallel connections may help in speeding up the save operation. +*--parallel-compression* can be used to ask the hypervisor to provide +compressed channels in the save stream using algorithm ``algo``. + Domain saved state files assume that disk images will be unchanged between the creation and restore point. For a more complete system restore point, where the disk state is saved alongside the memory diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c index 9103d6ed65..254d082e36 100644 --- a/tools/virsh-domain.c +++ b/tools/virsh-domain.c @@ -4182,6 +4182,10 @@ static const vshCmdOptDef opts_save[] = { .type = VSH_OT_INT, .help = N_("number of connections/files for parallel save") }, + {.name = "parallel-compression", + .type = VSH_OT_STRING, + .help = N_("compression algorithm and format for parallel save") + }, {.name = "xml", .type = VSH_OT_STRING, .completer = virshCompletePathLocalExisting, @@ -4211,6 +4215,7 @@ doSave(void *opaque) g_autoptr(virshDomain) dom = NULL; const char *name = NULL; const char *to = NULL; + const char *pcomp = NULL; virTypedParameterPtr params = NULL; int nparams = 0; int maxparams = 0; @@ -4246,6 +4251,13 @@ doSave(void *opaque) VIR_DOMAIN_SAVE_PARAM_PARALLEL_CONNECTIONS, intOpt) < 0) goto out; } + if ((rv = vshCommandOptStringReq(ctl, cmd, "parallel-compression", &pcomp)) < 0) { + goto out; + } else { + if (virTypedParamsAddString(¶ms, &nparams, &maxparams, + VIR_DOMAIN_SAVE_PARAM_PARALLEL_COMPRESSION, pcomp) < 0) + goto out; + } if (vshCommandOptBool(cmd, "running")) flags |= VIR_DOMAIN_SAVE_RUNNING; if (vshCommandOptBool(cmd, "paused")) -- 2.35.3
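
Putting the series together, a compressed parallel save from the command line would look like this (domain name, path and channel count are arbitrary; the algorithm must be one QEMU's multifd-compression parameter accepts, e.g. zstd):

    virsh save guest1 /var/lib/libvirt/images/guest1.sav \
        --parallel --parallel-connections 4 --parallel-compression zstd

On restore no extra compression option is needed: as the earlier restore patch shows, the algorithm is recorded in the save image header and applied automatically.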