Setting 'nodatacow' on VM image when on Btrfs
by Chris Murphy
It's generally recommended by upstream Btrfs development to set
'nodatacow' using 'chattr +C' on the containing directory for VM
images. Setting it on the containing directory means new VM
images inherit the attribute, including images copied to this
location.
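For example (the path below is just the usual system image directory
and is only illustrative; the attribute only takes effect for files
created or copied into the directory afterwards):
$ sudo chattr +C /var/lib/libvirt/images    # new files here get No_COW
$ lsattr -d /var/lib/libvirt/images         # the 'C' attribute should be listed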
But 'nodatacow' also implies no compression and no data checksums.
Could there be use cases in which it's preferable to keep compression
and checksumming on VM images? In that case the fragmentation problem
can be mitigated by periodic defragmentation.
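As a rough sketch of what a periodic job could run (path and target
extent size are only examples):
$ sudo btrfs filesystem defragment -r -t 32M /var/lib/libvirt/images
With compression enabled, -czstd (or another compression type) keeps
the rewritten extents compressed; note that defragmentation can also
unshare extents from existing snapshots and so cost some space.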
Is this something libvirt can and should do? Possibly by default with
a way for users to opt out?
Another option is to have the installer set 'chattr +C' in the short
term. But this doesn't help GNOME Boxes, since the user home isn't
created at installation time.
Three advantages of libvirt awareness of Btrfs:
(a) GNOME Boxes, Cockpit, and other users of libvirt can then use this
same mechanism and apply it to their VM image locations.
(b) Create the containing directory as a subvolume instead of a plain
directory (a minimal example follows this list)
(1) btrfs snapshots are not recursive, therefore making this a
subvolume would prevent it from being snapshotted, and thus from
(temporarily) resuming datacow.
(2) in heavy workloads there can be lock contention on file
btrees; a subvolume is a dedicated file btree, so this would reduce
the tendency for lock contention in heavy workloads (this is not
likely a desktop/laptop use case)
(c) virtiofs might be able to take advantage of btrfs subvolumes.
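A minimal sketch of (b), assuming the default system image location
and that the directory does not exist yet:
$ sudo btrfs subvolume create /var/lib/libvirt/images
$ sudo chattr +C /var/lib/libvirt/images    # optionally combine with nodatacow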
(This is a partial rewrite of
https://bugzilla.redhat.com/show_bug.cgi?id=1855000)
Thanks,
--
Chris Murphy
[libvirt PATCH] network: split out networkSetIPv6Sysctl
by Ján Tomko
Refactor networkSetIPv6Sysctls to remove repetition and the reuse
of the 'field' variable.
Signed-off-by: Ján Tomko <jtomko(a)redhat.com>
---
src/network/bridge_driver.c | 71 ++++++++++++++++++-------------------
1 file changed, 35 insertions(+), 36 deletions(-)
diff --git a/src/network/bridge_driver.c b/src/network/bridge_driver.c
index 713763130b..d817b5cbd6 100644
--- a/src/network/bridge_driver.c
+++ b/src/network/bridge_driver.c
@@ -2249,37 +2249,51 @@ networkEnableIPForwarding(bool enableIPv4,
}
+static int
+networkSetIPv6Sysctl(const char *bridge,
+ const char *sysctl_field,
+ const char *sysctl_setting,
+ bool ignoreMissing)
+{
+ g_autofree char *field = g_strdup_printf(SYSCTL_PATH "/net/ipv6/conf/%s/%s",
+ bridge, sysctl_field);
+
+ if (ignoreMissing && access(field, W_OK) < 0 && errno == ENOENT)
+ return -2;
+
+ if (virFileWriteStr(field, sysctl_setting, 0) < 0) {
+ virReportSystemError(errno,
+ _("cannot write to '%s' on bridge '%s'"),
+ field, bridge);
+ return -1;
+ }
+
+ return 0;
+}
+
+
static int
networkSetIPv6Sysctls(virNetworkObjPtr obj)
{
virNetworkDefPtr def = virNetworkObjGetDef(obj);
- char *field = NULL;
- int ret = -1;
bool enableIPv6 = !!virNetworkDefGetIPByIndex(def, AF_INET6, 0);
+ int rc;
/* set disable_ipv6 if there are no ipv6 addresses defined for the
* network. But also unset it if there *are* ipv6 addresses, as we
* can't be sure of its default value.
*/
- field = g_strdup_printf(SYSCTL_PATH "/net/ipv6/conf/%s/disable_ipv6",
- def->bridge);
-
- if (access(field, W_OK) < 0 && errno == ENOENT) {
+ rc = networkSetIPv6Sysctl(def->bridge, "disable_ipv6",
+ enableIPv6 ? "0" : "1", true);
+ if (rc == -2) {
if (!enableIPv6)
VIR_DEBUG("ipv6 appears to already be disabled on %s",
def->bridge);
- ret = 0;
- goto cleanup;
+ return 0;
+ } else if (rc < 0) {
+ return -1;
}
- if (virFileWriteStr(field, enableIPv6 ? "0" : "1", 0) < 0) {
- virReportSystemError(errno,
- _("cannot write to %s to enable/disable IPv6 "
- "on bridge %s"), field, def->bridge);
- goto cleanup;
- }
- VIR_FREE(field);
-
/* The rest of the ipv6 sysctl tunables should always be set the
* same, whether or not we're using ipv6 on this bridge.
*/
@@ -2287,31 +2301,16 @@ networkSetIPv6Sysctls(virNetworkObjPtr obj)
/* Prevent guests from hijacking the host network by sending out
* their own router advertisements.
*/
- field = g_strdup_printf(SYSCTL_PATH "/net/ipv6/conf/%s/accept_ra",
- def->bridge);
-
- if (virFileWriteStr(field, "0", 0) < 0) {
- virReportSystemError(errno,
- _("cannot disable %s"), field);
- goto cleanup;
- }
- VIR_FREE(field);
+ if (networkSetIPv6Sysctl(def->bridge, "accept_ra", "0", false) < 0)
+ return -1;
/* All interfaces used as a gateway (which is what this is, by
* definition), must always have autoconf=0.
*/
- field = g_strdup_printf(SYSCTL_PATH "/net/ipv6/conf/%s/autoconf", def->bridge);
+ if (networkSetIPv6Sysctl(def->bridge, "autoconf", "0", false) < 0)
+ return -1;
- if (virFileWriteStr(field, "0", 0) < 0) {
- virReportSystemError(errno,
- _("cannot disable %s"), field);
- goto cleanup;
- }
-
- ret = 0;
- cleanup:
- VIR_FREE(field);
- return ret;
+ return 0;
}
--
2.25.4
[GSoC][PATCH v5] qemu_domainjob: introduce `privateData` for `qemuDomainJob`
by Prathamesh Chavan
To remove the dependency of `qemuDomainJob` on job-specific
parameters, a `privateData` pointer is introduced.
To handle it, a structure of callback functions is
also introduced.
Signed-off-by: Prathamesh Chavan <pc44800(a)gmail.com>
---
Previous version of this patch can be found here[1].
This patch mainly incorporates the changes suggested by Martin and
Michal in the review of the previous version.
Currently, the structure `qemuDomainJobInfo` is kept untouched.
src/qemu/qemu_domain.c | 70 ++++++++++++++++++++++++++++++-
src/qemu/qemu_domain.h | 10 +++++
src/qemu/qemu_domainjob.c | 71 ++++++++++++++++++--------------
src/qemu/qemu_domainjob.h | 38 ++++++++++++-----
src/qemu/qemu_driver.c | 3 +-
src/qemu/qemu_migration.c | 28 +++++++++----
src/qemu/qemu_migration_params.c | 9 ++--
src/qemu/qemu_process.c | 15 +++++--
8 files changed, 185 insertions(+), 59 deletions(-)
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 10d2033db1..b0ed562d25 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -88,6 +88,74 @@ VIR_ENUM_IMPL(qemuDomainNamespace,
"mount",
);
+
+static void *
+qemuJobAllocPrivate(void)
+{
+ return g_new0(qemuDomainJobPrivate, 1);
+}
+
+
+static void
+qemuJobFreePrivate(void *opaque)
+{
+ qemuDomainJobPrivatePtr priv = opaque;
+
+ if (!priv)
+ return;
+
+ qemuMigrationParamsFree(priv->migParams);
+ VIR_FREE(priv);
+}
+
+
+static void
+qemuJobResetPrivate(void *opaque)
+{
+ qemuDomainJobPrivatePtr priv = opaque;
+
+ priv->spiceMigration = false;
+ priv->spiceMigrated = false;
+ priv->dumpCompleted = false;
+ qemuMigrationParamsFree(priv->migParams);
+ priv->migParams = NULL;
+}
+
+
+static int
+qemuDomainFormatJobPrivate(virBufferPtr buf,
+ qemuDomainJobObjPtr job)
+{
+ qemuDomainJobPrivatePtr priv = job->privateData;
+
+ if (priv->migParams)
+ qemuMigrationParamsFormat(buf, priv->migParams);
+
+ return 0;
+}
+
+
+static int
+qemuDomainParseJobPrivate(xmlXPathContextPtr ctxt,
+ qemuDomainJobObjPtr job)
+{
+ qemuDomainJobPrivatePtr priv = job->privateData;
+
+ if (qemuMigrationParamsParse(ctxt, &priv->migParams) < 0)
+ return -1;
+
+ return 0;
+}
+
+
+static qemuDomainObjPrivateJobCallbacks qemuPrivateJobCallbacks = {
+ .allocJobPrivate = qemuJobAllocPrivate,
+ .freeJobPrivate = qemuJobFreePrivate,
+ .resetJobPrivate = qemuJobResetPrivate,
+ .formatJob = qemuDomainFormatJobPrivate,
+ .parseJob = qemuDomainParseJobPrivate,
+};
+
/**
* qemuDomainObjFromDomain:
* @domain: Domain pointer that has to be looked up
@@ -1585,7 +1653,7 @@ qemuDomainObjPrivateAlloc(void *opaque)
if (VIR_ALLOC(priv) < 0)
return NULL;
- if (qemuDomainObjInitJob(&priv->job) < 0) {
+ if (qemuDomainObjInitJob(&priv->job, &qemuPrivateJobCallbacks) < 0) {
virReportSystemError(errno, "%s",
_("Unable to init qemu driver mutexes"));
goto error;
diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h
index e524fd0002..bb9b414a46 100644
--- a/src/qemu/qemu_domain.h
+++ b/src/qemu/qemu_domain.h
@@ -492,6 +492,16 @@ struct _qemuDomainXmlNsDef {
char **capsdel;
};
+typedef struct _qemuDomainJobPrivate qemuDomainJobPrivate;
+typedef qemuDomainJobPrivate *qemuDomainJobPrivatePtr;
+struct _qemuDomainJobPrivate {
+ bool spiceMigration; /* we asked for spice migration and we
+ * should wait for it to finish */
+ bool spiceMigrated; /* spice migration completed */
+ bool dumpCompleted; /* dump completed */
+ qemuMigrationParamsPtr migParams;
+};
+
int qemuDomainObjStartWorker(virDomainObjPtr dom);
void qemuDomainObjStopWorker(virDomainObjPtr dom);
diff --git a/src/qemu/qemu_domainjob.c b/src/qemu/qemu_domainjob.c
index d96d5334a3..7cbfe3801e 100644
--- a/src/qemu/qemu_domainjob.c
+++ b/src/qemu/qemu_domainjob.c
@@ -159,24 +159,6 @@ qemuDomainEventEmitJobCompleted(virQEMUDriverPtr driver,
virObjectEventStateQueue(driver->domainEventState, event);
}
-
-int
-qemuDomainObjInitJob(qemuDomainJobObjPtr job)
-{
- memset(job, 0, sizeof(*job));
-
- if (virCondInit(&job->cond) < 0)
- return -1;
-
- if (virCondInit(&job->asyncCond) < 0) {
- virCondDestroy(&job->cond);
- return -1;
- }
-
- return 0;
-}
-
-
static void
qemuDomainObjResetJob(qemuDomainJobObjPtr job)
{
@@ -207,17 +189,13 @@ qemuDomainObjResetAsyncJob(qemuDomainJobObjPtr job)
job->phase = 0;
job->mask = QEMU_JOB_DEFAULT_MASK;
job->abortJob = false;
- job->spiceMigration = false;
- job->spiceMigrated = false;
- job->dumpCompleted = false;
VIR_FREE(job->error);
g_clear_pointer(&job->current, qemuDomainJobInfoFree);
- qemuMigrationParamsFree(job->migParams);
- job->migParams = NULL;
+ job->cb->resetJobPrivate(job->privateData);
job->apiFlags = 0;
}
-void
+int
qemuDomainObjRestoreJob(virDomainObjPtr obj,
qemuDomainJobObjPtr job)
{
@@ -229,11 +207,16 @@ qemuDomainObjRestoreJob(virDomainObjPtr obj,
job->asyncJob = priv->job.asyncJob;
job->asyncOwner = priv->job.asyncOwner;
job->phase = priv->job.phase;
- job->migParams = g_steal_pointer(&priv->job.migParams);
+ job->privateData = g_steal_pointer(&priv->job.privateData);
job->apiFlags = priv->job.apiFlags;
+ if (!(priv->job.privateData = priv->job.cb->allocJobPrivate()))
+ return -1;
+ job->cb = priv->job.cb;
+
qemuDomainObjResetJob(&priv->job);
qemuDomainObjResetAsyncJob(&priv->job);
+ return 0;
}
void
@@ -241,6 +224,7 @@ qemuDomainObjFreeJob(qemuDomainJobObjPtr job)
{
qemuDomainObjResetJob(job);
qemuDomainObjResetAsyncJob(job);
+ job->cb->freeJobPrivate(job->privateData);
g_clear_pointer(&job->current, qemuDomainJobInfoFree);
g_clear_pointer(&job->completed, qemuDomainJobInfoFree);
virCondDestroy(&job->cond);
@@ -1239,14 +1223,14 @@ qemuDomainObjPrivateXMLFormatNBDMigration(virBufferPtr buf,
return 0;
}
-
int
qemuDomainObjPrivateXMLFormatJob(virBufferPtr buf,
virDomainObjPtr vm)
{
+ qemuDomainObjPrivatePtr priv = vm->privateData;
+ qemuDomainJobObjPtr jobObj = &priv->job;
g_auto(virBuffer) attrBuf = VIR_BUFFER_INITIALIZER;
g_auto(virBuffer) childBuf = VIR_BUFFER_INIT_CHILD(buf);
- qemuDomainObjPrivatePtr priv = vm->privateData;
qemuDomainJob job = priv->job.active;
if (!qemuDomainTrackJob(job))
@@ -1273,8 +1257,8 @@ qemuDomainObjPrivateXMLFormatJob(virBufferPtr buf,
qemuDomainObjPrivateXMLFormatNBDMigration(&childBuf, vm) < 0)
return -1;
- if (priv->job.migParams)
- qemuMigrationParamsFormat(&childBuf, priv->job.migParams);
+ if (jobObj->cb->formatJob(&childBuf, jobObj) < 0)
+ return -1;
virXMLFormatElement(buf, "job", &attrBuf, &childBuf);
@@ -1366,12 +1350,12 @@ qemuDomainObjPrivateXMLParseJobNBD(virDomainObjPtr vm,
return 0;
}
-
int
qemuDomainObjPrivateXMLParseJob(virDomainObjPtr vm,
xmlXPathContextPtr ctxt)
{
qemuDomainObjPrivatePtr priv = vm->privateData;
+ qemuDomainJobObjPtr job = &priv->job;
VIR_XPATH_NODE_AUTORESTORE(ctxt);
g_autofree char *tmp = NULL;
@@ -1420,8 +1404,33 @@ qemuDomainObjPrivateXMLParseJob(virDomainObjPtr vm,
if (qemuDomainObjPrivateXMLParseJobNBD(vm, ctxt) < 0)
return -1;
- if (qemuMigrationParamsParse(ctxt, &priv->job.migParams) < 0)
+ if (job->cb->parseJob(ctxt, job) < 0)
+ return -1;
+
+ return 0;
+}
+
+
+int
+qemuDomainObjInitJob(qemuDomainJobObjPtr job,
+ qemuDomainObjPrivateJobCallbacksPtr cb)
+{
+ memset(job, 0, sizeof(*job));
+ job->cb = cb;
+
+ if (!(job->privateData = job->cb->allocJobPrivate()))
+ return -1;
+
+ if (virCondInit(&job->cond) < 0) {
+ job->cb->freeJobPrivate(job->privateData);
return -1;
+ }
+
+ if (virCondInit(&job->asyncCond) < 0) {
+ job->cb->freeJobPrivate(job->privateData);
+ virCondDestroy(&job->cond);
+ return -1;
+ }
return 0;
}
diff --git a/src/qemu/qemu_domainjob.h b/src/qemu/qemu_domainjob.h
index 9d2ee14584..c83e055647 100644
--- a/src/qemu/qemu_domainjob.h
+++ b/src/qemu/qemu_domainjob.h
@@ -19,7 +19,7 @@
#pragma once
#include <glib-object.h>
-#include "qemu_migration_params.h"
+#include "qemu_monitor.h"
#define JOB_MASK(job) (job == 0 ? 0 : 1 << (job - 1))
#define QEMU_JOB_DEFAULT_MASK \
@@ -156,6 +156,25 @@ qemuDomainJobInfoCopy(qemuDomainJobInfoPtr info);
typedef struct _qemuDomainJobObj qemuDomainJobObj;
typedef qemuDomainJobObj *qemuDomainJobObjPtr;
+
+typedef void *(*qemuDomainObjPrivateJobAlloc)(void);
+typedef void (*qemuDomainObjPrivateJobFree)(void *);
+typedef void (*qemuDomainObjPrivateJobReset)(void *);
+typedef int (*qemuDomainObjPrivateJobFormat)(virBufferPtr,
+ qemuDomainJobObjPtr);
+typedef int (*qemuDomainObjPrivateJobParse)(xmlXPathContextPtr,
+ qemuDomainJobObjPtr);
+
+typedef struct _qemuDomainObjPrivateJobCallbacks qemuDomainObjPrivateJobCallbacks;
+typedef qemuDomainObjPrivateJobCallbacks *qemuDomainObjPrivateJobCallbacksPtr;
+struct _qemuDomainObjPrivateJobCallbacks {
+ qemuDomainObjPrivateJobAlloc allocJobPrivate;
+ qemuDomainObjPrivateJobFree freeJobPrivate;
+ qemuDomainObjPrivateJobReset resetJobPrivate;
+ qemuDomainObjPrivateJobFormat formatJob;
+ qemuDomainObjPrivateJobParse parseJob;
+};
+
struct _qemuDomainJobObj {
virCond cond; /* Use to coordinate jobs */
@@ -182,14 +201,11 @@ struct _qemuDomainJobObj {
qemuDomainJobInfoPtr current; /* async job progress data */
qemuDomainJobInfoPtr completed; /* statistics data of a recently completed job */
bool abortJob; /* abort of the job requested */
- bool spiceMigration; /* we asked for spice migration and we
- * should wait for it to finish */
- bool spiceMigrated; /* spice migration completed */
char *error; /* job event completion error */
- bool dumpCompleted; /* dump completed */
-
- qemuMigrationParamsPtr migParams;
unsigned long apiFlags; /* flags passed to the API which started the async job */
+
+ void *privateData; /* job specific collection of data */
+ qemuDomainObjPrivateJobCallbacksPtr cb;
};
const char *qemuDomainAsyncJobPhaseToString(qemuDomainAsyncJob job,
@@ -234,8 +250,8 @@ void qemuDomainObjSetJobPhase(virQEMUDriverPtr driver,
int phase);
void qemuDomainObjSetAsyncJobMask(virDomainObjPtr obj,
unsigned long long allowedJobs);
-void qemuDomainObjRestoreJob(virDomainObjPtr obj,
- qemuDomainJobObjPtr job);
+int qemuDomainObjRestoreJob(virDomainObjPtr obj,
+ qemuDomainJobObjPtr job);
void qemuDomainObjDiscardAsyncJob(virQEMUDriverPtr driver,
virDomainObjPtr obj);
void qemuDomainObjReleaseAsyncJob(virDomainObjPtr obj);
@@ -264,7 +280,9 @@ bool qemuDomainTrackJob(qemuDomainJob job);
void qemuDomainObjFreeJob(qemuDomainJobObjPtr job);
-int qemuDomainObjInitJob(qemuDomainJobObjPtr job);
+int
+qemuDomainObjInitJob(qemuDomainJobObjPtr job,
+ qemuDomainObjPrivateJobCallbacksPtr cb);
bool qemuDomainJobAllowed(qemuDomainJobObjPtr jobs, qemuDomainJob newJob);
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index d185666ed8..d9c5f21f21 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -3701,9 +3701,10 @@ static int
qemuDumpWaitForCompletion(virDomainObjPtr vm)
{
qemuDomainObjPrivatePtr priv = vm->privateData;
+ qemuDomainJobPrivatePtr jobPriv = priv->job.privateData;
VIR_DEBUG("Waiting for dump completion");
- while (!priv->job.dumpCompleted && !priv->job.abortJob) {
+ while (!jobPriv->dumpCompleted && !priv->job.abortJob) {
if (virDomainObjWait(vm) < 0)
return -1;
}
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 78e64344f6..2c7bf349c3 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -1422,12 +1422,13 @@ static int
qemuMigrationSrcWaitForSpice(virDomainObjPtr vm)
{
qemuDomainObjPrivatePtr priv = vm->privateData;
+ qemuDomainJobPrivatePtr jobPriv = priv->job.privateData;
- if (!priv->job.spiceMigration)
+ if (!jobPriv->spiceMigration)
return 0;
VIR_DEBUG("Waiting for SPICE to finish migration");
- while (!priv->job.spiceMigrated && !priv->job.abortJob) {
+ while (!jobPriv->spiceMigrated && !priv->job.abortJob) {
if (virDomainObjWait(vm) < 0)
return -1;
}
@@ -1856,9 +1857,11 @@ qemuMigrationSrcGraphicsRelocate(virQEMUDriverPtr driver,
if (qemuDomainObjEnterMonitorAsync(driver, vm,
QEMU_ASYNC_JOB_MIGRATION_OUT) == 0) {
+ qemuDomainJobPrivatePtr jobPriv = priv->job.privateData;
+
ret = qemuMonitorGraphicsRelocate(priv->mon, type, listenAddress,
port, tlsPort, tlsSubject);
- priv->job.spiceMigration = !ret;
+ jobPriv->spiceMigration = !ret;
if (qemuDomainObjExitMonitor(driver, vm) < 0)
ret = -1;
}
@@ -1993,6 +1996,7 @@ qemuMigrationSrcCleanup(virDomainObjPtr vm,
{
virQEMUDriverPtr driver = opaque;
qemuDomainObjPrivatePtr priv = vm->privateData;
+ qemuDomainJobPrivatePtr jobPriv = priv->job.privateData;
VIR_DEBUG("vm=%s, conn=%p, asyncJob=%s, phase=%s",
vm->def->name, conn,
@@ -2018,7 +2022,7 @@ qemuMigrationSrcCleanup(virDomainObjPtr vm,
" domain was successfully started on destination or not",
vm->def->name);
qemuMigrationParamsReset(driver, vm, QEMU_ASYNC_JOB_MIGRATION_OUT,
- priv->job.migParams, priv->job.apiFlags);
+ jobPriv->migParams, priv->job.apiFlags);
/* clear the job and let higher levels decide what to do */
qemuDomainObjDiscardAsyncJob(driver, vm);
break;
@@ -2382,6 +2386,7 @@ qemuMigrationDstPrepareAny(virQEMUDriverPtr driver,
int dataFD[2] = { -1, -1 };
qemuDomainObjPrivatePtr priv = NULL;
qemuMigrationCookiePtr mig = NULL;
+ qemuDomainJobPrivatePtr jobPriv = NULL;
bool tunnel = !!st;
g_autofree char *xmlout = NULL;
unsigned int cookieFlags;
@@ -2505,6 +2510,7 @@ qemuMigrationDstPrepareAny(virQEMUDriverPtr driver,
*def = NULL;
priv = vm->privateData;
+ jobPriv = priv->job.privateData;
priv->origname = g_strdup(origname);
if (taint_hook) {
@@ -2710,7 +2716,7 @@ qemuMigrationDstPrepareAny(virQEMUDriverPtr driver,
stopjob:
qemuMigrationParamsReset(driver, vm, QEMU_ASYNC_JOB_MIGRATION_IN,
- priv->job.migParams, priv->job.apiFlags);
+ jobPriv->migParams, priv->job.apiFlags);
if (stopProcess) {
unsigned int stopFlags = VIR_QEMU_PROCESS_STOP_MIGRATED;
@@ -2979,6 +2985,7 @@ qemuMigrationSrcConfirmPhase(virQEMUDriverPtr driver,
virObjectEventPtr event;
g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver);
qemuDomainObjPrivatePtr priv = vm->privateData;
+ qemuDomainJobPrivatePtr jobPriv = priv->job.privateData;
qemuDomainJobInfoPtr jobInfo = NULL;
VIR_DEBUG("driver=%p, vm=%p, cookiein=%s, cookieinlen=%d, "
@@ -3064,7 +3071,7 @@ qemuMigrationSrcConfirmPhase(virQEMUDriverPtr driver,
qemuMigrationSrcRestoreDomainState(driver, vm);
qemuMigrationParamsReset(driver, vm, QEMU_ASYNC_JOB_MIGRATION_OUT,
- priv->job.migParams, priv->job.apiFlags);
+ jobPriv->migParams, priv->job.apiFlags);
if (virDomainObjSave(vm, driver->xmlopt, cfg->stateDir) < 0)
VIR_WARN("Failed to save status on vm %s", vm->def->name);
@@ -4642,6 +4649,7 @@ qemuMigrationSrcPerformJob(virQEMUDriverPtr driver,
virErrorPtr orig_err = NULL;
g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver);
qemuDomainObjPrivatePtr priv = vm->privateData;
+ qemuDomainJobPrivatePtr jobPriv = priv->job.privateData;
if (qemuMigrationJobStart(driver, vm, QEMU_ASYNC_JOB_MIGRATION_OUT,
flags) < 0)
@@ -4699,7 +4707,7 @@ qemuMigrationSrcPerformJob(virQEMUDriverPtr driver,
*/
if (!v3proto && ret < 0)
qemuMigrationParamsReset(driver, vm, QEMU_ASYNC_JOB_MIGRATION_OUT,
- priv->job.migParams, priv->job.apiFlags);
+ jobPriv->migParams, priv->job.apiFlags);
qemuMigrationSrcRestoreDomainState(driver, vm);
@@ -4740,6 +4748,7 @@ qemuMigrationSrcPerformPhase(virQEMUDriverPtr driver,
unsigned long resource)
{
qemuDomainObjPrivatePtr priv = vm->privateData;
+ qemuDomainJobPrivatePtr jobPriv = priv->job.privateData;
int ret = -1;
/* If we didn't start the job in the begin phase, start it now. */
@@ -4774,7 +4783,7 @@ qemuMigrationSrcPerformPhase(virQEMUDriverPtr driver,
endjob:
if (ret < 0) {
qemuMigrationParamsReset(driver, vm, QEMU_ASYNC_JOB_MIGRATION_OUT,
- priv->job.migParams, priv->job.apiFlags);
+ jobPriv->migParams, priv->job.apiFlags);
qemuMigrationJobFinish(driver, vm);
} else {
qemuMigrationJobContinue(vm);
@@ -4973,6 +4982,7 @@ qemuMigrationDstFinish(virQEMUDriverPtr driver,
virErrorPtr orig_err = NULL;
int cookie_flags = 0;
qemuDomainObjPrivatePtr priv = vm->privateData;
+ qemuDomainJobPrivatePtr jobPriv = priv->job.privateData;
g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver);
unsigned short port;
unsigned long long timeReceived = 0;
@@ -5226,7 +5236,7 @@ qemuMigrationDstFinish(virQEMUDriverPtr driver,
}
qemuMigrationParamsReset(driver, vm, QEMU_ASYNC_JOB_MIGRATION_IN,
- priv->job.migParams, priv->job.apiFlags);
+ jobPriv->migParams, priv->job.apiFlags);
qemuMigrationJobFinish(driver, vm);
if (!virDomainObjIsActive(vm))
diff --git a/src/qemu/qemu_migration_params.c b/src/qemu/qemu_migration_params.c
index aea17b1ac8..fc282960da 100644
--- a/src/qemu/qemu_migration_params.c
+++ b/src/qemu/qemu_migration_params.c
@@ -950,6 +950,7 @@ qemuMigrationParamsEnableTLS(virQEMUDriverPtr driver,
qemuMigrationParamsPtr migParams)
{
qemuDomainObjPrivatePtr priv = vm->privateData;
+ qemuDomainJobPrivatePtr jobPriv = priv->job.privateData;
g_autoptr(virJSONValue) tlsProps = NULL;
g_autoptr(virJSONValue) secProps = NULL;
g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver);
@@ -961,7 +962,7 @@ qemuMigrationParamsEnableTLS(virQEMUDriverPtr driver,
return -1;
}
- if (!priv->job.migParams->params[QEMU_MIGRATION_PARAM_TLS_CREDS].set) {
+ if (!jobPriv->migParams->params[QEMU_MIGRATION_PARAM_TLS_CREDS].set) {
virReportError(VIR_ERR_OPERATION_UNSUPPORTED, "%s",
_("TLS migration is not supported with this "
"QEMU binary"));
@@ -1025,8 +1026,9 @@ qemuMigrationParamsDisableTLS(virDomainObjPtr vm,
qemuMigrationParamsPtr migParams)
{
qemuDomainObjPrivatePtr priv = vm->privateData;
+ qemuDomainJobPrivatePtr jobPriv = priv->job.privateData;
- if (!priv->job.migParams->params[QEMU_MIGRATION_PARAM_TLS_CREDS].set)
+ if (!jobPriv->migParams->params[QEMU_MIGRATION_PARAM_TLS_CREDS].set)
return 0;
if (qemuMigrationParamsSetString(migParams,
@@ -1150,6 +1152,7 @@ qemuMigrationParamsCheck(virQEMUDriverPtr driver,
virBitmapPtr remoteCaps)
{
qemuDomainObjPrivatePtr priv = vm->privateData;
+ qemuDomainJobPrivatePtr jobPriv = priv->job.privateData;
qemuMigrationCapability cap;
qemuMigrationParty party;
size_t i;
@@ -1203,7 +1206,7 @@ qemuMigrationParamsCheck(virQEMUDriverPtr driver,
* to ask QEMU for their current settings.
*/
- return qemuMigrationParamsFetch(driver, vm, asyncJob, &priv->job.migParams);
+ return qemuMigrationParamsFetch(driver, vm, asyncJob, &jobPriv->migParams);
}
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 70fc24b993..af060b3180 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -1608,6 +1608,7 @@ qemuProcessHandleSpiceMigrated(qemuMonitorPtr mon G_GNUC_UNUSED,
void *opaque G_GNUC_UNUSED)
{
qemuDomainObjPrivatePtr priv;
+ qemuDomainJobPrivatePtr jobPriv;
virObjectLock(vm);
@@ -1615,12 +1616,13 @@ qemuProcessHandleSpiceMigrated(qemuMonitorPtr mon G_GNUC_UNUSED,
vm, vm->def->name);
priv = vm->privateData;
+ jobPriv = priv->job.privateData;
if (priv->job.asyncJob != QEMU_ASYNC_JOB_MIGRATION_OUT) {
VIR_DEBUG("got SPICE_MIGRATE_COMPLETED event without a migration job");
goto cleanup;
}
- priv->job.spiceMigrated = true;
+ jobPriv->spiceMigrated = true;
virDomainObjBroadcast(vm);
cleanup:
@@ -1720,6 +1722,7 @@ qemuProcessHandleDumpCompleted(qemuMonitorPtr mon G_GNUC_UNUSED,
void *opaque G_GNUC_UNUSED)
{
qemuDomainObjPrivatePtr priv;
+ qemuDomainJobPrivatePtr jobPriv;
virObjectLock(vm);
@@ -1727,11 +1730,12 @@ qemuProcessHandleDumpCompleted(qemuMonitorPtr mon G_GNUC_UNUSED,
vm, vm->def->name, stats, NULLSTR(error));
priv = vm->privateData;
+ jobPriv = priv->job.privateData;
if (priv->job.asyncJob == QEMU_ASYNC_JOB_NONE) {
VIR_DEBUG("got DUMP_COMPLETED event without a dump_completed job");
goto cleanup;
}
- priv->job.dumpCompleted = true;
+ jobPriv->dumpCompleted = true;
priv->job.current->stats.dump = *stats;
priv->job.error = g_strdup(error);
@@ -3411,6 +3415,8 @@ qemuProcessRecoverMigrationIn(virQEMUDriverPtr driver,
virDomainState state,
int reason)
{
+
+ qemuDomainJobPrivatePtr jobPriv = job->privateData;
bool postcopy = (state == VIR_DOMAIN_PAUSED &&
reason == VIR_DOMAIN_PAUSED_POSTCOPY_FAILED) ||
(state == VIR_DOMAIN_RUNNING &&
@@ -3459,7 +3465,7 @@ qemuProcessRecoverMigrationIn(virQEMUDriverPtr driver,
}
qemuMigrationParamsReset(driver, vm, QEMU_ASYNC_JOB_NONE,
- job->migParams, job->apiFlags);
+ jobPriv->migParams, job->apiFlags);
return 0;
}
@@ -3471,6 +3477,7 @@ qemuProcessRecoverMigrationOut(virQEMUDriverPtr driver,
int reason,
unsigned int *stopFlags)
{
+ qemuDomainJobPrivatePtr jobPriv = job->privateData;
bool postcopy = state == VIR_DOMAIN_PAUSED &&
(reason == VIR_DOMAIN_PAUSED_POSTCOPY ||
reason == VIR_DOMAIN_PAUSED_POSTCOPY_FAILED);
@@ -3554,7 +3561,7 @@ qemuProcessRecoverMigrationOut(virQEMUDriverPtr driver,
}
qemuMigrationParamsReset(driver, vm, QEMU_ASYNC_JOB_NONE,
- job->migParams, job->apiFlags);
+ jobPriv->migParams, job->apiFlags);
return 0;
}
--
2.25.1
[libvirt PATCH] spec: Drop explicit dependency on ncurses
by Andrea Bolognani
We don't actually use ncurses directly: readline needs it, but
that's a readline implementation detail and not something that we
should concern ourselves with.
Signed-off-by: Andrea Bolognani <abologna(a)redhat.com>
---
Test pipeline where I've tweaked all CI environments so that ncurses
is not explicitly installed, and in the case of macOS pkg-config is
not configured in a way that allows it to locate ncurses.pc:
https://gitlab.com/abologna/libvirt/-/pipelines/168175622
libvirt.spec.in | 2 --
1 file changed, 2 deletions(-)
diff --git a/libvirt.spec.in b/libvirt.spec.in
index f335a7e06e..029159269b 100644
--- a/libvirt.spec.in
+++ b/libvirt.spec.in
@@ -288,7 +288,6 @@ BuildRequires: readline-devel
%if %{with_bash_completion}
BuildRequires: bash-completion >= 2.0
%endif
-BuildRequires: ncurses-devel
BuildRequires: gettext
BuildRequires: libtasn1-devel
BuildRequires: gnutls-devel
@@ -901,7 +900,6 @@ capabilities of VirtualBox
Summary: Client side utilities of the libvirt library
Requires: %{name}-libs = %{version}-%{release}
Requires: readline
-Requires: ncurses
# Needed by /usr/libexec/libvirt-guests.sh script.
Requires: gettext
# Needed by virt-pki-validate script.
--
2.25.4
[libvirt PATCH] spec: Don't require mdevctl on RHEL 7
by Andrea Bolognani
mdevctl is a relatively new tool that's packaged for Fedora and
RHEL 8, but not for RHEL 7. Make the dependency conditional to
avoid the libvirt-daemon-driver-nodedev package becoming
uninstallable on that platform.
Fixes: 9691440ecbc7d9383a1410f1067a4f9221f2de2c
Signed-off-by: Andrea Bolognani <abologna(a)redhat.com>
---
libvirt.spec.in | 2 ++
1 file changed, 2 insertions(+)
diff --git a/libvirt.spec.in b/libvirt.spec.in
index f335a7e06e..c700bf8e5c 100644
--- a/libvirt.spec.in
+++ b/libvirt.spec.in
@@ -526,7 +526,9 @@ Requires: libvirt-libs = %{version}-%{release}
# needed for device enumeration
Requires: systemd >= 185
# For managing persistent mediated devices
+%if 0%{?fedora} || 0%{?rhel} > 7
Requires: mdevctl
+%endif
%description daemon-driver-nodedev
The nodedev driver plugin for the libvirtd daemon, providing
--
2.25.4
[libvirt PATCH] spec: Remove unnecessary version check
by Andrea Bolognani
As of 65a883b34932 we no longer support Fedora releases older than
31, so the version check has become unnecessary.
Signed-off-by: Andrea Bolognani <abologna(a)redhat.com>
---
Pushed as trivial.
libvirt.spec.in | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/libvirt.spec.in b/libvirt.spec.in
index 522e36b6a5..f335a7e06e 100644
--- a/libvirt.spec.in
+++ b/libvirt.spec.in
@@ -147,7 +147,7 @@
%define with_firewalld 1
-%if 0%{?fedora} >= 31 || 0%{?rhel} > 7
+%if 0%{?fedora} || 0%{?rhel} > 7
%define with_firewalld_zone 0%{!?_without_firewalld_zone:1}
%endif
--
2.25.4
[libvirt PATCH 0/3] Fix migration with host-passthrough CPU
by Jiri Denemark
Fixes a regression in libvirt 6.5.0 caused by commit 201bd5db63. See
patches 2 and 3 for details.
Jiri Denemark (3):
qemu_monitor: Add API for checking CPU migratable property
qemu: Do not set //cpu/@migratable for running domains in post-parse
qemu: Properly set //cpu/@migratable default value for running domains
src/qemu/qemu_domain.c | 8 ++++--
src/qemu/qemu_monitor.c | 18 +++++++++++++
src/qemu/qemu_monitor.h | 4 +++
src/qemu/qemu_monitor_json.c | 27 ++++++++++++++++++++
src/qemu/qemu_monitor_json.h | 4 +++
src/qemu/qemu_process.c | 49 ++++++++++++++++++++++++++++++++++++
6 files changed, 108 insertions(+), 2 deletions(-)
--
2.27.0
[PATCH] virNetSocketCheckProtocols: Actually check bool value
by Michal Privoznik
In 9536379da4c8ed61 and 8b0cb0e666f I tried to call
virNetSocketCheckProtocolByLookup() only if we suspect the
host is IPv4 or IPv6 capable (because we've found an interface
with such an address). However, the code was missing a dereference
of the boolean variables and thus was comparing the pointers
against NULL.
Signed-off-by: Michal Privoznik <mprivozn(a)redhat.com>
---
Pushed under trivial rule.
src/rpc/virnetsocket.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/src/rpc/virnetsocket.c b/src/rpc/virnetsocket.c
index 45dcf48206..5a7f501923 100644
--- a/src/rpc/virnetsocket.c
+++ b/src/rpc/virnetsocket.c
@@ -205,11 +205,11 @@ int virNetSocketCheckProtocols(bool *hasIPv4,
freeifaddrs(ifaddr);
- if (hasIPv4 &&
+ if (*hasIPv4 &&
virNetSocketCheckProtocolByLookup("127.0.0.1", AF_INET, hasIPv4) < 0)
return -1;
- if (hasIPv6 &&
+ if (*hasIPv6 &&
virNetSocketCheckProtocolByLookup("::1", AF_INET6, hasIPv6) < 0)
return -1;
--
2.26.2
[libvirt PATCH] docs: fix compilation instructions to use separate build dir
by Daniel P. Berrangé
Signed-off-by: Daniel P. Berrangé <berrange(a)redhat.com>
---
docs/compiling.html.in | 6 ++++--
1 file changed, 4 insertions(+), 2 deletions(-)
diff --git a/docs/compiling.html.in b/docs/compiling.html.in
index 0e12a9218e..18943a9a79 100644
--- a/docs/compiling.html.in
+++ b/docs/compiling.html.in
@@ -76,7 +76,8 @@ $ <b>sudo</b> <i>make install</i></pre>
</p>
<pre>
-$ ./autogen.sh --prefix=$HOME/usr
+$ mkdir build && cd build
+$ ../autogen.sh --prefix=$HOME/usr
$ make
$ <b>sudo</b> make install</pre>
@@ -88,7 +89,8 @@ $ <b>sudo</b> make install</pre>
</p>
<pre>
-$ ./autogen.sh --system
+$ mkdir build && cd build
+$ ../autogen.sh --system
$ make
</pre>
--
2.26.2