[libvirt] [PATCH v2 0/3] driver level connection close event
by nshirokovskiy@virtuozzo.com
Notify of a connection close event from the parallels driver, (possibly) wrapped in
the remote driver.
Changes from v1:
1. fix comment style issues
2. remove spurious whitespace
3. move the rpc-related part from the vz patch to the second (rpc) patch
4. remove unnecessary locks for the immutable closeCallback in the first patch.
Discussion.
In patches 1 and 2 we are forced into some decisions because we don't have a weak
reference mechanism.
Patch 1.
-----------
virConnectCloseCallback is introduced because we cannot reference the
connection object itself when setting a network layer callback, due to how
connection close works.
The connection close procedure is as follows:
1. the client closes the connection
2. at this point nobody else references the connection and it is disposed
3. disposing the connection unreferences the network connection
4. the network connection is disposed
Thus if we reference the connection in the network close callback we never reach step 2.
virConnectCloseCallback breaks this cycle, but at the cost that clients MUST
unregister explicitly before closing the connection. This is not good, as this
unregistration is not really needed. The client is not saying that it does not
want to receive events anymore, but rather is forced to obey some
implementation-driven rules.
Patch 2.
-----------
We impose requirements on driver implementations, which is fragile. Moreover we
again need explicit unregistrations. The implementation of domain events
illustrates this point: remoteDispatchConnectDomainEventRegister does not
reference the NetClient and unregisters before the NetClient is disposed, but
drivers do not meet the formulated requirements. The object event system releases
its lock before delivering an event, for re-entrancy purposes.
In short, we have 2 undesired consequences here.
1. Mandatory unregistration.
2. Imposed multi-threading requirements.
Introducing weak pointers could free us from these artifacts. The following weak
reference workflow illustrates this.
1. Take a weak reference on the object of interest before passing it to another
party. This doesn't break the disposal mechanics, as a weak reference does not
prevent the object from being disposed. The object is disposed, but its memory
is not freed yet while weak references remain.
2. When the callback is called, we can safely check whether the pointer is
dangling, because we took a weak reference earlier.
3. Release the weak reference; this triggers freeing of the memory if there are
no more weak references.
daemon/libvirtd.h | 1 +
daemon/remote.c | 86 +++++++++++++++++++++++++++++++
src/datatypes.c | 115 +++++++++++++++++++++++++++++++----------
src/datatypes.h | 21 ++++++--
src/driver-hypervisor.h | 12 ++++
src/libvirt-host.c | 77 +++++++++-------------------
src/remote/remote_driver.c | 106 +++++++++++++++++++++++++++++---------
src/remote/remote_protocol.x | 24 ++++++++-
src/remote_protocol-structs | 6 ++
src/vz/vz_driver.c | 26 +++++++++
src/vz/vz_sdk.c | 29 +++++++++++
src/vz/vz_utils.h | 3 +
9 years, 4 months
[libvirt] vm snapshot multi-disk
by Marcus
Hi all,
I've recently been toying with VM snapshots, and have run into an
issue. Given a VM with multiple disks, it seems a snapshot-create followed
by a snapshot-delete will only remove the qcow2 snapshot for the first disk
(or perhaps just the disk that contains the memory), not all of the disk
snapshots it created. Is this something people are aware of?
In searching around, I found a bug report where snapshot-creates would
fail due to the qcow2 snapshot IDs being inconsistent. That looks like it
is patched for qemu 2.4 (
http://lists.nongnu.org/archive/html/qemu-devel/2015-03/msg04963.html);
this bug would trigger that one by leaving IDs around that are inconsistent
between member disks, but it is not the same issue.
# virsh snapshot-create 7
Domain snapshot 1436792720 created
# virsh snapshot-list 7
Name Creation Time State
------------------------------------------------------------
1436792720 2015-07-13 06:05:20 -0700 running
# virsh domblklist 7
Target   Source
------------------------------------------------
vda      /mnt/2a270ef3-f389-37a4-942f-380bed9f70aa/e4d6e885-1382-40bc-890b-ad9c8b51a7a5
vdb      /mnt/2a270ef3-f389-37a4-942f-380bed9f70aa/7033e4c6-5f59-4325-b7e0-ae191e12e86c
# qemu-img snapshot -l
/mnt/2a270ef3-f389-37a4-942f-380bed9f70aa/e4d6e885-1382-40bc-890b-ad9c8b51a7a5
Snapshot list:
ID TAG VM SIZE DATE VM CLOCK
1 1436792720 173M 2015-07-13 06:05:20 00:01:10.938
# qemu-img snapshot -l
/mnt/2a270ef3-f389-37a4-942f-380bed9f70aa/7033e4c6-5f59-4325-b7e0-ae191e12e86c
Snapshot list:
ID TAG VM SIZE DATE VM CLOCK
1 1436792720 0 2015-07-13 06:05:20 00:01:10.938
# virsh snapshot-delete 7 1436792720
Domain snapshot 1436792720 deleted
# qemu-img snapshot -l
/mnt/2a270ef3-f389-37a4-942f-380bed9f70aa/e4d6e885-1382-40bc-890b-ad9c8b51a7a5
# qemu-img snapshot -l
/mnt/2a270ef3-f389-37a4-942f-380bed9f70aa/7033e4c6-5f59-4325-b7e0-ae191e12e86c
Snapshot list:
ID TAG VM SIZE DATE VM CLOCK
1 1436792720 0 2015-07-13 06:05:20 00:01:10.938
[libvirt] [PATCH] virsh: Don't output node frequency if unknown
by Martin Kletzander
Commit ed8155eafbff5c5ca0bdfe84a8388f58b718c2f9 documented that the
mhz field in virNodeInfo may be 0 if the frequency is unknown. Modify
virsh to account for that.
Signed-off-by: Martin Kletzander <mkletzan(a)redhat.com>
---
tools/virsh-host.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/tools/virsh-host.c b/tools/virsh-host.c
index 66f7fd9e62e4..7a223931b152 100644
--- a/tools/virsh-host.c
+++ b/tools/virsh-host.c
@@ -606,7 +606,8 @@ cmdNodeinfo(vshControl *ctl, const vshCmd *cmd ATTRIBUTE_UNUSED)
}
vshPrint(ctl, "%-20s %s\n", _("CPU model:"), info.model);
vshPrint(ctl, "%-20s %d\n", _("CPU(s):"), info.cpus);
- vshPrint(ctl, "%-20s %d MHz\n", _("CPU frequency:"), info.mhz);
+ if (info.mhz)
+ vshPrint(ctl, "%-20s %d MHz\n", _("CPU frequency:"), info.mhz);
vshPrint(ctl, "%-20s %d\n", _("CPU socket(s):"), info.sockets);
vshPrint(ctl, "%-20s %d\n", _("Core(s) per socket:"), info.cores);
vshPrint(ctl, "%-20s %d\n", _("Thread(s) per core:"), info.threads);
--
2.4.5
[libvirt] [PATCH 0/3] qemu: virtio-9p-ccw support
by Boris Fiuczynski
Adding support and a test for virtio-9p-ccw.
Changing the default from virtio-9p-pci to virtio-9p-ccw for
s390-ccw-virtio machines.
Boris Fiuczynski (3):
qemu: Support for virtio-9p-ccw
qemu: Make virtio-9p-ccw the default for s390-ccw-virtio machines
qemu: Test for virtio-9p-ccw support
src/qemu/qemu_command.c | 14 ++++++++-
tests/qemuxml2argvdata/qemuxml2argv-fs9p-ccw.args | 16 ++++++++++
tests/qemuxml2argvdata/qemuxml2argv-fs9p-ccw.xml | 36 +++++++++++++++++++++++
tests/qemuxml2argvtest.c | 4 +++
4 files changed, 69 insertions(+), 1 deletion(-)
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-fs9p-ccw.args
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-fs9p-ccw.xml
--
1.8.1.4
[libvirt] [PATCH] daemonRunStateInit: Fix a typo on a comment
by Michal Privoznik
s/priviledged/privileged/
Signed-off-by: Michal Privoznik <mprivozn(a)redhat.com>
---
Pushed under trivial and who-cares rules.
daemon/libvirtd.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/daemon/libvirtd.c b/daemon/libvirtd.c
index 654e7f4..71db4a0 100644
--- a/daemon/libvirtd.c
+++ b/daemon/libvirtd.c
@@ -956,7 +956,7 @@ static void daemonRunStateInit(void *opaque)
driversInitialized = true;
#ifdef HAVE_DBUS
- /* Tie the non-priviledged libvirtd to the session/shutdown lifecycle */
+ /* Tie the non-privileged libvirtd to the session/shutdown lifecycle */
if (!virNetDaemonIsPrivileged(dmn)) {
sessionBus = virDBusGetSessionBus();
--
2.3.6
[libvirt] [PATCH 1/6] vz: add migration backbone code
by nshirokovskiy@virtuozzo.com
From: Nikolay Shirokovskiy <nshirokovskiy(a)virtuozzo.com>
This patch makes basic vz migration possible. For example by virsh:
virsh -c vz:///system migrate $NAME vz+ssh://$DST/system
Vz migration is implemented through the drivers' managed migration interface,
although it looks like a candidate for direct migration, as all the work is done by
the vz sdk. The reason is that the vz sdk lacks the rich remote authentication
capabilities of libvirt, and if we chose to implement direct migration we would have
to reimplement libvirt's authentication. This brings the requirement that the
destination side must have a running libvirt daemon. This is not a problem, as vz is
moving in the direction of tight integration with libvirt.
Another issue with this choice is that if the managed migration fails at the
'finish' step, the driver is supposed to resume the domain on the source. This is
not compatible with vz sdk migration, but it can be overcome without losing
consistency; see the comments in the code.
Technically we have a libvirt connection to the destination in the managed migration
scheme, and we use this connection to obtain a session_uuid (which acts as an authZ
token) for vz migration. This uuid is passed from the destination through a cookie
at the 'prepare' step.
A few words on the vz migration URI. I'd probably use plain 'hostname:port' URIs, as
we don't have different migration schemes in vz, but the scheme part is mandatory,
so 'tcp' is used. It looks like a good name.
Signed-off-by: Nikolay Shirokovskiy <nshirokovskiy(a)virtuozzo.com>
---
src/vz/vz_driver.c | 250 ++++++++++++++++++++++++++++++++++++++++++++++++++++
src/vz/vz_sdk.c | 79 ++++++++++++++--
src/vz/vz_sdk.h | 2 +
src/vz/vz_utils.h | 1 +
4 files changed, 322 insertions(+), 10 deletions(-)
diff --git a/src/vz/vz_driver.c b/src/vz/vz_driver.c
index 9f0c52f..e003646 100644
--- a/src/vz/vz_driver.c
+++ b/src/vz/vz_driver.c
@@ -1343,6 +1343,250 @@ vzDomainMemoryStats(virDomainPtr domain,
return ret;
}
+static int
+vzConnectSupportsFeature(virConnectPtr conn ATTRIBUTE_UNUSED, int feature)
+{
+ switch (feature) {
+ case VIR_DRV_FEATURE_MIGRATION_PARAMS:
+ return 1;
+ default:
+ return 0;
+ }
+}
+
+#define VZ_MIGRATION_PARAMETERS NULL
+
+static char *
+vzDomainMigrateBegin3Params(virDomainPtr domain,
+ virTypedParameterPtr params,
+ int nparams,
+ char **cookieout ATTRIBUTE_UNUSED,
+ int *cookieoutlen ATTRIBUTE_UNUSED,
+ unsigned int fflags ATTRIBUTE_UNUSED)
+{
+ virDomainObjPtr dom = NULL;
+ char *xml = NULL;
+
+ if (virTypedParamsValidate(params, nparams, VZ_MIGRATION_PARAMETERS) < 0)
+ goto cleanup;
+
+ if (!(dom = vzDomObjFromDomain(domain)))
+ goto cleanup;
+
+ xml = virDomainDefFormat(dom->def, VIR_DOMAIN_DEF_FORMAT_SECURE);
+
+ cleanup:
+ if (dom)
+ virObjectUnlock(dom);
+
+ return xml;
+}
+
+/* return 'hostname' */
+static char *
+vzCreateMigrateUri(void)
+{
+ char *hostname = NULL;
+ char *out = NULL;
+ virURI uri = {};
+
+ if ((hostname = virGetHostname()) == NULL)
+ goto cleanup;
+
+ if (STRPREFIX(hostname, "localhost")) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+ _("hostname on destination resolved to localhost,"
+ " but migration requires an FQDN"));
+ goto cleanup;
+ }
+
+ /* to set const string to non-const */
+ if (VIR_STRDUP(uri.scheme, "tcp") < 0)
+ goto cleanup;
+ uri.server = hostname;
+ out = virURIFormat(&uri);
+
+ cleanup:
+ VIR_FREE(hostname);
+ VIR_FREE(uri.scheme);
+ return out;
+}
+
+static int
+vzDomainMigratePrepare3Params(virConnectPtr dconn,
+ virTypedParameterPtr params ATTRIBUTE_UNUSED,
+ int nparams ATTRIBUTE_UNUSED,
+ const char *cookiein ATTRIBUTE_UNUSED,
+ int cookieinlen ATTRIBUTE_UNUSED,
+ char **cookieout,
+ int *cookieoutlen,
+ char **uri_out,
+ unsigned int fflags ATTRIBUTE_UNUSED)
+{
+ vzConnPtr privconn = dconn->privateData;
+ int ret = -1;
+ char uuidstr[VIR_UUID_STRING_BUFLEN];
+
+ *cookieout = NULL;
+ *uri_out = NULL;
+
+ virUUIDFormat(privconn->session_uuid, uuidstr);
+ if (VIR_STRDUP(*cookieout, uuidstr) < 0)
+ goto cleanup;
+ *cookieoutlen = strlen(*cookieout) + 1;
+
+ if (!(*uri_out = vzCreateMigrateUri()))
+ goto cleanup;
+
+ ret = 0;
+
+ cleanup:
+ if (ret != 0) {
+ VIR_FREE(*cookieout);
+ VIR_FREE(*uri_out);
+ *cookieoutlen = 0;
+ }
+
+ return ret;
+}
+
+static int
+vzDomainMigratePerform3Params(virDomainPtr domain,
+ const char *dconnuri ATTRIBUTE_UNUSED,
+ virTypedParameterPtr params,
+ int nparams,
+ const char *cookiein,
+ int cookieinlen ATTRIBUTE_UNUSED,
+ char **cookieout,
+ int *cookieoutlen,
+ unsigned int fflags ATTRIBUTE_UNUSED)
+{
+ int ret = -1;
+ virDomainObjPtr dom = NULL;
+ const char *uri = NULL;
+ unsigned char session_uuid[VIR_UUID_BUFLEN];
+ char uuidstr[VIR_UUID_STRING_BUFLEN];
+
+ *cookieout = NULL;
+
+ if (virTypedParamsGetString(params, nparams,
+ VIR_MIGRATE_PARAM_URI,
+ &uri) < 0)
+ goto cleanup;
+
+ if (!(dom = vzDomObjFromDomain(domain)))
+ goto cleanup;
+
+ if (!uri) {
+ virReportError(VIR_ERR_INVALID_ARG, "%s",
+ _("migration URI should be provided"));
+ goto cleanup;
+ }
+
+ if (virUUIDParse(cookiein, session_uuid) < 0)
+ goto cleanup;
+
+ if (prlsdkMigrate(dom, uri, session_uuid) < 0)
+ goto cleanup;
+
+ virUUIDFormat(domain->uuid, uuidstr);
+ if (VIR_STRDUP(*cookieout, uuidstr) < 0)
+ goto cleanup;
+ *cookieoutlen = strlen(*cookieout) + 1;
+
+ ret = 0;
+
+ cleanup:
+ if (dom)
+ virObjectUnlock(dom);
+ if (ret != 0) {
+ VIR_FREE(*cookieout);
+ *cookieoutlen = 0;
+ }
+
+ return ret;
+}
+
+/* If we return NULL from this function we are supposed
+ to clean up the destination side, but we can't do it
+ because the 'perform' step is finished and the migration is actually
+ completed by the dispatcher. Unfortunately, in an OOM situation we
+ have to return NULL. As a result the high level migration
+ function returns NULL, which is supposed to be
+ treated as a migration error, while the migration is
+ actually successful. This should not be a problem,
+ as we are in a consistent state. For example, later
+ attempts to list source and destination domains
+ will reveal the actual situation. */
+
+static virDomainPtr
+vzDomainMigrateFinish3Params(virConnectPtr dconn,
+ virTypedParameterPtr params ATTRIBUTE_UNUSED,
+ int nparams ATTRIBUTE_UNUSED,
+ const char *cookiein,
+ int cookieinlen ATTRIBUTE_UNUSED,
+ char **cookieout ATTRIBUTE_UNUSED,
+ int *cookieoutlen ATTRIBUTE_UNUSED,
+ unsigned int fflags ATTRIBUTE_UNUSED,
+ int cancelled)
+{
+ vzConnPtr privconn = dconn->privateData;
+ virDomainObjPtr dom = NULL;
+ virDomainPtr domain = NULL;
+ unsigned char domain_uuid[VIR_UUID_BUFLEN];
+
+ /* we have nothing to cleanup, whole job is done by PCS dispatcher */
+ if (cancelled)
+ return NULL;
+
+ if (virUUIDParse(cookiein, domain_uuid) < 0) {
+ virReportError(VIR_ERR_INTERNAL_ERROR,
+ _("Could not parse UUID from string '%s'"),
+ cookiein);
+ goto cleanup;
+ }
+
+ if (!(dom = prlsdkAddDomain(privconn, domain_uuid)))
+ goto cleanup;
+
+ domain = virGetDomain(dconn, dom->def->name, dom->def->uuid);
+ if (domain)
+ domain->id = dom->def->id;
+
+ cleanup:
+ if (!domain)
+ VIR_WARN("Can't provide domain with uuid '%s' after successful migration.", cookiein);
+ virDomainObjEndAPI(&dom);
+ return domain;
+}
+
+/* This is executed only if the 'perform' step is successful, that
+ is, the migration has been completed by the PCS dispatcher. Thus we
+ should ignore the 'cancelled' parameter and always kill the source domain. */
+static int
+vzDomainMigrateConfirm3Params(virDomainPtr domain,
+ virTypedParameterPtr params ATTRIBUTE_UNUSED,
+ int nparams ATTRIBUTE_UNUSED,
+ const char *cookiein ATTRIBUTE_UNUSED,
+ int cookieinlen ATTRIBUTE_UNUSED,
+ unsigned int fflags ATTRIBUTE_UNUSED,
+ int cancelled ATTRIBUTE_UNUSED)
+{
+ vzConnPtr privconn = domain->conn->privateData;
+ virDomainObjPtr dom = NULL;
+
+ if (!(dom = vzDomObjFromDomain(domain)))
+ goto cleanup;
+
+ virDomainObjListRemove(privconn->domains, dom);
+
+ cleanup:
+ if (dom)
+ virObjectUnlock(dom);
+
+ return 0;
+}
+
static virHypervisorDriver vzDriver = {
.name = "vz",
.connectOpen = vzConnectOpen, /* 0.10.0 */
@@ -1396,6 +1640,12 @@ static virHypervisorDriver vzDriver = {
.domainBlockStatsFlags = vzDomainBlockStatsFlags, /* 1.2.17 */
.domainInterfaceStats = vzDomainInterfaceStats, /* 1.2.17 */
.domainMemoryStats = vzDomainMemoryStats, /* 1.2.17 */
+ .connectSupportsFeature = vzConnectSupportsFeature, /* 1.2.18 */
+ .domainMigrateBegin3Params = vzDomainMigrateBegin3Params, /* 1.2.18 */
+ .domainMigratePrepare3Params = vzDomainMigratePrepare3Params, /* 1.2.18 */
+ .domainMigratePerform3Params = vzDomainMigratePerform3Params, /* 1.2.18 */
+ .domainMigrateFinish3Params = vzDomainMigrateFinish3Params, /* 1.2.18 */
+ .domainMigrateConfirm3Params = vzDomainMigrateConfirm3Params, /* 1.2.18 */
};
static virConnectDriver vzConnectDriver = {
diff --git a/src/vz/vz_sdk.c b/src/vz/vz_sdk.c
index d1bc312..908bfc1 100644
--- a/src/vz/vz_sdk.c
+++ b/src/vz/vz_sdk.c
@@ -37,6 +37,9 @@
#define VIR_FROM_THIS VIR_FROM_PARALLELS
#define JOB_INFINIT_WAIT_TIMEOUT UINT_MAX
+static int
+prlsdkUUIDParse(const char *uuidstr, unsigned char *uuid);
+
VIR_LOG_INIT("parallels.sdk");
/*
@@ -228,24 +231,40 @@ prlsdkDeinit(void)
int
prlsdkConnect(vzConnPtr privconn)
{
- PRL_RESULT ret;
+ int ret = -1;
+ PRL_RESULT pret;
PRL_HANDLE job = PRL_INVALID_HANDLE;
+ PRL_HANDLE result = PRL_INVALID_HANDLE;
+ PRL_HANDLE response = PRL_INVALID_HANDLE;
+ char session_uuid[VIR_UUID_STRING_BUFLEN + 2];
+ PRL_UINT32 buflen = ARRAY_CARDINALITY(session_uuid);
- ret = PrlSrv_Create(&privconn->server);
- if (PRL_FAILED(ret)) {
- logPrlError(ret);
- return -1;
- }
+ pret = PrlSrv_Create(&privconn->server);
+ prlsdkCheckRetGoto(pret, cleanup);
job = PrlSrv_LoginLocalEx(privconn->server, NULL, 0,
PSL_HIGH_SECURITY, PACF_NON_INTERACTIVE_MODE);
+ if (PRL_FAILED(getJobResult(job, &result)))
+ goto cleanup;
+
+ pret = PrlResult_GetParam(result, &response);
+ prlsdkCheckRetGoto(pret, cleanup);
+
+ pret = PrlLoginResponse_GetSessionUuid(response, session_uuid, &buflen);
+ prlsdkCheckRetGoto(pret, cleanup);
+
+ if (prlsdkUUIDParse(session_uuid, privconn->session_uuid) < 0)
+ goto cleanup;
+
+ ret = 0;
- if (waitJob(job)) {
+ cleanup:
+ if (ret < 0)
PrlHandle_Free(privconn->server);
- return -1;
- }
+ PrlHandle_Free(result);
+ PrlHandle_Free(response);
- return 0;
+ return ret;
}
void
@@ -4035,3 +4054,43 @@ prlsdkGetMemoryStats(virDomainObjPtr dom,
return ret;
}
+
+/* High security is the default choice for 2 reasons:
+ 1. as this is the highest security setting, we can't get a
+ rejection from a server with high security settings
+ 2. this is on par with the security level of the driver's
+ connection to the dispatcher */
+
+#define PRLSDK_MIGRATION_FLAGS (PSL_HIGH_SECURITY)
+
+int prlsdkMigrate(virDomainObjPtr dom, const char* uri_str,
+ const unsigned char *session_uuid)
+{
+ int ret = -1;
+ vzDomObjPtr privdom = dom->privateData;
+ virURIPtr uri = NULL;
+ PRL_HANDLE job = PRL_INVALID_HANDLE;
+ char uuidstr[VIR_UUID_STRING_BUFLEN + 2];
+
+ uri = virURIParse(uri_str);
+ /* no special error logs as uri should be checked on prepare step */
+ if (uri == NULL)
+ goto cleanup;
+
+ prlsdkUUIDFormat(session_uuid, uuidstr);
+ job = PrlVm_MigrateEx(privdom->sdkdom, uri->server, uri->port, uuidstr,
+ "", /* use default dir for migrated instance bundle */
+ PRLSDK_MIGRATION_FLAGS,
+ 0, /* reserved flags */
+ PRL_TRUE /* don't ask for confirmations */
+ );
+
+ if (PRL_FAILED(waitJob(job)))
+ goto cleanup;
+
+ ret = 0;
+
+ cleanup:
+ virURIFree(uri);
+ return ret;
+}
diff --git a/src/vz/vz_sdk.h b/src/vz/vz_sdk.h
index ebe4591..1a90eca 100644
--- a/src/vz/vz_sdk.h
+++ b/src/vz/vz_sdk.h
@@ -76,3 +76,5 @@ int
prlsdkGetVcpuStats(virDomainObjPtr dom, int idx, unsigned long long *time);
int
prlsdkGetMemoryStats(virDomainObjPtr dom, virDomainMemoryStatPtr stats, unsigned int nr_stats);
+int
+prlsdkMigrate(virDomainObjPtr dom, const char* uri_str, const unsigned char *session_uuid);
diff --git a/src/vz/vz_utils.h b/src/vz/vz_utils.h
index db09647..a779b03 100644
--- a/src/vz/vz_utils.h
+++ b/src/vz/vz_utils.h
@@ -62,6 +62,7 @@ struct _vzConn {
virDomainObjListPtr domains;
PRL_HANDLE server;
+ unsigned char session_uuid[VIR_UUID_BUFLEN];
virStoragePoolObjList pools;
virNetworkObjListPtr networks;
virCapsPtr caps;
--
1.7.1
[libvirt] [PATCH 0/2] Fix some Coverity issues
by Michal Privoznik
*** BLURB HERE ***
Michal Privoznik (2):
qemuMigrationRun: Don't leak @fd
cmdVcpuPin: Remove dead code
src/qemu/qemu_migration.c | 2 +-
tools/virsh-domain.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
--
2.3.6
Re: [libvirt] [PING: PATCH v4 0/3] Allow PCI virtio on ARM "virt" machine
by Pavel Fedin
Knock-knock!!!
Kind regards,
Pavel Fedin
Expert Engineer
Samsung Electronics Research center Russia
> -----Original Message-----
> From: libvir-list-bounces(a)redhat.com [mailto:libvir-list-bounces@redhat.com] On Behalf Of Pavel
Fedin
> Sent: Thursday, July 09, 2015 12:11 PM
> To: libvir-list(a)redhat.com
> Cc: Peter Krempa
> Subject: [libvirt] [PATCH v4 0/3] Allow PCI virtio on ARM "virt" machine
>
> Virt machine in qemu since v2.3.0 has PCI generic host controller, and can use
> PCI devices. This provides performance improvement as well as vhost-net with
> irqfd support for virtio-net. However libvirt currently does not allow ARM virt
> machine to have PCI devices. This patchset adds the necessary support.
>
> Changes since v3:
> - Capability is based not on qemu version but on support of "gpex-pcihost"
> device by qemu
> - Added a workaround, allowing to pass "make check". The problem is that
> test suite does not build capabilities cache. Unfortunately this means
> that correct unit-test for the new functionality currently cannot be
> written. Test suite framework needs to be improved.
> Changes since v2:
> Complete rework, use different approach
> - Correctly model PCI Express bus on the machine. It is now possible to
> explicitly specify <address-type='pci'> with attributes. This allows to
> attach not only virtio, but any other PCI device to the model.
> - Default is not changed and still mmio, for backwards compatibility with
> existing installations. PCI bus has to be explicitly specified.
> - Check for the capability in correct place, in v2 it actually did not work
> Changes since v1:
> - Added capability based on qemu version number
> - Recognize also "virt-" prefix
>
> Pavel Fedin (3):
> Introduce QEMU_CAPS_OBJECT_GPEX
> Add PCI-Express root to ARM virt machine
> Build correct command line for PCI NICs on ARM
>
> src/qemu/qemu_capabilities.c | 2 ++
> src/qemu/qemu_capabilities.h | 1 +
> src/qemu/qemu_command.c | 3 ++-
> src/qemu/qemu_domain.c | 17 +++++++++++++----
> 4 files changed, 18 insertions(+), 5 deletions(-)
>
> --
> 1.9.5.msysgit.0
>
> --
> libvir-list mailing list
> libvir-list(a)redhat.com
> https://www.redhat.com/mailman/listinfo/libvir-list
[libvirt] [PATCH 0/2] nodeinfo: Various cleanups
by Andrea Bolognani
The first patch builds on the sysfs_prefix work by John,
while the second one contains unrelated formatting
changes that increase internal consistency.
Neither introduces behavioral changes.
Andrea Bolognani (2):
nodeinfo: Make sysfs_prefix usage more consistent
nodeinfo: Formatting changes
src/nodeinfo.c | 73 ++++++++++++++++++++++++++--------------------------
src/nodeinfopriv.h | 4 +--
tests/nodeinfotest.c | 14 +++++-----
3 files changed, 46 insertions(+), 45 deletions(-)
--
2.4.3
[libvirt] [PATCH] virsh: Teach cmdFreepages to work with lxc driver
by Michal Privoznik
Some drivers don't expose available huge page sizes in the
capabilities XML. For instance, the LXC driver is one of those.
This has the downside that when virsh tries to get
aggregated info on free pages across all NUMA nodes, it fails.
The problem is that the virNodeGetFreePages() API expects
the caller to pass an array of the page sizes it is interested in.
In virsh, this array is filled from the capabilities, from the
'/capabilities/host/cpu/pages' XPath. As said, in LXC
there's no such XPath and therefore virsh currently fails.
But hey, we can fall back: the page sizes are also exposed under
'/capabilities/host/topology/cells/cell/pages'. The page
sizes can be collected from there, and voilà, the command
works again. But now we must make sure that there are no
duplicates in the array passed to the public API. Otherwise
we won't get as beautiful an output as we do now.
Signed-off-by: Michal Privoznik <mprivozn(a)redhat.com>
---
tools/virsh-host.c | 37 ++++++++++++++++++++++++++++++++++---
1 file changed, 34 insertions(+), 3 deletions(-)
diff --git a/tools/virsh-host.c b/tools/virsh-host.c
index 66f7fd9..28f6da1 100644
--- a/tools/virsh-host.c
+++ b/tools/virsh-host.c
@@ -285,6 +285,15 @@ static const vshCmdOptDef opts_freepages[] = {
{.name = NULL}
};
+static int
+vshPageSizeSorter(const void *a, const void *b)
+{
+ unsigned int pa = *(unsigned int *)a;
+ unsigned int pb = *(unsigned int *)b;
+
+ return pa - pb;
+}
+
static bool
cmdFreepages(vshControl *ctl, const vshCmd *cmd)
{
@@ -326,9 +335,15 @@ cmdFreepages(vshControl *ctl, const vshCmd *cmd)
nodes_cnt = virXPathNodeSet("/capabilities/host/cpu/pages", ctxt, &nodes);
if (nodes_cnt <= 0) {
- vshError(ctl, "%s", _("could not get information about "
- "supported page sizes"));
- goto cleanup;
+ /* Some drivers don't export page sizes under the
+ * XPath above. Do another trick to get them */
+ nodes_cnt = virXPathNodeSet("/capabilities/host/topology/cells/cell/pages",
+ ctxt, &nodes);
+ if (nodes_cnt <= 0) {
+ vshError(ctl, "%s", _("could not get information about "
+ "supported page sizes"));
+ goto cleanup;
+ }
}
pagesize = vshCalloc(ctl, nodes_cnt, sizeof(*pagesize));
@@ -345,6 +360,22 @@ cmdFreepages(vshControl *ctl, const vshCmd *cmd)
VIR_FREE(val);
}
+ /* Here, if we've done the trick few lines above,
+ * @pagesize array will contain duplicates. We should
+ * remove them otherwise not very nice output will be
+ * produced. */
+ qsort(pagesize, nodes_cnt, sizeof(*pagesize), vshPageSizeSorter);
+
+ for (i = 0; i < nodes_cnt - 1; ) {
+ if (pagesize[i] == pagesize[i + 1]) {
+ memmove(pagesize + i, pagesize + i + 1,
+ (nodes_cnt - i - 1) * sizeof(*pagesize));
+ nodes_cnt--;
+ } else {
+ i++;
+ }
+ }
+
npages = nodes_cnt;
VIR_FREE(nodes);
} else {
--
2.3.6