[libvirt] fork of php-libvirt
by Lyre
I've added many APIs to php-libvirt, including network, node device,
interface, and snapshot support.
Unfortunately, I wasn't able to contact the original author, Radek; I really
hope he sees this.
So I forked it; the repository is at: https://github.com/4179e1/php-libvirt
But there's still some work left undone: as a new PHP extension writer, I
don't know how the documentation is generated. Any clues are appreciated.
BTW, I've packaged php-libvirt as an RPM via the openSUSE Build Service.
Currently it's ready on openSUSE 11.3 & SLES 11 SP1; Fedora 14 should work
too (it works on my VM), but the OBS environment is not ready yet; and
RHEL 6 still needs a workaround. Check here:
http://download.opensuse.org/repositories/home:/midmay/
[libvirt] [PATCH] qemu: Fix a possible deadlock in p2p migration
by Jiri Denemark
Two more calls to the remote libvirtd have to be surrounded by
qemuDomainObjEnterRemoteWithDriver() and
qemuDomainObjExitRemoteWithDriver() to prevent a possible deadlock
between the two communicating libvirt daemons.
See commit f0c8e1cb3774d6f09e2681ca1988bf235a343007 for further details.
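For readers unfamiliar with the pattern: during peer-to-peer migration both
daemons issue RPCs against each other, so blocking on the remote side while
still holding the local driver and domain locks can leave each daemon waiting
on the other. The fix is to drop the locks around any call that talks to the
remote daemon; a minimal sketch of the pattern used below (helper names taken
from the diff itself; the real helpers also manage job state):

    qemuDomainObjEnterRemoteWithDriver(driver, vm);  /* unlock vm + driver */
    dconn = virConnectOpen(uri);                     /* may block on remote */
    qemuDomainObjExitRemoteWithDriver(driver, vm);   /* re-lock driver + vm */

Because the domain is unlocked while we talk to the remote daemon, it must be
re-checked with virDomainObjIsActive() afterwards, which the patch also does.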
---
src/qemu/qemu_driver.c | 18 ++++++++++++++++--
1 files changed, 16 insertions(+), 2 deletions(-)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 54e9dcb..bc506c2 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -11622,24 +11622,38 @@ static int doPeer2PeerMigrate(virDomainPtr dom,
int ret = -1;
virConnectPtr dconn = NULL;
char *dom_xml;
+ bool p2p;
/* the order of operations is important here; we make sure the
* destination side is completely setup before we touch the source
*/
+ qemuDomainObjEnterRemoteWithDriver(driver, vm);
dconn = virConnectOpen(uri);
+ qemuDomainObjExitRemoteWithDriver(driver, vm);
if (dconn == NULL) {
qemuReportError(VIR_ERR_OPERATION_FAILED,
_("Failed to connect to remote libvirt URI %s"), uri);
return -1;
}
- if (!VIR_DRV_SUPPORTS_FEATURE(dconn->driver, dconn,
- VIR_DRV_FEATURE_MIGRATION_P2P)) {
+
+ qemuDomainObjEnterRemoteWithDriver(driver, vm);
+ p2p = VIR_DRV_SUPPORTS_FEATURE(dconn->driver, dconn,
+ VIR_DRV_FEATURE_MIGRATION_P2P);
+ qemuDomainObjExitRemoteWithDriver(driver, vm);
+ if (!p2p) {
qemuReportError(VIR_ERR_OPERATION_FAILED, "%s",
_("Destination libvirt does not support peer-to-peer migration protocol"));
goto cleanup;
}
+ /* domain may have been stopped while we were talking to remote daemon */
+ if (!virDomainObjIsActive(vm)) {
+ qemuReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+ _("guest unexpectedly quit"));
+ goto cleanup;
+ }
+
dom_xml = qemudVMDumpXML(driver, vm,
VIR_DOMAIN_XML_SECURE |
VIR_DOMAIN_XML_UPDATE_CPU);
--
1.7.3.2
[libvirt] [PATCH] handle DNS over IPv6
by Paweł Krześniak
Firstly: add ip6tables rules to allow DNS over IPv6 on the network.
Secondly: start dnsmasq with the --interface option instead of
--listen-address.
Dnsmasq currently uses the "--listen-address IPv4_address" option, which
restricts the DNS service to a single IPv4 address.
We could append --listen-address for every IPv[46] address defined on the
interface, but it's cleaner to use "--interface brname".
There were some problems with the --interface option in the past. Dnsmasq
version 2.46 and earlier exited with an error when trying to bind() to IPv6
addresses on freshly brought-up interfaces: DAD (Duplicate Address
Detection) takes some time to finish, so bind() returned EADDRNOTAVAIL,
which caused dnsmasq to exit.
Dnsmasq version 2.47 (released on 05-Feb-2009) fixed this issue by retrying
bind() after getting an EADDRNOTAVAIL error (as described in
http://www.thekelleys.org.uk/dnsmasq/CHANGELOG;
the loop itself is defined in dnsmasq-2.47/src/network.c:404)
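For illustration, the 2.47 fix amounts to a bind() retry loop of roughly the
following shape (a simplified sketch, not the actual dnsmasq code; the retry
count and delay are assumptions):

    #include <errno.h>
    #include <unistd.h>
    #include <sys/socket.h>

    static int
    bind_retry_dad(int fd, const struct sockaddr *addr, socklen_t addrlen)
    {
        int retries = 20;

        /* While DAD is still running on a freshly configured IPv6 address,
         * bind() fails with EADDRNOTAVAIL; retry until DAD finishes. */
        while (bind(fd, addr, addrlen) < 0) {
            if (errno != EADDRNOTAVAIL || --retries <= 0)
                return -1;
            sleep(1);
        }
        return 0;
    }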
* Using the --interface option makes network startup take longer (the
fast run below is with --listen-address, the slow one with --interface):
$ time virsh -c qemu:///system net-start isolated1
Network isolated1 started
real 0m0.112s
user 0m0.013s
sys 0m0.009s
$ time virsh -c qemu:///system net-start isolated1
Network isolated1 started
real 0m2.101s
user 0m0.011s
sys 0m0.011s
* Dnsmasq now finishes starting up only after DAD completes, which
guarantees that radvd will no longer produce warnings like:
Dec 28 19:42:11 nemo radvd[14652]: sendmsg: Invalid argument
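With this patch the generated dnsmasq command line ends up looking roughly
like this (the bridge name virbr0 and the elided options are illustrative):

    dnsmasq ... --conf-file= --interface virbr0 --except-interface lo ...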
---
src/network/bridge_driver.c | 32 +++++++++++++++++++++++++-------
1 files changed, 25 insertions(+), 7 deletions(-)
diff --git a/src/network/bridge_driver.c b/src/network/bridge_driver.c
index 7d43ef5..a689c9f 100644
--- a/src/network/bridge_driver.c
+++ b/src/network/bridge_driver.c
@@ -469,16 +469,13 @@ networkBuildDnsmasqArgv(virNetworkObjPtr network,
virCommandAddArgList(cmd, "--conf-file=", "", NULL);
/*
- * XXX does not actually work, due to some kind of
- * race condition setting up ipv6 addresses on the
- * interface. A sleep(10) makes it work, but that's
- * clearly not practical
+ * It's safe to use --interface option for dnsmasq 2.47 and later.
+ * With earlier versions we had to use --listen-address option.
*
- * virCommandAddArg(cmd, "--interface");
- * virCommandAddArg(cmd, ipdef->bridge);
+ * virCommandAddArgList(cmd, "--listen-address", bridgeaddr);
*/
virCommandAddArgList(cmd,
- "--listen-address", bridgeaddr,
+ "--interface", network->def->bridge,
"--except-interface", "lo",
NULL);
@@ -1157,9 +1154,30 @@ networkAddGeneralIptablesRules(struct network_driver *driver,
goto err9;
}
+ /* allow DNS over IPv6 requests through to dnsmasq */
+ if (iptablesAddTcpInput(driver->iptables, AF_INET6,
+ network->def->bridge, 53) < 0) {
+ networkReportError(VIR_ERR_SYSTEM_ERROR,
+                           _("failed to add ip6tables rule to allow DNS requests from '%s'"),
+ network->def->bridge);
+ goto err10;
+ }
+
+ if (iptablesAddUdpInput(driver->iptables, AF_INET6,
+ network->def->bridge, 53) < 0) {
+ networkReportError(VIR_ERR_SYSTEM_ERROR,
+                           _("failed to add ip6tables rule to allow DNS requests from '%s'"),
+ network->def->bridge);
+ goto err11;
+ }
+
return 0;
/* unwind in reverse order from the point of failure */
+err11:
+    iptablesRemoveTcpInput(driver->iptables, AF_INET6, network->def->bridge, 53);
+err10:
+ networkRemoveGeneralIp6tablesRules(driver, network);
err9:
    iptablesRemoveForwardAllowCross(driver->iptables, AF_INET, network->def->bridge);
err8:
[libvirt] [PATCH 1/2] Add a parameter to virThreadPoolSendJob() to let the caller decide whether to wait for the job to complete
by Hu Tao
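In short, a caller that needs synchronous behaviour can now opt into it; a
hypothetical example (not part of this patch):

    /* Fire-and-forget, as the existing watchdog handler does: */
    ignore_value(virThreadPoolSendJob(pool, jobData, false));

    /* Block until a worker has finished processing the job: */
    if (virThreadPoolSendJob(pool, jobData, true) < 0)
        return -1;  /* cond init failure, OOM, or pool shutting down */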
---
src/qemu/qemu_driver.c | 2 +-
src/util/threadpool.c | 19 ++++++++++++++++++-
src/util/threadpool.h | 3 ++-
3 files changed, 21 insertions(+), 3 deletions(-)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 924446f..aa2e805 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -665,7 +665,7 @@ qemuHandleDomainWatchdog(qemuMonitorPtr mon ATTRIBUTE_UNUSED,
if (VIR_ALLOC(wdEvent) == 0) {
wdEvent->action = VIR_DOMAIN_WATCHDOG_ACTION_DUMP;
wdEvent->vm = vm;
- ignore_value(virThreadPoolSendJob(driver->workerPool, wdEvent));
+ ignore_value(virThreadPoolSendJob(driver->workerPool, wdEvent, false));
} else
virReportOOMError();
}
diff --git a/src/util/threadpool.c b/src/util/threadpool.c
index 1213862..07f2fcf 100644
--- a/src/util/threadpool.c
+++ b/src/util/threadpool.c
@@ -42,6 +42,7 @@ struct _virThreadPoolJob {
virThreadPoolJobPtr next;
void *data;
+ virCondPtr complete;
};
typedef struct _virThreadPoolJobList virThreadPoolJobList;
@@ -73,6 +74,7 @@ struct _virThreadPool {
static void virThreadPoolWorker(void *opaque)
{
virThreadPoolPtr pool = opaque;
+ virCondPtr complete;
virMutexLock(&pool->mutex);
@@ -97,9 +99,12 @@ static void virThreadPoolWorker(void *opaque)
pool->jobList.tail = &pool->jobList.head;
virMutexUnlock(&pool->mutex);
+ complete = job->complete;
(pool->jobFunc)(job->data, pool->jobOpaque);
VIR_FREE(job);
virMutexLock(&pool->mutex);
+ if (complete)
+ virCondSignal(complete);
}
out:
@@ -188,9 +193,14 @@ void virThreadPoolFree(virThreadPoolPtr pool)
}
int virThreadPoolSendJob(virThreadPoolPtr pool,
- void *jobData)
+ void *jobData,
+ bool waitForCompletion)
{
virThreadPoolJobPtr job;
+ virCond complete;
+
+ if (waitForCompletion && virCondInit(&complete) < 0)
+ return -1;
virMutexLock(&pool->mutex);
if (pool->quit)
@@ -219,10 +229,17 @@ int virThreadPoolSendJob(virThreadPoolPtr pool,
job->data = jobData;
job->next = NULL;
+ job->complete = NULL;
*pool->jobList.tail = job;
pool->jobList.tail = &(*pool->jobList.tail)->next;
virCondSignal(&pool->cond);
+
+ if (waitForCompletion) {
+ job->complete = &complete;
+ virCondWait(&complete, &pool->mutex);
+ }
+
virMutexUnlock(&pool->mutex);
return 0;
diff --git a/src/util/threadpool.h b/src/util/threadpool.h
index 5714b0b..6f763dc 100644
--- a/src/util/threadpool.h
+++ b/src/util/threadpool.h
@@ -41,7 +41,8 @@ virThreadPoolPtr virThreadPoolNew(size_t minWorkers,
void virThreadPoolFree(virThreadPoolPtr pool);
int virThreadPoolSendJob(virThreadPoolPtr pool,
- void *jobdata) ATTRIBUTE_NONNULL(1)
+ void *jobdata,
+ bool waitForCompletion) ATTRIBUTE_NONNULL(1)
ATTRIBUTE_NONNULL(2)
ATTRIBUTE_RETURN_CHECK;
--
1.7.3.1
--
Thanks,
Hu Tao
[libvirt] [PATCH] python: Use PyCapsule API if available
by Cole Robinson
On Fedora 14, virt-manager spews a bunch of warnings to the console:
/usr/lib64/python2.7/site-packages/libvirt.py:1781: PendingDeprecationWarning: The CObject type is marked Pending Deprecation in Python 2.7. Please use capsule objects instead.
Have libvirt use the capsule API if available. I've verified this compiles
fine on older Python (2.6 in RHEL 6, which doesn't have capsules), and
virt-manager seems to function fine.
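The diff only touches the wrapping direction; the unwrap side would
presumably need the symmetric treatment (a sketch under the same
Py_CAPSULE_H guard, not part of this patch):

    static void *
    libvirt_getPyObjectPointer(PyObject *obj, const char *name)
    {
    #ifdef Py_CAPSULE_H
        return PyCapsule_GetPointer(obj, name);
    #else
        (void) name; /* PyCObject lookup does not take the description */
        return PyCObject_AsVoidPtr(obj);
    #endif
    }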
---
python/typewrappers.c | 89 +++++++++++++++++++++++++++---------------------
1 files changed, 50 insertions(+), 39 deletions(-)
diff --git a/python/typewrappers.c b/python/typewrappers.c
index 733aa20..e39d3cd 100644
--- a/python/typewrappers.c
+++ b/python/typewrappers.c
@@ -16,6 +16,26 @@
#include "typewrappers.h"
+#ifndef Py_CAPSULE_H
+typedef void(*PyCapsule_Destructor)(void *, void *);
+#endif
+
+static PyObject *
+libvirt_buildPyObject(void *cobj,
+ const char *name,
+ PyCapsule_Destructor destr)
+{
+ PyObject *ret;
+
+#ifdef Py_CAPSULE_H
+ ret = PyCapsule_New(cobj, name, destr);
+#else
+ ret = PyCObject_FromVoidPtrAndDesc(cobj, (void *) name, destr);
+#endif /* Py_CAPSULE_H */
+
+ return ret;
+}
+
PyObject *
libvirt_intWrap(int val)
{
@@ -105,9 +125,8 @@ libvirt_virDomainPtrWrap(virDomainPtr node)
Py_INCREF(Py_None);
return (Py_None);
}
- ret =
- PyCObject_FromVoidPtrAndDesc((void *) node, (char *) "virDomainPtr",
- NULL);
+
+ ret = libvirt_buildPyObject(node, "virDomainPtr", NULL);
return (ret);
}
@@ -120,9 +139,8 @@ libvirt_virNetworkPtrWrap(virNetworkPtr node)
Py_INCREF(Py_None);
return (Py_None);
}
- ret =
- PyCObject_FromVoidPtrAndDesc((void *) node, (char *) "virNetworkPtr",
- NULL);
+
+ ret = libvirt_buildPyObject(node, "virNetworkPtr", NULL);
return (ret);
}
@@ -135,9 +153,8 @@ libvirt_virInterfacePtrWrap(virInterfacePtr node)
Py_INCREF(Py_None);
return (Py_None);
}
- ret =
- PyCObject_FromVoidPtrAndDesc((void *) node, (char *) "virInterfacePtr",
- NULL);
+
+ ret = libvirt_buildPyObject(node, "virInterfacePtr", NULL);
return (ret);
}
@@ -150,9 +167,8 @@ libvirt_virStoragePoolPtrWrap(virStoragePoolPtr node)
Py_INCREF(Py_None);
return (Py_None);
}
- ret =
- PyCObject_FromVoidPtrAndDesc((void *) node, (char *) "virStoragePoolPtr",
- NULL);
+
+ ret = libvirt_buildPyObject(node, "virStoragePoolPtr", NULL);
return (ret);
}
@@ -165,9 +181,8 @@ libvirt_virStorageVolPtrWrap(virStorageVolPtr node)
Py_INCREF(Py_None);
return (Py_None);
}
- ret =
- PyCObject_FromVoidPtrAndDesc((void *) node, (char *) "virStorageVolPtr",
- NULL);
+
+ ret = libvirt_buildPyObject(node, "virStorageVolPtr", NULL);
return (ret);
}
@@ -180,9 +195,8 @@ libvirt_virConnectPtrWrap(virConnectPtr node)
Py_INCREF(Py_None);
return (Py_None);
}
- ret =
- PyCObject_FromVoidPtrAndDesc((void *) node, (char *) "virConnectPtr",
- NULL);
+
+ ret = libvirt_buildPyObject(node, "virConnectPtr", NULL);
return (ret);
}
@@ -195,9 +209,8 @@ libvirt_virNodeDevicePtrWrap(virNodeDevicePtr node)
Py_INCREF(Py_None);
return (Py_None);
}
- ret =
- PyCObject_FromVoidPtrAndDesc((void *) node, (char *) "virNodeDevicePtr",
- NULL);
+
+ ret = libvirt_buildPyObject(node, "virNodeDevicePtr", NULL);
return (ret);
}
@@ -210,7 +223,8 @@ libvirt_virSecretPtrWrap(virSecretPtr node)
Py_INCREF(Py_None);
return Py_None;
}
- ret = PyCObject_FromVoidPtrAndDesc(node, (char *) "virSecretPtr", NULL);
+
+ ret = libvirt_buildPyObject(node, "virSecretPtr", NULL);
return (ret);
}
@@ -223,7 +237,8 @@ libvirt_virNWFilterPtrWrap(virNWFilterPtr node)
Py_INCREF(Py_None);
return Py_None;
}
- ret = PyCObject_FromVoidPtrAndDesc(node, (char *) "virNWFilterPtr", NULL);
+
+ ret = libvirt_buildPyObject(node, "virNWFilterPtr", NULL);
return (ret);
}
@@ -236,7 +251,8 @@ libvirt_virStreamPtrWrap(virStreamPtr node)
Py_INCREF(Py_None);
return Py_None;
}
- ret = PyCObject_FromVoidPtrAndDesc(node, (char *) "virStreamPtr", NULL);
+
+ ret = libvirt_buildPyObject(node, "virStreamPtr", NULL);
return (ret);
}
@@ -249,9 +265,8 @@ libvirt_virDomainSnapshotPtrWrap(virDomainSnapshotPtr node)
Py_INCREF(Py_None);
return (Py_None);
}
- ret =
- PyCObject_FromVoidPtrAndDesc((void *) node, (char *) "virDomainSnapshotPtr",
- NULL);
+
+ ret = libvirt_buildPyObject(node, "virDomainSnapshotPtr", NULL);
return (ret);
}
@@ -265,9 +280,8 @@ libvirt_virEventHandleCallbackWrap(virEventHandleCallback node)
printf("%s: WARNING - Wrapping None\n", __func__);
return (Py_None);
}
- ret =
- PyCObject_FromVoidPtrAndDesc((void *) node, (char *) "virEventHandleCallback",
- NULL);
+
+ ret = libvirt_buildPyObject(node, "virEventHandleCallback", NULL);
return (ret);
}
@@ -281,9 +295,8 @@ libvirt_virEventTimeoutCallbackWrap(virEventTimeoutCallback node)
Py_INCREF(Py_None);
return (Py_None);
}
- ret =
- PyCObject_FromVoidPtrAndDesc((void *) node, (char *) "virEventTimeoutCallback",
- NULL);
+
+ ret = libvirt_buildPyObject(node, "virEventTimeoutCallback", NULL);
return (ret);
}
@@ -296,9 +309,8 @@ libvirt_virFreeCallbackWrap(virFreeCallback node)
Py_INCREF(Py_None);
return (Py_None);
}
- ret =
- PyCObject_FromVoidPtrAndDesc((void *) node, (char *) "virFreeCallback",
- NULL);
+
+ ret = libvirt_buildPyObject(node, "virFreeCallback", NULL);
return (ret);
}
@@ -311,8 +323,7 @@ libvirt_virVoidPtrWrap(void* node)
Py_INCREF(Py_None);
return (Py_None);
}
- ret =
- PyCObject_FromVoidPtrAndDesc((void *) node, (char *) "void*",
- NULL);
+
+ ret = libvirt_buildPyObject(node, "void*", NULL);
return (ret);
}
--
1.7.3.2
[libvirt] Implementing VNC per VM access control lists
by Neil Wilson
Hi,
At the moment, SASL VNC authentication in libvirt allows any of the user
IDs to access any of the VNC consoles on a particular libvirt host.
There is a section in the qemu_command code marked "TODO: Support ACLs
later", and we would really like the ability to have per-VM user
authorization for the VNC console from within libvirt.
Essentially, the people accessing the VNC consoles are not administrators
and have no access to the host server, so these ACLs need to be based on
a list of user IDs entirely separate from any access mechanism for
libvirtd itself.
Given that the VNC restrictions are enforced within qemu via the monitor,
I'm presuming the authorization list is going to have to be passed in via
XML and be capable of being updated throughout the life of a VM session.
Unless there's another way of doing it...
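For the sake of discussion, a straw-man XML extension might look something
like this (entirely hypothetical; no such element exists in the current
schema):

    <graphics type='vnc' port='-1' autoport='yes'>
      <acl>
        <user name='alice'/>
        <user name='bob'/>
      </acl>
    </graphics>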
What's the feeling about how this feature should be provided within
libvirt?
If there is somebody out there who has a bit of time at the moment and
fancies having a go at implementing this - and, of course, there is
agreement on a specification here - then we'd look at sponsoring them to
add the feature to libvirt. Please put your hand up!
Regards,
Neil Wilson
[libvirt] [PATCH] esx: Move occurrence check into esxVI_LookupObjectContentByType
by Matthias Bolte
This simplifies the callers of esxVI_LookupObjectContentByType.
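Concretely, callers now pass the expected occurrence and let the lookup
function report the error itself, following this pattern from the diff:

    if (esxVI_LookupObjectContentByType(ctx, ctx->hostSystem->_reference,
                                        "HostSystem", propertyNameList,
                                        &hostSystem,
                                        esxVI_Occurrence_RequiredItem) < 0) {
        goto cleanup;
    }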
---
As we're currently in feature freeze, this patch is meant to be applied
after the next release.
Matthias
src/esx/esx_driver.c | 19 ++------
src/esx/esx_vi.c | 128 ++++++++++++++++++++++++-------------------------
src/esx/esx_vi.h | 3 +-
3 files changed, 69 insertions(+), 81 deletions(-)
diff --git a/src/esx/esx_driver.c b/src/esx/esx_driver.c
index 6ada663..b582082 100644
--- a/src/esx/esx_driver.c
+++ b/src/esx/esx_driver.c
@@ -3231,13 +3231,7 @@ esxDomainGetAutostart(virDomainPtr domain, int *autostart)
(priv->primary,
priv->primary->hostSystem->configManager->autoStartManager,
"HostAutoStartManager", propertyNameList,
- &hostAutoStartManager) < 0) {
- goto cleanup;
- }
-
- if (hostAutoStartManager == NULL) {
- ESX_ERROR(VIR_ERR_INTERNAL_ERROR, "%s",
- _("Could not retrieve the HostAutoStartManager object"));
+ &hostAutoStartManager, esxVI_Occurrence_RequiredItem) < 0) {
goto cleanup;
}
@@ -3275,13 +3269,7 @@ esxDomainGetAutostart(virDomainPtr domain, int *autostart)
(priv->primary,
priv->primary->hostSystem->configManager->autoStartManager,
"HostAutoStartManager", propertyNameList,
- &hostAutoStartManager) < 0) {
- goto cleanup;
- }
-
- if (hostAutoStartManager == NULL) {
- ESX_ERROR(VIR_ERR_INTERNAL_ERROR, "%s",
- _("Could not retrieve the HostAutoStartManager object"));
+ &hostAutoStartManager, esxVI_Occurrence_RequiredItem) < 0) {
goto cleanup;
}
@@ -3912,7 +3900,8 @@ esxNodeGetFreeMemory(virConnectPtr conn)
esxVI_LookupObjectContentByType(priv->primary,
priv->primary->computeResource->resourcePool,
"ResourcePool", propertyNameList,
- &resourcePool) < 0) {
+ &resourcePool,
+ esxVI_Occurrence_RequiredItem) < 0) {
goto cleanup;
}
diff --git a/src/esx/esx_vi.c b/src/esx/esx_vi.c
index 9eca9f4..7f4447c 100644
--- a/src/esx/esx_vi.c
+++ b/src/esx/esx_vi.c
@@ -499,13 +499,8 @@ esxVI_Context_LookupObjectsByPath(esxVI_Context *ctx,
"hostFolder\0") < 0 ||
esxVI_LookupObjectContentByType(ctx, ctx->service->rootFolder,
"Datacenter", propertyNameList,
- &datacenterList) < 0) {
- goto cleanup;
- }
-
- if (datacenterList == NULL) {
- ESX_VI_ERROR(VIR_ERR_INTERNAL_ERROR, "%s",
- _("Could not retrieve datacenter list"));
+ &datacenterList,
+ esxVI_Occurrence_RequiredList) < 0) {
goto cleanup;
}
@@ -548,13 +543,8 @@ esxVI_Context_LookupObjectsByPath(esxVI_Context *ctx,
"resourcePool\0") < 0 ||
esxVI_LookupObjectContentByType(ctx, ctx->datacenter->hostFolder,
"ComputeResource", propertyNameList,
- &computeResourceList) < 0) {
- goto cleanup;
- }
-
- if (computeResourceList == NULL) {
- ESX_VI_ERROR(VIR_ERR_INTERNAL_ERROR, "%s",
- _("Could not retrieve compute resource list"));
+ &computeResourceList,
+ esxVI_Occurrence_RequiredList) < 0) {
goto cleanup;
}
@@ -610,13 +600,8 @@ esxVI_Context_LookupObjectsByPath(esxVI_Context *ctx,
"configManager\0") < 0 ||
esxVI_LookupObjectContentByType(ctx, ctx->computeResource->_reference,
"HostSystem", propertyNameList,
- &hostSystemList) < 0) {
- goto cleanup;
- }
-
- if (hostSystemList == NULL) {
- ESX_VI_ERROR(VIR_ERR_INTERNAL_ERROR, "%s",
- _("Could not retrieve host system list"));
+ &hostSystemList,
+ esxVI_Occurrence_RequiredList) < 0) {
goto cleanup;
}
@@ -687,17 +672,9 @@ esxVI_Context_LookupObjectsByHostSystemIp(esxVI_Context *ctx,
&managedObjectReference) < 0 ||
esxVI_LookupObjectContentByType(ctx, managedObjectReference,
"HostSystem", propertyNameList,
- &hostSystem) < 0) {
- goto cleanup;
- }
-
- if (hostSystem == NULL) {
- ESX_VI_ERROR(VIR_ERR_INTERNAL_ERROR, "%s",
- _("Could not retrieve host system"));
- goto cleanup;
- }
-
- if (esxVI_HostSystem_CastFromObjectContent(hostSystem,
+ &hostSystem,
+ esxVI_Occurrence_RequiredItem) < 0 ||
+ esxVI_HostSystem_CastFromObjectContent(hostSystem,
&ctx->hostSystem) < 0) {
goto cleanup;
}
@@ -711,17 +688,9 @@ esxVI_Context_LookupObjectsByHostSystemIp(esxVI_Context *ctx,
"resourcePool\0") < 0 ||
esxVI_LookupObjectContentByType(ctx, hostSystem->obj,
"ComputeResource", propertyNameList,
- &computeResource) < 0) {
- goto cleanup;
- }
-
- if (computeResource == NULL) {
- ESX_VI_ERROR(VIR_ERR_INTERNAL_ERROR, "%s",
- _("Could not retrieve compute resource of host system"));
- goto cleanup;
- }
-
- if (esxVI_ComputeResource_CastFromObjectContent(computeResource,
+ &computeResource,
+ esxVI_Occurrence_RequiredItem) < 0 ||
+ esxVI_ComputeResource_CastFromObjectContent(computeResource,
&ctx->computeResource) < 0) {
goto cleanup;
}
@@ -735,17 +704,9 @@ esxVI_Context_LookupObjectsByHostSystemIp(esxVI_Context *ctx,
"hostFolder\0") < 0 ||
esxVI_LookupObjectContentByType(ctx, computeResource->obj,
"Datacenter", propertyNameList,
- &datacenter) < 0) {
- goto cleanup;
- }
-
- if (datacenter == NULL) {
- ESX_VI_ERROR(VIR_ERR_INTERNAL_ERROR, "%s",
- _("Could not retrieve datacenter of compute resource"));
- goto cleanup;
- }
-
- if (esxVI_Datacenter_CastFromObjectContent(datacenter,
+ &datacenter,
+ esxVI_Occurrence_RequiredItem) < 0 ||
+ esxVI_Datacenter_CastFromObjectContent(datacenter,
&ctx->datacenter) < 0) {
goto cleanup;
}
@@ -1586,7 +1547,8 @@ esxVI_EnsureSession(esxVI_Context *ctx)
"currentSession") < 0 ||
esxVI_LookupObjectContentByType(ctx, ctx->service->sessionManager,
"SessionManager", propertyNameList,
- &sessionManager) < 0) {
+ &sessionManager,
+ esxVI_Occurrence_RequiredItem) < 0) {
goto cleanup;
}
@@ -1636,7 +1598,8 @@ esxVI_LookupObjectContentByType(esxVI_Context *ctx,
esxVI_ManagedObjectReference *root,
const char *type,
esxVI_String *propertyNameList,
- esxVI_ObjectContent **objectContentList)
+ esxVI_ObjectContent **objectContentList,
+ esxVI_Occurrence occurrence)
{
int result = -1;
esxVI_ObjectSpec *objectSpec = NULL;
@@ -1710,12 +1673,41 @@ esxVI_LookupObjectContentByType(esxVI_Context *ctx,
esxVI_PropertySpec_AppendToList(&propertyFilterSpec->propSet,
propertySpec) < 0 ||
esxVI_ObjectSpec_AppendToList(&propertyFilterSpec->objectSet,
- objectSpec) < 0) {
+ objectSpec) < 0 ||
+ esxVI_RetrieveProperties(ctx, propertyFilterSpec,
+ objectContentList) < 0) {
goto cleanup;
}
- result = esxVI_RetrieveProperties(ctx, propertyFilterSpec,
- objectContentList);
+    if (*objectContentList == NULL) {
+ switch (occurrence) {
+ case esxVI_Occurrence_OptionalItem:
+ case esxVI_Occurrence_OptionalList:
+ result = 0;
+ break;
+
+ case esxVI_Occurrence_RequiredItem:
+ ESX_VI_ERROR(VIR_ERR_INTERNAL_ERROR,
+ _("Could not lookup '%s' from '%s'"),
+ type, root->type);
+ break;
+
+ case esxVI_Occurrence_RequiredList:
+ ESX_VI_ERROR(VIR_ERR_INTERNAL_ERROR,
+ _("Could not lookup '%s' list from '%s'"),
+ type, root->type);
+ break;
+
+ default:
+ ESX_VI_ERROR(VIR_ERR_INTERNAL_ERROR, "%s",
+ _("Invalid occurrence value"));
+ break;
+ }
+
+ goto cleanup;
+ }
+
+ result = 0;
cleanup:
/*
@@ -2276,7 +2268,8 @@ esxVI_LookupHostSystemProperties(esxVI_Context *ctx,
{
return esxVI_LookupObjectContentByType(ctx, ctx->hostSystem->_reference,
"HostSystem", propertyNameList,
- hostSystem);
+ hostSystem,
+ esxVI_Occurrence_RequiredItem);
}
@@ -2290,7 +2283,8 @@ esxVI_LookupVirtualMachineList(esxVI_Context *ctx,
* for cluster support */
return esxVI_LookupObjectContentByType(ctx, ctx->hostSystem->_reference,
"VirtualMachine", propertyNameList,
- virtualMachineList);
+ virtualMachineList,
+ esxVI_Occurrence_OptionalList);
}
@@ -2332,7 +2326,8 @@ esxVI_LookupVirtualMachineByUuid(esxVI_Context *ctx, const unsigned char *uuid,
if (esxVI_LookupObjectContentByType(ctx, managedObjectReference,
"VirtualMachine", propertyNameList,
- virtualMachine) < 0) {
+ virtualMachine,
+ esxVI_Occurrence_RequiredItem) < 0) {
goto cleanup;
}
@@ -2475,7 +2470,8 @@ esxVI_LookupDatastoreList(esxVI_Context *ctx, esxVI_String *propertyNameList,
* support */
return esxVI_LookupObjectContentByType(ctx, ctx->hostSystem->_reference,
"Datastore", propertyNameList,
- datastoreList);
+ datastoreList,
+ esxVI_Occurrence_OptionalList);
}
@@ -2654,7 +2650,8 @@ esxVI_LookupDatastoreHostMount(esxVI_Context *ctx,
if (esxVI_String_AppendValueToList(&propertyNameList, "host") < 0 ||
esxVI_LookupObjectContentByType(ctx, datastore, "Datastore",
- propertyNameList, &objectContent) < 0) {
+ propertyNameList, &objectContent,
+ esxVI_Occurrence_RequiredItem) < 0) {
goto cleanup;
}
@@ -2719,7 +2716,8 @@ esxVI_LookupTaskInfoByTask(esxVI_Context *ctx,
if (esxVI_String_AppendValueToList(&propertyNameList, "info") < 0 ||
esxVI_LookupObjectContentByType(ctx, task, "Task", propertyNameList,
- &objectContent) < 0) {
+ &objectContent,
+ esxVI_Occurrence_RequiredItem) < 0) {
goto cleanup;
}
diff --git a/src/esx/esx_vi.h b/src/esx/esx_vi.h
index 553967b..7457751 100644
--- a/src/esx/esx_vi.h
+++ b/src/esx/esx_vi.h
@@ -284,7 +284,8 @@ int esxVI_LookupObjectContentByType(esxVI_Context *ctx,
esxVI_ManagedObjectReference *root,
const char *type,
esxVI_String *propertyNameList,
- esxVI_ObjectContent **objectContentList);
+ esxVI_ObjectContent **objectContentList,
+ esxVI_Occurrence occurrence);
int esxVI_GetManagedEntityStatus
(esxVI_ObjectContent *objectContent, const char *propertyName,
--
1.7.0.4
[libvirt] Remove bashisms from libvirt-guests
by Laurent Léonard
Hi,
The attached patch removes bashisms from libvirt-guests.
TEXTDOMAINDIR is not specified, so the system default will be used
("/usr/share/locale" on Debian; I don't know if it's the same on Fedora).
The "xgettext -L Shell" output is the same with the gettext shell
functions as with the deprecated Bash-specific $"..." syntax.
Please generate the .po files somewhere in the source tree.
Thank you,
--
Laurent Léonard
[libvirt] [PATCH] vbox: Use correct VRAM size unit
by Matthias Bolte
VirtualBox measures VRAM in megabytes, while libvirt uses kilobytes.
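For example, the default of 8 MB VRAM is stored as 8 * 1024 = 8192 KiB on
the libvirt side and divided by 1024 again when handed back to VirtualBox
via SetVRAMSize(), as the hunks below show.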
---
src/vbox/vbox_tmpl.c | 6 +++---
1 files changed, 3 insertions(+), 3 deletions(-)
diff --git a/src/vbox/vbox_tmpl.c b/src/vbox/vbox_tmpl.c
index f45e8ed..5ac94c3 100644
--- a/src/vbox/vbox_tmpl.c
+++ b/src/vbox/vbox_tmpl.c
@@ -2225,7 +2225,7 @@ static char *vboxDomainDumpXML(virDomainPtr dom, int flags) {
if (VIR_ALLOC_N(def->videos, def->nvideos) >= 0) {
if (VIR_ALLOC(def->videos[0]) >= 0) {
/* the default is: vram is 8MB, One monitor, 3dAccel Off */
- PRUint32 VRAMSize = 8 * 1024;
+ PRUint32 VRAMSize = 8;
PRUint32 monitorCount = 1;
PRBool accelerate3DEnabled = PR_FALSE;
PRBool accelerate2DEnabled = PR_FALSE;
@@ -2238,7 +2238,7 @@ static char *vboxDomainDumpXML(virDomainPtr dom, int flags) {
#endif /* VBOX_API_VERSION >= 3001 */
def->videos[0]->type = VIR_DOMAIN_VIDEO_TYPE_VBOX;
- def->videos[0]->vram = VRAMSize;
+ def->videos[0]->vram = VRAMSize * 1024;
def->videos[0]->heads = monitorCount;
if (VIR_ALLOC(def->videos[0]->accel) >= 0) {
def->videos[0]->accel->support3d = accelerate3DEnabled;
@@ -4397,7 +4397,7 @@ vboxAttachVideo(virDomainDefPtr def, IMachine *machine)
{
if ((def->nvideos == 1) &&
(def->videos[0]->type == VIR_DOMAIN_VIDEO_TYPE_VBOX)) {
- machine->vtbl->SetVRAMSize(machine, def->videos[0]->vram);
+ machine->vtbl->SetVRAMSize(machine, def->videos[0]->vram / 1024);
machine->vtbl->SetMonitorCount(machine, def->videos[0]->heads);
if (def->videos[0]->accel) {
machine->vtbl->SetAccelerate3DEnabled(machine,
--
1.7.0.4