Re: [libvirt] [RHEL6 PATCH] Correct cpuid flags and "model" fields, V2
by john cooper
A question arose in today's kvm meeting concerning any
impact to libvirt from this change. I've discussed it
with Cole and it seems to be a non-issue. But just to
err on the side of caution, here's a summary:
The current cpu model definition of "qemu64" upstream
is problematic from kvm's perspective such that we need
to modify it slightly (BZ justifications listed below).
Doing so we've left the qemu64 definition as-is, but
added "cpu64-rhel6" and "cpu64-rhel5" models which
are now selected by default via "-M <machine>", for
"RHEL 6.0.0 PC" and "RHEL 5.x.y PC" respectively.
So the only issue would be libvirt invoking qemu with
neither a "-cpu" nor "-M" argument (which defaults to
qemu64) or explicitly requesting "-cpu qemu64".
From my discussion with Cole it appears the use cases
where this may happen fall outside of routine/expected
usage and would need to be explicitly requested by the
user. However I wanted to call this out here in the
event we're overlooking something.
Thanks,
-john
http://post-office.corp.redhat.com/archives/rhvirt-patches/2010-July/msg0...
http://post-office.corp.redhat.com/archives/rhvirt-patches/2010-July/msg0...
john cooper wrote:
> Addresses BZs:
>
> #618332 - CPUID_EXT_POPCNT enabled in qemu64 and qemu32 built-in models.
> #613892 - [SR-IOV]VF device can not start on 32bit Windows2008 SP2
>
> Summary:
>
> CPU feature flags for several processor definitions require correction
> to accurately reflect the corresponding shipped silicon. In particular
> overly conservative values for the cpuid "model" fields cause MSI support
> to be disabled in Windows guests (BZ #613892). Also, recent upstream changes
> to qemu64's built-in definition enable POPCNT (for the benefit of TCG)
> but risk breaking kvm migration (BZ #618332). The following patch
> addresses these issues collectively.
>
> Changes relative to previous version:
>
> - drop of the "qemu32-rhel5" model as it appears to be unneeded.
>
> - rename of "qemu64-rhel?" to "cpu64-rhel?" as we're diverging from
> upstream qemu64's definition but haven't migrated to kvm64 quite yet.
>
> - Correction of several fields for the (now) cpu64-rhel5 model.
>
> Further detail may be found in the associated mail thread common to
> both BZ cases starting about here:
>
> http://post-office.corp.redhat.com/archives/rhvirt-patches/2010-July/msg0...
>
> Brew Build:
>
> https://brewweb.devel.redhat.com/taskinfo?taskID=2643552
>
> Upstream status:
>
> BZ #618332 is a local, interim change and doesn't relate to upstream
> concerns. Longer-term this qemu64 derived model would be displaced
> by kvm64. The update of cpu definitions (BZ #613892) however needs
> to be pushed upstream upon validation in rhel6. We're currently the
> sole users of the new cpu models which fosters this inverted process.
--
john.cooper(a)redhat.com
[libvirt] [PATCH] qemu: Fix PCI address allocation
by Jiri Denemark
When attaching a PCI device which doesn't explicitly set its PCI
address, libvirt allocates the address automatically. The problem is
that when checking which PCI address is unused, we only check for those
with slot number higher than the highest slot number ever used.
Thus attaching/detaching such a device several times in a row (31 is the
theoretical limit, fewer than 30 tries are enough in practice) makes any
further device attachment fail. Furthermore, attaching a device with a
predefined PCI address of 0:0:31 immediately prevents attachment of any
PCI device without an explicit address.
This patch changes the logic so that we always check all PCI addresses
before we say there is no PCI address available.
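As a rough, self-contained sketch of the allocation logic after this change
(this is not the libvirt code itself; the qemuAllocNextSlot helper and the
fixed-size bool array are simplifications of the driver's hash-table
bookkeeping):

#include <stdbool.h>
#include <stdio.h>

#define QEMU_PCI_ADDRESS_LAST_SLOT 31

/* Simplified stand-in for the driver's hash table of used PCI
 * addresses: used[slot] is true if 0:0:<slot> is taken. */
static bool used[QEMU_PCI_ADDRESS_LAST_SLOT + 1];

/* Scan every slot instead of starting from a remembered "nextslot",
 * so slots freed by a detach (or skipped because a device was placed
 * at 0:0:31 explicitly) can be reused. Returns the slot or -1. */
static int qemuAllocNextSlot(void)
{
    int i;

    for (i = 0; i <= QEMU_PCI_ADDRESS_LAST_SLOT; i++) {
        if (used[i])
            continue;
        used[i] = true;
        return i;
    }
    return -1;
}

int main(void)
{
    int slot = qemuAllocNextSlot();

    printf("allocated PCI address 0:0:%d\n", slot);
    if (slot >= 0)
        used[slot] = false;   /* detaching the device frees the slot... */
    printf("reallocated 0:0:%d\n", qemuAllocNextSlot()); /* ...and it is found again */
    return 0;
}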
---
src/qemu/qemu_conf.c | 14 ++++----------
1 files changed, 4 insertions(+), 10 deletions(-)
diff --git a/src/qemu/qemu_conf.c b/src/qemu/qemu_conf.c
index 57bc02f..eaebcc1 100644
--- a/src/qemu/qemu_conf.c
+++ b/src/qemu/qemu_conf.c
@@ -2055,8 +2055,6 @@ qemuAssignDeviceAliases(virDomainDefPtr def, unsigned long long qemuCmdFlags)
#define QEMU_PCI_ADDRESS_LAST_SLOT 31
struct _qemuDomainPCIAddressSet {
virHashTablePtr used;
- int nextslot;
- /* XXX add domain, bus later when QEMU allows > 1 */
};
@@ -2148,9 +2146,6 @@ int qemuDomainPCIAddressReserveAddr(qemuDomainPCIAddressSetPtr addrs,
return -1;
}
- if (dev->addr.pci.slot > addrs->nextslot)
- addrs->nextslot = dev->addr.pci.slot + 1;
-
return 0;
}
@@ -2217,7 +2212,7 @@ int qemuDomainPCIAddressSetNextAddr(qemuDomainPCIAddressSetPtr addrs,
{
int i;
- for (i = addrs->nextslot ; i <= QEMU_PCI_ADDRESS_LAST_SLOT ; i++) {
+ for (i = 0 ; i <= QEMU_PCI_ADDRESS_LAST_SLOT ; i++) {
virDomainDeviceInfo maybe;
char *addr;
@@ -2228,13 +2223,14 @@ int qemuDomainPCIAddressSetNextAddr(qemuDomainPCIAddressSetPtr addrs,
addr = qemuPCIAddressAsString(&maybe);
- VIR_DEBUG("Allocating PCI addr %s", addr);
-
if (virHashLookup(addrs->used, addr)) {
+ VIR_DEBUG("PCI addr %s already in use", addr);
VIR_FREE(addr);
continue;
}
+ VIR_DEBUG("Allocating PCI addr %s", addr);
+
if (virHashAddEntry(addrs->used, addr, addr) < 0) {
VIR_FREE(addr);
return -1;
@@ -2245,8 +2241,6 @@ int qemuDomainPCIAddressSetNextAddr(qemuDomainPCIAddressSetPtr addrs,
dev->addr.pci.bus = 0;
dev->addr.pci.slot = i;
- addrs->nextslot = i + 1;
-
return 0;
}
--
1.7.2
[libvirt] [PATCH] OpenVZ: implement suspend/resume driver APIs
by Jean-Baptiste Rouault
---
src/openvz/openvz_driver.c | 84 ++++++++++++++++++++++++++++++++++++++++++-
1 files changed, 82 insertions(+), 2 deletions(-)
diff --git a/src/openvz/openvz_driver.c b/src/openvz/openvz_driver.c
index e5bbdd0..bdc0e92 100644
--- a/src/openvz/openvz_driver.c
+++ b/src/openvz/openvz_driver.c
@@ -503,6 +503,86 @@ static void openvzSetProgramSentinal(const char **prog, const char *key)
}
}
+static int openvzDomainSuspend(virDomainPtr dom) {
+ struct openvz_driver *driver = dom->conn->privateData;
+ virDomainObjPtr vm;
+ const char *prog[] = {VZCTL, "--quiet", "chkpnt", PROGRAM_SENTINAL, "--suspend", NULL};
+ int ret = -1;
+
+ openvzDriverLock(driver);
+ vm = virDomainFindByUUID(&driver->domains, dom->uuid);
+ openvzDriverUnlock(driver);
+
+ if (!vm) {
+ openvzError(VIR_ERR_INVALID_DOMAIN, "%s",
+ _("no domain with matching uuid"));
+ goto cleanup;
+ }
+
+ if (!virDomainObjIsActive(vm)) {
+ openvzError(VIR_ERR_OPERATION_INVALID, "%s",
+ _("Domain is not running"));
+ goto cleanup;
+ }
+
+ if (vm->state != VIR_DOMAIN_PAUSED) {
+ openvzSetProgramSentinal(prog, vm->def->name);
+ if (virRun(prog, NULL) < 0) {
+ openvzError(VIR_ERR_OPERATION_FAILED, "%s",
+ _("Suspend operation failed"));
+ goto cleanup;
+ }
+ vm->state = VIR_DOMAIN_PAUSED;
+ }
+
+ ret = 0;
+
+cleanup:
+ if (vm)
+ virDomainObjUnlock(vm);
+ return ret;
+}
+
+static int openvzDomainResume(virDomainPtr dom) {
+ struct openvz_driver *driver = dom->conn->privateData;
+ virDomainObjPtr vm;
+ const char *prog[] = {VZCTL, "--quiet", "chkpnt", PROGRAM_SENTINAL, "--resume", NULL};
+ int ret = -1;
+
+ openvzDriverLock(driver);
+ vm = virDomainFindByUUID(&driver->domains, dom->uuid);
+ openvzDriverUnlock(driver);
+
+ if (!vm) {
+ openvzError(VIR_ERR_INVALID_DOMAIN, "%s",
+ _("no domain with matching uuid"));
+ goto cleanup;
+ }
+
+ if (!virDomainObjIsActive(vm)) {
+ openvzError(VIR_ERR_OPERATION_INVALID, "%s",
+ _("Domain is not running"));
+ goto cleanup;
+ }
+
+ if (vm->state == VIR_DOMAIN_PAUSED) {
+ openvzSetProgramSentinal(prog, vm->def->name);
+ if (virRun(prog, NULL) < 0) {
+ openvzError(VIR_ERR_OPERATION_FAILED, "%s",
+ _("Resume operation failed"));
+ goto cleanup;
+ }
+ vm->state = VIR_DOMAIN_RUNNING;
+ }
+
+ ret = 0;
+
+cleanup:
+ if (vm)
+ virDomainObjUnlock(vm);
+ return ret;
+}
+
static int openvzDomainShutdown(virDomainPtr dom) {
struct openvz_driver *driver = dom->conn->privateData;
virDomainObjPtr vm;
@@ -1491,8 +1571,8 @@ static virDriver openvzDriver = {
openvzDomainLookupByID, /* domainLookupByID */
openvzDomainLookupByUUID, /* domainLookupByUUID */
openvzDomainLookupByName, /* domainLookupByName */
- NULL, /* domainSuspend */
- NULL, /* domainResume */
+ openvzDomainSuspend, /* domainSuspend */
+ openvzDomainResume, /* domainResume */
openvzDomainShutdown, /* domainShutdown */
openvzDomainReboot, /* domainReboot */
openvzDomainShutdown, /* domainDestroy */
--
1.7.0.4
[libvirt] [PATCH] Don't leak delay string when freeing virInterfaceBridgeDefs
by Laine Stump
I noticed this while looking into setting the default bridge delay to 0.
---
src/conf/interface_conf.c | 1 +
1 files changed, 1 insertions(+), 0 deletions(-)
diff --git a/src/conf/interface_conf.c b/src/conf/interface_conf.c
index 6430f7a..b24526f 100644
--- a/src/conf/interface_conf.c
+++ b/src/conf/interface_conf.c
@@ -84,6 +84,7 @@ void virInterfaceDefFree(virInterfaceDefPtr def)
switch (def->type) {
case VIR_INTERFACE_TYPE_BRIDGE:
+ VIR_FREE(def->data.bridge.delay);
for (i = 0;i < def->data.bridge.nbItf;i++) {
if (def->data.bridge.itf[i] == NULL)
break; /* to cope with half parsed data on errors */
--
1.7.2
[libvirt] [PATCH] Fix build error in virsh.c
by Laine Stump
I just pushed this under the trivial rule...
Another gettext string with no format args sent to printf as a format string.
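For context, a minimal sketch of the pattern being fixed (generic C, not the
virsh code itself): passing a translated message directly as the printf-style
format argument breaks as soon as a translation contains a stray '%', so the
message is routed through a fixed "%s" format instead.

#include <stdio.h>

/* Stand-in for gettext's _() macro: returns a translated message that,
 * in some locale, could accidentally contain a '%' sequence. */
static const char *translate(const char *msg)
{
    return msg;
}

int main(void)
{
    const char *msg = translate("option -d takes a numeric argument");

    /* Risky: if the translated string contained e.g. "%s", printf would
     * read a non-existent argument (gcc warns with -Wformat-security):
     *     printf(msg);
     */

    /* Safe: the message is only ever data, never a format string. */
    printf("%s\n", msg);
    return 0;
}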
---
tools/virsh.c | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/tools/virsh.c b/tools/virsh.c
index 2ccf08b..c0ee3ee 100644
--- a/tools/virsh.c
+++ b/tools/virsh.c
@@ -11022,7 +11022,7 @@ vshParseArgv(vshControl *ctl, int argc, char **argv)
switch (arg) {
case 'd':
if (virStrToLong_i(optarg, NULL, 10, &ctl->debug) < 0) {
- vshError(ctl, _("option -d takes a numeric argument"));
+ vshError(ctl, "%s", _("option -d takes a numeric argument"));
exit(EXIT_FAILURE);
}
break;
--
1.7.2
Re: [libvirt] [virt-tools-list] Add a manually configured XEN
by Michal Novotny
Hi,
I can see your point, Bernhard. It seems that libvirt is not aware of this
guest. If virsh (part of libvirt) only sees the guest while it is running,
then you need to create a configuration for it in libvirt itself. I know
there's something like:
virsh domxml-from-native <format> <config>
but I have never used it, so I don't know how the config and format work.
Please try asking the libvirt guys on the libvirt list (I have put this
list in CC now).
Michal
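As a rough illustration of what Michal describes, here is a minimal sketch
using the libvirt C API instead of virsh; the "xen-xm" format name, the
xen:/// URI and the sample xm config are assumptions for illustration only:

#include <stdio.h>
#include <stdlib.h>
#include <libvirt/libvirt.h>

int main(void)
{
    /* Assumed example values: a local Xen connection, the xen-xm native
     * config format, and a hand-written xm config read into 'native'. */
    const char *native =
        "name = \"mydomu\"\n"
        "memory = 512\n"
        "disk = [ 'phy:/dev/vg0/mydomu,xvda,w' ]\n";
    virConnectPtr conn = virConnectOpen("xen:///");
    char *xml = NULL;
    virDomainPtr dom;

    if (conn == NULL)
        return EXIT_FAILURE;

    /* Convert the native xm config into libvirt domain XML ... */
    xml = virConnectDomainXMLFromNative(conn, "xen-xm", native, 0);
    if (xml == NULL)
        goto error;

    /* ... and define it persistently, so the domU shows up in virsh and
     * virt-manager even while it is shut off. */
    dom = virDomainDefineXML(conn, xml);
    if (dom == NULL)
        goto error;

    printf("defined domain %s\n", virDomainGetName(dom));
    virDomainFree(dom);
    free(xml);
    virConnectClose(conn);
    return EXIT_SUCCESS;

 error:
    free(xml);
    virConnectClose(conn);
    return EXIT_FAILURE;
}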
On 08/03/2010 02:59 PM, Bernhard Suttner wrote:
> Hi,
>
> Manually means I did not use virt-install or virt-manager to add the Xen domU to the dom0. I set it up by hand (created the configuration manually, added disk images, and so on).
>
> virsh list --all does show the domU, BUT only if I start it manually with xm create domU. If the domU is not running, it does not appear. Other virtual machines (domUs) which I created with virt-manager or virt-install do appear even if they are currently not up (stopped).
>
> Best regards,
> Bernhard
>
> -----Original Message-----
> From: Michal Novotny [mailto:minovotn@redhat.com]
> Sent: Tuesday, 3 August 2010 14:17
> To: Bernhard Suttner
> Cc: virt-tools-list(a)redhat.com
> Subject: Re: AW: [virt-tools-list] Add a manually configured XEN
>
> On 08/03/2010 02:14 PM, Bernhard Suttner wrote:
>
>> Hi,
>>
>> thanks. But this did not work. I think this option only connects to another dom0, i.e. to another Xen host system. I want virt-manager to show the domU (the guest) of the local Xen installation which I manually added to this system.
>>
>>
> What do you mean by manually added to the system? Does `virsh list
> --all` show the guest?
>
> Regards,
> Michal
>
>
>> Best regards,
>> Bernhard
>>
>> -----Original Message-----
>> From: Michal Novotny [mailto:minovotn@redhat.com]
>> Sent: Tuesday, 3 August 2010 10:51
>> To: Bernhard Suttner
>> Cc: virt-tools-list(a)redhat.com
>> Subject: Re: [virt-tools-list] Add a manually configured XEN
>>
>> Hi Bernhard,
>> I'm not an expert, but when you start virt-manager you can go to
>> File -> Add connection and add the connection there. The connection
>> should be persistent and just connecting to it should fail when xend is
>> not running AFAIK.
>>
>> Hope this helps,
>> Michal
>>
>> On 08/02/2010 09:13 PM, Bernhard Suttner wrote:
>>
>>
>>> Hi,
>>>
>>> I am using Xen and virt-manager (which uses libvirt). I have configured the Xen domU manually in the shell. If I start the domU with xm create config, then the virtual machine appears in virt-manager. If I stop it, it disappears. How can I add this domU to virt-manager like all the other domUs created with virt-manager, so that it is listed in virt-manager and can be started from the GUI?
>>>
>>> Best regards,
>>> Bernhard Suttner
>>>
>>> _______________________________________________
>>> virt-tools-list mailing list
>>> virt-tools-list(a)redhat.com
>>> https://www.redhat.com/mailman/listinfo/virt-tools-list
>>>
>>>
>>>
>>
>>
>
>
--
Michal Novotny<minovotn(a)redhat.com>, RHCE
Virtualization Team (xen userspace), Red Hat
[libvirt] [PATCH] [RFC] ISCSI transport protocol support in libvirt
by Aurelien ROUGEMONT
Hi everyone,
This my first post on this list
Context: some days ago I decided to use InfiniBand for iSCSI streams.
InfiniBand adds a wonderful new transport protocol: iSER (iSER = iSCSI +
RDMA), in addition to the well-known default, TCP. I could not see any
support for changing the iSCSI transport protocol in libvirt (the default
protocol, tcp, is even hardcoded in the regex).
What I have done:
- tested the iscsiadm calls in the shell
- wrote the attached patch, which corrects 2 typos in the original code and
switches the iscsi transport protocol completely from tcp to iser (which
is not ideal at all)
What should be done (imho):
- add iscsi transport protocol support (using my patch as a basis)
- add a new XML property (or whatever fits the project's policy) to the
storage pool object that allows the user to pick the iscsi transport
protocol (default is tcp)
I was thinking of having something like :
<pool type="iscsi">
<name>volumename</name>
<source>
<host name="1.2.3.4"/>
<device path="IQNOFTARGET"/>
<transport protocol="iser"/>
</source>
<target>
<path>/dev/disk/by-path</path>
</target>
</pool>
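To make the parsing side concrete, here is a minimal, self-contained sketch
in plain libxml2 (not libvirt's own XML helpers) that reads the proposed
<transport protocol='...'/> element and falls back to tcp when it is absent;
the element and attribute names simply follow the proposal above:

#include <stdio.h>
#include <string.h>
#include <libxml/parser.h>
#include <libxml/xpath.h>

int main(void)
{
    const char *xml =
        "<pool type='iscsi'>"
        "  <source>"
        "    <host name='1.2.3.4'/>"
        "    <device path='IQNOFTARGET'/>"
        "    <transport protocol='iser'/>"
        "  </source>"
        "</pool>";
    xmlDocPtr doc = xmlReadMemory(xml, strlen(xml), "pool.xml", NULL, 0);
    xmlXPathContextPtr ctxt;
    xmlXPathObjectPtr obj;
    const char *protocol = "tcp";   /* default when no <transport> element is given */

    if (doc == NULL)
        return 1;

    ctxt = xmlXPathNewContext(doc);
    if (ctxt == NULL) {
        xmlFreeDoc(doc);
        return 1;
    }

    obj = xmlXPathEvalExpression(
        BAD_CAST "string(/pool/source/transport/@protocol)", ctxt);

    if (obj != NULL && obj->stringval != NULL && obj->stringval[0] != '\0')
        protocol = (const char *)obj->stringval;

    printf("iscsi transport protocol: %s\n", protocol);

    xmlXPathFreeObject(obj);
    xmlXPathFreeContext(ctxt);
    xmlFreeDoc(doc);
    return 0;
}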
Any comment on this ? Any help on the XML part ?
Best regards,
Aurélien
NB: the current iscsi transport protocols available are: tcp (default),
iser, qla4xxx, bnx2, and icxgb3i.
PS: I'm still doing extensive testing of my patch.
Re: [libvirt] Extensions to the libvirt Storage API
by Shyam Iyer
On 07/28/2010 01:24 PM, Daniel P. Berrange wrote:
>
> >>>>
> >>>>
> >>> We explicitly don't support external driver plugins in libvirt for a
> >>> couple of reasons
> >>>
> >>> - We don't want to support use of closed source plugins
> >>> - We don't want to guarentee stability of any aspect of
> >>> libvirt's internal API
> >>>
> >>> We would like to see support for the various vendor specific iSCSI
> >>> extensions to allow volume creation/deletion, but want that code to
> >>> be part of the libvirt codebase.
> >>>
The APIs can also invoke standard snmpget/set methods to talk to the
target.
All standard distributions ship with a net-snmp implementation.
Would an implementation of the APIs that invokes snmpset/snmpget be an
acceptable solution?
> Namespace clash! The virDomainSnapshot APIs are per-hypervisor. They
> do snapshotting of the guest VM (including its storage).
>
> I was actually just talking about the storage backends though which
> can do snapshots independently of any hypervisor. See
> the <backingstorage>
> element here:
>
> http://libvirt.org/formatstorage.html#StorageVolBacking
>
> This is already implemented with the LVM pool doing LVM snapshots. We
> also use it for external qcow2 backing files.
>
There are certain benefits to allowing snapshots/backups to happen in the
storage and thereby save host CPU cycles.
This becomes even more visible with large (multi-TB) storage, where even a
simple qcow2 backup copy could take a long time.
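For reference, a minimal sketch of letting the storage backend create such a
copy-on-write snapshot through the libvirt storage API, with no hypervisor
involved; the pool name, paths and size are made-up examples and the volume
XML follows the storage volume format document linked above:

#include <stdio.h>
#include <stdlib.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn = virConnectOpen("qemu:///system");
    virStoragePoolPtr pool = NULL;
    virStorageVolPtr vol;

    /* Assumed example: a directory-backed pool named "default" and an
     * existing base image /var/lib/libvirt/images/base.qcow2. */
    const char *volxml =
        "<volume>\n"
        "  <name>snap1.qcow2</name>\n"
        "  <capacity>1073741824</capacity>\n"       /* 1 GiB in bytes */
        "  <target>\n"
        "    <format type='qcow2'/>\n"
        "  </target>\n"
        "  <backingStore>\n"
        "    <path>/var/lib/libvirt/images/base.qcow2</path>\n"
        "    <format type='qcow2'/>\n"
        "  </backingStore>\n"
        "</volume>\n";

    if (conn == NULL)
        return EXIT_FAILURE;

    pool = virStoragePoolLookupByName(conn, "default");
    if (pool == NULL)
        goto error;

    /* The copy-on-write snapshot is created by the storage backend;
     * no hypervisor or running guest is involved. */
    vol = virStorageVolCreateXML(pool, volxml, 0);
    if (vol == NULL)
        goto error;

    printf("created volume %s\n", virStorageVolGetName(vol));
    virStorageVolFree(vol);
    virStoragePoolFree(pool);
    virConnectClose(conn);
    return EXIT_SUCCESS;

 error:
    if (pool)
        virStoragePoolFree(pool);
    virConnectClose(conn);
    return EXIT_FAILURE;
}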
[libvirt] [PATCH] esx: Restrict vpx:// to handle a single host in a vCenter
by Matthias Bolte
Now a vpx:// connection has an explicitly specified host. This
allows several functions to be enabled again for a vpx:// connection,
like host UUID, hostname, general node info, max vCPU
count, free memory, migration and defining new domains.
Look up the datacenter, compute resource, resource pool and host
system once and cache them. This simplifies the rest of the
code and reduces overall HTTP(S) traffic a bit.
esx:// and vpx:// can be mixed freely for a migration.
Ensure that migration source and destination refer to the
same vCenter. Also directly encode the resource pool and
host system object IDs into the migration URI in the prepare
function. Then directly build managed object references in
the perform function instead of re-looking up already known
information.
---
docs/drvesx.html.in | 33 +++-
src/esx/esx_driver.c | 360 ++++++++++----------------
src/esx/esx_storage_driver.c | 24 +--
src/esx/esx_vi.c | 588 ++++++++++++++++++++++++++----------------
src/esx/esx_vi.h | 26 ++-
src/esx/esx_vmx.c | 4 +
6 files changed, 561 insertions(+), 474 deletions(-)
diff --git a/docs/drvesx.html.in b/docs/drvesx.html.in
index 4ae7a51..dfc91bb 100644
--- a/docs/drvesx.html.in
+++ b/docs/drvesx.html.in
@@ -28,7 +28,7 @@
Some example remote connection URIs for the driver are:
</p>
<pre>
-vpx://example-vcenter.com (VPX over HTTPS)
+vpx://example-vcenter.com/dc1/srv1 (VPX over HTTPS, select ESX server 'srv1' in datacenter 'dc1')
esx://example-esx.com (ESX over HTTPS)
gsx://example-gsx.com (GSX over HTTPS)
esx://example-esx.com/?transport=http (ESX over HTTP)
@@ -48,7 +48,7 @@ esx://example-esx.com/?no_verify=1 (ESX over HTTPS, but doesn't verify the s
URIs have this general form (<code>[...]</code> marks an optional part).
</p>
<pre>
-type://[username@]hostname[:port]/[?extraparameters]
+type://[username@]hostname[:port]/[datacenter[/cluster]/server][?extraparameters]
</pre>
<p>
The <code>type://</code> is either <code>esx://</code> or
@@ -58,6 +58,20 @@ type://[username@]hostname[:port]/[?extraparameters]
is 443, for <code>gsx://</code> it is 8333.
If the port parameter is given, it overrides the default port.
</p>
+ <p>
+ A <code>vpx://</code> connection is currently restricted to a single
+ ESX server. This might be relaxed in the future. The path part of the
+ URI is used to specify the datacenter and the ESX server in it. If the
+ ESX server is part of a cluster then the cluster has to be specified too.
+ </p>
+ <p>
+ An example: ESX server <code>example-esx.com</code> is managed by
+ vCenter <code>example-vcenter.com</code> and part of cluster
+ <code>cluster1</code>. This cluster is part of datacenter <code>dc1</code>.
+ </p>
+<pre>
+vpx://example-vcenter.com/dc1/cluster1/example-esx.com
+</pre>
<h4>Extra parameters</h4>
@@ -588,7 +602,7 @@ ethernet0.address = "00:50:56:25:48:C7"
esx://example.com/?vcenter=example-vcenter.com
</pre>
<p>
- Here an example how to migrate the domain <code>Fedora11</code> from
+ Here's an example how to migrate the domain <code>Fedora11</code> from
ESX server <code>example-src.com</code> to ESX server
<code>example-dst.com</code> implicitly involving vCenter
<code>example-vcenter.com</code> using <code>virsh</code>.
@@ -604,6 +618,19 @@ Enter root password for example-dst.com:
Enter username for example-vcenter.com [administrator]:
Enter administrator password for example-vcenter.com:
</pre>
+ <p>
+ <span class="since">Since 0.8.3</span> you can directly connect to a vCenter.
+ This simplifies migration a bit. Here's the same migration as above but
+ using <code>vpx://</code> connections and assuming both ESX server are in
+ datacenter <code>dc1</code> and aren't part of a cluster.
+ </p>
+<pre>
+$ virsh -c vpx://example-vcenter.com/dc1/example-src.com migrate Fedora11 vpx://example-vcenter.com/dc1/example-dst.com
+Enter username for example-vcenter.com [administrator]:
+Enter administrator password for example-vcenter.com:
+Enter username for example-vcenter.com [administrator]:
+Enter administrator password for example-vcenter.com:
+</pre>
<h2><a name="scheduler">Scheduler configuration</a></h2>
diff --git a/src/esx/esx_driver.c b/src/esx/esx_driver.c
index 3bdc551..fd87078 100644
--- a/src/esx/esx_driver.c
+++ b/src/esx/esx_driver.c
@@ -67,20 +67,14 @@ esxSupportsLongMode(esxPrivate *priv)
return priv->supportsLongMode;
}
- if (priv->host == NULL) {
- /* FIXME: Currently no host for a vpx:// connection */
- return esxVI_Boolean_False;
- }
-
- if (esxVI_EnsureSession(priv->host) < 0) {
+ if (esxVI_EnsureSession(priv->primary) < 0) {
return esxVI_Boolean_Undefined;
}
if (esxVI_String_AppendValueToList(&propertyNameList,
"hardware.cpuFeature") < 0 ||
- esxVI_LookupObjectContentByType(priv->host, priv->host->hostFolder,
- "HostSystem", propertyNameList,
- esxVI_Boolean_True, &hostSystem) < 0) {
+ esxVI_LookupHostSystemProperties(priv->primary, propertyNameList,
+ &hostSystem) < 0) {
goto cleanup;
}
@@ -153,20 +147,14 @@ esxLookupHostSystemBiosUuid(esxPrivate *priv, unsigned char *uuid)
esxVI_ObjectContent *hostSystem = NULL;
esxVI_DynamicProperty *dynamicProperty = NULL;
- if (priv->host == NULL) {
- /* FIXME: Currently no host for a vpx:// connection */
- return 0;
- }
-
- if (esxVI_EnsureSession(priv->host) < 0) {
+ if (esxVI_EnsureSession(priv->primary) < 0) {
return -1;
}
if (esxVI_String_AppendValueToList(&propertyNameList,
"hardware.systemInfo.uuid") < 0 ||
- esxVI_LookupObjectContentByType(priv->host, priv->host->hostFolder,
- "HostSystem", propertyNameList,
- esxVI_Boolean_True, &hostSystem) < 0) {
+ esxVI_LookupHostSystemProperties(priv->primary, propertyNameList,
+ &hostSystem) < 0) {
goto cleanup;
}
@@ -236,7 +224,7 @@ esxCapsInit(esxPrivate *priv)
}
virCapabilitiesSetMacPrefix(caps, (unsigned char[]){ 0x00, 0x0c, 0x29 });
- virCapabilitiesAddHostMigrateTransport(caps, "esx");
+ virCapabilitiesAddHostMigrateTransport(caps, "vpxmigr");
caps->hasWideScsiBus = true;
@@ -347,7 +335,8 @@ esxConnectToHost(esxPrivate *priv, virConnectAuthPtr auth,
if (esxVI_Context_Alloc(&priv->host) < 0 ||
esxVI_Context_Connect(priv->host, url, ipAddress, username, password,
- parsedUri) < 0) {
+ parsedUri) < 0 ||
+ esxVI_Context_LookupObjectsByPath(priv->host, parsedUri) < 0) {
goto cleanup;
}
@@ -373,8 +362,8 @@ esxConnectToHost(esxPrivate *priv, virConnectAuthPtr auth,
if (esxVI_String_AppendValueListToList(&propertyNameList,
"runtime.inMaintenanceMode\0"
"summary.managementServerIp\0") < 0 ||
- esxVI_LookupHostSystemByIp(priv->host, ipAddress, propertyNameList,
- &hostSystem) < 0 ||
+ esxVI_LookupHostSystemProperties(priv->host, propertyNameList,
+ &hostSystem) < 0 ||
esxVI_GetBoolean(hostSystem, "runtime.inMaintenanceMode",
&inMaintenanceMode,
esxVI_Occurrence_RequiredItem) < 0 ||
@@ -416,6 +405,7 @@ static int
esxConnectToVCenter(esxPrivate *priv, virConnectAuthPtr auth,
const char *hostname, int port,
const char *predefinedUsername,
+ const char *hostSystemIpAddress,
esxUtil_ParsedUri *parsedUri)
{
int result = -1;
@@ -424,6 +414,14 @@ esxConnectToVCenter(esxPrivate *priv, virConnectAuthPtr auth,
char *password = NULL;
char *url = NULL;
+ if (hostSystemIpAddress == NULL &&
+ (parsedUri->path_datacenter == NULL ||
+ parsedUri->path_computeResource == NULL)) {
+ ESX_ERROR(VIR_ERR_INVALID_ARG, "%s",
+ _("Path has to specify the datacenter and compute resource"));
+ return -1;
+ }
+
if (esxUtil_ResolveHostname(hostname, ipAddress, NI_MAXHOST) < 0) {
return -1;
}
@@ -473,6 +471,17 @@ esxConnectToVCenter(esxPrivate *priv, virConnectAuthPtr auth,
goto cleanup;
}
+ if (hostSystemIpAddress != NULL) {
+ if (esxVI_Context_LookupObjectsByHostSystemIp(priv->vCenter,
+ hostSystemIpAddress) < 0) {
+ goto cleanup;
+ }
+ } else {
+ if (esxVI_Context_LookupObjectsByPath(priv->vCenter, parsedUri) < 0) {
+ goto cleanup;
+ }
+ }
+
result = 0;
cleanup:
@@ -486,7 +495,8 @@ esxConnectToVCenter(esxPrivate *priv, virConnectAuthPtr auth,
/*
- * URI format: {vpx|esx|gsx}://[<username>@]<hostname>[:<port>]/[<query parameter> ...]
+ * URI format: {vpx|esx|gsx}://[<username>@]<hostname>[:<port>]/[<path>][?<query parameter> ...]
+ * <path> = <datacenter>/<computeresource>[/<hostsystem>]
*
* If no port is specified the default port is set dependent on the scheme and
* transport parameter:
@@ -497,6 +507,11 @@ esxConnectToVCenter(esxPrivate *priv, virConnectAuthPtr auth,
* - gsx+http 8222
* - gsx+https 8333
*
+ * For a vpx:// connection <path> references a host managed by the vCenter.
+ * In case the host is part of a cluster then <computeresource> is the cluster
+ * name. Otherwise <computeresource> and <hostsystem> are equal and the later
+ * can be omitted.
+ *
* Optional query parameters:
* - transport={http|https}
* - vcenter={<vcenter>|*} only useful for an esx:// connection
@@ -508,7 +523,7 @@ esxConnectToVCenter(esxPrivate *priv, virConnectAuthPtr auth,
*
* The vcenter parameter is only necessary for migration, because the vCenter
* server is in charge to initiate a migration between two ESX hosts. The
- * vcenter parameter can be set to an explicity hostname or to *. If set to *,
+ * vcenter parameter can be set to an explicitly hostname or to *. If set to *,
* the driver will check if the ESX host is managed by a vCenter and connect to
* it. If the ESX host is not managed by a vCenter an error is reported.
*
@@ -635,7 +650,8 @@ esxOpen(virConnectPtr conn, virConnectAuthPtr auth, int flags ATTRIBUTE_UNUSED)
}
if (esxConnectToVCenter(priv, auth, vCenterIpAddress,
- conn->uri->port, NULL, parsedUri) < 0) {
+ conn->uri->port, NULL,
+ priv->host->ipAddress, parsedUri) < 0) {
goto cleanup;
}
}
@@ -644,7 +660,7 @@ esxOpen(virConnectPtr conn, virConnectAuthPtr auth, int flags ATTRIBUTE_UNUSED)
} else { /* VPX */
/* Connect to vCenter */
if (esxConnectToVCenter(priv, auth, conn->uri->server, conn->uri->port,
- conn->uri->user, parsedUri) < 0) {
+ conn->uri->user, NULL, parsedUri) < 0) {
goto cleanup;
}
@@ -722,26 +738,19 @@ esxSupportsVMotion(esxPrivate *priv)
{
esxVI_String *propertyNameList = NULL;
esxVI_ObjectContent *hostSystem = NULL;
- esxVI_DynamicProperty *dynamicProperty = NULL;
if (priv->supportsVMotion != esxVI_Boolean_Undefined) {
return priv->supportsVMotion;
}
- if (priv->host == NULL) {
- /* FIXME: Currently no host for a vpx:// connection */
- return esxVI_Boolean_False;
- }
-
- if (esxVI_EnsureSession(priv->host) < 0) {
+ if (esxVI_EnsureSession(priv->primary) < 0) {
return esxVI_Boolean_Undefined;
}
if (esxVI_String_AppendValueToList(&propertyNameList,
"capability.vmotionSupported") < 0 ||
- esxVI_LookupObjectContentByType(priv->host, priv->host->hostFolder,
- "HostSystem", propertyNameList,
- esxVI_Boolean_True, &hostSystem) < 0) {
+ esxVI_LookupHostSystemProperties(priv->primary, propertyNameList,
+ &hostSystem) < 0) {
goto cleanup;
}
@@ -751,19 +760,10 @@ esxSupportsVMotion(esxPrivate *priv)
goto cleanup;
}
- for (dynamicProperty = hostSystem->propSet; dynamicProperty != NULL;
- dynamicProperty = dynamicProperty->_next) {
- if (STREQ(dynamicProperty->name, "capability.vmotionSupported")) {
- if (esxVI_AnyType_ExpectType(dynamicProperty->val,
- esxVI_Type_Boolean) < 0) {
- goto cleanup;
- }
-
- priv->supportsVMotion = dynamicProperty->val->boolean;
- break;
- } else {
- VIR_WARN("Unexpected '%s' property", dynamicProperty->name);
- }
+ if (esxVI_GetBoolean(hostSystem, "capability.vmotionSupported",
+ &priv->supportsVMotion,
+ esxVI_Occurrence_RequiredItem) < 0) {
+ goto cleanup;
}
cleanup:
@@ -842,14 +842,7 @@ esxGetHostname(virConnectPtr conn)
const char *domainName = NULL;
char *complete = NULL;
- if (priv->host == NULL) {
- /* FIXME: Currently no host for a vpx:// connection */
- ESX_ERROR(VIR_ERR_INTERNAL_ERROR, "%s",
- _("Could not retrieve the hostname for a vpx:// connection"));
- return NULL;
- }
-
- if (esxVI_EnsureSession(priv->host) < 0) {
+ if (esxVI_EnsureSession(priv->primary) < 0) {
return NULL;
}
@@ -857,9 +850,8 @@ esxGetHostname(virConnectPtr conn)
(&propertyNameList,
"config.network.dnsConfig.hostName\0"
"config.network.dnsConfig.domainName\0") < 0 ||
- esxVI_LookupObjectContentByType(priv->host, priv->host->hostFolder,
- "HostSystem", propertyNameList,
- esxVI_Boolean_True, &hostSystem) < 0) {
+ esxVI_LookupHostSystemProperties(priv->primary, propertyNameList,
+ &hostSystem) < 0) {
goto cleanup;
}
@@ -944,14 +936,7 @@ esxNodeGetInfo(virConnectPtr conn, virNodeInfoPtr nodeinfo)
memset(nodeinfo, 0, sizeof (*nodeinfo));
- if (priv->host == NULL) {
- /* FIXME: Currently no host for a vpx:// connection */
- ESX_ERROR(VIR_ERR_INTERNAL_ERROR, "%s",
- _("Nodeinfo is not available for a vpx:// connection"));
- return -1;
- }
-
- if (esxVI_EnsureSession(priv->host) < 0) {
+ if (esxVI_EnsureSession(priv->primary) < 0) {
return -1;
}
@@ -963,9 +948,8 @@ esxNodeGetInfo(virConnectPtr conn, virNodeInfoPtr nodeinfo)
"hardware.memorySize\0"
"hardware.numaInfo.numNodes\0"
"summary.hardware.cpuModel\0") < 0 ||
- esxVI_LookupObjectContentByType(priv->host, priv->host->hostFolder,
- "HostSystem", propertyNameList,
- esxVI_Boolean_True, &hostSystem) < 0) {
+ esxVI_LookupHostSystemProperties(priv->primary, propertyNameList,
+ &hostSystem) < 0) {
goto cleanup;
}
@@ -1126,10 +1110,8 @@ esxListDomains(virConnectPtr conn, int *ids, int maxids)
if (esxVI_String_AppendValueToList(&propertyNameList,
"runtime.powerState") < 0 ||
- esxVI_LookupObjectContentByType(priv->primary, priv->primary->vmFolder,
- "VirtualMachine", propertyNameList,
- esxVI_Boolean_True,
- &virtualMachineList) < 0) {
+ esxVI_LookupVirtualMachineList(priv->primary, propertyNameList,
+ &virtualMachineList) < 0) {
goto cleanup;
}
@@ -1209,10 +1191,8 @@ esxDomainLookupByID(virConnectPtr conn, int id)
"name\0"
"runtime.powerState\0"
"config.uuid\0") < 0 ||
- esxVI_LookupObjectContentByType(priv->primary, priv->primary->vmFolder,
- "VirtualMachine", propertyNameList,
- esxVI_Boolean_True,
- &virtualMachineList) < 0) {
+ esxVI_LookupVirtualMachineList(priv->primary, propertyNameList,
+ &virtualMachineList) < 0) {
goto cleanup;
}
@@ -2142,22 +2122,14 @@ esxDomainGetMaxVcpus(virDomainPtr domain)
priv->maxVcpus = -1;
- if (priv->host == NULL) {
- /* FIXME: Currently no host for a vpx:// connection */
- ESX_ERROR(VIR_ERR_INTERNAL_ERROR, "%s",
- _("MaxVCPUs value is not available for a vpx:// connection"));
- return -1;
- }
-
- if (esxVI_EnsureSession(priv->host) < 0) {
+ if (esxVI_EnsureSession(priv->primary) < 0) {
return -1;
}
if (esxVI_String_AppendValueToList(&propertyNameList,
"capability.maxSupportedVcpus") < 0 ||
- esxVI_LookupObjectContentByType(priv->host, priv->host->hostFolder,
- "HostSystem", propertyNameList,
- esxVI_Boolean_True, &hostSystem) < 0) {
+ esxVI_LookupHostSystemProperties(priv->primary, propertyNameList,
+ &hostSystem) < 0) {
goto cleanup;
}
@@ -2196,11 +2168,8 @@ esxDomainDumpXML(virDomainPtr domain, int flags)
{
esxPrivate *priv = domain->conn->privateData;
esxVI_String *propertyNameList = NULL;
- esxVI_ObjectContent *datacenter = NULL;
esxVI_ObjectContent *virtualMachine = NULL;
- esxVI_DynamicProperty *dynamicProperty = NULL;
- const char *vmPathName = NULL;
- char *datacenterName = NULL;
+ char *vmPathName = NULL;
char *datastoreName = NULL;
char *directoryName = NULL;
char *fileName = NULL;
@@ -2214,38 +2183,16 @@ esxDomainDumpXML(virDomainPtr domain, int flags)
return NULL;
}
- if (esxVI_String_AppendValueToList(&propertyNameList, "name") < 0 ||
- esxVI_LookupObjectContentByType(priv->primary, priv->primary->datacenter,
- "Datacenter", propertyNameList,
- esxVI_Boolean_False, &datacenter) < 0 ||
- esxVI_GetStringValue(datacenter, "name", &datacenterName,
- esxVI_Occurrence_RequiredItem) < 0) {
- goto cleanup;
- }
-
- esxVI_String_Free(&propertyNameList);
-
if (esxVI_String_AppendValueToList(&propertyNameList,
"config.files.vmPathName") < 0 ||
esxVI_LookupVirtualMachineByUuid(priv->primary, domain->uuid,
propertyNameList, &virtualMachine,
- esxVI_Occurrence_RequiredItem) < 0) {
+ esxVI_Occurrence_RequiredItem) < 0 ||
+ esxVI_GetStringValue(virtualMachine, "config.files.vmPathName",
+ &vmPathName, esxVI_Occurrence_RequiredItem) < 0) {
goto cleanup;
}
- for (dynamicProperty = virtualMachine->propSet; dynamicProperty != NULL;
- dynamicProperty = dynamicProperty->_next) {
- if (STREQ(dynamicProperty->name, "config.files.vmPathName")) {
- if (esxVI_AnyType_ExpectType(dynamicProperty->val,
- esxVI_Type_String) < 0) {
- goto cleanup;
- }
-
- vmPathName = dynamicProperty->val->string;
- break;
- }
- }
-
if (esxUtil_ParseDatastoreRelatedPath(vmPathName, &datastoreName,
&directoryName, &fileName) < 0) {
goto cleanup;
@@ -2261,7 +2208,7 @@ esxDomainDumpXML(virDomainPtr domain, int flags)
virBufferURIEncodeString(&buffer, fileName);
virBufferAddLit(&buffer, "?dcPath=");
- virBufferURIEncodeString(&buffer, datacenterName);
+ virBufferURIEncodeString(&buffer, priv->primary->datacenter->name);
virBufferAddLit(&buffer, "&dsName=");
virBufferURIEncodeString(&buffer, datastoreName);
@@ -2289,7 +2236,6 @@ esxDomainDumpXML(virDomainPtr domain, int flags)
}
esxVI_String_Free(&propertyNameList);
- esxVI_ObjectContent_Free(&datacenter);
esxVI_ObjectContent_Free(&virtualMachine);
VIR_FREE(datastoreName);
VIR_FREE(directoryName);
@@ -2392,10 +2338,8 @@ esxListDefinedDomains(virConnectPtr conn, char **const names, int maxnames)
if (esxVI_String_AppendValueListToList(&propertyNameList,
"name\0"
"runtime.powerState\0") < 0 ||
- esxVI_LookupObjectContentByType(priv->primary, priv->primary->vmFolder,
- "VirtualMachine", propertyNameList,
- esxVI_Boolean_True,
- &virtualMachineList) < 0) {
+ esxVI_LookupVirtualMachineList(priv->primary, propertyNameList,
+ &virtualMachineList) < 0) {
goto cleanup;
}
@@ -2557,14 +2501,7 @@ esxDomainDefineXML(virConnectPtr conn, const char *xml ATTRIBUTE_UNUSED)
esxVI_TaskInfoState taskInfoState;
virDomainPtr domain = NULL;
- if (priv->host == NULL) {
- /* FIXME: Currently no host for a vpx:// connection */
- ESX_ERROR(VIR_ERR_INTERNAL_ERROR, "%s",
- _("Could not define domain with a vpx:// connection"));
- return NULL;
- }
-
- if (esxVI_EnsureSession(priv->host) < 0) {
+ if (esxVI_EnsureSession(priv->primary) < 0) {
return NULL;
}
@@ -2577,7 +2514,7 @@ esxDomainDefineXML(virConnectPtr conn, const char *xml ATTRIBUTE_UNUSED)
}
/* Check if an existing domain should be edited */
- if (esxVI_LookupVirtualMachineByUuid(priv->host, def->uuid, NULL,
+ if (esxVI_LookupVirtualMachineByUuid(priv->primary, def->uuid, NULL,
&virtualMachine,
esxVI_Occurrence_OptionalItem) < 0) {
goto cleanup;
@@ -2592,8 +2529,8 @@ esxDomainDefineXML(virConnectPtr conn, const char *xml ATTRIBUTE_UNUSED)
}
/* Build VMX from domain XML */
- vmx = esxVMX_FormatConfig(priv->host, priv->caps, def,
- priv->host->productVersion);
+ vmx = esxVMX_FormatConfig(priv->primary, priv->caps, def,
+ priv->primary->productVersion);
if (vmx == NULL) {
goto cleanup;
@@ -2657,7 +2594,7 @@ esxDomainDefineXML(virConnectPtr conn, const char *xml ATTRIBUTE_UNUSED)
virBufferURIEncodeString(&buffer, def->name);
virBufferAddLit(&buffer, ".vmx?dcPath=");
- virBufferURIEncodeString(&buffer, priv->host->datacenter->value);
+ virBufferURIEncodeString(&buffer, priv->primary->datacenter->name);
virBufferAddLit(&buffer, "&dsName=");
virBufferURIEncodeString(&buffer, datastoreName);
@@ -2682,31 +2619,21 @@ esxDomainDefineXML(virConnectPtr conn, const char *xml ATTRIBUTE_UNUSED)
}
}
- /* Get resource pool */
- if (esxVI_String_AppendValueToList(&propertyNameList, "parent") < 0 ||
- esxVI_LookupHostSystemByIp(priv->host, priv->host->ipAddress,
- propertyNameList, &hostSystem) < 0) {
- goto cleanup;
- }
-
- if (esxVI_LookupResourcePoolByHostSystem(priv->host, hostSystem,
- &resourcePool) < 0) {
- goto cleanup;
- }
-
/* Check, if VMX file already exists */
/* FIXME */
/* Upload VMX file */
- if (esxVI_Context_UploadFile(priv->host, url, vmx) < 0) {
+ if (esxVI_Context_UploadFile(priv->primary, url, vmx) < 0) {
goto cleanup;
}
/* Register the domain */
- if (esxVI_RegisterVM_Task(priv->host, priv->host->vmFolder,
+ if (esxVI_RegisterVM_Task(priv->primary, priv->primary->datacenter->vmFolder,
datastoreRelatedPath, NULL, esxVI_Boolean_False,
- resourcePool, hostSystem->obj, &task) < 0 ||
- esxVI_WaitForTaskCompletion(priv->host, task, def->uuid,
+ priv->primary->computeResource->resourcePool,
+ priv->primary->hostSystem->_reference,
+ &task) < 0 ||
+ esxVI_WaitForTaskCompletion(priv->primary, task, def->uuid,
esxVI_Occurrence_OptionalItem,
priv->autoAnswer, &taskInfoState) < 0) {
goto cleanup;
@@ -3102,32 +3029,25 @@ static int
esxDomainMigratePrepare(virConnectPtr dconn,
char **cookie ATTRIBUTE_UNUSED,
int *cookielen ATTRIBUTE_UNUSED,
- const char *uri_in, char **uri_out,
+ const char *uri_in ATTRIBUTE_UNUSED,
+ char **uri_out,
unsigned long flags ATTRIBUTE_UNUSED,
const char *dname ATTRIBUTE_UNUSED,
unsigned long resource ATTRIBUTE_UNUSED)
{
- int result = -1;
- esxUtil_ParsedUri *parsedUri = NULL;
+ esxPrivate *priv = dconn->privateData;
if (uri_in == NULL) {
- if (esxUtil_ParseUri(&parsedUri, dconn->uri) < 0) {
- return -1;
- }
-
- if (virAsprintf(uri_out, "%s://%s:%d/sdk", parsedUri->transport,
- dconn->uri->server, dconn->uri->port) < 0) {
+ if (virAsprintf(uri_out, "vpxmigr://%s/%s/%s",
+ priv->vCenter->ipAddress,
+ priv->vCenter->computeResource->resourcePool->value,
+ priv->vCenter->hostSystem->_reference->value) < 0) {
virReportOOMError();
- goto cleanup;
+ return -1;
}
}
- result = 0;
-
- cleanup:
- esxUtil_FreeParsedUri(&parsedUri);
-
- return result;
+ return 0;
}
@@ -3143,12 +3063,13 @@ esxDomainMigratePerform(virDomainPtr domain,
{
int result = -1;
esxPrivate *priv = domain->conn->privateData;
- xmlURIPtr xmlUri = NULL;
- char hostIpAddress[NI_MAXHOST] = "";
+ xmlURIPtr parsedUri = NULL;
+ char *saveptr;
+ char *path_resourcePool;
+ char *path_hostSystem;
esxVI_ObjectContent *virtualMachine = NULL;
- esxVI_String *propertyNameList = NULL;
- esxVI_ObjectContent *hostSystem = NULL;
- esxVI_ManagedObjectReference *resourcePool = NULL;
+ esxVI_ManagedObjectReference resourcePool;
+ esxVI_ManagedObjectReference hostSystem;
esxVI_Event *eventList = NULL;
esxVI_ManagedObjectReference *task = NULL;
esxVI_TaskInfoState taskInfoState;
@@ -3169,39 +3090,57 @@ esxDomainMigratePerform(virDomainPtr domain,
return -1;
}
- /* Parse the destination URI and resolve the hostname */
- xmlUri = xmlParseURI(uri);
+ /* Parse migration URI */
+ parsedUri = xmlParseURI(uri);
- if (xmlUri == NULL) {
+ if (parsedUri == NULL) {
virReportOOMError();
return -1;
}
- if (esxUtil_ResolveHostname(xmlUri->server, hostIpAddress,
- NI_MAXHOST) < 0) {
+ if (parsedUri->scheme == NULL || STRCASENEQ(parsedUri->scheme, "vpxmigr")) {
+ ESX_ERROR(VIR_ERR_INVALID_ARG, "%s",
+ _("Only vpxmigr:// migration URIs are supported"));
goto cleanup;
}
- /* Lookup VirtualMachine, HostSystem and ResourcePool */
- if (esxVI_LookupVirtualMachineByUuidAndPrepareForTask
- (priv->vCenter, domain->uuid, NULL, &virtualMachine,
- priv->autoAnswer) < 0 ||
- esxVI_String_AppendValueToList(&propertyNameList, "parent") < 0 ||
- esxVI_LookupHostSystemByIp(priv->vCenter, hostIpAddress,
- propertyNameList, &hostSystem) < 0) {
+ if (STRCASENEQ(priv->vCenter->ipAddress, parsedUri->server)) {
+ ESX_ERROR(VIR_ERR_INVALID_ARG, "%s",
+ _("Migration source and destination have to refer to "
+ "the same vCenter"));
+ goto cleanup;
+ }
+
+ path_resourcePool = strtok_r(parsedUri->path, "/", &saveptr);
+ path_hostSystem = strtok_r(NULL, "", &saveptr);
+
+ if (path_resourcePool == NULL || path_hostSystem == NULL) {
+ ESX_ERROR(VIR_ERR_INVALID_ARG, "%s",
+ _("Migration URI has to specify resource pool and host system"));
goto cleanup;
}
- if (esxVI_LookupResourcePoolByHostSystem(priv->vCenter, hostSystem,
- &resourcePool) < 0) {
+ resourcePool._next = NULL;
+ resourcePool._type = esxVI_Type_ManagedObjectReference;
+ resourcePool.type = (char *)"ResourcePool";
+ resourcePool.value = path_resourcePool;
+
+ hostSystem._next = NULL;
+ hostSystem._type = esxVI_Type_ManagedObjectReference;
+ hostSystem.type = (char *)"HostSystem";
+ hostSystem.value = path_hostSystem;
+
+ /* Lookup VirtualMachine */
+ if (esxVI_LookupVirtualMachineByUuidAndPrepareForTask
+ (priv->vCenter, domain->uuid, NULL, &virtualMachine,
+ priv->autoAnswer) < 0) {
goto cleanup;
}
/* Validate the purposed migration */
if (esxVI_ValidateMigration(priv->vCenter, virtualMachine->obj,
- esxVI_VirtualMachinePowerState_Undefined,
- NULL, resourcePool, hostSystem->obj,
- &eventList) < 0) {
+ esxVI_VirtualMachinePowerState_Undefined, NULL,
+ &resourcePool, &hostSystem, &eventList) < 0) {
goto cleanup;
}
@@ -3224,8 +3163,8 @@ esxDomainMigratePerform(virDomainPtr domain,
}
/* Perform the purposed migration */
- if (esxVI_MigrateVM_Task(priv->vCenter, virtualMachine->obj, resourcePool,
- hostSystem->obj,
+ if (esxVI_MigrateVM_Task(priv->vCenter, virtualMachine->obj,
+ &resourcePool, &hostSystem,
esxVI_VirtualMachineMovePriority_DefaultPriority,
esxVI_VirtualMachinePowerState_Undefined,
&task) < 0 ||
@@ -3245,11 +3184,8 @@ esxDomainMigratePerform(virDomainPtr domain,
result = 0;
cleanup:
- xmlFreeURI(xmlUri);
+ xmlFreeURI(parsedUri);
esxVI_ObjectContent_Free(&virtualMachine);
- esxVI_String_Free(&propertyNameList);
- esxVI_ObjectContent_Free(&hostSystem);
- esxVI_ManagedObjectReference_Free(&resourcePool);
esxVI_Event_Free(&eventList);
esxVI_ManagedObjectReference_Free(&task);
@@ -3276,41 +3212,19 @@ esxNodeGetFreeMemory(virConnectPtr conn)
unsigned long long result = 0;
esxPrivate *priv = conn->privateData;
esxVI_String *propertyNameList = NULL;
- esxVI_ObjectContent *hostSystem = NULL;
- esxVI_ManagedObjectReference *managedObjectReference = NULL;
esxVI_ObjectContent *resourcePool = NULL;
esxVI_DynamicProperty *dynamicProperty = NULL;
esxVI_ResourcePoolResourceUsage *resourcePoolResourceUsage = NULL;
- if (priv->host == NULL) {
- /* FIXME: Currently no host for a vpx:// connection */
- ESX_ERROR(VIR_ERR_INTERNAL_ERROR, "%s",
- _("Could not retrieve free memory for a vpx:// connection"));
- return 0;
- }
-
- if (esxVI_EnsureSession(priv->host) < 0) {
+ if (esxVI_EnsureSession(priv->primary) < 0) {
return 0;
}
- /* Lookup host system with its resource pool */
- if (esxVI_String_AppendValueToList(&propertyNameList, "parent") < 0 ||
- esxVI_LookupHostSystemByIp(priv->host, priv->host->ipAddress,
- propertyNameList, &hostSystem) < 0) {
- goto cleanup;
- }
-
- if (esxVI_LookupResourcePoolByHostSystem(priv->host, hostSystem,
- &managedObjectReference) < 0) {
- goto cleanup;
- }
-
- esxVI_String_Free(&propertyNameList);
-
/* Get memory usage of resource pool */
if (esxVI_String_AppendValueToList(&propertyNameList,
"runtime.memory") < 0 ||
- esxVI_LookupObjectContentByType(priv->host, managedObjectReference,
+ esxVI_LookupObjectContentByType(priv->primary,
+ priv->primary->computeResource->resourcePool,
"ResourcePool", propertyNameList,
esxVI_Boolean_False,
&resourcePool) < 0) {
@@ -3341,8 +3255,6 @@ esxNodeGetFreeMemory(virConnectPtr conn)
cleanup:
esxVI_String_Free(&propertyNameList);
- esxVI_ObjectContent_Free(&hostSystem);
- esxVI_ManagedObjectReference_Free(&managedObjectReference);
esxVI_ObjectContent_Free(&resourcePool);
esxVI_ResourcePoolResourceUsage_Free(&resourcePoolResourceUsage);
diff --git a/src/esx/esx_storage_driver.c b/src/esx/esx_storage_driver.c
index 9f25e02..b0ccc32 100644
--- a/src/esx/esx_storage_driver.c
+++ b/src/esx/esx_storage_driver.c
@@ -78,9 +78,7 @@ esxNumberOfStoragePools(virConnectPtr conn)
return -1;
}
- if (esxVI_LookupObjectContentByType(priv->primary, priv->primary->datacenter,
- "Datastore", NULL, esxVI_Boolean_True,
- &datastoreList) < 0) {
+ if (esxVI_LookupDatastoreList(priv->primary, NULL, &datastoreList) < 0) {
return -1;
}
@@ -123,10 +121,8 @@ esxListStoragePools(virConnectPtr conn, char **const names, int maxnames)
if (esxVI_String_AppendValueToList(&propertyNameList,
"summary.name") < 0 ||
- esxVI_LookupObjectContentByType(priv->primary, priv->primary->datacenter,
- "Datastore", propertyNameList,
- esxVI_Boolean_True,
- &datastoreList) < 0) {
+ esxVI_LookupDatastoreList(priv->primary, propertyNameList,
+ &datastoreList) < 0) {
goto cleanup;
}
@@ -308,15 +304,7 @@ esxStoragePoolLookupByUUID(virConnectPtr conn, const unsigned char *uuid)
char *name = NULL;
virStoragePoolPtr pool = NULL;
- /* FIXME: Need to handle this for a vpx:// connection */
- if (priv->host == NULL ||
- ! (priv->host->productVersion & esxVI_ProductVersion_ESX)) {
- ESX_ERROR(VIR_ERR_INTERNAL_ERROR, "%s",
- _("Lookup by UUID is supported on ESX only"));
- return NULL;
- }
-
- if (esxVI_EnsureSession(priv->host) < 0) {
+ if (esxVI_EnsureSession(priv->primary) < 0) {
return NULL;
}
@@ -334,7 +322,7 @@ esxStoragePoolLookupByUUID(virConnectPtr conn, const unsigned char *uuid)
* part of the 'summary.url' property if there is no name match.
*/
if (esxVI_String_AppendValueToList(&propertyNameList, "summary.name") < 0 ||
- esxVI_LookupDatastoreByName(priv->host, uuid_string,
+ esxVI_LookupDatastoreByName(priv->primary, uuid_string,
propertyNameList, &datastore,
esxVI_Occurrence_OptionalItem) < 0) {
goto cleanup;
@@ -350,7 +338,7 @@ esxStoragePoolLookupByUUID(virConnectPtr conn, const unsigned char *uuid)
if (datastore == NULL && STREQ(uuid_string + 17, "-0000-000000000000")) {
uuid_string[17] = '\0';
- if (esxVI_LookupDatastoreByName(priv->host, uuid_string,
+ if (esxVI_LookupDatastoreByName(priv->primary, uuid_string,
propertyNameList, &datastore,
esxVI_Occurrence_RequiredItem) < 0) {
goto cleanup;
diff --git a/src/esx/esx_vi.c b/src/esx/esx_vi.c
index 5695881..9f2ac36 100644
--- a/src/esx/esx_vi.c
+++ b/src/esx/esx_vi.c
@@ -101,10 +101,11 @@ ESX_VI__TEMPLATE__FREE(Context,
VIR_FREE(item->password);
esxVI_ServiceContent_Free(&item->service);
esxVI_UserSession_Free(&item->session);
- esxVI_ManagedObjectReference_Free(&item->datacenter);
- esxVI_ManagedObjectReference_Free(&item->vmFolder);
- esxVI_ManagedObjectReference_Free(&item->hostFolder);
+ esxVI_Datacenter_Free(&item->datacenter);
+ esxVI_ComputeResource_Free(&item->computeResource);
+ esxVI_HostSystem_Free(&item->hostSystem);
esxVI_SelectionSpec_Free(&item->fullTraversalSpecList);
+ esxVI_SelectionSpec_Free(&item->fullTraversalSpecList2);
});
static size_t
@@ -279,11 +280,6 @@ esxVI_Context_Connect(esxVI_Context *ctx, const char *url,
const char *ipAddress, const char *username,
const char *password, esxUtil_ParsedUri *parsedUri)
{
- int result = -1;
- esxVI_String *propertyNameList = NULL;
- esxVI_ObjectContent *datacenterList = NULL;
- esxVI_DynamicProperty *dynamicProperty = NULL;
-
if (ctx == NULL || url == NULL || ipAddress == NULL || username == NULL ||
password == NULL || ctx->url != NULL || ctx->service != NULL ||
ctx->curl_handle != NULL || ctx->curl_headers != NULL) {
@@ -293,7 +289,7 @@ esxVI_Context_Connect(esxVI_Context *ctx, const char *url,
if (esxVI_String_DeepCopyValue(&ctx->url, url) < 0 ||
esxVI_String_DeepCopyValue(&ctx->ipAddress, ipAddress) < 0) {
- goto cleanup;
+ return -1;
}
ctx->curl_handle = curl_easy_init();
@@ -301,7 +297,7 @@ esxVI_Context_Connect(esxVI_Context *ctx, const char *url,
if (ctx->curl_handle == NULL) {
ESX_VI_ERROR(VIR_ERR_INTERNAL_ERROR, "%s",
_("Could not initialize CURL"));
- goto cleanup;
+ return -1;
}
ctx->curl_headers = curl_slist_append(ctx->curl_headers, "Content-Type: "
@@ -321,7 +317,7 @@ esxVI_Context_Connect(esxVI_Context *ctx, const char *url,
if (ctx->curl_headers == NULL) {
ESX_VI_ERROR(VIR_ERR_INTERNAL_ERROR, "%s",
_("Could not build CURL header list"));
- goto cleanup;
+ return -1;
}
curl_easy_setopt(ctx->curl_handle, CURLOPT_URL, ctx->url);
@@ -357,7 +353,7 @@ esxVI_Context_Connect(esxVI_Context *ctx, const char *url,
if (virMutexInit(&ctx->curl_lock) < 0) {
ESX_VI_ERROR(VIR_ERR_INTERNAL_ERROR, "%s",
_("Could not initialize CURL mutex"));
- goto cleanup;
+ return -1;
}
ctx->username = strdup(username);
@@ -365,11 +361,11 @@ esxVI_Context_Connect(esxVI_Context *ctx, const char *url,
if (ctx->username == NULL || ctx->password == NULL) {
virReportOOMError();
- goto cleanup;
+ return -1;
}
if (esxVI_RetrieveServiceContent(ctx, &ctx->service) < 0) {
- goto cleanup;
+ return -1;
}
if (STREQ(ctx->service->about->apiType, "HostAgent") ||
@@ -389,7 +385,7 @@ esxVI_Context_Connect(esxVI_Context *ctx, const char *url,
ESX_VI_ERROR(VIR_ERR_INTERNAL_ERROR,
_("Expecting VI API major/minor version '2.5' or '4.x' "
"but found '%s'"), ctx->service->about->apiVersion);
- goto cleanup;
+ return -1;
}
if (STREQ(ctx->service->about->productLineId, "gsx")) {
@@ -399,7 +395,7 @@ esxVI_Context_Connect(esxVI_Context *ctx, const char *url,
ESX_VI_ERROR(VIR_ERR_INTERNAL_ERROR,
_("Expecting GSX major/minor version '2.0' but "
"found '%s'"), ctx->service->about->version);
- goto cleanup;
+ return -1;
}
} else if (STREQ(ctx->service->about->productLineId, "esx") ||
STREQ(ctx->service->about->productLineId, "embeddedEsx")) {
@@ -419,7 +415,7 @@ esxVI_Context_Connect(esxVI_Context *ctx, const char *url,
_("Expecting ESX major/minor version '3.5' or "
"'4.x' but found '%s'"),
ctx->service->about->version);
- goto cleanup;
+ return -1;
}
} else if (STREQ(ctx->service->about->productLineId, "vpx")) {
if (STRPREFIX(ctx->service->about->version, "2.5")) {
@@ -437,36 +433,67 @@ esxVI_Context_Connect(esxVI_Context *ctx, const char *url,
ESX_VI_ERROR(VIR_ERR_INTERNAL_ERROR,
_("Expecting VPX major/minor version '2.5' or '4.x' "
"but found '%s'"), ctx->service->about->version);
- goto cleanup;
+ return -1;
}
} else {
ESX_VI_ERROR(VIR_ERR_INTERNAL_ERROR,
_("Expecting product 'gsx' or 'esx' or 'embeddedEsx' "
"or 'vpx' but found '%s'"),
ctx->service->about->productLineId);
- goto cleanup;
+ return -1;
}
} else {
ESX_VI_ERROR(VIR_ERR_INTERNAL_ERROR,
_("Expecting VI API type 'HostAgent' or 'VirtualCenter' "
"but found '%s'"), ctx->service->about->apiType);
- goto cleanup;
+ return -1;
}
- if (esxVI_Login(ctx, username, password, NULL, &ctx->session) < 0) {
- goto cleanup;
+ if (esxVI_Login(ctx, username, password, NULL, &ctx->session) < 0 ||
+ esxVI_BuildFullTraversalSpecList(&ctx->fullTraversalSpecList) < 0) {
+ return -1;
}
- esxVI_BuildFullTraversalSpecList(&ctx->fullTraversalSpecList);
+ /* Folder -> parent (Folder, Datacenter) */
+ if (esxVI_BuildFullTraversalSpecItem(&ctx->fullTraversalSpecList2,
+ "managedEntityToParent",
+ "ManagedEntity", "parent",
+ NULL) < 0) {
+ return -1;
+ }
- if (esxVI_String_AppendValueListToList(&propertyNameList,
- "vmFolder\0"
- "hostFolder\0") < 0) {
- goto cleanup;
+ /* ComputeResource -> parent (Folder) */
+ if (esxVI_BuildFullTraversalSpecItem(&ctx->fullTraversalSpecList2,
+ "computeResourceToParent",
+ "ComputeResource", "parent",
+ "managedEntityToParent\0") < 0) {
+ return -1;
}
- /* Get pointer to Datacenter for later use */
- if (esxVI_LookupObjectContentByType(ctx, ctx->service->rootFolder,
+ return 0;
+}
+
+int
+esxVI_Context_LookupObjectsByPath(esxVI_Context *ctx,
+ esxUtil_ParsedUri *parsedUri)
+{
+ int result = -1;
+ esxVI_String *propertyNameList = NULL;
+ char *name = NULL;
+ esxVI_ObjectContent *datacenterList = NULL;
+ esxVI_ObjectContent *datacenter = NULL;
+ esxVI_ObjectContent *computeResourceList = NULL;
+ esxVI_ObjectContent *computeResource = NULL;
+ char *hostSystemName = NULL;
+ esxVI_ObjectContent *hostSystemList = NULL;
+ esxVI_ObjectContent *hostSystem = NULL;
+
+ /* Lookup Datacenter */
+ if (esxVI_String_AppendValueListToList(&propertyNameList,
+ "name\0"
+ "vmFolder\0"
+ "hostFolder\0") < 0 ||
+ esxVI_LookupObjectContentByType(ctx, ctx->service->rootFolder,
"Datacenter", propertyNameList,
esxVI_Boolean_True,
&datacenterList) < 0) {
@@ -475,36 +502,156 @@ esxVI_Context_Connect(esxVI_Context *ctx, const char *url,
if (datacenterList == NULL) {
ESX_VI_ERROR(VIR_ERR_INTERNAL_ERROR, "%s",
- _("Could not retrieve the 'datacenter' object from the "
- "VI host/center"));
+ _("Could not retrieve datacenter list"));
goto cleanup;
}
- ctx->datacenter = datacenterList->obj;
- datacenterList->obj = NULL;
+ if (parsedUri->path_datacenter != NULL) {
+ for (datacenter = datacenterList; datacenter != NULL;
+ datacenter = datacenter->_next) {
+ name = NULL;
- /* Get pointer to vmFolder and hostFolder for later use */
- for (dynamicProperty = datacenterList->propSet; dynamicProperty != NULL;
- dynamicProperty = dynamicProperty->_next) {
- if (STREQ(dynamicProperty->name, "vmFolder")) {
- if (esxVI_ManagedObjectReference_CastFromAnyType
- (dynamicProperty->val, &ctx->vmFolder)) {
+ if (esxVI_GetStringValue(datacenter, "name", &name,
+ esxVI_Occurrence_RequiredItem) < 0) {
goto cleanup;
}
- } else if (STREQ(dynamicProperty->name, "hostFolder")) {
- if (esxVI_ManagedObjectReference_CastFromAnyType
- (dynamicProperty->val, &ctx->hostFolder)) {
+
+ if (STREQ(name, parsedUri->path_datacenter)) {
+ break;
+ }
+ }
+
+ if (datacenter == NULL) {
+ ESX_VI_ERROR(VIR_ERR_INTERNAL_ERROR,
+ _("Could not find datacenter '%s'"),
+ parsedUri->path_datacenter);
+ goto cleanup;
+ }
+ } else {
+ datacenter = datacenterList;
+ }
+
+ if (esxVI_Datacenter_CastFromObjectContent(datacenter,
+ &ctx->datacenter) < 0) {
+ goto cleanup;
+ }
+
+ /* Lookup ComputeResource */
+ esxVI_String_Free(&propertyNameList);
+
+ if (esxVI_String_AppendValueListToList(&propertyNameList,
+ "name\0"
+ "host\0"
+ "resourcePool\0") < 0 ||
+ esxVI_LookupObjectContentByType(ctx, ctx->datacenter->hostFolder,
+ "ComputeResource", propertyNameList,
+ esxVI_Boolean_True,
+ &computeResourceList) < 0) {
+ goto cleanup;
+ }
+
+ if (computeResourceList == NULL) {
+ ESX_VI_ERROR(VIR_ERR_INTERNAL_ERROR, "%s",
+ _("Could not retrieve compute resource list"));
+ goto cleanup;
+ }
+
+ if (parsedUri->path_computeResource != NULL) {
+ for (computeResource = computeResourceList; computeResource != NULL;
+ computeResource = computeResource->_next) {
+ name = NULL;
+
+ if (esxVI_GetStringValue(computeResource, "name", &name,
+ esxVI_Occurrence_RequiredItem) < 0) {
goto cleanup;
}
- } else {
- VIR_WARN("Unexpected '%s' property", dynamicProperty->name);
+
+ if (STREQ(name, parsedUri->path_computeResource)) {
+ break;
+ }
}
+
+ if (computeResource == NULL) {
+ ESX_VI_ERROR(VIR_ERR_INTERNAL_ERROR,
+ _("Could not find compute resource '%s'"),
+ parsedUri->path_computeResource);
+ goto cleanup;
+ }
+ } else {
+ computeResource = computeResourceList;
+ }
+
+ if (esxVI_ComputeResource_CastFromObjectContent(computeResource,
+ &ctx->computeResource) < 0) {
+ goto cleanup;
+ }
+
+ if (ctx->computeResource->resourcePool == NULL) {
+ ESX_VI_ERROR(VIR_ERR_INTERNAL_ERROR, "%s",
+ _("Could not retrieve resource pool"));
+ goto cleanup;
+ }
+
+ /* Lookup HostSystem */
+ if (parsedUri->path_hostSystem == NULL &&
+ STREQ(ctx->computeResource->_reference->type,
+ "ClusterComputeResource")) {
+ ESX_VI_ERROR(VIR_ERR_INVALID_ARG, "%s",
+ _("Path has to specify the host system"));
+ goto cleanup;
}
- if (ctx->vmFolder == NULL || ctx->hostFolder == NULL) {
+ esxVI_String_Free(&propertyNameList);
+
+ if (esxVI_String_AppendValueListToList(&propertyNameList,
+ "name\0") < 0 ||
+ esxVI_LookupObjectContentByType(ctx, ctx->computeResource->_reference,
+ "HostSystem", propertyNameList,
+ esxVI_Boolean_True,
+ &hostSystemList) < 0) {
+ goto cleanup;
+ }
+
+ if (hostSystemList == NULL) {
ESX_VI_ERROR(VIR_ERR_INTERNAL_ERROR, "%s",
- _("The 'datacenter' object is missing the "
- "'vmFolder'/'hostFolder' property"));
+ _("Could not retrieve host system list"));
+ goto cleanup;
+ }
+
+ if (parsedUri->path_hostSystem != NULL ||
+ (parsedUri->path_computeResource != NULL &&
+ parsedUri->path_hostSystem == NULL)) {
+ if (parsedUri->path_hostSystem != NULL) {
+ hostSystemName = parsedUri->path_hostSystem;
+ } else {
+ hostSystemName = parsedUri->path_computeResource;
+ }
+
+ for (hostSystem = hostSystemList; hostSystem != NULL;
+ hostSystem = hostSystem->_next) {
+ name = NULL;
+
+ if (esxVI_GetStringValue(hostSystem, "name", &name,
+ esxVI_Occurrence_RequiredItem) < 0) {
+ goto cleanup;
+ }
+
+ if (STREQ(name, hostSystemName)) {
+ break;
+ }
+ }
+
+ if (hostSystem == NULL) {
+ ESX_VI_ERROR(VIR_ERR_INTERNAL_ERROR,
+ _("Could not find host system '%s'"), hostSystemName);
+ goto cleanup;
+ }
+ } else {
+ hostSystem = hostSystemList;
+ }
+
+ if (esxVI_HostSystem_CastFromObjectContent(hostSystem,
+ &ctx->hostSystem) < 0) {
goto cleanup;
}
@@ -513,6 +660,110 @@ esxVI_Context_Connect(esxVI_Context *ctx, const char *url,
cleanup:
esxVI_String_Free(&propertyNameList);
esxVI_ObjectContent_Free(&datacenterList);
+ esxVI_ObjectContent_Free(&computeResourceList);
+ esxVI_ObjectContent_Free(&hostSystemList);
+
+ return result;
+}
+
+int
+esxVI_Context_LookupObjectsByHostSystemIp(esxVI_Context *ctx,
+ const char *hostSystemIpAddress)
+{
+ int result = -1;
+ esxVI_String *propertyNameList = NULL;
+ esxVI_ManagedObjectReference *managedObjectReference = NULL;
+ esxVI_ObjectContent *hostSystem = NULL;
+ esxVI_ObjectContent *computeResource = NULL;
+ esxVI_ObjectContent *datacenter = NULL;
+
+ /* Lookup HostSystem */
+ if (esxVI_String_AppendValueListToList(&propertyNameList,
+ "name\0") < 0 ||
+ esxVI_FindByIp(ctx, NULL, hostSystemIpAddress, esxVI_Boolean_False,
+ &managedObjectReference) < 0 ||
+ esxVI_LookupObjectContentByType(ctx, managedObjectReference,
+ "HostSystem", propertyNameList,
+ esxVI_Boolean_False, &hostSystem) < 0) {
+ goto cleanup;
+ }
+
+ if (hostSystem == NULL) {
+ ESX_VI_ERROR(VIR_ERR_INTERNAL_ERROR, "%s",
+ _("Could not retrieve host system"));
+ goto cleanup;
+ }
+
+ if (esxVI_HostSystem_CastFromObjectContent(hostSystem,
+ &ctx->hostSystem) < 0) {
+ goto cleanup;
+ }
+
+ /* Lookup ComputeResource */
+ esxVI_String_Free(&propertyNameList);
+
+ if (esxVI_String_AppendValueListToList(&propertyNameList,
+ "name\0"
+ "host\0"
+ "resourcePool\0") < 0 ||
+ esxVI_LookupObjectContentByType(ctx, hostSystem->obj,
+ "ComputeResource", propertyNameList,
+ esxVI_Boolean_True,
+ &computeResource) < 0) {
+ goto cleanup;
+ }
+
+ if (computeResource == NULL) {
+ ESX_VI_ERROR(VIR_ERR_INTERNAL_ERROR, "%s",
+ _("Could not retrieve compute resource of host system"));
+ goto cleanup;
+ }
+
+ if (esxVI_ComputeResource_CastFromObjectContent(computeResource,
+ &ctx->computeResource) < 0) {
+ goto cleanup;
+ }
+
+ /* Lookup Datacenter */
+ esxVI_String_Free(&propertyNameList);
+
+ if (esxVI_String_AppendValueListToList(&propertyNameList,
+ "name\0"
+ "vmFolder\0"
+ "hostFolder\0") < 0 ||
+ esxVI_LookupObjectContentByType(ctx, computeResource->obj,
+ "Datacenter", propertyNameList,
+ /* FIXME: Passing Undefined here is a hack until
+ * esxVI_LookupObjectContentByType supports more
+ * fine grained traversal configuration. Looking
+ * up the Datacenter from the ComputeResource
+                                    * requires an upward search. Putting this in the
+ * list with the other downward traversal rules
+ * would result in cyclic searching */
+ esxVI_Boolean_Undefined,
+ &datacenter) < 0) {
+ goto cleanup;
+ }
+
+ if (datacenter == NULL) {
+ ESX_VI_ERROR(VIR_ERR_INTERNAL_ERROR, "%s",
+ _("Could not retrieve datacenter of compute resource"));
+ goto cleanup;
+ }
+
+ if (esxVI_Datacenter_CastFromObjectContent(datacenter,
+ &ctx->datacenter) < 0) {
+ goto cleanup;
+ }
+
+ result = 0;
+
+ cleanup:
+ esxVI_String_Free(&propertyNameList);
+ esxVI_ManagedObjectReference_Free(&managedObjectReference);
+ esxVI_ObjectContent_Free(&hostSystem);
+ esxVI_ObjectContent_Free(&computeResource);
+ esxVI_ObjectContent_Free(&datacenter);
return result;
}
@@ -1219,82 +1470,64 @@ esxVI_BuildFullTraversalSpecList(esxVI_SelectionSpec **fullTraversalSpecList)
return -1;
}
+ /* Folder -> childEntity (ManagedEntity) */
if (esxVI_BuildFullTraversalSpecItem(fullTraversalSpecList,
- "visitFolders",
+ "folderToChildEntity",
"Folder", "childEntity",
- "visitFolders\0"
- "datacenterToDatastore\0"
- "datacenterToVmFolder\0"
- "datacenterToHostFolder\0"
- "computeResourceToHost\0"
- "computeResourceToResourcePool\0"
- "hostSystemToVm\0"
- "resourcePoolToVm\0") < 0) {
+ "folderToChildEntity\0") < 0) {
goto failure;
}
- /* Traversal through datastore branch */
+ /* ComputeResource -> host (HostSystem) */
if (esxVI_BuildFullTraversalSpecItem(fullTraversalSpecList,
- "datacenterToDatastore",
- "Datacenter", "datastore",
+ "computeResourceToHost",
+ "ComputeResource", "host",
NULL) < 0) {
goto failure;
}
- /* Traversal through vmFolder branch */
- if (esxVI_BuildFullTraversalSpecItem(fullTraversalSpecList,
- "datacenterToVmFolder",
- "Datacenter", "vmFolder",
- "visitFolders\0") < 0) {
- goto failure;
- }
-
- /* Traversal through hostFolder branch */
+ /* ComputeResource -> datastore (Datastore) */
if (esxVI_BuildFullTraversalSpecItem(fullTraversalSpecList,
- "datacenterToHostFolder",
- "Datacenter", "hostFolder",
- "visitFolders\0") < 0) {
+ "computeResourceToDatastore",
+ "ComputeResource", "datastore",
+ NULL) < 0) {
goto failure;
}
- /* Traversal through host branch */
+ /* ResourcePool -> resourcePool (ResourcePool) */
if (esxVI_BuildFullTraversalSpecItem(fullTraversalSpecList,
- "computeResourceToHost",
- "ComputeResource", "host",
- NULL) < 0) {
+ "resourcePoolToResourcePool",
+ "ResourcePool", "resourcePool",
+ "resourcePoolToResourcePool\0"
+ "resourcePoolToVm\0") < 0) {
goto failure;
}
- /* Traversal through resourcePool branch */
+ /* ResourcePool -> vm (VirtualMachine) */
if (esxVI_BuildFullTraversalSpecItem(fullTraversalSpecList,
- "computeResourceToResourcePool",
- "ComputeResource", "resourcePool",
- "resourcePoolToResourcePool\0"
- "resourcePoolToVm\0") < 0) {
+ "resourcePoolToVm",
+ "ResourcePool", "vm", NULL) < 0) {
goto failure;
}
- /* Recurse through all resource pools */
+ /* HostSystem -> parent (ComputeResource) */
if (esxVI_BuildFullTraversalSpecItem(fullTraversalSpecList,
- "resourcePoolToResourcePool",
- "ResourcePool", "resourcePool",
- "resourcePoolToResourcePool\0"
- "resourcePoolToVm\0") < 0) {
+ "hostSystemToParent",
+ "HostSystem", "parent", NULL) < 0) {
goto failure;
}
- /* Recurse through all hosts */
+ /* HostSystem -> vm (VirtualMachine) */
if (esxVI_BuildFullTraversalSpecItem(fullTraversalSpecList,
"hostSystemToVm",
- "HostSystem", "vm",
- "visitFolders\0") < 0) {
+ "HostSystem", "vm", NULL) < 0) {
goto failure;
}
- /* Recurse through all resource pools */
+ /* HostSystem -> datastore (Datastore) */
if (esxVI_BuildFullTraversalSpecItem(fullTraversalSpecList,
- "resourcePoolToVm",
- "ResourcePool", "vm", NULL) < 0) {
+ "hostSystemToDatastore",
+ "HostSystem", "datastore", NULL) < 0) {
goto failure;
}
@@ -1422,6 +1655,11 @@ esxVI_LookupObjectContentByType(esxVI_Context *ctx,
return -1;
}
+ if (objectContentList == NULL || *objectContentList != NULL) {
+ ESX_VI_ERROR(VIR_ERR_INTERNAL_ERROR, "%s", _("Invalid argument"));
+ return -1;
+ }
+
if (esxVI_ObjectSpec_Alloc(&objectSpec) < 0) {
return -1;
}
@@ -1431,6 +1669,8 @@ esxVI_LookupObjectContentByType(esxVI_Context *ctx,
if (recurse == esxVI_Boolean_True) {
objectSpec->selectSet = ctx->fullTraversalSpecList;
+ } else if (recurse == esxVI_Boolean_Undefined) {
+ objectSpec->selectSet = ctx->fullTraversalSpecList2;
}
if (esxVI_PropertySpec_Alloc(&propertySpec) < 0) {
@@ -1669,9 +1909,8 @@ esxVI_LookupNumberOfDomainsByPowerState(esxVI_Context *ctx,
if (esxVI_String_AppendValueToList(&propertyNameList,
"runtime.powerState") < 0 ||
- esxVI_LookupObjectContentByType(ctx, ctx->vmFolder, "VirtualMachine",
- propertyNameList, esxVI_Boolean_True,
- &virtualMachineList) < 0) {
+ esxVI_LookupVirtualMachineList(ctx, propertyNameList,
+ &virtualMachineList) < 0) {
goto cleanup;
}
@@ -1965,125 +2204,28 @@ esxVI_GetSnapshotTreeBySnapshot
-int
-esxVI_LookupResourcePoolByHostSystem
- (esxVI_Context *ctx, esxVI_ObjectContent *hostSystem,
- esxVI_ManagedObjectReference **resourcePool)
+int esxVI_LookupHostSystemProperties(esxVI_Context *ctx,
+ esxVI_String *propertyNameList,
+ esxVI_ObjectContent **hostSystem)
{
- int result = -1;
- esxVI_String *propertyNameList = NULL;
- esxVI_DynamicProperty *dynamicProperty = NULL;
- esxVI_ManagedObjectReference *managedObjectReference = NULL;
- esxVI_ObjectContent *computeResource = NULL;
-
- if (resourcePool == NULL || *resourcePool != NULL) {
- ESX_VI_ERROR(VIR_ERR_INTERNAL_ERROR, "%s", _("Invalid argument"));
- return -1;
- }
-
- for (dynamicProperty = hostSystem->propSet; dynamicProperty != NULL;
- dynamicProperty = dynamicProperty->_next) {
- if (STREQ(dynamicProperty->name, "parent")) {
- if (esxVI_ManagedObjectReference_CastFromAnyType
- (dynamicProperty->val, &managedObjectReference) < 0) {
- goto cleanup;
- }
-
- break;
- } else {
- VIR_WARN("Unexpected '%s' property", dynamicProperty->name);
- }
- }
-
- if (managedObjectReference == NULL) {
- ESX_VI_ERROR(VIR_ERR_INTERNAL_ERROR, "%s",
- _("Could not retrieve compute resource of host system"));
- goto cleanup;
- }
-
- if (esxVI_String_AppendValueToList(&propertyNameList, "resourcePool") < 0 ||
- esxVI_LookupObjectContentByType(ctx, managedObjectReference,
- "ComputeResource", propertyNameList,
- esxVI_Boolean_False,
- &computeResource) < 0) {
- goto cleanup;
- }
-
- if (computeResource == NULL) {
- ESX_VI_ERROR(VIR_ERR_INTERNAL_ERROR, "%s",
- _("Could not retrieve compute resource of host system"));
- goto cleanup;
- }
-
- for (dynamicProperty = computeResource->propSet; dynamicProperty != NULL;
- dynamicProperty = dynamicProperty->_next) {
- if (STREQ(dynamicProperty->name, "resourcePool")) {
- if (esxVI_ManagedObjectReference_CastFromAnyType
- (dynamicProperty->val, resourcePool) < 0) {
- goto cleanup;
- }
-
- break;
- } else {
- VIR_WARN("Unexpected '%s' property", dynamicProperty->name);
- }
- }
-
- if ((*resourcePool) == NULL) {
- ESX_VI_ERROR(VIR_ERR_INTERNAL_ERROR, "%s",
- _("Could not retrieve resource pool of compute resource"));
- goto cleanup;
- }
-
- result = 0;
-
- cleanup:
- esxVI_String_Free(&propertyNameList);
- esxVI_ManagedObjectReference_Free(&managedObjectReference);
- esxVI_ObjectContent_Free(&computeResource);
-
- return result;
+ return esxVI_LookupObjectContentByType(ctx, ctx->hostSystem->_reference,
+ "HostSystem", propertyNameList,
+ esxVI_Boolean_False, hostSystem);
}
int
-esxVI_LookupHostSystemByIp(esxVI_Context *ctx, const char *ipAddress,
- esxVI_String *propertyNameList,
- esxVI_ObjectContent **hostSystem)
+esxVI_LookupVirtualMachineList(esxVI_Context *ctx,
+ esxVI_String *propertyNameList,
+ esxVI_ObjectContent **virtualMachineList)
{
- int result = -1;
- esxVI_ManagedObjectReference *managedObjectReference = NULL;
-
- if (hostSystem == NULL || *hostSystem != NULL) {
- ESX_VI_ERROR(VIR_ERR_INTERNAL_ERROR, "%s", _("Invalid argument"));
- return -1;
- }
-
- if (esxVI_FindByIp(ctx, ctx->datacenter, ipAddress, esxVI_Boolean_False,
- &managedObjectReference) < 0) {
- return -1;
- }
-
- if (managedObjectReference == NULL) {
- ESX_VI_ERROR(VIR_ERR_INTERNAL_ERROR,
- _("Could not find host system with IP address '%s'"),
- ipAddress);
- goto cleanup;
- }
-
- if (esxVI_LookupObjectContentByType(ctx, managedObjectReference,
- "HostSystem", propertyNameList,
- esxVI_Boolean_False, hostSystem) < 0) {
- goto cleanup;
- }
-
- result = 0;
-
- cleanup:
- esxVI_ManagedObjectReference_Free(&managedObjectReference);
-
- return result;
+ /* FIXME: Switch from ctx->hostSystem to ctx->computeResource->resourcePool
+ * for cluster support */
+ return esxVI_LookupObjectContentByType(ctx, ctx->hostSystem->_reference,
+ "VirtualMachine", propertyNameList,
+ esxVI_Boolean_True,
+ virtualMachineList);
}
@@ -2105,8 +2247,8 @@ esxVI_LookupVirtualMachineByUuid(esxVI_Context *ctx, const unsigned char *uuid,
virUUIDFormat(uuid, uuid_string);
- if (esxVI_FindByUuid(ctx, ctx->datacenter, uuid_string, esxVI_Boolean_True,
- &managedObjectReference) < 0) {
+ if (esxVI_FindByUuid(ctx, ctx->datacenter->_reference, uuid_string,
+ esxVI_Boolean_True, &managedObjectReference) < 0) {
return -1;
}
@@ -2158,10 +2300,8 @@ esxVI_LookupVirtualMachineByName(esxVI_Context *ctx, const char *name,
if (esxVI_String_DeepCopyList(&completePropertyNameList,
propertyNameList) < 0 ||
esxVI_String_AppendValueToList(&completePropertyNameList, "name") < 0 ||
- esxVI_LookupObjectContentByType(ctx, ctx->vmFolder, "VirtualMachine",
- completePropertyNameList,
- esxVI_Boolean_True,
- &virtualMachineList) < 0) {
+ esxVI_LookupVirtualMachineList(ctx, completePropertyNameList,
+ &virtualMachineList) < 0) {
goto cleanup;
}
@@ -2260,6 +2400,19 @@ esxVI_LookupVirtualMachineByUuidAndPrepareForTask
int
+esxVI_LookupDatastoreList(esxVI_Context *ctx, esxVI_String *propertyNameList,
+ esxVI_ObjectContent **datastoreList)
+{
+ /* FIXME: Switch from ctx->hostSystem to ctx->computeResource for cluster
+ * support */
+ return esxVI_LookupObjectContentByType(ctx, ctx->hostSystem->_reference,
+ "Datastore", propertyNameList,
+ esxVI_Boolean_True, datastoreList);
+}
+
+
+
+int
esxVI_LookupDatastoreByName(esxVI_Context *ctx, const char *name,
esxVI_String *propertyNameList,
esxVI_ObjectContent **datastore,
@@ -2285,14 +2438,9 @@ esxVI_LookupDatastoreByName(esxVI_Context *ctx, const char *name,
esxVI_String_AppendValueListToList(&completePropertyNameList,
"summary.accessible\0"
"summary.name\0"
- "summary.url\0") < 0) {
- goto cleanup;
- }
-
- if (esxVI_LookupObjectContentByType(ctx, ctx->datacenter, "Datastore",
- completePropertyNameList,
- esxVI_Boolean_True,
- &datastoreList) < 0) {
+ "summary.url\0") < 0 ||
+ esxVI_LookupDatastoreList(ctx, completePropertyNameList,
+ &datastoreList) < 0) {
goto cleanup;
}
diff --git a/src/esx/esx_vi.h b/src/esx/esx_vi.h
index 325ba69..a23c56d 100644
--- a/src/esx/esx_vi.h
+++ b/src/esx/esx_vi.h
@@ -158,10 +158,11 @@ struct _esxVI_Context {
esxVI_APIVersion apiVersion;
esxVI_ProductVersion productVersion;
esxVI_UserSession *session;
- esxVI_ManagedObjectReference *datacenter;
- esxVI_ManagedObjectReference *vmFolder;
- esxVI_ManagedObjectReference *hostFolder;
+ esxVI_Datacenter *datacenter;
+ esxVI_ComputeResource *computeResource;
+ esxVI_HostSystem *hostSystem;
esxVI_SelectionSpec *fullTraversalSpecList;
+ esxVI_SelectionSpec *fullTraversalSpecList2;
};
int esxVI_Context_Alloc(esxVI_Context **ctx);
@@ -169,6 +170,10 @@ void esxVI_Context_Free(esxVI_Context **ctx);
int esxVI_Context_Connect(esxVI_Context *ctx, const char *ipAddress,
const char *url, const char *username,
const char *password, esxUtil_ParsedUri *parsedUri);
+int esxVI_Context_LookupObjectsByPath(esxVI_Context *ctx,
+ esxUtil_ParsedUri *parsedUri);
+int esxVI_Context_LookupObjectsByHostSystemIp(esxVI_Context *ctx,
+ const char *hostSystemIpAddress);
int esxVI_Context_DownloadFile(esxVI_Context *ctx, const char *url,
char **content);
int esxVI_Context_UploadFile(esxVI_Context *ctx, const char *url,
@@ -327,13 +332,13 @@ int esxVI_GetSnapshotTreeBySnapshot
esxVI_ManagedObjectReference *snapshot,
esxVI_VirtualMachineSnapshotTree **snapshotTree);
-int esxVI_LookupResourcePoolByHostSystem
- (esxVI_Context *ctx, esxVI_ObjectContent *hostSystem,
- esxVI_ManagedObjectReference **resourcePool);
+int esxVI_LookupHostSystemProperties(esxVI_Context *ctx,
+ esxVI_String *propertyNameList,
+ esxVI_ObjectContent **hostSystem);
-int esxVI_LookupHostSystemByIp(esxVI_Context *ctx, const char *ipAddress,
- esxVI_String *propertyNameList,
- esxVI_ObjectContent **hostSystem);
+int esxVI_LookupVirtualMachineList(esxVI_Context *ctx,
+ esxVI_String *propertyNameList,
+ esxVI_ObjectContent **virtualMachineList);
int esxVI_LookupVirtualMachineByUuid(esxVI_Context *ctx,
const unsigned char *uuid,
@@ -351,6 +356,9 @@ int esxVI_LookupVirtualMachineByUuidAndPrepareForTask
esxVI_String *propertyNameList, esxVI_ObjectContent **virtualMachine,
esxVI_Boolean autoAnswer);
+int esxVI_LookupDatastoreList(esxVI_Context *ctx, esxVI_String *propertyNameList,
+ esxVI_ObjectContent **datastoreList);
+
int esxVI_LookupDatastoreByName(esxVI_Context *ctx, const char *name,
esxVI_String *propertyNameList,
esxVI_ObjectContent **datastore,
diff --git a/src/esx/esx_vmx.c b/src/esx/esx_vmx.c
index 8905ffd..807c6db 100644
--- a/src/esx/esx_vmx.c
+++ b/src/esx/esx_vmx.c
@@ -2711,6 +2711,10 @@ esxVMX_FormatConfig(esxVI_Context *ctx, virCapsPtr caps, virDomainDefPtr def,
case esxVI_ProductVersion_ESX40:
case esxVI_ProductVersion_ESX41:
case esxVI_ProductVersion_ESX4x:
+ /* FIXME: Putting VPX* here is a hack until a more fine grained system is in place */
+ case esxVI_ProductVersion_VPX40:
+ case esxVI_ProductVersion_VPX41:
+ case esxVI_ProductVersion_VPX4x:
virBufferAddLit(&buffer, "virtualHW.version = \"7\"\n");
break;
--
1.7.0.4
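A minimal sketch of how a caller might drive the two lookup entry points added
above. The wrapper function, its name, the assumed includes, and the rule for
choosing between the two entry points are illustrative assumptions, not part of
the patch; the real call site presumably lives in the ESX driver code:

#include "esx_vi.h"   /* declares the two lookup functions (see patch above) */
#include "esx_util.h" /* assumed header providing esxUtil_ParsedUri */

static int
exampleLookupContextObjects(esxVI_Context *ctx, esxUtil_ParsedUri *parsedUri,
                            const char *hostSystemIpAddress)
{
    if (hostSystemIpAddress != NULL) {
        /* Resolve Datacenter, ComputeResource and HostSystem starting from
         * the host's IP address (assumed here to be the vCenter-proxied
         * case). */
        return esxVI_Context_LookupObjectsByHostSystemIp(ctx,
                                                         hostSystemIpAddress);
    }

    /* Otherwise resolve the objects from the URI path components
     * (datacenter / compute resource / optional host system). */
    return esxVI_Context_LookupObjectsByPath(ctx, parsedUri);
}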
[libvirt] [PATCH] esx: Set storage pool target path to host.mountInfo.path
by Matthias Bolte
Now all storage pool types expose the target path.
---
src/esx/esx_storage_driver.c | 114 ++++++++++++++++++-----------------------
src/esx/esx_vi.c | 68 +++++++++++++++++++++++++
src/esx/esx_vi.h | 4 ++
src/esx/esx_vi_generator.py | 2 +-
4 files changed, 123 insertions(+), 65 deletions(-)
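Before the diff, a minimal usage sketch of the esxVI_LookupDatastoreHostMount()
helper this patch introduces. The wrapper function and its name are
hypothetical and only illustrate how the mount path (the value now reported as
the pool's target path) can be obtained:

#include <string.h>   /* strdup */
#include "esx_vi.h"

/* Hypothetical caller, not part of the patch: return a copy of the mount
 * path of a datastore on the connected host. */
static char *
exampleGetDatastoreMountPath(esxVI_Context *ctx,
                             esxVI_ObjectContent *datastore)
{
    char *path = NULL;
    esxVI_DatastoreHostMount *hostMount = NULL;

    if (esxVI_LookupDatastoreHostMount(ctx, datastore->obj,
                                       &hostMount) < 0) {
        return NULL;
    }

    /* mountInfo.path is e.g. "/vmfs/volumes/<uuid>"; the storage driver now
     * reports this value as the pool's target path. */
    path = strdup(hostMount->mountInfo->path);

    esxVI_DatastoreHostMount_Free(&hostMount);

    return path;
}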
diff --git a/src/esx/esx_storage_driver.c b/src/esx/esx_storage_driver.c
index e0680a1..4fcc4af 100644
--- a/src/esx/esx_storage_driver.c
+++ b/src/esx/esx_storage_driver.c
@@ -194,11 +194,8 @@ static virStoragePoolPtr
esxStoragePoolLookupByName(virConnectPtr conn, const char *name)
{
esxPrivate *priv = conn->storagePrivateData;
- esxVI_String *propertyNameList = NULL;
esxVI_ObjectContent *datastore = NULL;
- esxVI_DynamicProperty *dynamicProperty = NULL;
- esxVI_DatastoreHostMount *datastoreHostMountList = NULL;
- esxVI_DatastoreHostMount *datastoreHostMount = NULL;
+ esxVI_DatastoreHostMount *hostMount = NULL;
char *suffix = NULL;
int suffixLength;
char uuid_string[VIR_UUID_STRING_BUFLEN] = "00000000-00000000-0000-000000000000";
@@ -209,9 +206,7 @@ esxStoragePoolLookupByName(virConnectPtr conn, const char *name)
return NULL;
}
- if (esxVI_String_AppendValueToList(&propertyNameList, "host") < 0 ||
- esxVI_LookupDatastoreByName(priv->primary, name,
- propertyNameList, &datastore,
+ if (esxVI_LookupDatastoreByName(priv->primary, name, NULL, &datastore,
esxVI_Occurrence_RequiredItem) < 0) {
goto cleanup;
}
@@ -232,30 +227,12 @@ esxStoragePoolLookupByName(virConnectPtr conn, const char *name)
* The object name of virtual machine contains an integer, we use that as
* domain ID.
*/
- for (dynamicProperty = datastore->propSet; dynamicProperty != NULL;
- dynamicProperty = dynamicProperty->_next) {
- if (STREQ(dynamicProperty->name, "host")) {
- if (esxVI_DatastoreHostMount_CastListFromAnyType
- (dynamicProperty->val, &datastoreHostMountList) < 0) {
- goto cleanup;
- }
-
- break;
- }
+ if (esxVI_LookupDatastoreHostMount(priv->primary, datastore->obj,
+ &hostMount) < 0) {
+ goto cleanup;
}
- for (datastoreHostMount = datastoreHostMountList; datastoreHostMount != NULL;
- datastoreHostMount = datastoreHostMount->_next) {
- if (STRNEQ(priv->primary->hostSystem->_reference->value,
- datastoreHostMount->key->value)) {
- continue;
- }
-
- if ((suffix = STRSKIP(datastoreHostMount->mountInfo->path,
- "/vmfs/volumes/")) == NULL) {
- break;
- }
-
+ if ((suffix = STRSKIP(hostMount->mountInfo->path, "/vmfs/volumes/")) != NULL) {
suffixLength = strlen(suffix);
if ((suffixLength == 35 && /* = strlen("4b0beca7-7fd401f3-1d7f-000ae484a6a3") */
@@ -284,9 +261,8 @@ esxStoragePoolLookupByName(virConnectPtr conn, const char *name)
pool = virGetStoragePool(conn, name, uuid);
cleanup:
- esxVI_String_Free(&propertyNameList);
esxVI_ObjectContent_Free(&datastore);
- esxVI_DatastoreHostMount_Free(&datastoreHostMountList);
+ esxVI_DatastoreHostMount_Free(&hostMount);
return pool;
}
@@ -481,13 +457,12 @@ esxStoragePoolGetXMLDesc(virStoragePoolPtr pool, unsigned int flags)
esxPrivate *priv = pool->conn->storagePrivateData;
esxVI_String *propertyNameList = NULL;
esxVI_ObjectContent *datastore = NULL;
+ esxVI_DatastoreHostMount *hostMount = NULL;
esxVI_DynamicProperty *dynamicProperty = NULL;
esxVI_Boolean accessible = esxVI_Boolean_Undefined;
virStoragePoolDef def;
esxVI_DatastoreInfo *info = NULL;
- esxVI_LocalDatastoreInfo *localInfo = NULL;
esxVI_NasDatastoreInfo *nasInfo = NULL;
- esxVI_VmfsDatastoreInfo *vmfsInfo = NULL;
char *xml = NULL;
virCheckFlags(0, NULL);
@@ -507,13 +482,17 @@ esxStoragePoolGetXMLDesc(virStoragePoolPtr pool, unsigned int flags)
propertyNameList, &datastore,
esxVI_Occurrence_RequiredItem) < 0 ||
esxVI_GetBoolean(datastore, "summary.accessible",
- &accessible, esxVI_Occurrence_RequiredItem) < 0) {
+ &accessible, esxVI_Occurrence_RequiredItem) < 0 ||
+ esxVI_LookupDatastoreHostMount(priv->primary, datastore->obj,
+ &hostMount) < 0) {
goto cleanup;
}
def.name = pool->name;
memcpy(def.uuid, pool->uuid, VIR_UUID_BUFLEN);
+ def.target.path = hostMount->mountInfo->path;
+
if (accessible == esxVI_Boolean_True) {
for (dynamicProperty = datastore->propSet; dynamicProperty != NULL;
dynamicProperty = dynamicProperty->_next) {
@@ -531,46 +510,52 @@ esxStoragePoolGetXMLDesc(virStoragePoolPtr pool, unsigned int flags)
}
def.available = dynamicProperty->val->int64;
- } else if (STREQ(dynamicProperty->name, "info")) {
- if (esxVI_DatastoreInfo_CastFromAnyType(dynamicProperty->val,
- &info) < 0) {
- goto cleanup;
- }
}
}
def.allocation = def.capacity - def.available;
+ }
- /* See vSphere API documentation about HostDatastoreSystem for details */
- if ((localInfo = esxVI_LocalDatastoreInfo_DynamicCast(info)) != NULL) {
- def.type = VIR_STORAGE_POOL_DIR;
- def.target.path = localInfo->path;
- } else if ((nasInfo = esxVI_NasDatastoreInfo_DynamicCast(info)) != NULL) {
- def.type = VIR_STORAGE_POOL_NETFS;
- def.source.host.name = nasInfo->nas->remoteHost;
- def.source.dir = nasInfo->nas->remotePath;
-
- if (STRCASEEQ(nasInfo->nas->type, "NFS")) {
- def.source.format = VIR_STORAGE_POOL_NETFS_NFS;
- } else if (STRCASEEQ(nasInfo->nas->type, "CIFS")) {
- def.source.format = VIR_STORAGE_POOL_NETFS_CIFS;
- } else {
- ESX_ERROR(VIR_ERR_INTERNAL_ERROR,
- _("Datastore has unexpected type '%s'"),
- nasInfo->nas->type);
+ for (dynamicProperty = datastore->propSet; dynamicProperty != NULL;
+ dynamicProperty = dynamicProperty->_next) {
+ if (STREQ(dynamicProperty->name, "info")) {
+ if (esxVI_DatastoreInfo_CastFromAnyType(dynamicProperty->val,
+ &info) < 0) {
goto cleanup;
}
- } else if ((vmfsInfo = esxVI_VmfsDatastoreInfo_DynamicCast(info)) != NULL) {
- def.type = VIR_STORAGE_POOL_FS;
- /*
- * FIXME: I'm not sure how to represent the source and target of a
- * VMFS based datastore in libvirt terms
- */
+
+ break;
+ }
+ }
+
+ /* See vSphere API documentation about HostDatastoreSystem for details */
+ if (esxVI_LocalDatastoreInfo_DynamicCast(info) != NULL) {
+ def.type = VIR_STORAGE_POOL_DIR;
+ } else if ((nasInfo = esxVI_NasDatastoreInfo_DynamicCast(info)) != NULL) {
+ def.type = VIR_STORAGE_POOL_NETFS;
+ def.source.host.name = nasInfo->nas->remoteHost;
+ def.source.dir = nasInfo->nas->remotePath;
+
+ if (STRCASEEQ(nasInfo->nas->type, "NFS")) {
+ def.source.format = VIR_STORAGE_POOL_NETFS_NFS;
+ } else if (STRCASEEQ(nasInfo->nas->type, "CIFS")) {
+ def.source.format = VIR_STORAGE_POOL_NETFS_CIFS;
} else {
- ESX_ERROR(VIR_ERR_INTERNAL_ERROR, "%s",
- _("DatastoreInfo has unexpected type"));
+ ESX_ERROR(VIR_ERR_INTERNAL_ERROR,
+ _("Datastore has unexpected type '%s'"),
+ nasInfo->nas->type);
goto cleanup;
}
+ } else if (esxVI_VmfsDatastoreInfo_DynamicCast(info) != NULL) {
+ def.type = VIR_STORAGE_POOL_FS;
+ /*
+ * FIXME: I'm not sure how to represent the source and target of a
+ * VMFS based datastore in libvirt terms
+ */
+ } else {
+ ESX_ERROR(VIR_ERR_INTERNAL_ERROR, "%s",
+ _("DatastoreInfo has unexpected type"));
+ goto cleanup;
}
xml = virStoragePoolDefFormat(&def);
@@ -578,6 +563,7 @@ esxStoragePoolGetXMLDesc(virStoragePoolPtr pool, unsigned int flags)
cleanup:
esxVI_String_Free(&propertyNameList);
esxVI_ObjectContent_Free(&datastore);
+ esxVI_DatastoreHostMount_Free(&hostMount);
esxVI_DatastoreInfo_Free(&info);
return xml;
diff --git a/src/esx/esx_vi.c b/src/esx/esx_vi.c
index f421502..55c5246 100644
--- a/src/esx/esx_vi.c
+++ b/src/esx/esx_vi.c
@@ -2566,6 +2566,74 @@ esxVI_LookupDatastoreByAbsolutePath(esxVI_Context *ctx,
int
+esxVI_LookupDatastoreHostMount(esxVI_Context *ctx,
+ esxVI_ManagedObjectReference *datastore,
+ esxVI_DatastoreHostMount **hostMount)
+{
+ int result = -1;
+ esxVI_String *propertyNameList = NULL;
+ esxVI_ObjectContent *objectContent = NULL;
+ esxVI_DynamicProperty *dynamicProperty = NULL;
+ esxVI_DatastoreHostMount *hostMountList = NULL;
+ esxVI_DatastoreHostMount *candidate = NULL;
+
+ if (hostMount == NULL || *hostMount != NULL) {
+ ESX_VI_ERROR(VIR_ERR_INTERNAL_ERROR, "%s", _("Invalid argument"));
+ return -1;
+ }
+
+ if (esxVI_String_AppendValueToList(&propertyNameList, "host") < 0 ||
+ esxVI_LookupObjectContentByType(ctx, datastore, "Datastore",
+ propertyNameList, esxVI_Boolean_False,
+ &objectContent) < 0) {
+ goto cleanup;
+ }
+
+ for (dynamicProperty = objectContent->propSet; dynamicProperty != NULL;
+ dynamicProperty = dynamicProperty->_next) {
+ if (STREQ(dynamicProperty->name, "host")) {
+ if (esxVI_DatastoreHostMount_CastListFromAnyType
+ (dynamicProperty->val, &hostMountList) < 0) {
+ goto cleanup;
+ }
+
+ break;
+ } else {
+ VIR_WARN("Unexpected '%s' property", dynamicProperty->name);
+ }
+ }
+
+ for (candidate = hostMountList; candidate != NULL;
+ candidate = candidate->_next) {
+ if (STRNEQ(ctx->hostSystem->_reference->value, candidate->key->value)) {
+ continue;
+ }
+
+ if (esxVI_DatastoreHostMount_DeepCopy(hostMount, candidate) < 0) {
+ goto cleanup;
+ }
+
+ break;
+ }
+
+ if (*hostMount == NULL) {
+ ESX_VI_ERROR(VIR_ERR_INTERNAL_ERROR, "%s",
+ _("Could not lookup datastore host mount"));
+ goto cleanup;
+ }
+
+ result = 0;
+
+ cleanup:
+ esxVI_String_Free(&propertyNameList);
+ esxVI_ObjectContent_Free(&objectContent);
+ esxVI_DatastoreHostMount_Free(&hostMountList);
+
+ return result;
+}
+
+
+int
esxVI_LookupTaskInfoByTask(esxVI_Context *ctx,
esxVI_ManagedObjectReference *task,
esxVI_TaskInfo **taskInfo)
diff --git a/src/esx/esx_vi.h b/src/esx/esx_vi.h
index fdd15f1..d5dc9d5 100644
--- a/src/esx/esx_vi.h
+++ b/src/esx/esx_vi.h
@@ -370,6 +370,10 @@ int esxVI_LookupDatastoreByAbsolutePath(esxVI_Context *ctx,
esxVI_ObjectContent **datastore,
esxVI_Occurrence occurrence);
+int esxVI_LookupDatastoreHostMount(esxVI_Context *ctx,
+ esxVI_ManagedObjectReference *datastore,
+ esxVI_DatastoreHostMount **hostMount);
+
int esxVI_LookupTaskInfoByTask(esxVI_Context *ctx,
esxVI_ManagedObjectReference *task,
esxVI_TaskInfo **taskInfo);
diff --git a/src/esx/esx_vi_generator.py b/src/esx/esx_vi_generator.py
index e3c3d14..411fd80 100755
--- a/src/esx/esx_vi_generator.py
+++ b/src/esx/esx_vi_generator.py
@@ -1127,7 +1127,7 @@ additional_enum_features = { "ManagedEntityStatus" : Enum.FEATURE__ANY_TYPE
"VirtualMachinePowerState" : Enum.FEATURE__ANY_TYPE }
-additional_object_features = { "DatastoreHostMount" : Object.FEATURE__LIST | Object.FEATURE__ANY_TYPE,
+additional_object_features = { "DatastoreHostMount" : Object.FEATURE__DEEP_COPY | Object.FEATURE__LIST | Object.FEATURE__ANY_TYPE,
"DatastoreInfo" : Object.FEATURE__ANY_TYPE | Object.FEATURE__DYNAMIC_CAST,
"Event" : Object.FEATURE__LIST,
"FileInfo" : Object.FEATURE__DYNAMIC_CAST,
--
1.7.0.4