Re: [libvirt] [User question] Huge buffer size on KVM host
by Avi Kivity
On 08/16/2012 05:54 PM, Martin Wawro wrote:
>
> On Aug 15, 2012, at 2:57 PM, Avi Kivity wrote:
>
>>>
>>> We are using logical volumes and the cache is set to 'none'.
>>
>> Strange, that should work without any buffering.
>>
>> What are the contents of
>>
>> /sys/block/sda/queue/hw_sector_size
>>
>> and
>>
>> /sys/block/sda/queue/logical_block_size
>>
>> ?
>>
>
> Hi Avi,
>
> It seems that the kernel on that particular machine is too old; those entries
> are not present. We checked a comparable setup with a newer kernel, and both
> entries were set to 512.
>
> We also took a third, more thorough look at the caching. It turns out that
> virt-manager does not seem to honor the caching mode adjusted in the GUI.
> We disabled caching on all virtual devices for this particular VM, and checking
> with "ps -fxal" revealed that only one of those devices (a rather small one, too)
> had it set. We corrected this in the XML file directly, and the buffer size
> currently sits at around 1.8 GB after rebooting the VM (the only virtio device
> not having the cache=none option set is now the (non-mounted) cdrom).
>
cc += libvirt-list
Is there a reason that cdroms don't get cache=none?
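For reference, the per-device cache mode lives on the <driver> element of each
disk in the domain XML; a minimal sketch, with a hypothetical logical-volume path:

    <disk type='block' device='disk'>
      <driver name='qemu' type='raw' cache='none'/>
      <source dev='/dev/vg0/lv_guest'/>
      <target dev='vda' bus='virtio'/>
    </disk>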
--
error compiling committee.c: too many arguments to function
[libvirt] Libvir JNA report SIGSEGV
by Benjamin Wang (gendwang)
Hi,
I am trying to verify the JNA bindings in a concurrent situation but have met some problems. The following is my example code:
public static void testcase1() throws LibvirtException
{
    Connect conn = null;
    Connect conn1 = null;

    // connect to the hypervisor
    conn = new Connect("esx://10.74.125.68:443/?no_verify=1&transport=https",
                       new ConnectAuthDefault(), 0);
    System.out.println(conn.getVersion());

    // connect to the hypervisor
    conn1 = new Connect("esx://10.74.125.90:443/?no_verify=1&transport=https",
                        new ConnectAuthDefault(), 0);
    System.out.println(conn1.getVersion());

    while (true)
    {
        int[] array = new int[100000000];
        Long version = conn.getVersion();
        Long version1 = conn1.getVersion();

        try
        {
            Thread.sleep(1000);
        }
        catch (Exception e)
        {
        }
    }
}
When I add the line "int[] array = new int[100000000];", the following error is generated very quickly:
# An unexpected error has been detected by Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x0000003f9b07046e, pid=30049, tid=1109510464
#
# Java VM: OpenJDK 64-Bit Server VM (1.6.0-b09 mixed mode linux-amd64)
# Problematic frame:
# C [libc.so.6+0x7046e]
#
# An error report file with more information is saved as:
I have tried to write similar code in C, as follows. It works well.
static void virXenBasic_TC001(void)
{
    virConnectPtr conn = NULL;
    virConnectPtr conn1 = NULL;
    unsigned long version = 0;
    unsigned long version1 = 0;
    char *hostname = NULL;

    conn = virConnectOpenAuth("esx://10.74.125.21/?no_verify=1",
                              virConnectAuthPtrDefault, 0);
    if (conn == NULL) {
        fprintf(stderr, "Failed to open connection to esx://10.74.125.21\n");
        return;
    }

    conn1 = virConnectOpenAuth("esx://192.168.119.40/?no_verify=1",
                               virConnectAuthPtrDefault, 0);
    if (conn1 == NULL) {
        fprintf(stderr, "Failed to open connection to esx://192.168.119.40\n");
        return;
    }

    while (1) {
        hostname = malloc(sizeof(char) * 100000000);
        virConnectGetVersion(conn, &version);
        virConnectGetVersion(conn1, &version1);
        free(hostname);
        sleep(1);
    }

    return;
}
B.R.
Benjamin Wang
[libvirt] [PATCH 0/6 v4] Atomic API to list storage volumes
by Osier Yang
v3 - v4:
* Just rebased on top, and split each API out of the big set.
Osier Yang (6):
list: Define new API virStoragePoolListAllVolumes
list: Implement RPC calls for virStoragePoolListAllVolumes
list: Implement virStoragePoolListAllVolumes for storage driver
list: Implement virStoragePoolListAllVolumes for test driver
list: Use virStoragePoolListAllVolumes in virsh
list: Expose virStoragePoolListAllVolumes to Python binding
daemon/remote.c | 58 +++++++++
include/libvirt/libvirt.h.in | 3 +
python/generator.py | 1 +
python/libvirt-override-api.xml | 8 +-
python/libvirt-override-virStoragePool.py | 11 ++
python/libvirt-override.c | 50 ++++++++
src/driver.h | 6 +-
src/libvirt.c | 50 ++++++++
src/libvirt_public.syms | 1 +
src/remote/remote_driver.c | 66 ++++++++++
src/remote/remote_protocol.x | 14 ++-
src/remote_protocol-structs | 13 ++
src/storage/storage_driver.c | 67 ++++++++++
src/test/test_driver.c | 67 ++++++++++
tools/virsh-volume.c | 197 ++++++++++++++++++++++-------
15 files changed, 562 insertions(+), 50 deletions(-)
create mode 100644 python/libvirt-override-virStoragePool.py
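For context, a minimal sketch of how a client would consume the new API (pool
lookup and full error handling omitted):

    virStorageVolPtr *vols = NULL;
    int i, nvols;

    /* returns the number of volumes, or -1 on error */
    nvols = virStoragePoolListAllVolumes(pool, &vols, 0);
    if (nvols < 0)
        return -1;

    for (i = 0; i < nvols; i++) {
        printf("%s\n", virStorageVolGetName(vols[i]));
        virStorageVolFree(vols[i]);
    }
    free(vols);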
--
1.7.7.3
[libvirt] [PATCH V5] support offline migration
by liguang
The original migration code was not aware of the offline case, so
add code to support offline migration quietly (without disturbing
the existing migration paths) by passing the VIR_MIGRATE_OFFLINE
flag to the migration APIs; the migration process will then not be
confused by an offline domain and will not exit unexpectedly.
These changes do not take care of the disk images the domain
requires, since disk images can be transferred by other APIs as
suggested.
The result of the migration is therefore just to make the domain
definition available on the target side.
Signed-off-by: liguang <lig.fnst(a)cn.fujitsu.com>
---
include/libvirt/libvirt.h.in | 1 +
src/qemu/qemu_driver.c | 8 +++++
src/qemu/qemu_migration.c | 63 ++++++++++++++++++++++++++++++++++++-----
src/qemu/qemu_migration.h | 3 +-
tools/virsh-domain.c | 4 ++
5 files changed, 70 insertions(+), 9 deletions(-)
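With the new flag wired through virsh, an offline migration would be requested
roughly as follows (hypothetical host names):

    virsh migrate --offline guest1 qemu+ssh://desthost/system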
diff --git a/include/libvirt/libvirt.h.in b/include/libvirt/libvirt.h.in
index cfe5047..77df2ab 100644
--- a/include/libvirt/libvirt.h.in
+++ b/include/libvirt/libvirt.h.in
@@ -995,6 +995,7 @@ typedef enum {
* whole migration process; this will be used automatically
* when supported */
VIR_MIGRATE_UNSAFE = (1 << 9), /* force migration even if it is considered unsafe */
+ VIR_MIGRATE_OFFLINE = (1 << 10), /* offline migrate */
} virDomainMigrateFlags;
/* Domain migration. */
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index b12d9bc..0ed7053 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -9641,6 +9641,8 @@ qemuDomainMigrateBegin3(virDomainPtr domain,
}
if (!virDomainObjIsActive(vm)) {
+ if (flags & VIR_MIGRATE_OFFLINE)
+ goto offline;
virReportError(VIR_ERR_OPERATION_INVALID,
"%s", _("domain is not running"));
goto endjob;
@@ -9653,6 +9655,7 @@ qemuDomainMigrateBegin3(virDomainPtr domain,
if (qemuDomainCheckEjectableMedia(driver, vm, asyncJob) < 0)
goto endjob;
+offline:
if (!(xml = qemuMigrationBegin(driver, vm, xmlin, dname,
cookieout, cookieoutlen,
flags)))
@@ -9888,6 +9891,11 @@ qemuDomainMigrateConfirm3(virDomainPtr domain,
goto cleanup;
}
+ if (flags & VIR_MIGRATE_OFFLINE) {
+ ret = 0;
+ goto cleanup;
+ }
+
if (!qemuMigrationJobIsActive(vm, QEMU_ASYNC_JOB_MIGRATION_OUT))
goto cleanup;
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 1b21ef6..cf140d4 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -70,6 +70,7 @@ enum qemuMigrationCookieFlags {
QEMU_MIGRATION_COOKIE_FLAG_GRAPHICS,
QEMU_MIGRATION_COOKIE_FLAG_LOCKSTATE,
QEMU_MIGRATION_COOKIE_FLAG_PERSISTENT,
+ QEMU_MIGRATION_COOKIE_FLAG_OFFLINE,
QEMU_MIGRATION_COOKIE_FLAG_LAST
};
@@ -77,12 +78,13 @@ enum qemuMigrationCookieFlags {
VIR_ENUM_DECL(qemuMigrationCookieFlag);
VIR_ENUM_IMPL(qemuMigrationCookieFlag,
QEMU_MIGRATION_COOKIE_FLAG_LAST,
- "graphics", "lockstate", "persistent");
+ "graphics", "lockstate", "persistent", "offline");
enum qemuMigrationCookieFeatures {
QEMU_MIGRATION_COOKIE_GRAPHICS = (1 << QEMU_MIGRATION_COOKIE_FLAG_GRAPHICS),
QEMU_MIGRATION_COOKIE_LOCKSTATE = (1 << QEMU_MIGRATION_COOKIE_FLAG_LOCKSTATE),
QEMU_MIGRATION_COOKIE_PERSISTENT = (1 << QEMU_MIGRATION_COOKIE_FLAG_PERSISTENT),
+ QEMU_MIGRATION_COOKIE_OFFLINE = (1 << QEMU_MIGRATION_COOKIE_FLAG_OFFLINE),
};
typedef struct _qemuMigrationCookieGraphics qemuMigrationCookieGraphics;
@@ -439,6 +441,12 @@ qemuMigrationCookieXMLFormat(struct qemud_driver *driver,
virBufferAdjustIndent(buf, -2);
}
+ if (mig->flags & QEMU_MIGRATION_COOKIE_OFFLINE) {
+ virBufferAsprintf(buf, " <offline mig_ol='%d'>\n",
+ 1);
+ virBufferAddLit(buf, " </offline>\n");
+ }
+
virBufferAddLit(buf, "</qemu-migration>\n");
return 0;
}
@@ -662,6 +670,18 @@ qemuMigrationCookieXMLParse(qemuMigrationCookiePtr mig,
VIR_FREE(nodes);
}
+ if ((flags & QEMU_MIGRATION_COOKIE_OFFLINE) &&
+ virXPathBoolean("count(./offline) > 0", ctxt)) {
+ int offline = 0;
+ if (virXPathInt("string(./offline/@mig_ol)", ctxt, &offline) < 0) {
+ virReportError(VIR_ERR_INTERNAL_ERROR,
+ "%s", _("missing mig_ol attribute in migration data"));
+ goto error;
+ }
+ if (offline != 1)
+ mig->flags &= ~QEMU_MIGRATION_COOKIE_OFFLINE;
+ }
+
return 0;
error:
@@ -721,6 +741,10 @@ qemuMigrationBakeCookie(qemuMigrationCookiePtr mig,
qemuMigrationCookieAddPersistent(mig, dom) < 0)
return -1;
+ if (flags & QEMU_MIGRATION_COOKIE_OFFLINE) {
+ mig->flags |= QEMU_MIGRATION_COOKIE_OFFLINE;
+ }
+
if (!(*cookieout = qemuMigrationCookieXMLFormatStr(driver, mig)))
return -1;
@@ -1151,6 +1175,13 @@ char *qemuMigrationBegin(struct qemud_driver *driver,
QEMU_MIGRATION_COOKIE_LOCKSTATE) < 0)
goto cleanup;
+ if (flags & VIR_MIGRATE_OFFLINE) {
+ if (qemuMigrationBakeCookie(mig, driver, vm,
+ cookieout, cookieoutlen,
+ QEMU_MIGRATION_COOKIE_OFFLINE) < 0)
+ goto cleanup;
+ }
+
if (xmlin) {
if (!(def = virDomainDefParseString(driver->caps, xmlin,
QEMU_EXPECTED_VIRT_TYPES,
@@ -1314,6 +1345,15 @@ qemuMigrationPrepareAny(struct qemud_driver *driver,
goto endjob;
}
+ if (!(mig = qemuMigrationEatCookie(driver, vm, cookiein, cookieinlen,
+ QEMU_MIGRATION_COOKIE_OFFLINE)))
+ return ret;
+
+ if (mig->flags & QEMU_MIGRATION_COOKIE_OFFLINE) {
+ ret = 0;
+ goto cleanup;
+ }
+
/* Start the QEMU daemon, with the same command-line arguments plus
* -incoming $migrateFrom
*/
@@ -1856,7 +1896,8 @@ qemuMigrationRun(struct qemud_driver *driver,
virLockManagerPluginGetName(driver->lockManager));
return -1;
}
-
+ if (flags & VIR_MIGRATE_OFFLINE)
+ return 0;
if (!(mig = qemuMigrationEatCookie(driver, vm, cookiein, cookieinlen,
QEMU_MIGRATION_COOKIE_GRAPHICS)))
goto cleanup;
@@ -2372,6 +2413,8 @@ static int doPeer2PeerMigrate3(struct qemud_driver *driver,
qemuDomainObjExitRemoteWithDriver(driver, vm);
}
VIR_FREE(dom_xml);
+ if (flags & VIR_MIGRATE_OFFLINE)
+ goto cleanup;
if (ret == -1)
goto cleanup;
@@ -2477,7 +2520,7 @@ finish:
vm->def->name);
cleanup:
- if (ddomain) {
+ if (ddomain || (flags & VIR_MIGRATE_OFFLINE)) {
virObjectUnref(ddomain);
ret = 0;
} else {
@@ -2554,10 +2597,9 @@ static int doPeer2PeerMigrate(struct qemud_driver *driver,
}
/* domain may have been stopped while we were talking to remote daemon */
- if (!virDomainObjIsActive(vm)) {
+ if (!virDomainObjIsActive(vm) && !(flags & VIR_MIGRATE_OFFLINE)) {
virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
_("guest unexpectedly quit"));
- goto cleanup;
}
/* Change protection is only required on the source side (us), and
@@ -2617,7 +2659,7 @@ qemuMigrationPerformJob(struct qemud_driver *driver,
if (qemuMigrationJobStart(driver, vm, QEMU_ASYNC_JOB_MIGRATION_OUT) < 0)
goto cleanup;
- if (!virDomainObjIsActive(vm)) {
+ if (!virDomainObjIsActive(vm) && !(flags & VIR_MIGRATE_OFFLINE)) {
virReportError(VIR_ERR_OPERATION_INVALID,
"%s", _("domain is not running"));
goto endjob;
@@ -2941,6 +2983,8 @@ qemuMigrationFinish(struct qemud_driver *driver,
*/
if (retcode == 0) {
if (!virDomainObjIsActive(vm)) {
+ if (flags & VIR_MIGRATE_OFFLINE)
+ goto offline;
virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
_("guest unexpectedly quit"));
goto endjob;
@@ -3038,7 +3082,7 @@ qemuMigrationFinish(struct qemud_driver *driver,
goto endjob;
}
}
-
+ offline:
dom = virGetDomain (dconn, vm->def->name, vm->def->uuid);
event = virDomainEventNewFromObj(vm,
@@ -3120,7 +3164,10 @@ int qemuMigrationConfirm(struct qemud_driver *driver,
if (!(mig = qemuMigrationEatCookie(driver, vm, cookiein, cookieinlen, 0)))
return -1;
-
+ if (flags & VIR_MIGRATE_OFFLINE) {
+ rv = 0;
+ goto cleanup;
+ }
/* Did the migration go as planned? If yes, kill off the
* domain object, but if no, resume CPUs
*/
diff --git a/src/qemu/qemu_migration.h b/src/qemu/qemu_migration.h
index 1740204..2bcaea0 100644
--- a/src/qemu/qemu_migration.h
+++ b/src/qemu/qemu_migration.h
@@ -36,7 +36,8 @@
VIR_MIGRATE_NON_SHARED_DISK | \
VIR_MIGRATE_NON_SHARED_INC | \
VIR_MIGRATE_CHANGE_PROTECTION | \
- VIR_MIGRATE_UNSAFE)
+ VIR_MIGRATE_UNSAFE | \
+ VIR_MIGRATE_OFFLINE)
enum qemuMigrationJobPhase {
QEMU_MIGRATION_PHASE_NONE = 0,
diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c
index 4684466..4cd4687 100644
--- a/tools/virsh-domain.c
+++ b/tools/virsh-domain.c
@@ -6525,6 +6525,7 @@ static const vshCmdOptDef opts_migrate[] = {
{"dname", VSH_OT_DATA, 0, N_("rename to new name during migration (if supported)")},
{"timeout", VSH_OT_INT, 0, N_("force guest to suspend if live migration exceeds timeout (in seconds)")},
{"xml", VSH_OT_STRING, 0, N_("filename containing updated XML for the target")},
+ {"offline", VSH_OT_BOOL, 0, N_("for offline migration")},
{NULL, 0, 0, NULL}
};
@@ -6591,6 +6592,9 @@ doMigrate(void *opaque)
if (vshCommandOptBool(cmd, "unsafe"))
flags |= VIR_MIGRATE_UNSAFE;
+ if (vshCommandOptBool(cmd, "offline"))
+ flags |= VIR_MIGRATE_OFFLINE;
+
if (xmlfile &&
virFileReadAll(xmlfile, 8192, &xml) < 0) {
vshError(ctl, _("file '%s' doesn't exist"), xmlfile);
--
1.7.2.5
[libvirt] [PATCHv6 0/2] Implementation of virConnectListAllDomains() for esx and hyperv
by Peter Krempa
Yet another respin, updated and rebased to the current head.
Both drivers are compile-tested, but I don't have the infrastructure to do a
functional test.
Peter Krempa (2):
hyperv: Add implementation for virConnectListAllDomains()
esx: Add implementation for virConnectListAllDomains()
src/esx/esx_driver.c | 194 +++++++++++++++++++++++++++++++++++++++++++++
src/hyperv/hyperv_driver.c | 135 +++++++++++++++++++++++++++++++
2 files changed, 329 insertions(+)
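For reference, a minimal sketch of a caller using the new API to list only
active domains (error handling omitted):

    virDomainPtr *domains = NULL;
    int i, ndomains;

    ndomains = virConnectListAllDomains(conn, &domains,
                                        VIR_CONNECT_LIST_DOMAINS_ACTIVE);
    if (ndomains < 0)
        return -1;

    for (i = 0; i < ndomains; i++) {
        printf("%s\n", virDomainGetName(domains[i]));
        virDomainFree(domains[i]);
    }
    free(domains);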
--
1.7.12
[libvirt] [PATCH] esx: Fix and improve esxListAllDomains function
by Matthias Bolte
Avoid requesting information such as identity or power state when it
is not necessary.
Lookup virtual machine list with the required fields (configStatus,
name, and config.uuid) to make esxVI_GetVirtualMachineIdentity work.
No need to call esxVI_GetNumberOfSnapshotTrees. rootSnapshotTreeList
can be tested for emptiness by checking it for NULL.
esxVI_LookupRootSnapshotTreeList already does the error reporting,
don't overwrite it.
Check if autostart is enabled at all before looking up the individual
autostart setting of a virtual machine.
Reorder VIR_EXPAND_N(doms, ndoms, 1) to avoid leaking the result of
the call to virGetDomain if VIR_EXPAND_N fails.
If virGetDomain fails it already reports an error, don't overwrite it
with an OOM error.
All items in doms up to the count-th one are valid, no need to double
check before freeing them.
Finally, don't leak autoStartDefaults and powerInfoList.
---
src/esx/esx_driver.c | 116 ++++++++++++++++++++++++++++++++-----------------
1 files changed, 76 insertions(+), 40 deletions(-)
diff --git a/src/esx/esx_driver.c b/src/esx/esx_driver.c
index 28e2c65..28f3386 100644
--- a/src/esx/esx_driver.c
+++ b/src/esx/esx_driver.c
@@ -5010,13 +5010,15 @@ esxListAllDomains(virConnectPtr conn,
{
int ret = -1;
esxPrivate *priv = conn->privateData;
+ bool needIdentity;
+ bool needPowerState;
virDomainPtr dom;
virDomainPtr *doms = NULL;
size_t ndoms = 0;
+ esxVI_String *propertyNameList = NULL;
esxVI_ObjectContent *virtualMachineList = NULL;
esxVI_ObjectContent *virtualMachine = NULL;
- esxVI_String *propertyNameList = NULL;
- esxVI_AutoStartDefaults *autostart_defaults = NULL;
+ esxVI_AutoStartDefaults *autoStartDefaults = NULL;
esxVI_VirtualMachinePowerState powerState;
esxVI_AutoStartPowerInfo *powerInfoList = NULL;
esxVI_AutoStartPowerInfo *powerInfo = NULL;
@@ -5025,7 +5027,6 @@ esxListAllDomains(virConnectPtr conn,
int id;
unsigned char uuid[VIR_UUID_BUFLEN];
int count = 0;
- int snapshotCount;
bool autostart;
int state;
@@ -5035,7 +5036,7 @@ esxListAllDomains(virConnectPtr conn,
* - persistence: all esx machines are persistent
* - managed save: esx doesn't support managed save
*/
- if ((MATCH(VIR_CONNECT_LIST_DOMAINS_TRANSIENT) &&
+ if ((MATCH(VIR_CONNECT_LIST_DOMAINS_TRANSIENT) &&
!MATCH(VIR_CONNECT_LIST_DOMAINS_PERSISTENT)) ||
(MATCH(VIR_CONNECT_LIST_DOMAINS_MANAGEDSAVE) &&
!MATCH(VIR_CONNECT_LIST_DOMAINS_NO_MANAGEDSAVE))) {
@@ -5047,23 +5048,49 @@ esxListAllDomains(virConnectPtr conn,
goto cleanup;
}
- if (esxVI_EnsureSession(priv->primary) < 0)
+ if (esxVI_EnsureSession(priv->primary) < 0)
return -1;
/* check system default autostart value */
if (MATCH(VIR_CONNECT_LIST_DOMAINS_FILTERS_AUTOSTART)) {
if (esxVI_LookupAutoStartDefaults(priv->primary,
- &autostart_defaults) < 0)
+ &autoStartDefaults) < 0) {
goto cleanup;
+ }
+
+ if (autoStartDefaults->enabled == esxVI_Boolean_True) {
+ if (esxVI_LookupAutoStartPowerInfoList(priv->primary,
+ &powerInfoList) < 0) {
+ goto cleanup;
+ }
+ }
+ }
- if (esxVI_LookupAutoStartPowerInfoList(priv->primary,
- &powerInfoList) < 0)
+ needIdentity = MATCH(VIR_CONNECT_LIST_DOMAINS_FILTERS_SNAPSHOT) ||
+ domains != NULL;
+
+ if (needIdentity) {
+ /* Request required data for esxVI_GetVirtualMachineIdentity */
+ if (esxVI_String_AppendValueListToList(&propertyNameList,
+ "configStatus\0"
+ "name\0"
+ "config.uuid\0") < 0) {
goto cleanup;
+ }
}
- if (esxVI_String_AppendValueToList(&propertyNameList,
- "runtime.powerState") < 0 ||
- esxVI_LookupVirtualMachineList(priv->primary, propertyNameList,
+ needPowerState = MATCH(VIR_CONNECT_LIST_DOMAINS_FILTERS_ACTIVE) ||
+ MATCH(VIR_CONNECT_LIST_DOMAINS_FILTERS_STATE) ||
+ domains != NULL;
+
+ if (needPowerState) {
+ if (esxVI_String_AppendValueToList(&propertyNameList,
+ "runtime.powerState") < 0) {
+ goto cleanup;
+ }
+ }
+
+ if (esxVI_LookupVirtualMachineList(priv->primary, propertyNameList,
&virtualMachineList) < 0)
goto cleanup;
@@ -5075,12 +5102,21 @@ esxListAllDomains(virConnectPtr conn,
for (virtualMachine = virtualMachineList; virtualMachine != NULL;
virtualMachine = virtualMachine->_next) {
+ if (needIdentity) {
+ VIR_FREE(name);
- VIR_FREE(name);
+ if (esxVI_GetVirtualMachineIdentity(virtualMachine, &id,
+ &name, uuid) < 0) {
+ goto cleanup;
+ }
+ }
- if (esxVI_GetVirtualMachineIdentity(virtualMachine, &id, &name, uuid) < 0 ||
- esxVI_GetVirtualMachinePowerState(virtualMachine, &powerState) < 0)
- goto cleanup;
+ if (needPowerState) {
+ if (esxVI_GetVirtualMachinePowerState(virtualMachine,
+ &powerState) < 0) {
+ goto cleanup;
+ }
+ }
/* filter by active state */
if (MATCH(VIR_CONNECT_LIST_DOMAINS_FILTERS_ACTIVE) &&
@@ -5092,23 +5128,17 @@ esxListAllDomains(virConnectPtr conn,
/* filter by snapshot existence */
if (MATCH(VIR_CONNECT_LIST_DOMAINS_FILTERS_SNAPSHOT)) {
+ esxVI_VirtualMachineSnapshotTree_Free(&rootSnapshotTreeList);
+
if (esxVI_LookupRootSnapshotTreeList(priv->primary, uuid,
&rootSnapshotTreeList) < 0) {
- virReportError(VIR_ERR_INTERNAL_ERROR,
- _("Couldn't retrieve snapshot list for "
- "domain '%s'"), name);
goto cleanup;
}
- snapshotCount = esxVI_GetNumberOfSnapshotTrees(rootSnapshotTreeList,
- true, false);
-
- esxVI_VirtualMachineSnapshotTree_Free(&rootSnapshotTreeList);
-
if (!((MATCH(VIR_CONNECT_LIST_DOMAINS_HAS_SNAPSHOT) &&
- snapshotCount > 0) ||
+ rootSnapshotTreeList != NULL) ||
(MATCH(VIR_CONNECT_LIST_DOMAINS_NO_SNAPSHOT) &&
- snapshotCount == 0)))
+ rootSnapshotTreeList == NULL)))
continue;
}
@@ -5116,19 +5146,18 @@ esxListAllDomains(virConnectPtr conn,
if (MATCH(VIR_CONNECT_LIST_DOMAINS_FILTERS_AUTOSTART)) {
autostart = false;
- for (powerInfo = powerInfoList; powerInfo != NULL;
- powerInfo = powerInfo->_next) {
- if (STREQ(powerInfo->key->value, virtualMachine->obj->value)) {
- if (STRCASEEQ(powerInfo->startAction, "powerOn"))
- autostart = true;
+ if (autoStartDefaults->enabled == esxVI_Boolean_True) {
+ for (powerInfo = powerInfoList; powerInfo != NULL;
+ powerInfo = powerInfo->_next) {
+ if (STREQ(powerInfo->key->value, virtualMachine->obj->value)) {
+ if (STRCASEEQ(powerInfo->startAction, "powerOn"))
+ autostart = true;
- break;
+ break;
+ }
}
}
- autostart = autostart &&
- autostart_defaults->enabled == esxVI_Boolean_True;
-
if (!((MATCH(VIR_CONNECT_LIST_DOMAINS_AUTOSTART) &&
autostart) ||
(MATCH(VIR_CONNECT_LIST_DOMAINS_NO_AUTOSTART) &&
@@ -5139,6 +5168,7 @@ esxListAllDomains(virConnectPtr conn,
/* filter by domain state */
if (MATCH(VIR_CONNECT_LIST_DOMAINS_FILTERS_STATE)) {
state = esxVI_VirtualMachinePowerState_ConvertToLibvirt(powerState);
+
if (!((MATCH(VIR_CONNECT_LIST_DOMAINS_RUNNING) &&
state == VIR_DOMAIN_RUNNING) ||
(MATCH(VIR_CONNECT_LIST_DOMAINS_PAUSED) &&
@@ -5158,17 +5188,18 @@ esxListAllDomains(virConnectPtr conn,
continue;
}
- if (!(dom = virGetDomain(conn, name, uuid)))
+ if (VIR_EXPAND_N(doms, ndoms, 1) < 0)
goto no_memory;
+ if (!(dom = virGetDomain(conn, name, uuid)))
+ goto cleanup;
+
/* Only running/suspended virtual machines have an ID != -1 */
if (powerState != esxVI_VirtualMachinePowerState_PoweredOff)
dom->id = id;
else
dom->id = -1;
- if (VIR_EXPAND_N(doms, ndoms, 1) < 0)
- goto no_memory;
doms[count++] = dom;
}
@@ -5180,14 +5211,19 @@ esxListAllDomains(virConnectPtr conn,
cleanup:
if (doms) {
for (id = 0; id < count; id++) {
- if (doms[id])
- virDomainFree(doms[id]);
+ virDomainFree(doms[id]);
}
+
+ VIR_FREE(doms);
}
- VIR_FREE(doms);
+
VIR_FREE(name);
+ esxVI_AutoStartDefaults_Free(&autoStartDefaults);
+ esxVI_AutoStartPowerInfo_Free(&powerInfoList);
esxVI_String_Free(&propertyNameList);
esxVI_ObjectContent_Free(&virtualMachineList);
+ esxVI_VirtualMachineSnapshotTree_Free(&rootSnapshotTreeList);
+
return ret;
no_memory:
--
1.7.4.1
[libvirt] [PATCH] esx: Remove unused variable from esxDomainGetAutostart
by Matthias Bolte
---
I've pushed this one under the trivial rule.
src/esx/esx_driver.c | 2 --
1 files changed, 0 insertions(+), 2 deletions(-)
diff --git a/src/esx/esx_driver.c b/src/esx/esx_driver.c
index 28e2c65..991f03c 100644
--- a/src/esx/esx_driver.c
+++ b/src/esx/esx_driver.c
@@ -3363,7 +3363,6 @@ esxDomainGetAutostart(virDomainPtr domain, int *autostart)
esxPrivate *priv = domain->conn->privateData;
esxVI_AutoStartDefaults *defaults = NULL;
esxVI_String *propertyNameList = NULL;
- esxVI_ObjectContent *hostAutoStartManager = NULL;
esxVI_AutoStartPowerInfo *powerInfo = NULL;
esxVI_AutoStartPowerInfo *powerInfoList = NULL;
esxVI_ObjectContent *virtualMachine = NULL;
@@ -3417,7 +3416,6 @@ esxDomainGetAutostart(virDomainPtr domain, int *autostart)
cleanup:
esxVI_String_Free(&propertyNameList);
- esxVI_ObjectContent_Free(&hostAutoStartManager);
esxVI_AutoStartDefaults_Free(&defaults);
esxVI_AutoStartPowerInfo_Free(&powerInfoList);
esxVI_ObjectContent_Free(&virtualMachine);
--
1.7.4.1
[libvirt] SCSI command passthrough
by Geert Jansen
Hi,
I'm trying to pass SCSI commands through from a guest to the host. Both
guest and host are RHEL 6.3. The relevant section of my XML is:
<devices>
<disk type='block' device='lun'>
<driver name='qemu' type='raw' cache='none'/>
<source dev='/dev/sdb'/>
<target dev='sdb' bus='scsi'/>
<address type='drive' controller='0' bus='0' target='1' unit='0'/>
</disk>
<controller type='scsi' index='0' model='virtio-scsi'>
<address type='pci' domain='0x0000' bus='0x00' slot='0x05'
function='0x0'/>
</controller>
...
</devices>
Commands that are whitelisted by the host kernel come through (e.g.
"sg_inq"), but other commands don't (e.g. "sg_persist").
I need sg_persist, so I tried making qemu-kvm setuid root. This works.
Is there a better way to allow arbitrary SCSI commands, preferably on a
per-VM basis, rather than making qemu setuid root?
Regards,
Geert
[libvirt] [PATCH 0/4] Fix PM events
by Jiri Denemark
PM related events suffered from quite a lot of issues. The only thing that
actually worked was STARTED life cycle event with WAKEUP detail.
Jiri Denemark (4):
Fix docs for PM event callbacks
Fix PMSuspend and PMWakeup events
Add PMSUSPENDED life cycle event
examples: Fix event detail printing in python test
daemon/remote.c | 2 ++
examples/domain-events/events-c/event-test.c | 14 ++++++++++++--
examples/domain-events/events-python/event-test.py | 8 +++++---
include/libvirt/libvirt.h.in | 20 ++++++++++++++++----
python/libvirt-override.c | 4 ++--
src/qemu/qemu_process.c | 12 ++++++++++--
6 files changed, 47 insertions(+), 13 deletions(-)
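For context, a minimal sketch of registering for the fixed PMSuspend event
(assumes an event loop has already been set up, e.g. with
virEventRegisterDefaultImpl):

    static void
    pmSuspendCb(virConnectPtr conn, virDomainPtr dom, int reason, void *opaque)
    {
        printf("Domain %s was PM-suspended (reason %d)\n",
               virDomainGetName(dom), reason);
    }

    /* ... after connecting: */
    virConnectDomainEventRegisterAny(conn, NULL,
                                     VIR_DOMAIN_EVENT_ID_PMSUSPEND,
                                     VIR_DOMAIN_EVENT_CALLBACK(pmSuspendCb),
                                     NULL, NULL);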
--
1.7.12
[libvirt] [PATCH] build: fix build on older gcc
by Eric Blake
On RHEL 6.2, gcc 4.4.6 complains:
cc1: warning: command line option "-Wenum-compare" is valid for C++/ObjC++ but not for C
which in turn breaks a -Werror build.
Meanwhile, in Fedora 17, gcc 4.7.0, -Wenum-compare has been enhanced
to also work on C, but at the same time, it is documented that -Wall
now implicitly includes -Wenum-compare.
Therefore, it is sufficient to remove explicit checks for this option,
avoiding the warning from older gcc while still getting the
compile-time safety from newer gcc.
* m4/virt-compile-warnings.m4 (-Wenum-compare): Omit explicit check.
---
Pushing under the build-breaker rule.
m4/virt-compile-warnings.m4 | 2 ++
1 file changed, 2 insertions(+)
diff --git a/m4/virt-compile-warnings.m4 b/m4/virt-compile-warnings.m4
index c3ff962..d1173eb 100644
--- a/m4/virt-compile-warnings.m4
+++ b/m4/virt-compile-warnings.m4
@@ -57,6 +57,8 @@ AC_DEFUN([LIBVIRT_COMPILE_WARNINGS],[
dontwarn="$dontwarn -Wformat-nonliteral"
# Gnulib's stat-time.h violates this
dontwarn="$dontwarn -Waggregate-return"
+ # gcc 4.4.6 complains this is C++ only; gcc 4.7.0 implies this from -Wall
+ dontwarn="$dontwarn -Wenum-compare"
# Gnulib uses '#pragma GCC diagnostic push' to silence some
# warnings, but older gcc doesn't support this.
--
1.7.11.4