[libvirt] [PATCH v2] qemu: Resolve Coverity DEADCODE.
by Matthias Gatto
reported here: http://www.redhat.com/archives/libvir-list/2014-November/msg00327.html
I could have just removed the bool supportMaxOptions variable, but
if I had done that, we could no longer check whether the nparams
variable exceeds QEMU_NB_BLOCK_IO_TUNE_PARAM_MAX.
v2: changed following this proposal:
http://www.redhat.com/archives/libvir-list/2014-November/msg00379.html
---
src/qemu/qemu_driver.c | 17 +++++++++--------
1 file changed, 9 insertions(+), 8 deletions(-)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 56e8430..acf2b9a 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -17003,14 +17003,16 @@ qemuDomainGetBlockIoTune(virDomainPtr dom,
&persistentDef) < 0)
goto endjob;
+ if (flags & VIR_DOMAIN_AFFECT_LIVE) {
+ /* If the VM is running, we can check whether the current VM can use
+ * optional parameters or not. We didn't make this check sooner
+ * because we need vm->privateData, which requires
+ * virDomainLiveConfigHelperMethod to have run. */
+ priv = vm->privateData;
+ supportMaxOptions = virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_DRIVE_IOTUNE_MAX);
+ }
+
if ((*nparams) == 0) {
- if (flags & VIR_DOMAIN_AFFECT_LIVE) {
- priv = vm->privateData;
- /* If the VM is running, we can check if the current VM can use
- * optional parameters or not. We didn't made this check sooner
- * because we need the VM data to do so. */
- supportMaxOptions = virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_DRIVE_IOTUNE_MAX);
- }
*nparams = supportMaxOptions ?
QEMU_NB_BLOCK_IO_TUNE_PARAM_MAX : QEMU_NB_BLOCK_IO_TUNE_PARAM;
ret = 0;
@@ -17023,7 +17025,6 @@ qemuDomainGetBlockIoTune(virDomainPtr dom,
}
if (flags & VIR_DOMAIN_AFFECT_LIVE) {
- priv = vm->privateData;
qemuDomainObjEnterMonitor(driver, vm);
ret = qemuMonitorGetBlockIoThrottle(priv->mon, device, &reply, supportMaxOptions);
qemuDomainObjExitMonitor(driver, vm);
--
1.8.3.1
[libvirt] [PATCHv8 0/7] Add non-FreeBSD guest support to Bhyve driver.
by Conrad Meyer
Drvbhyve hardcodes bhyveload(8) as the host bootloader for guests.
The bhyveload(8) loader only supports FreeBSD guests.
This patch series adds <bootloader> and <bootloader_args> handling to
bhyve_command, so libvirt can boot non-FreeBSD guests in Bhyve.
Additionally, support for grub-bhyve(1)'s --cons-dev argument is added so that
interactive GRUB menus can be manipulated with the domain-configured serial
device.
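For illustration, a non-FreeBSD guest configured this way might carry
something like the following in its domain XML (the element names come
from this series; the loader path and GRUB arguments are made-up
placeholders, not values taken from the patches):

  <domain type='bhyve'>
    ...
    <bootloader>/usr/local/sbin/grub-bhyve</bootloader>
    <bootloader_args>--root hd0,msdos1 --device-map=/path/to/device.map</bootloader_args>
    ...
  </domain>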
See patch logs for further details.
Thanks,
Conrad
Changes in v8:
- Fix typo in virBhyveProcessStart that prevented booting bhyve VMs.
Conrad Meyer (7):
bhyve: Support /domain/bootloader configuration for non-FreeBSD
guests.
bhyvexml2argv: Add loader argv tests.
domaincommon.rng: Add 'bootloader' to os=hvm schema for Bhyve
bhyvexml2argv: Add tests for domain-configured bootloader, args
bhyve: Probe grub-bhyve for --cons-dev capability
bhyve: Add console support for grub-bhyve bootloader
bhyvexml2argv: Add test for grub console support
docs/drvbhyve.html.in | 100 ++++++++++-
docs/formatdomain.html.in | 4 +-
docs/schemas/domaincommon.rng | 17 +-
src/bhyve/bhyve_capabilities.c | 37 ++++
src/bhyve/bhyve_capabilities.h | 3 +
src/bhyve/bhyve_command.c | 189 +++++++++++++++++++--
src/bhyve/bhyve_command.h | 5 +-
src/bhyve/bhyve_driver.c | 16 +-
src/bhyve/bhyve_driver.h | 2 +
src/bhyve/bhyve_process.c | 38 ++++-
src/bhyve/bhyve_utils.h | 2 +
.../bhyvexml2argv-acpiapic.ldargs | 1 +
tests/bhyvexml2argvdata/bhyvexml2argv-base.ldargs | 1 +
.../bhyvexml2argv-bhyveload-explicitargs.args | 3 +
.../bhyvexml2argv-bhyveload-explicitargs.ldargs | 1 +
.../bhyvexml2argv-bhyveload-explicitargs.xml | 23 +++
.../bhyvexml2argvdata/bhyvexml2argv-console.ldargs | 1 +
.../bhyvexml2argv-custom-loader.args | 3 +
.../bhyvexml2argv-custom-loader.ldargs | 1 +
.../bhyvexml2argv-custom-loader.xml | 24 +++
.../bhyvexml2argv-disk-cdrom-grub.args | 3 +
.../bhyvexml2argv-disk-cdrom-grub.devmap | 1 +
.../bhyvexml2argv-disk-cdrom-grub.ldargs | 2 +
.../bhyvexml2argv-disk-cdrom-grub.xml | 23 +++
.../bhyvexml2argv-disk-cdrom.ldargs | 1 +
.../bhyvexml2argv-disk-virtio.ldargs | 1 +
.../bhyvexml2argv-grub-defaults.args | 3 +
.../bhyvexml2argv-grub-defaults.devmap | 1 +
.../bhyvexml2argv-grub-defaults.ldargs | 2 +
.../bhyvexml2argv-grub-defaults.xml | 23 +++
.../bhyvexml2argvdata/bhyvexml2argv-macaddr.ldargs | 1 +
.../bhyvexml2argv-serial-grub-nocons.args | 4 +
.../bhyvexml2argv-serial-grub-nocons.devmap | 1 +
.../bhyvexml2argv-serial-grub-nocons.ldargs | 2 +
.../bhyvexml2argv-serial-grub-nocons.xml | 26 +++
.../bhyvexml2argv-serial-grub.args | 4 +
.../bhyvexml2argv-serial-grub.devmap | 1 +
.../bhyvexml2argv-serial-grub.ldargs | 2 +
.../bhyvexml2argv-serial-grub.xml | 26 +++
.../bhyvexml2argvdata/bhyvexml2argv-serial.ldargs | 1 +
tests/bhyvexml2argvtest.c | 71 +++++++-
41 files changed, 631 insertions(+), 39 deletions(-)
create mode 100644 tests/bhyvexml2argvdata/bhyvexml2argv-acpiapic.ldargs
create mode 100644 tests/bhyvexml2argvdata/bhyvexml2argv-base.ldargs
create mode 100644 tests/bhyvexml2argvdata/bhyvexml2argv-bhyveload-explicitargs.args
create mode 100644 tests/bhyvexml2argvdata/bhyvexml2argv-bhyveload-explicitargs.ldargs
create mode 100644 tests/bhyvexml2argvdata/bhyvexml2argv-bhyveload-explicitargs.xml
create mode 100644 tests/bhyvexml2argvdata/bhyvexml2argv-console.ldargs
create mode 100644 tests/bhyvexml2argvdata/bhyvexml2argv-custom-loader.args
create mode 100644 tests/bhyvexml2argvdata/bhyvexml2argv-custom-loader.ldargs
create mode 100644 tests/bhyvexml2argvdata/bhyvexml2argv-custom-loader.xml
create mode 100644 tests/bhyvexml2argvdata/bhyvexml2argv-disk-cdrom-grub.args
create mode 100644 tests/bhyvexml2argvdata/bhyvexml2argv-disk-cdrom-grub.devmap
create mode 100644 tests/bhyvexml2argvdata/bhyvexml2argv-disk-cdrom-grub.ldargs
create mode 100644 tests/bhyvexml2argvdata/bhyvexml2argv-disk-cdrom-grub.xml
create mode 100644 tests/bhyvexml2argvdata/bhyvexml2argv-disk-cdrom.ldargs
create mode 100644 tests/bhyvexml2argvdata/bhyvexml2argv-disk-virtio.ldargs
create mode 100644 tests/bhyvexml2argvdata/bhyvexml2argv-grub-defaults.args
create mode 100644 tests/bhyvexml2argvdata/bhyvexml2argv-grub-defaults.devmap
create mode 100644 tests/bhyvexml2argvdata/bhyvexml2argv-grub-defaults.ldargs
create mode 100644 tests/bhyvexml2argvdata/bhyvexml2argv-grub-defaults.xml
create mode 100644 tests/bhyvexml2argvdata/bhyvexml2argv-macaddr.ldargs
create mode 100644 tests/bhyvexml2argvdata/bhyvexml2argv-serial-grub-nocons.args
create mode 100644 tests/bhyvexml2argvdata/bhyvexml2argv-serial-grub-nocons.devmap
create mode 100644 tests/bhyvexml2argvdata/bhyvexml2argv-serial-grub-nocons.ldargs
create mode 100644 tests/bhyvexml2argvdata/bhyvexml2argv-serial-grub-nocons.xml
create mode 100644 tests/bhyvexml2argvdata/bhyvexml2argv-serial-grub.args
create mode 100644 tests/bhyvexml2argvdata/bhyvexml2argv-serial-grub.devmap
create mode 100644 tests/bhyvexml2argvdata/bhyvexml2argv-serial-grub.ldargs
create mode 100644 tests/bhyvexml2argvdata/bhyvexml2argv-serial-grub.xml
create mode 100644 tests/bhyvexml2argvdata/bhyvexml2argv-serial.ldargs
--
1.9.3
[libvirt] [PATCH] qemuxml2argvtest: Run some test only on Linux
by Michal Privoznik
As I was reviewing bhyve commits, I noticed qemuxml2argvtest
failing for some test cases. This is not a bug in the qemu driver
code; rather, qemuxml2argvmock cannot be loaded on non-Linux
platforms. For instance:
318) QEMU XML-2-ARGV numatune-memnode
... libvirt: error : internal error: NUMA node 0 is unavailable
FAILED
Rather than disabling qemuxml2argvtest on BSD entirely (we do
compile the qemu driver there), disable only those test cases which
require mocking. To achieve that goal, a new DO_TEST_LINUX() macro
is introduced which invokes the test case on Linux only and merely
consumes its arguments on other systems.
Signed-off-by: Michal Privoznik <mprivozn(a)redhat.com>
---
tests/qemuxml2argvtest.c | 31 +++++++++++++++++++++++++------
1 file changed, 25 insertions(+), 6 deletions(-)
diff --git a/tests/qemuxml2argvtest.c b/tests/qemuxml2argvtest.c
index fe58a24..623237b 100644
--- a/tests/qemuxml2argvtest.c
+++ b/tests/qemuxml2argvtest.c
@@ -575,6 +575,21 @@ mymain(void)
FLAG_EXPECT_PARSE_ERROR | FLAG_EXPECT_ERROR, \
__VA_ARGS__)
+# ifdef __linux__
+ /* This is a macro that invokes the test only on Linux. It's
+ * meant to be used in those cases where qemuxml2argvmock
+ * cooperation is expected (e.g. we need a fixed time,
+ * predictable NUMA topology and so on). On non-Linux
+ * platforms the macro just consumes its arguments. */
+# define DO_TEST_LINUX(name, ...) \
+ DO_TEST_FULL(name, NULL, -1, 0, __VA_ARGS__)
+# else /* __linux__ */
+# define DO_TEST_LINUX(name, ...) \
+ do { \
+ const char *tmp ATTRIBUTE_UNUSED = name; \
+ } while (0)
+# endif /* __linux__ */
+
# define NONE QEMU_CAPS_LAST
/* Unset or set all envvars here that are copied in qemudBuildCommandLine
@@ -684,14 +699,16 @@ mymain(void)
DO_TEST("kvm-features-off", NONE);
DO_TEST("hugepages", QEMU_CAPS_MEM_PATH);
- DO_TEST("hugepages-pages", QEMU_CAPS_MEM_PATH, QEMU_CAPS_OBJECT_MEMORY_RAM,
- QEMU_CAPS_OBJECT_MEMORY_FILE);
+ DO_TEST_LINUX("hugepages-pages", QEMU_CAPS_MEM_PATH,
+ QEMU_CAPS_OBJECT_MEMORY_RAM,
+ QEMU_CAPS_OBJECT_MEMORY_FILE);
DO_TEST("hugepages-pages2", QEMU_CAPS_MEM_PATH, QEMU_CAPS_OBJECT_MEMORY_RAM,
QEMU_CAPS_OBJECT_MEMORY_FILE);
DO_TEST("hugepages-pages3", QEMU_CAPS_MEM_PATH, QEMU_CAPS_OBJECT_MEMORY_RAM,
QEMU_CAPS_OBJECT_MEMORY_FILE);
- DO_TEST("hugepages-shared", QEMU_CAPS_MEM_PATH, QEMU_CAPS_OBJECT_MEMORY_RAM,
- QEMU_CAPS_OBJECT_MEMORY_FILE);
+ DO_TEST_LINUX("hugepages-shared", QEMU_CAPS_MEM_PATH,
+ QEMU_CAPS_OBJECT_MEMORY_RAM,
+ QEMU_CAPS_OBJECT_MEMORY_FILE);
DO_TEST_PARSE_ERROR("hugepages-memaccess-invalid", NONE);
DO_TEST_FAILURE("hugepages-pages4", QEMU_CAPS_MEM_PATH,
QEMU_CAPS_OBJECT_MEMORY_RAM, QEMU_CAPS_OBJECT_MEMORY_FILE);
@@ -1246,10 +1263,12 @@ mymain(void)
DO_TEST("numatune-memory", NONE);
DO_TEST_PARSE_ERROR("numatune-memory-invalid-nodeset", NONE);
- DO_TEST("numatune-memnode", QEMU_CAPS_NUMA, QEMU_CAPS_OBJECT_MEMORY_RAM);
+ DO_TEST_LINUX("numatune-memnode", QEMU_CAPS_NUMA,
+ QEMU_CAPS_OBJECT_MEMORY_RAM);
DO_TEST_FAILURE("numatune-memnode", NONE);
- DO_TEST("numatune-memnode-no-memory", QEMU_CAPS_NUMA, QEMU_CAPS_OBJECT_MEMORY_RAM);
+ DO_TEST_LINUX("numatune-memnode-no-memory", QEMU_CAPS_NUMA,
+ QEMU_CAPS_OBJECT_MEMORY_RAM);
DO_TEST_FAILURE("numatune-memnode-no-memory", NONE);
DO_TEST("numatune-auto-nodeset-invalid", NONE);
--
2.0.4
[libvirt] [PATCH v6 0/7] qemu: Introduce support for the new block_set_io_throttle parameters added in version 1.7 of qemu.
by Matthias Gatto
This series of patches adds support for bps_max, bps_rd_max, bps_wr_max,
iops_max, iops_rd_max, iops_wr_max, and iops_size in the functions
qemuDomainSetBlockIoTune and qemuDomainGetBlockIoTune.
The last patch adds support for these parameters to the virsh blkdeviotune command.
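For illustration, once the series is applied the new limits could be set
like this (the option spellings below follow virsh's existing
--total-bytes-sec naming convention and are my assumption, not quoted
from the patches):

  # Cap burst throughput and IOPS of disk vda on guest 'demo':
  virsh blkdeviotune demo vda --total-bytes-sec-max 10000000 \
      --total-iops-sec-max 1000 --size-iops-sec 4096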
v2: -Spellfix
v3: -Merge patches 1/9, 2/9, 5/9 together.
-Change the capability detection (patch 2/7 and 3/7).
-Try to make the usage of QEMU_NB_BLOCK_IO_TUNE_PARAM_MAX more explicit (patch 3/7).
v4: -Rebase on HEAD.
-Update qemu_driver to comply with Pavel's patches (patch 3/6).
-Remove the qemu_monitor_text modification (remove old patch 5/7).
v5: -Split patch 1/6 in two.
-Add documentation for the new XML options (patch 2/7).
-Change (void) to ATTRIBUTE_UNUSED (patch 4/7).
-Move the capability detection of supportMaxOptions before its usage (patch 4/7).
v6: -Spellfix
-Add comments (patch 4/7, 5/7).
-Undo the supportMaxOptions modification made in v5 because it was
creating bugs (patch 4/5).
The first 2 patches have been reviewed by Eric Blake and should be merged soon.
The 3rd patch has been reviewed and ACKed by Michal Privoznik.
Matthias Gatto (7):
qemu: Add define for the new throttle options
qemu: Modify the structure _virDomainBlockIoTuneInfo.
qemu: Add Qemu capability for bps_max and friends
qemu: Add bps_max and friends qemu driver
qemu: Add bps_max and friends QMP suport
qemu: Add bps_max and friends to qemu command generation
virsh: Add bps_max and friends to virsh
docs/formatdomain.html.in | 25 ++++
docs/schemas/domaincommon.rng | 43 ++++++
include/libvirt/libvirt-domain.h | 110 ++++++++++++++++
src/conf/domain_conf.c | 109 +++++++++++++++-
src/conf/domain_conf.h | 7 +
src/qemu/qemu_capabilities.c | 2 +
src/qemu/qemu_capabilities.h | 1 +
src/qemu/qemu_command.c | 57 +++++++-
src/qemu/qemu_driver.c | 187 ++++++++++++++++++++++++++-
src/qemu/qemu_monitor.c | 10 +-
src/qemu/qemu_monitor.h | 6 +-
src/qemu/qemu_monitor_json.c | 66 ++++++++--
src/qemu/qemu_monitor_json.h | 6 +-
tests/qemucapabilitiesdata/caps_2.1.1-1.caps | 1 +
tests/qemumonitorjsontest.c | 6 +-
tools/virsh-domain.c | 119 +++++++++++++++++
tools/virsh.pod | 10 ++
17 files changed, 732 insertions(+), 33 deletions(-)
--
1.8.3.1
[libvirt] [PATCH] nwfilter: fix deadlock caused updating network device and nwfilter
by Pavel Hrdina
Commit 6e5c79a1 tried to fix the deadlock between nwfilter{Define,Undefine}
and the starting of a guest, but the same deadlock also exists when
updating/attaching a network device to a domain.
The deadlock was introduced by the removal of the global QEMU driver lock,
because nwfilter was counting on that lock to ensure that all driver locks
are held inside of nwfilter{Define,Undefine}.
This patch extends the usage of virNWFilterReadLockFilterUpdates to prevent
the deadlock for all possible paths in the QEMU driver. The LXC and UML
drivers still have a global lock.
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1143780
Signed-off-by: Pavel Hrdina <phrdina(a)redhat.com>
---
This is a temporary fix for the deadlock issue; I'm planning to create
global libvirt jobs (similar to QEMU domain jobs) and use them for other
drivers, and for example for nwfilters too.
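The ordering rule the patch applies at every affected entry point looks
roughly like this sketch (qemuSomeEntryPoint is a hypothetical name; the
lock calls are the real libvirt-internal ones used in the diff below):

  static int
  qemuSomeEntryPoint(virDomainObjPtr vm)
  {
      int ret = -1;

      virNWFilterReadLockFilterUpdates();   /* 1: nwfilter update lock */
      virObjectLock(vm);                    /* 2: domain object lock */

      /* ... work that may instantiate nwfilter rules ... */
      ret = 0;

      virObjectUnlock(vm);                  /* release in reverse order */
      virNWFilterUnlockFilterUpdates();
      return ret;
  }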
src/qemu/qemu_driver.c | 12 ++++++++++++
src/qemu/qemu_migration.c | 3 +++
src/qemu/qemu_process.c | 4 ++++
3 files changed, 19 insertions(+)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 6acaea8..9e6f505 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -5937,6 +5937,8 @@ qemuDomainRestoreFlags(virConnectPtr conn,
def = tmp;
}
+ virNWFilterReadLockFilterUpdates();
+
if (!(vm = virDomainObjListAdd(driver->domains, def,
driver->xmlopt,
VIR_DOMAIN_OBJ_LIST_ADD_LIVE |
@@ -5978,6 +5980,7 @@ qemuDomainRestoreFlags(virConnectPtr conn,
virFileWrapperFdFree(wrapperFd);
if (vm)
virObjectUnlock(vm);
+ virNWFilterUnlockFilterUpdates();
return ret;
}
@@ -7502,6 +7505,8 @@ static int qemuDomainAttachDeviceFlags(virDomainPtr dom, const char *xml,
affect = flags & (VIR_DOMAIN_AFFECT_LIVE | VIR_DOMAIN_AFFECT_CONFIG);
+ virNWFilterReadLockFilterUpdates();
+
if (!(caps = virQEMUDriverGetCapabilities(driver, false)))
goto cleanup;
@@ -7614,6 +7619,7 @@ static int qemuDomainAttachDeviceFlags(virDomainPtr dom, const char *xml,
virObjectUnlock(vm);
virObjectUnref(caps);
virObjectUnref(cfg);
+ virNWFilterUnlockFilterUpdates();
return ret;
}
@@ -7644,6 +7650,8 @@ static int qemuDomainUpdateDeviceFlags(virDomainPtr dom,
VIR_DOMAIN_AFFECT_CONFIG |
VIR_DOMAIN_DEVICE_MODIFY_FORCE, -1);
+ virNWFilterReadLockFilterUpdates();
+
cfg = virQEMUDriverGetConfig(driver);
affect = flags & (VIR_DOMAIN_AFFECT_LIVE | VIR_DOMAIN_AFFECT_CONFIG);
@@ -7760,6 +7768,7 @@ static int qemuDomainUpdateDeviceFlags(virDomainPtr dom,
virObjectUnlock(vm);
virObjectUnref(caps);
virObjectUnref(cfg);
+ virNWFilterUnlockFilterUpdates();
return ret;
}
@@ -14510,6 +14519,8 @@ qemuDomainRevertToSnapshot(virDomainSnapshotPtr snapshot,
* and use of FORCE can cause multiple transitions.
*/
+ virNWFilterReadLockFilterUpdates();
+
if (!(vm = qemuDomObjFromSnapshot(snapshot)))
return -1;
@@ -14831,6 +14842,7 @@ qemuDomainRevertToSnapshot(virDomainSnapshotPtr snapshot,
virObjectUnlock(vm);
virObjectUnref(caps);
virObjectUnref(cfg);
+ virNWFilterUnlockFilterUpdates();
return ret;
}
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 94a4cf6..18242ae 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -2666,6 +2666,8 @@ qemuMigrationPrepareAny(virQEMUDriverPtr driver,
goto cleanup;
}
+ virNWFilterReadLockFilterUpdates();
+
if (!(vm = virDomainObjListAdd(driver->domains, *def,
driver->xmlopt,
VIR_DOMAIN_OBJ_LIST_ADD_LIVE |
@@ -2825,6 +2827,7 @@ qemuMigrationPrepareAny(virQEMUDriverPtr driver,
qemuDomainEventQueue(driver, event);
qemuMigrationCookieFree(mig);
virObjectUnref(caps);
+ virNWFilterUnlockFilterUpdates();
return ret;
stop:
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 26d4948..409a672 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -3422,6 +3422,8 @@ qemuProcessReconnect(void *opaque)
VIR_FREE(data);
+ virNWFilterReadLockFilterUpdates();
+
virObjectLock(obj);
cfg = virQEMUDriverGetConfig(driver);
@@ -3573,6 +3575,7 @@ qemuProcessReconnect(void *opaque)
virObjectUnref(conn);
virObjectUnref(cfg);
+ virNWFilterUnlockFilterUpdates();
return;
@@ -3608,6 +3611,7 @@ qemuProcessReconnect(void *opaque)
}
virObjectUnref(conn);
virObjectUnref(cfg);
+ virNWFilterUnlockFilterUpdates();
}
static int
--
2.0.4
[libvirt] [PATCH] conf: Fix crash when src->hosts = NULL in virStorageFileBackendGlusterInit
by Luyao Huang
https://bugzilla.redhat.com/show_bug.cgi?id=1162974
When doing an external snapshot for a gluster disk with no host name (IP)
in the snapshot XML, libvirtd will crash. This is because when the node
does not have any children in virDomainStorageHostParse, libvirt returns 0
but does not fill in any hosts for virStorageFileBackendGlusterInit.
snapshot.xml:
<domainsnapshot>
<name>snapshot_test</name>
<description>Snapshot Test</description>
<disks>
<disk name='vda' snapshot='external' type='network'>
<source protocol='gluster' name='gluster-vol1/gluster.img.snap'/>
</disk>
</disks>
</domainsnapshot>
Backtrace:
virsh snapshot-create r6 snapshot.xml --disk-only
0 virStorageFileBackendGlusterInit (src=0x7fc760007ca0) at storage/storage_backend_gluster.c:577
1 0x00007fc76d678e22 in virStorageFileInitAs (src=0x7fc760007ca0, uid=uid@entry=4294967295, gid=gid@entry=4294967295) at storage/storage_driver.c:2547
2 0x00007fc76d678e9c in virStorageFileInit (src=<optimized out>) at storage/storage_driver.c:2567
3 0x00007fc76bc13f9c in qemuDomainSnapshotPrepareDiskExternal (reuse=false, active=true, snapdisk=0x7fc7600019b8, disk=0x7fc7641e4880, conn=0x7fc76426cc10)
at qemu/qemu_driver.c:12995
4 qemuDomainSnapshotPrepare (flags=<synthetic pointer>, def=0x7fc760002570, vm=0x7fc76422b530, conn=0x7fc76426cc10) at qemu/qemu_driver.c:13156
5 qemuDomainSnapshotCreateXML (domain=0x7fc760001f30, xmlDesc=<optimized out>, flags=16) at qemu/qemu_driver.c:13896
6 0x00007fc782d4de4d in virDomainSnapshotCreateXML (domain=domain@entry=0x7fc760001f30,
xmlDesc=0x7fc760001b80 "<domainsnapshot>\n<name>snapshot_test</name>\n<description>Snapshot Test</description>\n<disks>\n<disk name='vda' snapshot='external' type='network'>\n<source protocol='gluster' name='gluster-vol1/gluster."..., flags=16) at libvirt.c:18488
7 0x00007fc7837cb44c in remoteDispatchDomainSnapshotCreateXML (server=<optimized out>, msg=<optimized out>, ret=0x7fc760000a60, args=0x7fc760001f90, rerr=0x7fc77344dc80,
client=<optimized out>) at remote_dispatch.h:8605
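The crash site (frame 0 above) is the unconditional use of src->hosts in
virStorageFileBackendGlusterInit: with no <host> children parsed,
src->hosts stays NULL, and the first lines of that function (quoted in
the patch further down this page) dereference it:

  virStorageNetHostDefPtr host = &(src->hosts[0]);  /* hosts == NULL */
  const char *hostname = host->name;                /* NULL dereference */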
Signed-off-by: Luyao Huang <lhuang(a)redhat.com>
---
src/conf/domain_conf.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 2c65276..34c1c12 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -4150,7 +4150,12 @@ virDomainStorageHostParse(xmlNodePtr node,
memset(&host, 0, sizeof(host));
- child = node->children;
+ if ((child = node->children) == NULL) {
+ virReportError(VIR_ERR_XML_ERROR, "%s",
+ _("Can not find a host in xml"));
+ goto cleanup;
+ }
+
while (child != NULL) {
if (child->type == XML_ELEMENT_NODE &&
xmlStrEqual(child->name, BAD_CAST "host")) {
--
1.8.3.1
[libvirt] [PATCH] Do not crash on gluster snapshots with no host name
by Ján Tomko
virStorageFileBackendGlusterInit did not check nhosts.
https://bugzilla.redhat.com/show_bug.cgi?id=1162974
---
src/storage/storage_backend_gluster.c | 10 +++++++++-
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/src/storage/storage_backend_gluster.c b/src/storage/storage_backend_gluster.c
index 8a7d7e5..b79b634 100644
--- a/src/storage/storage_backend_gluster.c
+++ b/src/storage/storage_backend_gluster.c
@@ -571,9 +571,17 @@ virStorageFileBackendGlusterInit(virStorageSourcePtr src)
{
virStorageFileBackendGlusterPrivPtr priv = NULL;
virStorageNetHostDefPtr host = &(src->hosts[0]);
- const char *hostname = host->name;
+ const char *hostname;
int port = 0;
+ if (src->nhosts != 1) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+ _("Expected exactly 1 host for the gluster volume"));
+ return -1;
+ }
+
+ hostname = host->name;
+
VIR_DEBUG("initializing gluster storage file %p (gluster://%s:%s/%s%s)",
src, hostname, host->port ? host->port : "0",
NULLSTR(src->volume), src->path);
--
2.0.4
[libvirt] Problem with virInterfaceCreate(), IFF_UP, and NetworkManager
by Laine Stump
Due to "a checkered past" (a myriad of minor issues, changing over
time), libvirt's semi-official position on the virInterface*() APIs and
NetworkManager is that virInterface*() is only supported if NM is
disabled. We do still attempt to make it work as well as possible, but
normally I only test those APIs on systems that have NM disabled and use
the network service (RHEL/Fedora/CentOS systems here) instead.
On a seemingly unrelated note, a few months ago mprivozn pushed a patch
that makes it an error to call virInterfaceCreate() (i.e. "ifup") for an
interface that is already active. (The "active" state of an interface is
determined by looking at the interface's IFF_UP flag, and also
IFF_RUNNING if the interface isn't a bridge device.) Previously, this
was allowed, as it is common practice to ifup an interface to make new
config take effect.
Last week, I happened to test the "virsh iface-bridge" command on a
system with NM enabled. That command gave an error about the interface
being already active, so I tried again, this time ifdowning the
interface in advance - I *still* got the error. Further investigation
and questioning of NM developers led me to the realization that when NM
is enabled, all interfaces *always* have IFF_UP and IFF_RUNNING set,
even if they are ifdowned. Further, if NM is active there is no way to
determine an interface's "active" status via ioctl() or netlink;
instead, you must first query whether NM is active, and if it is, you
must call an NM API instead (I got this much information from NM
developers directly; I haven't investigated yet exactly what the API is).
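For reference, the check that NM breaks is just the standard
SIOCGIFFLAGS ioctl; a minimal self-contained sketch (error handling
trimmed, function name made up):

  #include <net/if.h>
  #include <string.h>
  #include <sys/ioctl.h>
  #include <sys/socket.h>
  #include <unistd.h>

  /* Returns 1 if the kernel reports the interface up, 0 if down,
   * -1 on error. On an NM-enabled host this returns 1 even for
   * ifdowned interfaces, which is the problem described above. */
  static int
  ifaceIsUp(const char *name)
  {
      struct ifreq ifr;
      int fd = socket(AF_INET, SOCK_DGRAM, 0);
      int up = -1;

      if (fd < 0)
          return -1;
      memset(&ifr, 0, sizeof(ifr));
      strncpy(ifr.ifr_name, name, sizeof(ifr.ifr_name) - 1);
      if (ioctl(fd, SIOCGIFFLAGS, &ifr) == 0)
          up = !!(ifr.ifr_flags & IFF_UP);  /* non-bridge devices also
                                               need IFF_RUNNING */
      close(fd);
      return up;
  }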
NM developers say that this pinning-up of the IFF_UP flag has been done
for a long time, and is necessary to do interface auto-config. I think
it is violating a long-standing assumption (if not a standard) about the
meaning of IFF_UP, and I'm not convinced that it really is a necessity
(certainly once a config file is present for an interface, it shouldn't
be needed), but then I haven't spent as much time in that problem space
as they have.
In the meantime, the virInterfaceCreate() API fails 100% of the time on
any system that has NM enabled. My dilemma now is whether to attempt to
effect change in NM's use of IFF_UP so that it once again can be used as
an indicator of whether or not an interface is active, or to just give
in and 1) officially declare that virInterface*() isn't supported if NM
is enabled until 2) we add code to netcf that detects when NM is active
and learns how to query interface status from NM instead of the standard
ioctl(SIOCGIFFLAGS).
And if the latter is preferred, should we in the meantime perhaps revert
the patch that made virInterfaceCreate() an error if the interface was
active? Or just leave it completely broken?
Any opinions?
[libvirt] [PATCH 0/2] Fixes for libssh2
by Cédric Bosdonnat
Hi all,
Here are two fixes to get libssh2 authentication working.
Cédric Bosdonnat (2):
Fix test wanting a negative size_t
Fix handling keyboard-interactive callbacks for libssh2
src/rpc/virnetsshsession.c | 9 +++++++--
1 file changed, 7 insertions(+), 2 deletions(-)
--
1.8.4.5