[libvirt] [RFC][scale] new API for querying domains stats
by Francesco Romani
Hi everyone,
I'd like to discuss possible plans and designs for new query APIs in libvirt.
I'm one of the oVirt (http://www.ovirt.org) developers, and I write code for VDSM;
VDSM is the node management daemon, which is in charge, among many other things, of
gathering host statistics and per-Domain/VM statistics.
Right now we aim for a number of VMs per node in the (few) hundreds, but we have big plans
to scale much further, possibly reaching thousands in a not so distant future.
At the moment we use one thread per VM to gather the VM stats (CPU, network, disk),
and this obviously scales poorly.
This is made worse by the fact that VDSM is a Python 2.7 application, and Python 2.x
notoriously behaves badly with threads. We are already working to improve our code,
but I'd like to bring the discussion here and see if and when the querying API can be improved.
We currently use these APIs for our sampling:
virDomainBlockInfo
virDomainGetInfo
virDomainGetCPUStats
virDomainBlockStats
virDomainBlockStatsFlags
virDomainInterfaceStats
virDomainGetVcpusFlags
virDomainGetMetadata
What we'd like to have is:
* asynchronous APIs for querying domain stats (https://bugzilla.redhat.com/show_bug.cgi?id=1113106)
This would be just awesome. Either a single callback or a different one per call is fine
(let's discuss this!).
Please note that we are much more concerned about reducing the thread count than about
performance numbers. We have had reports of the thread count becoming a real problem, while
performance so far is not a concern (https://bugzilla.redhat.com/show_bug.cgi?id=1102147#c54).
* bulk APIs for querying domain stats (https://bugzilla.redhat.com/show_bug.cgi?id=1113116)
would be really welcome as well. It is largely independent of the previous bullet point
and would help us greatly with scale; a hypothetical sketch of such a call follows below.
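To make the bulk-API idea concrete, here is a purely hypothetical sketch of what such an
entry point could look like; the struct and function names below do not exist in libvirt
today and are invented only for discussion:

  /* Hypothetical: one typed-parameter record per domain, filled in a
   * single API round trip instead of one call (and one thread) per VM. */
  typedef struct _virDomainStatsRecord {
      virDomainPtr dom;             /* domain the stats belong to */
      virTypedParameterPtr params;  /* CPU/block/net counters */
      int nparams;
  } virDomainStatsRecord;

  /* Hypothetical: returns the number of records on success, -1 on error;
   * 'stats' would be a bitmask selecting groups (CPU, block, net, ...). */
  int virConnectGetAllDomainStats(virConnectPtr conn,
                                  unsigned int stats,
                                  virDomainStatsRecord ***records,
                                  unsigned int flags);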
So, I'd like to discuss whether these additions are (or can be) on the project roadmap,
and, if so, what the API could look like and what the possible timeframe could be.
Of course I'd be happy to provide any further information about VDSM and its workings.
Thoughts very welcome!
Thanks and best regards,
--
Francesco Romani
RedHat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani
[libvirt] [PATCHv3] numatune: Fix parsing of empty nodeset (0,^0)
by Erik Skultety
Resolves https://bugzilla.redhat.com/show_bug.cgi?id=1121837

A nodeset such as "0,^0" first sets and then clears bit 0, leaving an all-clear bitmap;
virBitmapParse now treats such an empty result as a parse error instead of accepting it.
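To illustrate the semantics, here is a self-contained toy re-implementation written for
the sake of the example (not libvirt's actual parser; it handles only plain "N" and "^N"
items over a 64-bit map):

  #include <stdio.h>
  #include <stdint.h>

  /* Toy nodeset parser; returns -1 on a syntax error or, mirroring the
   * fix below, when the final bitmap is all clear. */
  static int parse_nodeset(const char *str, uint64_t *bitmap)
  {
      *bitmap = 0;
      while (*str) {
          int negate = (*str == '^');
          if (negate)
              str++;
          if (*str < '0' || *str > '9')
              return -1;
          unsigned bit = 0;
          while (*str >= '0' && *str <= '9')
              bit = bit * 10 + (unsigned)(*str++ - '0');
          if (bit >= 64)
              return -1;
          if (negate)
              *bitmap &= ~(UINT64_C(1) << bit);
          else
              *bitmap |= UINT64_C(1) << bit;
          if (*str == ',')
              str++;
      }
      if (*bitmap == 0)  /* the new check: an empty nodeset is an error */
          return -1;
      return 0;
  }

  int main(void)
  {
      uint64_t bm;
      printf("0,^0 -> %d\n", parse_nodeset("0,^0", &bm));  /* -1: rejected */
      printf("0,1  -> %d\n", parse_nodeset("0,1", &bm));   /*  0: accepted */
      return 0;
  }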
---
src/util/virbitmap.c | 3 +++
...emuxml2argv-numatune-memory-invalid-nodeset.xml | 31 ++++++++++++++++++++++
tests/qemuxml2argvtest.c | 1 +
3 files changed, 35 insertions(+)
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-numatune-memory-invalid-nodeset.xml
diff --git a/src/util/virbitmap.c b/src/util/virbitmap.c
index 27282df..b6bd074 100644
--- a/src/util/virbitmap.c
+++ b/src/util/virbitmap.c
@@ -378,6 +378,9 @@ virBitmapParse(const char *str,
}
}
+ if (virBitmapIsAllClear(*bitmap))
+ goto error;
+
return virBitmapCountBits(*bitmap);
error:
diff --git a/tests/qemuxml2argvdata/qemuxml2argv-numatune-memory-invalid-nodeset.xml b/tests/qemuxml2argvdata/qemuxml2argv-numatune-memory-invalid-nodeset.xml
new file mode 100644
index 0000000..079ca9d
--- /dev/null
+++ b/tests/qemuxml2argvdata/qemuxml2argv-numatune-memory-invalid-nodeset.xml
@@ -0,0 +1,31 @@
+<domain type='qemu'>
+ <name>QEMUGuest1</name>
+ <uuid>c7a5fdbd-edaf-9455-926a-d65c16db1809</uuid>
+ <memory unit='KiB'>219136</memory>
+ <currentMemory unit='KiB'>219136</currentMemory>
+ <vcpu placement='static' cpuset='0-1'>2</vcpu>
+ <numatune>
+ <memory mode="strict" nodeset="0,^0"/>
+ </numatune>
+ <os>
+ <type arch='i686' machine='pc'>hvm</type>
+ <boot dev='hd'/>
+ </os>
+ <cpu>
+ <topology sockets='2' cores='1' threads='1'/>
+ </cpu>
+ <clock offset='utc'/>
+ <on_poweroff>destroy</on_poweroff>
+ <on_reboot>restart</on_reboot>
+ <on_crash>destroy</on_crash>
+ <devices>
+ <emulator>/usr/bin/qemu</emulator>
+ <disk type='block' device='disk'>
+ <source dev='/dev/HostVG/QEMUGuest1'/>
+ <target dev='hda' bus='ide'/>
+ <address type='drive' controller='0' bus='0' target='0' unit='0'/>
+ </disk>
+ <controller type='ide' index='0'/>
+ <memballoon model='virtio'/>
+ </devices>
+</domain>
diff --git a/tests/qemuxml2argvtest.c b/tests/qemuxml2argvtest.c
index 1c121ff..62b969c 100644
--- a/tests/qemuxml2argvtest.c
+++ b/tests/qemuxml2argvtest.c
@@ -1210,6 +1210,7 @@ mymain(void)
DO_TEST("cputune-zero-shares", QEMU_CAPS_NAME);
DO_TEST("numatune-memory", NONE);
+ DO_TEST_PARSE_ERROR("numatune-memory-invalid-nodeset", NONE);
DO_TEST("numatune-memnode", QEMU_CAPS_NUMA, QEMU_CAPS_OBJECT_MEMORY_RAM);
DO_TEST_FAILURE("numatune-memnode", NONE);
--
1.9.3
[libvirt] [PATCH] Clear bandwidth settings for a shutoff domain using domiftune
by Jianwei Hu
qemu: Allow clearing bandwidth settings for a shutoff domain using domiftune.
After applying this patch, we can use the virsh domiftune command to clear the
inbound and/or outbound settings of a shutoff domain.
for example:
virsh domiftune $domain $interface 0 0
Please refer to the virsh help message below:
man virsh:
To clear inbound or outbound settings, use --inbound or --outbound respectively with an average value of zero.
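For example (domain and interface names are placeholders), setting and then clearing a
persistent inbound cap could look like:
  virsh domiftune $domain $interface --inbound 1000 --config
  virsh domiftune $domain $interface --inbound 0 --config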
---
src/qemu/qemu_driver.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 82a82aa..7db2e9c 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -9983,11 +9983,17 @@ qemuDomainSetInterfaceParameters(virDomainPtr dom,
VIR_FREE(persistentNet->bandwidth->in);
persistentNet->bandwidth->in = bandwidth->in;
bandwidth->in = NULL;
+ } else {
+ VIR_FREE(persistentNet->bandwidth->in);
+ persistentNet->bandwidth->in = 0;
}
if (bandwidth->out) {
VIR_FREE(persistentNet->bandwidth->out);
persistentNet->bandwidth->out = bandwidth->out;
bandwidth->out = NULL;
+ } else {
+ VIR_FREE(persistentNet->bandwidth->out);
+ persistentNet->bandwidth->out = 0;
}
}
--
1.8.3.1
[libvirt] [PATCH 1/1] qemu: Tidy up job handling during live migration
by Sam Bobroff
During a QEMU live migration, several warning messages about job
handling could be written to syslog on the destination host:
"entering monitor without asking for a nested job is dangerous"
The messages appear because the job handling during migration uses hard-coded
asyncJob values that are incorrect in several places.
This patch passes the required asyncJob value around, preventing the warnings
as well as any issues the warnings may be referring to.
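The essence of the change is the following before/after pattern (both functions are the
existing ones used throughout the patch below):

  /* before: assumes no async job is active; inside a migration job this
   * triggers the "nested job is dangerous" warning */
  qemuDomainObjEnterMonitor(driver, vm);

  /* after: pass the caller's async job so the job machinery can take a
   * nested job when one is needed */
  ignore_value(qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob));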
Signed-off-by: Sam Bobroff <sam.bobroff(a)au1.ibm.com>
---
src/qemu/qemu_domain.c | 5 +++--
src/qemu/qemu_domain.h | 2 +-
src/qemu/qemu_driver.c | 21 ++++++++++++---------
src/qemu/qemu_migration.c | 3 ++-
src/qemu/qemu_process.c | 33 ++++++++++++++++++---------------
src/qemu/qemu_process.h | 1 +
6 files changed, 37 insertions(+), 28 deletions(-)
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 4f63c88..3abbb14 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -2497,7 +2497,8 @@ qemuDomainDetermineDiskChain(virQEMUDriverPtr driver,
int
qemuDomainUpdateDeviceList(virQEMUDriverPtr driver,
- virDomainObjPtr vm)
+ virDomainObjPtr vm,
+ int asyncJob)
{
qemuDomainObjPrivatePtr priv = vm->privateData;
char **aliases;
@@ -2505,7 +2506,7 @@ qemuDomainUpdateDeviceList(virQEMUDriverPtr driver,
if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_DEVICE_DEL_EVENT))
return 0;
- qemuDomainObjEnterMonitor(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob));
if (qemuMonitorGetDeviceAliases(priv->mon, &aliases) < 0) {
qemuDomainObjExitMonitor(driver, vm);
return -1;
diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h
index 67972b9..8736889 100644
--- a/src/qemu/qemu_domain.h
+++ b/src/qemu/qemu_domain.h
@@ -369,7 +369,7 @@ extern virDomainXMLNamespace virQEMUDriverDomainXMLNamespace;
extern virDomainDefParserConfig virQEMUDriverDomainDefParserConfig;
int qemuDomainUpdateDeviceList(virQEMUDriverPtr driver,
- virDomainObjPtr vm);
+ virDomainObjPtr vm, int asyncJob);
bool qemuDomainDefCheckABIStability(virQEMUDriverPtr driver,
virDomainDefPtr src,
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 33541d3..b0439d2 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -1616,7 +1616,8 @@ static virDomainPtr qemuDomainCreateXML(virConnectPtr conn,
goto cleanup;
}
- if (qemuProcessStart(conn, driver, vm, NULL, -1, NULL, NULL,
+ if (qemuProcessStart(conn, driver, vm, QEMU_ASYNC_JOB_NONE,
+ NULL, -1, NULL, NULL,
VIR_NETDEV_VPORT_PROFILE_OP_CREATE,
start_flags) < 0) {
virDomainAuditStart(vm, "booted", false);
@@ -5446,7 +5447,8 @@ qemuDomainSaveImageStartVM(virConnectPtr conn,
}
/* Set the migration source and start it up. */
- ret = qemuProcessStart(conn, driver, vm, "stdio", *fd, path, NULL,
+ ret = qemuProcessStart(conn, driver, vm, QEMU_ASYNC_JOB_NONE,
+ "stdio", *fd, path, NULL,
VIR_NETDEV_VPORT_PROFILE_OP_RESTORE,
VIR_QEMU_PROCESS_START_PAUSED);
@@ -6143,7 +6145,8 @@ qemuDomainObjStart(virConnectPtr conn,
}
}
- ret = qemuProcessStart(conn, driver, vm, NULL, -1, NULL, NULL,
+ ret = qemuProcessStart(conn, driver, vm, QEMU_ASYNC_JOB_NONE,
+ NULL, -1, NULL, NULL,
VIR_NETDEV_VPORT_PROFILE_OP_CREATE, start_flags);
virDomainAuditStart(vm, "booted", ret >= 0);
if (ret >= 0) {
@@ -6500,7 +6503,7 @@ qemuDomainAttachDeviceLive(virDomainObjPtr vm,
}
if (ret == 0)
- qemuDomainUpdateDeviceList(driver, vm);
+ qemuDomainUpdateDeviceList(driver, vm, QEMU_ASYNC_JOB_NONE);
return ret;
}
@@ -6560,7 +6563,7 @@ qemuDomainDetachDeviceLive(virDomainObjPtr vm,
}
if (ret == 0)
- qemuDomainUpdateDeviceList(driver, vm);
+ qemuDomainUpdateDeviceList(driver, vm, QEMU_ASYNC_JOB_NONE);
return ret;
}
@@ -14101,8 +14104,8 @@ static int qemuDomainRevertToSnapshot(virDomainSnapshotPtr snapshot,
if (config)
virDomainObjAssignDef(vm, config, false, NULL);
- rc = qemuProcessStart(snapshot->domain->conn,
- driver, vm, NULL, -1, NULL, snap,
+ rc = qemuProcessStart(snapshot->domain->conn, driver, vm,
+ QEMU_ASYNC_JOB_NONE, NULL, -1, NULL, snap,
VIR_NETDEV_VPORT_PROFILE_OP_CREATE,
VIR_QEMU_PROCESS_START_PAUSED);
virDomainAuditStart(vm, "from-snapshot", rc >= 0);
@@ -14195,8 +14198,8 @@ static int qemuDomainRevertToSnapshot(virDomainSnapshotPtr snapshot,
if (event)
qemuDomainEventQueue(driver, event);
- rc = qemuProcessStart(snapshot->domain->conn,
- driver, vm, NULL, -1, NULL, NULL,
+ rc = qemuProcessStart(snapshot->domain->conn, driver, vm,
+ QEMU_ASYNC_JOB_NONE, NULL, -1, NULL, NULL,
VIR_NETDEV_VPORT_PROFILE_OP_CREATE,
start_flags);
virDomainAuditStart(vm, "from-snapshot", rc >= 0);
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 767d840..1c46b34 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -2480,7 +2480,8 @@ qemuMigrationPrepareAny(virQEMUDriverPtr driver,
/* Start the QEMU daemon, with the same command-line arguments plus
* -incoming $migrateFrom
*/
- if (qemuProcessStart(dconn, driver, vm, migrateFrom, dataFD[0], NULL, NULL,
+ if (qemuProcessStart(dconn, driver, vm, QEMU_ASYNC_JOB_MIGRATION_IN,
+ migrateFrom, dataFD[0], NULL, NULL,
VIR_NETDEV_VPORT_PROFILE_OP_MIGRATE_IN_START,
VIR_QEMU_PROCESS_START_PAUSED |
VIR_QEMU_PROCESS_START_AUTODESTROY) < 0) {
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 8a6b384..229de6d 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -1444,7 +1444,8 @@ static qemuMonitorCallbacks monitorCallbacks = {
};
static int
-qemuConnectMonitor(virQEMUDriverPtr driver, virDomainObjPtr vm, int logfd)
+qemuConnectMonitor(virQEMUDriverPtr driver, virDomainObjPtr vm, int asyncJob,
+ int logfd)
{
qemuDomainObjPrivatePtr priv = vm->privateData;
int ret = -1;
@@ -1495,7 +1496,7 @@ qemuConnectMonitor(virQEMUDriverPtr driver, virDomainObjPtr vm, int logfd)
}
- qemuDomainObjEnterMonitor(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob));
ret = qemuMonitorSetCapabilities(priv->mon);
if (ret == 0 &&
virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_MONITOR_JSON))
@@ -1901,6 +1902,7 @@ qemuProcessFindCharDevicePTYs(virDomainObjPtr vm,
static int
qemuProcessWaitForMonitor(virQEMUDriverPtr driver,
virDomainObjPtr vm,
+ int asyncJob,
virQEMUCapsPtr qemuCaps,
off_t pos)
{
@@ -1926,7 +1928,7 @@ qemuProcessWaitForMonitor(virQEMUDriverPtr driver,
}
VIR_DEBUG("Connect monitor to %p '%s'", vm, vm->def->name);
- if (qemuConnectMonitor(driver, vm, logfd) < 0)
+ if (qemuConnectMonitor(driver, vm, asyncJob, logfd) < 0)
goto cleanup;
/* Try to get the pty path mappings again via the monitor. This is much more
@@ -1938,7 +1940,7 @@ qemuProcessWaitForMonitor(virQEMUDriverPtr driver,
goto cleanup;
priv = vm->privateData;
- qemuDomainObjEnterMonitor(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob));
ret = qemuMonitorGetPtyPaths(priv->mon, paths);
qemuDomainObjExitMonitor(driver, vm);
@@ -1984,13 +1986,13 @@ qemuProcessWaitForMonitor(virQEMUDriverPtr driver,
static int
qemuProcessDetectVcpuPIDs(virQEMUDriverPtr driver,
- virDomainObjPtr vm)
+ virDomainObjPtr vm, int asyncJob)
{
pid_t *cpupids = NULL;
int ncpupids;
qemuDomainObjPrivatePtr priv = vm->privateData;
- qemuDomainObjEnterMonitor(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob));
/* failure to get the VCPU<-> PID mapping or to execute the query
* command will not be treated fatal as some versions of qemu don't
* support this command */
@@ -3150,7 +3152,7 @@ qemuProcessUpdateDevices(virQEMUDriverPtr driver,
old = priv->qemuDevices;
priv->qemuDevices = NULL;
- if (qemuDomainUpdateDeviceList(driver, vm) < 0)
+ if (qemuDomainUpdateDeviceList(driver, vm, QEMU_ASYNC_JOB_NONE) < 0)
goto cleanup;
if ((tmp = old)) {
@@ -3216,7 +3218,7 @@ qemuProcessReconnect(void *opaque)
virObjectRef(obj);
/* XXX check PID liveliness & EXE path */
- if (qemuConnectMonitor(driver, obj, -1) < 0)
+ if (qemuConnectMonitor(driver, obj, QEMU_ASYNC_JOB_NONE, -1) < 0)
goto error;
/* Failure to connect to agent shouldn't be fatal */
@@ -3655,6 +3657,7 @@ qemuProcessVerifyGuestCPU(virQEMUDriverPtr driver, virDomainObjPtr vm)
int qemuProcessStart(virConnectPtr conn,
virQEMUDriverPtr driver,
virDomainObjPtr vm,
+ int asyncJob,
const char *migrateFrom,
int stdin_fd,
const char *stdin_path,
@@ -4137,7 +4140,7 @@ int qemuProcessStart(virConnectPtr conn,
goto cleanup;
VIR_DEBUG("Waiting for monitor to show up");
- if (qemuProcessWaitForMonitor(driver, vm, priv->qemuCaps, pos) < 0)
+ if (qemuProcessWaitForMonitor(driver, vm, asyncJob, priv->qemuCaps, pos) < 0)
goto cleanup;
/* Failure to connect to agent shouldn't be fatal */
@@ -4160,7 +4163,7 @@ int qemuProcessStart(virConnectPtr conn,
goto cleanup;
VIR_DEBUG("Detecting VCPU PIDs");
- if (qemuProcessDetectVcpuPIDs(driver, vm) < 0)
+ if (qemuProcessDetectVcpuPIDs(driver, vm, asyncJob) < 0)
goto cleanup;
VIR_DEBUG("Setting cgroup for each VCPU (if required)");
@@ -4195,7 +4198,7 @@ int qemuProcessStart(virConnectPtr conn,
/* qemu doesn't support setting this on the command line, so
* enter the monitor */
VIR_DEBUG("Setting network link states");
- qemuDomainObjEnterMonitor(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob));
if (qemuProcessSetLinkStates(vm) < 0) {
qemuDomainObjExitMonitor(driver, vm);
goto cleanup;
@@ -4204,7 +4207,7 @@ int qemuProcessStart(virConnectPtr conn,
qemuDomainObjExitMonitor(driver, vm);
VIR_DEBUG("Fetching list of active devices");
- if (qemuDomainUpdateDeviceList(driver, vm) < 0)
+ if (qemuDomainUpdateDeviceList(driver, vm, asyncJob) < 0)
goto cleanup;
/* Technically, qemuProcessStart can be called from inside
@@ -4219,7 +4222,7 @@ int qemuProcessStart(virConnectPtr conn,
vm->def->mem.cur_balloon);
goto cleanup;
}
- qemuDomainObjEnterMonitor(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob));
if (vm->def->memballoon && vm->def->memballoon->period)
qemuMonitorSetMemoryStatsPeriod(priv->mon, vm->def->memballoon->period);
if (qemuMonitorSetBalloon(priv->mon, cur_balloon) < 0) {
@@ -4764,7 +4767,7 @@ int qemuProcessAttach(virConnectPtr conn ATTRIBUTE_UNUSED,
vm->pid = pid;
VIR_DEBUG("Waiting for monitor to show up");
- if (qemuProcessWaitForMonitor(driver, vm, priv->qemuCaps, -1) < 0)
+ if (qemuProcessWaitForMonitor(driver, vm, QEMU_ASYNC_JOB_NONE, priv->qemuCaps, -1) < 0)
goto error;
/* Failure to connect to agent shouldn't be fatal */
@@ -4779,7 +4782,7 @@ int qemuProcessAttach(virConnectPtr conn ATTRIBUTE_UNUSED,
}
VIR_DEBUG("Detecting VCPU PIDs");
- if (qemuProcessDetectVcpuPIDs(driver, vm) < 0)
+ if (qemuProcessDetectVcpuPIDs(driver, vm, QEMU_ASYNC_JOB_NONE) < 0)
goto error;
/* If we have -device, then addresses are assigned explicitly.
diff --git a/src/qemu/qemu_process.h b/src/qemu/qemu_process.h
index 9c78736..5948ea4 100644
--- a/src/qemu/qemu_process.h
+++ b/src/qemu/qemu_process.h
@@ -53,6 +53,7 @@ typedef enum {
int qemuProcessStart(virConnectPtr conn,
virQEMUDriverPtr driver,
virDomainObjPtr vm,
+ int asyncJob,
const char *migrateFrom,
int stdin_fd,
const char *stdin_path,
--
2.0.2.731.g247b4d5
[libvirt] [PATCH] docs: use unique dev names in <disk> examples
by Eric Blake
Jiri Moskovcak reported on IRC that the documentation of valid
<disk> examples was confusing because they didn't use unique dev='...'
attributes.
* docs/formatdomain.html.in: Use unique names.
Signed-off-by: Eric Blake <eblake(a)redhat.com>
---
docs/formatdomain.html.in | 26 +++++++++++++-------------
1 files changed, 13 insertions(+), 13 deletions(-)
diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in
index 8950959..08f31c4 100644
--- a/docs/formatdomain.html.in
+++ b/docs/formatdomain.html.in
@@ -1540,14 +1540,14 @@
<source protocol="rbd" name="image_name2">
<host name="hostname" port="7000"/>
</source>
- <target dev="hdd" bus="ide"/>
+ <target dev="hdc" bus="ide"/>
<auth username='myuser'>
<secret type='ceph' usage='mypassid'/>
</auth>
</disk>
<disk type='block' device='cdrom'>
<driver name='qemu' type='raw'/>
- <target dev='hdc' bus='ide' tray='open'/>
+ <target dev='hdd' bus='ide' tray='open'/>
<readonly/>
</disk>
<disk type='network' device='cdrom'>
@@ -1555,7 +1555,7 @@
<source protocol="http" name="url_path">
<host name="hostname" port="80"/>
</source>
- <target dev='hdc' bus='ide' tray='open'/>
+ <target dev='hde' bus='ide' tray='open'/>
<readonly/>
</disk>
<disk type='network' device='cdrom'>
@@ -1563,7 +1563,7 @@
<source protocol="https" name="url_path">
<host name="hostname" port="443"/>
</source>
- <target dev='hdc' bus='ide' tray='open'/>
+ <target dev='hdf' bus='ide' tray='open'/>
<readonly/>
</disk>
<disk type='network' device='cdrom'>
@@ -1571,7 +1571,7 @@
<source protocol="ftp" name="url_path">
<host name="hostname" port="21"/>
</source>
- <target dev='hdc' bus='ide' tray='open'/>
+ <target dev='hdg' bus='ide' tray='open'/>
<readonly/>
</disk>
<disk type='network' device='cdrom'>
@@ -1579,7 +1579,7 @@
<source protocol="ftps" name="url_path">
<host name="hostname" port="990"/>
</source>
- <target dev='hdc' bus='ide' tray='open'/>
+ <target dev='hdh' bus='ide' tray='open'/>
<readonly/>
</disk>
<disk type='network' device='cdrom'>
@@ -1587,7 +1587,7 @@
<source protocol="tftp" name="url_path">
<host name="hostname" port="69"/>
</source>
- <target dev='hdc' bus='ide' tray='open'/>
+ <target dev='hdi' bus='ide' tray='open'/>
<readonly/>
</disk>
<disk type='block' device='lun'>
@@ -1601,12 +1601,12 @@
<source dev='/dev/sda'/>
<geometry cyls='16383' heads='16' secs='63' trans='lba'/>
<blockio logical_block_size='512' physical_block_size='4096'/>
- <target dev='hda' bus='ide'/>
+ <target dev='sdb' bus='ide'/>
</disk>
<disk type='volume' device='disk'>
<driver name='qemu' type='raw'/>
<source pool='blk-pool0' volume='blk-pool0-vol0'/>
- <target dev='hda' bus='ide'/>
+ <target dev='sdc' bus='ide'/>
</disk>
<disk type='network' device='disk'>
<driver name='qemu' type='raw'/>
@@ -1626,7 +1626,7 @@
<auth username='myuser'>
<secret type='iscsi' usage='libvirtiscsi'/>
</auth>
- <target dev='sda' bus='scsi'/>
+ <target dev='vdb' bus='scsi'/>
</disk>
<disk type='volume' device='disk'>
<driver name='qemu' type='raw'/>
@@ -1634,7 +1634,7 @@
<auth username='myuser'>
<secret type='iscsi' usage='libvirtiscsi'/>
</auth>
- <target dev='vda' bus='virtio'/>
+ <target dev='vdc' bus='virtio'/>
</disk>
<disk type='volume' device='disk'>
<driver name='qemu' type='raw'/>
@@ -1642,7 +1642,7 @@
<auth username='myuser'>
<secret type='iscsi' usage='libvirtiscsi'/>
</auth>
- <target dev='vda' bus='virtio'/>
+ <target dev='vdd' bus='virtio'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
@@ -1656,7 +1656,7 @@
<backingStore/>
</backingStore>
</backingStore>
- <target dev='vda' bus='virtio'/>
+ <target dev='vde' bus='virtio'/>
</disk>
</devices>
...</pre>
--
1.7.1
[libvirt] [PATCH] qemu: use guest-fsfreeze-freeze-list command if mountpoints to freeze specified
by Tomoki Sekiyama
A command to freeze a subset of the mounted file systems is implemented in
the upstream QEMU guest agent under the name 'guest-fsfreeze-freeze-list'.
This fixes the name of the command used for partial fsfreeze in the qemu driver
when the 'mountpoints' option is specified to the virDomainFSFreeze API.
Signed-off-by: Tomoki Sekiyama <tomoki.sekiyama(a)hds.com>
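For reference, with mountpoints given, the command sent over the agent socket now looks
like this (the mountpoint path is a made-up example):

  {"execute": "guest-fsfreeze-freeze-list",
   "arguments": {"mountpoints": ["/mnt/data"]}}

while the argument-less form keeps using plain guest-fsfreeze-freeze.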
---
src/qemu/qemu_agent.c | 2 +-
tests/qemuagenttest.c | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
diff --git a/src/qemu/qemu_agent.c b/src/qemu/qemu_agent.c
index 0421733..a10954a 100644
--- a/src/qemu/qemu_agent.c
+++ b/src/qemu/qemu_agent.c
@@ -1336,7 +1336,7 @@ int qemuAgentFSFreeze(qemuAgentPtr mon, const char **mountpoints,
if (!arg)
return -1;
- cmd = qemuAgentMakeCommand("guest-fsfreeze-freeze",
+ cmd = qemuAgentMakeCommand("guest-fsfreeze-freeze-list",
"a:mountpoints", arg, NULL);
} else {
cmd = qemuAgentMakeCommand("guest-fsfreeze-freeze", NULL);
diff --git a/tests/qemuagenttest.c b/tests/qemuagenttest.c
index be207e8..bc649b4 100644
--- a/tests/qemuagenttest.c
+++ b/tests/qemuagenttest.c
@@ -45,7 +45,7 @@ testQemuAgentFSFreeze(const void *data)
if (qemuMonitorTestAddAgentSyncResponse(test) < 0)
goto cleanup;
- if (qemuMonitorTestAddItem(test, "guest-fsfreeze-freeze",
+ if (qemuMonitorTestAddItem(test, "guest-fsfreeze-freeze-list",
"{ \"return\" : 5 }") < 0)
goto cleanup;
[libvirt] [PATCH v2] Include param.h in case of HAVE_BSD_CPU_AFFINITY
by Guido Günther
This fixes compilation on kFreeBSD, which otherwise fails like:
CC util/libvirt_util_la-virprocess.lo
In file included from /usr/include/sys/cpuset.h:35:0,
from util/virprocess.c:43:
/usr/include/sys/_cpuset.h:49:43: error: 'NBBY' undeclared here (not in
a function)
long __bits[howmany(CPU_SETSIZE, _NCPUBITS)];
^
In file included from util/virprocess.c:43:0:
/usr/include/sys/cpuset.h:215:12: error: unknown type name 'cpusetid_t'
int cpuset(cpusetid_t *);
^
/usr/include/sys/cpuset.h:216:30: error: expected ')' before 'id_t'
int cpuset_setid(cpuwhich_t, id_t, cpusetid_t);
^
/usr/include/sys/cpuset.h:217:42: error: expected ')' before 'id_t'
int cpuset_getid(cpulevel_t, cpuwhich_t, id_t, cpusetid_t *);
^
/usr/include/sys/cpuset.h:218:48: error: expected ')' before 'id_t'
int cpuset_getaffinity(cpulevel_t, cpuwhich_t, id_t, size_t, cpuset_t
*);
^
/usr/include/sys/cpuset.h:219:48: error: expected ')' before 'id_t'
int cpuset_setaffinity(cpulevel_t, cpuwhich_t, id_t, size_t, const
cpuset_t *);
This is also the correct usage as documented in
http://www.freebsd.org/cgi/man.cgi?query=cpuset_setid
Also change the #ifdef HAVE_BSD_CPU_AFFINITY to #if for consistency.
---
The previous version included sys/param.h twice, causing make
syntax-check to fail.
-- Guido
src/util/virprocess.c | 7 +++++--
1 file changed, 5 insertions(+), 2 deletions(-)
diff --git a/src/util/virprocess.c b/src/util/virprocess.c
index 9179d73..97cce4f 100644
--- a/src/util/virprocess.c
+++ b/src/util/virprocess.c
@@ -33,13 +33,16 @@
#endif
#include <sched.h>
-#ifdef __FreeBSD__
+#if defined(__FreeBSD__) || HAVE_BSD_CPU_AFFINITY
# include <sys/param.h>
+#endif
+
+#ifdef __FreeBSD__
# include <sys/sysctl.h>
# include <sys/user.h>
#endif
-#ifdef HAVE_BSD_CPU_AFFINITY
+#if HAVE_BSD_CPU_AFFINITY
# include <sys/cpuset.h>
#endif
--
2.0.1
[libvirt] [PATCH] Don't fail qemu driver intialization if we can't determine hugepage size
by Guido Günther
Otherwise, if the data can't be determined, initialization fails like this:
libvirt version: 1.2.7, package: 6 (root 2014-08-08-16:09:22 bogon)
virAuditOpen:62 : Unable to initialize audit layer: Protocol not supported
virFileGetDefaultHugepageSize:2958 : internal error: Unable to parse /proc/meminfo
virStateInitialize:749 : Initialization of QEMU state driver failed: internal error: Unable to parse /proc/meminfo
daemonRunStateInit:922 : Driver state initialization failed
Reference: http://bugs.debian.org/757609
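For context, the code looks for the Hugepagesize line that Linux normally exposes in
/proc/meminfo, e.g. (the value varies by architecture/configuration):

  Hugepagesize:       2048 kB

When hugepages aren't supported, the line is simply absent; that is a lack of support,
not an internal error, hence the switch to VIR_ERR_NO_SUPPORT below.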
---
src/util/virfile.c | 5 +++--
1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/src/util/virfile.c b/src/util/virfile.c
index f9efc65..b6f5e3f 100644
--- a/src/util/virfile.c
+++ b/src/util/virfile.c
@@ -2953,8 +2953,9 @@ virFileGetDefaultHugepageSize(unsigned long long *size)
goto cleanup;
if (!(c = strstr(meminfo, HUGEPAGESIZE_STR))) {
- virReportError(VIR_ERR_INTERNAL_ERROR,
- _("Unable to parse %s"),
+ virReportError(VIR_ERR_NO_SUPPORT,
+ _("%s not found in %s"),
+ HUGEPAGESIZE_STR,
PROC_MEMINFO);
goto cleanup;
}
--
2.0.1
[libvirt] [PATCH] Make 'uri' command a bit more prominent.
by Guido Günther
This tries to address
https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=688778
where libvirt autodetected vbox:///session and it wasn't listed in the
manpage.
---
tools/virsh.pod | 2 ++
1 file changed, 2 insertions(+)
diff --git a/tools/virsh.pod b/tools/virsh.pod
index 26b1d79..afaca5b 100644
--- a/tools/virsh.pod
+++ b/tools/virsh.pod
@@ -272,6 +272,8 @@ connect to a local linux container
=back
+To find the currently used URI, check the I<uri> command below.
+
For remote access see the documentation page at
L<http://libvirt.org/uri.html> on how to make URIs.
The I<--readonly> option allows for read-only connection
--
2.0.1