[libvirt] [PATCH v2] storage: ZFS support
by Roman Bogorodskiy
Changes from the initial version:
- Update docs/schemas and docs/storage.html.in with ZFS backend
information
- Drop StartPool and StopPool, which do nothing and are therefore
not needed
- Fix memory leak in 'tokens' variable
- Fill volume key before creating a volume
- Use volume's target.path as a key
- Allow omitting 'target' for the pool, because it should always be
/dev/zvol/$poolname anyway
A few notes on Linux support:
I've been playing around with http://zfsonlinux.org/. I noticed it
was missing the '-H' switch for machine-friendly output, so I
created a feature request:
https://github.com/zfsonlinux/zfs/issues/2522
It was quickly fixed, so I updated to the git version and moved on.
From that point it worked well, and the only thing I had to change was
not passing the 'volmode' parameter during volume creation ('volmode' is
FreeBSD-specific: it controls whether a volume is exposed e.g. as just a
cdev or as a FreeBSD GEOM provider).
So, basically, for the current (limited) feature set, two things
need to be checked on Linux:
- 'zpool get' support for the -H switch
- 'zfs' volmode support during volume creation
I'm open to suggestions on a good way to check these. For the '-H'
switch, I was thinking about just calling 'zpool get -H' in StartPool. If
the error says "invalid option 'H'" (i.e. check whether it contains 'H'), then
return an error that the ZFS version is too old. That's fragile, but probably
less fragile than parsing usage output that looks like this:
usage:
get [-pH] <"all" | property[,...]> <pool> ...
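That error-message check could be sketched like this (Python here purely to illustrate the parsing idea; the real backend would do this in C around virCommand, and the exact error wording is an assumption):

```python
def zpool_supports_dash_h(returncode, stderr_text):
    """Interpret the result of probing `zpool get -H` (e.g. run via
    subprocess).  If the command failed and the error complains about
    the 'H' option, assume the installed ZFS predates machine-friendly
    output.  Fragile, as noted above, but less so than parsing the
    full usage text."""
    if returncode != 0 and "invalid option" in stderr_text and "H" in stderr_text:
        return False
    # Success, or a failure unrelated to option parsing (e.g. a
    # missing pool), tells us nothing against -H support.
    return True
```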
As for 'volmode', it's simple in terms of parsing, because 'zfs get' prints
a nice table of properties, e.g.:
volblocksize NO YES 512 to 128k, power of 2
volmode YES YES default | geom | dev | none
volsize YES NO <size>
So it should not be a problem to check this once and save the result in some
sort of state struct inside the ZFS backend.
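Parsing that table could look like the following sketch (Python for illustration only; the property name is simply the first whitespace-separated field of each row):

```python
def zfs_supported_properties(get_help_output):
    """Parse the property table printed by `zfs get` when invoked
    without arguments into a set of property names."""
    props = set()
    for line in get_help_output.splitlines():
        fields = line.split()
        if len(fields) >= 2:
            # First field of each row is the property name.
            props.add(fields[0])
    return props

def zfs_supports_volmode(get_help_output):
    return "volmode" in zfs_supported_properties(get_help_output)
```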
Roman Bogorodskiy (1):
storage: ZFS support
configure.ac | 43 +++++
docs/schemas/storagepool.rng | 20 +++
docs/storage.html.in | 34 ++++
include/libvirt/libvirt.h.in | 1 +
po/POTFILES.in | 1 +
src/Makefile.am | 8 +
src/conf/storage_conf.c | 15 +-
src/conf/storage_conf.h | 4 +-
src/qemu/qemu_conf.c | 1 +
src/storage/storage_backend.c | 6 +
src/storage/storage_backend_zfs.c | 329 ++++++++++++++++++++++++++++++++++++++
src/storage/storage_backend_zfs.h | 29 ++++
src/storage/storage_driver.c | 1 +
tools/virsh-pool.c | 3 +
14 files changed, 492 insertions(+), 3 deletions(-)
create mode 100644 src/storage/storage_backend_zfs.c
create mode 100644 src/storage/storage_backend_zfs.h
--
1.9.0
10 years, 3 months
[libvirt] [RFC][scale] new API for querying domains stats
by Francesco Romani
Hi everyone,
I'd like to discuss possible APIs and plans for new query APIs in libvirt.
I'm one of the oVirt (http://www.ovirt.org) developers, and I write code for VDSM;
VDSM is the node management daemon, which is in charge, among many other things,
of gathering host statistics and per-Domain/VM statistics.
Right now we aim for a number of VMs per node in the (few) hundreds, but we have big plans
to scale much further, possibly reaching thousands in the not so distant future.
At the moment, we use one thread per VM to gather the VM stats (CPU, network, disk),
and this obviously scales poorly.
This is made only worse by the fact that VDSM is a Python 2.7 application, and Python
2.x notoriously behaves badly with threads. We are already working to improve our code,
but I'd like to bring the discussion here and see if and when the querying APIs can be improved.
We currently use these APIs for our sampling:
virDomainBlockInfo
virDomainGetInfo
virDomainGetCPUStats
virDomainBlockStats
virDomainBlockStatsFlags
virDomainInterfaceStats
virDomainGetVcpusFlags
virDomainGetMetadata
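For illustration, the thread-per-VM sampling loop described above can be sketched like this (plain Python with a stand-in domain object; with libvirt-python these would be virDomain methods such as dom.info() and dom.blockStats(), and the device names here are placeholders):

```python
import threading

def sample_domain(dom, results):
    # One sampling pass for a single domain; "vda" and "vnet0" are
    # placeholder device names, not real configuration.
    results[dom.name()] = {
        "info": dom.info(),
        "block": dom.blockStats("vda"),
        "net": dom.interfaceStats("vnet0"),
    }

def sample_all(domains):
    # Thread-per-VM pattern: N domains -> N threads per sampling
    # cycle, which is exactly what scales poorly at thousands of VMs.
    results = {}
    threads = [threading.Thread(target=sample_domain, args=(d, results))
               for d in domains]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

A bulk API would replace the whole per-domain fan-out with a single call returning stats for every domain at once, and an async API would remove the blocking worker threads entirely.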
What we'd like to have is:
* asynchronous APIs for querying domain stats (https://bugzilla.redhat.com/show_bug.cgi?id=1113106)
This would be just awesome. Either a single callback or a different one per call is fine
(let's discuss this!).
Please note that we are much more concerned about thread reduction than about
performance numbers. We have had reports of the thread count becoming a real problem,
while performance so far is not yet a concern (https://bugzilla.redhat.com/show_bug.cgi?id=1102147#c54)
* bulk APIs for querying domain stats (https://bugzilla.redhat.com/show_bug.cgi?id=1113116)
would be really welcome as well. It is quite independent of the previous bullet point
and would help us greatly with scale.
So, I'd like to discuss whether these additions are (or can be) on the project roadmap,
and, if so, what the API could look like and what the possible timeframe could be.
Of course I'd be happy to provide any further information about VDSM and its workings.
Thoughts very welcome!
Thanks and best regards,
--
Francesco Romani
RedHat Engineering Virtualization R & D
Phone: 8261328
IRC: fromani
[libvirt] [PATCH 1/1] qemu: Tidy up job handling during live migration
by Sam Bobroff
During a QEMU live migration several warning messages about job
handling could be written to syslog on the destination host:
"entering monitor without asking for a nested job is dangerous"
The messages are written because the job handling during migration
uses hard coded asyncJob values in several places that are incorrect.
This patch passes the required asyncJob value around and prevents
the warnings as well as any issues that the warnings may be referring
to.
Signed-off-by: Sam Bobroff <sam.bobroff(a)au1.ibm.com>
---
src/qemu/qemu_domain.c | 5 +++--
src/qemu/qemu_domain.h | 2 +-
src/qemu/qemu_driver.c | 21 ++++++++++++---------
src/qemu/qemu_migration.c | 3 ++-
src/qemu/qemu_process.c | 33 ++++++++++++++++++---------------
src/qemu/qemu_process.h | 1 +
6 files changed, 37 insertions(+), 28 deletions(-)
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index 4f63c88..3abbb14 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -2497,7 +2497,8 @@ qemuDomainDetermineDiskChain(virQEMUDriverPtr driver,
int
qemuDomainUpdateDeviceList(virQEMUDriverPtr driver,
- virDomainObjPtr vm)
+ virDomainObjPtr vm,
+ int asyncJob)
{
qemuDomainObjPrivatePtr priv = vm->privateData;
char **aliases;
@@ -2505,7 +2506,7 @@ qemuDomainUpdateDeviceList(virQEMUDriverPtr driver,
if (!virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_DEVICE_DEL_EVENT))
return 0;
- qemuDomainObjEnterMonitor(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob));
if (qemuMonitorGetDeviceAliases(priv->mon, &aliases) < 0) {
qemuDomainObjExitMonitor(driver, vm);
return -1;
diff --git a/src/qemu/qemu_domain.h b/src/qemu/qemu_domain.h
index 67972b9..8736889 100644
--- a/src/qemu/qemu_domain.h
+++ b/src/qemu/qemu_domain.h
@@ -369,7 +369,7 @@ extern virDomainXMLNamespace virQEMUDriverDomainXMLNamespace;
extern virDomainDefParserConfig virQEMUDriverDomainDefParserConfig;
int qemuDomainUpdateDeviceList(virQEMUDriverPtr driver,
- virDomainObjPtr vm);
+ virDomainObjPtr vm, int asyncJob);
bool qemuDomainDefCheckABIStability(virQEMUDriverPtr driver,
virDomainDefPtr src,
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 33541d3..b0439d2 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -1616,7 +1616,8 @@ static virDomainPtr qemuDomainCreateXML(virConnectPtr conn,
goto cleanup;
}
- if (qemuProcessStart(conn, driver, vm, NULL, -1, NULL, NULL,
+ if (qemuProcessStart(conn, driver, vm, QEMU_ASYNC_JOB_NONE,
+ NULL, -1, NULL, NULL,
VIR_NETDEV_VPORT_PROFILE_OP_CREATE,
start_flags) < 0) {
virDomainAuditStart(vm, "booted", false);
@@ -5446,7 +5447,8 @@ qemuDomainSaveImageStartVM(virConnectPtr conn,
}
/* Set the migration source and start it up. */
- ret = qemuProcessStart(conn, driver, vm, "stdio", *fd, path, NULL,
+ ret = qemuProcessStart(conn, driver, vm, QEMU_ASYNC_JOB_NONE,
+ "stdio", *fd, path, NULL,
VIR_NETDEV_VPORT_PROFILE_OP_RESTORE,
VIR_QEMU_PROCESS_START_PAUSED);
@@ -6143,7 +6145,8 @@ qemuDomainObjStart(virConnectPtr conn,
}
}
- ret = qemuProcessStart(conn, driver, vm, NULL, -1, NULL, NULL,
+ ret = qemuProcessStart(conn, driver, vm, QEMU_ASYNC_JOB_NONE,
+ NULL, -1, NULL, NULL,
VIR_NETDEV_VPORT_PROFILE_OP_CREATE, start_flags);
virDomainAuditStart(vm, "booted", ret >= 0);
if (ret >= 0) {
@@ -6500,7 +6503,7 @@ qemuDomainAttachDeviceLive(virDomainObjPtr vm,
}
if (ret == 0)
- qemuDomainUpdateDeviceList(driver, vm);
+ qemuDomainUpdateDeviceList(driver, vm, QEMU_ASYNC_JOB_NONE);
return ret;
}
@@ -6560,7 +6563,7 @@ qemuDomainDetachDeviceLive(virDomainObjPtr vm,
}
if (ret == 0)
- qemuDomainUpdateDeviceList(driver, vm);
+ qemuDomainUpdateDeviceList(driver, vm, QEMU_ASYNC_JOB_NONE);
return ret;
}
@@ -14101,8 +14104,8 @@ static int qemuDomainRevertToSnapshot(virDomainSnapshotPtr snapshot,
if (config)
virDomainObjAssignDef(vm, config, false, NULL);
- rc = qemuProcessStart(snapshot->domain->conn,
- driver, vm, NULL, -1, NULL, snap,
+ rc = qemuProcessStart(snapshot->domain->conn, driver, vm,
+ QEMU_ASYNC_JOB_NONE, NULL, -1, NULL, snap,
VIR_NETDEV_VPORT_PROFILE_OP_CREATE,
VIR_QEMU_PROCESS_START_PAUSED);
virDomainAuditStart(vm, "from-snapshot", rc >= 0);
@@ -14195,8 +14198,8 @@ static int qemuDomainRevertToSnapshot(virDomainSnapshotPtr snapshot,
if (event)
qemuDomainEventQueue(driver, event);
- rc = qemuProcessStart(snapshot->domain->conn,
- driver, vm, NULL, -1, NULL, NULL,
+ rc = qemuProcessStart(snapshot->domain->conn, driver, vm,
+ QEMU_ASYNC_JOB_NONE, NULL, -1, NULL, NULL,
VIR_NETDEV_VPORT_PROFILE_OP_CREATE,
start_flags);
virDomainAuditStart(vm, "from-snapshot", rc >= 0);
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index 767d840..1c46b34 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -2480,7 +2480,8 @@ qemuMigrationPrepareAny(virQEMUDriverPtr driver,
/* Start the QEMU daemon, with the same command-line arguments plus
* -incoming $migrateFrom
*/
- if (qemuProcessStart(dconn, driver, vm, migrateFrom, dataFD[0], NULL, NULL,
+ if (qemuProcessStart(dconn, driver, vm, QEMU_ASYNC_JOB_MIGRATION_IN,
+ migrateFrom, dataFD[0], NULL, NULL,
VIR_NETDEV_VPORT_PROFILE_OP_MIGRATE_IN_START,
VIR_QEMU_PROCESS_START_PAUSED |
VIR_QEMU_PROCESS_START_AUTODESTROY) < 0) {
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c
index 8a6b384..229de6d 100644
--- a/src/qemu/qemu_process.c
+++ b/src/qemu/qemu_process.c
@@ -1444,7 +1444,8 @@ static qemuMonitorCallbacks monitorCallbacks = {
};
static int
-qemuConnectMonitor(virQEMUDriverPtr driver, virDomainObjPtr vm, int logfd)
+qemuConnectMonitor(virQEMUDriverPtr driver, virDomainObjPtr vm, int asyncJob,
+ int logfd)
{
qemuDomainObjPrivatePtr priv = vm->privateData;
int ret = -1;
@@ -1495,7 +1496,7 @@ qemuConnectMonitor(virQEMUDriverPtr driver, virDomainObjPtr vm, int logfd)
}
- qemuDomainObjEnterMonitor(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob));
ret = qemuMonitorSetCapabilities(priv->mon);
if (ret == 0 &&
virQEMUCapsGet(priv->qemuCaps, QEMU_CAPS_MONITOR_JSON))
@@ -1901,6 +1902,7 @@ qemuProcessFindCharDevicePTYs(virDomainObjPtr vm,
static int
qemuProcessWaitForMonitor(virQEMUDriverPtr driver,
virDomainObjPtr vm,
+ int asyncJob,
virQEMUCapsPtr qemuCaps,
off_t pos)
{
@@ -1926,7 +1928,7 @@ qemuProcessWaitForMonitor(virQEMUDriverPtr driver,
}
VIR_DEBUG("Connect monitor to %p '%s'", vm, vm->def->name);
- if (qemuConnectMonitor(driver, vm, logfd) < 0)
+ if (qemuConnectMonitor(driver, vm, asyncJob, logfd) < 0)
goto cleanup;
/* Try to get the pty path mappings again via the monitor. This is much more
@@ -1938,7 +1940,7 @@ qemuProcessWaitForMonitor(virQEMUDriverPtr driver,
goto cleanup;
priv = vm->privateData;
- qemuDomainObjEnterMonitor(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob));
ret = qemuMonitorGetPtyPaths(priv->mon, paths);
qemuDomainObjExitMonitor(driver, vm);
@@ -1984,13 +1986,13 @@ qemuProcessWaitForMonitor(virQEMUDriverPtr driver,
static int
qemuProcessDetectVcpuPIDs(virQEMUDriverPtr driver,
- virDomainObjPtr vm)
+ virDomainObjPtr vm, int asyncJob)
{
pid_t *cpupids = NULL;
int ncpupids;
qemuDomainObjPrivatePtr priv = vm->privateData;
- qemuDomainObjEnterMonitor(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob));
/* failure to get the VCPU<-> PID mapping or to execute the query
* command will not be treated fatal as some versions of qemu don't
* support this command */
@@ -3150,7 +3152,7 @@ qemuProcessUpdateDevices(virQEMUDriverPtr driver,
old = priv->qemuDevices;
priv->qemuDevices = NULL;
- if (qemuDomainUpdateDeviceList(driver, vm) < 0)
+ if (qemuDomainUpdateDeviceList(driver, vm, QEMU_ASYNC_JOB_NONE) < 0)
goto cleanup;
if ((tmp = old)) {
@@ -3216,7 +3218,7 @@ qemuProcessReconnect(void *opaque)
virObjectRef(obj);
/* XXX check PID liveliness & EXE path */
- if (qemuConnectMonitor(driver, obj, -1) < 0)
+ if (qemuConnectMonitor(driver, obj, QEMU_ASYNC_JOB_NONE, -1) < 0)
goto error;
/* Failure to connect to agent shouldn't be fatal */
@@ -3655,6 +3657,7 @@ qemuProcessVerifyGuestCPU(virQEMUDriverPtr driver, virDomainObjPtr vm)
int qemuProcessStart(virConnectPtr conn,
virQEMUDriverPtr driver,
virDomainObjPtr vm,
+ int asyncJob,
const char *migrateFrom,
int stdin_fd,
const char *stdin_path,
@@ -4137,7 +4140,7 @@ int qemuProcessStart(virConnectPtr conn,
goto cleanup;
VIR_DEBUG("Waiting for monitor to show up");
- if (qemuProcessWaitForMonitor(driver, vm, priv->qemuCaps, pos) < 0)
+ if (qemuProcessWaitForMonitor(driver, vm, asyncJob, priv->qemuCaps, pos) < 0)
goto cleanup;
/* Failure to connect to agent shouldn't be fatal */
@@ -4160,7 +4163,7 @@ int qemuProcessStart(virConnectPtr conn,
goto cleanup;
VIR_DEBUG("Detecting VCPU PIDs");
- if (qemuProcessDetectVcpuPIDs(driver, vm) < 0)
+ if (qemuProcessDetectVcpuPIDs(driver, vm, asyncJob) < 0)
goto cleanup;
VIR_DEBUG("Setting cgroup for each VCPU (if required)");
@@ -4195,7 +4198,7 @@ int qemuProcessStart(virConnectPtr conn,
/* qemu doesn't support setting this on the command line, so
* enter the monitor */
VIR_DEBUG("Setting network link states");
- qemuDomainObjEnterMonitor(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob));
if (qemuProcessSetLinkStates(vm) < 0) {
qemuDomainObjExitMonitor(driver, vm);
goto cleanup;
@@ -4204,7 +4207,7 @@ int qemuProcessStart(virConnectPtr conn,
qemuDomainObjExitMonitor(driver, vm);
VIR_DEBUG("Fetching list of active devices");
- if (qemuDomainUpdateDeviceList(driver, vm) < 0)
+ if (qemuDomainUpdateDeviceList(driver, vm, asyncJob) < 0)
goto cleanup;
/* Technically, qemuProcessStart can be called from inside
@@ -4219,7 +4222,7 @@ int qemuProcessStart(virConnectPtr conn,
vm->def->mem.cur_balloon);
goto cleanup;
}
- qemuDomainObjEnterMonitor(driver, vm);
+ ignore_value(qemuDomainObjEnterMonitorAsync(driver, vm, asyncJob));
if (vm->def->memballoon && vm->def->memballoon->period)
qemuMonitorSetMemoryStatsPeriod(priv->mon, vm->def->memballoon->period);
if (qemuMonitorSetBalloon(priv->mon, cur_balloon) < 0) {
@@ -4764,7 +4767,7 @@ int qemuProcessAttach(virConnectPtr conn ATTRIBUTE_UNUSED,
vm->pid = pid;
VIR_DEBUG("Waiting for monitor to show up");
- if (qemuProcessWaitForMonitor(driver, vm, priv->qemuCaps, -1) < 0)
+ if (qemuProcessWaitForMonitor(driver, vm, QEMU_ASYNC_JOB_NONE, priv->qemuCaps, -1) < 0)
goto error;
/* Failure to connect to agent shouldn't be fatal */
@@ -4779,7 +4782,7 @@ int qemuProcessAttach(virConnectPtr conn ATTRIBUTE_UNUSED,
}
VIR_DEBUG("Detecting VCPU PIDs");
- if (qemuProcessDetectVcpuPIDs(driver, vm) < 0)
+ if (qemuProcessDetectVcpuPIDs(driver, vm, QEMU_ASYNC_JOB_NONE) < 0)
goto error;
/* If we have -device, then addresses are assigned explicitly.
diff --git a/src/qemu/qemu_process.h b/src/qemu/qemu_process.h
index 9c78736..5948ea4 100644
--- a/src/qemu/qemu_process.h
+++ b/src/qemu/qemu_process.h
@@ -53,6 +53,7 @@ typedef enum {
int qemuProcessStart(virConnectPtr conn,
virQEMUDriverPtr driver,
virDomainObjPtr vm,
+ int asyncJob,
const char *migrateFrom,
int stdin_fd,
const char *stdin_path,
--
2.0.2.731.g247b4d5
[libvirt] [PATCH] daemon: Fix indentation in libvirtd.c
by Wang Rui
Signed-off-by: Wang Rui <moon.wangrui(a)huawei.com>
---
daemon/libvirtd.c | 10 +++++-----
1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/daemon/libvirtd.c b/daemon/libvirtd.c
index 4c926b3..946081a 100644
--- a/daemon/libvirtd.c
+++ b/daemon/libvirtd.c
@@ -803,11 +803,11 @@ static void daemonReloadHandler(virNetServerPtr srv ATTRIBUTE_UNUSED,
siginfo_t *sig ATTRIBUTE_UNUSED,
void *opaque ATTRIBUTE_UNUSED)
{
- VIR_INFO("Reloading configuration on SIGHUP");
- virHookCall(VIR_HOOK_DRIVER_DAEMON, "-",
- VIR_HOOK_DAEMON_OP_RELOAD, SIGHUP, "SIGHUP", NULL, NULL);
- if (virStateReload() < 0)
- VIR_WARN("Error while reloading drivers");
+ VIR_INFO("Reloading configuration on SIGHUP");
+ virHookCall(VIR_HOOK_DRIVER_DAEMON, "-",
+ VIR_HOOK_DAEMON_OP_RELOAD, SIGHUP, "SIGHUP", NULL, NULL);
+ if (virStateReload() < 0)
+ VIR_WARN("Error while reloading drivers");
}
static int daemonSetupSignals(virNetServerPtr srv)
--
1.7.12.4
[libvirt] [PATCH v5 0/4] fix blockcopy across libvirtd restart, turn on active blockcommit
by Eric Blake
v4 was here:
https://www.redhat.com/archives/libvir-list/2014-June/msg01095.html
Since then: I _finally_ implemented persistent domain updates to
match live changes, by reimplementing how <mirror> is tracked in
XML to be more robust to libvirtd restarts. In particular, it
fixes a bug in v4 that Adam Litke pointed out where a pivot
commit would emit a ready event for 'active commit' but then a
completed event for plain 'commit'.
Patches 1 and 2 are completely local to blockcopy, and fix a
long-standing bug where it is not robust to libvirtd restarts
(similar to the bug fixed in 60e4944). They are pre-requisite
to turning on active commit.
Patches 3 and 4 are borderline on whether it is a new feature
or a bug fix. But consider that commit 47549d5 turned on qemu
feature probing with the intent of getting this in 1.2.7, and
we already missed getting active commit into 1.2.6. As it is,
patch 4 is very similar to the already-acked counterpart of v4.
Therefore, even though I missed rc1, I'm arguing that this whole
series should be included in rc2 in time for 1.2.7.
Not done yet, but that I'd also like to have in the release if I
can swing it: the new virConnectGetDomainCapabilities API needs
to expose features on whether active commit will work. Also,
I'd like to fix libvirtd restarts to inspect all existing <mirror>
tags and see if the job completed while libvirtd was offline (that
is, when we miss an event, we should still start up in the correct
state when reconnecting to qemu).
Eric Blake (4):
blockcopy: add more XML for state tracking
blockjob: properly track blockcopy xml changes on disk
blockcommit: track job type in xml
blockcommit: turn on active commit
docs/formatdomain.html.in | 23 ++-
docs/schemas/domaincommon.rng | 19 ++-
src/conf/domain_conf.c | 56 ++++++-
src/conf/domain_conf.h | 14 +-
src/qemu/qemu_driver.c | 167 ++++++++++++++-------
src/qemu/qemu_process.c | 92 ++++++++++--
.../qemuxml2argv-disk-active-commit.xml | 37 +++++
.../qemuxml2argvdata/qemuxml2argv-disk-mirror.xml | 8 +-
.../qemuxml2xmlout-disk-mirror-old.xml | 4 +-
tests/qemuxml2xmltest.c | 1 +
10 files changed, 336 insertions(+), 85 deletions(-)
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-active-commit.xml
--
1.9.3
[libvirt] [PATCH v2 0/8] Add iSCSI hostdev pass-through device
by John Ferlan
Follow-up/rebase to the partially ACK'd pile of changes - I figure it's just
as easy for me to keep things together, and it's better to see things
in totality...
See: http://www.redhat.com/archives/libvir-list/2014-July/msg00592.html
* Patches 1-5 were adjusted to use the first_/second_ naming scheme rather
than a/b.
* Typo for 'vedor' in patch 1 adjusted.
* Changed name of iSCSIFree to iSCSIClear.
John Ferlan (8):
hostdev: Introduce virDomainHostdevSubsysUSB
hostdev: Introduce virDomainHostdevSubsysPCI
hostdev: Introduce virDomainHostdevSubsysSCSI
hostdev: Introduce virDomainHostdevSubsysSCSIHost
Add virConnectPtr for qemuBuildSCSIHostdevDrvStr
hostdev: Introduce virDomainHostdevSubsysSCSIiSCSI
domain_conf: Common routine to handle network storage host xml def
hostdev: Add iSCSI hostdev XML
docs/formatdomain.html.in | 142 +++++--
docs/schemas/domaincommon.rng | 46 +-
src/conf/domain_audit.c | 38 +-
src/conf/domain_conf.c | 471 ++++++++++++++-------
src/conf/domain_conf.h | 80 +++-
src/libxl/libxl_conf.c | 9 +-
src/libxl/libxl_domain.c | 6 +-
src/libxl/libxl_driver.c | 34 +-
src/lxc/lxc_cgroup.c | 4 +-
src/lxc/lxc_controller.c | 10 +-
src/lxc/lxc_driver.c | 16 +-
src/qemu/qemu_cgroup.c | 69 +--
src/qemu/qemu_command.c | 158 ++++---
src/qemu/qemu_command.h | 5 +-
src/qemu/qemu_conf.c | 20 +-
src/qemu/qemu_driver.c | 6 +-
src/qemu/qemu_hotplug.c | 93 ++--
src/qemu/qemu_hotplug.h | 9 +-
src/security/security_apparmor.c | 33 +-
src/security/security_dac.c | 66 +--
src/security/security_selinux.c | 64 +--
src/security/virt-aa-helper.c | 5 +-
src/util/virhostdev.c | 295 +++++++------
.../qemuxml2argv-hostdev-scsi-lsi-iscsi-auth.args | 14 +
.../qemuxml2argv-hostdev-scsi-lsi-iscsi-auth.xml | 46 ++
.../qemuxml2argv-hostdev-scsi-lsi-iscsi.args | 14 +
.../qemuxml2argv-hostdev-scsi-lsi-iscsi.xml | 40 ++
...emuxml2argv-hostdev-scsi-virtio-iscsi-auth.args | 16 +
...qemuxml2argv-hostdev-scsi-virtio-iscsi-auth.xml | 46 ++
.../qemuxml2argv-hostdev-scsi-virtio-iscsi.args | 16 +
.../qemuxml2argv-hostdev-scsi-virtio-iscsi.xml | 40 ++
tests/qemuxml2argvtest.c | 16 +
tests/qemuxml2xmltest.c | 5 +
33 files changed, 1322 insertions(+), 610 deletions(-)
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-hostdev-scsi-lsi-iscsi-auth.args
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-hostdev-scsi-lsi-iscsi-auth.xml
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-hostdev-scsi-lsi-iscsi.args
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-hostdev-scsi-lsi-iscsi.xml
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-hostdev-scsi-virtio-iscsi-auth.args
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-hostdev-scsi-virtio-iscsi-auth.xml
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-hostdev-scsi-virtio-iscsi.args
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-hostdev-scsi-virtio-iscsi.xml
--
1.9.3
[libvirt] [Discussion] How do we think about time out mechanism?
by James
There's a situation where, when libvirtd is under a lot of pressure, such as when we
start a lot of VMs at the same time, some libvirt APIs may take a long time to return.
This blocks the upper-level job from finishing. Usually we can't wait forever; we'd like
a timeout mechanism to help us out: when an API call takes more than some configured time,
it could return a timeout as its result, and we would do some rolling back.
So my question is: is there a plan to provide a timeout solution, or a better solution,
for this kind of problem in the future? And if so, when?
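For what it's worth, until such an API exists, a caller can impose the timeout on its own side by running the blocking call in a worker and abandoning it after a deadline. A minimal sketch (Python; note this only unblocks the caller -- libvirtd still runs the RPC to completion, so a timed-out call must be treated as "outcome unknown" and reconciled later):

```python
import concurrent.futures

def call_with_timeout(fn, timeout_s, *args):
    """Run fn(*args) in a worker thread; raise TimeoutError if it
    does not finish within timeout_s.  The call itself is NOT
    cancelled -- libvirtd keeps working on it -- so the worker
    thread lingers until the underlying call eventually returns."""
    pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
    future = pool.submit(fn, *args)
    try:
        return future.result(timeout=timeout_s)
    finally:
        # Don't block waiting for a stuck call to drain.
        pool.shutdown(wait=False)
```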
Thanks all!
--
Best Regards
James
[libvirt] [PATCH v2 00/25] Refactor the xm parser
by David Kiarie
Kiarie Kahurani (25):
src/xenxs:Refactor code parsing memory config
src/xenxs:Refactor code parsing event actions config
src/xenxs:Refactor code parsing virtual time controls config
src/xenxs:Refactor code parsing PCI devices config
src/xenxs:Refactor code parsing CPU features config
src/xenxs:Refactor code parsing disk config
src/xenxs:Refactor code parsing Vfb config
src/xenxs:Refactor code parsing OS related config
src/xenxs:Refactor code parsing Char device related config
src/xenxs:Refactor code parsing Vif
src/xenxs:Refactor code parsing emulated hardware config
src/xenxs:Refactor code parsing general config
src/xenxs:Organise functions to avoid duplication
src/xenxs:Refactor code formating general config
src/xenxs:Refactor code formating memory config
src/xenxs:Reafactor code formating virtual time config
src/xenxs:Refactor code formating event actions config
src/xenxs:Refactor code formating Vif config
src/xenxs:Refactor code formating Vfb config
src/xenxs:Refactor code formating CPU config and features
src/xenxs:Refactor code formating disk list
src/xenxs:Refactor code formating emulated devices config
src/xenxs:Refactor char devices formating code
src/xenxs:Refactor code formating OS config
src/xenxs:Export code to be reused
src/xenxs/xen_xm.c | 2009 ++++++++++++--------
src/xenxs/xen_xm.h | 6 +-
tests/xmconfigdata/test-escape-paths.cfg | 14 +-
tests/xmconfigdata/test-fullvirt-force-hpet.cfg | 12 +-
tests/xmconfigdata/test-fullvirt-force-nohpet.cfg | 12 +-
tests/xmconfigdata/test-fullvirt-localtime.cfg | 12 +-
tests/xmconfigdata/test-fullvirt-net-ioemu.cfg | 12 +-
tests/xmconfigdata/test-fullvirt-net-netfront.cfg | 12 +-
tests/xmconfigdata/test-fullvirt-new-cdrom.cfg | 12 +-
tests/xmconfigdata/test-fullvirt-old-cdrom.cfg | 12 +-
tests/xmconfigdata/test-fullvirt-parallel-tcp.cfg | 12 +-
.../test-fullvirt-serial-dev-2-ports.cfg | 12 +-
.../test-fullvirt-serial-dev-2nd-port.cfg | 12 +-
tests/xmconfigdata/test-fullvirt-serial-file.cfg | 12 +-
tests/xmconfigdata/test-fullvirt-serial-null.cfg | 12 +-
tests/xmconfigdata/test-fullvirt-serial-pipe.cfg | 12 +-
tests/xmconfigdata/test-fullvirt-serial-pty.cfg | 12 +-
tests/xmconfigdata/test-fullvirt-serial-stdio.cfg | 12 +-
.../test-fullvirt-serial-tcp-telnet.cfg | 12 +-
tests/xmconfigdata/test-fullvirt-serial-tcp.cfg | 12 +-
tests/xmconfigdata/test-fullvirt-serial-udp.cfg | 12 +-
tests/xmconfigdata/test-fullvirt-serial-unix.cfg | 12 +-
tests/xmconfigdata/test-fullvirt-sound.cfg | 14 +-
tests/xmconfigdata/test-fullvirt-usbmouse.cfg | 12 +-
tests/xmconfigdata/test-fullvirt-usbtablet.cfg | 12 +-
tests/xmconfigdata/test-fullvirt-utc.cfg | 12 +-
tests/xmconfigdata/test-no-source-cdrom.cfg | 12 +-
tests/xmconfigdata/test-pci-devs.cfg | 12 +-
28 files changed, 1326 insertions(+), 1005 deletions(-)
These patches refactor the code for the xen-xm parser, since
much of the code can be reused when writing the xen-xl parser.
Changes since the pre-post:
- I have finished refactoring all the code
- Fixed numerous memory leaks
- I have also changed the layout of the tests, since they depend
on the layout of the code, which I have changed
I would also like to hear some comments on which configs I should
support in the new xen-xl parser.
Regards,
David
--
1.8.4.5
Re: [libvirt] [Qemu-devel] [PATCH] [RFC] Add machine type pc-1.0-qemu-kvm for live migrate compatibility with qemu-kvm
by Alex Bligh
On 22 Jul 2014, at 19:43, Alex Bligh <alex(a)alex.org.uk> wrote:
> Testing has been light to date (i.e.
> can I migrate it inbound with -S without anything complaining).
I've given this quite a bit more testing today.
It works fine qemu-kvm 1.0 -> qemu-2.0+patch (cirrus vga)
It works fine qemu-2.0+patch -> qemu-2.0+patch (cirrus vga)
It doesn't (yet) work qemu-2.0+patch -> qemu-kvm 1.0 (cirrus vga).
The reason for this is (at least) that I need to emulate the
broken versioning of the mc146818rtc timer section, as writing
it correctly confuses qemu-kvm 1.0. Therefore please don't bother
testing migration back to 1.0 yet.
--
Alex Bligh
[libvirt] [libvirt-glib] [PATCH v6 1/3] libvirt-gobject-domain: Add _fetch_snapshots
by mail@baedert.org
From: Timm Bäder <mail(a)baedert.org>
This function can be used to fetch the snapshots of a domain (according
to the given GVirDomainSnapshotListFlags) and save them in a
domain-internal GHashTable. A function to access them from outside will
be added in a later patch.
---
libvirt-gobject/libvirt-gobject-domain.c | 85 ++++++++++++++++++++++++++++++++
libvirt-gobject/libvirt-gobject-domain.h | 37 ++++++++++++++
libvirt-gobject/libvirt-gobject.sym | 2 +
3 files changed, 124 insertions(+)
diff --git a/libvirt-gobject/libvirt-gobject-domain.c b/libvirt-gobject/libvirt-gobject-domain.c
index c6e30e5..708cb3b 100644
--- a/libvirt-gobject/libvirt-gobject-domain.c
+++ b/libvirt-gobject/libvirt-gobject-domain.c
@@ -38,6 +38,8 @@ struct _GVirDomainPrivate
{
virDomainPtr handle;
gchar uuid[VIR_UUID_STRING_BUFLEN];
+ GHashTable *snapshots;
+ GMutex *lock;
};
G_DEFINE_TYPE(GVirDomain, gvir_domain, G_TYPE_OBJECT);
@@ -121,6 +123,11 @@ static void gvir_domain_finalize(GObject *object)
g_debug("Finalize GVirDomain=%p", domain);
+ if (priv->snapshots) {
+ g_hash_table_unref(priv->snapshots);
+ }
+ g_mutex_free(priv->lock);
+
virDomainFree(priv->handle);
G_OBJECT_CLASS(gvir_domain_parent_class)->finalize(object);
@@ -237,6 +244,7 @@ static void gvir_domain_init(GVirDomain *domain)
g_debug("Init GVirDomain=%p", domain);
domain->priv = GVIR_DOMAIN_GET_PRIVATE(domain);
+ domain->priv->lock = g_mutex_new();
}
typedef struct virDomain GVirDomainHandle;
@@ -1514,3 +1522,80 @@ gvir_domain_create_snapshot(GVirDomain *dom,
g_free(custom_xml);
return dom_snapshot;
}
+
+
+
+/**
+ * gvir_domain_fetch_snapshots:
+ * @dom: The domain
+ * @list_flags: bitwise-OR of #GVirDomainSnapshotListFlags
+ * @cancellable: (allow-none)(transfer-none): cancellation object
+ * @error: (allow-none): Place-holder for error or NULL
+ *
+ * Returns: TRUE on success, FALSE otherwise.
+ */
+gboolean gvir_domain_fetch_snapshots(GVirDomain *dom,
+ guint list_flags,
+ GCancellable *cancellable,
+ GError **error)
+{
+ GVirDomainPrivate *priv;
+ virDomainSnapshotPtr *snapshots = NULL;
+ GVirDomainSnapshot *snap;
+ GHashTable *snap_table;
+ int n_snaps = 0;
+ int i;
+ gboolean ret = FALSE;
+
+ g_return_val_if_fail(GVIR_IS_DOMAIN(dom), FALSE);
+ g_return_val_if_fail((error == NULL) || (*error == NULL), FALSE);
+
+ priv = dom->priv;
+
+ snap_table = g_hash_table_new_full(g_str_hash,
+ g_str_equal,
+ NULL,
+ g_object_unref);
+
+
+ n_snaps = virDomainListAllSnapshots(priv->handle, &snapshots, list_flags);
+
+ if (g_cancellable_set_error_if_cancelled(cancellable, error)) {
+ goto cleanup;
+ }
+
+ if (n_snaps < 0) {
+ gvir_set_error(error, GVIR_DOMAIN_ERROR, 0,
+ "Unable to fetch snapshots of %s",
+ gvir_domain_get_name(dom));
+ goto cleanup;
+ }
+
+ for (i = 0; i < n_snaps; i ++) {
+ if (g_cancellable_set_error_if_cancelled(cancellable, error)) {
+ goto cleanup;
+ }
+ snap = GVIR_DOMAIN_SNAPSHOT(g_object_new(GVIR_TYPE_DOMAIN_SNAPSHOT,
+ "handle", snapshots[i],
+ NULL));
+ g_hash_table_insert(snap_table,
+ (gpointer)gvir_domain_snapshot_get_name(snap),
+ snap);
+ }
+
+
+ g_mutex_lock(priv->lock);
+ if (priv->snapshots != NULL)
+ g_hash_table_unref(priv->snapshots);
+ priv->snapshots = snap_table;
+ snap_table = NULL;
+ g_mutex_unlock(priv->lock);
+
+ ret = TRUE;
+
+cleanup:
+ free(snapshots);
+ if (snap_table != NULL)
+ g_hash_table_unref(snap_table);
+ return ret;
+}
diff --git a/libvirt-gobject/libvirt-gobject-domain.h b/libvirt-gobject/libvirt-gobject-domain.h
index 38d3458..8c1a8e5 100644
--- a/libvirt-gobject/libvirt-gobject-domain.h
+++ b/libvirt-gobject/libvirt-gobject-domain.h
@@ -183,6 +183,39 @@ typedef enum {
GVIR_DOMAIN_REBOOT_GUEST_AGENT = VIR_DOMAIN_REBOOT_GUEST_AGENT,
} GVirDomainRebootFlags;
+/**
+ * GVirDomainSnapshotListFlags:
+ * @GVIR_DOMAIN_SNAPSHOT_LIST_ALL: List all snapshots
+ * @GVIR_DOMAIN_SNAPSHOT_LIST_DESCENDANTS: List all descendants, not just
+ * children, when listing a snapshot.
+ * For historical reasons, groups do not use contiguous bits.
+ * @GVIR_DOMAIN_SNAPSHOT_LIST_ROOTS: Filter by snapshots with no parents, when listing a domain
+ * @GVIR_DOMAIN_SNAPSHOT_LIST_METADATA: Filter by snapshots which have metadata
+ * @GVIR_DOMAIN_SNAPSHOT_LIST_LEAVES: Filter by snapshots with no children
+ * @GVIR_DOMAIN_SNAPSHOT_LIST_NO_LEAVES: Filter by snapshots that have children
+ * @GVIR_DOMAIN_SNAPSHOT_LIST_NO_METADATA: Filter by snapshots with no metadata
+ * @GVIR_DOMAIN_SNAPSHOT_LIST_INACTIVE: Filter by snapshots taken while guest was shut off
+ * @GVIR_DOMAIN_SNAPSHOT_LIST_ACTIVE: Filter by snapshots taken while guest was active, and with memory state
+ * @GVIR_DOMAIN_SNAPSHOT_LIST_DISK_ONLY: Filter by snapshots taken while guest was active, but without memory state
+ * @GVIR_DOMAIN_SNAPSHOT_LIST_INTERNAL: Filter by snapshots stored internal to disk images
+ * @GVIR_DOMAIN_SNAPSHOT_LIST_EXTERNAL: Filter by snapshots that use files external to disk images
+ */
+typedef enum {
+ GVIR_DOMAIN_SNAPSHOT_LIST_ALL = 0,
+ GVIR_DOMAIN_SNAPSHOT_LIST_DESCENDANTS = VIR_DOMAIN_SNAPSHOT_LIST_DESCENDANTS,
+ GVIR_DOMAIN_SNAPSHOT_LIST_ROOTS = VIR_DOMAIN_SNAPSHOT_LIST_ROOTS,
+ GVIR_DOMAIN_SNAPSHOT_LIST_METADATA = VIR_DOMAIN_SNAPSHOT_LIST_METADATA,
+ GVIR_DOMAIN_SNAPSHOT_LIST_LEAVES = VIR_DOMAIN_SNAPSHOT_LIST_LEAVES,
+ GVIR_DOMAIN_SNAPSHOT_LIST_NO_LEAVES = VIR_DOMAIN_SNAPSHOT_LIST_NO_LEAVES,
+ GVIR_DOMAIN_SNAPSHOT_LIST_NO_METADATA = VIR_DOMAIN_SNAPSHOT_LIST_NO_METADATA,
+ GVIR_DOMAIN_SNAPSHOT_LIST_INACTIVE = VIR_DOMAIN_SNAPSHOT_LIST_INACTIVE,
+ GVIR_DOMAIN_SNAPSHOT_LIST_ACTIVE = VIR_DOMAIN_SNAPSHOT_LIST_ACTIVE,
+ GVIR_DOMAIN_SNAPSHOT_LIST_DISK_ONLY = VIR_DOMAIN_SNAPSHOT_LIST_DISK_ONLY,
+ GVIR_DOMAIN_SNAPSHOT_LIST_INTERNAL = VIR_DOMAIN_SNAPSHOT_LIST_INTERNAL,
+ GVIR_DOMAIN_SNAPSHOT_LIST_EXTERNAL = VIR_DOMAIN_SNAPSHOT_LIST_EXTERNAL
+} GVirDomainSnapshotListFlags;
+
+
typedef struct _GVirDomainInfo GVirDomainInfo;
struct _GVirDomainInfo
{
@@ -330,6 +363,10 @@ gvir_domain_create_snapshot(GVirDomain *dom,
guint flags,
GError **err);
+gboolean gvir_domain_fetch_snapshots(GVirDomain *dom,
+ guint list_flags,
+ GCancellable *cancellable,
+ GError **error);
G_END_DECLS
#endif /* __LIBVIRT_GOBJECT_DOMAIN_H__ */
diff --git a/libvirt-gobject/libvirt-gobject.sym b/libvirt-gobject/libvirt-gobject.sym
index b781cc6..781310f 100644
--- a/libvirt-gobject/libvirt-gobject.sym
+++ b/libvirt-gobject/libvirt-gobject.sym
@@ -236,7 +236,9 @@ LIBVIRT_GOBJECT_0.1.5 {
LIBVIRT_GOBJECT_0.1.9 {
global:
+ gvir_domain_fetch_snapshots;
gvir_domain_snapshot_delete;
+ gvir_domain_snapshot_list_flags_get_type;
} LIBVIRT_GOBJECT_0.1.5;
# .... define new API here using predicted next version number ....
--
2.0.3