[libvirt] Another curiosity question
by Gene Czarcinski
All documentation, and the libvirt software itself, is very insistent that dhcp4
will be supported on one and only one IPv4 subnetwork. Why is this true?
Certainly dnsmasq supports multiple dhcp-range definitions, and the
actual parameters passed to dnsmasq would be more or less the same! I
could understand some restriction if there were systems libvirt supports
which do not support multiple dhcp ranges per interface, but not the blanket
restriction.
Thus, this is OK:
------------------------------------------------------
<ip address='172.16.6.1' prefix='16'>
  <dhcp>
    <range start='172.16.6.128' end='172.16.6.254' />
    <range start='172.16.7.128' end='172.16.7.254' />
  </dhcp>
</ip>
------------------------------------------------------
but this is not:
------------------------------------------------------
<ip address='172.16.6.1' prefix='16'>
  <dhcp>
    <range start='172.16.6.128' end='172.16.6.254' />
    <range start='172.16.7.128' end='172.16.7.254' />
  </dhcp>
</ip>
<ip address='172.16.7.1' prefix='16'>
  <dhcp>
    <range start='172.16.7.128' end='172.16.7.254' />
  </dhcp>
</ip>
------------------------------------------------------
In both cases, the parameters passed to dnsmasq are:
----------------------------------------------------
dhcp-range=172.16.6.128,172.16.6.254
dhcp-range=172.16.7.128,172.16.7.254
----------------------------------------------------
and, for dhcp, dnsmasq does not care about the specific addresses, since
it does its own filtering by listening on 0.0.0.0:67/68.
Comments?
Gene
12 years, 1 month
[libvirt] [PATCH] virsh: Fix POD syntax
by Jiri Denemark
The first two hunks fix an "Unterminated I<...> sequence" error and the
last one fixes an "'=item' outside of any '=over'" error.
---
tools/virsh.pod | 6 ++----
1 file changed, 2 insertions(+), 4 deletions(-)
diff --git a/tools/virsh.pod b/tools/virsh.pod
index 61822bb..07d6a67 100644
--- a/tools/virsh.pod
+++ b/tools/virsh.pod
@@ -737,7 +737,7 @@ I<bandwidth> specifies copying bandwidth limit in MiB/s, although for
qemu, it may be non-zero only for an online domain.
=item B<blockcopy> I<domain> I<path> I<dest> [I<bandwidth>] [I<--shallow>]
-[I<--reuse-external>] [I<--raw>] [I<--wait> [I<--verbose]
+[I<--reuse-external>] [I<--raw>] [I<--wait> [I<--verbose>]
[{I<--pivot> | I<--finish>}] [I<--timeout> B<seconds>] [I<--async>]]
Copy a disk backing image chain to I<dest>. By default, this command
@@ -778,7 +778,7 @@ I<path> specifies fully-qualified path of the disk.
I<bandwidth> specifies copying bandwidth limit in MiB/s.
=item B<blockpull> I<domain> I<path> [I<bandwidth>] [I<base>]
-[I<--wait> [I<--verbose>] [I<--timeout> B<seconds>] [I<--async]]
+[I<--wait> [I<--verbose>] [I<--timeout> B<seconds>] [I<--async>]]
Populate a disk from its backing image chain. By default, this command
flattens the entire chain; but if I<base> is specified, containing the
@@ -2943,8 +2943,6 @@ and the monitor uses QMP, then the output will be pretty-printed. If more
than one argument is provided for I<command>, they are concatenated with a
space in between before passing the single command to the monitor.
-=back
-
=item B<qemu-agent-command> I<domain> [I<--timeout> I<seconds> | I<--async> | I<--block>] I<command>...
Send an arbitrary guest agent command I<command> to domain I<domain> through
--
1.7.12.4
[libvirt] [PATCH v2 0/2] Qemu/Gluster support in Libvirt
by Harsh Prateek Bora
This patchset provides support for Gluster protocol based network disks.
Changelog:
v2:
- Addressed review comments by Jiri
- Updated patchset as per new URI spec
Ref: http://lists.gnu.org/archive/html/qemu-devel/2012-09/msg05199.html
v1:
- Initial prototype
Harsh Prateek Bora (2):
Qemu/Gluster: Add Gluster protocol as supported network disk formats.
tests: Add tests for gluster protocol based network disks support
docs/schemas/domaincommon.rng | 8 +
src/conf/domain_conf.c | 28 ++-
src/conf/domain_conf.h | 11 ++
src/libvirt_private.syms | 2 +
src/qemu/qemu_command.c | 204 +++++++++++++++++++++
tests/qemuargv2xmltest.c | 1 +
.../qemuxml2argv-disk-drive-network-gluster.args | 1 +
.../qemuxml2argv-disk-drive-network-gluster.xml | 33 ++++
tests/qemuxml2argvtest.c | 2 +
9 files changed, 288 insertions(+), 2 deletions(-)
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-gluster.args
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-gluster.xml
--
1.7.11.4
[libvirt] [PATCH] parallels: fix build for some older compilers
by Laine Stump
Found this when building on RHEL5:
parallels/parallels_storage.c: In function 'parallelsStorageOpen':
parallels/parallels_storage.c:180: error: 'for' loop initial declaration used outside C99 mode
(and similar error in parallels_driver.c). This was in spite of
configuring with "-Wno-error".
---
Pushed under the build-breaker rule.
src/parallels/parallels_driver.c | 6 ++++--
src/parallels/parallels_storage.c | 4 +++-
2 files changed, 7 insertions(+), 3 deletions(-)
diff --git a/src/parallels/parallels_driver.c b/src/parallels/parallels_driver.c
index e8af89c..62db626 100644
--- a/src/parallels/parallels_driver.c
+++ b/src/parallels/parallels_driver.c
@@ -1256,14 +1256,16 @@ static int
parallelsApplySerialParams(virDomainChrDefPtr *oldserials, int nold,
virDomainChrDefPtr *newserials, int nnew)
{
+ int i, j;
+
if (nold != nnew)
goto error;
- for (int i = 0; i < nold; i++) {
+ for (i = 0; i < nold; i++) {
virDomainChrDefPtr oldserial = oldserials[i];
virDomainChrDefPtr newserial = NULL;
- for (int j = 0; j < nnew; j++) {
+ for (j = 0; j < nnew; j++) {
if (newserials[j]->target.port == oldserial->target.port) {
newserial = newserials[j];
break;
diff --git a/src/parallels/parallels_storage.c b/src/parallels/parallels_storage.c
index 112e288..76d885c 100644
--- a/src/parallels/parallels_storage.c
+++ b/src/parallels/parallels_storage.c
@@ -123,6 +123,8 @@ parallelsStorageOpen(virConnectPtr conn,
virStorageDriverStatePtr storageState;
int privileged = (geteuid() == 0);
parallelsConnPtr privconn = conn->privateData;
+ size_t i;
+
virCheckFlags(VIR_CONNECT_RO, VIR_DRV_OPEN_ERROR);
if (STRNEQ(conn->driver->name, "Parallels"))
@@ -176,7 +178,7 @@ parallelsStorageOpen(virConnectPtr conn,
goto error;
}
- for (size_t i = 0; i < privconn->pools.count; i++) {
+ for (i = 0; i < privconn->pools.count; i++) {
virStoragePoolObjLock(privconn->pools.objs[i]);
virStoragePoolObjPtr pool;
--
1.7.11.7
[libvirt] [PATCH] cpustat: fix regression when cpus are offline
by Eric Blake
It turns out that the cpuacct results properly account for offline
cpus, and always returns results for every possible cpu, not just
the online ones. So there is no need to check the map of online
cpus in the first place, merely only a need to know the maximum
possible cpu. Meanwhile, virNodeGetCPUBitmap had a subtle change
from returning the maximum id to instead returning the width of
the bitmap (one larger than the maximum id), which made this code
encounter some off-by-one logic leading to bad error messages when
a cpu was offline:
$ virsh cpu-stats dom
error: Failed to virDomainGetCPUStats()
error: An error occurred, but the cause is unknown
* src/qemu/qemu_driver.c (qemuDomainGetPercpuStats): Drop
pointless check for cpumap changes, and use correct number of
cpus.
---
Fixes the regression noticed here:
https://www.redhat.com/archives/libvir-list/2012-October/msg01508.html
src/qemu/qemu_driver.c | 26 +++-----------------------
1 file changed, 3 insertions(+), 23 deletions(-)
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 18be7d9..f817319 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -13597,7 +13597,6 @@ qemuDomainGetPercpuStats(virDomainPtr domain,
unsigned int ncpus)
{
virBitmapPtr map = NULL;
- virBitmapPtr map2 = NULL;
int rv = -1;
int i, id, max_id;
char *pos;
@@ -13609,7 +13608,6 @@ qemuDomainGetPercpuStats(virDomainPtr domain,
virTypedParameterPtr ent;
int param_idx;
unsigned long long cpu_time;
- bool result;
/* return the number of supported params */
if (nparams == 0 && ncpus != 0)
@@ -13621,7 +13619,7 @@ qemuDomainGetPercpuStats(virDomainPtr domain,
return rv;
if (ncpus == 0) { /* returns max cpu ID */
- rv = max_id + 1;
+ rv = max_id;
goto cleanup;
}
@@ -13648,11 +13646,7 @@ qemuDomainGetPercpuStats(virDomainPtr domain,
id = start_cpu + ncpus - 1;
for (i = 0; i <= id; i++) {
- if (virBitmapGetBit(map, i, &result) < 0)
- goto cleanup;
- if (!result) {
- cpu_time = 0;
- } else if (virStrToLong_ull(pos, &pos, 10, &cpu_time) < 0) {
+ if (virStrToLong_ull(pos, &pos, 10, &cpu_time) < 0) {
virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
_("cpuacct parse error"));
goto cleanup;
@@ -13680,22 +13674,9 @@ qemuDomainGetPercpuStats(virDomainPtr domain,
if (getSumVcpuPercpuStats(group, priv->nvcpupids, sum_cpu_time, n) < 0)
goto cleanup;
- /* Check that the mapping of online cpus didn't change mid-parse. */
- map2 = nodeGetCPUBitmap(domain->conn, &max_id);
- if (!map2 || !virBitmapEqual(map, map2)) {
- virReportError(VIR_ERR_OPERATION_INVALID, "%s",
- _("the set of online cpus changed while reading"));
- goto cleanup;
- }
-
sum_cpu_pos = sum_cpu_time;
for (i = 0; i <= id; i++) {
- if (virBitmapGetBit(map, i, &result) < 0)
- goto cleanup;
- if (!result)
- cpu_time = 0;
- else
- cpu_time = *(sum_cpu_pos++);
+ cpu_time = *(sum_cpu_pos++);
if (i < start_cpu)
continue;
if (virTypedParameterAssign(&params[(i - start_cpu) * nparams +
@@ -13711,7 +13692,6 @@ cleanup:
VIR_FREE(sum_cpu_time);
VIR_FREE(buf);
virBitmapFree(map);
- virBitmapFree(map2);
return rv;
}
--
1.7.11.7
[libvirt] how to delete storage-pool entirely
by yue
It seems that if pools have the same path, even with different UUIDs, they will be recognized as one pool.
If I have defined a pool with a particular path, a later attempt to define a new one (same path, different UUID) will fail. I then delete (rm -f) the pool XML file and define the pool again, and it fails too. So is there a cache of all defined pools that persists until libvirtd is next started?
My questions:
1. How do I delete a pool entirely, including the pool XML file and any possible cache (without restarting libvirtd)?
2. Through the libvirt API, how do I find the pool with a particular path, without needing to define it again?
thanks
[libvirt] [PATCH] [trivial] documentation: HTML tag fix
by Philipp Hahn
Replace '%' by '&' for correct escaping of '>' in Domain specification.
Signed-off-by: Philipp Hahn <hahn(a)univention.de>
---
docs/formatdomain.html.in | 2 +-
1 files changed, 1 insertions(+), 1 deletions(-)
diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in
index 2417943..c8da33d 100644
--- a/docs/formatdomain.html.in
+++ b/docs/formatdomain.html.in
@@ -3102,7 +3102,7 @@ qemu-kvm -net nic,model=? /dev/null
provide their own way (outside of libvirt) to tag guest traffic
onto specific vlans.) To allow for specification of multiple
tags (in the case of vlan trunking), a
- subelement, <code><tag%gt;</code>, specifies which vlan tag
+ subelement, <code><tag></code>, specifies which vlan tag
to use (for example: <code><tag id='42'/></code>. If an
interface has more than one <code><vlan></code> element
defined, it is assumed that the user wants to do VLAN trunking
--
1.7.1
[libvirt] [PATCH v3 0/2] Qemu/Gluster support in Libvirt
by Harsh Prateek Bora
This patchset provides support for Gluster protocol based network disks.
Changelog:
v3:
- RNG schema updated as required for unix transport [Paolo]
- introduced another new attribute 'socket' for unix transport [Paolo]
- Uses virURIFormat and virURIParse for URI parsing. [danpb]
- updated documentation as required. [Jirka]
v2:
- Addressed review comments by Jiri
- Updated patchset as per new URI spec
Ref: http://lists.gnu.org/archive/html/qemu-devel/2012-09/msg05199.html
v1:
- Initial prototype
Harsh Prateek Bora (2):
Qemu/Gluster: Add Gluster protocol as supported network disk formats.
tests: Add tests for gluster protocol based network disks support
docs/formatdomain.html.in | 24 +++-
docs/schemas/domaincommon.rng | 35 ++++-
src/conf/domain_conf.c | 72 ++++++++--
src/conf/domain_conf.h | 12 ++
src/libvirt_private.syms | 2 +
src/qemu/qemu_command.c | 155 +++++++++++++++++++++
tests/qemuargv2xmltest.c | 1 +
.../qemuxml2argv-disk-drive-network-gluster.args | 1 +
.../qemuxml2argv-disk-drive-network-gluster.xml | 35 +++++
tests/qemuxml2argvtest.c | 2 +
10 files changed, 311 insertions(+), 28 deletions(-)
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-gluster.args
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-disk-drive-network-gluster.xml
--
1.7.11.7
[libvirt] [PATCH] Revert "qemu: Do not require hostuuid in migration cookie"
by Jiri Denemark
This reverts commit 8d75e47edefdd77b86df1ee9af3cd5001d456f73.
Libvirt was never released with support for migration cookies without
hostuuid.
---
src/qemu/qemu_migration.c | 31 +++++++++++++++----------------
1 file changed, 15 insertions(+), 16 deletions(-)
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index c15a75d..c4ac150 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -783,23 +783,22 @@ qemuMigrationCookieXMLParse(qemuMigrationCookiePtr mig,
}
if (!(tmp = virXPathString("string(./hostuuid[1])", ctxt))) {
- VIR_WARN("Missing hostuuid element in migration data; cannot "
- "detect migration to the same host");
- } else {
- if (virUUIDParse(tmp, mig->remoteHostuuid) < 0) {
- virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
- _("malformed hostuuid element in migration data"));
- goto error;
- }
- if (memcmp(mig->remoteHostuuid, mig->localHostuuid,
- VIR_UUID_BUFLEN) == 0) {
- virReportError(VIR_ERR_INTERNAL_ERROR,
- _("Attempt to migrate guest to the same host %s"),
- tmp);
- goto error;
- }
- VIR_FREE(tmp);
+ virReportError(VIR_ERR_INTERNAL_ERROR,
+ "%s", _("missing hostuuid element in migration data"));
+ goto error;
}
+ if (virUUIDParse(tmp, mig->remoteHostuuid) < 0) {
+ virReportError(VIR_ERR_INTERNAL_ERROR,
+ "%s", _("malformed hostuuid element in migration data"));
+ goto error;
+ }
+ if (memcmp(mig->remoteHostuuid, mig->localHostuuid, VIR_UUID_BUFLEN) == 0) {
+ virReportError(VIR_ERR_INTERNAL_ERROR,
+ _("Attempt to migrate guest to the same host %s"),
+ tmp);
+ goto error;
+ }
+ VIR_FREE(tmp);
/* Check to ensure all mandatory features from XML are also
* present in 'flags' */
--
1.7.12.4
[libvirt] [PATCH] qemu: Do not require hostuuid in migration cookie
by Jiri Denemark
Having hostuuid in migration cookie is a nice bonus since it provides an
easy way of detecting migration to the same host. However, requiring it
breaks backward compatibility with older libvirt releases.
---
src/qemu/qemu_migration.c | 31 ++++++++++++++++---------------
1 file changed, 16 insertions(+), 15 deletions(-)
diff --git a/src/qemu/qemu_migration.c b/src/qemu/qemu_migration.c
index a2402ce..487182e 100644
--- a/src/qemu/qemu_migration.c
+++ b/src/qemu/qemu_migration.c
@@ -576,22 +576,23 @@ qemuMigrationCookieXMLParse(qemuMigrationCookiePtr mig,
}
if (!(tmp = virXPathString("string(./hostuuid[1])", ctxt))) {
- virReportError(VIR_ERR_INTERNAL_ERROR,
- "%s", _("missing hostuuid element in migration data"));
- goto error;
- }
- if (virUUIDParse(tmp, mig->remoteHostuuid) < 0) {
- virReportError(VIR_ERR_INTERNAL_ERROR,
- "%s", _("malformed hostuuid element in migration data"));
- goto error;
- }
- if (memcmp(mig->remoteHostuuid, mig->localHostuuid, VIR_UUID_BUFLEN) == 0) {
- virReportError(VIR_ERR_INTERNAL_ERROR,
- _("Attempt to migrate guest to the same host %s"),
- tmp);
- goto error;
+ VIR_WARN("Missing hostuuid element in migration data; cannot "
+ "detect migration to the same host");
+ } else {
+ if (virUUIDParse(tmp, mig->remoteHostuuid) < 0) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+ _("malformed hostuuid element in migration data"));
+ goto error;
+ }
+ if (memcmp(mig->remoteHostuuid, mig->localHostuuid,
+ VIR_UUID_BUFLEN) == 0) {
+ virReportError(VIR_ERR_INTERNAL_ERROR,
+ _("Attempt to migrate guest to the same host %s"),
+ tmp);
+ goto error;
+ }
+ VIR_FREE(tmp);
}
- VIR_FREE(tmp);
/* Check to ensure all mandatory features from XML are also
* present in 'flags' */
--
1.7.12.4