[libvirt] [PATCH] virt-host-validate: require fuse for LXC if compiled in
by Guido Günther
Domains fail to start when the fuse module is not loaded, with errors like:
error: internal error: guest failed to start: fuse: device not found, try 'modprobe fuse' first
Failure in libvirt_lxc startup: no error
so check for fuse as well.
References: https://ci.debian.net/data/autopkgtest/unstable/amd64/libv/libvirt/201710...
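For context, the check added below essentially tests whether the fuse sysfs entry exists. A minimal standalone sketch of the same test using plain POSIX access() (not the actual virHostValidateDeviceExists helper, whose reporting and return conventions are libvirt-specific):

#include <stdio.h>
#include <unistd.h>

/* Succeed only if the fuse sysfs entry is present, i.e. the 'fuse'
 * kernel module is loaded. */
int main(void)
{
    if (access("/sys/fs/fuse/connections", F_OK) != 0) {
        fprintf(stderr, "FAIL: load the 'fuse' module to enable /proc/ overrides\n");
        return 1;
    }
    printf("PASS: fuse is available\n");
    return 0;
}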
---
tools/virt-host-validate-lxc.c | 7 +++++++
1 file changed, 7 insertions(+)
diff --git a/tools/virt-host-validate-lxc.c b/tools/virt-host-validate-lxc.c
index 2b906cc88a..64d9279c30 100644
--- a/tools/virt-host-validate-lxc.c
+++ b/tools/virt-host-validate-lxc.c
@@ -93,5 +93,12 @@ int virHostValidateLXC(void)
"BLK_CGROUP") < 0)
ret = -1;
+#if WITH_FUSE
+ if (virHostValidateDeviceExists("LXC", "/sys/fs/fuse/connections",
+ VIR_HOST_VALIDATE_FAIL,
+ _("Load the 'fuse' module to enable /proc/ overrides")) < 0)
+ ret = -1;
+#endif
+
return ret;
}
--
2.14.2
[libvirt] [PATCH v3] qemu: add the print of page size in cmd domjobinfo
by Chao Fan
The command "info migrate" of qemu outputs the dirty-pages-rate during
migration, but page size is different in different architectures. So
page size should be output to calculate dirty pages in bytes.
Page size is already implemented with commit
030ce1f8612215fcbe9d353dfeaeb2937f8e3f94 in qemu.
Now Implement the counter-part in libvirt.
Signed-off-by: Chao Fan <fanc.fnst(a)cn.fujitsu.com>
Signed-off-by: Li Zhijian <lizhijian(a)cn.fujitsu.com>
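For illustration, a sketch of how a management client could combine the existing dirty-pages-rate statistic with the page size exposed by this patch to obtain a dirty-memory rate in bytes per second (assuming the new VIR_DOMAIN_JOB_MEMORY_PAGE_SIZE field is present; error handling trimmed):

#include <stdio.h>
#include <libvirt/libvirt.h>

/* Combine dirty-pages-rate with the new page-size field to report the
 * dirty-memory rate in bytes per second during a live migration. */
static void
printDirtyBytesRate(virDomainPtr dom)
{
    virTypedParameterPtr params = NULL;
    int nparams = 0;
    int type;
    unsigned long long rate = 0;
    unsigned long long pagesize = 0;

    if (virDomainGetJobStats(dom, &type, &params, &nparams, 0) < 0)
        return;

    if (virTypedParamsGetULLong(params, nparams,
                                VIR_DOMAIN_JOB_MEMORY_DIRTY_RATE, &rate) > 0 &&
        virTypedParamsGetULLong(params, nparams,
                                VIR_DOMAIN_JOB_MEMORY_PAGE_SIZE, &pagesize) > 0)
        printf("dirty rate: %llu bytes/s\n", rate * pagesize);

    virTypedParamsFree(params, nparams);
}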
---
v2 -> v3:
Follow the suggestion of John Ferlan:
1. Improve a judgment logic when failing to get page size.
v1 -> v2:
Follow the suggestion of John Ferlan:
1. Drop the fix for unrelated coding style problem.
2. Fix typo.
3. Improve a judgment logic when failing to get page size.
---
include/libvirt/libvirt-domain.h | 7 +++++++
src/qemu/qemu_domain.c | 6 ++++++
src/qemu/qemu_migration_cookie.c | 7 +++++++
src/qemu/qemu_monitor.h | 1 +
src/qemu/qemu_monitor_json.c | 2 ++
tools/virsh-domain.c | 8 ++++++++
6 files changed, 31 insertions(+)
diff --git a/include/libvirt/libvirt-domain.h b/include/libvirt/libvirt-domain.h
index 030a62c43..1f4ddcf66 100644
--- a/include/libvirt/libvirt-domain.h
+++ b/include/libvirt/libvirt-domain.h
@@ -3336,6 +3336,13 @@ typedef enum {
# define VIR_DOMAIN_JOB_MEMORY_DIRTY_RATE "memory_dirty_rate"
/**
+ * VIR_DOMAIN_JOB_MEMORY_PAGE_SIZE:
+ *
+ * virDomainGetJobStats field: page size of the memory in this domain
+ */
+# define VIR_DOMAIN_JOB_MEMORY_PAGE_SIZE "page_size"
+
+/**
* VIR_DOMAIN_JOB_MEMORY_ITERATION:
*
* virDomainGetJobStats field: current iteration over domain's memory
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index e395c4ddf..e95e427e5 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -570,6 +570,12 @@ qemuDomainJobInfoToParams(qemuDomainJobInfoPtr jobInfo,
stats->ram_iteration) < 0)
goto error;
+ if (stats->ram_page_size > 0 &&
+ virTypedParamsAddULLong(&par, &npar, &maxpar,
+ VIR_DOMAIN_JOB_MEMORY_PAGE_SIZE,
+ stats->ram_page_size) < 0)
+ goto error;
+
if (virTypedParamsAddULLong(&par, &npar, &maxpar,
VIR_DOMAIN_JOB_DISK_TOTAL,
stats->disk_total +
diff --git a/src/qemu/qemu_migration_cookie.c b/src/qemu/qemu_migration_cookie.c
index eef40a6cd..bc6a8dc55 100644
--- a/src/qemu/qemu_migration_cookie.c
+++ b/src/qemu/qemu_migration_cookie.c
@@ -654,6 +654,10 @@ qemuMigrationCookieStatisticsXMLFormat(virBufferPtr buf,
stats->ram_iteration);
virBufferAsprintf(buf, "<%1$s>%2$llu</%1$s>\n",
+ VIR_DOMAIN_JOB_MEMORY_PAGE_SIZE,
+ stats->ram_page_size);
+
+ virBufferAsprintf(buf, "<%1$s>%2$llu</%1$s>\n",
VIR_DOMAIN_JOB_DISK_TOTAL,
stats->disk_total);
virBufferAsprintf(buf, "<%1$s>%2$llu</%1$s>\n",
@@ -1014,6 +1018,9 @@ qemuMigrationCookieStatisticsXMLParse(xmlXPathContextPtr ctxt)
virXPathULongLong("string(./" VIR_DOMAIN_JOB_MEMORY_ITERATION "[1])",
ctxt, &stats->ram_iteration);
+ virXPathULongLong("string(./" VIR_DOMAIN_JOB_MEMORY_PAGE_SIZE "[1])",
+ ctxt, &stats->ram_page_size);
+
virXPathULongLong("string(./" VIR_DOMAIN_JOB_DISK_TOTAL "[1])",
ctxt, &stats->disk_total);
virXPathULongLong("string(./" VIR_DOMAIN_JOB_DISK_PROCESSED "[1])",
diff --git a/src/qemu/qemu_monitor.h b/src/qemu/qemu_monitor.h
index 6414d2483..1e3322433 100644
--- a/src/qemu/qemu_monitor.h
+++ b/src/qemu/qemu_monitor.h
@@ -677,6 +677,7 @@ struct _qemuMonitorMigrationStats {
unsigned long long ram_normal;
unsigned long long ram_normal_bytes;
unsigned long long ram_dirty_rate;
+ unsigned long long ram_page_size;
unsigned long long ram_iteration;
unsigned long long disk_transferred;
diff --git a/src/qemu/qemu_monitor_json.c b/src/qemu/qemu_monitor_json.c
index c63d250d3..70c895a35 100644
--- a/src/qemu/qemu_monitor_json.c
+++ b/src/qemu/qemu_monitor_json.c
@@ -2892,6 +2892,8 @@ qemuMonitorJSONGetMigrationStatsReply(virJSONValuePtr reply,
&stats->ram_normal_bytes));
ignore_value(virJSONValueObjectGetNumberUlong(ram, "dirty-pages-rate",
&stats->ram_dirty_rate));
+ ignore_value(virJSONValueObjectGetNumberUlong(ram, "page-size",
+ &stats->ram_page_size));
ignore_value(virJSONValueObjectGetNumberUlong(ram, "dirty-sync-count",
&stats->ram_iteration));
diff --git a/tools/virsh-domain.c b/tools/virsh-domain.c
index a3f3b7c7b..a50713d6e 100644
--- a/tools/virsh-domain.c
+++ b/tools/virsh-domain.c
@@ -6021,6 +6021,14 @@ cmdDomjobinfo(vshControl *ctl, const vshCmd *cmd)
}
if ((rc = virTypedParamsGetULLong(params, nparams,
+ VIR_DOMAIN_JOB_MEMORY_PAGE_SIZE,
+ &value)) < 0) {
+ goto save_error;
+ } else if (rc) {
+ vshPrint(ctl, "%-17s %-12llu bytes\n", _("Page size:"), value);
+ }
+
+ if ((rc = virTypedParamsGetULLong(params, nparams,
VIR_DOMAIN_JOB_MEMORY_ITERATION,
&value)) < 0) {
goto save_error;
--
2.13.5
[libvirt] [PATCH v4 0/5] numa: describe sibling nodes distances
by Wim Ten Have
From: Wim ten Have <wim.ten.have(a)oracle.com>
This patch extends guest domain administration adding support to advertise
node sibling distances when configuring HVM numa guests.
NUMA (non-uniform memory access) is a method of configuring a cluster of
nodes within a single multiprocessing system such that processor-local
memory is shared amongst the nodes, improving performance and the ability
of the system to be expanded.
A NUMA system could be illustrated as shown below. Within this 4-node
system, every socket is equipped with its own distinct memory. The whole
typically resembles an SMP (symmetric multiprocessing) system: a
"tightly-coupled," "share everything" system in which multiple processors
work under a single operating system and can access each other's memory
over multiple "Bus Interconnect" paths.
+-----+-----+-----+ +-----+-----+-----+
| M | CPU | CPU | | CPU | CPU | M |
| E | | | | | | E |
| M +- Socket0 -+ +- Socket3 -+ M |
| O | | | | | | O |
| R | CPU | CPU <---------> CPU | CPU | R |
| Y | | | | | | Y |
+-----+--^--+-----+ +-----+--^--+-----+
| |
| Bus Interconnect |
| |
+-----+--v--+-----+ +-----+--v--+-----+
| M | | | | | | M |
| E | CPU | CPU <---------> CPU | CPU | E |
| M | | | | | | M |
| O +- Socket1 -+ +- Socket2 -+ O |
| R | | | | | | R |
| Y | CPU | CPU | | CPU | CPU | Y |
+-----+-----+-----+ +-----+-----+-----+
In contrast, a flat SMP system (not illustrated) has the limitation that
the bus (data and address path) can easily become a performance bottleneck
under high activity as sockets are added.
NUMA adds an intermediate level of memory shared amongst a few cores per
socket as illustrated above, so that data accesses do not have to travel
over a single bus.
Unfortunately the way NUMA does this adds its own limitations. This,
as visualized in the illustration above, happens when data is stored in
memory associated with Socket2 and is accessed by a CPU (core) in Socket0.
The processors use the "Bus Interconnect" to create gateways between the
sockets (nodes) enabling inter-socket access to memory. These "Bus
Interconnect" hops add data access delays when a CPU (core) accesses
memory associated with a remote socket (node).
For terminology, we refer to sockets as "nodes", where access to each
other's distinct resources such as memory makes them "siblings" with a
designated "distance" between them. A specific design is described in the
ACPI (Advanced Configuration and Power Interface) specification, in the
chapter explaining the system's SLIT (System Locality Distance
Information Table).
These patches extend core libvirt's XML description of a virtual machine's
hardware to include NUMA distance information for sibling nodes, which
is then passed to Xen guests via libxl. QEMU recently gained support for
constructing the SLIT with commit 0f203430dd ("numa: Allow setting NUMA
distance for different NUMA nodes"), hence these core libvirt extensions
can also help other drivers to support this feature.
The XML changes allow the <distances> amongst <sibling> node identifiers
to be described per <cell> (node/socket), propagate this information into
the NUMA domain handling, and finally add support for it to libxl.
[below is an example illustrating a 4 node/socket <cell> setup]
<cpu>
  <numa>
    <cell id='0' cpus='0,4-7' memory='2097152' unit='KiB'>
      <distances>
        <sibling id='0' value='10'/>
        <sibling id='1' value='21'/>
        <sibling id='2' value='31'/>
        <sibling id='3' value='41'/>
      </distances>
    </cell>
    <cell id='1' cpus='1,8-10,12-15' memory='2097152' unit='KiB'>
      <distances>
        <sibling id='0' value='21'/>
        <sibling id='1' value='10'/>
        <sibling id='2' value='21'/>
        <sibling id='3' value='31'/>
      </distances>
    </cell>
    <cell id='2' cpus='2,11' memory='2097152' unit='KiB'>
      <distances>
        <sibling id='0' value='31'/>
        <sibling id='1' value='21'/>
        <sibling id='2' value='10'/>
        <sibling id='3' value='21'/>
      </distances>
    </cell>
    <cell id='3' cpus='3' memory='2097152' unit='KiB'>
      <distances>
        <sibling id='0' value='41'/>
        <sibling id='1' value='31'/>
        <sibling id='2' value='21'/>
        <sibling id='3' value='10'/>
      </distances>
    </cell>
  </numa>
</cpu>
By default on libxl, if no <distances> are given to describe the distance
data between different <cell>s, this patch falls back to a scheme using 10
for the local node and 20 for any remote node/socket, which is what a guest
OS assumes when no SLIT is specified. While the SLIT is optional, libxl
requires that distances are set nonetheless.
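A sketch of that fallback scheme (hypothetical helper, not code from this series): every node is at distance 10 from itself and 20 from every other node, matching the guest OS assumption when no SLIT is present.

#include <stddef.h>

/* Fill an nnodes x nnodes distance matrix with the default scheme:
 * 10 for the local node, 20 for any remote node. */
static void
fillDefaultDistances(unsigned int *dist, size_t nnodes)
{
    size_t i, j;

    for (i = 0; i < nnodes; i++)
        for (j = 0; j < nnodes; j++)
            dist[i * nnodes + j] = (i == j) ? 10 : 20;
}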
On Linux systems the SLIT detail can be listed with help of the 'numactl -H'
command. The above HVM guest would show the following output.
[root@f25 ~]# numactl -H
available: 4 nodes (0-3)
node 0 cpus: 0 4 5 6 7
node 0 size: 1988 MB
node 0 free: 1743 MB
node 1 cpus: 1 8 9 10 12 13 14 15
node 1 size: 1946 MB
node 1 free: 1885 MB
node 2 cpus: 2 11
node 2 size: 2011 MB
node 2 free: 1912 MB
node 3 cpus: 3
node 3 size: 2010 MB
node 3 free: 1980 MB
node distances:
node   0   1   2   3
  0:  10  21  31  41
  1:  21  10  21  31
  2:  31  21  10  21
  3:  41  31  21  10
Wim ten Have (5):
numa: rename function virDomainNumaDefCPUFormat
numa: describe siblings distances within cells
xenconfig: add domxml conversions for xen-xl
libxl: vnuma support
xlconfigtest: add tests for numa cell sibling distances
docs/formatdomain.html.in | 63 +++-
docs/schemas/basictypes.rng | 7 +
docs/schemas/cputypes.rng | 18 ++
src/conf/cpu_conf.c | 2 +-
src/conf/numa_conf.c | 342 ++++++++++++++++++++-
src/conf/numa_conf.h | 22 +-
src/libvirt_private.syms | 5 +
src/libxl/libxl_conf.c | 120 ++++++++
src/libxl/libxl_driver.c | 3 +-
src/xenconfig/xen_xl.c | 333 ++++++++++++++++++++
.../test-fullvirt-vnuma-autocomplete.cfg | 26 ++
.../test-fullvirt-vnuma-autocomplete.xml | 85 +++++
.../test-fullvirt-vnuma-nodistances.cfg | 26 ++
.../test-fullvirt-vnuma-nodistances.xml | 53 ++++
.../test-fullvirt-vnuma-partialdist.cfg | 26 ++
.../test-fullvirt-vnuma-partialdist.xml | 60 ++++
tests/xlconfigdata/test-fullvirt-vnuma.cfg | 26 ++
tests/xlconfigdata/test-fullvirt-vnuma.xml | 81 +++++
tests/xlconfigtest.c | 6 +
19 files changed, 1295 insertions(+), 9 deletions(-)
create mode 100644 tests/xlconfigdata/test-fullvirt-vnuma-autocomplete.cfg
create mode 100644 tests/xlconfigdata/test-fullvirt-vnuma-autocomplete.xml
create mode 100644 tests/xlconfigdata/test-fullvirt-vnuma-nodistances.cfg
create mode 100644 tests/xlconfigdata/test-fullvirt-vnuma-nodistances.xml
create mode 100644 tests/xlconfigdata/test-fullvirt-vnuma-partialdist.cfg
create mode 100644 tests/xlconfigdata/test-fullvirt-vnuma-partialdist.xml
create mode 100644 tests/xlconfigdata/test-fullvirt-vnuma.cfg
create mode 100644 tests/xlconfigdata/test-fullvirt-vnuma.xml
--
2.9.5
[libvirt] [PATCH] cpu_ppc64: Error out when model tag missing in virsh cpu-compare xml
by Nitesh Konkar
libvirtd crashes with an unhandled signal 11 (segmentation fault) on ppc64
when running virsh cpu-compare with the model tag missing from the XML.
This patch makes it report an error in that situation instead.
Signed-off-by: Nitesh Konkar <nitkon12(a)linux.vnet.ibm.com>
---
src/cpu/cpu_ppc64.c | 6 ++++++
1 file changed, 6 insertions(+)
diff --git a/src/cpu/cpu_ppc64.c b/src/cpu/cpu_ppc64.c
index b58e80a..c11ac9f 100644
--- a/src/cpu/cpu_ppc64.c
+++ b/src/cpu/cpu_ppc64.c
@@ -247,6 +247,12 @@ ppc64ModelFromCPU(const virCPUDef *cpu,
{
struct ppc64_model *model;
+ if (!cpu->model) {
+ virReportError(VIR_ERR_INVALID_ARG, "%s",
+ _("no guest CPU model specified"));
+ return NULL;
+ }
+
if (!(model = ppc64ModelFind(map, cpu->model))) {
virReportError(VIR_ERR_INTERNAL_ERROR,
_("Unknown CPU model %s"), cpu->model);
--
2.9.5
[libvirt] [PATCH 0/3] Tie up some loose ends w/ nodedev common object
by John Ferlan
Patches speak for themselves... Convert to RWObjectLockable and then
merge the various ForEach callback functions into one API and structure.
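As a rough illustration of the "single ForEach API plus callback structure" idea (names invented here; the actual virNodeDeviceObjListForEachCb signature is defined by the patch):

#include <stdbool.h>
#include <stddef.h>

/* Hypothetical sketch: one iterator that walks an object list and
 * invokes a caller-supplied callback with opaque data, replacing a
 * family of near-identical ForEach helpers. */
typedef bool (*objListIterator)(void *obj, void *opaque);

struct objListForEachData {
    objListIterator callback;   /* per-object action */
    void *opaque;               /* caller-provided state */
    bool error;                 /* set when the callback fails */
};

static void
objListForEach(void **objs, size_t nobjs, struct objListForEachData *data)
{
    size_t i;

    for (i = 0; i < nobjs; i++) {
        if (!data->callback(objs[i], data->opaque)) {
            data->error = true;
            break;
        }
    }
}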
John Ferlan (3):
nodedev: Convert virNodeDeviceObjList to use RWObjectLockable
nodedev: Convert virNodeDeviceObjHasCap to bool
nodedev: Introduce virNodeDeviceObjListForEachCb
src/conf/virnodedeviceobj.c | 237 ++++++++++++++++++--------------------------
1 file changed, 98 insertions(+), 139 deletions(-)
--
2.13.6
[libvirt] [PATCH 0/2] Tie up a couple of secretobjs loose ends
by John Ferlan
Patches should speak for themselves.
John Ferlan (2):
secrets: Convert to use ObjectRWLockable
secrets: Introduce virSecretObjListForEachCb
src/conf/virsecretobj.c | 203 +++++++++++++++++-------------------------------
1 file changed, 72 insertions(+), 131 deletions(-)
--
2.13.6
[libvirt] [PATCH] util: Resolve resource leak
by John Ferlan
Need to free @groups in the parent on success, similar to other
APIs (virFile*) that use virGetGroupList and virFork.
Reported by Coverity.
Signed-off-by: John Ferlan <jferlan(a)redhat.com>
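For context, a simplified sketch of the pattern being fixed (hypothetical, not the actual virExec flow): the parent allocates the group list before forking and must free it on the success path as well as on the error path.

#include <stdlib.h>
#include <unistd.h>
#include <sys/types.h>

/* Sketch: obtain the supplementary group list, fork, and make sure the
 * parent frees the list on both the success and the error path. */
static int
runChild(void)
{
    gid_t *groups = NULL;
    int ngroups;
    pid_t pid;

    if ((ngroups = getgroups(0, NULL)) < 0)
        return -1;
    if (!(groups = calloc(ngroups ? ngroups : 1, sizeof(*groups))))
        return -1;
    if (getgroups(ngroups, groups) < 0)
        goto error;

    if ((pid = fork()) < 0)
        goto error;

    if (pid == 0) {
        /* child: would set groups, drop privileges, exec, ... */
        _exit(0);
    }

    free(groups);   /* success path in the parent must free too */
    return 0;

 error:
    free(groups);
    return -1;
}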
---
src/util/vircommand.c | 1 +
1 file changed, 1 insertion(+)
diff --git a/src/util/vircommand.c b/src/util/vircommand.c
index 41a61da49f..6cd76a560e 100644
--- a/src/util/vircommand.c
+++ b/src/util/vircommand.c
@@ -606,6 +606,7 @@ virExec(virCommandPtr cmd)
cmd->pid = pid;
+ VIR_FREE(groups);
VIR_FREE(binarystr);
return 0;
--
2.13.6
[libvirt] [PATCH 0/2] PIIX3 implicit controller address handling fixes
by Ján Tomko
Ján Tomko (2):
qemu: reserve PCI addresses for implicit i440fx devices
qemu: clarify error message for index 0 PIIX3 USB controller
src/qemu/qemu_domain_address.c | 17 +++++++++++----
.../qemuxml2argv-440fx-ide-address-conflict.xml | 25 ++++++++++++++++++++++
tests/qemuxml2argvtest.c | 1 +
3 files changed, 39 insertions(+), 4 deletions(-)
create mode 100644 tests/qemuxml2argvdata/qemuxml2argv-440fx-ide-address-conflict.xml
--
2.13.0
[libvirt] [PATCH] maint: update to latest gnulib
by Daniel P. Berrange
This pulls in the fix for getopt tests on Fedora >= 28 / glibc > 2.26.0
Signed-off-by: Daniel P. Berrange <berrange(a)redhat.com>
---
.gnulib | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/.gnulib b/.gnulib
index 8d116e3f65..5e9abf8716 160000
--- a/.gnulib
+++ b/.gnulib
@@ -1 +1 @@
-Subproject commit 8d116e3f657cb120f79efbbb675fa3cc9d21f53e
+Subproject commit 5e9abf87163ad4aeaefef0b02961f8674b0a4879
--
2.13.5