[libvirt] [PATCH v2] vz: allow to start vz driver without host cache info
by Mikhail Feoktistov
Show a warning message instead of failing the operation.
This happens if the kernel or CPU doesn't support reporting CPU cache info.
In the case of Virtuozzo, the file "id" doesn't exist.
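For context, virCapabilitiesInitCaches() gathers this data from sysfs; on
hosts where the kernel exposes it, per-CPU paths like the following are
present (illustrative layout; the "id" attribute is the one missing under
Virtuozzo):
/sys/devices/system/cpu/cpu0/cache/index0/id
/sys/devices/system/cpu/cpu0/cache/index0/level
/sys/devices/system/cpu/cpu0/cache/index0/type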
---
src/vz/vz_driver.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/src/vz/vz_driver.c b/src/vz/vz_driver.c
index 6f4aee3..eb97e54 100644
--- a/src/vz/vz_driver.c
+++ b/src/vz/vz_driver.c
@@ -119,7 +119,7 @@ vzBuildCapabilities(void)
goto error;
if (virCapabilitiesInitCaches(caps) < 0)
- goto error;
+ VIR_WARN("Failed to get host CPU cache info");
verify(ARRAY_CARDINALITY(archs) == ARRAY_CARDINALITY(emulators));
--
1.8.3.1
[libvirt] [RFC] docs: Discourage usage of cache mode=passthrough
by Eduardo Habkost
Cache mode=passthrough can result in a broken cache topology if
the domain topology is not exactly the same as the host topology.
Warn about that in the documentation.
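For reference, a minimal sketch of where this mode is selected in a domain
definition (the <cache> element documented in the section being patched;
the surrounding topology values are made up):
  <cpu mode='host-passthrough'>
    <cache mode='passthrough'/>
    <topology sockets='1' cores='4' threads='1'/>
  </cpu>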
Bug report for reference:
https://bugzilla.redhat.com/show_bug.cgi?id=1184125
Signed-off-by: Eduardo Habkost <ehabkost@redhat.com>
---
docs/formatdomain.html.in | 4 +++-
1 file changed, 3 insertions(+), 1 deletion(-)
diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in
index 57ec2ff34..9c21892f3 100644
--- a/docs/formatdomain.html.in
+++ b/docs/formatdomain.html.in
@@ -1478,7 +1478,9 @@
<dt><code>passthrough</code></dt>
<dd>The real CPU cache data reported by the host CPU will be
- passed through to the virtual CPU.</dd>
+ passed through to the virtual CPU. Using this mode is not
+ recommended unless the domain CPU and NUMA topology is exactly
+ the same as the host CPU and NUMA topology.</dd>
<dt><code>disable</code></dt>
<dd>The virtual CPU will report no CPU cache of the specified
--
2.13.5
[libvirt] [PATCH v3 REBASE 0/2] qemu: report block job errors from qemu to the user
by Nikolay Shirokovskiy
So that you can see a nice report on migration:
"error: operation failed: migration of disk sda failed: No space left on device"
diff from v2:
============
1. split into 2 patches
2. change formal documentation where it is present accordingly
3. add variable initialization for safety
Nikolay Shirokovskiy (2):
qemu: prepare blockjob complete event error usage
qemu: report drive mirror errors on migration
src/qemu/qemu_blockjob.c | 14 +++++++++--
src/qemu/qemu_blockjob.h | 3 ++-
src/qemu/qemu_domain.c | 1 +
src/qemu/qemu_domain.h | 1 +
src/qemu/qemu_driver.c | 4 ++--
src/qemu/qemu_migration.c | 55 +++++++++++++++++++++++++++++++-------------
src/qemu/qemu_monitor.c | 5 ++--
src/qemu/qemu_monitor.h | 4 +++-
src/qemu/qemu_monitor_json.c | 4 +++-
src/qemu/qemu_process.c | 4 ++++
10 files changed, 70 insertions(+), 25 deletions(-)
--
1.8.3.1
[libvirt] [PATCH 0/4] misc virt-aa-helper fixes
by Christian Ehrhardt
Hi,
this series mostly came out of clearing old libvirt bugs in Ubuntu.
USB passthrough has so far often required workarounds, but it can be fixed
in virt-aa-helper.
I have some more changes planned, but those look like becoming longer-term
activities, so I didn't want to postpone these easier ones because of that
and am submitting them today.
Christian Ehrhardt (4):
virt-aa-helper: fix paths for usb hostdevs
virt-aa-helper: fix libusb access to udev usb data
virt-aa-helper: allow spaces in vm names
virt-aa-helper: put static rules in quotes
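As a sketch of the last two items above: AppArmor path rules need quoting
once a path may contain whitespace (hypothetical rule lines, not the exact
ones from the series):
# unquoted: rule parsing breaks if the guest name contains a space
/var/lib/libvirt/qemu/domain-guest1/** rw,
# quoted: survives spaces in guest names
"/var/lib/libvirt/qemu/domain-my guest/**" rw,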
examples/apparmor/libvirt-qemu | 3 +++
src/security/virt-aa-helper.c | 12 ++++++++----
2 files changed, 11 insertions(+), 4 deletions(-)
--
2.7.4
[libvirt] [PATCH] iohelper: use saferead if later write with O_DIRECT
by Nikolay Shirokovskiy
One of the use cases of iohelper is to read from a pipe and write
to a file with O_DIRECT. When reading from a pipe we can get a partial
read, and then fail to write that data because the output file
is open with O_DIRECT and the buffer size is not aligned.
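For readers unfamiliar with saferead(): it retries the read until the
buffer is full or EOF is hit, which is what keeps a later O_DIRECT write
aligned. A minimal sketch of the idea (illustrative only; libvirt's real
helper lives in its util code):
#include <errno.h>
#include <unistd.h>
static ssize_t
saferead_sketch(int fd, void *buf, size_t count)
{
    size_t nread = 0;
    while (nread < count) {
        ssize_t r = read(fd, (char *)buf + nread, count - nread);
        if (r < 0) {
            if (errno == EINTR)
                continue;           /* retry interrupted reads */
            return -1;
        }
        if (r == 0)                 /* EOF: return what we have */
            break;
        nread += r;
    }
    return nread;
}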
---
src/util/iohelper.c | 16 ++++++++++++++--
1 file changed, 14 insertions(+), 2 deletions(-)
diff --git a/src/util/iohelper.c b/src/util/iohelper.c
index 5416d45..bb8a8dd 100644
--- a/src/util/iohelper.c
+++ b/src/util/iohelper.c
@@ -109,9 +109,21 @@ runIO(const char *path, int fd, int oflags)
while (1) {
ssize_t got;
- if ((got = read(fdin, buf, buflen)) < 0) {
- if (errno == EINTR)
+ /* If we read with O_DIRECT from file we can't use saferead as
+ * it can lead to unaligned read after reading last bytes.
* If we write with O_DIRECT we should use saferead so that
+ * writes will be aligned.
+ * In other cases using saferead reduces number of syscalls.
+ */
+ if (fdin == fd && direct) {
+ if ((got = read(fdin, buf, buflen)) < 0 &&
+ errno == EINTR)
continue;
+ } else {
+ got = saferead(fdin, buf, buflen);
+ }
+
+ if (got < 0) {
virReportSystemError(errno, _("Unable to read %s"), fdinname);
goto cleanup;
}
--
1.8.3.1
[libvirt] Exposing mem-path in domain XML
by Michal Privoznik
Dear list,
there is the following bug [1] which I'm not quite sure how to grasp. There
is this application/infrastructure called Kove [2] that allows you to have
the memory for your application stored on a distant host on the network,
fetching the needed regions on page faults. Now imagine that somebody wants
to use it for backing domain memory. However, the way the tool works is
that it has a kernel module and then a userland binary that is fed the
path of the mmapped file. I don't know all the details, but the point is
that in order to let users use this we need to expose the mem-path for
the guest memory. I know we did not want to do this in the past, but now
it looks like we don't have a way around it, do we?
Michal
1: https://bugzilla.redhat.com/show_bug.cgi?id=1461214
2: http://kove.net
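For context, the QEMU-level knob this revolves around is a file-backed
memory backend; a sketch with a hypothetical mount point (not a path from
the report):
qemu-system-x86_64 ... \
    -object memory-backend-file,id=ram-node0,size=4G,mem-path=/mnt/kove/guest1 \
    -numa node,nodeid=0,memdev=ram-node0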
[libvirt] libvirt/QEMU/SEV interaction
by Brijesh Singh
Hi All,
(sorry for the long message)
CPUs from the AMD EPYC family support the Secure Encrypted Virtualization
(SEV) feature, which allows running encrypted VMs. To enable the feature,
I have been submitting patches to the Linux kernel [1], QEMU [2] and OVMF [3].
We have been making good progress in getting the patches accepted upstream
in the Linux and OVMF trees. SEV builds upon the SME (Secure Memory
Encryption) feature -- SME support just got pulled into the 4.14 merge
window. The base SEV patches are accepted in the OVMF tree -- we now have
an SEV-aware guest BIOS. I am getting ready to drop the "RFC" tag from the
remaining patches to get them reviewed and accepted.
The boot flow for launching an SEV guest is a bit different from a typical
guest launch. In order to launch an SEV guest from virt-manager or other
high-level VM management tools, we need to design and implement a new
interface between libvirt and QEMU, and probably add new APIs to libvirt
for use by VM management tools. I am new to libvirt and need some
expert advice while designing this interface. A pictorial representation
of an SEV guest launch flow is available in the SEV spec, Appendix A [4].
A typical flow looks like this:
1. Guest owner (GO) asks the cloud provider to launch an SEV guest.
2. VM tool asks libvirt to provide its Platform Diffie-Hellman (PDH) key.
3. libvirt opens the /dev/sev device to get its PDH and returns the blob
to the caller.
4. VM tool gives its PDH to the GO.
5. GO provides its DH key, session-info and guest policy.
6. VM tool somehow communicates the GO-provided information to libvirt.
7. libvirt adds a "sev-guest" object to its XML file with all the
information obtained in #5
(currently my xml file looks like this)
  <qemu:arg value='-object'/>
  <qemu:arg value='sev-guest,id=sev0,policy=<GO_policy>,dh-key-file=<filename>,session-file=<filename>'/>
  <qemu:arg value='-machine'/>
  <qemu:arg value='memory-encryption=sev0'/>
8. libvirt launches the guest with "-S".
9. While creating the SEV guest, qemu does the following:
i) creates the encryption context using the GO's DH key, session-info and
guest policy (LAUNCH_START)
ii) encrypts the guest BIOS (LAUNCH_UPDATE_DATA)
iii) calls LAUNCH_MEASUREMENT to get the encrypted BIOS measurement
10. By some interface we must propagate the measurement all the way to the
GO before libvirt starts the guest.
11. GO verifies the measurement, and if it matches then it may give a
secret blob -- which must be injected into the guest before libvirt
starts the VM. If verification fails, the GO will ask the cloud
provider to destroy the VM.
12. After the secret blob is injected into the guest, we call LAUNCH_FINISH
to destroy the encryption context.
13. libvirt issues the "continue" command to resume the guest boot.
Please note that the measurement value is protected with the transport
encryption key (TIK) and changes on each run. Similarly, the secret blob
provided by the GO does not need to be protected using libvirt/qemu APIs:
the secret is protected by the TIK. From the qemu and libvirt point of
view these are opaque blobs and must be passed as-is to the SEV firmware.
Questions:
a) Do we need to add a new set of APIs in libvirt to return the PDH from
libvirt and the VM tool? Or can we use some pre-existing APIs to pass the
opaque blobs? (this is mainly for steps 3 and 6)
b) Do we need to define a new XML tag for memory-encryption, or just
use the qemu:arg tag? (step 6)
c) What existing communication interface can be used between libvirt and
qemu to get the measurement? Can we add a new qemu monitor command
'get_sev_measurement' to fetch it? (step 10)
d) How do we pass the secret blob from libvirt to qemu? Should we consider
adding a new object (sev-guest-secret)? libvirt could add the object
through the qemu monitor.
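To make question (c) concrete, the proposed monitor command could look like
this on the QMP wire (hypothetical command and reply shape matching the
name suggested above, not an existing QEMU interface):
-> { "execute": "get_sev_measurement" }
<- { "return": { "measurement": "<base64-encoded blob>" } }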
[1] https://marc.info/?l=kvm&m=150092661105069&w=2
[2] https://marc.info/?l=qemu-devel&m=148901186615642&w=2
[3] https://lists.01.org/pipermail/edk2-devel/2017-July/012220.html
[4] http://support.amd.com/TechDocs/55766_SEV-KM%20API_Specification.pdf
Thanks
Brijesh
[libvirt] [PATCH v3] [libvirt-jenkins-ci] Build on supported Fedora releases (25-26)
by Andrea Bolognani
Fedora 23 has been out of support for quite a while now, and Fedora 24
recently joined it with the release of Fedora 26, which, on the other
hand, is fully supported and a prime candidate for building libvirt.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
---
Yash first attempted this last December [1], without much luck.
Fedora 26 has been released in the meantime, which means we can
get rid of two builders instead of one! Someone will have to
prepare the 'libvirt-fedora-26' builder, though, because it
doesn't exist at the moment :)
[1] https://www.redhat.com/archives/libvir-list/2016-December/msg00676.html
projects/libosinfo.yaml | 3 +--
projects/libvirt-cim.yaml | 3 +--
projects/libvirt-glib.yaml | 3 +--
projects/libvirt-go-xml.yaml | 3 +--
projects/libvirt-go.yaml | 3 +--
projects/libvirt-perl.yaml | 3 +--
projects/libvirt-python.yaml | 3 +--
projects/libvirt-sandbox.yaml | 3 +--
projects/libvirt-tck.yaml | 3 +--
projects/libvirt.yaml | 9 +++------
projects/osinfo-db-tools.yaml | 3 +--
projects/osinfo-db.yaml | 3 +--
projects/virt-manager.yaml | 3 +--
projects/virt-viewer.yaml | 3 +--
14 files changed, 16 insertions(+), 32 deletions(-)
diff --git a/projects/libosinfo.yaml b/projects/libosinfo.yaml
index f9a8ceb..77c0414 100644
--- a/projects/libosinfo.yaml
+++ b/projects/libosinfo.yaml
@@ -3,9 +3,8 @@
name: libosinfo
machines:
- libvirt-centos-7
- - libvirt-fedora-23
- - libvirt-fedora-24
- libvirt-fedora-25
+ - libvirt-fedora-26
- libvirt-fedora-rawhide
title: libosinfo
make_env: |
diff --git a/projects/libvirt-cim.yaml b/projects/libvirt-cim.yaml
index 82a8127..b3476e7 100644
--- a/projects/libvirt-cim.yaml
+++ b/projects/libvirt-cim.yaml
@@ -4,9 +4,8 @@
machines:
- libvirt-centos-6
- libvirt-centos-7
- - libvirt-fedora-23
- - libvirt-fedora-24
- libvirt-fedora-25
+ - libvirt-fedora-26
- libvirt-fedora-rawhide
title: libvirt CIM
jobs:
diff --git a/projects/libvirt-glib.yaml b/projects/libvirt-glib.yaml
index 7d897ab..eba4646 100644
--- a/projects/libvirt-glib.yaml
+++ b/projects/libvirt-glib.yaml
@@ -3,9 +3,8 @@
name: libvirt-glib
machines:
- libvirt-centos-7
- - libvirt-fedora-23
- - libvirt-fedora-24
- libvirt-fedora-25
+ - libvirt-fedora-26
- libvirt-fedora-rawhide
title: Libvirt GLib
jobs:
diff --git a/projects/libvirt-go-xml.yaml b/projects/libvirt-go-xml.yaml
index 9f45694..ebe06fb 100644
--- a/projects/libvirt-go-xml.yaml
+++ b/projects/libvirt-go-xml.yaml
@@ -3,9 +3,8 @@
name: libvirt-go-xml
machines:
- libvirt-centos-7
- - libvirt-fedora-23
- - libvirt-fedora-24
- libvirt-fedora-25
+ - libvirt-fedora-26
- libvirt-fedora-rawhide
title: Libvirt Go XML
jobs:
diff --git a/projects/libvirt-go.yaml b/projects/libvirt-go.yaml
index b0ebc73..9ffdd0a 100644
--- a/projects/libvirt-go.yaml
+++ b/projects/libvirt-go.yaml
@@ -3,9 +3,8 @@
name: libvirt-go
machines:
- libvirt-centos-7
- - libvirt-fedora-23
- - libvirt-fedora-24
- libvirt-fedora-25
+ - libvirt-fedora-26
- libvirt-fedora-rawhide
title: Libvirt Go
jobs:
diff --git a/projects/libvirt-perl.yaml b/projects/libvirt-perl.yaml
index a9f4740..7646e27 100644
--- a/projects/libvirt-perl.yaml
+++ b/projects/libvirt-perl.yaml
@@ -3,9 +3,8 @@
name: libvirt-perl
machines:
- libvirt-centos-7
- - libvirt-fedora-23
- - libvirt-fedora-24
- libvirt-fedora-25
+ - libvirt-fedora-26
- libvirt-fedora-rawhide
title: Libvirt Perl
jobs:
diff --git a/projects/libvirt-python.yaml b/projects/libvirt-python.yaml
index c1192d0..cae8ca7 100644
--- a/projects/libvirt-python.yaml
+++ b/projects/libvirt-python.yaml
@@ -4,9 +4,8 @@
machines:
- libvirt-centos-6
- libvirt-centos-7
- - libvirt-fedora-23
- - libvirt-fedora-24
- libvirt-fedora-25
+ - libvirt-fedora-26
- libvirt-fedora-rawhide
title: Libvirt Python
jobs:
diff --git a/projects/libvirt-sandbox.yaml b/projects/libvirt-sandbox.yaml
index ebbc5be..2920084 100644
--- a/projects/libvirt-sandbox.yaml
+++ b/projects/libvirt-sandbox.yaml
@@ -2,9 +2,8 @@
- project:
name: libvirt-sandbox
machines:
- - libvirt-fedora-23
- - libvirt-fedora-24
- libvirt-fedora-25
+ - libvirt-fedora-26
- libvirt-fedora-rawhide
title: Libvirt Sandbox
jobs:
diff --git a/projects/libvirt-tck.yaml b/projects/libvirt-tck.yaml
index a7c0233..ca72f6c 100644
--- a/projects/libvirt-tck.yaml
+++ b/projects/libvirt-tck.yaml
@@ -2,9 +2,8 @@
- project:
name: libvirt-tck
machines:
- - libvirt-fedora-23
- - libvirt-fedora-24
- libvirt-fedora-25
+ - libvirt-fedora-26
- libvirt-fedora-rawhide
title: Libvirt TCK
jobs:
diff --git a/projects/libvirt.yaml b/projects/libvirt.yaml
index 5125c16..0efe770 100644
--- a/projects/libvirt.yaml
+++ b/projects/libvirt.yaml
@@ -4,9 +4,8 @@
machines:
- libvirt-centos-6
- libvirt-centos-7
- - libvirt-fedora-23
- - libvirt-fedora-24
- libvirt-fedora-25
+ - libvirt-fedora-26
- libvirt-fedora-rawhide
title: Libvirt
archive_format: xz
@@ -17,9 +16,8 @@
- libvirt-centos-6
- libvirt-centos-7
- libvirt-debian-8
- - libvirt-fedora-23
- - libvirt-fedora-24
- libvirt-fedora-25
+ - libvirt-fedora-26
- libvirt-fedora-rawhide
- libvirt-freebsd
- autotools-syntax-check-job:
@@ -28,9 +26,8 @@
- libvirt-centos-6
- libvirt-centos-7
- libvirt-debian-8
- - libvirt-fedora-23
- - libvirt-fedora-24
- libvirt-fedora-25
+ - libvirt-fedora-26
- libvirt-fedora-rawhide
check_env: |
export VIR_TEST_EXPENSIVE=1
diff --git a/projects/osinfo-db-tools.yaml b/projects/osinfo-db-tools.yaml
index 5f275ab..93931af 100644
--- a/projects/osinfo-db-tools.yaml
+++ b/projects/osinfo-db-tools.yaml
@@ -3,9 +3,8 @@
name: osinfo-db-tools
machines:
- libvirt-centos-7
- - libvirt-fedora-23
- - libvirt-fedora-24
- libvirt-fedora-25
+ - libvirt-fedora-26
- libvirt-fedora-rawhide
title: osinfo database tools
jobs:
diff --git a/projects/osinfo-db.yaml b/projects/osinfo-db.yaml
index 9539724..83eb92f 100644
--- a/projects/osinfo-db.yaml
+++ b/projects/osinfo-db.yaml
@@ -3,9 +3,8 @@
name: osinfo-db
machines:
- libvirt-centos-7
- - libvirt-fedora-23
- - libvirt-fedora-24
- libvirt-fedora-25
+ - libvirt-fedora-26
- libvirt-fedora-rawhide
title: osinfo database
jobs:
diff --git a/projects/virt-manager.yaml b/projects/virt-manager.yaml
index 4485d5f..a50e0ab 100644
--- a/projects/virt-manager.yaml
+++ b/projects/virt-manager.yaml
@@ -3,9 +3,8 @@
name: virt-manager
machines:
- libvirt-centos-7
- - libvirt-fedora-23
- - libvirt-fedora-24
- libvirt-fedora-25
+ - libvirt-fedora-26
- libvirt-fedora-rawhide
title: Virtual Machine Manager
jobs:
diff --git a/projects/virt-viewer.yaml b/projects/virt-viewer.yaml
index 06c372b..e3ef04a 100644
--- a/projects/virt-viewer.yaml
+++ b/projects/virt-viewer.yaml
@@ -3,9 +3,8 @@
name: virt-viewer
machines:
- libvirt-centos-7
- - libvirt-fedora-23
- - libvirt-fedora-24
- libvirt-fedora-25
+ - libvirt-fedora-26
- libvirt-fedora-rawhide
title: Virt Viewer
jobs:
--
2.13.5
[libvirt] [PATCH 0/3] fix crash on libvirtd termination
by Nikolay Shirokovskiy
Libvirtd can crash on termination. One can use patch [2] to trigger it: call
the domstats function and send SIGTERM to libvirtd; you'll probably see
stacktrace [1]. The problem is that the threads handling client requests are
joined after driver cleanup. This patch series addresses that issue.
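A minimal sketch of the intended ordering (function names are illustrative,
not libvirt's actual symbols):
/* join request-handling threads before tearing down driver state,
 * so no worker can touch freed driver data */
static void
daemonShutdown(void)
{
    stopAcceptingClients();   /* no new RPC work                    */
    joinWorkerThreads();      /* wait for in-flight requests to end */
    cleanupDrivers();         /* only now free driver state         */
}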
[1] Crash stacktrace
Program received signal SIGSEGV, Segmentation fault.
Thread 5 (Thread 0x7fffe6a4d700 (LWP 921916)):
#0 0x00007fffd9cb3f14 in qemuDomainObjBeginJobInternal (driver=driver@entry=0x7fffcc103e40,
obj=obj@entry=0x7fffcc1a6ca0, job=job@entry=QEMU_JOB_QUERY, asyncJob=asyncJob@entry=QEMU_ASYNC_JOB_NONE)
at qemu/qemu_domain.c:4114
#1 0x00007fffd9cb82ab in qemuDomainObjBeginJob (driver=driver@entry=0x7fffcc103e40, obj=obj@entry=0x7fffcc1a6ca0,
job=job@entry=QEMU_JOB_QUERY) at qemu/qemu_domain.c:4240
#2 0x00007fffd9d23094 in qemuConnectGetAllDomainStats (conn=0x7fffcc1bc140, doms=<optimized out>,
ndoms=<optimized out>, stats=127, retStats=0x7fffe6a4cb10, flags=<optimized out>) at qemu/qemu_driver.c:20116
#3 0x00007ffff744a166 in virDomainListGetStats (doms=0x7fffa8000a10, stats=0,
retStats=retStats@entry=0x7fffe6a4cb10, flags=0) at libvirt-domain.c:11592
#4 0x000055555557af15 in remoteDispatchConnectGetAllDomainStats (server=<optimized out>, msg=<optimized out>,
ret=0x7fffa80008e0, args=0x7fffa80008c0, rerr=0x7fffe6a4cc50, client=<optimized out>) at remote.c:6532
#5 remoteDispatchConnectGetAllDomainStatsHelper (server=<optimized out>, client=<optimized out>,
msg=<optimized out>, rerr=0x7fffe6a4cc50, args=0x7fffa80008c0, ret=0x7fffa80008e0) at remote_dispatch.h:615
#6 0x00007ffff74abba2 in virNetServerProgramDispatchCall (msg=0x55555583bf50, client=0x55555583c580,
server=0x555555810f40, prog=0x55555583a140) at rpc/virnetserverprogram.c:437
#7 virNetServerProgramDispatch (prog=0x55555583a140, server=server@entry=0x555555810f40, client=0x55555583c580,
msg=0x55555583bf50) at rpc/virnetserverprogram.c:307
#8 0x00005555555ae10d in virNetServerProcessMsg (msg=<optimized out>, prog=<optimized out>, client=<optimized out>,
srv=0x555555810f40) at rpc/virnetserver.c:148
#9 virNetServerHandleJob (jobOpaque=<optimized out>, opaque=0x555555810f40) at rpc/virnetserver.c:169
#10 0x00007ffff7390fd1 in virThreadPoolWorker (opaque=opaque@entry=0x5555558057a0) at util/virthreadpool.c:167
#11 0x00007ffff7390358 in virThreadHelper (data=<optimized out>) at util/virthread.c:206
#12 0x00007ffff457be25 in start_thread () from /lib64/libpthread.so.0
#13 0x00007ffff42a934d in clone () from /lib64/libc.so.6
Thread 1 (Thread 0x7ffff7fae880 (LWP 921909)):
#0 0x00007ffff457f945 in pthread_cond_wait@@GLIBC_2.3.2 () from /lib64/libpthread.so.0
#1 0x00007ffff73905c6 in virCondWait (c=<optimized out>, m=<optimized out>) at util/virthread.c:154
#2 0x00007ffff73911e0 in virThreadPoolFree (pool=0x555555811030) at util/virthreadpool.c:290
#3 0x00005555555adb44 in virNetServerDispose (obj=0x555555810f40) at rpc/virnetserver.c:767
#4 0x00007ffff736f62b in virObjectUnref (anyobj=<optimized out>) at util/virobject.c:356
#5 0x00007ffff7343e19 in virHashFree (table=0x55555581ba40) at util/virhash.c:318
#6 0x00007ffff74a46b5 in virNetDaemonDispose (obj=0x555555812c50) at rpc/virnetdaemon.c:105
#7 0x00007ffff736f62b in virObjectUnref (anyobj=anyobj@entry=0x555555812c50) at util/virobject.c:356
#8 0x0000555555570479 in main (argc=<optimized out>, argv=<optimized out>) at libvirtd.c:1539
[2] patch to trigger crash
# diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
# index cf5e4ad..39a57aa 100644
# --- a/src/qemu/qemu_driver.c
# +++ b/src/qemu/qemu_driver.c
# @@ -20144,6 +20144,8 @@ qemuConnectGetAllDomainStats(virConnectPtr conn,
# domflags = 0;
# vm = vms[i];
#
# + sleep(5);
# +
# virObjectLock(vm);
#
# if (HAVE_JOB(privflags) &&
Nikolay Shirokovskiy (3):
daemon: finish threads on close
qemu: monitor: check monitor not closed on send
qemu: implement state driver shutdown function
daemon/libvirtd.c | 2 ++
src/driver-state.h | 4 ++++
src/libvirt.c | 18 ++++++++++++++++++
src/libvirt_internal.h | 1 +
src/libvirt_private.syms | 1 +
src/qemu/qemu_driver.c | 39 +++++++++++++++++++++++++++++++++++++++
src/qemu/qemu_monitor.c | 27 +++++++++++++--------------
src/rpc/virnetserver.c | 5 +++--
8 files changed, 81 insertions(+), 16 deletions(-)
--
1.8.3.1
[libvirt] [PATCH v4 0/5] numa: describe sibling nodes distances
by Wim Ten Have
From: Wim ten Have <wim.ten.have@oracle.com>
This patch set extends guest domain administration, adding support to
advertise node sibling distances when configuring HVM NUMA guests.
NUMA (non-uniform memory access) is a method of configuring a cluster of
nodes within a single multiprocessing system such that each processor has
its own local memory, improving performance and the system's ability to
be expanded.
A NUMA system could be illustrated as shown below. Within this 4-node
system, every socket is equipped with its own distinct memory. The whole
typically resembles an SMP (symmetric multiprocessing) system: a
"tightly-coupled," "share everything" system in which multiple processors
work under a single operating system and can access each others'
memory over multiple "Bus Interconnect" paths.
+-----+-----+-----+ +-----+-----+-----+
| M | CPU | CPU | | CPU | CPU | M |
| E | | | | | | E |
| M +- Socket0 -+ +- Socket3 -+ M |
| O | | | | | | O |
| R | CPU | CPU <---------> CPU | CPU | R |
| Y | | | | | | Y |
+-----+--^--+-----+ +-----+--^--+-----+
| |
| Bus Interconnect |
| |
+-----+--v--+-----+ +-----+--v--+-----+
| M | | | | | | M |
| E | CPU | CPU <---------> CPU | CPU | E |
| M | | | | | | M |
| O +- Socket1 -+ +- Socket2 -+ O |
| R | | | | | | R |
| Y | CPU | CPU | | CPU | CPU | Y |
+-----+-----+-----+ +-----+-----+-----+
In contrast there is the limitation of a flat SMP system (not illustrated)
under which the bus (data and address path) can easily become a performance
bottleneck under high activity as sockets are added.
NUMA adds an intermediate level of memory shared amongst a few cores per
socket as illustrated above, so that data accesses do not have to travel
over a single bus.
Unfortunately the way NUMA does this adds its own limitations. This,
as visualized in the illustration above, happens when data is stored in
memory associated with Socket2 and is accessed by a CPU (core) in Socket0.
The processors use the "Bus Interconnect" to create gateways between the
sockets (nodes) enabling inter-socket access to memory. These "Bus
Interconnect" hops add data access delays when a CPU (core) accesses
memory associated with a remote socket (node).
For terminology we refer to sockets as "nodes" where access to each
others' distinct resources such as memory make them "siblings" with a
designated "distance" between them. A specific design is described under
the ACPI (Advanced Configuration and Power Interface Specification)
within the chapter explaining the system's SLIT (System Locality Distance
Information Table).
These patches extend core libvirt's XML description of a virtual machine's
hardware to include NUMA distance information for sibling nodes, which
is then passed to Xen guests via libxl. QEMU recently landed support for
constructing the SLIT (commit 0f203430dd, "numa: Allow setting NUMA
distance for different NUMA nodes"), hence these core libvirt extensions
can also help other drivers support this feature.
The XML changes allow describing the <distances> between <cell>
nodes/sockets via <sibling> node identifiers, propagating these through
the NUMA domain functionality and finally adding support to libxl.
[below is an example illustrating a 4 node/socket <cell> setup]
<cpu>
  <numa>
    <cell id='0' cpus='0,4-7' memory='2097152' unit='KiB'>
      <distances>
        <sibling id='0' value='10'/>
        <sibling id='1' value='21'/>
        <sibling id='2' value='31'/>
        <sibling id='3' value='41'/>
      </distances>
    </cell>
    <cell id='1' cpus='1,8-10,12-15' memory='2097152' unit='KiB'>
      <distances>
        <sibling id='0' value='21'/>
        <sibling id='1' value='10'/>
        <sibling id='2' value='21'/>
        <sibling id='3' value='31'/>
      </distances>
    </cell>
    <cell id='2' cpus='2,11' memory='2097152' unit='KiB'>
      <distances>
        <sibling id='0' value='31'/>
        <sibling id='1' value='21'/>
        <sibling id='2' value='10'/>
        <sibling id='3' value='21'/>
      </distances>
    </cell>
    <cell id='3' cpus='3' memory='2097152' unit='KiB'>
      <distances>
        <sibling id='0' value='41'/>
        <sibling id='1' value='31'/>
        <sibling id='2' value='21'/>
        <sibling id='3' value='10'/>
      </distances>
    </cell>
  </numa>
</cpu>
By default on libxl, if no <distances> are given to describe the distances
between different <cell>s, this patch defaults to a scheme using 10
for local and 20 for any remote node/socket, which is what a guest OS
assumes when no SLIT is specified. While the SLIT is optional, libxl
requires that distances are set nonetheless.
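As a sketch of what the xen-xl conversion could emit for the first two
cells above, using xl.cfg vNUMA syntax (sizes in MiB; treat the exact
spelling as an assumption here, not output from these patches):
vnuma = [
    ["pnode=0", "size=2048", "vcpus=0,4-7", "vdistances=10,21,31,41"],
    ["pnode=1", "size=2048", "vcpus=1,8-10,12-15", "vdistances=21,10,21,31"],
]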
On Linux systems the SLIT detail can be listed with the help of the
'numactl -H' command. The above HVM guest would show the following output.
[root@f25 ~]# numactl -H
available: 4 nodes (0-3)
node 0 cpus: 0 4 5 6 7
node 0 size: 1988 MB
node 0 free: 1743 MB
node 1 cpus: 1 8 9 10 12 13 14 15
node 1 size: 1946 MB
node 1 free: 1885 MB
node 2 cpus: 2 11
node 2 size: 2011 MB
node 2 free: 1912 MB
node 3 cpus: 3
node 3 size: 2010 MB
node 3 free: 1980 MB
node distances:
node   0   1   2   3
  0:  10  21  31  41
  1:  21  10  21  31
  2:  31  21  10  21
  3:  41  31  21  10
Wim ten Have (5):
numa: rename function virDomainNumaDefCPUFormat
numa: describe siblings distances within cells
xenconfig: add domxml conversions for xen-xl
libxl: vnuma support
xlconfigtest: add tests for numa cell sibling distances
docs/formatdomain.html.in | 63 +++-
docs/schemas/basictypes.rng | 7 +
docs/schemas/cputypes.rng | 18 ++
src/conf/cpu_conf.c | 2 +-
src/conf/numa_conf.c | 342 ++++++++++++++++++++-
src/conf/numa_conf.h | 22 +-
src/libvirt_private.syms | 5 +
src/libxl/libxl_conf.c | 120 ++++++++
src/libxl/libxl_driver.c | 3 +-
src/xenconfig/xen_xl.c | 333 ++++++++++++++++++++
.../test-fullvirt-vnuma-autocomplete.cfg | 26 ++
.../test-fullvirt-vnuma-autocomplete.xml | 85 +++++
.../test-fullvirt-vnuma-nodistances.cfg | 26 ++
.../test-fullvirt-vnuma-nodistances.xml | 53 ++++
.../test-fullvirt-vnuma-partialdist.cfg | 26 ++
.../test-fullvirt-vnuma-partialdist.xml | 60 ++++
tests/xlconfigdata/test-fullvirt-vnuma.cfg | 26 ++
tests/xlconfigdata/test-fullvirt-vnuma.xml | 81 +++++
tests/xlconfigtest.c | 6 +
19 files changed, 1295 insertions(+), 9 deletions(-)
create mode 100644 tests/xlconfigdata/test-fullvirt-vnuma-autocomplete.cfg
create mode 100644 tests/xlconfigdata/test-fullvirt-vnuma-autocomplete.xml
create mode 100644 tests/xlconfigdata/test-fullvirt-vnuma-nodistances.cfg
create mode 100644 tests/xlconfigdata/test-fullvirt-vnuma-nodistances.xml
create mode 100644 tests/xlconfigdata/test-fullvirt-vnuma-partialdist.cfg
create mode 100644 tests/xlconfigdata/test-fullvirt-vnuma-partialdist.xml
create mode 100644 tests/xlconfigdata/test-fullvirt-vnuma.cfg
create mode 100644 tests/xlconfigdata/test-fullvirt-vnuma.xml
--
2.9.5