Re: Virtqemud wants to unlink /dev/urandom
by Martin Kletzander
[adding back libvir-list to the Cc]
On Fri, Mar 11, 2022 at 03:55:03PM +0100, Nikola Knazekova wrote:
>Hey Martin,
>
>thanks for your response.
>
>I don't know if it is happening in the mount namespace. Can you look at the
>logs in the attachment?
>
>It was happening on a clean install of F35 and F36, and probably on
>older versions too.
>But it is only an issue in the new SELinux policy for libvirt. The old
>SELinux policy allows virtd to unlink /dev/urandom character files.
>I just wanted to be sure it is OK to allow it for virtqemud.
>
That might indeed be the case: the context on /dev/urandom is set
correctly, but the unlink then fails for virtqemud because the SELinux
policy only accounts for libvirtd, even though we switched to modular
daemons, making virtqemud the one doing the work.
@Michal, can you confirm my guess here? You did a lot of the mount
namespace work, which I presume is what contributes to the issue.
In the meantime, would you mind trying this with the mount namespace
feature turned off in /etc/libvirt/qemu.conf like this:
namespaces = []
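A minimal sketch of that edit (demonstrated on a temporary copy; on a real
host the file is /etc/libvirt/qemu.conf and virtqemud needs a restart
afterwards):

```python
# Flip the namespaces setting in a qemu.conf-style file. Shown on a temp
# copy so it is safe to run; substitute /etc/libvirt/qemu.conf on a host.
import pathlib
import re
import tempfile

with tempfile.TemporaryDirectory() as d:
    conf = pathlib.Path(d) / "qemu.conf"
    conf.write_text('#namespaces = [ "mount" ]\n')          # stock default
    text = re.sub(r'(?m)^#?namespaces\s*=.*$',
                  'namespaces = []', conf.read_text())      # disable feature
    conf.write_text(text)
    print(conf.read_text(), end="")  # -> namespaces = []
```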
Thanks.
>Regards,
>Nikola
>
>On Thu, Feb 24, 2022 at 3:00 PM Martin Kletzander <mkletzan(a)redhat.com>
>wrote:
>
>> On Thu, Feb 24, 2022 at 01:41:50PM +0100, Nikola Knazekova wrote:
>> >Hi,
>> >
>> >when I am creating a virtual machine on a system with the new SELinux
>> >policy for Libvirt, I am getting this error message:
>> >
>> >Unable to complete install: 'Unable to create device /dev/urandom: File
>> >exists'
>> >Traceback (most recent call last):
>> > File "/usr/share/virt-manager/virtManager/asyncjob.py", line 65, in
>> >cb_wrapper
>> > callback(asyncjob, *args, **kwargs)
>> > File "/usr/share/virt-manager/virtManager/createvm.py", line 2001, in
>> >_do_async_install
>> > installer.start_install(guest, meter=meter)
>> > File "/usr/share/virt-manager/virtinst/install/installer.py", line 701,
>> >in start_install
>> > domain = self._create_guest(
>> > File "/usr/share/virt-manager/virtinst/install/installer.py", line 649,
>> >in _create_guest
>> > domain = self.conn.createXML(install_xml or final_xml, 0)
>> > File "/usr/lib64/python3.10/site-packages/libvirt.py", line 4393, in
>> >createXML
>> > raise libvirtError('virDomainCreateXML() failed')
>> >libvirt.libvirtError: Unable to create device /dev/urandom: File exists
>> >
>> >And an SELinux denial, where SELinux prevents virtqemud from unlinking the
>> >character device /dev/urandom:
>> >
>> >time->Wed Feb 23 19:30:33 2022
>> >type=PROCTITLE msg=audit(1645662633.819:930):
>>
>> >proctitle=2F7573722F7362696E2F7669727471656D7564002D2D74696D656F757400313230
>> >type=PATH msg=audit(1645662633.819:930): item=1 name="/dev/urandom"
>> inode=6
>> >dev=00:44 mode=020666 ouid=0 ogid=0 rdev=01:09
>> >obj=system_u:object_r:urandom_device_t:s0 nametype=DELETE cap_fp=0
>> cap_fi=0
>> >cap_fe=0 cap_fver=0 cap_frootid=0
>> >type=PATH msg=audit(1645662633.819:930): item=0 name="/dev/" inode=1
>> >dev=00:44 mode=040755 ouid=0 ogid=0 rdev=00:00
>> >obj=system_u:object_r:tmpfs_t:s0 nametype=PARENT cap_fp=0 cap_fi=0
>> cap_fe=0
>> >cap_fver=0 cap_frootid=0
>> >type=CWD msg=audit(1645662633.819:930): cwd="/"
>> >type=SYSCALL msg=audit(1645662633.819:930): arch=c000003e syscall=87
>> >success=no exit=-13 a0=7f9418064f50 a1=7f943909c930 a2=7f941d0ef6d4 a3=0
>> >items=2 ppid=6722 pid=7196 auid=4294967295 uid=0 gid=0 euid=0 suid=0
>> >fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="rpc-worker"
>> >exe="/usr/sbin/virtqemud" subj=system_u:system_r:virtqemud_t:s0 key=(null)
>> >type=AVC msg=audit(1645662633.819:930): avc: denied { unlink } for
>> > pid=7196 comm="rpc-worker" name="urandom" dev="tmpfs" ino=6
>> >scontext=system_u:system_r:virtqemud_t:s0
>> >tcontext=system_u:object_r:urandom_device_t:s0 tclass=chr_file
>> permissive=0
>> >
>> >Is this expected behavior?
>> >
>>
>> The error is not, but creating and removing /dev/urandom is fine, as long
>> as it happens in the mount namespace of the domain. We create that
>> namespace, and as such we also need to create some basic /dev structure
>> in there.
>>
>> Unfortunately this error does not show whether it is happening in the
>> mount namespace, although it should definitely _not_ happen outside of it.
>>
>> Does this happen on a clean install? What is the version of libvirt and
>> the selinux policy? What's the distro+version of the system? Would you
>> mind capturing the debug logs and attaching them?
>>
>> How to capture debug logs: https://libvirt.org/kbase/debuglogs.html
>>
>> >Thanks,
>> >Nikola
>>
>2022-03-04 03:08:28.053+0000: starting up libvirt version: 8.0.0, package: 2.fc36 (Fedora Project, 2022-01-20-17:44:09, ), qemu version: 6.2.0qemu-6.2.0-5.fc36, kernel: 5.17.0-0.rc5.102.fc36.x86_64, hostname: fedora
>LC_ALL=C \
>PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin \
>HOME=/var/lib/libvirt/qemu/domain-4-fedora35-3 \
>XDG_DATA_HOME=/var/lib/libvirt/qemu/domain-4-fedora35-3/.local/share \
>XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain-4-fedora35-3/.cache \
>XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain-4-fedora35-3/.config \
>/usr/bin/qemu-system-x86_64 \
>-name guest=fedora35-3,debug-threads=on \
>-S \
>-object '{"qom-type":"secret","id":"masterKey0","format":"raw","file":"/var/lib/libvirt/qemu/domain-4-fedora35-3/master-key.aes"}' \
>-machine pc-q35-6.2,usb=off,vmport=off,dump-guest-core=off,memory-backend=pc.ram \
>-accel kvm \
>-cpu host,migratable=on \
>-m 2048 \
>-object '{"qom-type":"memory-backend-ram","id":"pc.ram","size":2147483648}' \
>-overcommit mem-lock=off \
>-smp 2,sockets=2,cores=1,threads=1 \
>-uuid 818068b5-c72b-475c-a960-231f29f60464 \
>-no-user-config \
>-nodefaults \
>-chardev socket,id=charmonitor,fd=28,server=on,wait=off \
>-mon chardev=charmonitor,id=monitor,mode=control \
>-rtc base=utc,driftfix=slew \
>-global kvm-pit.lost_tick_policy=delay \
>-no-hpet \
>-no-shutdown \
>-global ICH9-LPC.disable_s3=1 \
>-global ICH9-LPC.disable_s4=1 \
>-boot strict=on \
>-device pcie-root-port,port=16,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x2 \
>-device pcie-root-port,port=17,chassis=2,id=pci.2,bus=pcie.0,addr=0x2.0x1 \
>-device pcie-root-port,port=18,chassis=3,id=pci.3,bus=pcie.0,addr=0x2.0x2 \
>-device pcie-root-port,port=19,chassis=4,id=pci.4,bus=pcie.0,addr=0x2.0x3 \
>-device pcie-root-port,port=20,chassis=5,id=pci.5,bus=pcie.0,addr=0x2.0x4 \
>-device pcie-root-port,port=21,chassis=6,id=pci.6,bus=pcie.0,addr=0x2.0x5 \
>-device pcie-root-port,port=22,chassis=7,id=pci.7,bus=pcie.0,addr=0x2.0x6 \
>-device pcie-root-port,port=23,chassis=8,id=pci.8,bus=pcie.0,addr=0x2.0x7 \
>-device pcie-root-port,port=24,chassis=9,id=pci.9,bus=pcie.0,multifunction=on,addr=0x3 \
>-device pcie-root-port,port=25,chassis=10,id=pci.10,bus=pcie.0,addr=0x3.0x1 \
>-device pcie-root-port,port=26,chassis=11,id=pci.11,bus=pcie.0,addr=0x3.0x2 \
>-device pcie-root-port,port=27,chassis=12,id=pci.12,bus=pcie.0,addr=0x3.0x3 \
>-device pcie-root-port,port=28,chassis=13,id=pci.13,bus=pcie.0,addr=0x3.0x4 \
>-device pcie-root-port,port=29,chassis=14,id=pci.14,bus=pcie.0,addr=0x3.0x5 \
>-device qemu-xhci,p2=15,p3=15,id=usb,bus=pci.2,addr=0x0 \
>-device virtio-serial-pci,id=virtio-serial0,bus=pci.3,addr=0x0 \
>-blockdev '{"driver":"file","filename":"/var/lib/libvirt/images/fedora35-3.qcow2","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"}' \
>-blockdev '{"node-name":"libvirt-2-format","read-only":false,"discard":"unmap","driver":"qcow2","file":"libvirt-2-storage","backing":null}' \
>-device virtio-blk-pci,bus=pci.4,addr=0x0,drive=libvirt-2-format,id=virtio-disk0,bootindex=2 \
>-blockdev '{"driver":"file","filename":"/home/n/Downloads/Fedora-Workstation-Live-x86_64-35-1.2.iso","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}' \
>-blockdev '{"node-name":"libvirt-1-format","read-only":true,"driver":"raw","file":"libvirt-1-storage"}' \
>-device ide-cd,bus=ide.0,drive=libvirt-1-format,id=sata0-0-0,bootindex=1 \
>-netdev tap,fd=29,id=hostnet0,vhost=on,vhostfd=31 \
>-device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:81:d4:90,bus=pci.1,addr=0x0 \
>-chardev pty,id=charserial0 \
>-device isa-serial,chardev=charserial0,id=serial0 \
>-chardev socket,id=charchannel0,fd=27,server=on,wait=off \
>-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \
>-chardev spicevmc,id=charchannel1,name=vdagent \
>-device virtserialport,bus=virtio-serial0.0,nr=2,chardev=charchannel1,id=channel1,name=com.redhat.spice.0 \
>-device usb-tablet,id=input0,bus=usb.0,port=1 \
>-audiodev '{"id":"audio1","driver":"spice"}' \
>-spice port=5900,addr=127.0.0.1,disable-ticketing=on,image-compression=off,seamless-migration=on \
>-device virtio-vga,id=video0,max_outputs=1,bus=pcie.0,addr=0x1 \
>-device ich9-intel-hda,id=sound0,bus=pcie.0,addr=0x1b \
>-device hda-duplex,id=sound0-codec0,bus=sound0.0,cad=0,audiodev=audio1 \
>-chardev spicevmc,id=charredir0,name=usbredir \
>-device usb-redir,chardev=charredir0,id=redir0,bus=usb.0,port=2 \
>-chardev spicevmc,id=charredir1,name=usbredir \
>-device usb-redir,chardev=charredir1,id=redir1,bus=usb.0,port=3 \
>-device virtio-balloon-pci,id=balloon0,bus=pci.5,addr=0x0 \
>-object '{"qom-type":"rng-random","id":"objrng0","filename":"/dev/urandom"}' \
>-device virtio-rng-pci,rng=objrng0,id=rng0,bus=pci.6,addr=0x0 \
>-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
>-msg timestamp=on
>char device redirected to /dev/pts/2 (label charserial0)
>2022-03-04T03:09:24.261105Z qemu-system-x86_64: terminating on signal 15 from pid 8179 (/usr/sbin/virtqemud)
>2022-03-04 03:09:24.461+0000: shutting down, reason=destroyed
[libvirt PATCH] Add Alpine builds to CI
by Martin Kletzander
Signed-off-by: Martin Kletzander <mkletzan(a)redhat.com>
---
ci/containers/alpine-314.Dockerfile | 82 ++++++++++++++++++++++++++++
ci/containers/alpine-edge.Dockerfile | 81 +++++++++++++++++++++++++++
ci/gitlab.yml | 35 +++++++++++-
ci/manifest.yml | 8 +++
4 files changed, 204 insertions(+), 2 deletions(-)
create mode 100644 ci/containers/alpine-314.Dockerfile
create mode 100644 ci/containers/alpine-edge.Dockerfile
diff --git a/ci/containers/alpine-314.Dockerfile b/ci/containers/alpine-314.Dockerfile
new file mode 100644
index 000000000000..4ca35a949bda
--- /dev/null
+++ b/ci/containers/alpine-314.Dockerfile
@@ -0,0 +1,82 @@
+# THIS FILE WAS AUTO-GENERATED
+#
+# $ lcitool manifest ci/manifest.yml
+#
+# https://gitlab.com/libvirt/libvirt-ci
+
+FROM docker.io/library/alpine:3.14
+
+RUN apk update && \
+ apk upgrade && \
+ apk add \
+ acl-dev \
+ attr-dev \
+ audit-dev \
+ augeas \
+ bash-completion \
+ ca-certificates \
+ ccache \
+ ceph-dev \
+ clang \
+ curl-dev \
+ cyrus-sasl-dev \
+ diffutils \
+ dnsmasq \
+ eudev-dev \
+ fuse-dev \
+ gcc \
+ gettext \
+ git \
+ glib-dev \
+ gnutls-dev \
+ grep \
+ iproute2 \
+ iptables \
+ kmod \
+ libcap-ng-dev \
+ libnl3-dev \
+ libpcap-dev \
+ libpciaccess-dev \
+ libselinux-dev \
+ libssh-dev \
+ libssh2-dev \
+ libtirpc-dev \
+ libxml2-dev \
+ libxml2-utils \
+ libxslt \
+ lvm2 \
+ lvm2-dev \
+ make \
+ meson \
+ musl-dev \
+ netcf-dev \
+ nfs-utils \
+ numactl-dev \
+ open-iscsi \
+ parted-dev \
+ perl \
+ pkgconf \
+ polkit \
+ py3-docutils \
+ py3-flake8 \
+ python3 \
+ qemu-img \
+ readline-dev \
+ rpcgen \
+ samurai \
+ sed \
+ util-linux-dev \
+ wireshark-dev \
+ xen-dev \
+ yajl-dev && \
+ apk list | sort > /packages.txt && \
+ mkdir -p /usr/libexec/ccache-wrappers && \
+ ln -s /usr/bin/ccache /usr/libexec/ccache-wrappers/cc && \
+ ln -s /usr/bin/ccache /usr/libexec/ccache-wrappers/clang && \
+ ln -s /usr/bin/ccache /usr/libexec/ccache-wrappers/gcc
+
+ENV LANG "en_US.UTF-8"
+ENV MAKE "/usr/bin/make"
+ENV NINJA "/usr/bin/ninja"
+ENV PYTHON "/usr/bin/python3"
+ENV CCACHE_WRAPPERSDIR "/usr/libexec/ccache-wrappers"
diff --git a/ci/containers/alpine-edge.Dockerfile b/ci/containers/alpine-edge.Dockerfile
new file mode 100644
index 000000000000..d171ed1be77d
--- /dev/null
+++ b/ci/containers/alpine-edge.Dockerfile
@@ -0,0 +1,81 @@
+# THIS FILE WAS AUTO-GENERATED
+#
+# $ lcitool manifest ci/manifest.yml
+#
+# https://gitlab.com/libvirt/libvirt-ci
+
+FROM docker.io/library/alpine:edge
+
+RUN apk update && \
+ apk upgrade && \
+ apk add \
+ acl-dev \
+ attr-dev \
+ audit-dev \
+ augeas \
+ bash-completion \
+ ca-certificates \
+ ccache \
+ ceph-dev \
+ clang \
+ curl-dev \
+ cyrus-sasl-dev \
+ diffutils \
+ dnsmasq \
+ eudev-dev \
+ fuse-dev \
+ gcc \
+ gettext \
+ git \
+ glib-dev \
+ gnutls-dev \
+ grep \
+ iproute2 \
+ iptables \
+ kmod \
+ libcap-ng-dev \
+ libnl3-dev \
+ libpcap-dev \
+ libpciaccess-dev \
+ libselinux-dev \
+ libssh-dev \
+ libssh2-dev \
+ libtirpc-dev \
+ libxml2-dev \
+ libxml2-utils \
+ libxslt \
+ lvm2 \
+ lvm2-dev \
+ make \
+ meson \
+ musl-dev \
+ netcf-dev \
+ nfs-utils \
+ numactl-dev \
+ open-iscsi \
+ parted-dev \
+ perl \
+ pkgconf \
+ polkit \
+ py3-docutils \
+ py3-flake8 \
+ python3 \
+ qemu-img \
+ readline-dev \
+ samurai \
+ sed \
+ util-linux-dev \
+ wireshark-dev \
+ xen-dev \
+ yajl-dev && \
+ apk list | sort > /packages.txt && \
+ mkdir -p /usr/libexec/ccache-wrappers && \
+ ln -s /usr/bin/ccache /usr/libexec/ccache-wrappers/cc && \
+ ln -s /usr/bin/ccache /usr/libexec/ccache-wrappers/clang && \
+ ln -s /usr/bin/ccache /usr/libexec/ccache-wrappers/gcc
+
+ENV LANG "en_US.UTF-8"
+ENV MAKE "/usr/bin/make"
+ENV NINJA "/usr/bin/ninja"
+ENV PYTHON "/usr/bin/python3"
+ENV CCACHE_WRAPPERSDIR "/usr/libexec/ccache-wrappers"
diff --git a/ci/gitlab.yml b/ci/gitlab.yml
index cc03a2fe49f8..a19ec2a23f09 100644
--- a/ci/gitlab.yml
+++ b/ci/gitlab.yml
@@ -10,8 +10,7 @@
stage: containers
needs: []
services:
- - name: registry.gitlab.com/libvirt/libvirt-ci/docker-dind:master
- alias: docker
+ - docker:dind
before_script:
- export TAG="$CI_REGISTRY_IMAGE/ci-$NAME:latest"
- export COMMON_TAG="$CI_REGISTRY/libvirt/libvirt/ci-$NAME:latest"
@@ -87,6 +86,20 @@ x86_64-almalinux-8-container:
NAME: almalinux-8
+x86_64-alpine-314-container:
+ extends: .container_job
+ allow_failure: false
+ variables:
+ NAME: alpine-314
+
+
+x86_64-alpine-edge-container:
+ extends: .container_job
+ allow_failure: false
+ variables:
+ NAME: alpine-edge
+
+
x86_64-centos-stream-8-container:
extends: .container_job
allow_failure: false
@@ -400,6 +413,24 @@ x86_64-almalinux-8-clang:
RPM: skip
+x86_64-alpine-314:
+ extends: .native_build_job
+ needs:
+ - x86_64-alpine-314-container
+ allow_failure: false
+ variables:
+ NAME: alpine-314
+
+
+x86_64-alpine-edge:
+ extends: .native_build_job
+ needs:
+ - x86_64-alpine-edge-container
+ allow_failure: false
+ variables:
+ NAME: alpine-edge
+
+
x86_64-centos-stream-8:
extends: .native_build_job
needs:
diff --git a/ci/manifest.yml b/ci/manifest.yml
index 87d923ae7839..26704bef2362 100644
--- a/ci/manifest.yml
+++ b/ci/manifest.yml
@@ -18,6 +18,14 @@ targets:
RPM: skip
CC: clang
+ alpine-314:
+ jobs:
+ - arch: x86_64
+
+ alpine-edge:
+ jobs:
+ - arch: x86_64
+
centos-stream-8:
jobs:
- arch: x86_64
--
2.35.1
[PATCH 0/4] qemu_cgroup: Slightly rework
by Michal Privoznik
These are inspired by my earlier patches that solved the same issue for
namespaces:
https://listman.redhat.com/archives/libvir-list/2022-March/229266.html
Michal Prívozník (4):
qemu_cgroup: Drop ENOENT special case for RNG devices
qemu_cgroup: Introduce and use qemuCgroupAllowDevicePath()
qemu_cgroup: Introduce and use qemuCgroupDenyDevicePath()
qemu_cgroup: Don't deny devices from cgroupDeviceACL
src/qemu/qemu_cgroup.c | 246 +++++++++++++++++------------------------
1 file changed, 100 insertions(+), 146 deletions(-)
--
2.34.1
[PATCH] add build dependency on lxc_protocol.h to remote_daemon
by Joe Slater
remote_daemon.c and others need the generated header lxc_protocol.h,
but do not have it as a dependency in meson.build. This means that
builds will randomly (ok, very occasionally) fail. Restructure how the
header is built so that remote_daemon can have it as a dependency.
Signed-off-by: Joe Slater <joe.slater(a)windriver.com>
---
src/remote/meson.build | 48 ++++++++++++++++++++++++------------------
1 file changed, 28 insertions(+), 20 deletions(-)
diff --git a/src/remote/meson.build b/src/remote/meson.build
index 0a18826..31a30ee 100644
--- a/src/remote/meson.build
+++ b/src/remote/meson.build
@@ -1,27 +1,11 @@
-remote_driver_sources = [
- 'remote_driver.c',
- 'remote_sockets.c',
-]
-
-remote_driver_generated = []
+remote_xxx_generated = []
foreach name : [ 'remote', 'qemu', 'lxc' ]
- client_bodies_h = '@0@_client_bodies.h'.format(name)
protocol_c = '@0@_protocol.c'.format(name)
protocol_h = '@0@_protocol.h'.format(name)
protocol_x = '@0@_protocol.x'.format(name)
- remote_driver_generated += custom_target(
- client_bodies_h,
- input: protocol_x,
- output: client_bodies_h,
- command: [
- gendispatch_prog, '--mode=client', name, name.to_upper(), '@INPUT@',
- ],
- capture: true,
- )
-
- remote_driver_generated += custom_target(
+ remote_xxx_generated += custom_target(
protocol_h,
input: protocol_x,
output: protocol_h,
@@ -30,7 +14,7 @@ foreach name : [ 'remote', 'qemu', 'lxc' ]
],
)
- remote_driver_generated += custom_target(
+ remote_xxx_generated += custom_target(
protocol_c,
input: protocol_x,
output: protocol_c,
@@ -42,6 +26,30 @@ foreach name : [ 'remote', 'qemu', 'lxc' ]
rpc_probe_files += files(protocol_x)
endforeach
+
+remote_driver_sources = [
+ 'remote_driver.c',
+ 'remote_sockets.c',
+]
+
+remote_driver_generated = remote_xxx_generated
+
+foreach name : [ 'remote', 'qemu', 'lxc' ]
+ client_bodies_h = '@0@_client_bodies.h'.format(name)
+ protocol_x = '@0@_protocol.x'.format(name)
+
+ remote_driver_generated += custom_target(
+ client_bodies_h,
+ input: protocol_x,
+ output: client_bodies_h,
+ command: [
+ gendispatch_prog, '--mode=client', name, name.to_upper(), '@INPUT@',
+ ],
+ capture: true,
+ )
+
+endforeach
+
remote_daemon_sources = files(
'remote_daemon.c',
'remote_daemon_config.c',
@@ -49,7 +57,7 @@ remote_daemon_sources = files(
'remote_daemon_stream.c',
)
-remote_daemon_generated = []
+remote_daemon_generated = remote_xxx_generated
virt_ssh_helper_sources = files(
'remote_sockets.c',
--
2.32.0
[libvirt][PATCH RESEND v10 0/5] Support query and use SGX
by Haibin Huang
Because the 5th patch was sent by mistake, I am replacing it and
sending the series again.
This patch series provides support for enabling Intel's Software
Guard Extensions (SGX) feature in guest VM.
Giving the SGX support in QEMU had been merged. Intel SGX is a
set of instructions that increases the security of application code
and data, giving them more protection from disclosure or modification.
Developers can partition sensitive information into enclaves, which
are areas of execution in memory with more security protection.
It depends on a QEMU fix[1], which will move the cpu QOM object from
/machine/unattached/device[nn] to /machine/cpu[nn]. It requires libvirt
to change the default cpu QOM object location once the QEMU patch gets
accepted, but that is out of scope for this SGX series.
The typical flow looks below at very high level:
1. Calls virConnectGetDomainCapabilities API to domain capabilities
that includes the following SGX information.
<feature>
...
<sgx supported='yes'>
<epc_size unit='KiB'>N</epc_size>
</sgx>
...
</feature>
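A client can then pick the SGX details out of that document; a minimal
sketch with Python's stdlib XML parser (the epc_size value is invented for
illustration, and real capabilities output may wrap the element slightly
differently):

```python
# Extract the SGX feature from a domain-capabilities document shaped like
# the snippet above.
import xml.etree.ElementTree as ET

caps = ET.fromstring("""
<domainCapabilities>
  <feature>
    <sgx supported='yes'>
      <epc_size unit='KiB'>65536</epc_size>
    </sgx>
  </feature>
</domainCapabilities>
""")
sgx = caps.find(".//sgx")
print(sgx.get("supported"), sgx.findtext("epc_size"))  # -> yes 65536
```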
2. User requests to start a guest, calling virDomainCreateXML() with the
SGX requirement. It does not support NUMA yet, since the latest QEMU 6.2
release does not support NUMA.
It should contain
<devices>
...
<memory model='sgx-epc'>
<target>
<size unit='KiB'>N</size>
</target>
</memory>
...
</devices>
[1] https://lists.nongnu.org/archive/html/qemu-devel/2022-01/msg03534.html
Haibin Huang (3):
qemu: provide support to query the SGX capability
conf: expose SGX feature in domain capabilities
Add unit test for domaincapsdata sgx
Lin Yang (2):
conf: Introduce SGX EPC element into device memory xml
Update default CPU location in qemu QOM tree
docs/formatdomain.rst | 9 +-
docs/formatdomaincaps.html.in | 26 ++++
docs/schemas/domaincaps.rng | 22 ++-
docs/schemas/domaincommon.rng | 1 +
src/conf/domain_capabilities.c | 29 ++++
src/conf/domain_capabilities.h | 13 ++
src/conf/domain_conf.c | 6 +
src/conf/domain_conf.h | 1 +
src/conf/domain_validate.c | 16 ++
src/libvirt_private.syms | 1 +
src/qemu/qemu_alias.c | 3 +
src/qemu/qemu_capabilities.c | 137 ++++++++++++++++++
src/qemu/qemu_capabilities.h | 4 +
src/qemu/qemu_capspriv.h | 4 +
src/qemu/qemu_command.c | 1 +
src/qemu/qemu_domain.c | 38 +++--
src/qemu/qemu_domain_address.c | 6 +
src/qemu/qemu_driver.c | 1 +
src/qemu/qemu_monitor.c | 10 ++
src/qemu/qemu_monitor.h | 3 +
src/qemu/qemu_monitor_json.c | 84 ++++++++++-
src/qemu/qemu_monitor_json.h | 9 ++
src/qemu/qemu_process.c | 2 +
src/qemu/qemu_validate.c | 8 +
src/security/security_apparmor.c | 1 +
src/security/security_dac.c | 2 +
src/security/security_selinux.c | 2 +
tests/domaincapsdata/bhyve_basic.x86_64.xml | 1 +
tests/domaincapsdata/bhyve_fbuf.x86_64.xml | 1 +
tests/domaincapsdata/bhyve_uefi.x86_64.xml | 1 +
tests/domaincapsdata/empty.xml | 1 +
tests/domaincapsdata/libxl-xenfv.xml | 1 +
tests/domaincapsdata/libxl-xenpv.xml | 1 +
.../domaincapsdata/qemu_2.11.0-q35.x86_64.xml | 1 +
.../domaincapsdata/qemu_2.11.0-tcg.x86_64.xml | 1 +
tests/domaincapsdata/qemu_2.11.0.s390x.xml | 1 +
tests/domaincapsdata/qemu_2.11.0.x86_64.xml | 1 +
.../domaincapsdata/qemu_2.12.0-q35.x86_64.xml | 1 +
.../domaincapsdata/qemu_2.12.0-tcg.x86_64.xml | 1 +
.../qemu_2.12.0-virt.aarch64.xml | 1 +
tests/domaincapsdata/qemu_2.12.0.aarch64.xml | 1 +
tests/domaincapsdata/qemu_2.12.0.ppc64.xml | 1 +
tests/domaincapsdata/qemu_2.12.0.s390x.xml | 1 +
tests/domaincapsdata/qemu_2.12.0.x86_64.xml | 1 +
.../domaincapsdata/qemu_2.4.0-q35.x86_64.xml | 1 +
.../domaincapsdata/qemu_2.4.0-tcg.x86_64.xml | 1 +
tests/domaincapsdata/qemu_2.4.0.x86_64.xml | 1 +
.../domaincapsdata/qemu_2.5.0-q35.x86_64.xml | 1 +
.../domaincapsdata/qemu_2.5.0-tcg.x86_64.xml | 1 +
tests/domaincapsdata/qemu_2.5.0.x86_64.xml | 1 +
.../domaincapsdata/qemu_2.6.0-q35.x86_64.xml | 1 +
.../domaincapsdata/qemu_2.6.0-tcg.x86_64.xml | 1 +
.../qemu_2.6.0-virt.aarch64.xml | 1 +
tests/domaincapsdata/qemu_2.6.0.aarch64.xml | 1 +
tests/domaincapsdata/qemu_2.6.0.ppc64.xml | 1 +
tests/domaincapsdata/qemu_2.6.0.x86_64.xml | 1 +
.../domaincapsdata/qemu_2.7.0-q35.x86_64.xml | 1 +
.../domaincapsdata/qemu_2.7.0-tcg.x86_64.xml | 1 +
tests/domaincapsdata/qemu_2.7.0.s390x.xml | 1 +
tests/domaincapsdata/qemu_2.7.0.x86_64.xml | 1 +
.../domaincapsdata/qemu_2.8.0-q35.x86_64.xml | 1 +
.../domaincapsdata/qemu_2.8.0-tcg.x86_64.xml | 1 +
tests/domaincapsdata/qemu_2.8.0.s390x.xml | 1 +
tests/domaincapsdata/qemu_2.8.0.x86_64.xml | 1 +
.../domaincapsdata/qemu_2.9.0-q35.x86_64.xml | 1 +
.../domaincapsdata/qemu_2.9.0-tcg.x86_64.xml | 1 +
tests/domaincapsdata/qemu_2.9.0.ppc64.xml | 1 +
tests/domaincapsdata/qemu_2.9.0.s390x.xml | 1 +
tests/domaincapsdata/qemu_2.9.0.x86_64.xml | 1 +
.../domaincapsdata/qemu_3.0.0-q35.x86_64.xml | 1 +
.../domaincapsdata/qemu_3.0.0-tcg.x86_64.xml | 1 +
tests/domaincapsdata/qemu_3.0.0.ppc64.xml | 1 +
tests/domaincapsdata/qemu_3.0.0.s390x.xml | 1 +
tests/domaincapsdata/qemu_3.0.0.x86_64.xml | 1 +
.../domaincapsdata/qemu_3.1.0-q35.x86_64.xml | 1 +
.../domaincapsdata/qemu_3.1.0-tcg.x86_64.xml | 1 +
tests/domaincapsdata/qemu_3.1.0.ppc64.xml | 1 +
tests/domaincapsdata/qemu_3.1.0.x86_64.xml | 1 +
.../domaincapsdata/qemu_4.0.0-q35.x86_64.xml | 1 +
.../domaincapsdata/qemu_4.0.0-tcg.x86_64.xml | 1 +
.../qemu_4.0.0-virt.aarch64.xml | 1 +
tests/domaincapsdata/qemu_4.0.0.aarch64.xml | 1 +
tests/domaincapsdata/qemu_4.0.0.ppc64.xml | 1 +
tests/domaincapsdata/qemu_4.0.0.s390x.xml | 1 +
tests/domaincapsdata/qemu_4.0.0.x86_64.xml | 1 +
.../domaincapsdata/qemu_4.1.0-q35.x86_64.xml | 1 +
.../domaincapsdata/qemu_4.1.0-tcg.x86_64.xml | 1 +
tests/domaincapsdata/qemu_4.1.0.x86_64.xml | 1 +
.../domaincapsdata/qemu_4.2.0-q35.x86_64.xml | 1 +
.../domaincapsdata/qemu_4.2.0-tcg.x86_64.xml | 1 +
.../qemu_4.2.0-virt.aarch64.xml | 1 +
tests/domaincapsdata/qemu_4.2.0.aarch64.xml | 1 +
tests/domaincapsdata/qemu_4.2.0.ppc64.xml | 1 +
tests/domaincapsdata/qemu_4.2.0.s390x.xml | 1 +
tests/domaincapsdata/qemu_4.2.0.x86_64.xml | 1 +
.../domaincapsdata/qemu_5.0.0-q35.x86_64.xml | 1 +
.../domaincapsdata/qemu_5.0.0-tcg.x86_64.xml | 1 +
.../qemu_5.0.0-virt.aarch64.xml | 1 +
tests/domaincapsdata/qemu_5.0.0.aarch64.xml | 1 +
tests/domaincapsdata/qemu_5.0.0.ppc64.xml | 1 +
tests/domaincapsdata/qemu_5.0.0.x86_64.xml | 1 +
.../domaincapsdata/qemu_5.1.0-q35.x86_64.xml | 1 +
.../domaincapsdata/qemu_5.1.0-tcg.x86_64.xml | 1 +
tests/domaincapsdata/qemu_5.1.0.sparc.xml | 1 +
tests/domaincapsdata/qemu_5.1.0.x86_64.xml | 1 +
.../domaincapsdata/qemu_5.2.0-q35.x86_64.xml | 1 +
.../domaincapsdata/qemu_5.2.0-tcg.x86_64.xml | 1 +
.../qemu_5.2.0-virt.aarch64.xml | 1 +
tests/domaincapsdata/qemu_5.2.0.aarch64.xml | 1 +
tests/domaincapsdata/qemu_5.2.0.ppc64.xml | 1 +
tests/domaincapsdata/qemu_5.2.0.s390x.xml | 1 +
tests/domaincapsdata/qemu_5.2.0.x86_64.xml | 1 +
.../domaincapsdata/qemu_6.0.0-q35.x86_64.xml | 1 +
.../domaincapsdata/qemu_6.0.0-tcg.x86_64.xml | 1 +
.../qemu_6.0.0-virt.aarch64.xml | 1 +
tests/domaincapsdata/qemu_6.0.0.aarch64.xml | 1 +
tests/domaincapsdata/qemu_6.0.0.s390x.xml | 1 +
tests/domaincapsdata/qemu_6.0.0.x86_64.xml | 1 +
.../domaincapsdata/qemu_6.1.0-q35.x86_64.xml | 1 +
.../domaincapsdata/qemu_6.1.0-tcg.x86_64.xml | 1 +
tests/domaincapsdata/qemu_6.1.0.x86_64.xml | 1 +
.../domaincapsdata/qemu_6.2.0-q35.x86_64.xml | 4 +
.../domaincapsdata/qemu_6.2.0-tcg.x86_64.xml | 4 +
.../qemu_6.2.0-virt.aarch64.xml | 1 +
tests/domaincapsdata/qemu_6.2.0.aarch64.xml | 1 +
tests/domaincapsdata/qemu_6.2.0.ppc64.xml | 1 +
tests/domaincapsdata/qemu_6.2.0.x86_64.xml | 4 +
.../domaincapsdata/qemu_7.0.0-q35.x86_64.xml | 4 +
.../domaincapsdata/qemu_7.0.0-tcg.x86_64.xml | 4 +
tests/domaincapsdata/qemu_7.0.0.ppc64.xml | 1 +
tests/domaincapsdata/qemu_7.0.0.x86_64.xml | 4 +
.../caps_6.2.0.x86_64.replies | 22 ++-
.../caps_6.2.0.x86_64.xml | 5 +
.../caps_7.0.0.x86_64.replies | 22 ++-
.../caps_7.0.0.x86_64.xml | 5 +
tests/qemuxml2argvdata/sgx-epc.xml | 36 +++++
.../sgx-epc.x86_64-latest.xml | 52 +++++++
tests/qemuxml2xmltest.c | 2 +
138 files changed, 675 insertions(+), 30 deletions(-)
create mode 100644 tests/qemuxml2argvdata/sgx-epc.xml
create mode 100644 tests/qemuxml2xmloutdata/sgx-epc.x86_64-latest.xml
--
2.17.1
[PATCH 0/3] virsh: Completers improvements
by Michal Privoznik
*** BLURB HERE ***
Michal Prívozník (3):
virsh: Properly terminate string list in
virshDomainInterfaceSourceModeCompleter()
virsh: Introduce virshEnumComplete()
virsh: Don't open code virshEnumComplete()
tools/virsh-completer-domain.c | 147 ++++++--------------------------
tools/virsh-completer-host.c | 11 +--
tools/virsh-completer-nodedev.c | 7 +-
tools/virsh-completer-pool.c | 7 +-
tools/virsh-completer-volume.c | 11 +--
tools/virsh-completer.c | 27 ++++++
tools/virsh-completer.h | 4 +
7 files changed, 67 insertions(+), 147 deletions(-)
--
2.34.1
REST service for libvirt to simplify SEV(ES) launch measurement
by Daniel P. Berrangé
Extending management apps using libvirt to support measured launch of
QEMU guests with SEV/SEV-ES is unreasonably complicated today, both for
the guest owner and for the cloud management apps. We have APIs for
exposing info about the SEV host, the SEV guest, guest measurements
and secret injections. This is a "bags of bits" solution. We expect
apps to them turn this into a user facting solution. It is possible
but we're heading to a place where every cloud mgmt app essentially
needs to reinvent the same wheel and the guest owner will need to
learn custom APIs for dealing with SEV/SEV-ES for each cloud mgmt
app. This is pretty awful. We need to do a better job at providing
a solution that is more general purpose IMHO.
Consider a cloud mgmt app, right now the flow to use the bag of
bits libvirt exposes, looks something like
* Guest owner tells mgmt app they want to launch a VM
* Mgmt app decides what host the VM will be launched on
* Guest owner requests cert chain for the virt host from mgmt app
* Guest owner validates cert chain for the virt host
* Guest owner generates launch blob for the VM
* Guest owner provides launch blob to the mgmt app
* Management app tells libvirt to launch VM with blob,
with CPUs in a paused state
* Libvirt launches QEMU with CPUs stopped
* Guest owner requests launch measurement from mgmt app
* Guest owner validates measurement
* Guest owner generates secret blob
* Guest owner sends secret blob to management app
* Management app tells libvirt to inject secrets
* Libvirt injects secrets to QEMU
* Management app tells libvirt to start QEMU CPUs
* Libvirt tells QEMU to start CPUs
Compare to a non-confidential VM
* Guest owner tells mgmt app they want to launch a VM
* Mgmt app decides what host the VM will be launched on
* Mgmt app tells libvirt to launch VM with CPUs in running state
* Libvirt launches QEMU with CPUs running
Now, of course the guest owner wouldn't be manually performing the
earlier steps, they would want some kind of software to take care
of this. No matter what, it still involves a large number of back
and forth operations between the guest owner & mgmt app, and between
the mgmt app and libvirt.
One of libvirt's key jobs is to isolate mgmt apps from differences
in behaviour of underlying hypervisor technologies, and we're failing
at that job with SEV/SEV-ES, because the mgmt app needs to go through
a multi-stage dance on every VM start, that is different from what
they do with non-confidential VMs.
It is especially unpleasant because there needs to be a "wait state"
between when the app selects a host to deploy a VM on, and when it
can actually start a VM. In essence the app needs to reserve capacity
on a host ahead of time for a VM that will be created some arbitrary
time later. This can have significant implications for the mgmt app
architectural design that are not necessarily easy to address, when
they expect to just call virDomainCreate have the VM running in one
step.
It also harms interoperability to libvirt tools. For example if
a mgmt tool like virt-manager/OpenStack created a VM using SEV,
and you want to start it manually using a different tool like
'virsh', you enter a world of complexity and pain, due to the
multi step dance required.
AFAICT, in all of this, the mgmt app is really acting as a conduit
and is not implementing any interesting logic. The clever stuff is
all the responsibility of the guest owner, and/or whatever software
for attestation they are using remotely.
I think there is scope for enhancing libvirt, such that usage of
SEV/SEV-ES has little-to-no burden for the management apps, and
much less burden for guest owners. The key to achieving this is
to define a protocol for libvirt to connect to a remote service
to handle the launch measurements & secret acquisition. The guest
owner can provide the address of a service they control (or trust),
and libvirt can take care of all the interactions with it.
This frees both the user and mgmt app from having to know much
about SEV/SEV-ES, with VM startup process being essentially the
same as it has always been.
The sequence would look like
* Guest owner tells attestation service they intend to
create a VM with a given UUID, policy, and any other
criteria such as the cert of the cloud owner and valid OVMF
firmware hashes, and provides any needed LUKS keys.
* Guest owner tells mgmt app they want to launch a VM,
using attestation service at https://somehost/and/url
* Mgmt app decides what host the VM will be launched on
* Mgmt app tells libvirt to launch VM with CPUs in running state
The next steps involve solely libvirt & the attestation service.
The mgmt app and guest owner have done their work.
* Libvirt contacts the service providing certificate chain
for the host to be used, the UUID of the guest, and any
other required info about the host.
* Attestation service validates the cert chain to ensure
it belongs to the cloud owner that was identified previously
* Attestation service generates a launch blob and puts it in
the response back to libvirt
* Libvirt launches QEMU with CPUs paused
* Libvirt gets the launch measurement and sends it to the
attestation server, with any other required info about the
VM instance
* Attestation service validates the measurement
* Attestation builds the secret table with LUKS keys
and puts it in the response back to libvirt
* Libvirt injects the secret table to QEMU
* Libvirt tells QEMU to start CPUs
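The sequence above could be sketched as a toy simulation (all function
names and payloads here are hypothetical placeholders for the real SEV
primitives, not actual libvirt or QEMU APIs):

```python
# Toy simulation of the proposed libvirt <-> attestation-service handshake.
# All crypto is faked with placeholder strings; every name is hypothetical.

def attestation_launch(cert_chain, vm_uuid):
    # Service validates the host cert chain and returns a launch blob
    assert cert_chain.startswith("cert:"), "unexpected cert format"
    return {"session": "blob-for-" + vm_uuid, "policy": 3}

def attestation_validate(measurement):
    # Service checks the launch measurement, then releases the secret table
    if measurement != "expected-measurement":
        raise ValueError("measurement mismatch")
    return {"secret-table": "luks-keys"}

def libvirt_start_vm(vm_uuid):
    launch = attestation_launch("cert:host1", vm_uuid)  # pre-launch exchange
    # ... QEMU started with CPUs paused, launch blob injected ...
    measurement = "expected-measurement"                # read back from QEMU
    secrets = attestation_validate(measurement)
    # ... secret table injected, CPUs started ...
    return launch, secrets

launch, secrets = libvirt_start_vm("57f669c2")
```

The point of the sketch is that the mgmt app appears nowhere in it: once
libvirt knows the service URL, the whole exchange is between libvirt and
the attestation service.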
All the same exchanges of information are present, but the management
app doesn't have to get involved. The guest owner also doesn't have
to get involved except for a one-time setup step. The software the
guest owner uses for attestation also doesn't have to be written to
cope with talking to OpenStack, CNV and whatever other vendor specific
cloud mgmt apps exist today. This will significantly reduce the burden
of supporting SEV/SEV-ES launch measurement in libvirt-based apps, and
make SEV/SEV-ES guests more "normal" from a mgmt POV.
What could this look like from the POV of an attestation server API, if
we assume an HTTPS REST service with a simple JSON payload ...
* Guest Owner: Register a new VM to be booted:
POST /vm/<UUID>
Request body:
{
"scheme": "amd-sev",
"cloud-cert": "certificate of the cloud owner that signs the PEK",
"policy": 3,
"cpu-count": 3,
"firmware-hashes": [
"xxxx",
"yyyy"
],
"kernel-hash": "aaaa",
"initrd-hash": "bbbb",
"cmdline-hash": "cccc",
"secrets": [
{
"type": "luks-passphrase",
"passphrase": "<blah>"
}
]
}
* Libvirt: Request permission to launch a VM on a host
POST /vm/<UUID>/launch
Request body:
{
"pdh": "<blah>",
"cert-chain": "<blah>",
"cpu-id": "<CPU ID>",
...other relevant bits...
}
Service decides if the proposed host is acceptable
Response body (on success)
{
"session": "<blah>",
"owner-cert": "<blah>",
"policy": 3
}
* Libvirt: Request secrets to inject to launched VM
POST /vm/<UUID>/validate
Request body:
{
"api-minor": 1,
"api-major": 2,
"build-id": 241,
"policy": 3,
"measurement": "<blah>",
"firmware-hash": "xxxx",
"cpu-count": 3,
....other relevant stuff....
}
Service validates the measurement...
Response body (on success):
{
"secret-header": "<blah>",
"secret-table": "<blah>"
}
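A client for such a service could be little more than three POSTs. As a
sketch only (the endpoint paths mirror the illustrative URLs above; this
is not a settled API, and a real client would also need TLS pinning and
authentication):

```python
import json
import urllib.request

class AttestationClient:
    """Minimal client for the hypothetical attestation REST API sketched above."""

    def __init__(self, base_url):
        self.base_url = base_url.rstrip("/")

    def _post(self, path, payload):
        # POST a JSON body and decode the JSON response
        req = urllib.request.Request(
            self.base_url + path,
            data=json.dumps(payload).encode(),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)

    def register_vm(self, uuid, body):           # guest owner, one-time setup
        return self._post(f"/vm/{uuid}", body)

    def request_launch(self, uuid, body):        # libvirt, before QEMU starts
        return self._post(f"/vm/{uuid}/launch", body)

    def validate_measurement(self, uuid, body):  # libvirt, CPUs still paused
        return self._post(f"/vm/{uuid}/validate", body)
```

Only `register_vm` would be called by the guest owner; the other two are
issued by libvirt itself during domain startup.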
So we can see there are only a couple of REST API calls we need to be
able to define. If we could do that, then creating a SEV/SEV-ES enabled
guest with libvirt would not involve anything more complicated for the
mgmt app than providing the URI of the guest owner's attestation service
and an identifier for the VM. i.e. the XML config could be merely:
<launchSecurity type="sev">
<attestation vmid="57f669c2-c427-4132-bc7a-26f56b6a718c"
service="http://somehost/some/url"/>
</launchSecurity>
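Generating that element on the mgmt side would be trivial; a sketch with
Python's stdlib, assuming the element and attribute names proposed above
(which are not yet part of any libvirt schema):

```python
import xml.etree.ElementTree as ET

def launch_security_xml(vmid, service):
    # Build the proposed <launchSecurity> snippet for the domain XML
    sec = ET.Element("launchSecurity", {"type": "sev"})
    ET.SubElement(sec, "attestation", {"vmid": vmid, "service": service})
    return ET.tostring(sec, encoding="unicode")

xml = launch_security_xml("57f669c2-c427-4132-bc7a-26f56b6a718c",
                          "http://somehost/some/url")
```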
And then invoke virDomainCreate as normal, as with any other libvirt / QEMU
guest. No special workflow is required by the mgmt app. There is a small
extra task for the guest owner to register the existence of their VM with
the attestation service. Aside from that, the only change to the way they
interact with the cloud mgmt app is to provide the VM ID and URI for the
attestation service. No need to learn custom APIs for each different
cloud vendor for dealing with fetching launch measurements or injecting
secrets.
Finally this attestation service REST protocol doesn't have to be something
controlled or defined by libvirt. I feel like it could be a protocol that
is defined anywhere, with libvirt merely being one consumer of it. Other
apps that directly use QEMU may also wish to avail themselves of it.
All that really matters from libvirt POV is:
- The protocol definition exists to enable the above workflow,
with a long term API stability guarantee that it isn't going to
be changed in incompatible ways
- There exists a fully open source reference implementation of sufficient
quality to deploy in the real world
I know https://github.com/slp/sev-attestation-server exists, but its current
design has assumptions about it being used with libkrun AFAICT. I have heard
of others interested in writing similar servers, but I've not seen code.
We are at a crucial stage where mgmt apps are looking to support measured
boot with SEV/SEV-ES and if we delay they'll all go off and do their own
thing, and it'll be too late, leading to https://xkcd.com/927/.
Especially for apps using libvirt to manage QEMU, I feel we have got a
few months window of opportunity to get such a service available, before
they all end up building out APIs for the tedious manual workflow,
reinventing the wheel.
Regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
[PATCH v3 00/29] ppc64 PowerNV machines support
by Daniel Henrique Barboza
Hi,
This new version contains changes proposed by Jano. The most notable
change is on patch 9, where pnv_phb3/pnv_phb4 capabilities are now being
probed and, if the QEMU version isn't high enough, they are cleared from
qemuCaps.
For convenience, the patches that are pending review/acks are patches 14,
17, 19, 20, 22, 23 and 24.
v2 link: https://listman.redhat.com/archives/libvir-list/2022-January/msg01149.html
Daniel Henrique Barboza (29):
qemu_domain.c: add PowerNV machine helpers
qemu_capabilities.c: use 'MachineIsPowerPC' in DeviceDiskCaps
qemu_domain: turn qemuDomainMachineIsPSeries() static
qemu_validate.c: use qemuDomainIsPowerPC() in
qemuValidateDomainChrDef()
qemu_domain.c: define ISA as default PowerNV serial
qemu_validate.c: enhance 'machine type not supported' message
qemu_domain.c: disable default devices for PowerNV machines
tests: add basic PowerNV8 test
qemu: introduce QEMU_CAPS_DEVICE_PNV_PHB3
conf, qemu: add 'pnv-phb3-root-port' PCI controller model name
conf, qemu: add 'pnv-phb3' PCI controller model name
domain_conf.c: fix identation in virDomainControllerDefParseXML()
conf: parse and format <target chip-id='...'/>
formatdomain.rst: add 'index' semantics for PowerNV domains
conf: introduce virDomainControllerIsPowerNVPHB
conf, qemu: add default 'chip-id' value for pnv-phb3 controllers
conf, qemu: add default 'targetIndex' value for pnv-phb3 devs
qemu_command.c: add command line for the pnv-phb3 device
qemu_domain_address.c: change pnv-phb3 minimal downstream slot
domain_conf: always format pnv-phb3-root-port address
tests: add pnv-phb3-root-port test
domain_validate.c: allow targetIndex 0 out of idx 0 for PowerNV PHBs
domain_conf.c: reject duplicated pnv-phb3 devices
qemu: introduce QEMU_CAPS_DEVICE_PNV_PHB4
conf, qemu: add 'pnv-phb4-root-port' PCI controller model name
domain_conf.c: add phb4-root-port to IsPowerNVRootPort()
conf, qemu: add 'pnv-phb4' controller model name
domain_conf.c: add pnv-phb4 to ControllerIsPowerNVPHB()
tests: add PowerNV9 tests
docs/formatdomain.rst | 12 +-
docs/schemas/domaincommon.rng | 10 ++
src/conf/domain_conf.c | 156 ++++++++++++++----
src/conf/domain_conf.h | 8 +
src/conf/domain_validate.c | 5 +-
src/libvirt_private.syms | 1 +
src/qemu/qemu_capabilities.c | 19 ++-
src/qemu/qemu_capabilities.h | 4 +
src/qemu/qemu_command.c | 21 ++-
src/qemu/qemu_domain.c | 51 +++++-
src/qemu/qemu_domain.h | 4 +-
src/qemu/qemu_domain_address.c | 64 ++++++-
src/qemu/qemu_validate.c | 62 ++++++-
.../qemucapabilitiesdata/caps_5.0.0.ppc64.xml | 2 +
.../qemucapabilitiesdata/caps_5.2.0.ppc64.xml | 2 +
.../qemucapabilitiesdata/caps_6.2.0.ppc64.xml | 2 +
.../qemucapabilitiesdata/caps_7.0.0.ppc64.xml | 2 +
.../powernv8-basic.ppc64-latest.args | 34 ++++
tests/qemuxml2argvdata/powernv8-basic.xml | 16 ++
tests/qemuxml2argvdata/powernv8-dupPHBs.err | 1 +
.../powernv8-dupPHBs.ppc64-latest.err | 1 +
tests/qemuxml2argvdata/powernv8-dupPHBs.xml | 27 +++
.../powernv8-root-port.ppc64-latest.args | 35 ++++
tests/qemuxml2argvdata/powernv8-root-port.xml | 17 ++
.../powernv8-two-sockets.ppc64-latest.args | 35 ++++
.../qemuxml2argvdata/powernv8-two-sockets.xml | 26 +++
.../powernv9-dupPHBs.ppc64-latest.err | 1 +
tests/qemuxml2argvdata/powernv9-dupPHBs.xml | 27 +++
.../powernv9-root-port.ppc64-latest.args | 35 ++++
tests/qemuxml2argvdata/powernv9-root-port.xml | 17 ++
tests/qemuxml2argvtest.c | 7 +
.../powernv8-basic.ppc64-latest.xml | 34 ++++
.../powernv8-root-port.ppc64-latest.xml | 39 +++++
.../powernv8-two-sockets.ppc64-latest.xml | 39 +++++
.../powernv9-root-port.ppc64-latest.xml | 39 +++++
.../qemuxml2xmloutdata/powernv9-root-port.xml | 36 ++++
tests/qemuxml2xmltest.c | 5 +
37 files changed, 848 insertions(+), 48 deletions(-)
create mode 100644 tests/qemuxml2argvdata/powernv8-basic.ppc64-latest.args
create mode 100644 tests/qemuxml2argvdata/powernv8-basic.xml
create mode 100644 tests/qemuxml2argvdata/powernv8-dupPHBs.err
create mode 100644 tests/qemuxml2argvdata/powernv8-dupPHBs.ppc64-latest.err
create mode 100644 tests/qemuxml2argvdata/powernv8-dupPHBs.xml
create mode 100644 tests/qemuxml2argvdata/powernv8-root-port.ppc64-latest.args
create mode 100644 tests/qemuxml2argvdata/powernv8-root-port.xml
create mode 100644 tests/qemuxml2argvdata/powernv8-two-sockets.ppc64-latest.args
create mode 100644 tests/qemuxml2argvdata/powernv8-two-sockets.xml
create mode 100644 tests/qemuxml2argvdata/powernv9-dupPHBs.ppc64-latest.err
create mode 100644 tests/qemuxml2argvdata/powernv9-dupPHBs.xml
create mode 100644 tests/qemuxml2argvdata/powernv9-root-port.ppc64-latest.args
create mode 100644 tests/qemuxml2argvdata/powernv9-root-port.xml
create mode 100644 tests/qemuxml2xmloutdata/powernv8-basic.ppc64-latest.xml
create mode 100644 tests/qemuxml2xmloutdata/powernv8-root-port.ppc64-latest.xml
create mode 100644 tests/qemuxml2xmloutdata/powernv8-two-sockets.ppc64-latest.xml
create mode 100644 tests/qemuxml2xmloutdata/powernv9-root-port.ppc64-latest.xml
create mode 100644 tests/qemuxml2xmloutdata/powernv9-root-port.xml
--
2.35.1
[PATCH] scripts: Fix the parameter of warning function
by luzhipeng
The parameter of self.warning is inconsistent with its definition, so
fix it.
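For reference, a stand-alone reduction of the failure mode (hypothetical
class, mirroring the single-message signature of the real warning method):

```python
# Hypothetical reduction of the bug: a warning() method that takes one
# message string, but is called with two positional arguments.
class Index:
    def warning(self, msg):
        print("Warning:", msg)

idx = Index()
try:
    idx.warning("Unable to register type ", "struct")  # old, broken call
    raised = False
except TypeError:                                      # extra arg rejected
    raised = True

# The fixed call formats the value into a single string first:
idx.warning("Unable to register type %s" % "struct")
```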
Signed-off-by: luzhipeng <luzhipeng(a)cestc.cn>
---
scripts/apibuild.py | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/scripts/apibuild.py b/scripts/apibuild.py
index bdd3077c48..99b16f47fa 100755
--- a/scripts/apibuild.py
+++ b/scripts/apibuild.py
@@ -317,7 +317,7 @@ class index:
         if type in type_map:
             type_map[type][name] = d
         else:
-            self.warning("Unable to register type ", type)
+            self.warning("Unable to register type %s" % type)
         if name == debugsym and not quiet:
             print("New symbol: %s" % (d))
--
2.34.0.windows.1