kvm-hint-dedicated requires host CPU passthrough?
by Stefan Hajnoczi
Hi,
libvirt refuses to set KVM_HINTS_DEDICATED when the CPU model is not
host-passthrough.
Is there a reason for this limitation?
My understanding is that KVM_HINTS_DEDICATED means the vCPU is pinned to
a host CPU that is not shared with other tasks. Any KVM vCPU should be
able to support this feature, regardless of whether host-passthrough is
used or not.
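For reference, a guest can inspect the hint for itself. Below is a
minimal sketch, assuming an x86 guest and GCC's <cpuid.h>; upstream
merged the hint as KVM_HINTS_REALTIME (bit 0 of EDX in KVM's features
leaf), after it briefly carried the KVM_HINTS_DEDICATED name:

#include <cpuid.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;
    char sig[13];

    /* Hypervisor CPUID leaves start at 0x40000000; KVM identifies
     * itself with "KVMKVMKVM" in EBX/ECX/EDX. __cpuid() is the
     * unchecked variant, needed because these leaves sit above the
     * maximum basic leaf. */
    __cpuid(0x40000000, eax, ebx, ecx, edx);
    memcpy(sig, &ebx, 4);
    memcpy(sig + 4, &ecx, 4);
    memcpy(sig + 8, &edx, 4);
    sig[12] = '\0';
    if (strncmp(sig, "KVMKVMKVM", 9) != 0) {
        fprintf(stderr, "not running on KVM\n");
        return 1;
    }

    /* KVM_CPUID_FEATURES (0x40000001): EAX carries feature bits,
     * EDX carries hints; bit 0 is the dedicated/realtime hint. */
    __cpuid(0x40000001, eax, ebx, ecx, edx);
    printf("dedicated-pCPU hint: %s\n", (edx & 1u) ? "set" : "not set");
    return 0;
}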
Stefan
[libvirt-jenkins-ci PATCH 0/7] lcitool: Create and expose ccache wrappers
by Andrea Bolognani
This series makes ccache work out of the box for container-based
builds. Most of it is refactoring.
Applies on top of
https://www.redhat.com/archives/libvir-list/2020-March/msg01263.html
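(For the curious: a ccache "wrapper" here is just a compiler-named
alias for ccache - either a symlink such as cc -> /usr/bin/ccache, or
a tiny script along the lines of 'exec /usr/bin/ccache /usr/bin/gcc "$@"' -
installed in a directory that gets prepended to $PATH. The exact
location is whatever the series settles on.)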
Andrea Bolognani (7):
lcitool: Improve ccache symlinks creation
lcitool: Configure ccache using environment variables
lcitool: Create ccache wrappers outside of $HOME
lcitool: Refactor cross_arch handling a bit
lcitool: Use commands[] for deb-based distros
lcitool: Use cross_commands[] for all distros
lcitool: Create and expose ccache wrappers
guests/host_vars/libvirt-centos-7/main.yml | 1 +
guests/host_vars/libvirt-centos-8/main.yml | 1 +
guests/host_vars/libvirt-debian-10/main.yml | 1 +
guests/host_vars/libvirt-debian-9/main.yml | 1 +
guests/host_vars/libvirt-debian-sid/main.yml | 1 +
guests/host_vars/libvirt-fedora-30/main.yml | 1 +
guests/host_vars/libvirt-fedora-31/main.yml | 1 +
.../host_vars/libvirt-fedora-rawhide/main.yml | 1 +
guests/host_vars/libvirt-freebsd-11/main.yml | 1 +
guests/host_vars/libvirt-freebsd-12/main.yml | 1 +
.../libvirt-freebsd-current/main.yml | 1 +
.../host_vars/libvirt-opensuse-151/main.yml | 1 +
guests/host_vars/libvirt-ubuntu-1604/main.yml | 1 +
guests/host_vars/libvirt-ubuntu-1804/main.yml | 1 +
guests/lcitool | 92 +++++++++++--------
guests/playbooks/update/main.yml | 1 +
guests/playbooks/update/tasks/global.yml | 14 +++
guests/playbooks/update/tasks/users.yml | 45 ---------
guests/playbooks/update/templates/bashrc.j2 | 3 +-
.../playbooks/update/templates/ccache.conf.j2 | 1 -
20 files changed, 87 insertions(+), 83 deletions(-)
create mode 100644 guests/playbooks/update/tasks/global.yml
delete mode 100644 guests/playbooks/update/templates/ccache.conf.j2
--
2.25.1
[PATCH v5 for-5.0? 0/7] Tighten qemu-img rules on missing backing format
by Eric Blake
v4 was here:
https://lists.gnu.org/archive/html/qemu-devel/2020-03/msg03775.html
In v5:
- fix 'qemu-img convert -B' to actually warn [Kashyap]
- squash in followups
- a couple more iotest improvements
If we decide this is not 5.0 material, then patches 4 and 7 need a
tweak to s/5.0/5.1/ as the start of the deprecation clock.
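For context, the distinction the series tightens looks like this on
the command line (file names are placeholders):

  qemu-img create -f qcow2 -b base.qcow2 overlay.qcow2
      (deprecated: backing file given without an explicit format)
  qemu-img create -f qcow2 -b base.qcow2 -F qcow2 overlay.qcow2
      (preferred: backing format pinned with -F)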
Eric Blake (7):
sheepdog: Add trivial backing_fmt support
vmdk: Add trivial backing_fmt support
qcow: Tolerate backing_fmt=, but warn on backing_fmt=raw
qcow2: Deprecate use of qemu-img amend to change backing file
iotests: Specify explicit backing format where sensible
block: Add support to warn on backing file change without format
qemu-img: Deprecate use of -b without -F
docs/system/deprecated.rst | 32 ++++++++++++++++
docs/tools/qemu-img.rst | 4 ++
include/block/block.h | 4 +-
block.c | 34 +++++++++++++++--
block/qcow.c | 16 +++++++-
block/qcow2.c | 7 +++-
block/sheepdog.c | 18 ++++++++-
block/stream.c | 2 +-
block/vmdk.c | 14 +++++++
blockdev.c | 3 +-
qemu-img.c | 11 +++++-
tests/qemu-iotests/017 | 2 +-
tests/qemu-iotests/017.out | 2 +-
tests/qemu-iotests/018 | 2 +-
tests/qemu-iotests/018.out | 2 +-
tests/qemu-iotests/019 | 5 ++-
tests/qemu-iotests/019.out | 2 +-
tests/qemu-iotests/020 | 4 +-
tests/qemu-iotests/020.out | 4 +-
tests/qemu-iotests/024 | 8 ++--
tests/qemu-iotests/024.out | 5 ++-
tests/qemu-iotests/028 | 4 +-
tests/qemu-iotests/028.out | 2 +-
tests/qemu-iotests/030 | 26 +++++++++----
tests/qemu-iotests/034 | 2 +-
tests/qemu-iotests/034.out | 2 +-
tests/qemu-iotests/037 | 2 +-
tests/qemu-iotests/037.out | 2 +-
tests/qemu-iotests/038 | 2 +-
tests/qemu-iotests/038.out | 2 +-
tests/qemu-iotests/039 | 3 +-
tests/qemu-iotests/039.out | 2 +-
tests/qemu-iotests/040 | 47 ++++++++++++++++-------
tests/qemu-iotests/041 | 37 ++++++++++++------
tests/qemu-iotests/042 | 4 +-
tests/qemu-iotests/043 | 18 ++++-----
tests/qemu-iotests/043.out | 16 +++++---
tests/qemu-iotests/046 | 2 +-
tests/qemu-iotests/046.out | 2 +-
tests/qemu-iotests/050 | 4 +-
tests/qemu-iotests/050.out | 2 +-
tests/qemu-iotests/051 | 2 +-
tests/qemu-iotests/051.out | 2 +-
tests/qemu-iotests/051.pc.out | 2 +-
tests/qemu-iotests/056 | 3 +-
tests/qemu-iotests/060 | 2 +-
tests/qemu-iotests/060.out | 2 +-
tests/qemu-iotests/061 | 10 ++---
tests/qemu-iotests/061.out | 11 +++---
tests/qemu-iotests/069 | 2 +-
tests/qemu-iotests/069.out | 2 +-
tests/qemu-iotests/073 | 2 +-
tests/qemu-iotests/073.out | 2 +-
tests/qemu-iotests/082 | 10 +++--
tests/qemu-iotests/082.out | 14 ++++---
tests/qemu-iotests/085 | 4 +-
tests/qemu-iotests/085.out | 6 +--
tests/qemu-iotests/089 | 2 +-
tests/qemu-iotests/089.out | 2 +-
tests/qemu-iotests/095 | 4 +-
tests/qemu-iotests/095.out | 4 +-
tests/qemu-iotests/097 | 4 +-
tests/qemu-iotests/097.out | 16 ++++----
tests/qemu-iotests/098 | 2 +-
tests/qemu-iotests/098.out | 8 ++--
tests/qemu-iotests/110 | 4 +-
tests/qemu-iotests/110.out | 4 +-
tests/qemu-iotests/114 | 12 ++++++
tests/qemu-iotests/114.out | 9 +++++
tests/qemu-iotests/122 | 27 +++++++------
tests/qemu-iotests/122.out | 8 ++--
tests/qemu-iotests/126 | 4 +-
tests/qemu-iotests/126.out | 4 +-
tests/qemu-iotests/127 | 4 +-
tests/qemu-iotests/127.out | 4 +-
tests/qemu-iotests/129 | 3 +-
tests/qemu-iotests/133 | 2 +-
tests/qemu-iotests/133.out | 2 +-
tests/qemu-iotests/139 | 2 +-
tests/qemu-iotests/141 | 4 +-
tests/qemu-iotests/141.out | 4 +-
tests/qemu-iotests/142 | 2 +-
tests/qemu-iotests/142.out | 2 +-
tests/qemu-iotests/153 | 14 +++----
tests/qemu-iotests/153.out | 35 +++++++++--------
tests/qemu-iotests/154 | 42 ++++++++++----------
tests/qemu-iotests/154.out | 42 ++++++++++----------
tests/qemu-iotests/155 | 12 ++++--
tests/qemu-iotests/156 | 9 +++--
tests/qemu-iotests/156.out | 6 +--
tests/qemu-iotests/158 | 2 +-
tests/qemu-iotests/158.out | 2 +-
tests/qemu-iotests/161 | 8 ++--
tests/qemu-iotests/161.out | 8 ++--
tests/qemu-iotests/176 | 4 +-
tests/qemu-iotests/176.out | 32 ++++++++--------
tests/qemu-iotests/177 | 2 +-
tests/qemu-iotests/177.out | 2 +-
tests/qemu-iotests/179 | 2 +-
tests/qemu-iotests/179.out | 2 +-
tests/qemu-iotests/189 | 2 +-
tests/qemu-iotests/189.out | 2 +-
tests/qemu-iotests/191 | 12 +++---
tests/qemu-iotests/191.out | 12 +++---
tests/qemu-iotests/195 | 6 +--
tests/qemu-iotests/195.out | 6 +--
tests/qemu-iotests/198 | 2 +-
tests/qemu-iotests/198.out | 3 +-
tests/qemu-iotests/204 | 2 +-
tests/qemu-iotests/204.out | 2 +-
tests/qemu-iotests/216 | 2 +-
tests/qemu-iotests/224 | 4 +-
tests/qemu-iotests/225 | 2 +-
tests/qemu-iotests/225.out | 2 +-
tests/qemu-iotests/228 | 5 ++-
tests/qemu-iotests/245 | 3 +-
tests/qemu-iotests/249 | 4 +-
tests/qemu-iotests/249.out | 4 +-
tests/qemu-iotests/252 | 2 +-
tests/qemu-iotests/257 | 3 +-
tests/qemu-iotests/267 | 4 +-
tests/qemu-iotests/267.out | 6 +--
tests/qemu-iotests/270 | 2 +-
tests/qemu-iotests/270.out | 2 +-
tests/qemu-iotests/273 | 4 +-
tests/qemu-iotests/273.out | 4 +-
tests/qemu-iotests/279 | 4 +-
tests/qemu-iotests/279.out | 4 +-
tests/qemu-iotests/290 | 72 +++++++++++++++++++++++++++++++++++
tests/qemu-iotests/290.out | 45 ++++++++++++++++++++++
tests/qemu-iotests/group | 1 +
131 files changed, 683 insertions(+), 350 deletions(-)
create mode 100755 tests/qemu-iotests/290
create mode 100644 tests/qemu-iotests/290.out
--
2.26.0.rc2
CFP: KVM Forum 2020
by Paolo Bonzini
================================================================
KVM Forum 2020: Call For Participation
October 28-30, 2020 - Convention Centre Dublin - Dublin, Ireland
(All submissions must be received before June 15, 2020 at 23:59 PST)
================================================================
KVM Forum is an annual event that presents a rare opportunity for
developers and users to meet, discuss the state of Linux virtualization
technology, and plan for the challenges ahead. This highly technical
conference unites the developers who drive KVM development and the users
who depend on KVM as part of their offerings, or to power their data
centers and clouds.
Sessions include updates on the state of the KVM virtualization stack,
planning for the future, and many opportunities for attendees to
collaborate. After more than ten years in the mainline kernel, KVM
continues to be a critical part of the FOSS cloud infrastructure. Come
join us in continuing to improve the KVM ecosystem.
We understand that these are uncertain times and we are continuously
monitoring the COVID-19/Novel Coronavirus situation. Our attendees'
safety is of the utmost importance. If we feel it is not safe to meet in
person, we will turn the event into a virtual experience. We hope to
announce this at the same time as the speaker notification. Speakers
will still be expected to attend if a physical event goes ahead, but we
understand that exceptional circumstances might arise after speakers are
accepted and we will do our best to accommodate such circumstances.
Based on these factors, we encourage you to submit and reach out to us
should you have any questions. The program committee may be contacted as
a group via email: kvm-forum-2020-pc@redhat.com.
SUGGESTED TOPICS
----------------
* Scaling, latency optimizations, performance tuning
* Hardening and security
* New features
* Testing
KVM and the Linux Kernel:
* Nested virtualization
* Resource management (CPU, I/O, memory) and scheduling
* VFIO: IOMMU, SR-IOV, virtual GPU, etc.
* Networking: Open vSwitch, XDP, etc.
* virtio and vhost
* Architecture ports and new processor features
QEMU:
* Management interfaces: QOM and QMP
* New devices, new boards, new architectures
* New storage features
* High availability, live migration and fault tolerance
* Emulation and TCG
* Firmware: ACPI, UEFI, coreboot, U-Boot, etc.
Management & Infrastructure
* Managing KVM: Libvirt, OpenStack, oVirt, KubeVirt, etc.
* Storage: Ceph, Gluster, SPDK, etc.
* Network Function Virtualization: DPDK, OPNFV, OVN, etc.
* Provisioning
SUBMITTING YOUR PROPOSAL
------------------------
Abstracts due: June 15, 2020
Please submit a short abstract (~150 words) describing your presentation
proposal. Slots vary in length up to 45 minutes.
Submit your proposal here:
https://events.linuxfoundation.org/kvm-forum/program/cfp/
Please only use the categories "presentation" and "panel discussion".
You will be notified by August 17, 2020 whether or not your
presentation proposal has been accepted.
Speakers will receive a complimentary pass for the event. In case your
submission has multiple presenters, only the primary speaker for a
proposal will receive a complimentary event pass. For panel discussions,
all panelists will receive a complimentary event pass.
TECHNICAL TALKS
A good technical talk should not just report on what has happened over
the last year; it should present a concrete problem and how it impacts
the user and/or developer community. Whenever applicable, focus on work
that needs to be done, difficulties that haven't yet been solved, and
decisions that other developers should be aware of. Summarizing recent
developments is okay but it should not be more than a small portion of
the overall talk.
END-USER TALKS
One of the big challenges we face as developers is knowing what, where,
and how people actually use our software. We will reserve a few slots
for end users to talk about their deployment challenges and achievements.
If you are using KVM in production, you are encouraged to submit a
speaking proposal; simply mark it as an end-user talk. As an end user,
this is a unique opportunity to put your input in front of developers.
PANEL DISCUSSIONS
If you are proposing a panel discussion, please make sure that you list
all of your potential panelists in your abstract. We will request
full biographies if a panel is accepted.
HANDS-ON / BOF SESSIONS
We will reserve some time for people to get together and discuss
strategic decisions as well as other topics that are best solved within
smaller groups.
These sessions will be announced during the event. If you are interested
in organizing such a session, please add it to the list at
http://www.linux-kvm.org/page/KVM_Forum_2020_BOF
Let the people who you think might be interested know about your BOF,
and encourage them to add their names to the wiki page as well. Please
add your ideas to the list before KVM Forum starts.
HOTEL / TRAVEL
--------------
This year's event will take place at the Convention Centre Dublin. For
information on hotels close to the conference, please visit
https://events.linuxfoundation.org/kvm-forum/attend/venue-travel/.
Information on conference hotel blocks will be available later in
Spring 2020.
DATES TO REMEMBER
-----------------
* CFP Opens: Monday, April 27, 2020
* CFP Closes: Monday, June 15 at 11:59 PM PST
* CFP Notifications: Monday, August 17
* Schedule Announcement: Thursday, August 20
* Slide Due Date: Monday, October 26
* Event Dates: Wednesday, October 28 – Friday, October 30
Thank you for your interest in KVM. We're looking forward to your
submissions and, if conditions permit, to seeing you at
KVM Forum 2020 in October!
-your KVM Forum 2020 Program Committee
Please contact us with any questions or comments at
kvm-forum-2020-pc@redhat.com
[PATCH v5 0/4] introduction of migration_version attribute for VFIO live migration
by Yan Zhao
This patchset introduces a migration_version attribute under sysfs of VFIO
Mediated devices.
This migration_version attribute is used to check migration compatibility
between two mdev devices.
Currently, it has two locations:
(1) under the mdev_type node,
which can be used even before device creation, but only for mdev
devices of the same mdev type.
(2) under the mdev device node,
which can only be used after the mdev devices are created, but the
source and target mdev devices need not be of the same mdev type.
(The second location is newly added in v5, in order to keep it
consistent with the migration_version node for migratable
pass-through devices.)
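As an aside for readers: below is a minimal sketch of how a management
application might consume the attribute, assuming the read-from-source,
write-to-target check style described in the series' documentation
patches (error handling trimmed, paths illustrative):

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Returns 1 if compatible, 0 if not, -1 on error. Callers pass the
 * attribute paths, e.g. /sys/bus/mdev/devices/<uuid>/migration_version
 * on both sides. */
static int mdev_migration_compatible(const char *src_attr,
                                     const char *dst_attr)
{
    char version[256];
    ssize_t len, written;
    int fd;

    fd = open(src_attr, O_RDONLY);
    if (fd < 0)
        return -1;
    len = read(fd, version, sizeof(version) - 1);
    close(fd);
    if (len <= 0)
        return -1;

    fd = open(dst_attr, O_WRONLY);
    if (fd < 0)
        return -1;
    /* The vendor driver rejects the write (with a vendor-chosen errno)
     * when the two devices cannot migrate to each other. */
    written = write(fd, version, len);
    close(fd);
    return written == len ? 1 : 0;
}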
Patch 1 defines migration_version attribute for the first location in
Documentation/vfio-mediated-device.txt
Patch 2 uses GVT as an example for patch 1 to show how to expose
migration_version attribute and check migration compatibility in vendor
driver.
Patch 3 defines migration_version attribute for the second location in
Documentation/vfio-mediated-device.txt
Patch 4 uses GVT as an example for patch 3 to show how to expose
migration_version attribute and check migration compatibility in vendor
driver.
(The previous "Reviewed-by" and "Acked-by" for patch 1 and patch 2 are
kept in v5, as there are only small changes to commit messages of the two
patches.)
v5:
added patch 2 and 4 for mdev device part of migration_version attribute.
v4:
1. fixed indentation/spell errors, reworded several error messages
2. added a missing memory free for error handling in patch 2
v3:
1. renamed version to migration_version
2. let errno be freely defined by the vendor driver
3. made checking mdev_type a prerequisite of the migration
compatibility check
4. reworded most part of patch 1
5. print detailed error log in patch 2 and generate migration_version
string at init time
v2:
1. renamed patch 1
2. made definition of device version string completely private to vendor
driver
3. reverted changes to sample mdev drivers
4. described intent and usage of version attribute more clearly.
Yan Zhao (4):
vfio/mdev: add migration_version attribute for mdev (under mdev_type
node)
drm/i915/gvt: export migration_version to mdev sysfs (under mdev_type
node)
vfio/mdev: add migration_version attribute for mdev (under mdev device
node)
drm/i915/gvt: export migration_version to mdev sysfs (under mdev
device node)
.../driver-api/vfio-mediated-device.rst | 183 ++++++++++++++++++
drivers/gpu/drm/i915/gvt/Makefile | 2 +-
drivers/gpu/drm/i915/gvt/gvt.c | 39 ++++
drivers/gpu/drm/i915/gvt/gvt.h | 7 +
drivers/gpu/drm/i915/gvt/kvmgt.c | 55 ++++++
drivers/gpu/drm/i915/gvt/migration_version.c | 170 ++++++++++++++++
drivers/gpu/drm/i915/gvt/vgpu.c | 13 +-
7 files changed, 466 insertions(+), 3 deletions(-)
create mode 100644 drivers/gpu/drm/i915/gvt/migration_version.c
--
2.17.1
[libvirt] Request for a new release of libvirt-java in Maven
by Slavka Peleva
Hello,
The Apache CloudStack project uses the Maven libvirt Java bindings in
its KVM code. The libvirt-java project recently gained QEMU methods
that avoid the need for bash scripts or Process class workarounds, and
it would be great to use those features in CloudStack.
From what I see, the previous release was requested on this list, and
there doesn't seem to be a formal procedure for this, so how should I
proceed?
Best regards and thank you in advance,
Slavka
[PATCH libvirt v1 0/6] Fix ZPCI address auto-generation on s390
by Shalini Chellathurai Saroja
The ZPCI address validation or auto-generation does not work as
expected in the following scenarios:
1. uid = 0 and fid = 0
2. uid = 0 and fid > 0
3. uid = 0 and fid not specified
4. uid not specified and fid > 0
5. 2 ZPCI devices with uid > 0 and fid not specified.
This is because of the following reasons:
1. If uid = 0 or fid = 0, the code assumes that the user has not
specified the corresponding address.
2. If either uid or fid is provided, the code assumes that both the
uid and fid addresses were specified by the user.
This patch fixes these issues.
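For reference, an explicit ZPCI address in libvirt domain XML looks
roughly like this (values are placeholders, not taken from this
series):

  <hostdev mode='subsystem' type='pci' managed='yes'>
    <source>
      <address domain='0x0000' bus='0x00' slot='0x00' function='0x0'/>
    </source>
    <address type='pci'>
      <zpci uid='0x0001' fid='0x00000000'/>
    </address>
  </hostdev>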
Shalini Chellathurai Saroja (6):
conf: fix ZPCI address validation on s390
tests: qemu: add tests for ZPCI on s390
conf: fix ZPCI address auto-generation on s390
qemu: move ZPCI uid validation into device validation
tests: qemu: add more tests for ZPCI on S390
tests: add test with PCI and CCW device
src/conf/device_conf.c | 45 ++++++--------
src/conf/domain_addr.c | 59 +++++++++++++------
src/conf/domain_conf.c | 2 +-
src/libvirt_private.syms | 4 +-
src/qemu/qemu_validate.c | 26 +++++++-
src/util/virpci.c | 23 ++------
src/util/virpci.h | 6 +-
.../hostdev-vfio-zpci-autogenerate-fids.args | 31 ++++++++++
.../hostdev-vfio-zpci-autogenerate-fids.xml | 29 +++++++++
.../hostdev-vfio-zpci-autogenerate-uids.args | 31 ++++++++++
.../hostdev-vfio-zpci-autogenerate-uids.xml | 29 +++++++++
.../hostdev-vfio-zpci-ccw-memballoon.args | 28 +++++++++
.../hostdev-vfio-zpci-ccw-memballoon.xml | 17 ++++++
.../hostdev-vfio-zpci-duplicate.xml | 30 ++++++++++
...ostdev-vfio-zpci-invalid-uid-valid-fid.xml | 21 +++++++
.../hostdev-vfio-zpci-multidomain-many.args | 6 +-
.../hostdev-vfio-zpci-set-zero.xml | 21 +++++++
.../hostdev-vfio-zpci-uid-set-zero.xml | 20 +++++++
tests/qemuxml2argvtest.c | 25 ++++++++
.../hostdev-vfio-zpci-multidomain-many.xml | 6 +-
20 files changed, 387 insertions(+), 72 deletions(-)
create mode 100644 tests/qemuxml2argvdata/hostdev-vfio-zpci-autogenerate-fids.args
create mode 100644 tests/qemuxml2argvdata/hostdev-vfio-zpci-autogenerate-fids.xml
create mode 100644 tests/qemuxml2argvdata/hostdev-vfio-zpci-autogenerate-uids.args
create mode 100644 tests/qemuxml2argvdata/hostdev-vfio-zpci-autogenerate-uids.xml
create mode 100644 tests/qemuxml2argvdata/hostdev-vfio-zpci-ccw-memballoon.args
create mode 100644 tests/qemuxml2argvdata/hostdev-vfio-zpci-ccw-memballoon.xml
create mode 100644 tests/qemuxml2argvdata/hostdev-vfio-zpci-duplicate.xml
create mode 100644 tests/qemuxml2argvdata/hostdev-vfio-zpci-invalid-uid-valid-fid.xml
create mode 100644 tests/qemuxml2argvdata/hostdev-vfio-zpci-set-zero.xml
create mode 100644 tests/qemuxml2argvdata/hostdev-vfio-zpci-uid-set-zero.xml
--
2.21.1
[PATCH v2 0/7] Add Secure Guest doc and check for capabilities cache validation
by Paulo de Rezende Pinatti
This series introduces the concept of a 'Secure Guest' feature, which
covers IBM Secure Execution on s390 and AMD Secure Encrypted
Virtualization on x86.
Besides adding documentation for IBM Secure Execution, it also adds
per-architecture checks during validation of the QEMU capabilities
cache, covering IBM Secure Execution on s390 and AMD Secure Encrypted
Virtualization on AMD x86 CPUs (both checks are implemented in this
series).
For s390 the verification consists of:
- checking if /sys/firmware/uv is available: meaning the HW
facility is available and the host OS supports it;
- checking if the kernel cmdline contains 'prot_virt=1': meaning
the host OS wants to use the feature.
For AMD Secure Encrypted Virtualization the verification consists of:
- checking if /sys/module/kvm_amd/parameters/sev contains the
value '1': meaning SEV is enabled in the host kernel;
- checking if /dev/sev exists
Whenever the availability of the feature does not match the secure
guest flag in the cache, libvirt will rebuild the cache in order to
pick up the new set of capabilities.
Additionally, this series adds the same aforementioned checks to the
virt-host-validate tool to facilitate the manual verification process
for users; both probes are sketched below.
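A minimal userspace sketch of the two probes, assuming the sysfs paths
above (note the series implements a proper kernel cmdline parameter
parser, so the strstr() here is a naive stand-in):

#include <stdbool.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static bool s390_secure_guest_available(void)
{
    char cmdline[4096] = "";
    FILE *fp;

    /* HW facility present and supported by the host kernel? */
    if (access("/sys/firmware/uv", F_OK) != 0)
        return false;

    /* Did the host opt in with prot_virt=1 on the kernel cmdline? */
    fp = fopen("/proc/cmdline", "r");
    if (!fp)
        return false;
    if (!fgets(cmdline, sizeof(cmdline), fp))
        cmdline[0] = '\0';
    fclose(fp);
    return strstr(cmdline, "prot_virt=1") != NULL;
}

static bool amd_sev_available(void)
{
    char buf[8] = "";
    FILE *fp = fopen("/sys/module/kvm_amd/parameters/sev", "r");

    if (!fp)
        return false;
    if (!fgets(buf, sizeof(buf), fp))
        buf[0] = '\0';
    fclose(fp);
    /* The parameter reads "1" when SEV is enabled in the host kernel. */
    return buf[0] == '1' && access("/dev/sev", F_OK) == 0;
}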
Changes in v2:
[Patch 1]
Reworked kernel cmdline parser into a parameter based processing.
[Patch 2]
Added missing value "on" to kvalue list
[Patch 3]
Changed the AMD SEV support check to verify that the module parameter
is set and that /dev/sev exists.
Moved doc changes to a new standalone patch 6.
[Patch 4]
Added missing value "on" to kvalue list
[Patch 5]
Changed the AMD SEV support check to align with patch 3.
Moved doc changes to a new standalone patch 6.
[Patch 6]
Summarized AMD SEV doc changes from patches 3 and 5.
Adjusted libvirt version number
[Patch 7 (v1: Patch 6)]
Adjusted libvirt version number
link to v1: https://www.redhat.com/archives/libvir-list/2020-May/msg00416.html
Boris Fiuczynski (3):
tools: secure guest check on s390 in virt-host-validate
tools: secure guest check for AMD in virt-host-validate
docs: update AMD launch secure description
Paulo de Rezende Pinatti (3):
util: introduce a parser for kernel cmdline arguments
qemu: check if s390 secure guest support is enabled
qemu: check if AMD secure guest support is enabled
Viktor Mihajlovski (1):
docs: Describe protected virtualization guest setup
docs/kbase.html.in | 3 +
docs/kbase/launch_security_sev.rst | 9 +-
docs/kbase/s390_protected_virt.rst | 189 +++++++++++++++++++++++++++++
src/libvirt_private.syms | 2 +
src/qemu/qemu_capabilities.c | 76 ++++++++++++
src/util/virutil.c | 169 ++++++++++++++++++++++++++
src/util/virutil.h | 17 +++
tests/utiltest.c | 141 +++++++++++++++++++++
tools/virt-host-validate-common.c | 83 ++++++++++++-
tools/virt-host-validate-common.h | 5 +
tools/virt-host-validate-qemu.c | 4 +
11 files changed, 693 insertions(+), 5 deletions(-)
create mode 100644 docs/kbase/s390_protected_virt.rst
--
2.25.4
[PATCH 0/5] qemu: Prefer -numa cpu over -numa node,cpus=
by Michal Privoznik
Another round of patches to catch up with the latest command line
building recommendations. This time, the winner is: -numa cpu
Thing is, the way vCPUs are assigned to NUMA nodes is not very
deterministic right now. QEMU relies on a weird mapping where threads
are iterated over first, then cores, then (possibly) dies, and then
sockets. So they came up with a better schema:
-numa cpu,node-id=N,socket-id=S,die-id=D,core-id=C,thread-id=T
where everything is specified on the command line. Long story short,
some QEMU code is copied over to keep the status quo; it was buried
deep down (include/hw/i386/topology.h). I wanted to allow users to
configure the vCPU topology too, but then decided not to, because
neither libvirt nor QEMU can really guarantee the topology will look
the same in the guest (at least in my testing with cpu
mode='host-passthrough' it didn't). Or, what would you propose? I can
post those patches as a follow-up if needed.
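To illustrate with a hypothetical 4-vCPU guest (2 sockets x 2 cores,
no SMT) split across two NUMA nodes - IDs below are illustrative, not
output of these patches:

Old style:
  -smp 4,sockets=2,cores=2,threads=1 \
  -numa node,nodeid=0,cpus=0-1 -numa node,nodeid=1,cpus=2-3

New style:
  -smp 4,sockets=2,cores=2,threads=1 \
  -numa node,nodeid=0 -numa node,nodeid=1 \
  -numa cpu,node-id=0,socket-id=0,core-id=0,thread-id=0 \
  -numa cpu,node-id=0,socket-id=0,core-id=1,thread-id=0 \
  -numa cpu,node-id=1,socket-id=1,core-id=0,thread-id=0 \
  -numa cpu,node-id=1,socket-id=1,core-id=1,thread-id=0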
Michal Prívozník (5):
qemuBuildNumaArgStr: Move vars into loops
qemuBuildNumaArgStr: Use g_autofree on @nodeBackends
qemuBuildNumaArgStr: Separate out old style of building CPU list
qemu: Prefer -numa cpu over -numa node,cpus=
virCPUDefParseXML: Parse uint using virXPathUInt
src/conf/cpu_conf.c | 14 +-
src/qemu/qemu_command.c | 168 +++++++++++++++---
.../hugepages-nvdimm.x86_64-latest.args | 4 +-
...memory-default-hugepage.x86_64-latest.args | 10 +-
.../memfd-memory-numa.x86_64-latest.args | 10 +-
...y-hotplug-nvdimm-access.x86_64-latest.args | 4 +-
...ry-hotplug-nvdimm-align.x86_64-latest.args | 4 +-
...ry-hotplug-nvdimm-label.x86_64-latest.args | 4 +-
...ory-hotplug-nvdimm-pmem.x86_64-latest.args | 4 +-
...ory-hotplug-nvdimm-ppc64.ppc64-latest.args | 4 +-
...hotplug-nvdimm-readonly.x86_64-latest.args | 4 +-
.../memory-hotplug-nvdimm.x86_64-latest.args | 4 +-
...vhost-user-fs-fd-memory.x86_64-latest.args | 4 +-
...vhost-user-fs-hugepages.x86_64-latest.args | 4 +-
...host-user-gpu-secondary.x86_64-latest.args | 3 +-
.../vhost-user-vga.x86_64-latest.args | 3 +-
16 files changed, 195 insertions(+), 53 deletions(-)
--
2.26.2
[PATCH 00/40] convert virObjects to GObject
by Rafael Fonseca
This patch series converts various simple instances of virObject to
GObject equivalents.
virLockableObject and virObjects which are subclassed will be covered
in future patchsets.
New in v3:
- merge *Dispose back into *Finalize
- rebased on top of master
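For readers unfamiliar with the recipe, a conversion of a hypothetical
type looks roughly like this (names invented for illustration; see the
individual patches for the real ones):

#include <glib-object.h>

#define VIR_TYPE_FOO vir_foo_get_type()
G_DECLARE_FINAL_TYPE(virFoo, vir_foo, VIR, FOO, GObject)

struct _virFoo {
    GObject parent;
    char *name;
};

G_DEFINE_TYPE(virFoo, vir_foo, G_TYPE_OBJECT)

static void vir_foo_finalize(GObject *obj)
{
    virFoo *foo = VIR_FOO(obj);

    /* What used to live in virFooDispose() moves into finalize. */
    g_free(foo->name);
    G_OBJECT_CLASS(vir_foo_parent_class)->finalize(obj);
}

static void vir_foo_class_init(virFooClass *klass)
{
    G_OBJECT_CLASS(klass)->finalize = vir_foo_finalize;
}

static void vir_foo_init(virFoo *foo G_GNUC_UNUSED)
{
}

Call sites then switch from virObjectUnref() to g_object_unref(), and
allocation becomes g_object_new(VIR_TYPE_FOO, NULL).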
Rafael Fonseca (40):
util: virresctrl: convert classes to GObject
conf: capabilities: convert virCaps to GOBject
qemu: convert virQEMUCaps to GObject
util: add function to unref array of GObjects
rpc: convert virNetClientProgram to GObject
rpc: convert virNetServerProgram to GObject
conf: convert virDomainXMLOption to GObject
bhyve: convert bhyveMonitor to GObject
bhyve: convert virBhyveDriverConfig to GObject
rpc: convert virNetServerService to GObject
conf: convert virDomainCapsCPUModels to GObject
util: convert dnsmasqCaps to GObject
conf: convert virDomainChrSourceDef to GObject
rpc: gendispatch: prepare for GObject conversion
admin: convert virAdmServer to GObject
admin: convert virAdmClient to GObject
datatypes: convert virDomainCheckpoint to GObject
datatypes: convert virDomainSnapshot to GObject
datatypes: convert virNWFilter to GObject
datatypes: convert virNWFilterBinding to GObject
datatypes: convert virNetwork to GObject
datatypes: convert virNetworkPort to GObject
datatypes: convert virInterface to GObject
datatypes: convert virStoragePool to GObject
datatypes: convert virStorageVol to GObject
datatypes: convert virNodeDevice to GObject
datatypes: convert virSecret to GObject
datatypes: convert virStream to GObject
datatypes: convert virDomain to GObject
rpc: gendispatch: use g_autoptr where possible
conf: convert virNetworkXMLOption to GObject
lxc: convert virLXCDriverConfig to GObject
libxl: convert libxlDriverConfig to GObject
hypervisor: convert virHostdevManager to GObject
libxl: convert libxlMigrationDstArgs to GObject
qemu: convert qemuBlockJobData to GObject
qemu: convert virQEMUDriverConfig to GObject
conf: convert virDomain*Private to GObject
conf: convert virSaveCookie to GObject
util: convert virStorageSource to GObject
src/admin/admin_remote.c | 9 +-
src/admin/libvirt-admin.c | 4 +-
src/admin/libvirt_admin_private.syms | 4 +-
src/bhyve/bhyve_capabilities.c | 19 +-
src/bhyve/bhyve_conf.c | 36 +-
src/bhyve/bhyve_driver.c | 33 +-
src/bhyve/bhyve_monitor.c | 37 +-
src/bhyve/bhyve_monitor.h | 3 +-
src/bhyve/bhyve_utils.h | 11 +-
src/conf/capabilities.c | 41 +-
src/conf/capabilities.h | 6 +-
src/conf/domain_capabilities.c | 48 +-
src/conf/domain_capabilities.h | 10 +-
src/conf/domain_conf.c | 117 ++---
src/conf/domain_conf.h | 36 +-
src/conf/domain_event.c | 58 +--
src/conf/network_conf.c | 24 +-
src/conf/network_conf.h | 12 +-
src/conf/network_event.c | 7 +-
src/conf/node_device_event.c | 11 +-
src/conf/node_device_util.c | 4 +-
src/conf/secret_event.c | 15 +-
src/conf/snapshot_conf.c | 2 +-
src/conf/snapshot_conf.h | 2 +-
src/conf/storage_capabilities.c | 4 +-
src/conf/storage_event.c | 15 +-
src/conf/virchrdev.c | 4 +-
src/conf/virconftypes.h | 3 +-
src/conf/virdomaincheckpointobjlist.c | 7 +-
src/conf/virdomainsnapshotobjlist.c | 7 +-
src/conf/virinterfaceobj.c | 5 +-
src/conf/virnetworkobj.c | 10 +-
src/conf/virnodedeviceobj.c | 3 +-
src/conf/virnwfilterbindingobjlist.c | 2 +-
src/conf/virnwfilterobj.c | 7 +-
src/conf/virsavecookie.c | 10 +-
src/conf/virsavecookie.h | 14 +-
src/conf/virsecretobj.c | 2 +-
src/conf/virstorageobj.c | 4 +-
src/datatypes.c | 658 +++++++++++++++---------
src/datatypes.h | 219 ++++----
src/esx/esx_driver.c | 33 +-
src/hyperv/hyperv_driver.c | 8 +-
src/hypervisor/virhostdev.c | 31 +-
src/hypervisor/virhostdev.h | 13 +-
src/interface/interface_backend_netcf.c | 7 +-
src/interface/interface_backend_udev.c | 8 +-
src/libvirt-domain-checkpoint.c | 7 +-
src/libvirt-domain-snapshot.c | 7 +-
src/libvirt-domain.c | 6 +-
src/libvirt-interface.c | 6 +-
src/libvirt-network.c | 14 +-
src/libvirt-nodedev.c | 6 +-
src/libvirt-nwfilter.c | 14 +-
src/libvirt-secret.c | 7 +-
src/libvirt-storage.c | 12 +-
src/libvirt-stream.c | 7 +-
src/libvirt_private.syms | 35 +-
src/libxl/libxl_capabilities.c | 19 +-
src/libxl/libxl_conf.c | 56 +-
src/libxl/libxl_conf.h | 12 +-
src/libxl/libxl_driver.c | 175 +++----
src/libxl/libxl_migration.c | 92 ++--
src/libxl/xen_common.c | 3 +-
src/locking/lock_daemon.c | 28 +-
src/locking/lock_driver_lockd.c | 19 +-
src/locking/sanlock_helper.c | 5 +-
src/logging/log_daemon.c | 28 +-
src/logging/log_manager.c | 15 +-
src/lxc/lxc_conf.c | 46 +-
src/lxc/lxc_conf.h | 10 +-
src/lxc/lxc_controller.c | 20 +-
src/lxc/lxc_driver.c | 98 ++--
src/lxc/lxc_monitor.c | 2 +-
src/lxc/lxc_process.c | 35 +-
src/network/bridge_driver.c | 24 +-
src/nwfilter/nwfilter_driver.c | 3 +-
src/openvz/openvz_conf.c | 8 +-
src/qemu/qemu_blockjob.c | 119 ++---
src/qemu/qemu_blockjob.h | 15 +-
src/qemu/qemu_capabilities.c | 165 +++---
src/qemu/qemu_capabilities.h | 9 +-
src/qemu/qemu_conf.c | 42 +-
src/qemu/qemu_conf.h | 19 +-
src/qemu/qemu_domain.c | 436 +++++++---------
src/qemu/qemu_domain.h | 157 ++++--
src/qemu/qemu_driver.c | 34 +-
src/qemu/qemu_hotplug.c | 3 +-
src/qemu/qemu_migration.c | 34 +-
src/qemu/qemu_process.c | 20 +-
src/qemu/qemu_virtiofs.c | 12 +-
src/remote/remote_daemon.c | 52 +-
src/remote/remote_daemon_dispatch.c | 188 +++----
src/remote/remote_daemon_stream.c | 6 +-
src/remote/remote_driver.c | 178 +++----
src/rpc/gendispatch.pl | 90 ++--
src/rpc/virnetclient.c | 7 +-
src/rpc/virnetclientprogram.c | 29 +-
src/rpc/virnetclientprogram.h | 10 +-
src/rpc/virnetclientstream.c | 4 +-
src/rpc/virnetserver.c | 21 +-
src/rpc/virnetserverprogram.c | 30 +-
src/rpc/virnetserverprogram.h | 17 +-
src/rpc/virnetserverservice.c | 87 ++--
src/security/virt-aa-helper.c | 9 +-
src/storage/storage_backend.c | 3 +-
src/storage/storage_driver.c | 7 +-
src/storage/storage_util.c | 4 +-
src/test/test_driver.c | 33 +-
src/util/virdnsmasq.c | 56 +-
src/util/virdnsmasq.h | 6 +-
src/util/virfdstream.c | 4 +-
src/util/virfilecache.c | 43 +-
src/util/virfilecache.h | 12 +-
src/util/virobject.c | 24 +
src/util/virobject.h | 4 +
src/util/virresctrl.c | 155 +++---
src/util/virresctrl.h | 15 +-
src/util/virsecret.c | 3 +-
src/util/virstoragefile.c | 42 +-
src/util/virstoragefile.h | 9 +-
src/vbox/vbox_common.c | 19 +-
src/vmware/vmware_conf.c | 13 +-
src/vz/vz_driver.c | 33 +-
src/vz/vz_sdk.c | 22 +-
tests/bhyveargv2xmltest.c | 4 +-
tests/bhyvexml2argvtest.c | 4 +-
tests/bhyvexml2xmltest.c | 4 +-
tests/cputest.c | 48 +-
tests/domaincapstest.c | 5 +-
tests/domainconftest.c | 4 +-
tests/genericxml2xmltest.c | 4 +-
tests/networkxml2conftest.c | 10 +-
tests/networkxml2xmltest.c | 3 +-
tests/openvzutilstest.c | 4 +-
tests/qemublocktest.c | 3 +-
tests/qemucapabilitiestest.c | 9 +-
tests/qemucaps2xmltest.c | 29 +-
tests/qemucapsprobe.c | 4 +-
tests/qemuhotplugtest.c | 6 +-
tests/qemumemlocktest.c | 3 +-
tests/testutils.c | 4 +-
tests/testutilslxc.c | 22 +-
tests/testutilsqemu.c | 31 +-
tests/testutilsxen.c | 7 +-
tests/vircaps2xmltest.c | 6 +-
tests/vircapstest.c | 14 +-
tests/virfilecachetest.c | 69 ++-
tests/virnetdaemontest.c | 4 +-
tests/virresctrltest.c | 6 +-
tests/vmx2xmltest.c | 10 +-
tests/xml2vmxtest.c | 12 +-
152 files changed, 2411 insertions(+), 2660 deletions(-)
--
2.26.2