[libvirt] [PATCH v3 0/6] BaselineHypervisorCPU using QEMU QMP exchanges
by Chris Venteicher
Some architectures (S390) depend on QEMU to compute a baseline CPU model and to
expand a model's feature set.
Interacting with QEMU requires starting a QEMU process and completing one or
more query-cpu-model-baseline QMP exchanges with QEMU, as well as a
query-cpu-model-expansion QMP exchange to expand all features in the model.
See "s390x CPU models: exposing features" patch set on Qemu-devel for discussion
of QEMU aspects.
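As a rough sketch of the exchange on the QMP monitor (model names and properties
below are placeholders, not output from a real host):
  -> { "execute": "query-cpu-model-baseline",
       "arguments": { "modela": { "name": "z14" },
                      "modelb": { "name": "z13" } } }
  <- { "return": { "model": { "name": "z13-base", "props": { "msa": true } } } }
  -> { "execute": "query-cpu-model-expansion",
       "arguments": { "type": "full",
                      "model": { "name": "z13-base", "props": { "msa": true } } } }
  <- { "return": { "model": { "name": "z13-base",
                              "props": { "msa": true, "ldisp": true } } } }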
This is part of resolution of: https://bugzilla.redhat.com/show_bug.cgi?id=1511999
Version 3 attempts to address all issues from V1 and V2, including making the
QEMU process activation for QMP queries generic (not specific to capabilities)
and moving that code from qemu_capabilities to qemu_process.
I attempted to make the new qemu_process functions consistent with the existing
functions, but mostly ended up reusing the guts of the previous functions from
qemu_capabilities.
Of interest may be the choice to reuse a domain structure to hold the QMP query
process state and connection information. Also, note the use of a large
random number to uniquely identify decoupled processes in terms of sockets and
file system footprint. If you believe it is worth the effort, I think there are some
pre-existing library functions to establish files with unique identifiers in a
thread-safe way.
The last patch may also be interesting; it is based on past discussion of QEMU CPU
expansion only returning migratable features, except for one x86 case where
non-migratable features are explicitly requested. The patch records that features
are migratable based on QEMU only returning migratable features. The main result
is that the CPU XML for S390 CPUs contains the same migratability info that x86
currently does. The test cases were updated to reflect the change. Is this OK?
Unlike the previous versions every patch should compile independently if applied
in sequence.
Thanks,
Chris
Chris Venteicher (6):
qemu_monitor: Introduce qemuMonitorCPUModelInfoNew
qemu_monitor: Introduce qemuMonitorCPUModelInfo / JSON conversion
qemu_monitor: qemuMonitorGetCPUModelExpansion inputs and outputs
CPUModelInfo
qemu_process: Use common processes mgmt funcs for all QMP query types
qemu_driver: BaselineHypervisorCPU support for S390
qemu_monitor: Default props to migratable when expanding cpu model
src/qemu/qemu_capabilities.c | 559 ++++++++----------
src/qemu/qemu_capabilities.h | 4 +
src/qemu/qemu_driver.c | 143 ++++-
src/qemu/qemu_monitor.c | 199 ++++++-
src/qemu/qemu_monitor.h | 29 +-
src/qemu/qemu_monitor_json.c | 210 +++++--
src/qemu/qemu_monitor_json.h | 13 +-
src/qemu/qemu_process.c | 398 +++++++++++++
src/qemu/qemu_process.h | 35 ++
tests/cputest.c | 11 +-
.../caps_2.10.0.s390x.xml | 60 +-
.../caps_2.11.0.s390x.xml | 58 +-
.../caps_2.12.0.s390x.xml | 56 +-
.../qemucapabilitiesdata/caps_2.8.0.s390x.xml | 32 +-
.../qemucapabilitiesdata/caps_2.9.0.s390x.xml | 34 +-
.../qemucapabilitiesdata/caps_3.0.0.s390x.xml | 64 +-
tests/qemucapabilitiestest.c | 7 +
17 files changed, 1375 insertions(+), 537 deletions(-)
--
2.17.1
[libvirt] [RFC] cgroup settings and systemd daemon-reload conflict
by Nikolay Shirokovskiy
Hi, all.
It turns out that systemd daemon-reload resets settings that are manageable
through the 'systemctl set-property' interface.
> virsh schedinfo tst3 | grep global_quota
global_quota : -1
> virsh schedinfo tst3 --set global_quota=50000 | grep global_quota
global_quota : 50000
> systemctl daemon-reload
> virsh schedinfo tst3 | grep global_quota
global_quota : -1
This behaviour is not limited to the cpu controller; the same happens for blkio, for example.
I checked different versions of systemd (219 from Feb 2015, and the quite recent 236 from Dec 2017)
to make sure this is not some bug of an old version. So systemd does not play well
with direct writes to cgroup parameters that are manageable through systemd. It looks like
libvirtd needs to use systemd's D-Bus interface to change all such parameters.
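For comparison, a value set through systemd's own interface does survive daemon-reload
(the unit name and value below are only illustrative, and whether CPUQuota maps exactly
onto schedinfo's global_quota is a separate question):
> systemctl set-property --runtime machine-qemu\x2d1\x2dtst3.scope CPUQuota=50%
> systemctl daemon-reload
> systemctl show machine-qemu\x2d1\x2dtst3.scope -p CPUQuotaPerSecUSec
CPUQuotaPerSecUSec=500ms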
I only wonder how this could go unnoticed for such a long time (creating the cgroup for a domain
through systemd dates back to Jul 2013), as daemon-reload is called upon libvirtd package update.
Maybe I am missing something?
Nikolay
[libvirt] question about syntax of storage volume <target> element
by Jim Fehlig
I've attempted to use virt-manager to create a new VM that uses a volume from an
rbd-based network pool, but have not been able to progress past step 4/5 where
VM storage is selected. It appears virt-manager has problems properly detecting
the volume as network-based storage, but before investigating those further I
have a question about the syntax of the <target> element of a storage volume.
The storage management page [0] of the website describing rbd volumes claims
that the <path> subelement contains an 'rbd:rbd/' prefix in the volume path. But
the page describing pool and volume format [1] syntax does not contain any info
wrt specifying network URLs in the <path> subelement.
What is the expectation wrt the <path> subelement of the <target> element within
rbd volume config? In general, should the <path> subelement encode the scheme
(e.g. rbd://) of a network-based volume? And if so, should it use the
traditional 'rbd://' form or the 'rbd:rbd/' form?
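For concreteness, the two spellings I'm asking about would look something like this
in the volume XML ('myvol' and the 'rbd' pool name are placeholders; I'm not asserting
either form is correct):
  <target>
    <path>rbd:rbd/myvol</path>
  </target>
versus
  <target>
    <path>rbd://rbd/myvol</path>
  </target>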
Regards,
Jim
[0] https://libvirt.org/storage.html#StorageBackendRBD
[1] https://libvirt.org/formatstorage.html#StorageVolTarget
[libvirt] [PATCH 0/8] qemu: Fix disk hotplug/media change regression
by Peter Krempa
A few of my recent patches (see the first two reverts) introduced a
regression when adding disks. The disk alias is needed when creating
names of certain backend objects (secrets/TLS), but the code preparing those
was moved prior to alias formatting.
Revert those patches and fix the issue in a different way.
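In pattern form, the end result is simply to restore this ordering (the helper names
below are made up for illustration, not the actual functions touched by this series):
  /* Assign the alias first ... */
  if (assignDiskAlias(vm, disk) < 0)
      return -1;
  /* ... because the names of secret/TLS backend objects are derived from it. */
  if (prepareDiskBackendObjects(vm, disk) < 0)
      return -1;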
Peter Krempa (8):
Revert "qemu: hotplug: Prepare disk source in
qemuDomainAttachDeviceDiskLive"
Revert "qemu: hotplug: consolidate media change code paths"
qemu: hotplug: Use new source when preparing/translating for media
change
qemu: hotplug: Prepare disk source for media changing
qemu: hotplug: Add wrapper for disk hotplug code
qemu: conf: Export qemuAddSharedDisk
qemu: hotplug: Split out media change code from disk hotplug
qemu: hotplug: Refactor qemuDomainAttachDeviceDiskLiveInternal
src/qemu/qemu_conf.c | 2 +-
src/qemu/qemu_conf.h | 5 ++
src/qemu/qemu_driver.c | 7 +-
src/qemu/qemu_hotplug.c | 188 ++++++++++++++++++++++------------------
src/qemu/qemu_hotplug.h | 9 +-
tests/qemuhotplugtest.c | 2 +-
6 files changed, 123 insertions(+), 90 deletions(-)
--
2.17.1
[libvirt] [PATCH 0/2] cpu_map: Add Icelake CPU models
by Jiri Denemark
The second patch mentions that the ospke feature was removed in QEMU 3.0;
see
http://lists.nongnu.org/archive/html/qemu-devel/2018-09/msg02113.html
for a discussion about how to deal with removed CPU features.
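For reference, a feature entry in src/cpu_map/x86_features.xml has roughly this shape
(illustrative only; see the patch for the actual encodings):
  <feature name='ospke'>
    <cpuid eax_in='0x07' ecx_in='0x00' ecx='0x00000010'/>
  </feature>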
Jiri Denemark (2):
cpu_map: Add features for Icelake CPUs
cpu_map: Add Icelake CPU models
src/cpu_map/x86_Icelake-Client.xml | 85 +++++++++++++++++
src/cpu_map/x86_Icelake-Server.xml | 95 +++++++++++++++++++
src/cpu_map/x86_features.xml | 33 +++++++
.../x86_64-cpuid-Core-i5-6600-guest.xml | 1 +
.../x86_64-cpuid-Core-i5-6600-host.xml | 1 +
.../x86_64-cpuid-Core-i7-5600U-arat-guest.xml | 1 +
.../x86_64-cpuid-Core-i7-5600U-arat-host.xml | 1 +
.../x86_64-cpuid-Core-i7-5600U-guest.xml | 1 +
.../x86_64-cpuid-Core-i7-5600U-host.xml | 1 +
.../x86_64-cpuid-Core-i7-5600U-ibrs-guest.xml | 1 +
.../x86_64-cpuid-Core-i7-5600U-ibrs-host.xml | 1 +
.../x86_64-cpuid-Core-i7-7700-guest.xml | 1 +
.../x86_64-cpuid-Core-i7-7700-host.xml | 1 +
.../x86_64-cpuid-Xeon-E3-1245-v5-guest.xml | 1 +
.../x86_64-cpuid-Xeon-E3-1245-v5-host.xml | 1 +
.../x86_64-cpuid-Xeon-E5-2623-v4-guest.xml | 1 +
.../x86_64-cpuid-Xeon-E5-2623-v4-host.xml | 1 +
.../x86_64-cpuid-Xeon-E5-2650-v4-guest.xml | 1 +
.../x86_64-cpuid-Xeon-E5-2650-v4-host.xml | 1 +
.../x86_64-cpuid-Xeon-Gold-5115-guest.xml | 1 +
.../x86_64-cpuid-Xeon-Gold-5115-host.xml | 1 +
.../x86_64-cpuid-Xeon-Gold-6148-guest.xml | 1 +
.../x86_64-cpuid-Xeon-Gold-6148-host.xml | 1 +
23 files changed, 233 insertions(+)
create mode 100644 src/cpu_map/x86_Icelake-Client.xml
create mode 100644 src/cpu_map/x86_Icelake-Server.xml
--
2.19.0
[libvirt] [PATCH] tests: reintroduce tests for libxl's legacy nested setting
by Jim Fehlig
The preferred location for setting the nested CPU flag changed in
Xen 4.10 and is advertised via the LIBXL_HAVE_BUILDINFO_NESTED_HVM
define. Commit 95d19cd0 changed libxl to use the new preferred
location but unconditionally changed the tests, causing 'make check'
failures against Xen < 4.10, which does not contain the new location.
Commit e94415d5 fixed the failures by only running the tests when
LIBXL_HAVE_BUILDINFO_NESTED_HVM is defined. Since libvirt supports
several versions of Xen that use the old nested location, it is
prudent to test that the flag is set correctly. This patch reintroduces
the tests for the legacy location of the nested setting.
Signed-off-by: Jim Fehlig <jfehlig(a)suse.com>
---
We could probably get by with one test for the old nested location,
in which case I'd drop vnuma-hvm-legacy-nest. Any opinions on that?
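For context, the difference between the two locations boils down to something like
this (a sketch only, not the exact code in libxl_conf.c; 'hasNestedHVM' is a placeholder):
  #ifdef LIBXL_HAVE_BUILDINFO_NESTED_HVM
      libxl_defbool_set(&b_info->nested_hvm, hasNestedHVM);
  #else
      libxl_defbool_set(&b_info->u.hvm.nested_hvm, hasNestedHVM);
  #endif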
.../fullvirt-cpuid-legacy-nest.json | 60 ++++++
.../fullvirt-cpuid-legacy-nest.xml | 34 ++++
.../vnuma-hvm-legacy-nest.json | 178 ++++++++++++++++++
.../vnuma-hvm-legacy-nest.xml | 100 ++++++++++
tests/libxlxml2domconfigtest.c | 3 +
5 files changed, 375 insertions(+)
diff --git a/tests/libxlxml2domconfigdata/fullvirt-cpuid-legacy-nest.json b/tests/libxlxml2domconfigdata/fullvirt-cpuid-legacy-nest.json
new file mode 100644
index 0000000000..cdc8b9867d
--- /dev/null
+++ b/tests/libxlxml2domconfigdata/fullvirt-cpuid-legacy-nest.json
@@ -0,0 +1,60 @@
+{
+ "c_info": {
+ "type": "hvm",
+ "name": "XenGuest2",
+ "uuid": "c7a5fdb2-cdaf-9455-926a-d65c16db1809"
+ },
+ "b_info": {
+ "max_vcpus": 1,
+ "avail_vcpus": [
+ 0
+ ],
+ "max_memkb": 592896,
+ "target_memkb": 403456,
+ "shadow_memkb": 5656,
+ "cpuid": [
+ {
+ "leaf": 1,
+ "ecx": "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx0",
+ "edx": "xxxxxxxxxxxxxxxxxxxxxxxxxxx1xxxx"
+ }
+ ],
+ "sched_params": {
+ },
+ "type.hvm": {
+ "pae": "True",
+ "apic": "True",
+ "acpi": "True",
+ "nested_hvm": "False",
+ "nographic": "True",
+ "vnc": {
+ "enable": "False"
+ },
+ "sdl": {
+ "enable": "False"
+ },
+ "spice": {
+
+ },
+ "boot": "c",
+ "rdm": {
+
+ }
+ },
+ "arch_arm": {
+
+ }
+ },
+ "disks": [
+ {
+ "pdev_path": "/dev/HostVG/XenGuest2",
+ "vdev": "hda",
+ "backend": "phy",
+ "format": "raw",
+ "removable": 1,
+ "readwrite": 1
+ }
+ ],
+ "on_reboot": "restart",
+ "on_crash": "restart"
+}
diff --git a/tests/libxlxml2domconfigdata/fullvirt-cpuid-legacy-nest.xml b/tests/libxlxml2domconfigdata/fullvirt-cpuid-legacy-nest.xml
new file mode 100644
index 0000000000..4f06db0714
--- /dev/null
+++ b/tests/libxlxml2domconfigdata/fullvirt-cpuid-legacy-nest.xml
@@ -0,0 +1,34 @@
+<domain type='xen'>
+ <name>XenGuest2</name>
+ <uuid>c7a5fdb2-cdaf-9455-926a-d65c16db1809</uuid>
+ <memory unit='KiB'>592896</memory>
+ <currentMemory unit='KiB'>403456</currentMemory>
+ <vcpu placement='static'>1</vcpu>
+ <os>
+ <type arch='x86_64' machine='xenfv'>hvm</type>
+ </os>
+ <features>
+ <acpi/>
+ <apic/>
+ <pae/>
+ </features>
+ <cpu mode='host-passthrough'>
+ <feature policy='forbid' name='pni'/>
+ <feature policy='forbid' name='vmx'/>
+ <feature policy='require' name='tsc'/>
+ </cpu>
+ <clock offset='variable' adjustment='0' basis='utc'/>
+ <on_poweroff>destroy</on_poweroff>
+ <on_reboot>restart</on_reboot>
+ <on_crash>restart</on_crash>
+ <devices>
+ <disk type='block' device='disk'>
+ <driver name='phy' type='raw'/>
+ <source dev='/dev/HostVG/XenGuest2'/>
+ <target dev='hda' bus='ide'/>
+ <address type='drive' controller='0' bus='0' target='0' unit='0'/>
+ </disk>
+ <input type='mouse' bus='ps2'/>
+ <input type='keyboard' bus='ps2'/>
+ </devices>
+</domain>
diff --git a/tests/libxlxml2domconfigdata/vnuma-hvm-legacy-nest.json b/tests/libxlxml2domconfigdata/vnuma-hvm-legacy-nest.json
new file mode 100644
index 0000000000..3b2fc5f40f
--- /dev/null
+++ b/tests/libxlxml2domconfigdata/vnuma-hvm-legacy-nest.json
@@ -0,0 +1,178 @@
+{
+ "c_info": {
+ "type": "hvm",
+ "name": "test-hvm",
+ "uuid": "2147d599-9cc6-c0dc-92ab-4064b5446e9b"
+ },
+ "b_info": {
+ "max_vcpus": 6,
+ "avail_vcpus": [
+ 0,
+ 1,
+ 2,
+ 3,
+ 4,
+ 5
+ ],
+ "vnuma_nodes": [
+ {
+ "memkb": 2097152,
+ "distances": [
+ 10,
+ 21,
+ 31,
+ 41,
+ 51,
+ 61
+ ],
+ "vcpus": [
+ 0
+ ]
+ },
+ {
+ "memkb": 2097152,
+ "distances": [
+ 21,
+ 10,
+ 21,
+ 31,
+ 41,
+ 51
+ ],
+ "vcpus": [
+ 1
+ ]
+ },
+ {
+ "memkb": 2097152,
+ "distances": [
+ 31,
+ 21,
+ 10,
+ 21,
+ 31,
+ 41
+ ],
+ "vcpus": [
+ 2
+ ]
+ },
+ {
+ "memkb": 2097152,
+ "distances": [
+ 41,
+ 31,
+ 21,
+ 10,
+ 21,
+ 31
+ ],
+ "vcpus": [
+ 3
+ ]
+ },
+ {
+ "memkb": 2097152,
+ "distances": [
+ 51,
+ 41,
+ 31,
+ 21,
+ 10,
+ 21
+ ],
+ "vcpus": [
+ 4
+ ]
+ },
+ {
+ "memkb": 2097152,
+ "distances": [
+ 61,
+ 51,
+ 41,
+ 31,
+ 21,
+ 10
+ ],
+ "vcpus": [
+ 5
+ ]
+ }
+ ],
+ "max_memkb": 1048576,
+ "target_memkb": 1048576,
+ "video_memkb": 8192,
+ "shadow_memkb": 14336,
+ "device_model_version": "qemu_xen",
+ "device_model": "/bin/true",
+ "sched_params": {
+
+ },
+ "type.hvm": {
+ "pae": "True",
+ "apic": "True",
+ "acpi": "True",
+ "nested_hvm": "True",
+ "vga": {
+ "kind": "cirrus"
+ },
+ "vnc": {
+ "enable": "True",
+ "listen": "0.0.0.0",
+ "findunused": "False"
+ },
+ "sdl": {
+ "enable": "False"
+ },
+ "spice": {
+
+ },
+ "boot": "c",
+ "rdm": {
+
+ }
+ },
+ "arch_arm": {
+
+ }
+ },
+ "disks": [
+ {
+ "pdev_path": "/var/lib/xen/images/test-hvm.img",
+ "vdev": "hda",
+ "backend": "qdisk",
+ "format": "raw",
+ "removable": 1,
+ "readwrite": 1
+ }
+ ],
+ "nics": [
+ {
+ "devid": 0,
+ "mac": "00:16:3e:66:12:b4",
+ "bridge": "br0",
+ "script": "/etc/xen/scripts/vif-bridge",
+ "nictype": "vif_ioemu"
+ }
+ ],
+ "vfbs": [
+ {
+ "devid": -1,
+ "vnc": {
+ "enable": "True",
+ "listen": "0.0.0.0",
+ "findunused": "False"
+ },
+ "sdl": {
+ "enable": "False"
+ }
+ }
+ ],
+ "vkbs": [
+ {
+ "devid": -1
+ }
+ ],
+ "on_reboot": "restart"
+}
diff --git a/tests/libxlxml2domconfigdata/vnuma-hvm-legacy-nest.xml b/tests/libxlxml2domconfigdata/vnuma-hvm-legacy-nest.xml
new file mode 100644
index 0000000000..6e265e31a9
--- /dev/null
+++ b/tests/libxlxml2domconfigdata/vnuma-hvm-legacy-nest.xml
@@ -0,0 +1,100 @@
+<domain type='xen'>
+ <name>test-hvm</name>
+ <description>None</description>
+ <uuid>2147d599-9cc6-c0dc-92ab-4064b5446e9b</uuid>
+ <memory>1048576</memory>
+ <currentMemory>1048576</currentMemory>
+ <vcpu>6</vcpu>
+ <on_poweroff>destroy</on_poweroff>
+ <on_reboot>restart</on_reboot>
+ <on_crash>destroy</on_crash>
+ <clock offset='utc'/>
+ <os>
+ <type>hvm</type>
+ <loader>/usr/lib/xen/boot/hvmloader</loader>
+ <boot dev='hd'/>
+ </os>
+ <features>
+ <apic/>
+ <acpi/>
+ <pae/>
+ </features>
+ <cpu mode='host-passthrough'>
+ <numa>
+ <cell id='0' cpus='0' memory='2097152' unit='KiB'>
+ <distances>
+ <sibling id='0' value='10'/>
+ <sibling id='1' value='21'/>
+ <sibling id='2' value='31'/>
+ <sibling id='3' value='41'/>
+ <sibling id='4' value='51'/>
+ <sibling id='5' value='61'/>
+ </distances>
+ </cell>
+ <cell id='1' cpus='1' memory='2097152' unit='KiB'>
+ <distances>
+ <sibling id='0' value='21'/>
+ <sibling id='1' value='10'/>
+ <sibling id='2' value='21'/>
+ <sibling id='3' value='31'/>
+ <sibling id='4' value='41'/>
+ <sibling id='5' value='51'/>
+ </distances>
+ </cell>
+ <cell id='2' cpus='2' memory='2097152' unit='KiB'>
+ <distances>
+ <sibling id='0' value='31'/>
+ <sibling id='1' value='21'/>
+ <sibling id='2' value='10'/>
+ <sibling id='3' value='21'/>
+ <sibling id='4' value='31'/>
+ <sibling id='5' value='41'/>
+ </distances>
+ </cell>
+ <cell id='3' cpus='3' memory='2097152' unit='KiB'>
+ <distances>
+ <sibling id='0' value='41'/>
+ <sibling id='1' value='31'/>
+ <sibling id='2' value='21'/>
+ <sibling id='3' value='10'/>
+ <sibling id='4' value='21'/>
+ <sibling id='5' value='31'/>
+ </distances>
+ </cell>
+ <cell id='4' cpus='4' memory='2097152' unit='KiB'>
+ <distances>
+ <sibling id='0' value='51'/>
+ <sibling id='1' value='41'/>
+ <sibling id='2' value='31'/>
+ <sibling id='3' value='21'/>
+ <sibling id='4' value='10'/>
+ <sibling id='5' value='21'/>
+ </distances>
+ </cell>
+ <cell id='5' cpus='5' memory='2097152' unit='KiB'>
+ <distances>
+ <sibling id='0' value='61'/>
+ <sibling id='1' value='51'/>
+ <sibling id='2' value='41'/>
+ <sibling id='3' value='31'/>
+ <sibling id='4' value='21'/>
+ <sibling id='5' value='10'/>
+ </distances>
+ </cell>
+ </numa>
+ </cpu>
+ <devices>
+ <emulator>/bin/true</emulator>
+ <disk type='file' device='disk'>
+ <driver name='qemu'/>
+ <source file='/var/lib/xen/images/test-hvm.img'/>
+ <target dev='hda'/>
+ </disk>
+ <interface type='bridge'>
+ <source bridge='br0'/>
+ <mac address='00:16:3e:66:12:b4'/>
+ <script path='/etc/xen/scripts/vif-bridge'/>
+ </interface>
+ <graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0'/>
+ </devices>
+</domain>
diff --git a/tests/libxlxml2domconfigtest.c b/tests/libxlxml2domconfigtest.c
index 22f9c2c871..cf2132563e 100644
--- a/tests/libxlxml2domconfigtest.c
+++ b/tests/libxlxml2domconfigtest.c
@@ -212,6 +212,9 @@ mymain(void)
# ifdef LIBXL_HAVE_BUILDINFO_NESTED_HVM
DO_TEST("vnuma-hvm");
DO_TEST("fullvirt-cpuid");
+# else
+ DO_TEST("vnuma-hvm-legacy-nest");
+ DO_TEST("fullvirt-cpuid-legacy-nest");
# endif
--
2.18.0
[libvirt] [PATCH] security: dac: also label listen UNIX sockets
by Ján Tomko
We switched to opening mode='bind' sockets ourselves:
commit 30fb2276d88b275dc2aad6ddd28c100d944b59a5
qemu: support passing pre-opened UNIX socket listen FD
in v4.5.0-rc1~251
Then fixed qemuBuildChrChardevStr to change libvirtd's label
while creating the socket:
commit b0c6300fc42bbc3e5eb0b236392f7344581c5810
qemu: ensure FDs passed to QEMU for chardevs have correct SELinux labels
v4.5.0-rc1~52
Also add labeling of these sockets to the DAC driver.
Instead of trying to figure out which one was created by libvirt,
label it if it exists.
https://bugzilla.redhat.com/show_bug.cgi?id=1633389
Signed-off-by: Ján Tomko <jtomko(a)redhat.com>
---
src/security/security_dac.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/src/security/security_dac.c b/src/security/security_dac.c
index 62442745dd..da4a6c72fe 100644
--- a/src/security/security_dac.c
+++ b/src/security/security_dac.c
@@ -1308,7 +1308,12 @@ virSecurityDACSetChardevLabel(virSecurityManagerPtr mgr,
break;
case VIR_DOMAIN_CHR_TYPE_UNIX:
- if (!dev_source->data.nix.listen) {
+ if (!dev_source->data.nix.listen ||
+ (dev_source->data.nix.path &&
+ virFileExists(dev_source->data.nix.path))) {
+ /* Also label mode='bind' sockets if they exist,
+ * e.g. because they were created by libvirt
+ * and passed via FD */
if (virSecurityDACSetOwnership(mgr, NULL,
dev_source->data.nix.path,
user, group) < 0)
--
2.16.4
[libvirt] [PATCH] uml: Fix weird logic inside umlConnectOpen() function.
by Julio Faracco
The pointer related to uml_driver needs to be checked before its usage
inside the function. Some attributes of the driver are being accessed
while the pointer is NULL considering the current logic.
Signed-off-by: Julio Faracco <jcfaracco(a)gmail.com>
---
src/uml/uml_driver.c | 14 +++++++-------
1 file changed, 7 insertions(+), 7 deletions(-)
diff --git a/src/uml/uml_driver.c b/src/uml/uml_driver.c
index fcd468243e..d1c71d8521 100644
--- a/src/uml/uml_driver.c
+++ b/src/uml/uml_driver.c
@@ -1193,6 +1193,13 @@ static virDrvOpenStatus umlConnectOpen(virConnectPtr conn,
{
virCheckFlags(VIR_CONNECT_RO, VIR_DRV_OPEN_ERROR);
+ /* URI was good, but driver isn't active */
+ if (uml_driver == NULL) {
+ virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
+ _("uml state driver is not active"));
+ return VIR_DRV_OPEN_ERROR;
+ }
+
/* Check path and tell them correct path if they made a mistake */
if (uml_driver->privileged) {
if (STRNEQ(conn->uri->path, "/system") &&
@@ -1211,13 +1218,6 @@ static virDrvOpenStatus umlConnectOpen(virConnectPtr conn,
}
}
- /* URI was good, but driver isn't active */
- if (uml_driver == NULL) {
- virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
- _("uml state driver is not active"));
- return VIR_DRV_OPEN_ERROR;
- }
-
if (virConnectOpenEnsureACL(conn) < 0)
return VIR_DRV_OPEN_ERROR;
--
2.17.1
[libvirt] [PATCH 0/7] Various Coverity based concerns
by John Ferlan
I'm sure it will be felt that one or two of these could just be false positives,
but I already have 35-40 true false positives, and at least these seem to rise
above the noise on the channel.
Perhaps the most difficult one to immediately see is the libxl
refcnt patch. It involves a little bit of theory and had been
in my noise pile for a while, until I noticed that @args is
being added in a loop to a callback function that just Unrefs
it when done. So if there was more than one IP address, all
sorts of fun things could happen. Without any change, the Alloc
is matched by the Unref, but with the change we add a Ref to
match each Unref in the I/O loop, and we just ensure the Unref
is done for the path that put @args into the I/O callback.
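In pattern form, the fix amounts to something like this (helper names are made up
for illustration; the real code is in libxl_migration.c):
  for (i = 0; i < nipaddrs; i++) {
      virObjectRef(args);                /* one reference per callback, since each callback Unrefs */
      if (registerIOCallback(ipaddrs[i], args) < 0) {
          virObjectUnref(args);          /* registration failed, drop that reference */
          goto error;
      }
  }
  virObjectUnref(args);                  /* drop the reference taken at allocation once the
                                            callbacks own their own references */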
I also think the nwfilter patch was "interesting", insomuch as
it has my "favorite" 'if (int-value) {' condition, IOW if
not zero, then do something. What became "interesting" is that
virNWFilterIPAddrMapDelIPAddr could return -1 if the
virHashLookup on @req->binding->portdevname returned NULL,
so "shrinking" the code to only call the instantiation
when an IP address was found resolves a couple of
issues in the code.
John Ferlan (7):
lxc: Only check @nparams in lxcDomainBlockStatsFlags
libxl: Fix possible object refcnt issue
tests: Inline a sysconf call for linuxCPUStatsToBuf
util: Data overrun may lead to divide by zero
tests: Alter logic in testCompareXMLToDomConfig
tests: Use STRNEQ_NULLABLE
nwfilter: Alter virNWFilterSnoopReqLeaseDel logic
src/libxl/libxl_migration.c | 4 ++--
src/lxc/lxc_driver.c | 2 +-
src/nwfilter/nwfilter_dhcpsnoop.c | 9 ++++-----
src/util/virutil.c | 11 +++++------
tests/commandtest.c | 4 ++--
tests/libxlxml2domconfigtest.c | 11 +++++------
tests/virhostcputest.c | 12 ++++++++++--
7 files changed, 29 insertions(+), 24 deletions(-)
--
2.17.1
[libvirt] [PATCH v2 0/8] libxl: PVHv2 support
by Marek Marczykowski-Górecki
This is a respin of my old PVHv1 patch[1], converted to PVHv2.
Should the code use the "PVH" name (as libxl does internally), or "PVHv2" as in
many places in the Xen documentation? I've chosen the former, but want to confirm
that choice.
Also, I am not sure about VIR_DOMAIN_OSTYPE_XENPVH (as discussed on the PVHv1 patch):
while it would be messy in many cases, there is
libxl_domain_build_info.u.{hvm,pv,pvh}, which it would be good not to mess up.
Also, PVHv2 needs different kernel features than PV (CONFIG_XEN_PVH vs
CONFIG_XEN_PV), so keeping the same VIR_DOMAIN_OSTYPE_XEN could be
confusing.
On the other hand, libxl_domain_build_info.u.pv is used in very few places (one
section of libxlMakeDomBuildInfo), so guarding u.hvm access with
VIR_DOMAIN_OSTYPE_HVM may be enough.
For now I have reused VIR_DOMAIN_OSTYPE_XEN; in the driver itself, most of
the code is the same as for PV.
Since PVHv2 relies on features in newer Xen versions, I also needed to convert
some older code. For example, b_info->u.hvm.nested_hvm was deprecated in favor
of b_info->nested_hvm. While the code does handle both old and new versions
(obviously refusing PVHv2 if Xen is too old), this is not the case for the tests.
How should that be handled, if at all?
First few preparatory patches can be applied independently.
[1] https://www.redhat.com/archives/libvir-list/2016-August/msg00376.html
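For reference, the xl side that patches 7 and 8 parse is simply (an illustrative
fragment; the kernel path and sizes are placeholders):
  type = "pvh"
  kernel = "/boot/vmlinuz-guest"
  memory = 1024
  vcpus = 2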
Changes in v2:
- drop "docs: don't refer to deprecated 'linux' ostype in example" patch -
migrating further away from "linux" os type is offtopic to this series and
apparently is a controversial thing
- drop "docs: update domain schema for machine attribute" patch -
already applied
- apply review comments from Jim
- rebase on master
Marek Marczykowski-Górecki (8):
docs: add documentation of arch element of capabilities.xml
libxl: set shadow memory for any guest type, not only HVM
libxl: prefer new location of nested_hvm in libxl_domain_build_info
libxl: reorder libxlMakeDomBuildInfo for upcoming PVH support
libxl: add support for PVH
tests: add basic Xen PVH test
xenconfig: add support for parsing type= xl config entry
xenconfig: add support for type="pvh"
docs/formatcaps.html.in | 22 ++++-
docs/formatdomain.html.in | 11 +-
docs/schemas/domaincommon.rng | 1 +-
src/libxl/libxl_capabilities.c | 33 ++++++-
src/libxl/libxl_conf.c | 81 ++++++++++++-----
src/libxl/libxl_driver.c | 6 +-
src/xenconfig/xen_common.c | 27 +++++-
src/xenconfig/xen_xl.c | 5 +-
tests/libxlxml2domconfigdata/basic-pv.json | 1 +-
tests/libxlxml2domconfigdata/basic-pvh.json | 49 ++++++++++-
tests/libxlxml2domconfigdata/basic-pvh.xml | 28 ++++++-
tests/libxlxml2domconfigdata/fullvirt-cpuid.json | 2 +-
tests/libxlxml2domconfigdata/multiple-ip.json | 1 +-
tests/libxlxml2domconfigdata/vnuma-hvm.json | 2 +-
tests/libxlxml2domconfigtest.c | 1 +-
tests/testutilsxen.c | 3 +-
tests/xlconfigdata/test-fullvirt-type.cfg | 21 ++++-
tests/xlconfigdata/test-fullvirt-type.xml | 27 ++++++-
tests/xlconfigdata/test-paravirt-type.cfg | 13 +++-
tests/xlconfigdata/test-paravirt-type.xml | 25 +++++-
tests/xlconfigdata/test-pvh-type.cfg | 13 +++-
tests/xlconfigdata/test-pvh-type.xml | 25 +++++-
tests/xlconfigtest.c | 3 +-
23 files changed, 359 insertions(+), 41 deletions(-)
create mode 100644 tests/libxlxml2domconfigdata/basic-pvh.json
create mode 100644 tests/libxlxml2domconfigdata/basic-pvh.xml
create mode 100644 tests/xlconfigdata/test-fullvirt-type.cfg
create mode 100644 tests/xlconfigdata/test-fullvirt-type.xml
create mode 100644 tests/xlconfigdata/test-paravirt-type.cfg
create mode 100644 tests/xlconfigdata/test-paravirt-type.xml
create mode 100644 tests/xlconfigdata/test-pvh-type.cfg
create mode 100644 tests/xlconfigdata/test-pvh-type.xml
base-commit: 16858439deaec0832de61c5ddb93d8e80adccf6c
--
git-series 0.9.1