[libvirt PATCH] vmx: Don't error out on missing filename for cdrom
by Martin Kletzander
A missing filename is perfectly valid in VMware; the VM simply boots with an
empty drive. We used to skip the whole drive in that case, but since we changed
how empty cdrom drives are parsed, this now results in an error and the user
cannot even dump the XML. Instead of erroring out, just keep the drive empty.
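With the fix, such a drive parses as an empty cdrom, roughly like the following
domain XML (an illustrative sketch, not output from the patch; the target
device and bus depend on the VMX config):

<disk type='file' device='cdrom'>
  <target dev='hda' bus='ide'/>
</disk>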
Resolves: https://bugzilla.redhat.com/show_bug.cgi?id=1903953
Signed-off-by: Martin Kletzander <mkletzan@redhat.com>
---
src/vmx/vmx.c | 14 +++++++++++---
1 file changed, 11 insertions(+), 3 deletions(-)
diff --git a/src/vmx/vmx.c b/src/vmx/vmx.c
index b86dbe9ca267..40e4ef962992 100644
--- a/src/vmx/vmx.c
+++ b/src/vmx/vmx.c
@@ -2447,10 +2447,18 @@ virVMXParseDisk(virVMXContext *ctx, virDomainXMLOptionPtr xmlopt, virConfPtr con
                 goto cleanup;
             }
 
+            tmp = ctx->parseFileName(fileName, ctx->opaque);
             virDomainDiskSetType(*def, VIR_STORAGE_TYPE_FILE);
-            if (!(tmp = ctx->parseFileName(fileName, ctx->opaque)))
-                goto cleanup;
-            virDomainDiskSetSource(*def, tmp);
+            /* It is easily possible to have a cdrom with non-existing filename
+             * as the image and vmware just provides an empty cdrom.
+             *
+             * See: https://bugzilla.redhat.com/1903953
+             */
+            if (tmp) {
+                virDomainDiskSetSource(*def, tmp);
+            } else {
+                virResetLastError();
+            }
             VIR_FREE(tmp);
         } else if (deviceType && STRCASEEQ(deviceType, "atapi-cdrom")) {
             virDomainDiskSetType(*def, VIR_STORAGE_TYPE_BLOCK);
--
2.29.2
Hotplugging disk not adding backing layers to apparmor profile
by Russell Cattelan
We have been working on a feature at IBM Cloud around snapshots.
One of the workflows is to add a snapshotted disk to a running virtual
instance. This involves adding a disk that has at minimum 2 qcow2 files:
one for the active overlay and one or more backing files.
The problem we are running into is that the dynamic update of the
apparmor profile appears to add only the first file in the chain to the
profile.
Based on some experiments, it appears that this code should be adding
all the files in the chain to the security profile, but it only handles
the first (topmost) file, "disk->src":
https://gitlab.com/libvirt/libvirt/-/blob/a7db0b757d210071d39e6d116e6a4bc...
It does not appear to loop over the backing chain, whereas
qemuBlockStorageSourceChainAttach does:
https://gitlab.com/libvirt/libvirt/-/blob/a7db0b757d210071d39e6d116e6a4bc...
The attach then fails, since apparmor rejects access to the backing
files.
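For comparison, a hypothetical sketch of the loop shape we would expect,
modeled on how qemuBlockStorageSourceChainAttach iterates (reload_profile
is borrowed from the AppArmor driver, but the signature used here is
illustrative, not the actual security_apparmor.c code):

/* Label every layer of the backing chain, not just the topmost
 * image in disk->src. Illustrative only. */
virStorageSourcePtr src;

for (src = disk->src; src != NULL; src = src->backingStore) {
    if (!src->path)  /* e.g. the empty chain terminator */
        continue;
    if (reload_profile(mgr, def, src->path, true) < 0)
        return -1;
}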
This is fairly easy to demonstrate when apparmor is active.
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2'/>
  <source file='/mnt2/hotplug2.qcow2' index='1'/>
  <backingStore type='file' index='2'>
    <format type='qcow2'/>
    <source file='/mnt2/hotplug1.qcow2'/>
    <backingStore/>
  </backingStore>
  <target dev='vdc' bus='virtio'/>
</disk>
virsh attach-device test1 /mnt2/attach.xml
[535657.524784] audit: type=1400 audit(1608242451.762:79):
apparmor="DENIED" operation="open"
profile="libvirt-a7fd0ca2-1429-4a60-9ab4-a545660666ce"
name="/mnt2/hotplug1.qcow2" pid=11999 comm="qemu-system-x86"
requested_mask="r" denied_mask="r" fsuid=64055 ouid=64055
-Russell Cattelan
[libvirt PATCH 00/29] Refactor scripts in tests/cputestdata
by Tim Wiederhake
This series refactors the various scripts found in tests/cputestdata and
adds support for the CORE_CAPABILITY MSR, as found on e.g. SnowRidge.
Acquiring test data on a new system is a two-step process: "cpu-gather.sh"
gathers information on the target machine and has as few dependencies as
possible; "cpu-parse.sh" processes this information, but requires access to
a libvirt source tree and has more dependencies, e.g. "xmltodict".
This series merges three of the four involved scripts (cpu-gather.sh,
cpu-parse.sh and cpu-reformat.py) into a single python3 script. python3
was already a dependency for cpu-gather.sh, and care has been taken not to
depend on modules that are not installed by default [1]. Merging the fourth
script, cpu-cpuid.py, will come in a separate series.
Patches 1 to 14 transform cpu-gather into a python script, preserving the
format of the output (except for consistent "\n" line endings; previously
the tool would output a mix of "\n" and "\r\n").
Patches 15 to 23 merge cpu-parse into the script. In this process, the
format of the intermediary data is changed to json.
Patches 24 to 29 add support for "all in one" operation and extracting
IA32_CORE_CAPABILITY_MSR, which can be found on e.g. SnowRidge CPUs.
Old usage:

  ./cpu-gather.sh | ./cpu-parse.sh

New usage:

  ./cpu-gather.py [--gather] | ./cpu-gather.py --parse

Alternative on a single machine:

  ./cpu-gather.py --gather --parse
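The two halves can also be chained across machines, e.g. over ssh
(hypothetical host name; this assumes cpu-gather.py has been copied to
the target):

  ssh target-host ./cpu-gather.py | ./cpu-gather.py --parse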
[1] https://docs.python.org/3/py-modindex.html
Tim Wiederhake (29):
cpu-cpuid: Shorten overly long line
cpu-gather: Create python wrapper for shell script
cpu-gather: Move model_name to new script
cpu-gather: Allow overwriting model name
cpu-gather: Move cpuid call to new script
cpu-gather: Allow overwriting cpuid binary location
cpu-gather: Move msr decoding to new script
cpu-gather: Move qemu detection to new script
cpu-gather: Move static model expansion to new script
cpu-gather: Move static model extraction to new script
cpu-gather: Move simple model extraction to new script
cpu-gather: Move full model extraction to new script
cpu-gather: Merge model gathering logic
cpu-gather: Delete old script
cpu-gather: Separate data input and output
cpu-parse: Wrap with python script
cpu-gather: Transport data as json
cpu-parse: Move model name detection to new script
cpu-parse: Move file name generation to new script
cpu-parse: Move xml output to new script
cpu-parse: Move json output to new script
cpu-parse: Move call to cpu-cpuid.py to new script
cpu-parse: Delete old script
cpu-gather: Ignore empty responses from qemu
cpu-gather: Ignore shutdown messages from qemu
cpu-gather: Parse cpuid leaves early
cpu-gather: Allow gathering and parsing data in one step.
cpu-gather: Prepare gather_msr for reading multiple msr
cpu-gather: Add IA32_CORE_CAPABILITY_MSR
tests/cputestdata/cpu-cpuid.py | 5 +-
tests/cputestdata/cpu-gather.py | 365 ++++++++++++++++++++++++++++++
tests/cputestdata/cpu-gather.sh | 103 ---------
tests/cputestdata/cpu-parse.sh | 65 ------
tests/cputestdata/cpu-reformat.py | 9 -
5 files changed, 368 insertions(+), 179 deletions(-)
create mode 100755 tests/cputestdata/cpu-gather.py
delete mode 100755 tests/cputestdata/cpu-gather.sh
delete mode 100755 tests/cputestdata/cpu-parse.sh
delete mode 100755 tests/cputestdata/cpu-reformat.py
--
2.26.2
[libvirt PATCH 0/2] Schema fixes for virsh [hypervisor-]cpu-compare
by Tim Wiederhake
See individual commit messages for more details.
Tim Wiederhake (2):
schemas: Deduplicate cpuTopology in cputypes.rng
schema: Allow counter element in host cpu definition
docs/schemas/cputypes.rng | 15 +++++++++------
1 file changed, 9 insertions(+), 6 deletions(-)
--
2.26.2
[libvirt][PATCH v1 0/3] introduce 'restrictive' mode in numatune
by Luyao Zhong
Before this patch set, numatune has only three memory modes:
strict, interleave and preferred. These memory policies are
ultimately set via the mbind() system call.
A memory policy could also be 'hard coded' into the kernel, and in that
case none of the above policies fits our requirement. mbind() supports
the default memory policy, but it requires a NULL nodemask, so in this
case setting the allowed memory nodes is clearly a job for cgroups.
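A minimal sketch of that constraint (illustrative, not code from this
patch set):

#include <numaif.h>   /* link with -lnuma */
#include <stdio.h>

/* Request the kernel's default policy for a range. MPOL_DEFAULT only
 * accepts an empty (NULL) nodemask, so the allowed-nodes restriction
 * cannot be expressed here and must be enforced by cpuset cgroups. */
static int set_default_policy(void *addr, unsigned long len)
{
    if (mbind(addr, len, MPOL_DEFAULT, NULL, 0, 0) < 0) {
        perror("mbind");
        return -1;
    }
    return 0;
}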
So we introduce a new option for mode in numatune named 'restrictive'.
<numatune>
  <memory mode="restrictive" nodeset="1-4,^3"/>
  <memnode cellid="0" mode="restrictive" nodeset="1"/>
  <memnode cellid="2" mode="restrictive" nodeset="2"/>
</numatune>
The config above means we only use cgroups to restrict the allowed
memory nodes and do not set any specific memory policy explicitly.
RFC discussion:
https://www.redhat.com/archives/libvir-list/2020-November/msg01256.html
Regards,
Luyao
Luyao Zhong (3):
docs: add docs for 'restrictive' option for mode in numatune
schema: add 'restrictive' config option for mode in numatune
qemu: add parser and formatter for 'restrictive' mode in numatune
docs/formatdomain.rst | 7 +++-
docs/schemas/domaincommon.rng | 2 +
include/libvirt/libvirt-domain.h | 1 +
src/conf/numa_conf.c | 9 +++++
src/qemu/qemu_command.c | 6 ++-
src/qemu/qemu_process.c | 27 +++++++++++++
src/util/virnuma.c | 3 ++
.../numatune-memnode-invalid-mode.err | 1 +
.../numatune-memnode-invalid-mode.xml | 33 +++++++++++++++
...emnode-restrictive-mode.x86_64-latest.args | 40 +++++++++++++++++++
.../numatune-memnode-restrictive-mode.xml | 33 +++++++++++++++
tests/qemuxml2argvtest.c | 2 +
...memnode-restrictive-mode.x86_64-latest.xml | 40 +++++++++++++++++++
tests/qemuxml2xmltest.c | 1 +
14 files changed, 202 insertions(+), 3 deletions(-)
create mode 100644 tests/qemuxml2argvdata/numatune-memnode-invalid-mode.err
create mode 100644 tests/qemuxml2argvdata/numatune-memnode-invalid-mode.xml
create mode 100644 tests/qemuxml2argvdata/numatune-memnode-restrictive-mode.x86_64-latest.args
create mode 100644 tests/qemuxml2argvdata/numatune-memnode-restrictive-mode.xml
create mode 100644 tests/qemuxml2xmloutdata/numatune-memnode-restrictive-mode.x86_64-latest.xml
--
2.25.4
[PATCH 0/5] Followup to virNetDevGenerateName() patches
by Laine Stump
A few issues came up during review of that series that were better
fixed in follow-up cleanups than by requiring yet another round of
review. (A couple of them I noticed only after the other patches were
already pushed.)
Laine Stump (5):
util: fix tap device name auto-generation for FreeBSD
bhyve: remove redundant code that adds "template" netdev name
qemu: remove redundant code that adds "template" netdev name
util: simplify virNetDevMacVLanCreateWithVPortProfile()
util: minor comment/formatting changes to virNetDevTapCreate()
src/bhyve/bhyve_command.c | 7 ----
src/qemu/qemu_interface.c | 18 +++--------
src/util/virnetdevmacvlan.c | 64 ++++++-------------------------------
src/util/virnetdevtap.c | 46 +++++++-------------------
4 files changed, 25 insertions(+), 110 deletions(-)
--
2.28.0