Testing mail delivery
by dberrange@yahoo.com
We continue to have problems with mail delivery, especially to Yahoo addresses. This mail is a test to investigate delivery behaviour; please ignore this thread.
Daniel
4 years, 10 months
[libvirt-users] Connecting a VM to an existing OVS bridge
by Amir Sela
Hi,
I have an existing OVS bridge, that I can see in ovs-vsctl and use
for other purposes.
I've edited the machine's XML as instructed in
http://docs.openvswitch.org/en/latest/howto/libvirt/
When I try to start the VM, I get
error: Cannot get interface MTU on 'ovsbr': No such device
Any ideas?
(Note: I can't see the OVS switch in brctl show or any other regular
kernel tool; should it appear there?)
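To the brctl question: an OVS bridge is not a kernel bridge, so not seeing it in brctl is expected; `ovs-vsctl br-exists ovsbr` is the check that matters. A hedged sketch of the likely XML fix, per the linked howto (a missing <virtualport> element makes libvirt probe 'ovsbr' as a plain kernel bridge, which produces exactly this MTU error):

```shell
# Interface XML per the OVS howto -- the <virtualport> element is what
# tells libvirt this is an Open vSwitch bridge; without it libvirt looks
# for a kernel device named 'ovsbr' and fails with the MTU error:
cat <<'EOF'
<interface type='bridge'>
  <source bridge='ovsbr'/>
  <virtualport type='openvswitch'/>
  <model type='virtio'/>
</interface>
EOF
```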
Versions:
openvswitch-2.10.1-3.fc30.x86_64
libvirt-daemon-5.1.0-9.fc30.x86_64
Thanks!
USB-hotplugging fails with "failed to load cgroup BPF prog: Operation not permitted" on cgroups v2
by Pol Van Aubel
Hi all,
I've disabled cgroups v1 on my system with the kernel boot option
"systemd.unified_cgroup_hierarchy=1". Since doing so, USB hotplugging
fails to work, seemingly due to a permissions problem with BPF. Please
note that the technique I'm going to describe worked just fine for
hotplugging USB devices to running domains until this change.
Attaching / detaching USB devices when the domain is down still works as
expected.
I get the same error when attaching a device in virt-manager, as I do
when running the following command:
sudo virsh attach-device wenger /dev/stdin --persistent <<END
<hostdev mode='subsystem' type='usb' managed='yes'>
<source startupPolicy='optional'>
<vendor id='0x046d' />
<product id='0xc215' />
</source>
</hostdev>
END
This returns
error: Failed to attach device from /dev/stdin
error: failed to load cgroup BPF prog: Operation not permitted
virt-manager returns basically the same error, but for completeness'
sake, here it is:
failed to load cgroup BPF prog: Operation not permitted
Traceback (most recent call last):
File "/usr/share/virt-manager/virtManager/addhardware.py", line 1327, in _add_device
self.vm.attach_device(dev)
File "/usr/share/virt-manager/virtManager/object/domain.py", line 920, in attach_device
self._backend.attachDevice(devxml)
File "/usr/lib/python3.8/site-packages/libvirt.py", line 590, in attachDevice
if ret == -1: raise libvirtError ('virDomainAttachDevice() failed', dom=self)
libvirt.libvirtError: failed to load cgroup BPF prog: Operation not permitted
Now, libvirtd is running as root, so I don't understand why any
operation on BPF programs is not permitted. I've dug into libvirt's code
a bit to see what is throwing this error and it boils down to
<https://github.com/libvirt/libvirt/blob/7d608469621a3fda72dff2a89308e68cc...>
and
<https://github.com/libvirt/libvirt/blob/02bf7cc68bfc76242f02d23e73cad3661...>
but I have no clue what that syscall is doing, so that's where my
debugging capability basically ends.
Maybe this is something as simple as setting the right ACL somewhere. I
haven't touched /etc/libvirt/qemu.conf except for setting nvram. There
*is* something about cgroup_device_acl there but afaict that's for
cgroups v1, when there was still a device cgroup controller. Any help
would be greatly appreciated.
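A few hedged diagnostics that may narrow this down (bpftool and the cgroup path below are assumptions about the local setup, not something from the report):

```shell
# Confirm the unified (v2) hierarchy is really what is mounted:
stat -fc %T /sys/fs/cgroup          # prints "cgroup2fs" on a pure v2 setup

# List BPF programs already attached under the machine slice, if bpftool
# is installed (path is a guess; adjust to the domain's actual cgroup):
if command -v bpftool >/dev/null 2>&1; then
    sudo bpftool cgroup tree /sys/fs/cgroup/machine.slice 2>/dev/null
fi

# Look for an LSM or kernel-lockdown denial of the bpf() syscall:
sudo dmesg 2>/dev/null | grep -iE 'bpf|lockdown' | tail
```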
Domain log files:
Upon execution of the above commands, nothing gets added to the domain
log in /var/log/qemu/wenger.log, so I've decided they're likely
irrelevant to the issue. Please ask for any additional info required.
System information:
Arch Linux, (normal) kernel 5.4.11
libvirt 5.10.0
qemu 4.2.0, using KVM.
Host system is x86_64 on an intel 5820k.
Guest system is probably irrelevant, but is Windows 10 on the same.
Possibly relevant kernel build options:
$ zgrep BPF /proc/config.gz
CONFIG_CGROUP_BPF=y
CONFIG_BPF=y
CONFIG_BPF_SYSCALL=y
CONFIG_BPF_JIT_ALWAYS_ON=y
CONFIG_IPV6_SEG6_BPF=y
CONFIG_NETFILTER_XT_MATCH_BPF=m
# CONFIG_BPFILTER is not set
CONFIG_NET_CLS_BPF=m
CONFIG_NET_ACT_BPF=m
CONFIG_BPF_JIT=y
CONFIG_BPF_STREAM_PARSER=y
CONFIG_LWTUNNEL_BPF=y
CONFIG_HAVE_EBPF_JIT=y
CONFIG_BPF_EVENTS=y
# CONFIG_BPF_KPROBE_OVERRIDE is not set
# CONFIG_TEST_BPF is not set
Regards,
Pol Van Aubel
Prevent virsh live migration from copy zero -- increment copy on live migration possible ?
by Oliver Dzombic
Hi,
I am doing a live migration like this:
[root@nodeA ~]#virsh migrate --copy-storage-all --verbose --live kvm1776
qemu+ssh://nodeb/system
which works fine with libvirt version 6.
The ZFS-backed HDD is 100 G, but it is thin-provisioned.
The real data on it is 1 G.
So even though it is only 1 G of data, virsh seems to copy a lot of zeros.
There is a lot of HDD I/O and network activity, but the size of the
target volume does not grow (since there is no more data).
After the copy finished successfully, I tried what happens with the
--copy-storage-inc parameter:
[root@nodeB ~]# virsh migrate --copy-storage-inc --verbose --live
kvm1776 qemu+ssh://nodea/system
But that didn't change anything. Even though both volumes are now
identical, it again copied the whole (non-existing) 100 GB.
Is there any way to make libvirt copy just the real data, and not zeros
that do not actually exist but still cause network and HDD I/O traffic?
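As far as I know virsh itself has no zero-detection flag for storage migration; two hedged mitigations (guest-agent availability and the XML attributes are assumptions to verify, not a confirmed fix):

```shell
# Trim unused blocks inside the guest so the source volume really is
# sparse before the copy (needs the qemu guest agent running in the
# guest and discard enabled on the virtual disk):
virsh domfstrim kvm1776

# On the destination disk definition, discard/detect_zeroes can let QEMU
# turn incoming zero writes into unmaps rather than real writes:
#   <driver name='qemu' type='raw' cache='none' io='native'
#           discard='unmap' detect_zeroes='unmap'/>
```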
Thank you !
--
Mit freundlichen Gruessen / Best regards
Oliver Dzombic
Layer7 Networks
mailto:info@layer7.net
Anschrift:
Layer7 Networks GmbH
Zum Sonnenberg 1-3
63571 Gelnhausen
HRB 96293 beim Amtsgericht Hanau
Geschäftsführung: Oliver Dzombic
UST ID: DE259845632
Bridge-less VM
by Rob Roschewsk
I'm trying to create a free-standing VM that doesn't connect to a bridge.
This is supposedly doable according to the wiki:
https://libvirt.org/formatdomain.html#elementsNICSEthernet
But with a config similar to:
<interface type='ethernet'>
<target dev='mytap1' managed='no'/>
<model type='virtio'/>
</interface>
When starting the domain I get the error:
error: internal error: process exited while connecting to monitor:
2020-01-16T18:08:04.788860Z qemu-system-x86_64: -netdev
tap,id=hostnet0,vhost=on,vhostfd=26: could not open /dev/net/tun: Operation
not permitted
Checked permissions on /dev/net/tun and it's 666.
If I just configure it as a "bridge" connection, the domain starts. Then I
can use brctl to remove it from the bridge to get what I want. That just
proves it's possible, but with extra steps. (Shout out to Rick and Morty.)
Thoughts??
Running Ubuntu 16.04.1 Kernel 4.15.0-74
libvirt 1.3.1-1ubuntu10.27
qemu 1:2.5+dfsg-5ubuntu10.41
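One hedged workaround is to pre-create the tap device so QEMU never has to open /dev/net/tun itself; on Ubuntu it is also worth checking syslog for AppArmor DENIED lines, since the qemu profile can block the open even with 0666 permissions. The user name below is an assumption (match the user= setting in /etc/libvirt/qemu.conf):

```shell
# Pre-create the tap and hand it to the user QEMU runs as, so QEMU only
# attaches to an existing device instead of opening /dev/net/tun:
sudo ip tuntap add dev mytap1 mode tap user libvirt-qemu
sudo ip link set mytap1 up

# Then keep <target dev='mytap1' managed='no'/> in the interface XML.
```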
Thanks,
--> Rob
Volume file permissions and huge volume downloads
by R. Diez
Hi all:
I am using the libvirt version that comes with Ubuntu 18.04.3 LTS.
I want to backup a virtual machine in a foolproof way:
- Gracefully shutdown the VM.
- Backup the disk image.
- Restart the VM.
I wrote the following script to do that:
https://github.com/rdiez/Tools/blob/master/VirtualMachineManager/BackupVm.sh
Writing that script was difficult enough because of the virsh limitations (in my opinion) described in the comments. But at least it works.
The main problem is that libvirt sets the ownership and permissions of volume files in such a way that a standard user cannot access them,
even if it is a member of the 'libvirt' group.
While the VM is running, volume file permissions are like this:
-rw-r----- 1 libvirt-qemu kvm [...] /var/lib/libvirt/images/YourVmDisk.qcow2
When the VM is shutoff:
-rw-r----- 1 root root [...] /var/lib/libvirt/images/YourVmDisk.qcow2
The trouble is, I want to access that .qcow2 file when the VM is shutoff.
I would really like not to run my script as root. I could not find a way to specify the permissions for the .qcow2 files, so I tried editing
the group ownership for the whole pool:
virsh pool-edit default
<target>
<path>/var/lib/libvirt/images</path>
<permissions>
<mode>0711</mode>
<owner>0</owner>
<group>131</group>
</permissions>
</target>
Setting <group> to "libvirt" did not work (XML validation error), so I tried the numeric ID of that group (131).
That did not work. When I restart libvirt with "service libvirtd restart", my changes to the pool XML file disappear (!).
Later on, I found out that you can download a volume like this:
virsh vol-download --pool default YourVmDisk.qcow2 YourVmDisk.qcow2
The trouble is that I created the virtual disk with a maximum size of 120 GB. I copied it around a few times, so I think it has lost any
"sparseness" inside. Command "virsh vol-download" takes ages and downloads all those 120 GB. My script uses "qemu-img convert", which
discards all unused space and writes just 17 GB of data, and that is without turning compression on. The whole backup takes seconds instead
of minutes.
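Two hedged suggestions, both to be verified against this particular build. virsh gained a --sparse flag for vol-download (sparse streams), which avoids transferring the holes if the Ubuntu 18.04 build supports it; and a POSIX ACL can grant a non-root backup user read access without editing the pool XML ('backupuser' is a placeholder):

```shell
# Sparse download, if this build's vol-download supports --sparse:
virsh vol-download --sparse --pool default YourVmDisk.qcow2 YourVmDisk.qcow2

# Read access for a dedicated backup user; an ACL entry survives a plain
# chown, though libvirt's relabelling on VM start may still need testing:
sudo setfacl -m u:backupuser:r /var/lib/libvirt/images/YourVmDisk.qcow2
```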
Can somebody help me with these issues?
Thanks in advance,
rdiez
error: internal error: unable to execute QEMU command 'blockdev-mirror': Cannot find device= nor node_name=
by Oliver Dzombic
Hi,
I am trying to test live migration using this command:
#virsh migrate --copy-storage-all --verbose --live kvm1776
qemu+ssh://nodeb/system
Instant error message:
error: internal error: unable to execute QEMU command 'blockdev-mirror':
Cannot find device=drive-sata0-0-0 nor node_name=drive-sata0-0-0
It's backed by ZFS.
Source:
Virsh command line tool of libvirt 5.10.0
See web site at https://libvirt.org/
Compiled with support for:
Hypervisors: QEMU/KVM LXC LibXL OpenVZ VMware PHYP VirtualBox ESX
Hyper-V Test
Networking: Remote Network Bridging Interface netcf Nwfilter VirtualPort
Storage: Dir Disk Filesystem SCSI Multipath iSCSI LVM RBD Sheepdog
Gluster ZFS
Miscellaneous: Daemon Nodedev SELinux Secrets Debug DTrace Readline
Target:
Virsh command line tool of libvirt 6.0.0
See web site at https://libvirt.org/
Compiled with support for:
Hypervisors: QEMU/KVM LXC LibXL OpenVZ VMware VirtualBox ESX Hyper-V Test
Networking: Remote Network Bridging Interface netcf Nwfilter VirtualPort
Storage: Dir Disk Filesystem SCSI Multipath iSCSI LVM RBD Sheepdog
Gluster ZFS
Miscellaneous: Daemon Nodedev SELinux Secrets Debug DTrace Readline
Both hosts run Fedora 31.
The XML File:
<domain type='kvm' id='292'>
<name>kvm1776</name>
<uuid>a4668f0b-4a71-4956-b74a-a0d3af5fe1f8</uuid>
<metadata>
<libosinfo:libosinfo
xmlns:libosinfo="http://libosinfo.org/xmlns/libvirt/domain/1.0">
<libosinfo:os id="http://debian.org/debian/10"/>
</libosinfo:libosinfo>
</metadata>
<memory unit='KiB'>8192000</memory>
<currentMemory unit='KiB'>8192000</currentMemory>
<blkiotune>
<device>
<path>/dev/kvm-storage/465-9fe4507b-c232-47b3-9b3b-f885359449c6</path>
<read_iops_sec>500</read_iops_sec>
<write_iops_sec>500</write_iops_sec>
<read_bytes_sec>51200000</read_bytes_sec>
<write_bytes_sec>51200000</write_bytes_sec>
</device>
</blkiotune>
<vcpu placement='static' current='4'>32</vcpu>
<resource>
<partition>/machine</partition>
</resource>
<os>
<type arch='x86_64' machine='pc-q35-4.2'>hvm</type>
<boot dev='hd'/>
</os>
<features>
<acpi/>
<apic/>
</features>
<cpu mode='custom' match='exact' check='full'>
<model fallback='forbid'>IvyBridge-IBRS</model>
<vendor>Intel</vendor>
<feature policy='require' name='ss'/>
<feature policy='require' name='vmx'/>
<feature policy='require' name='pcid'/>
<feature policy='require' name='hypervisor'/>
<feature policy='require' name='arat'/>
<feature policy='require' name='tsc_adjust'/>
<feature policy='require' name='umip'/>
<feature policy='require' name='md-clear'/>
<feature policy='require' name='stibp'/>
<feature policy='require' name='arch-capabilities'/>
<feature policy='require' name='ssbd'/>
<feature policy='require' name='xsaveopt'/>
<feature policy='require' name='pdpe1gb'/>
<feature policy='require' name='ibpb'/>
<feature policy='require' name='amd-ssbd'/>
<feature policy='require' name='skip-l1dfl-vmentry'/>
</cpu>
<clock offset='utc'>
<timer name='rtc' tickpolicy='catchup'/>
<timer name='pit' tickpolicy='delay'/>
<timer name='hpet' present='no'/>
</clock>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<pm>
<suspend-to-mem enabled='no'/>
<suspend-to-disk enabled='no'/>
</pm>
<devices>
<emulator>/usr/bin/qemu-system-x86_64</emulator>
<disk type='block' device='disk'>
<driver name='qemu' type='raw' cache='none' io='native'/>
<source
dev='/dev/kvm-storage/465-9fe4507b-c232-47b3-9b3b-f885359449c6' index='2'/>
<backingStore/>
<target dev='sda' bus='sata'/>
<alias name='sata0-0-0'/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<disk type='file' device='cdrom'>
<driver name='qemu'/>
<target dev='sdb' bus='sata'/>
<readonly/>
<alias name='sata0-0-1'/>
<address type='drive' controller='0' bus='0' target='0' unit='1'/>
</disk>
<controller type='usb' index='0' model='qemu-xhci' ports='15'>
<alias name='usb'/>
<address type='pci' domain='0x0000' bus='0x03' slot='0x00'
function='0x0'/>
</controller>
<controller type='sata' index='0'>
<alias name='ide'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x1f'
function='0x2'/>
</controller>
<controller type='pci' index='0' model='pcie-root'>
<alias name='pcie.0'/>
</controller>
<controller type='pci' index='1' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='1' port='0x10'/>
<alias name='pci.1'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02'
function='0x0' multifunction='on'/>
</controller>
<controller type='pci' index='2' model='pcie-to-pci-bridge'>
<model name='pcie-pci-bridge'/>
<alias name='pci.2'/>
<address type='pci' domain='0x0000' bus='0x01' slot='0x00'
function='0x0'/>
</controller>
<controller type='pci' index='3' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='3' port='0x11'/>
<alias name='pci.3'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02'
function='0x1'/>
</controller>
<controller type='pci' index='4' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='4' port='0x12'/>
<alias name='pci.4'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02'
function='0x2'/>
</controller>
<controller type='pci' index='5' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='5' port='0x13'/>
<alias name='pci.5'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02'
function='0x3'/>
</controller>
<controller type='pci' index='6' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='6' port='0x14'/>
<alias name='pci.6'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02'
function='0x4'/>
</controller>
<controller type='pci' index='7' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='7' port='0x15'/>
<alias name='pci.7'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02'
function='0x5'/>
</controller>
<controller type='pci' index='8' model='pcie-root-port'>
<model name='pcie-root-port'/>
<target chassis='8' port='0x16'/>
<alias name='pci.8'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02'
function='0x6'/>
</controller>
<controller type='virtio-serial' index='0'>
<alias name='virtio-serial0'/>
<address type='pci' domain='0x0000' bus='0x05' slot='0x00'
function='0x0'/>
</controller>
<interface type='bridge'>
<mac address='16:c7:25:d4:23:14'/>
<source network='Public Network'
portid='7291b4be-07a7-49ad-b6ce-b1a0593c96c3' bridge='ovsbr'/>
<virtualport type='openvswitch'>
<parameters interfaceid='a9ddd087-71ca-456f-bb7b-c2848d11a5a6'/>
</virtualport>
<target dev='k1776-5nEsy'/>
<model type='rtl8139'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x02' slot='0x01'
function='0x0'/>
</interface>
<serial type='pty'>
<source path='/dev/pts/19'/>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
<alias name='serial0'/>
</serial>
<console type='pty' tty='/dev/pts/19'>
<source path='/dev/pts/19'/>
<target type='serial' port='0'/>
<alias name='serial0'/>
</console>
<channel type='unix'>
<source mode='bind'
path='/var/lib/libvirt/qemu/channel/target/domain-292-kvm1776/org.qemu.guest_agent.0'/>
<target type='virtio' name='org.qemu.guest_agent.0'
state='disconnected'/>
<alias name='channel0'/>
<address type='virtio-serial' controller='0' bus='0' port='1'/>
</channel>
<input type='tablet' bus='usb'>
<alias name='input0'/>
<address type='usb' bus='0' port='1'/>
</input>
<input type='mouse' bus='ps2'>
<alias name='input1'/>
</input>
<input type='keyboard' bus='ps2'>
<alias name='input2'/>
</input>
<graphics type='vnc' port='5917' autoport='yes' listen='0.0.0.0'
keymap='en-us'>
<listen type='address' address='0.0.0.0'/>
</graphics>
<video>
<model type='qxl' ram='65536' vram='65536' vgamem='16384'
heads='1' primary='yes'/>
<alias name='video0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01'
function='0x0'/>
</video>
<memballoon model='virtio'>
<alias name='balloon0'/>
<address type='pci' domain='0x0000' bus='0x06' slot='0x00'
function='0x0'/>
</memballoon>
<rng model='virtio'>
<backend model='random'>/dev/urandom</backend>
<alias name='rng0'/>
<address type='pci' domain='0x0000' bus='0x07' slot='0x00'
function='0x0'/>
</rng>
</devices>
<seclabel type='dynamic' model='dac' relabel='yes'>
<label>+107:+107</label>
<imagelabel>+107:+107</imagelabel>
</seclabel>
</domain>
----
Any idea? I could not find anything about this error.
Thank you very much !
--
Mit freundlichen Gruessen / Best regards
Oliver Dzombic
Layer7 Networks
mailto:info@layer7.net
Anschrift:
Layer7 Networks GmbH
Zum Sonnenberg 1-3
63571 Gelnhausen
HRB 96293 beim Amtsgericht Hanau
Geschäftsführung: Oliver Dzombic
UST ID: DE259845632
Reload domain configuration on guest restart
by Gionatan Danti
Hi all,
as you surely know, after changing some domain parameters via libvirt
(e.g. the NIC type), the guest needs to be shut down and restarted. A simple
reboot will not be sufficient, as libvirt will not launch a new qemu
domain (i.e. the same qemu process will be in charge of starting the new
guest instance).
Is it possible to configure libvirt to start a new qemu domain on guest
reboot? Can the <on_reboot> domain attribute be used to implement
something similar?
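I am not aware of a libvirt setting that forces a fresh qemu process on a guest-initiated reboot; <on_reboot>destroy</on_reboot> would terminate the domain on reboot, but libvirt will not start it again by itself, so it needs external tooling anyway. A hedged scripted alternative ('guest1' is a placeholder domain name):

```shell
# Full shutdown/start cycle instead of an in-place reboot, so a brand-new
# qemu process picks up the changed configuration:
virsh shutdown guest1
while virsh domstate guest1 | grep -q running; do sleep 1; done
virsh start guest1
```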
Thanks.
--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti(a)assyoma.it - info(a)assyoma.it
GPG public key ID: FF5F32A8
libvirt-python: issue on fedora
by Matthias Tafelmeier
Hello,
I ran into an odd glitch on a Fedora 30 cloud image with tooling based on
libvirt-python on Python 3.7. qemu-img *is* installed, though. Could anyone
have a look?
Build reference, hope it works:
https://travis-ci.com/cherusk/godon/builds/144068483?utm_medium=notificat...
Essential excerpt:
-----
Image fedora30 Added
Adding a profile named fedora30 with default values
+ kcli list image
+--------------------------------------------+
| Images |
+--------------------------------------------+
| /srv/Fedora-Cloud-Base-30-1.2.x86_64.qcow2 |
+--------------------------------------------+
+ kcli create plan -f /opt/infra/machines.yml micro_fedora
Deploying Networks...
Network vm_net deployed
Deploying Vms...
Traceback (most recent call last):
File "/usr/bin/kcli", line 11, in <module>
load_entry_point('kcli==99.0', 'console_scripts', 'kcli')()
File "/usr/lib/python3.7/site-packages/kvirt/cli.py", line 2482, in cli
args.func(args)
File "/usr/lib/python3.7/site-packages/kvirt/cli.py", line 1071, in
create_plan
overrides=overrides, wait=wait)
File "/usr/lib/python3.7/site-packages/kvirt/config.py", line 1345, in plan
plan=plan, basedir=currentplandir, client=vmclient, onfly=onfly,
planmode=True)
File "/usr/lib/python3.7/site-packages/kvirt/config.py", line 610, in
create_vm
pcidevices=pcidevices)
File "/usr/lib/python3.7/site-packages/kvirt/kvm/__init__.py", line 893,
in create
storagepool.createXML(volxml, 0)
File "/usr/lib64/python3.7/site-packages/libvirt.py", line 3417, in
createXML
if ret is None:raise libvirtError('virStorageVolCreateXML() failed',
pool=self)
libvirt.libvirtError: internal error: creation of non-raw images is not
supported without qemu-img
-----
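That error is libvirt reporting that it could not find qemu-img when asked to create a non-raw volume, so a hedged first check is whether the binary is actually visible to the daemon (not just to your shell), and whether libvirtd was restarted after qemu-img was installed:

```shell
command -v qemu-img          # should print /usr/bin/qemu-img on Fedora
rpm -q qemu-img              # confirm the package is really installed
# If qemu-img appeared only after libvirtd started, restart the daemon
# so it re-detects the tool:
sudo systemctl restart libvirtd
```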
--
Regards
Matthias Tafelmeier
[libvirt-users] FYI: intention to remove mail subject prefix & footer text
by Daniel P. Berrangé
Hi List Subscribers,
In recent months we have been seeing an increasing number of bounced
deliveries from libvirt mailing lists[1] due to DMARC policies on list
subscriber's mail servers. IOW, many subscribers are only receiving
a subset of mails sent to the libvirt mailing lists.
We believe the root cause of many of the problems is that mailman is
modifying the mail subject to add the "[libvirt]" / "[libvirt-users]"
prefix, and modifying the mail body to add the footer with links to
the listinfo page.
These modifications invalidate the DKIM signatures on mails sent to
the list by some of our subscribers. This in turn causes DMARC policy
rejections by the destination SMTP servers when mailman delivers
messages.
The solution is to disable any feature in mailman which modifies
parts of the mail validated by the DKIM signature. This means removing
the subject prefix and the mail body footer. Further information on
this approach can be seen here:
https://begriffs.com/posts/2018-09-18-dmarc-mailing-list.html
QEMU made the same change on their mailing lists last year:
https://lists.gnu.org/archive/html/qemu-devel/2019-09/msg00416.html
If you are currently doing mail filtering / sorting based on the subject
prefix, you will need to change to use the List-Id header instead.
I will wait until the latter part of next week before making this change
to allow people time to adapt any filters.
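For anyone reworking filters, a minimal sketch of matching on List-Id instead of the subject prefix (the exact List-Id value below is an assumption; check the headers of any recent list mail for the real one):

```shell
# Match list mail by the stable List-Id header rather than the
# "[libvirt-users]" subject prefix:
printf 'Subject: hello\nList-Id: <libvirt-users.redhat.com>\n' \
  | grep -qi '^List-Id:.*libvirt-users' && echo "list mail"
```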
Regards,
Daniel
[1] technically moderators have only been seeing bounces for messages on
libvirt-users-list, but that's because we've got mailman configured
to send libvir-list bounces to /dev/null.
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|