Distinguishing between host and guest initiated VM shutdown
by Milan Zamazal
Hi,
We have a problem in oVirt where highly available VMs don't restart after
a host poweroff, because Vdsm identifies the case as a user-initiated
shutdown (https://bugzilla.redhat.com/1800966).
When poweroff is run on the host, the libvirt-guests service takes
action: `virsh shutdown' is run on the VM, the guest OS is shut down
cleanly, and libvirt reports a shutdown event with the
VIR_DOMAIN_EVENT_SHUTDOWN_GUEST detail, even though it is actually a
host-initiated shutdown.
Does libvirt provide any means to distinguish this case from a regular
user shutdown?
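For reference, this is roughly how the event reaches us (a minimal
python-libvirt sketch; the URI and the printing callback are
illustrative, not our actual Vdsm code):

import libvirt

def lifecycle_cb(conn, dom, event, detail, opaque):
    if event == libvirt.VIR_DOMAIN_EVENT_SHUTDOWN:
        # The detail arrives as GUEST even when libvirt-guests ran
        # `virsh shutdown' from the host.
        if detail == libvirt.VIR_DOMAIN_EVENT_SHUTDOWN_GUEST:
            print("%s: shutdown reported as guest initiated" % dom.name())
        elif detail == libvirt.VIR_DOMAIN_EVENT_SHUTDOWN_HOST:
            print("%s: shutdown reported as host initiated" % dom.name())

libvirt.virEventRegisterDefaultImpl()
conn = libvirt.open('qemu:///system')
conn.domainEventRegisterAny(None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE,
                            lifecycle_cb, None)
while True:
    libvirt.virEventRunDefaultImpl()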
Thanks,
Milan
support for live migration with PCI passthrough devices
by Henry lol
Hi guys,
I'm wondering whether libvirt supports live migration of a VM with PCI
passthrough devices, or whether it is assumed that all passthrough
devices are unplugged before live migration. If so, do all unplugged
devices have to be manually hot-plugged back into the VM after the
migration? (A sketch of the workflow I mean is below.)
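To make the question concrete, this is the workflow I imagine being
necessary (a rough python-libvirt sketch; the domain name, destination
URI, and hostdev XML are made-up examples):

import libvirt

# Hypothetical passthrough device, as it would appear in the domain XML.
HOSTDEV_XML = """
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
</hostdev>
"""

src = libvirt.open('qemu:///system')
dom = src.lookupByName('vm1')

# 1. Hot-unplug the passthrough device from the running guest.
dom.detachDeviceFlags(HOSTDEV_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)

# 2. Live-migrate without it.
dst = libvirt.open('qemu+ssh://dest.example.com/system')
new_dom = dom.migrate(dst, libvirt.VIR_MIGRATE_LIVE, None, None, 0)

# 3. Hot-plug an equivalent device back on the destination host.
new_dom.attachDeviceFlags(HOSTDEV_XML, libvirt.VIR_DOMAIN_AFFECT_LIVE)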
Thanks.
python-libvirt domain.destroy() doesn't appear to be working for me
by Jeremy Markle
I'm using the python-libvirt library and finding that I cannot get
.destroy() or .shutdown() to work.
https://github.com/simora/docker-libvirt-flask/blob/cba6041b47bdf4ccb3b95...
That is the line in my code. .create() works fine, and using virsh in
the Docker container to destroy the domain works fine as well.
This is based on Ubuntu bionic in a Docker container, with the libvirt
socket and such made available. .create() works, listing works, and
virsh works, so permissions are not the issue. I'm confident I have
either failed to use the function properly or am missing the
appropriate logging.
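For comparison, here is the minimal sequence I would expect to work (a
sketch; 'mydomain' is a placeholder name):

import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('mydomain')

# destroy() forcibly terminates the domain; shutdown() only sends an
# ACPI request that the guest may ignore. Both raise libvirtError on
# failure rather than failing silently, so wrapping them helps surface
# what is going wrong.
try:
    dom.destroy()
except libvirt.libvirtError as e:
    print('destroy failed:', e)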
Any assistance would be greatly appreciated.
couple of questions
by Vjaceslavs Klimovs
Hey folks,
I've been experimenting with native NBD live migration with TLS and
have a couple of questions.
1) It appears that in some cases a modified default_tls_x509_cert_dir
from qemu.conf is not respected; virsh seems to always expect the
default location and not to check default_tls_x509_cert_dir:

virsh # migrate vm1 qemu+tls://ratchet.lan/system --live --persistent --undefinesource --copy-storage-all --verbose --tls
error: internal error: unable to execute QEMU command 'object-add': Unable to access credentials /etc/pki/qemu/ca-cert.pem: No such file or directory

It's checking /etc/pki/qemu and not the location specified in
default_tls_x509_cert_dir. Is this a bug, or am I missing something?
2) QEMU has -object tls-cipher-suites, but there does not seem to be a
way to specify the TLS priority in libvirt's qemu.conf. This is
solvable via the compile-time --tls-priority flag, but that's not very
convenient. Is there a way to set the TLS priority for QEMU's TLS
connections from libvirt's configuration? It would be the equivalent of
libvirtd.conf's tls_priority setting, but for QEMU rather than for
libvirt's own connections.
3) After setting up default_tls_x509_cert_dir and
default_tls_x509_verify = 1 (and the directories as required, see 1),
virsh-initiated migrations with the --tls flag succeed, and captures
show that TLS is used. However, they succeed equally without the flag.
Is there a way to ensure that only TLS communication is permitted
between the QEMUs? I tried nbd_tls, but that did not seem to have any
effect. (The qemu.conf settings I'm referring to throughout are
collected in the sketch after this list.)
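For reference, these are the qemu.conf knobs in play, as I understand
them from the comments in the shipped qemu.conf (the directory path is
just an example):

# Fallback certificate directory for all of QEMU's TLS services:
default_tls_x509_cert_dir = "/etc/pki/qemu-custom"
default_tls_x509_verify = 1

# Per-service overrides that should take precedence over the defaults:
migrate_tls_x509_cert_dir = "/etc/pki/qemu-custom"
nbd_tls = 1
nbd_tls_x509_cert_dir = "/etc/pki/qemu-custom"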
Thanks a lot for your help!
unable to migrate non shared storage in tunneled mode
by Vjaceslavs Klimovs
Hey all,
With libvirt 6.5.0 and qemu 5.1.0, migration of non-shared disks in
tunneled mode does not work for me:
virsh # migrate alpinelinux3.8 qemu+tls://ratchet.lan/system --live --persistent --undefinesource --copy-storage-all --tunneled --p2p
error: internal error: qemu unexpectedly closed the monitor: Receiving block device images
Error unknown block device
2020-08-15T21:21:48.995016Z qemu-system-x86_64: error while loading state section id 1(block)
2020-08-15T21:21:48.995193Z qemu-system-x86_64: load of migration failed: Invalid argument
This is both with UEFI and BIOS guests.
I understand that the newer way of migrating non-shared disks is via
NBD directly between the QEMUs; however, I am certain that this used to
work before libvirt 6.0. There is a series of commits to
src/qemu/qemu_migration.c from Dec 8, 2019; could they have something
to do with this?
Is migration of non-shared disks supported and supposed to work in
tunneled mode, or is it not a supported configuration and native NBD
directly between the QEMUs should be used in all cases? (The API call
I'm effectively making is sketched below.)
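For clarity, the virsh command above should be roughly equivalent to
this python-libvirt call (a sketch; the names and URI are copied from
my setup):

import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('alpinelinux3.8')

flags = (libvirt.VIR_MIGRATE_LIVE
         | libvirt.VIR_MIGRATE_PEER2PEER        # --p2p
         | libvirt.VIR_MIGRATE_TUNNELLED        # --tunneled
         | libvirt.VIR_MIGRATE_PERSIST_DEST     # --persistent
         | libvirt.VIR_MIGRATE_UNDEFINE_SOURCE  # --undefinesource
         | libvirt.VIR_MIGRATE_NON_SHARED_DISK) # --copy-storage-all

dom.migrateToURI('qemu+tls://ratchet.lan/system', flags, None, 0)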
Thanks in advance!
Full qemu log on receiving host:
2020-08-15 21:23:38.917+0000: starting up libvirt version: 6.5.0, qemu version: 5.1.0, kernel: 5.4.57-gentoo, hostname: ratchet.lan
LC_ALL=C \
PATH=/bin:/sbin:/bin:/sbin:/usr/bin:/usr/sbin:/usr/bin:/usr/sbin:/usr/local/bin:/usr/local/sbin:/opt/bin \
HOME=/var/lib/libvirt/qemu/domain-4-alpinelinux3.8 \
USER=root \
XDG_DATA_HOME=/var/lib/libvirt/qemu/domain-4-alpinelinux3.8/.local/share \
XDG_CACHE_HOME=/var/lib/libvirt/qemu/domain-4-alpinelinux3.8/.cache \
XDG_CONFIG_HOME=/var/lib/libvirt/qemu/domain-4-alpinelinux3.8/.config \
QEMU_AUDIO_DRV=none \
/usr/bin/qemu-system-x86_64 \
-name guest=alpinelinux3.8,debug-threads=on \
-S \
-object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-4-alpinelinux3.8/master-key.aes \
-blockdev '{"driver":"file","filename":"/var/lib/libvirt/uefi/OVMF_CODE.fd","node-name":"libvirt-pflash0-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-pflash0-format","read-only":true,"driver":"raw","file":"libvirt-pflash0-storage"}' \
-blockdev '{"driver":"file","filename":"/var/lib/libvirt/qemu/nvram/alpinelinux3.8_VARS.fd","node-name":"libvirt-pflash1-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-pflash1-format","read-only":false,"driver":"raw","file":"libvirt-pflash1-storage"}' \
-machine pc-q35-5.1,accel=kvm,usb=off,dump-guest-core=off,pflash0=libvirt-pflash0-format,pflash1=libvirt-pflash1-format \
-cpu kvm64,ibpb=on,md-clear=on,spec-ctrl=on,ssbd=on,vme=on,x2apic=on,hypervisor=on \
-m 2048 \
-overcommit mem-lock=off \
-smp 2,sockets=2,cores=1,threads=1 \
-uuid 95286971-32fa-4138-be0e-519ec21af800 \
-no-user-config \
-nodefaults \
-chardev socket,id=charmonitor,fd=35,server,nowait \
-mon chardev=charmonitor,id=monitor,mode=control \
-rtc base=utc,driftfix=slew \
-global kvm-pit.lost_tick_policy=delay \
-no-hpet \
-no-shutdown \
-global ICH9-LPC.disable_s3=1 \
-global ICH9-LPC.disable_s4=1 \
-boot strict=on \
-device pcie-root-port,port=0x10,chassis=1,id=pci.1,bus=pcie.0,multifunction=on,addr=0x2 \
-device pcie-root-port,port=0x11,chassis=2,id=pci.2,bus=pcie.0,addr=0x2.0x1 \
-device pcie-root-port,port=0x12,chassis=3,id=pci.3,bus=pcie.0,addr=0x2.0x2 \
-device pcie-root-port,port=0x13,chassis=4,id=pci.4,bus=pcie.0,addr=0x2.0x3 \
-device qemu-xhci,p2=15,p3=15,id=usb,bus=pci.1,addr=0x0 \
-blockdev '{"driver":"host_device","filename":"/dev/vg0/test","aio":"native","node-name":"libvirt-2-storage","cache":{"direct":true,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-2-format","read-only":false,"cache":{"direct":true,"no-flush":false},"driver":"raw","file":"libvirt-2-storage"}' \
-device virtio-blk-pci,bus=pci.2,addr=0x0,drive=libvirt-2-format,id=virtio-disk0,bootindex=1,write-cache=on \
-device ide-cd,bus=ide.0,id=sata0-0-0 \
-chardev pty,id=charserial0 \
-device isa-serial,chardev=charserial0,id=serial0 \
-vnc 127.0.0.1:2 \
-device VGA,id=video0,vgamem_mb=16,bus=pcie.0,addr=0x1 \
-incoming defer \
-device virtio-balloon-pci,id=balloon0,bus=pci.3,addr=0x0 \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
-msg timestamp=on
char device redirected to /dev/pts/5 (label charserial0)
Receiving block device images
Error unknown block device
2020-08-15T21:23:39.278287Z qemu-system-x86_64: error while loading state section id 1(block)
2020-08-15T21:23:39.278422Z qemu-system-x86_64: load of migration failed: Invalid argument
2020-08-15 21:23:39.344+0000: shutting down, reason=failed
Conflicting parameters on qemu call
by Jan Walzer
Hi Lists,
I currently have the issue of wanting to use qemu-system-x86_64 on a ppc64le platform.
It is imperative to pass the "-accel tcg,thread=multi" parameter to qemu when starting an instance; without it, qemu uses only one thread and is hence of limited/no use.
The problem is that libvirt itself passes "-machine q35,accel=tcg" to qemu, which is a different parameter that conflicts with the other one.
Can we discuss whether I have overlooked something, whether there is a workaround, or whether this is a bug?
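One thing I have considered is the QEMU command-line passthrough in the domain XML (a sketch, untested on my side, and I don't know whether QEMU would accept both accel specifications at once):

<domain type='qemu' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
  ...
  <qemu:commandline>
    <qemu:arg value='-accel'/>
    <qemu:arg value='tcg,thread=multi'/>
  </qemu:commandline>
</domain>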
I’m running:
# uname -a
Linux tiger-v4 5.7.0-1-powerpc64le #1 SMP Debian 5.7.6-1 (2020-06-24) ppc64le GNU/Linux
# apt-cache policy libvirt-daemon
libvirt-daemon:
  Installed: 6.5.0-1
  Candidate: 6.5.0-1
  Version table:
 *** 6.5.0-1 500
        500 http://deb.debian.org/debian testing/main ppc64el Packages
# apt-cache policy qemu-system-x86
qemu-system-x86:
  Installed: 1:5.0-14
  Candidate: 1:5.0-14
  Version table:
 *** 1:5.0-14 500
        500 http://deb.debian.org/debian testing/main ppc64el Packages
Do you need any more information?
Feel free to continue the thread on only the list that matters.
Greetings, Jan
multiple vms with same PCI passthrough
by Daniel Black
In attempting to isolate vfio-pci problems between two different guest
instances, creating a second guest (with the existing guest shut down)
resulted in:

Aug 09 12:43:23 grit libvirtd[6716]: internal error: Device 0000:01:00.3 is already in use
Aug 09 12:43:23 grit libvirtd[6716]: internal error: Device 0000:01:00.3 is already in use
Aug 09 12:43:23 grit libvirtd[6716]: Failed to allocate PCI device list: internal error: Device 0000:01:00.3 is already in use
Compiled against library: libvirt 6.1.0
Using library: libvirt 6.1.0
Using API: QEMU 6.1.0
Running hypervisor: QEMU 4.2.1
(fc32 default install)
The upstream code also seems to test definitions rather than active
uses of the PCI device.
My potentially naive patch to correct this (it does not yet fix the
failing test cases) would be:
diff --git a/src/util/virpci.c b/src/util/virpci.c
index 47c671daa0..a00c5e6f44 100644
--- a/src/util/virpci.c
+++ b/src/util/virpci.c
@@ -1597,7 +1597,7 @@ int
 virPCIDeviceListAdd(virPCIDeviceListPtr list,
                     virPCIDevicePtr dev)
 {
-    if (virPCIDeviceListFind(list, dev)) {
+    if (virPCIDeviceBusContainsActiveDevices(dev, list)) {
         virReportError(VIR_ERR_INTERNAL_ERROR,
                        _("Device %s is already in use"), dev->name);
         return -1;
Is this too simplistic or undesirable as a feature
request/implementation? I'd be more than grateful if someone could
carry this through, as I'm unsure when I may get time for it.
KVM guest VM IP address
by Kaushal Shriyan
Hi,
I am trying to find out the IP address of the KVM guest virtual machine.
# virsh dumpxml newsoftlinedrupalpoc | grep "mac address" | awk -F\' '{ print $2}'
52:54:00:2c:7e:ff
[root@baseserver1 ~]# arp -an | grep 52:54:00:2c:7e:ff
[root@baseserver1 ~]# virsh domifaddr newsoftlinedrupalpoc
Name MAC address Protocol Address
-------------------------------------------------------------------------------
[root@baseserver1 ~]#
It is not showing anything. I manually configured the IP
using /etc/sysconfig/network-scripts/ifcfg-eth0:
ONBOOT=yes
IPADDR=192.168.0.189
PREFIX=24
GATEWAY=192.168.0.10
DNS1=8.8.8.8
DNS2=8.8.4.4
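Since the guest's IP is configured statically, there is presumably no
DHCP lease for domifaddr's default lease source to report. Something
like the following might work instead (a python-libvirt sketch; the
agent source requires the qemu-guest-agent to be running inside the
VM):

import libvirt

conn = libvirt.open('qemu:///system')
dom = conn.lookupByName('newsoftlinedrupalpoc')

# Equivalent to `virsh domifaddr --source agent`; using
# VIR_DOMAIN_INTERFACE_ADDRESSES_SRC_ARP would consult the host's ARP
# table instead.
ifaces = dom.interfaceAddresses(libvirt.VIR_DOMAIN_INTERFACE_ADDRESSES_SRC_AGENT)
for name, info in ifaces.items():
    for addr in info.get('addrs') or []:
        print(name, addr['addr'])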
CentOS Linux release 7.6.1810 (Core)
virt-install --version
1.5.0
virsh version
Compiled against library: libvirt 4.5.0
Using library: libvirt 4.5.0
Using API: QEMU 4.5.0
Running hypervisor: QEMU 1.5.3
Am I missing something? Thanks in advance and I look forward to hearing
from you.
Best Regards,
ipv6 NAT; accept_ra errors and about network choice
by Ian Wienand
Hello,
Firstly, THANK YOU for the IPv6 NAT support merged in 6.5. It has been
almost impossible to get IPv6 into a VM on a laptop that switches
between wifi and wired (dock) connections, because you cannot add a
wifi interface to a bridge. I know NAT is against the IPv6 end-to-end
zen, but it makes this "just work" for the vast majority of people like
me who need to ssh/curl/talk to IPv6-only hosts!
So I installed 6.6.0 from the virt-preview repos on Fedora 32 to
eagerly test it out.
My network config looks like:
<network>
  <name>network</name>
  <uuid> ... </uuid>
  <forward mode='nat'>
    <nat ipv6='yes'/>
  </forward>
  <bridge name='virbr0' stp='on' delay='0'/>
  <mac address=' ... '/>
  <domain name='network'/>
  <ip address='192.168.100.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='192.168.100.128' end='192.168.100.254'/>
    </dhcp>
  </ip>
  <ip family='ipv6' address='fc00:dead:beef:55::' prefix='64'>
  </ip>
</network>
The first problem I hit was trying to start that network:
error: internal error: Check the host setup: enabling IPv6 forwarding with RA routes without accept_ra set to 2 is likely to cause routes loss. Interfaces to look at: wlp4s0
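(For completeness, the setting the message asks for can be applied per
interface with something like the line below, though as I explain next
I'm not convinced it should be needed:)

$ sudo sysctl -w net.ipv6.conf.wlp4s0.accept_ra=2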
wlp4s0 is my wifi card, which is configured by NetworkManager in a
completely unremarkable fashion. By default it gets an IPv6 address via
SLAAC from my router. This feels a bit like the unresolved bug [1],
which says that systemd-networkd handles the RAs in userspace for
... reasons [2]. It's unclear to me whether NetworkManager does
something similar.
I feel like this must be a red herring. My wired interface has the
same setting of 0

$ cat /proc/sys/net/ipv6/conf/enp0s31f6/accept_ra
0

and is similarly just a very standard auto-configured NetworkManager
interface. When I "net-start" the network while on wifi, libvirt
doesn't seem to care about that interface (I presume it only looks at
the active one?). When I dock and turn off wifi, IPv6 connectivity
continues to work through enp0s31f6, so I don't think accept_ra really
matters in this case.
I feel like this message is incorrect, and since I've done nothing
special to my underlying interfaces, it is probably going to be wrong
for a lot of people trying this. Does anyone know the details of this
message and see why it would be required in this situation?
The other thing I'd like to expand the documentation on, if I can get
some clarity, is the choice of network. It seems like it has to be a
/64, and it seems like the best choice is within fc00::/7, or at least
that is what has been assigned for private networks like this [3].
The only problem with this is that I think glibc's default address
selection gives this range low precedence, so nothing prefers IPv6. Is
this the range expected to be used for IPv6 NAT? If so, would a patch
to drop some documentation breadcrumbs about setting gai.conf or
something be useful? Or are there better choices for the network?
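If gai.conf is the way to go, the breadcrumb might look something like
this (an assumption on my part, not tested):

# /etc/gai.conf: raise the precedence of the ULA range so it is
# preferred over IPv4 destinations. NB: per gai.conf(5), adding any
# "precedence" line discards the built-in default table, so the
# default entries have to be restated alongside this one.
precedence fc00::/7 45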
Thanks!
-i
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1639087
[2] https://github.com/systemd/systemd/commit/3b015d40c19d9338b66bf916d84dec6...
[3] https://tools.ietf.org/html/rfc4193
Post-firewall hook to insert custom rules?
by Gunnar Niels
Hello, I have a set of iptables rules that I need to insert *after* libvirt
has set up all of its firewall rules. Is there a hook I can tap into to
run something like a custom script and make sure this happens? Any ideas?
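To illustrate the kind of thing I mean: a network hook script might work
(a sketch, assuming libvirt's /etc/libvirt/hooks/network hook fires once
a network has been started and its rules are in place; the rules script
path is made up):

#!/usr/bin/env python3
# Sketch of /etc/libvirt/hooks/network: libvirt invokes hook scripts
# with the object name and an operation string as arguments.
import subprocess
import sys

def main():
    network, operation = sys.argv[1], sys.argv[2]
    if operation == "started":
        # Insert our custom iptables rules after libvirt has brought
        # the network (and its own rules) up.
        subprocess.run(["/usr/local/sbin/custom-firewall-rules.sh", network],
                       check=True)

if __name__ == "__main__":
    main()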
-GN