[libvirt-users] [virtual interface] detach interface during boot succeed with no changes
by Yalan Zhang
Hi guys,
when I detach an interface from a VM during boot (before the guest has finished booting), the command reports success but the interface stays in the domain XML, so the detach effectively fails. I'm not sure if there is an existing bug for this. I have confirmed with someone that disks show similar behavior; is this acceptable?
# virsh destroy rhel7.2; virsh start rhel7.2; sleep 2; \
    virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; \
    virsh dumpxml rhel7.2 | grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:98:c4:a0'/>
      <source network='default' bridge='virbr0'/>
      <target dev='vnet0'/>
      <model type='rtl8139'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
When I detach after the VM has finished booting, increasing the sleep to 10 seconds, it succeeds.
# virsh destroy rhel7.2; virsh start rhel7.2; sleep 10; \
    virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; \
    virsh dumpxml rhel7.2 | grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
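A fixed sleep is fragile either way. A more robust sketch (my suggestion, not from the original report): keep retrying the detach and verify against the live domain XML instead of trusting the command's return code:

mac=52:54:00:98:c4:a0
for i in $(seq 1 30); do
    # request the detach; repeat attempts may error once it has gone through
    virsh detach-interface rhel7.2 network "$mac" 2>/dev/null
    sleep 2
    # check the live XML rather than the exit status
    if ! virsh dumpxml rhel7.2 | grep -q "$mac"; then
        echo "interface really detached"
        break
    fi
done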
-------
Best Regards,
Yalan Zhang
IRC: yalzhang
Internal phone: 8389413
[libvirt-users] Question about disabling UFO on guest
by Bao Nguyen
Hello everyone,
I would like to ask a question about disabling UFO on a virtio vNIC in my guest. I have read the documentation at https://libvirt.org/formatdomain.html:
*host*
The csum, gso, tso4, tso6, ecn and ufo attributes with possible values on and off can be used to turn off host offloading options. By default, the supported offloads are enabled by QEMU. *Since 1.2.9 (QEMU only)* The mrg_rxbuf attribute can be used to control mergeable rx buffers on the host side. Possible values are on (default) and off. *Since 1.2.13 (QEMU only)*
*guest*
The csum, tso4, tso6, ecn and ufo attributes with possible values on and off can be used to turn off guest offloading options. By default, the supported offloads are enabled by QEMU. *Since 1.2.9 (QEMU only)*
Then I disabled UFO on the vNIC in my guest with the following configuration:
<devices>
  <interface type='network'>
    <source network='default'/>
    <target dev='vnet1'/>
    <model type='virtio'/>
    <driver name='vhost' txmode='iothread' ioeventfd='on' event_idx='off'
            queues='5' rx_queue_size='256' tx_queue_size='256'>
      <host gso='off' ufo='off'/>
      <guest ufo='off'/>
    </driver>
  </interface>
</devices>
Then I rebooted my node for the change to take effect, and it works. However, can I disable UFO without touching the host side, or does it always have to be disabled on both host and guest like this?
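For reference, the documentation quoted above lists the host and guest offload sub-elements separately, so a guest-only variant would look like the sketch below. Whether QEMU honors it without the host line is exactly the open question here, so treat it as untested:

<driver name='vhost' txmode='iothread' ioeventfd='on' event_idx='off'
        queues='5' rx_queue_size='256' tx_queue_size='256'>
  <guest ufo='off'/>
</driver>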
Thanks,
Brs,
Natsu
[libvirt-users] libvirt 5.0.0 - LXC container still in "virsh list" output after shutdown
by mxs kolo
Hello.
Centos 7.6 with libvirt build from base "virt" repository:
libvirt-daemon-driver-lxc-5.0.0-1.el7.x86_64
libvirt-client-5.0.0-1.el7.x86_64
libvirt-daemon-5.0.0-1.el7.x86_64
libvirt-daemon-driver-network-5.0.0-1.el7.x86_64
libvirt-libs-5.0.0-1.el7.x86_64
+
systemd-219-62.el7_6.2.x86_64
Now LXC containers with type='direct' can be started, but can't be stopped :)
Before shutdown.
# virsh list --all
Id Name State
------------------------------------------
16312 test.lxc running
# machinectl | grep test
lxc-16312-test.lxc container libvirt-lxc
# pstree -apA 16312
libvirt_lxc,16312 --name test.lxc --console 42 --security=none
--handshake 45 --veth macvlan0
|-systemd,16315
| |-agetty,16690 --noclear --keep-baud console 115200 38400 9600 vt220
| |-dbus-daemon,16659 --system --address=systemd: --nofork
--nopidfile --systemd-activation
| |-rsyslogd,17614 -n
| | |-{rsyslogd},17616
| | `-{rsyslogd},17626
| |-sshd,17613 -D
| |-systemd-journal,16613
| `-systemd-logind,16657
`-{libvirt_lxc},16319
Shutdown.
# virsh shutdown 16312
Domain 16312 is being shutdown
# echo $?
0
After.
The container is still present in the list output:
# virsh list --all
Id Name State
------------------------------------------
16312 test.lxc running
# machinectl | grep test | wc -l
0
No main process:
# ps axuwwf | grep libvirt_lxc | grep test | wc -l
0
The LXC container has really stopped, but virsh still shows it in the list.
The blkio, cpuacct, memory, cpu, cpu,cpuacct, devices, hugetlb and pids cgroups have been deleted, but other cgroups are still present in the filesystem:
drwxr-xr-x 2 root root 0 Jan 21 13:07
/sys/fs/cgroup/cpuset/machine.slice/machine-lxc\x2d16312\x2dtest.lxc.scope
drwxr-xr-x 2 root root 0 Jan 21 13:07
/sys/fs/cgroup/freezer/machine.slice/machine-lxc\x2d16312\x2dtest.lxc.scope
drwxr-xr-x 2 root root 0 Jan 21 13:07
/sys/fs/cgroup/net_cls/machine.slice/machine-lxc\x2d16312\x2dtest.lxc.scope
drwxr-xr-x 2 root root 0 Jan 21 13:07
/sys/fs/cgroup/net_cls,net_prio/machine.slice/machine-lxc\x2d16312\x2dtest.lxc.scope
drwxr-xr-x 2 root root 0 Jan 21 13:07
/sys/fs/cgroup/net_prio/machine.slice/machine-lxc\x2d16312\x2dtest.lxc.scope
drwxr-xr-x 2 root root 0 Jan 21 13:07
/sys/fs/cgroup/perf_event/machine.slice/machine-lxc\x2d16312\x2dtest.lxc.scope
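If manual cleanup is wanted, a sketch (my assumption, not from the thread: the leftover scope directories are empty, so plain rmdir is safe; net_cls and net_prio are usually symlinks into the combined net_cls,net_prio hierarchy, hence the error suppression):

for c in cpuset freezer net_cls,net_prio perf_event; do
    # rmdir only works on an empty cgroup directory, which is what we expect here
    rmdir "/sys/fs/cgroup/$c/machine.slice/machine-lxc\x2d16312\x2dtest.lxc.scope" 2>/dev/null
done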
Restarting libvirtd temporarily puts the container in the correct state:
# systemctl restart libvirtd
# virsh list --all | grep test
- test.lxc shut off
libvirt 4.5.0 and 4.10.0 perform the LXC shutdown correctly.
b.r.
Maxim Kozin
[libvirt-users] concurrent migration of several domains rarely fails
by Lentes, Bernd
Hi,
I have a two-node cluster with several domains as resources. During testing I tried several times to migrate some domains concurrently.
Usually it succeeded, but rarely it failed. I found one clue in the log:
Dec 03 16:03:02 ha-idg-1 libvirtd[3252]: 2018-12-03 15:03:02.758+0000: 3252: error : virKeepAliveTimerInternal:143 : internal error: connection closed due to keepalive timeout
The domains are configured similarly:
primitive vm_geneious VirtualDomain \
params config="/mnt/san/share/config.xml" \
params hypervisor="qemu:///system" \
params migration_transport=ssh \
op start interval=0 timeout=120 trace_ra=1 \
op stop interval=0 timeout=130 trace_ra=1 \
op monitor interval=30 timeout=25 trace_ra=1 \
op migrate_from interval=0 timeout=300 trace_ra=1 \
op migrate_to interval=0 timeout=300 trace_ra=1 \
meta allow-migrate=true target-role=Started is-managed=true \
utilization cpu=2 hv_memory=8000
What is the algorithm for discovering the port used for live migration?
I have the impression that "params migration_transport=ssh" is worthless; port 22 isn't involved in the live migration.
My experience is that TCP ports above 49151 are used for the migration, but the exact procedure isn't clear to me.
Does live migration use TCP port 49152 first and one port higher for each additional domain, e.g. 49152, 49153 and 49154 for the concurrent live migration of three domains?
Why does the live migration of three domains usually succeed, although only 49152 and 49153 are open on both hosts?
Is the migration not really concurrent, but sometimes sequential?
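For what it's worth, the qemu driver allocates migration ports from a range configurable in /etc/libvirt/qemu.conf (migration_port_min/migration_port_max, 49152-49215 by default) and frees each port when its migration finishes, which would also explain three migrations fitting through two open ports when they partially serialize. A sketch of pinning the port per migration instead (the target host name follows the cluster naming above; the port is an example from the default range):

virsh migrate --live vm_geneious qemu+ssh://ha-idg-2/system \
    --migrateuri tcp://ha-idg-2:49160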
Bernd
--
Bernd Lentes
Systemadministration
Institut für Entwicklungsgenetik
Gebäude 35.34 - Raum 208
HelmholtzZentrum münchen
[ mailto:bernd.lentes@helmholtz-muenchen.de | bernd.lentes(a)helmholtz-muenchen.de ]
phone: +49 89 3187 1241
fax: +49 89 3187 2294
[ http://www.helmholtz-muenchen.de/idg | http://www.helmholtz-muenchen.de/idg ]
those who make mistakes can learn something
those who do nothing can learn nothing
[libvirt-users] Hook problem
by David Gilmour
I am trying to use /etc/libvirt/hooks/qemu to control the startup of several guests with interdependencies. The goal is to delay the start of guest B until the DNS server on guest A is running. To accomplish this, I wrote a qemu hook script that detects the normal startup of guest B and starts a second script in the background, which waits until the preconditions for starting B are fulfilled and then starts B with a call to virsh.
For this strategy to work, it must handle the case where libvirt has chosen guest B as the first guest to attempt to start. (Although renaming the symlinks in /etc/libvirt/qemu/autostart to force starting the guests in a particular order might work, I do not want to rely on this undocumented behavior.) In the case where libvirt happens to attempt to start guest B before it starts guest A, the hook script needs to somehow tell libvirt to skip guest B and go on to the next guest; otherwise a deadlock would result, with libvirt waiting for B to start while B waits for A. I have tried to handle this by returning failure from the hook script for the initial attempt to start B, once the background script that implements the DNS check and the eventual delayed start of B has been launched.
Unfortunately, I cannot find a way to make libvirt continue before the background script exits. No combination of background execution, nohup, disown, setsid -f, or at seems to detach the process sufficiently to "fool" libvirt into acting on the "exit 1" line in the qemu script and proceeding to start other guests. As a result, the dependency of B on A deadlocks, and neither guest ever starts.
Can someone please either find an error in my approach or propose a different strategy to implement this customized dependency of one guest's startup on another?
Thanks -
======
Here is my qemu script:
#!/bin/bash
if [[ "$2" == 'start' ]]; then
    echo "$0: Starting $1..." |& logger
    if [[ "$1" == 'B' ]]; then
        # The next line is where the background script is invoked
        /bin/bash /usr/local/bin/startB &
        # These also don't work:
        # (/bin/bash /usr/local/bin/startB) & disown
        # setsid -f /bin/bash /usr/local/bin/startB & disown
        # Unfortunately, the exit on the next line doesn't make libvirt move on
        # to the next guest until the background command has itself exited
        exit 1
    fi
fi
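One explanation worth testing (an assumption on my part, not something verified in this thread): libvirt keeps reading the hook's stdout/stderr until EOF, and a backgrounded child inherits those descriptors, so the hook still looks "running" however the child is detached. Redirecting the child's descriptors explicitly might let the exit 1 take effect at once:

# detach with explicit redirections so no pipe back to libvirt stays open
/bin/bash /usr/local/bin/startB </dev/null >/dev/null 2>&1 &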
Here is the startB script, including a call to a program, named in the $dnssuccess variable, that tests DNS availability on guest A (the variable's real definition is elsewhere; the placeholder below is illustrative):
#!/bin/bash
# Placeholder: the real path of the DNS checker for guest A goes here
dnssuccess="/usr/local/bin/check-dns-on-A"
until $dnssuccess
do
    echo "$0: Delaying start of guest B 10 seconds" |& logger
    sleep 10
done
# It's now OK to start guest B
echo "$0: Now starting guest B" |& logger
virsh start B
=====
[libvirt-users] Keyboard problems with VNC access
by Nick Howitt
Hi,
I am pretty new to libvirt, but have succeeded in setting up two VMs, Windows 10 and ClearOS (a CentOS derivative), and they both have the same issue. I installed both of them with a UK English keyboard, but the host machine is remote, in the US with a US locale. When I access either through VNC (I've tried TightVNC and VNC-Viewer), a number of my keys don't map correctly. As an example, the \ key comes up with a #, and the # key comes up with a 3 (as does the 3 key). Other keys follow the standard US mapping, but the £ (above 3) comes up blank and the $ (above 4) comes up with $, which I think is the UK setting and not the US setting. At this point I am thoroughly confused.
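One knob that might be relevant (a sketch; whether it applies to this setup is an assumption): libvirt's vnc graphics element accepts a keymap attribute, so the domain XML can force the server-side keymap, e.g.:

<graphics type='vnc' port='-1' autoport='yes' keymap='en-gb'/>

Conversely, a keymap forced on the server while the client believes it has a different layout produces exactly this kind of scrambled punctuation, so checking whether the domains currently set keymap at all would be a first step.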
Does anyone have any ideas?
Thanks,
Nick
[libvirt-users] Workaround for "internal snapshots of a VM with pflash based firmware are not supported" problem?
by Andreas Tscharner
Hello World,
We were hit by the "internal snapshots of a VM with pflash based firmware are not supported" problem. Is there a workaround for it, one for which revert works? A different format, perhaps?
Our use case: We have an image (Win10, UEFI) with some software
installed. This image is used by Jenkins to execute some tests. After
the tests, the image should be reverted to its original state. Jenkins uses the libvirt plugin to connect to the host and to start, stop and revert the virtual machines. The images are in qcow2 format.
We've usually created a snapshot of the base state using "virsh
snapshot-create-as" on the host system, but this is no longer possible.
What is the recommended workaround?
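One widely used pattern (a sketch; the paths are examples, and it changes the Jenkins flow from snapshot/revert to overlay/recreate): keep the prepared image as a read-only backing file and run every test on a throwaway overlay, so a "revert" is simply recreating the overlay:

# create a fresh overlay on top of the pristine base image
qemu-img create -f qcow2 \
    -b /var/lib/libvirt/images/win10-base.qcow2 -F qcow2 \
    /var/lib/libvirt/images/win10-test.qcow2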
TIA and best regards
Andreas
--
Andreas Tscharner sternenfeuer(a)gmail.com
Gordon's Law:
If you think you have the solution, the question was poorly phrased.
[libvirt-users] unable to list virtualbox domain remotely
by François Marsy
Hi all,
I have the following issue: I have libvirtd installed on an Ubuntu desktop, listening on port 16509 (options listen_tls = 0 and listen_tcp = 1). On the desktop I have VirtualBox installed, with a running VM: "styx32-dry-run3".
I want to control my VM remotely using libvirt.
I performed the following tests:
1) Local virsh runs fine:
virsh -c vbox:///session list
Id Name State
----------------------------------------------------
2 styx32-dry-run3 running
2) Using the TCP port shows an empty list:
virsh -c vbox+tcp://127.0.0.1:16509/session list
Id Name State
----------------------------------------------------
3) The net-list command works:
virsh -c vbox+tcp://127.0.0.1:16509/session net-list
Name State Autostart Persistent
----------------------------------------------------------
vboxnet0 active no autostart yes
Can any of you help me, please? I really don't understand why my VM is not showing up in list 2), and I really don't know where to look :-(
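A diagnostic sketch (built on an assumption the thread doesn't confirm: the TCP connection is served by the system libvirtd, whose vbox driver sees the VirtualBox VMs of the user the daemon runs as, usually root, while styx32-dry-run3 is registered under the desktop user):

VBoxManage list vms          # as the desktop user: should list styx32-dry-run3
sudo VBoxManage list vms     # as root: likely empty, matching the remote result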
Kind regards
Francois
[libvirt-users] stuck in pxe loop from uefi + tianocore/ovmf
by jsl6uy js16uy
Hello all, hope all is well.
I was wondering if I'm doing something wrong or missing something. I have PXE + TFTP + UEFI working between two VMs. However, after the build is done and I reboot, I still come up to PXE, because TianoCore, from what I've seen, always starts by trying PXE first. Great for initial builds :) but I would like it to stop after that.
In the XML I am pointing to the disk to boot first. If I virsh destroy the domain and run qemu directly on the qcow image, the VM comes up as expected and boots into the OS.
I have tried removing the NVRAM file under /var/lib/libvirt/qemu/nvram/virtmachine, but the VM boots up and starts looking for PXE again.
If I let that time out, then type exit and select continue in the TianoCore boot firmware, I just loop back into trying PXE over IPv4, and so on.
I tried <rom bar='off'/> on the e1000 and that does work to stop PXE. Then I get to the UEFI shell quickly, but all I see at that point is two BLK maps pointing to floppy 1 and 2, and I see nothing when I try to manually add a new boot option.
Any help/wisdom would be appreciated.
libvirtd (libvirt) 4.9.0
qemu command:
sudo qemu-system-x86_64 -m 2048 -L /usr/share/ovmf/x64/ -bios OVMF_CODE.fd \
    -drive file=/images/rdhcli02,format=qcow2,cache=writeback,if=virtio \
    -device virtio-net,netdev=net10,mac=52:54:00:11:22:33 \
    -netdev tap,id=net10,ifname=rdhcli02,script=no,downscript=no
Is there a way to address this?
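One avenue (a sketch under assumptions the thread doesn't state: the installed guest is Linux and has efibootmgr; the entry numbers below are examples): reordering the UEFI boot entries from inside the guest persists into the domain's NVRAM file:

efibootmgr                  # list entries; note the disk entry, e.g. Boot0002
efibootmgr -o 0002,0001     # put the disk entry ahead of the PXE entry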