[libvirt-users] [virtual interface] detach interface during boot succeeds with no changes
by Yalan Zhang
Hi guys,
When I detach an interface from a VM during boot (before the guest has finished booting),
the detach is reported as successful but nothing actually changes. I'm not sure whether
there is an existing bug for this. I have confirmed with someone that disks show similar
behavior; is this also considered acceptable?
# virsh destroy rhel7.2; virsh start rhel7.2; sleep 2; virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; virsh dumpxml rhel7.2 | grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
<address type='pci' domain='0x0000' bus='0x00' slot='0x06'
function='0x0'/>
</controller>
<interface type='network'>
<mac address='52:54:00:98:c4:a0'/>
<source network='default' bridge='virbr0'/>
<target dev='vnet0'/>
<model type='rtl8139'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03'
function='0x0'/>
</interface>
When I detach after the VM has finished booting (increasing the sleep to 10 seconds), it succeeds and the interface is really gone:
# virsh destroy rhel7.2; virsh start rhel7.2; sleep 10; virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; virsh dumpxml rhel7.2 | grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
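As a workaround sketch (not a fix for the underlying behavior, and assuming
qemu-guest-agent is installed and running in the guest), one can poll the guest
instead of using a fixed sleep before detaching:

virsh start rhel7.2
# wait until the guest OS is actually up by pinging the QEMU guest agent
until virsh qemu-agent-command rhel7.2 '{"execute":"guest-ping"}' >/dev/null 2>&1; do
    sleep 1
done
virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0
virsh dumpxml rhel7.2 | grep /interface -B9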
-------
Best Regards,
Yalan Zhang
IRC: yalzhang
Internal phone: 8389413
[libvirt-users] Question about disabling UFO on guest
by Bao Nguyen
Hello everyone,
I would like to ask a question about disabling UFO on a virtio vNIC in my guest.
I have read the documentation at https://libvirt.org/formatdomain.html:
*host*
The csum, gso, tso4, tso6, ecn and ufo attributes with possible values on
and off can be used to turn off host offloading options. By default, the
supported offloads are enabled by QEMU. *Since 1.2.9 (QEMU only)*
The mrg_rxbuf attribute can be used to control mergeable rx buffers on the
host side. Possible values are on (default) and off. *Since 1.2.13 (QEMU only)*
*guest*
The csum, tso4, tso6, ecn and ufo attributes with possible values on and
off can be used to turn off guest offloading options. By default, the
supported offloads are enabled by QEMU. *Since 1.2.9 (QEMU only)*
Then I disabled UFO on the vNIC in the guest with the following configuration:
<devices>
<interface type='network'>
<source network='default'/>
<target dev='vnet1'/>
<model type='virtio'/>
<driver name='vhost' txmode='iothread' ioeventfd='on' event_idx='off'
queues='5' rx_queue_size='256' tx_queue_size='256'>
<host gso='off' ufo='off'/>
<guest ufo='off'/>
</driver>
</interface>
</devices>
Then I rebooted my node for the change to take effect, and it works. However, can I
disable UFO without touching the host side, or does it always have to be disabled
on both host and guest like this?
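For what it's worth, this is how I check the result from inside the guest (a quick
sketch; eth0 is just an assumed interface name):

# inside the guest: verify that UDP fragmentation offload is really disabled
ethtool -k eth0 | grep udp-fragmentation-offload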
Thanks,
Brs,
Natsu
[libvirt-users] ceph rbd pool and libvirt manageability (virt-install)
by Jelle de Jong
Hello everybody,
I created an rbd pool and activated it, but I can't seem to create
volumes in it with virsh or virt-install.
# virsh pool-dumpxml myrbdpool
<pool type='rbd'>
<name>myrbdpool</name>
<uuid>2d786f7a-2df3-4d79-ae60-1535bcf1c6b5</uuid>
<capacity unit='bytes'>6997998301184</capacity>
<allocation unit='bytes'>10309227031</allocation>
<available unit='bytes'>6977204658176</available>
<source>
<host name='ceph01.powercraft.nl' port='6789'/>
<host name='ceph02.powercraft.nl' port='6789'/>
<host name='ceph03.powercraft.nl' port='6789'/>
<name>libvirt-pool</name>
<auth type='ceph' username='libvirt'>
<secret uuid='029a334e-ed57-4293-bb99-ffafa8867122'/>
</auth>
</source>
</pool>
# virt-install --version
1.0.1
# virsh --version
1.2.9
I ended up using virsh edit ceph-test.powercraft.nl and creating
the disk manually:
<disk type='network' device='disk'>
<auth username='libvirt'>
<secret type='ceph' uuid='029a334e-ed57-4293-bb99-ffafa8867122'/>
</auth>
<source protocol='rbd' name='libvirt-pool/kvm01-storage'>
<host name='ceph01.powercraft.nl' port='6789'/>
<host name='ceph02.powercraft.nl' port='6789'/>
<host name='ceph03.powercraft.nl' port='6789'/>
</source>
<target dev='vdc' bus='virtio'/>
</disk>
I use virt-install a lot to define, import and undefine domains; how
can I use virt-install to manage my rbd disks?
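For reference, this is roughly the sort of invocation I was hoping for (a sketch
only; the volume name is invented, and I don't know whether my virt-install 1.0.1
is recent enough to support vol= disks on an rbd pool):

# create a raw volume inside the rbd pool, then reference it from virt-install
virsh vol-create-as myrbdpool kvm02-storage 20G --format raw
virt-install --name ceph-test.powercraft.nl --ram 2048 --vcpus 2 \
    --disk vol=myrbdpool/kvm02-storage,bus=virtio \
    --import --network network=default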
Kind regards,
Jelle de Jong
[libvirt-users] virRandomBits - not very random
by Brian Rak
I just ran into an issue where about 30 guests were assigned duplicate MAC
addresses. These were scattered across 30 different machines.
Some debugging revealed that:
1) All the host machines were restarted within a couple seconds of each
other
2) All the host machines had fairly similar libvirtd pids (within ~100
PIDs of each other)
3) Libvirt seeds the RNG using 'time(NULL) ^ getpid()'
This perfectly explains why I saw so many duplicate mac addresses.
Why is the RNG seed such a predictable value? Surely there has to be a
better source of a random seed than the timestamp and the PID?
The PID seems to me to be a very bad source of any randomness. I just
ran a test across 60 of our hosts. 43 of them shared their PID with at
least one other machine.
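To illustrate how narrow the seed space is (a rough sketch; 1234 stands in for a
libvirtd PID), hosts that start libvirtd in the same second with the same PID
compute an identical seed, and PIDs within ~100 of each other only perturb a few
low bits:

# the seed libvirt derives is effectively time(NULL) ^ getpid()
echo $(( $(date +%s) ^ 1234 ))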
[libvirt-users] Create qcow2 v3 volumes via libvirt
by Gionatan Danti
Hi all,
on a fully patched CentOS 7.4 x86-64, I see the following behavior:
- when creating a new volume using vol-create-as, the resulting file is
a qcow2 version 2 (compat=0.10) file. Example:
[root@gdanti-lenovo vmimages]# virsh vol-create-as default zzz.qcow2 8589934592 --format=qcow2 --backing-vol /mnt/vmimages/centos6.img
Vol zzz.qcow2 created
[root@gdanti-lenovo vmimages]# file zzz.qcow2
zzz.qcow2: QEMU QCOW Image (v2), has backing file (path
/mnt/vmimages/centos6.img), 8589934592 bytes
[root@gdanti-lenovo vmimages]# qemu-img info zzz.qcow2
image: zzz.qcow2
file format: qcow2
virtual size: 8.0G (8589934592 bytes)
disk size: 196K
cluster_size: 65536
backing file: /mnt/vmimages/centos6.img
backing file format: raw
Format specific information:
compat: 0.10
refcount bits: 16
- when creating a snapshot, the resulting file is a qcow2 version 3
(compat=1.1) file. Example:
[root@gdanti-lenovo vmimages]# virsh snapshot-create-as centos6left --disk-only --no-metadata snap.qcow2
Domain snapshot snap.qcow2 created
[root@gdanti-lenovo vmimages]# file centos6left.snap.qcow2
centos6left.snap.qcow2: QEMU QCOW Image (v3), has backing file (path
/mnt/vmimages/centos6left.qcow2), 8589934592 bytes
[root@gdanti-lenovo vmimages]# qemu-img info centos6left.snap.qcow2
image: centos6left.snap.qcow2
file format: qcow2
virtual size: 8.0G (8589934592 bytes)
disk size: 196K
cluster_size: 65536
backing file: /mnt/vmimages/centos6left.qcow2
backing file format: qcow2
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false
From what I know, this is a deliberate decision: compat=1.1 requires a
relatively recent qemu version, so creating new volumes stays on the
"safe side" of compatibility.
Is it possible to create a new volume in qcow2 version 3 (compat=1.1)
format *using libvirt/virsh* (I know I can do that via qemu-img)? Are there
any drawbacks to using the version 3 format?
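For reference, this is what I was hoping would work (just a sketch; as far as I
understand, the volume XML <compat> element is meant to request the newer format,
but I haven't confirmed which libvirt version honours it):

virsh vol-create default /dev/stdin <<'EOF'
<volume>
  <name>zzz-v3.qcow2</name>
  <capacity unit='bytes'>8589934592</capacity>
  <target>
    <format type='qcow2'/>
    <compat>1.1</compat>
  </target>
  <backingStore>
    <path>/mnt/vmimages/centos6.img</path>
    <format type='raw'/>
  </backingStore>
</volume>
EOF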
Thanks.
--
Danti Gionatan
Technical Support
Assyoma S.r.l. - www.assyoma.it
email: g.danti(a)assyoma.it - info(a)assyoma.it
GPG public key ID: FF5F32A8
[libvirt-users] kvm/libvirt on CentOS7 w/Windows 10 Pro guest
by Benjammin2068
Hey all,
New to list, so I apologize if this has been asked a bunch already...
Is there something I'm missing with Windows 10 as a guest that keeps Windows Updates from nuking the boot process?
I just did an orderly shutdown, and Windows updated itself (I forgot to disable updates in time), only to reboot into the diagnostics screen, which couldn't repair the system.
Going to the command prompt and running the usual "bootrec /fixmbr", "bootrec /fixboot" and "bootrec /RebuildBcd" didn't help.
This has happened a few times. I can't believe how fragile Windows 10 Pro is while running in a VM.
(It has happened on a couple of machines I've been experimenting with -- both running the same OS, but different hardware.)
I just saw the FAQ about the libvirt repo with the virtio drivers for Windows... I need to go read more on it...
but in the meantime, is there any other smoking gun I'm not aware of? (After lots of Google searching.)
Thanks,
-Ben
[libvirt-users] libvirt and NAT on a system that already has a DHCP server
by john@bluemarble.net
I'm trying to use virt-manager and qemu/kvm on Arch Linux. The box I'm
using is also the router for my house, and it runs a Kea DHCP server. When I
try to start the default NAT network, libvirt can't start dnsmasq because the
port is already bound. Is there a way to have it not bind on this
interface? I see dnsmasq has an except-interface option, but I can't add
lines to the generated dnsmasq.conf directly, and I didn't see any way to add
special options using virsh net-edit default.
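One approach that might work, depending on the libvirt version (a sketch of the
dnsmasq options XML namespace; I'm not sure whether my installed version supports
it, and eth0 is only a placeholder for the LAN interface):

# virsh net-edit default
<network xmlns:dnsmasq='http://libvirt.org/schemas/network/dnsmasq/1.0'>
  <name>default</name>
  ...
  <dnsmasq:options>
    <dnsmasq:option value='except-interface=eth0'/>
  </dnsmasq:options>
</network>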
Thanks.
[libvirt-users] libvirtd hangs
by Artem Likhachev
Hello everybody!
We have a cluster of servers managed by VMmanager 5 KVM (by ispsystem).
A typical node:
# cat /etc/redhat-release
CentOS Linux release 7.3.1611 (Core)
# uname -r
3.10.0-693.11.6.el7.x86_64
# rpm -qa |grep libvirt
libvirt-daemon-driver-qemu-3.7.0-1.el7.centos.x86_64
libvirt-daemon-driver-storage-disk-3.7.0-1.el7.centos.x86_64
libvirt-3.7.0-1.el7.centos.x86_64
libvirt-daemon-driver-storage-core-3.7.0-1.el7.centos.x86_64
libvirt-daemon-driver-nodedev-3.7.0-1.el7.centos.x86_64
libvirt-daemon-driver-lxc-3.7.0-1.el7.centos.x86_64
libvirt-daemon-driver-storage-iscsi-3.7.0-1.el7.centos.x86_64
libvirt-daemon-driver-storage-gluster-3.7.0-1.el7.centos.x86_64
libvirt-daemon-kvm-3.7.0-1.el7.centos.x86_64
libvirt-daemon-driver-network-3.7.0-1.el7.centos.x86_64
libvirt-daemon-driver-interface-3.7.0-1.el7.centos.x86_64
libvirt-daemon-config-network-3.7.0-1.el7.centos.x86_64
libvirt-daemon-driver-storage-rbd-3.7.0-1.el7.centos.x86_64
libvirt-daemon-driver-storage-3.7.0-1.el7.centos.x86_64
libvirt-libs-3.7.0-1.el7.centos.x86_64
libvirt-daemon-driver-nwfilter-3.7.0-1.el7.centos.x86_64
libvirt-daemon-driver-secret-3.7.0-1.el7.centos.x86_64
libvirt-daemon-driver-storage-mpath-3.7.0-1.el7.centos.x86_64
libvirt-daemon-driver-storage-scsi-3.7.0-1.el7.centos.x86_64
libvirt-client-3.7.0-1.el7.centos.x86_64
libvirt-daemon-3.7.0-1.el7.centos.x86_64
libvirt-daemon-config-nwfilter-3.7.0-1.el7.centos.x86_64
libvirt-daemon-driver-storage-logical-3.7.0-1.el7.centos.x86_64
# rpm -qa |grep qemu
qemu-kvm-common-rhev-2.6.0-27.1.el7.centos.maros.x86_64
ipxe-roms-qemu-20160127-5.git6366fa7a.el7.noarch
qemu-img-rhev-2.6.0-27.1.el7.centos.maros.x86_64
qemu-kvm-rhev-2.6.0-27.1.el7.centos.maros.x86_64
# rpm -qa |grep ebtables
ebtables-2.0.10-15.el7.centos.marosnet.x86_64
ebtables is built with the patch from
https://marc.info/?l=netfilter-devel&m=150728694430435 (described at
https://bugzilla.redhat.com/show_bug.cgi?id=1495893).
Sometimes libvirtd just hangs and stops answering virsh requests
(like `virsh list --all`).
At those moments:
# strace -p 5786
read(53, "\0\0\0\34", 4) = 4
read(53, "keep\0\0\0\1\0\0\0\2\0\0\0\2\0\0\0\0\0\0\0\0", 24) = 24
poll([{fd=5, events=POLLIN}, {fd=7, events=POLLIN}, {fd=12,
events=POLLIN}, {fd=13, events=POLLIN}, {fd=14, events=POLLIN}, {fd=15,
events=POLLIN}, {fd=19, events=POLLIN}, {fd=23,
events=POLLIN|POLLERR|POLLHUP}, {fd=21, events=POLLIN|POLLERR|POLLHUP},
{fd=27, events=POLLIN|POLLERR|POLLHUP}, {fd=25,
events=POLLIN|POLLERR|POLLHUP}, {fd=22, events=POLLIN|POLLERR|POLLHUP},
{fd=24, events=POLLIN|POLLERR|POLLHUP}, {fd=26,
events=POLLIN|POLLERR|POLLHUP}, {fd=29, events=POLLIN|POLLERR|POLLHUP},
{fd=30, events=POLLIN|POLLERR|POLLHUP}, {fd=31,
events=POLLIN|POLLERR|POLLHUP}, {fd=33, events=POLLIN|POLLERR|POLLHUP},
{fd=32, events=POLLIN|POLLERR|POLLHUP}, {fd=36,
events=POLLIN|POLLERR|POLLHUP}, {fd=35, events=POLLIN|POLLERR|POLLHUP},
{fd=39, events=POLLIN|POLLERR|POLLHUP}, {fd=40,
events=POLLIN|POLLERR|POLLHUP}, {fd=41, events=POLLIN|POLLERR|POLLHUP},
{fd=44, events=POLLIN|POLLERR|POLLHUP}, {fd=42,
events=POLLIN|POLLERR|POLLHUP}, {fd=43, events=POLLIN|POLLERR|POLLHUP},
{fd=48, events=POLLIN|POLLERR|POLLHUP}, {fd=49,
events=POLLIN|POLLERR|POLLHUP}, {fd=59, events=POLLIN|POLLERR|POLLHUP},
{fd=46, events=POLLIN|POLLERR|POLLHUP}, {fd=50,
events=POLLIN|POLLERR|POLLHUP}, ...], 43, 5000
# gdb -p 5786
(gdb) thread apply all bt
Thread 17 (Thread 0x7f411a9d7700 (LWP 5788)):
#0 pthread_cond_wait@@GLIBC_2.3.2 () at
../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1 0x00007f4129c2a2e6 in virCondWait (c=c@entry=0x7f412b18ebb8,
m=m@entry=0x7f412b18eb90) at util/virthread.c:154
#2 0x00007f4129c2ada3 in virThreadPoolWorker
(opaque=opaque@entry=0x7f412b183ab0) at util/virthreadpool.c:124
#3 0x00007f4129c2a078 in virThreadHelper (data=<optimized out>) at
util/virthread.c:206
#4 0x00007f4127033dc5 in start_thread (arg=0x7f411a9d7700) at
pthread_create.c:308
#5 0x00007f4126d6273d in clone () at
../sysdeps/unix/sysv/linux/x86_64/clone.S:113
Thread 16 (Thread 0x7f411a1d6700 (LWP 5789)):
#0 pthread_cond_wait@@GLIBC_2.3.2 () at
../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1 0x00007f4129c2a2e6 in virCondWait (c=c@entry=0x7f412b18ebb8,
m=m@entry=0x7f412b18eb90) at util/virthread.c:154
#2 0x00007f4129c2ada3 in virThreadPoolWorker
(opaque=opaque@entry=0x7f412b183a00) at util/virthreadpool.c:124
#3 0x00007f4129c2a078 in virThreadHelper (data=<optimized out>) at
util/virthread.c:206
#4 0x00007f4127033dc5 in start_thread (arg=0x7f411a1d6700) at
pthread_create.c:308
#5 0x00007f4126d6273d in clone () at
../sysdeps/unix/sysv/linux/x86_64/clone.S:113
Thread 15 (Thread 0x7f41199d5700 (LWP 5790)):
#0 pthread_cond_wait@@GLIBC_2.3.2 () at
../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1 0x00007f4129c2a2e6 in virCondWait (c=c@entry=0x7f412b18ebb8,
m=m@entry=0x7f412b18eb90) at util/virthread.c:154
#2 0x00007f4129c2ada3 in virThreadPoolWorker
(opaque=opaque@entry=0x7f412b183950) at util/virthreadpool.c:124
#3 0x00007f4129c2a078 in virThreadHelper (data=<optimized out>) at
util/virthread.c:206
#4 0x00007f4127033dc5 in start_thread (arg=0x7f41199d5700) at
pthread_create.c:308
#5 0x00007f4126d6273d in clone () at
../sysdeps/unix/sysv/linux/x86_64/clone.S:113
Thread 14 (Thread 0x7f41191d4700 (LWP 5791)):
#0 pthread_cond_wait@@GLIBC_2.3.2 () at
../nptl/sysdeps/unix/sysv/linux/x86_64/pthread_cond_wait.S:185
#1 0x00007f4129c2a2e6 in virCondWait (c=c@entry=0x7f412b18ebb8,
m=m@entry=0x7f412b18eb90) at util/virthread.c:154
#2 0x00007f4129c2ada3 in virThreadPoolWorker
(opaque=opaque@entry=0x7f412b1838a0) at util/virthreadpool.c:124
#3 0x00007f4129c2a078 in virThreadHelper (data=<optimized out>) at
util/virthread.c:206
#4 0x00007f4127033dc5 in start_thread (arg=0x7f41191d4700) at
pthread_create.c:308
#5 0x00007f4126d6273d in clone () at
../sysdeps/unix/sysv/linux/x86_64/clone.S:113
Thread 13 (Thread 0x7f41189d3700 (LWP 5792)):
---Type <return> to continue, or q <return> to quit---
log_level = 3 in /etc/libvirt/libvirtd.conf doesn't help to pinpoint the
problem. Actually, libvirtd keeps running but is not responding; it looks
as if it is waiting for something, maybe an answer. No zombie processes,
no CPU load.
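In case it helps with debugging, per-driver debug logging can be turned up beyond
log_level (a sketch based on the usual libvirt debug-logging advice; the log file
path is just the conventional default):

# /etc/libvirt/libvirtd.conf
log_filters="1:qemu 1:libvirt 4:object 4:json 1:util"
log_outputs="1:file:/var/log/libvirt/libvirtd.log"
# then: systemctl restart libvirtd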
This fixes the issue:
rm -f /run/ebtables.lock; killall -9 virsh; systemctl restart systemd-{journald,udevd,logind,machined}; systemctl restart libvirtd
The same behavior appears with libvirt-3.2.0-14.el7_4.7.x86_64.
Could anybody help to resolve this?