[libvirt-users] [virtual interface] detach interface during boot succeed with no changes
by Yalan Zhang
Hi guys,
when I detach an interface from a VM during boot (before the guest has finished booting), it always fails: virsh reports success, but the interface remains in the domain XML. I'm not sure whether there is an existing bug for this. I have confirmed with someone that disks show similar behavior; is this also acceptable?
# virsh destroy rhel7.2; virsh start rhel7.2; sleep 2; \
    virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; \
    virsh dumpxml rhel7.2 | grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:98:c4:a0'/>
      <source network='default' bridge='virbr0'/>
      <target dev='vnet0'/>
      <model type='rtl8139'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
When I detach after the VM has finished booting (extending the sleep to 10 seconds), it succeeds:
# virsh destroy rhel7.2; virsh start rhel7.2; sleep 10; \
    virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; \
    virsh dumpxml rhel7.2 | grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
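A fixed sleep is racy, so as a minimal sketch (reusing the domain name and MAC from above), one could instead retry the detach until the interface really disappears from the live XML:

for i in $(seq 1 30); do
    virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0 2>/dev/null
    sleep 2
    if ! virsh dumpxml rhel7.2 | grep -q '52:54:00:98:c4:a0'; then
        echo "interface detached"
        break
    fi
done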
-------
Best Regards,
Yalan Zhang
IRC: yalzhang
Internal phone: 8389413
[libvirt-users] Question about disabling UFO on guest
by Bao Nguyen
Hello everyone,
I would like to ask a question about disabling UFO on a virtio vNIC in my guest. I have read the documentation at https://libvirt.org/formatdomain.html:
*host*
The csum, gso, tso4, tso6, ecn and ufo attributes with possible values on and off can be used to turn off host offloading options. By default, the supported offloads are enabled by QEMU. *Since 1.2.9 (QEMU only)* The mrg_rxbuf attribute can be used to control mergeable rx buffers on the host side. Possible values are on (default) and off. *Since 1.2.13 (QEMU only)*
*guest*
The csum, tso4, tso6, ecn and ufo attributes with possible values on and off can be used to turn off guest offloading options. By default, the supported offloads are enabled by QEMU. *Since 1.2.9 (QEMU only)*
Then I disabled UFO on the vNIC in my guest with the following configuration:
<devices>
  <interface type='network'>
    <source network='default'/>
    <target dev='vnet1'/>
    <model type='virtio'/>
    <driver name='vhost' txmode='iothread' ioeventfd='on' event_idx='off'
            queues='5' rx_queue_size='256' tx_queue_size='256'>
      <host gso='off' ufo='off'/>
      <guest ufo='off'/>
    </driver>
  </interface>
</devices>
Then I rebooted my node for the change to take effect, and it works. However, can I disable UFO without touching the host side, or does it always have to be disabled on both host and guest as above?
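For reference, the guest-only variant I am asking about would look like this (just a sketch; whether it is sufficient is exactly my question):

    <driver name='vhost' txmode='iothread' ioeventfd='on' event_idx='off'
            queues='5' rx_queue_size='256' tx_queue_size='256'>
      <guest ufo='off'/>
    </driver>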
Thanks,
Brs,
Natsu
[libvirt-users] Libvirt access control drivers
by Anastasiya Ruzhanskaya
Hello!
According to the documentation, the access control drivers are not in a really good condition. There is polkit, but it can distinguish users only by PID. However, I have come across some articles about more fine-grained control and about SELinux drivers for libvirt. So what is the status now? Should I implement something myself if I want access control based on login? Are there instructions on how to write such drivers, or does something already exist?
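For context, a minimal sketch of the fine-grained polkit route that libvirt documents (the user name and rule file path below are illustrative; see https://libvirt.org/aclpolkit.html for the actual permission names):

# /etc/libvirt/libvirtd.conf -- switch on the fine-grained ACL driver
access_drivers = [ "polkit" ]

// /etc/polkit-1/rules.d/50-libvirt-acl.rules
// allow user "alice" to look up and start domains, and nothing else
polkit.addRule(function(action, subject) {
    if (subject.user == "alice" &&
        (action.id == "org.libvirt.api.domain.getattr" ||
         action.id == "org.libvirt.api.domain.start")) {
        return polkit.Result.YES;
    }
});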
[libvirt-users] live migration via unix socket
by David Vossel
Hey,
Over in KubeVirt we're investigating a use case where we'd like to perform
a live migration within a network namespace that does not provide libvirtd
with network access. In this scenario we would like to perform a live
migration by proxying the migration through a unix socket to a process in
another network namespace that does have network access. That external
process would live on every node in the cluster and know how to correctly
route connections between libvirtds.
virsh example of an attempted migration via unix socket:

virsh migrate --copy-storage-all --p2p --live --xml domain.xml my-vm \
    qemu+unix:///system?socket=destination-host-proxy-sock
In this example, the src libvirtd is able to establish a connection to the destination libvirtd via the unix socket proxy. However, the migration URI appears to require either a tcp or rdma network connection. If I force the migration URI to be a unix socket, I receive an error [1] indicating that qemu+unix is not a valid transport.
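For illustration, the forced attempt looked roughly like this (the --migrateuri socket path here is hypothetical):

virsh migrate --copy-storage-all --p2p --live --xml domain.xml my-vm \
    qemu+unix:///system?socket=destination-host-proxy-sock \
    --migrateuri unix:///var/run/migration-proxy.sock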
Technically, with qemu+kvm, I believe what we're attempting should be possible (even though it is inefficient). Please correct me if I'm wrong. Is there a way to achieve this migration-via-unix-socket functionality using libvirt? Also, is there a reason why the migration URI is limited to tcp/rdma?
Thanks!
- David
[1]
https://github.com/libvirt/libvirt/blob/master/src/qemu/qemu_migration.c#...
[libvirt-users] Snapshot overlay-file deletion
by Gionatan Danti
Hi list,
issuing the command "virsh blockcommit --delete --pivot" (on a previously created external snapshot/overlay) fails with "error: unsupported flags (0x2)". I need to manually "rm" the affected overlay file after the blockcommit.
A similar issue was reported many years ago [1]. Am I missing something, or is the "--delete" flag simply not supported?
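A sketch of the sequence in question (domain name, disk target and overlay path are made up):

virsh snapshot-create-as vm1 snap1 --disk-only --no-metadata
virsh blockcommit vm1 vda --active --pivot --delete   # error: unsupported flags (0x2)
virsh blockcommit vm1 vda --active --pivot            # works
rm /var/lib/libvirt/images/vm1.snap1                  # manual cleanup of the orphaned overlay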
Thanks.
[1] https://bugzilla.redhat.com/show_bug.cgi?id=1001475
--
Danti Gionatan
Supporto Tecnico
Assyoma S.r.l. - www.assyoma.it
email: g.danti(a)assyoma.it - info(a)assyoma.it
GPG public key ID: FF5F32A8
[libvirt-users] Setting up port forwarding to guests on nat network
by Rhys Ferris
Hello all,
I'm currently trying to figure out how to forward ports to guests that are on a NAT network. I have followed the directions at https://wiki.libvirt.org/page/Networking under the "Forwarding Incoming Connections" section, but I get "connection refused" when attempting to connect.
System: Ubuntu Server 18.04.1
Virsh / LibVirtd Version: 4.0.0
Here's the contents of /etc/libvirt/hooks/qemu:
#!/bin/bash
# IMPORTANT: Change the "VM NAME" string to match your actual VM Name.
# In order to create rules to other VMs, just duplicate the below block
# and configure it accordingly.
if [ "${1}" = "testy" ]; then
    # Update the following variables to fit your setup
    GUEST_IP='10.128.10.100'
    GUEST_PORT='22'
    HOST_PORT='2588'

    if [ "${2}" = "stopped" ] || [ "${2}" = "reconnect" ]; then
        /sbin/iptables -D FORWARD -o virbr0 -d $GUEST_IP -j ACCEPT
        /sbin/iptables -t nat -D PREROUTING -p tcp --dport $HOST_PORT -j DNAT --to $GUEST_IP:$GUEST_PORT
    fi

    if [ "${2}" = "start" ] || [ "${2}" = "reconnect" ]; then
        /sbin/iptables -I FORWARD -o virbr0 -d $GUEST_IP -j ACCEPT
        /sbin/iptables -t nat -I PREROUTING -p tcp --dport $HOST_PORT -j DNAT --to $GUEST_IP:$GUEST_PORT
    fi
fi
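(A trace line at the top of the script can confirm the hook fires with the expected arguments; this is just a sketch and the log path is arbitrary:

echo "$(date) qemu hook: vm=$1 op=$2" >> /tmp/qemu-hook.log
)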
Here's my network XML:
<network>
  <name>olympus</name>
  <uuid>3b0d968c-8166-42f7-8109-e5f0317cab42</uuid>
  <forward mode='nat'>
    <nat>
      <port start='1024' end='65535'/>
    </nat>
  </forward>
  <bridge name='virbr1' stp='on' delay='0'/>
  <mac address='52:54:00:bb:18:6b'/>
  <ip address='10.128.10.1' netmask='255.255.255.0'>
    <dhcp>
      <range start='10.128.10.2' end='10.128.10.254'/>
      <host mac='52:54:00:8d:f5:0c' name='testy' ip='10.128.10.100'/>
    </dhcp>
  </ip>
</network>
And here’s the results of iptables -L -vt nat:
BEFORE VM BOOT:
Chain PREROUTING (policy ACCEPT 46615 packets, 6618K bytes)
pkts bytes target prot opt in out source destination
Chain INPUT (policy ACCEPT 46615 packets, 6618K bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 198K packets, 18M bytes)
pkts bytes target prot opt in out source destination
Chain POSTROUTING (policy ACCEPT 198K packets, 18M bytes)
pkts bytes target prot opt in out source destination
24 1812 RETURN all -- any any 10.128.10.0/24 base-address.mcast.net/24
0 0 RETURN all -- any any 10.128.10.0/24 255.255.255.255
17 1020 MASQUERADE tcp -- any any 10.128.10.0/24 !10.128.10.0/24 masq ports: 1024-65535
15 1700 MASQUERADE udp -- any any 10.128.10.0/24 !10.128.10.0/24 masq ports: 1024-65535
0 0 MASQUERADE all -- any any 10.128.10.0/24 !10.128.10.0/24
22 1666 RETURN all -- any any 192.168.122.0/24 base-address.mcast.net/24
0 0 RETURN all -- any any 192.168.122.0/24 255.255.255.255
0 0 MASQUERADE tcp -- any any 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535
8 1168 MASQUERADE udp -- any any 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535
0 0 MASQUERADE all -- any any 192.168.122.0/24 !192.168.122.0/24
AFTER VM BOOT
Chain PREROUTING (policy ACCEPT 2 packets, 120 bytes)
pkts bytes target prot opt in out source destination
0 0 DNAT tcp -- any any anywhere anywhere tcp dpt:2588 to:10.128.10.100:22
Chain INPUT (policy ACCEPT 2 packets, 120 bytes)
pkts bytes target prot opt in out source destination
Chain OUTPUT (policy ACCEPT 18 packets, 1263 bytes)
pkts bytes target prot opt in out source destination
Chain POSTROUTING (policy ACCEPT 18 packets, 1263 bytes)
pkts bytes target prot opt in out source destination
24 1812 RETURN all -- any any 10.128.10.0/24 base-address.mcast.net/24
0 0 RETURN all -- any any 10.128.10.0/24 255.255.255.255
17 1020 MASQUERADE tcp -- any any 10.128.10.0/24 !10.128.10.0/24 masq ports: 1024-65535
15 1700 MASQUERADE udp -- any any 10.128.10.0/24 !10.128.10.0/24 masq ports: 1024-65535
0 0 MASQUERADE all -- any any 10.128.10.0/24 !10.128.10.0/24
22 1666 RETURN all -- any any 192.168.122.0/24 base-address.mcast.net/24
0 0 RETURN all -- any any 192.168.122.0/24 255.255.255.255
0 0 MASQUERADE tcp -- any any 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535
8 1168 MASQUERADE udp -- any any 192.168.122.0/24 !192.168.122.0/24 masq ports: 1024-65535
0 0 MASQUERADE all -- any any 192.168.122.0/24 !192.168.122.0/24
And lastly, here's what actually happens when I attempt to SSH:
rhys@odin:~$ ssh rhys(a)172.16.99.170 -p 2258
ssh: connect to host 172.16.99.170 port 2258: Connection refused
rhys@odin:~$
The connection refused is instant, not a timeout.
I’ve ensured that ufw is disabled.
Any help appreciated. I just can’t figure this out.
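One more data point worth collecting (a sketch): whether a connection attempt increments the DNAT rule's packet counter at all, since the counters in the output above are per-rule:

iptables -t nat -L PREROUTING -v -n | grep DNAT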
[libvirt-users] ceph rbd pool and libvirt manageability (virt-install)
by Jelle de Jong
Hello everybody,
I created an rbd pool and activated it, but I can't seem to create volumes in it with virsh or virt-install.
# virsh pool-dumpxml myrbdpool
<pool type='rbd'>
  <name>myrbdpool</name>
  <uuid>2d786f7a-2df3-4d79-ae60-1535bcf1c6b5</uuid>
  <capacity unit='bytes'>6997998301184</capacity>
  <allocation unit='bytes'>10309227031</allocation>
  <available unit='bytes'>6977204658176</available>
  <source>
    <host name='ceph01.powercraft.nl' port='6789'/>
    <host name='ceph02.powercraft.nl' port='6789'/>
    <host name='ceph03.powercraft.nl' port='6789'/>
    <name>libvirt-pool</name>
    <auth type='ceph' username='libvirt'>
      <secret uuid='029a334e-ed57-4293-bb99-ffafa8867122'/>
    </auth>
  </source>
</pool>
# virt-install --version
1.0.1
# virsh --version
1.2.9
I ended up using virsh edit ceph-test.powercraft.nl and creating the disk manually:
<disk type='network' device='disk'>
  <auth username='libvirt'>
    <secret type='ceph' uuid='029a334e-ed57-4293-bb99-ffafa8867122'/>
  </auth>
  <source protocol='rbd' name='libvirt-pool/kvm01-storage'>
    <host name='ceph01.powercraft.nl' port='6789'/>
    <host name='ceph02.powercraft.nl' port='6789'/>
    <host name='ceph03.powercraft.nl' port='6789'/>
  </source>
  <target dev='vdc' bus='virtio'/>
</disk>
I use virt-install a lot to define, import and undefine domains; how can I use virt-install to manage my rbd disks?
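For the record, what I would expect to work is something along these lines (a sketch; the volume name is made up, and my old virsh/virt-install versions may not support it):

virsh vol-create-as myrbdpool kvm01-storage 10G
virt-install --name kvm01 --ram 2048 --import \
    --disk vol=myrbdpool/kvm01-storage,bus=virtio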
Kind regards,
Jelle de Jong
[libvirt-users] OVMF / UEFI boot abnormally
by Allence
Hey guys, today I used OVMF to start a Win10 guest and my monitor shows no output. But with SeaBIOS it works normally. Why? Below are my detailed settings.
ovmf:
os:
<loader readonly='yes' type='pflash'>/root/OVMF_CODE.fd</loader>
<nvram>/root/OVMF_VARS.fd</nvram>
vga:
<qemu:arg value='-device'/>
<qemu:arg value='vfio-pci,host=01:00.0,romfile=/root/10de:1b06:19da:1474.dump,multifunction=on'/>
bios:
os:
<loader readonly='yes' type='rom'>/usr/share/qemu/bios.bin</loader>
vga:
<qemu:arg value='-device'/>
<qemu:arg value='vfio-pci,host=01:00.0,romfile=/root/us.dump,multifunction=on,x-vga=on'/>
And this is my kvm environment:
qemu: 2.12.0
libvirt: 4.14
kernel: 4.17.2
host: LFS 8.2 (similar to Ubuntu)