[libvirt-users] [virtual interface] detach interface during boot succeeds with no changes
by Yalan Zhang
Hi guys,
When I detach an interface from a VM during boot (before the guest has
finished booting), it always fails: the command reports success, but no
change is actually made. I'm not sure whether there is an existing bug
for this. I have confirmed with someone that disks show similar
behavior. Is this also acceptable?
# virsh destroy rhel7.2; virsh start rhel7.2; sleep 2; virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; virsh dumpxml rhel7.2 | grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
<address type='pci' domain='0x0000' bus='0x00' slot='0x06'
function='0x0'/>
</controller>
<interface type='network'>
<mac address='52:54:00:98:c4:a0'/>
<source network='default' bridge='virbr0'/>
<target dev='vnet0'/>
<model type='rtl8139'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03'
function='0x0'/>
</interface>
When I detach after the VM has booted (expanding the sleep time to 10 seconds), it succeeds:
# virsh destroy rhel7.2; virsh start rhel7.2; sleep 10; virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; virsh dumpxml rhel7.2 | grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
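(A workaround sketch, for reference: rather than a fixed sleep, poll the
live XML until the guest has actually processed the unplug; the domain
name and MAC are the ones from the example above.)
# sketch: wait until the interface disappears from the live XML
virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0
for i in $(seq 1 30); do
    virsh dumpxml rhel7.2 | grep -q '52:54:00:98:c4:a0' || { echo "interface gone"; break; }
    sleep 1
done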
-------
Best Regards,
Yalan Zhang
IRC: yalzhang
Internal phone: 8389413
[libvirt-users] Question about disabling UFO on guest
by Bao Nguyen
Hello everyone,
I would like to ask a question about disabling UFO on a virtio vNIC in
my guest. I have read the documentation at https://libvirt.org/formatdomain.html:
*host*
The csum, gso, tso4, tso6, ecn and ufo attributes with possible values
on and off can be used to turn off host offloading options. By default,
the supported offloads are enabled by QEMU. *Since 1.2.9 (QEMU only)*
The mrg_rxbuf attribute can be used to control mergeable rx buffers on
the host side. Possible values are on (default) and off. *Since 1.2.13
(QEMU only)*
*guest*
The csum, tso4, tso6, ecn and ufo attributes with possible values on
and off can be used to turn off guest offloading options. By default,
the supported offloads are enabled by QEMU. *Since 1.2.9 (QEMU only)*
Then I disabled UFO on the vNIC of my guest with the following configuration:
<devices>
<interface type='network'>
<source network='default'/>
<target dev='vnet1'/>
<model type='virtio'/>
<driver name='vhost' txmode='iothread' ioeventfd='on' event_idx='off'
queues='5' rx_queue_size='256' tx_queue_size='256'>
<host gso='off' ufo='off'/>
<guest ufo='off'/>
</driver>
</interface>
</devices>
Then I rebooted my node for the change to take effect, and it works.
However, can I disable UFO without touching the host side, or does it
always have to be disabled on both host and guest like this?
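(A sketch of the guest-only variant being asked about, with the host
element simply omitted; whether the host-side offloads are then left
alone is exactly the open question here.)
<driver name='vhost' queues='5'>
  <guest ufo='off'/>
</driver>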
Thanks,
Brs,
Natsu
[libvirt-users] concurrent migration of several domains rarely fails
by Lentes, Bernd
Hi,
I have a two-node cluster with several domains as resources. During testing I tried several times to migrate some domains concurrently.
Usually it succeeded, but rarely it failed. I found one clue in the log:
Dec 03 16:03:02 ha-idg-1 libvirtd[3252]: 2018-12-03 15:03:02.758+0000: 3252: error : virKeepAliveTimerInternal:143 : internal error: connection closed due to keepalive timeout
The domains are configured similar:
primitive vm_geneious VirtualDomain \
params config="/mnt/san/share/config.xml" \
params hypervisor="qemu:///system" \
params migration_transport=ssh \
op start interval=0 timeout=120 trace_ra=1 \
op stop interval=0 timeout=130 trace_ra=1 \
op monitor interval=30 timeout=25 trace_ra=1 \
op migrate_from interval=0 timeout=300 trace_ra=1 \
op migrate_to interval=0 timeout=300 trace_ra=1 \
meta allow-migrate=true target-role=Started is-managed=true \
utilization cpu=2 hv_memory=8000
What is the algorithm for discovering the port used for live migration?
I have the impression that "params migration_transport=ssh" is worthless; port 22 isn't involved in live migration.
My experience is that TCP ports > 49151 are used for the migration, but the exact procedure isn't clear to me.
Does live migration use TCP port 49152 first and, for each following domain, one port higher?
E.g., for the concurrent live migration of three domains: 49152, 49153 and 49154.
Why does live migration of three domains usually succeed, although only ports 49152 and 49153 are open on both hosts?
Is the migration not really concurrent, but sometimes sequential?
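(For reference, a sketch of the relevant knobs: on QEMU hosts, libvirt
takes migration ports from a range configured in /etc/libvirt/qemu.conf;
the values below are the usual defaults, but check the comments in your
version.)
# /etc/libvirt/qemu.conf -- migration port range (sketch, usual defaults)
migration_port_min = 49152
migration_port_max = 49215
(A port can also be pinned per migration via an explicit migration URI;
the peer host name and port here are illustrative.)
virsh migrate --live vm_geneious qemu+ssh://ha-idg-2/system tcp://ha-idg-2:49172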
Bernd
--
Bernd Lentes
Systemadministration
Institut für Entwicklungsgenetik
Gebäude 35.34 - Raum 208
HelmholtzZentrum münchen
bernd.lentes@helmholtz-muenchen.de
phone: +49 89 3187 1241
fax: +49 89 3187 2294
http://www.helmholtz-muenchen.de/idg
he who makes mistakes can learn something
he who does nothing learns nothing either
[libvirt-users] macvtap and tagged VLANs to the VM
by Marc Haber
Hi,
I would like to run a network firewall as a VM on a KVM host. There are
~25 VLANs delivered to the KVM host on three dedicated links, with no
LACP or anything of the sort. I have the VLANs 100-180 on the host's
enp1s0, the VLANs 200-280 on the host's enp2s0 and the VLANs 300-380 on
the host's enp3s0.
To save myself from configuring all VLANs on the KVM host, I'd like to
hand the entire ethernet link to the VM and have the VLAN interfaces
there. Using classical Linux bridges (brctl), things work fine.
They don't when I try macvtap:
On the host:
4: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
link/ether 00:0d:b9:34:2a:fe brd ff:ff:ff:ff:ff:ff promiscuity 1 addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
5: unt382@enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 00:0d:b9:34:2a:fe brd ff:ff:ff:ff:ff:ff promiscuity 0
vlan protocol 802.1Q id 382 <REORDER_HDR> addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
15: macvtap3@enp3s0: <BROADCAST,MULTICAST,ALLMULTI,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 500
link/ether 52:54:00:bf:bb:ab brd ff:ff:ff:ff:ff:ff promiscuity 0
macvtap mode bridge addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
4: enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 00:0d:b9:34:2a:fe brd ff:ff:ff:ff:ff:ff
inet6 fe80::20d:b9ff:fe34:2afe/64 scope link
valid_lft forever preferred_lft forever
5: unt382@enp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 00:0d:b9:34:2a:fe brd ff:ff:ff:ff:ff:ff
inet6 fe80::20d:b9ff:fe34:2afe/64 scope link
valid_lft forever preferred_lft forever
15: macvtap3@enp3s0: <BROADCAST,MULTICAST,ALLMULTI,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 500
link/ether 52:54:00:bf:bb:ab brd ff:ff:ff:ff:ff:ff
inet6 fe80::5054:ff:febf:bbab/64 scope link
valid_lft forever preferred_lft forever
In the XML:
<interface type='direct'>
<mac address='52:54:00:bf:bb:ab'/>
<source dev='enp3s0' mode='bridge'/>
<target dev='macvtap3'/>
<model type='virtio'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
And in the VM:
root@grml ~ # ip -d link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 promiscuity 0 addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP mode DEFAULT group default qlen 1000
link/ether 52:54:00:bf:bb:ab brd ff:ff:ff:ff:ff:ff promiscuity 0 addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
3: vlan0@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
link/ether 52:54:00:bf:bb:ab brd ff:ff:ff:ff:ff:ff promiscuity 0
vlan protocol 802.1Q id 382 <REORDER_HDR> addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535
root@grml ~ # ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
inet6 ::1/128 scope host
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 52:54:00:bf:bb:ab brd ff:ff:ff:ff:ff:ff
inet6 fe80::5054:ff:febf:bbab/64 scope link
valid_lft forever preferred_lft forever
3: vlan0@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
link/ether 52:54:00:bf:bb:ab brd ff:ff:ff:ff:ff:ff
inet 192.168.252.220/24 brd 192.168.252.255 scope global vlan0
valid_lft forever preferred_lft forever
inet6 fe80::5054:ff:febf:bbab/64 scope link
valid_lft forever preferred_lft forever
root@grml ~ #
I then ping from the VM to 192.168.252.241, which is a different host on
the network, neither the VM host the VM is running on nor another VM on
the same host. That should rule out the connectivity issues that a
macvtap interface has, right? On the VM, I see ARP requests going out,
but no answers come in.
On the pinged host, I see:
22:50:23.881163 52:54:00:bf:bb:ab > ff:ff:ff:ff:ff:ff, ethertype ARP (0x0806), length 60: Request who-has 192.168.252.241 tell 192.168.252.220, length 46
22:50:23.881242 52:54:00:95:df:a6 > 52:54:00:bf:bb:ab, ethertype ARP (0x0806), length 42: Reply 192.168.252.241 is-at 52:54:00:95:df:a6, length 28
So, the packets going out from my VM are correctly delivered to the
target, the target replies, but the replies never make it back to the
VM.
Do I see correctly that tcpdump on the VM host won't give accurate
readings since macvtap will divert the frame before tcpdump will see it?
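(One way to check, as a sketch assuming the interface names above:
capture on the lower device and on the macvtap endpoint separately and
compare; the vlan keyword is needed to match tagged ARP frames.)
tcpdump -e -n -i enp3s0 'arp or (vlan and arp)'     # what reaches the physical NIC
tcpdump -e -n -i macvtap3 'arp or (vlan and arp)'   # what reaches the VM's queue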
On the other hand, a VM attached directly to the host's unt382
interface works fine:
<interface type='direct'>
<mac address='52:54:00:cb:ed:34'/>
<source dev='unt382' mode='bridge'/>
<model type='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
I would, however, like to avoid having 25 interface stanzas in my XML.
I would appreciate any ideas to solve this issue. I know this is most
probably not a libvirt issue, but this list is about the only place
that comes to my mind where people knowledgeable about this kind of
complex network stuff might hang around. If there is a better place to
ask, I am open to suggestions. Please pardon my intrusion.
Greetings
Marc
--
-----------------------------------------------------------------------------
Marc Haber | "I don't trust Computers. They | Mailadresse im Header
Leimen, Germany | lose things." Winona Ryder | Fon: *49 6224 1600402
Nordisch by Nature | How to make an American Quilt | Fax: *49 6224 1600421
[libvirt-users] assigning PCI addresses with bus > 0x09
by Riccardo Ravaioli
Hi,
My goal is to assign PCI addresses to a number of devices (network
interfaces, disks and PCI devices in PCI passthrough) without delegating
the generation of those values to libvirt. This should give me more
control and certainly more predictability over the hardware
configuration of a virtual machine, and consequently over the names of
the interfaces in it. I'm using libvirt 4.3.0 to create qemu/KVM virtual
machines running Linux (Debian Stretch).
So, for every device of the type mentioned above, I add this line:
<address type='pci' domain='0x0000' bus='0x__' slot='0x__' function='0x0'/>,
... with values from 00 to ff in the bus field, and from 00 to 1f in the
slot field, as described in the documentation.
Long story short, I noticed that as soon as I assign values > 0x09 to
the bus field, the serial console hangs indefinitely, in both Debian and
Ubuntu. The VM seems to be started correctly and its state is "running";
in the XML file created by libvirt, I see all controllers from 0 up to
the largest bus value I assigned, so everything on that side seems ok.
What am I missing here?
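(For context, a sketch of the controller layout involved: on an i440fx
machine, every bus above 0 must be backed by a pci-bridge controller,
and each bridge itself occupies a slot on a lower bus. The indices and
the address below are illustrative, with bridges 2-9 omitted for
brevity; a controller's decimal index corresponds to the hex bus value,
so index 10 backs bus 0x0a.)
<controller type='pci' index='0' model='pci-root'/>
<controller type='pci' index='1' model='pci-bridge'/>
<controller type='pci' index='10' model='pci-bridge'/>
<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x0a' slot='0x01' function='0x0'/>
</interface>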
Thanks!
Riccardo
[libvirt-users] avoiding PCI bus 8 / using PCI function / virt-install
by b f31415
I’m using virt-install to spin up VMs. At times I have a need to spin up
VMs which have 100s of interfaces. I ran into the PCI issue mentioned in
this previous thread, based on how virt-install assigns PCI addresses to
interfaces:
https://www.redhat.com/archives/libvirt-users/2018-December/msg00064.html
Using the info mentioned there, I was able to rewrite the XML (part by
hand, part with software) to remove PCI bus references above the value
of 8 and re-address the per-interface PCI info to use the function field
(I don’t need hot-plugging).
But the process I’ve built is brittle.
I'm wondering what options I might have to better deal with this PCI issue.
Is there a way to tell virt-install, when building the info it passes to
qemu, to use the function field during the PCI assignment process, so as
to support many more interfaces before hitting the PCI bus == 8 issue?
If not, is there a way with one of the virt command-line tools to create
the XML (with the PCI addresses specified) so that I can process that
XML and rewrite the PCI addressing values? Right now the only way I’ve
been able to get that detailed XML file is to 1) virt-install and let
the VM begin the boot process, 2) virsh dumpxml, 3) virsh
destroy/undefine that VM, 4) modify the XML, and then 5) virsh create
./modified.xml. Is there a cleaner way to do this?
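(For reference, a sketch of the function-field packing being described;
the attribute names are standard libvirt XML, the addresses are
illustrative. Up to eight interfaces can share one slot when function 0
sets multifunction='on'.)
<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x01' slot='0x02' function='0x0' multifunction='on'/>
</interface>
<interface type='network'>
  <source network='default'/>
  <model type='virtio'/>
  <address type='pci' domain='0x0000' bus='0x01' slot='0x02' function='0x1'/>
</interface>
(As for generating the XML without booting: depending on the version,
virt-install has a --print-xml option that emits the generated XML
instead of starting the install, which would replace steps 1-3 above.)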
Thanks
[libvirt-users] Network filters with clean-traffic not working on Debian Stretch
by fatal
Hello,
I'm recently stumbled over the libvirt network filter capabilities and
got pretty excited. Unfortunately I'm not able to get the the
"clean-traffic" filterset working. I'm using a freshly installed Debian
Stretch with libvirt, qemu and KVM.
My config snippet looks as follows:
sudo virsh edit <VM>
[...]
<interface type='bridge'>
<mac address='52:54:00:0c:14:07'/>
<source bridge='br0'/>
<model type='virtio'/>
<filterref filter='clean-traffic'>
<parameter name='IP' value='10.10.1.2'/>
</filterref>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03'
function='0x0'/>
</interface>
<interface type='bridge'>
<mac address='52:54:00:0c:24:17'/>
<source bridge='br1'/>
<model type='virtio'/>
<filterref filter='clean-traffic'>
<parameter name='IP' value='172.16.1.2'/>
</filterref>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03'
function='0x0'/>
</interface>
[...]
I restarted the VM from within the guest, did a "virsh reboot <VM>",
restarted libvirtd and even rebooted the host - just to be sure.
Unfortunately, neither "iptables -L" nor "ebtables --list" shows any
entries added by libvirt. Omitting the "parameter name='IP'" part
didn't change anything either.
There are no error messages in /var/log/syslog or in
/var/log/libvirt/qemu/<VM>
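(A few checks worth trying, as a sketch - note that libvirt places most
of its nwfilter ebtables rules in the nat table, which "ebtables --list"
alone does not show.)
virsh nwfilter-list            # confirm clean-traffic is actually defined
ebtables -t nat -L             # per-interface libvirt chains live here
iptables -L -n                 # IP-level chains, if the filter got instantiated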
My main references were:
https://libvirt.org/firewall.html
https://libvirt.org/formatnwfilter.html
https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/...
https://www.berrange.com/posts/2011/10/03/guest-mac-spoofing-denial-of-se...
Any help would really be much appreciated!
Thanks a lot!
Sam
[libvirt-users] luks encrypted storage pool - lvm - possible?
by lejeczek
hi everyone,
do we get to encrypt LVM pools in/with libvirt?
I'm on CentOS 7.x but see no mention of it, not even on the net.
Or in other words - can guests (LXC is what I'm thinking of) run off
encrypted LVM where at least the part where the device gets luksOpened
is taken care of by libvirt?
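(For context, a sketch of the volume-level LUKS XML that libvirt
documents for storage volumes; whether the logical/LVM backend honors it
on CentOS 7 is exactly the open question. The volume name and secret
UUID are illustrative.)
<volume>
  <name>guest-lv</name>
  <capacity unit='GiB'>10</capacity>
  <encryption format='luks'>
    <secret type='passphrase' uuid='b5f0f2a1-6b2d-4c44-9f0a-1d2e3f4a5b6c'/>
  </encryption>
</volume>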
many thanks, L.
[libvirt-users] Shutdown problem from virsh
by Thomas Munfort
Hi everyone,
On a server running Debian 9, I have multiple KVM guests automatically
started at boot by libvirtd.service and running in the background. When
I shut down this server, libvirtd.service automatically and gracefully
shuts them all down first. So everything is fine so far.
Recently, I've added an additional KVM guest, and this one does not
respond to a shutdown command from virsh and therefore messes up the
shutdown of the whole Debian 9 server.
The thing is, this additional KVM guest is not a regular Linux guest. It
is DiskStation Manager (DSM) 6.2 from Synology (with XPEnology as the
boot loader).
I know this setup isn't supported by libvirt or Synology. Nonetheless, I
wonder if anyone has any ideas on the matter? Even some suggestions
might help me find what needs fixing.
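(One knob worth checking, as a sketch - on Debian the graceful shutdown
of guests at host shutdown is governed by the libvirt-guests
configuration, and a guest that ignores ACPI will simply run out the
timeout; the values below are illustrative.)
# /etc/default/libvirt-guests (sketch; values illustrative)
ON_BOOT=start            # start marked guests at host boot
ON_SHUTDOWN=shutdown     # attempt a graceful ACPI shutdown first
SHUTDOWN_TIMEOUT=120     # seconds to wait per guest before giving up
(For a guest with no working ACPI handling, the usual fallbacks are
qemu-guest-agent inside the guest - likely not an option on DSM - or an
explicit "virsh destroy <VM>" before host shutdown.)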
Thank you,
Best regards,
Thomas M.