[libvirt-users] [virtual interface] detach interface during boot succeeds with no changes
by Yalan Zhang
Hi guys,
when I detach an interface from a VM during boot (before the guest has
finished booting), it always fails silently: virsh reports success, but the
interface is still present in the live XML. I'm not sure if there is an
existing bug for this. I have confirmed with someone that disks show similar
behavior; is this considered acceptable?
# virsh destroy rhel7.2; virsh start rhel7.2; sleep 2; virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; virsh dumpxml rhel7.2 | grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
    <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
  </controller>
  <interface type='network'>
    <mac address='52:54:00:98:c4:a0'/>
    <source network='default' bridge='virbr0'/>
    <target dev='vnet0'/>
    <model type='rtl8139'/>
    <alias name='net0'/>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
  </interface>
When I detach after the VM has finished booting, increasing the sleep time to 10 seconds, it succeeds and the interface is actually removed:
# virsh destroy rhel7.2; virsh start rhel7.2; sleep 10; virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; virsh dumpxml rhel7.2 | grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
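For anyone else who hits this, a rough workaround is to poll the live XML and retry until the detach really takes effect (a sketch, using the domain and MAC from the example above):
# Retry the detach until the interface disappears from the live XML
for i in $(seq 1 10); do
    virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0 || true
    sleep 2
    virsh dumpxml rhel7.2 | grep -q '52:54:00:98:c4:a0' || break
done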
-------
Best Regards,
Yalan Zhang
IRC: yalzhang
Internal phone: 8389413
2 years, 2 months
[libvirt-users] Question about disabling UFO on guest
by Bao Nguyen
Hello everyone,
I would like to ask a question about disabling UFO on a virtio vNIC in my
guest. I have read the documentation at https://libvirt.org/formatdomain.html:
*host*
The csum, gso, tso4, tso6, ecn and ufo attributes with possible values on
and off can be used to turn off host offloading options. By default, the
supported offloads are enabled by QEMU. *Since 1.2.9 (QEMU only)* The
mrg_rxbuf attribute can be used to control mergeable rx buffers on the host
side. Possible values are on (default) and off. *Since 1.2.13 (QEMU only)*
*guest*
The csum, tso4, tso6, ecn and ufo attributes with possible values on and
off can be used to turn off guest offloading options. By default, the
supported offloads are enabled by QEMU. *Since 1.2.9 (QEMU only)*
Then I disabled UFO on the vNIC of my guest with the following configuration:
<devices>
  <interface type='network'>
    <source network='default'/>
    <target dev='vnet1'/>
    <model type='virtio'/>
    <driver name='vhost' txmode='iothread' ioeventfd='on' event_idx='off'
            queues='5' rx_queue_size='256' tx_queue_size='256'>
      <host gso='off' ufo='off'/>
      <guest ufo='off'/>
    </driver>
  </interface>
</devices>
Then I rebooted my node for the change to take effect, and it works. However,
can I disable UFO without touching the host settings, or does it always have
to be disabled on both host and guest like this?
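For reference, whether the change actually took effect can be verified from inside the guest with ethtool (a sketch; eth0 is an assumed interface name):
# UFO should be reported as "off" once <guest ufo='off'/> applies
ethtool -k eth0 | grep udp-fragmentation-offload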
Thanks,
Brs,
Natsu
4 years, 3 months
[libvirt-users] Libvirt access control drivers
by Anastasiya Ruzhanskaya
Hello!
According to the documentation, the access control drivers are not really in
"good condition". There is polkit, but it can only distinguish users by their
PID. However, I have come across some articles about more fine-grained
control and about SELinux drivers for libvirt. So, what is the status now?
Should I implement something myself if I want access control based on login?
Are there instructions on how to write these drivers, or does something
already exist?
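For context, the usual first step for login-based control is a polkit rule keyed on the user; a minimal sketch (polkit >= 0.106 JavaScript rules, written from the shell; the user name is illustrative):
# Grant the user "alice" full libvirt management access
cat > /etc/polkit-1/rules.d/50-libvirt-alice.rules <<'EOF'
polkit.addRule(function(action, subject) {
    if (action.id == "org.libvirt.unix.manage" &&
        subject.user == "alice") {
        return polkit.Result.YES;
    }
});
EOF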
6 years
[libvirt-users] ceph rbd pool and libvirt manageability (virt-install)
by Jelle de Jong
Hello everybody,
I created an RBD pool and activated it, but I can't seem to create
volumes in it with virsh or virt-install.
# virsh pool-dumpxml myrbdpool
<pool type='rbd'>
  <name>myrbdpool</name>
  <uuid>2d786f7a-2df3-4d79-ae60-1535bcf1c6b5</uuid>
  <capacity unit='bytes'>6997998301184</capacity>
  <allocation unit='bytes'>10309227031</allocation>
  <available unit='bytes'>6977204658176</available>
  <source>
    <host name='ceph01.powercraft.nl' port='6789'/>
    <host name='ceph02.powercraft.nl' port='6789'/>
    <host name='ceph03.powercraft.nl' port='6789'/>
    <name>libvirt-pool</name>
    <auth type='ceph' username='libvirt'>
      <secret uuid='029a334e-ed57-4293-bb99-ffafa8867122'/>
    </auth>
  </source>
</pool>
# virt-install --version
1.0.1
# virsh --version
1.2.9
I ended up using virsh edit on ceph-test.powercraft.nl and creating
the disk manually:
<disk type='network' device='disk'>
  <auth username='libvirt'>
    <secret type='ceph' uuid='029a334e-ed57-4293-bb99-ffafa8867122'/>
  </auth>
  <source protocol='rbd' name='libvirt-pool/kvm01-storage'>
    <host name='ceph01.powercraft.nl' port='6789'/>
    <host name='ceph02.powercraft.nl' port='6789'/>
    <host name='ceph03.powercraft.nl' port='6789'/>
  </source>
  <target dev='vdc' bus='virtio'/>
</disk>
I use virt-install a lot to define, import and undefine domains; how
can I use virt-install to manage my RBD disks?
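For what it's worth, the storage-pool route would look roughly like this (a sketch; the volume name and size are illustrative, and an older virt-install such as 1.0.1 may not support RBD volumes yet):
# Create a volume inside the libvirt-managed RBD pool
virsh vol-create-as myrbdpool kvm01-storage 20G --format raw
# Reference the pool volume from virt-install
virt-install --name kvm01.example.com --ram 2048 \
    --disk vol=myrbdpool/kvm01-storage,bus=virtio \
    --import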
Kind regards,
Jelle de Jong
6 years, 3 months
[libvirt-users] Trouble passing PCI device in isolated IOMMU group
by Quincy Wofford
I'm attempting PCI passthrough from host to guest on an HP ProLiant 380P,
which has an outdated BIOS (2014) but does support VT-d. I'm running
CentOS 7, kernel 3.10.0-862.9.1.el7.x86_64.
I have an Intel 82580 NIC installed with 4 ports. Each of these ports is in
its own IOMMU group (I enabled SR-IOV in the BIOS, which might be the
reason they show up separately).
After detaching the device and adding a 'hostdev' device with the appropriate
PCI address, I attempt to start my VM. I get "failed to set iommu for
container: Operation not permitted". As recommended here (
http://vfio.blogspot.com/2014/08/vfiovga-faq.html), I searched dmesg for:
-------------------
No interrupt remapping support. Use the module param
"allow_unsafe_interrupts" to enable VFIO IOMMU support on this platform
-------------------
...but nothing similar exists in my logs.
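For reference, the interrupt remapping state can also be checked directly, and vfio has a documented escape hatch if remapping is genuinely absent (a sketch; the module option weakens isolation, so only for testing):
# Check whether the kernel enabled interrupt remapping
dmesg | grep -i 'DMAR-IR'
# Confirm how the devices are grouped
find /sys/kernel/iommu_groups/ -type l
# Last resort, testing only: allow vfio without interrupt remapping
echo 'options vfio_iommu_type1 allow_unsafe_interrupts=1' > /etc/modprobe.d/vfio-unsafe.conf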
Since this device is showing up in its own IOMMU group, I assume the ACS
override patch won't get me any further. In any case, leaving ACS override
on permanently is not an option for me, though I can turn it on for testing
since the server is not currently in production.
Any idea why this could be failing?
6 years, 3 months
[libvirt-users] How to set the MTU size of VM interface
by netsurfed
Dear all,
I started a VM with an interface MTU of 1450 and logged in to the VM to check the MTU, but the setting didn't take effect.
The VM XML is like this:
<interface type='bridge'>
  <mac address='52:54:00:54:14:f8'/>
  <source bridge='ovsbr1'/>
  <virtualport type='openvswitch'>
    <parameters interfaceid='a42e5b42-09db-4cfa-b198-d2ce62843378'/>
  </virtualport>
  <target dev='vnet0'/>
  <model type='virtio'/>
  <mtu size='1450'/>
  <alias name='net0'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
However, the actual MTU inside the VM is still 1500, as follows:
root@ubuntu-zhf:~# ip addr
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 52:54:00:54:14:f8 brd ff:ff:ff:ff:ff:ff
The MTU of both the OVS bridge and the vnet device is 1450 on my hypervisor:
root@ubuntu-191:~# ip addr show ovsbr1
16: ovsbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc noqueue state UNKNOWN group default qlen 1
link/ether 12:01:7b:5b:85:49 brd ff:ff:ff:ff:ff:ff
inet6 fe80::1001:7bff:fe5b:8549/64 scope link
valid_lft forever preferred_lft forever
root@ubuntu-191:~# ip addr show vnet0
22: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1450 qdisc pfifo_fast master ovs-system state UNKNOWN group default qlen 1000
link/ether fe:54:00:54:14:f8 brd ff:ff:ff:ff:ff:ff
inet6 fe80::fc54:ff:fe54:14f8/64 scope link
valid_lft forever preferred_lft forever
I use OVS to create VXLAN networks and add VMs to the OVS bridge. Because of the VXLAN encapsulation overhead, I need to set the MTU of the VM to 1450.
How can I set the MTU of the VM? Thanks.
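As a stopgap, the MTU can also be forced from inside the guest (a sketch; ens3 is the interface from the output above):
# One-off change inside the guest
ip link set dev ens3 mtu 1450
# Persist it with ifupdown (Ubuntu guest): add "mtu 1450" to the
# "iface ens3" stanza in /etc/network/interfaces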
Below is some information about my hypervisor:
root@ubuntu-192:~# virsh -V
Virsh command line tool of libvirt 3.4.0
See web site at http://libvirt.org/
Compiled with support for:
Hypervisors: QEMU/KVM LXC UML OpenVZ VMware VirtualBox Test
Networking: Remote Network Bridging Interface udev Nwfilter VirtualPort
Storage: Dir Filesystem SCSI Multipath iSCSI LVM
Miscellaneous: Daemon Nodedev SELinux Secrets Debug Modular
root@ubuntu-192:~# qemu-x86_64 --version
qemu-x86_64 version 2.9.0
Copyright (c) 2003-2017 Fabrice Bellard and the QEMU Project developers
root@ubuntu-192:~# uname -a
Linux ubuntu-192 4.4.0-62-generic #83-Ubuntu SMP Wed Jan 18 14:10:15 UTC 2017 x86_64 x86_64 x86_64 GNU/Linux
6 years, 3 months
[libvirt-users] multiple devices in the same iommu group in L1 guest
by Yalan Zhang
Hi,
I have a guest with vIOMMU enabled, but inside the guest several devices
end up in the same IOMMU group.
Could someone help check whether I missed something?
Thank you very much!
1. guest xml:
# virsh edit q
...
<os>
  <type arch='x86_64' machine='pc-q35-rhel7.5.0'>hvm</type>
  <loader readonly='yes' secure='yes' type='pflash'>/usr/share/OVMF/OVMF_CODE.secboot.fd</loader>
  <nvram>/var/lib/libvirt/qemu/nvram/q_VARS.fd</nvram>
</os>
...
<features>
  ...
  <ioapic driver='qemu'/>
</features>
<cpu mode='host-passthrough' check='none'>
  <feature policy='require' name='vmx'/>
</cpu>
...
<devices>
  ...
  <controller type='pci' index='7' model='pcie-root-port'>
    <model name='pcie-root-port'/>
    <target chassis='7' port='0x15'/>
    <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
  </controller>
  <interface type='network'>
    <mac address='52:54:00:b9:ff:90'/>
    <source network='default'/>
    <model type='e1000'/>
    <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
  </interface>
  <iommu model='intel'>
    <driver intremap='on' caching_mode='on' iotlb='on'/>
  </iommu>
</devices>
...
2. The guest has 'intel_iommu=on' in its kernel command line; reboot the guest.
3. Log in to the guest and check:
# dmesg | grep -i DMAR
[ 0.000000] ACPI: DMAR 000000007d83f000 00050 (v01 BOCHS BXPCDMAR 00000001 BXPC 00000001)
[ 0.000000] DMAR: IOMMU enabled
[ 0.155178] DMAR: Host address width 39
[ 0.155180] DMAR: DRHD base: 0x000000fed90000 flags: 0x1
[ 0.155221] DMAR: dmar0: reg_base_addr fed90000 ver 1:0 cap 12008c22260286 ecap f00f5e
[ 0.155228] DMAR: ATSR flags: 0x1
[ 0.155231] DMAR-IR: IOAPIC id 0 under DRHD base 0xfed90000 IOMMU 0
[ 0.155232] DMAR-IR: Queued invalidation will be enabled to support x2apic and Intr-remapping.
[ 0.156843] DMAR-IR: Enabled IRQ remapping in x2apic mode
[ 2.112369] DMAR: No RMRR found
[ 2.112505] DMAR: dmar0: Using Queued invalidation
[ 2.112669] DMAR: Setting RMRR:
[ 2.112671] DMAR: Prepare 0-16MiB unity mapping for LPC
[ 2.112820] DMAR: Setting identity map for device 0000:00:1f.0 [0x0 - 0xffffff]
[ 2.211577] DMAR: Intel(R) Virtualization Technology for Directed I/O
===> This is expected
# dmesg | grep -i iommu |grep device
[ 2.212267] iommu: Adding device 0000:00:00.0 to group 0
[ 2.212287] iommu: Adding device 0000:00:01.0 to group 1
[ 2.212372] iommu: Adding device 0000:00:02.0 to group 2
[ 2.212392] iommu: Adding device 0000:00:02.1 to group 2
[ 2.212411] iommu: Adding device 0000:00:02.2 to group 2
[ 2.212444] iommu: Adding device 0000:00:02.3 to group 2
[ 2.212464] iommu: Adding device 0000:00:02.4 to group 2
[ 2.212482] iommu: Adding device 0000:00:02.5 to group 2
[ 2.212520] iommu: Adding device 0000:00:1d.0 to group 3
[ 2.212533] iommu: Adding device 0000:00:1d.1 to group 3
[ 2.212541] iommu: Adding device 0000:00:1d.2 to group 3
[ 2.212550] iommu: Adding device 0000:00:1d.7 to group 3
[ 2.212567] iommu: Adding device 0000:00:1f.0 to group 4
[ 2.212576] iommu: Adding device 0000:00:1f.2 to group 4
[ 2.212585] iommu: Adding device 0000:00:1f.3 to group 4
[ 2.212599] iommu: Adding device 0000:01:00.0 to group 2
[ 2.212605] iommu: Adding device 0000:02:01.0 to group 2
[ 2.212621] iommu: Adding device 0000:04:00.0 to group 2
[ 2.212634] iommu: Adding device 0000:05:00.0 to group 2
[ 2.212646] iommu: Adding device 0000:06:00.0 to group 2
[ 2.212657] iommu: Adding device 0000:07:00.0 to group 2
====> several devices in the same iommu group
# virsh nodedev-dumpxml pci_0000_07_00_0
<device>
  <name>pci_0000_07_00_0</name>
  <path>/sys/devices/pci0000:00/0000:00:02.5/0000:07:00.0</path>
  <parent>pci_0000_00_02_5</parent>
  <driver>
    <name>e1000</name>
  </driver>
  <capability type='pci'>
    <domain>0</domain>
    <bus>7</bus>
    <slot>0</slot>
    <function>0</function>
    <product id='0x100e'>82540EM Gigabit Ethernet Controller</product>
    <vendor id='0x8086'>Intel Corporation</vendor>
    <iommuGroup number='2'>
      <address domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
      <address domain='0x0000' bus='0x00' slot='0x02' function='0x1'/>
      <address domain='0x0000' bus='0x00' slot='0x02' function='0x2'/>
      <address domain='0x0000' bus='0x00' slot='0x02' function='0x3'/>
      <address domain='0x0000' bus='0x00' slot='0x02' function='0x4'/>
      <address domain='0x0000' bus='0x00' slot='0x02' function='0x5'/>
      <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      <address domain='0x0000' bus='0x02' slot='0x01' function='0x0'/>
      <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
      <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
      <address domain='0x0000' bus='0x06' slot='0x00' function='0x0'/>
      <address domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
    </iommuGroup>
  </capability>
</device>
Thus, the device cannot be attached to an L2 guest:
# cat hostdev.xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <address domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
  </source>
</hostdev>
# virsh attach-device rhel hostdev.xml
error: Failed to attach device from hostdev.xml
error: internal error: unable to execute QEMU command 'device_add': vfio
error: 0000:07:00.0: group 2 is not viable
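One thing that may be worth trying (a sketch, not a confirmed fix): all six pcie-root-ports are functions of the single multifunction device at 00:02, and if the emulated ports expose no ACS, the guest kernel has to put the whole slot, and everything behind it, into one group. Giving the root port that carries the NIC its own slot might produce an isolated group (the slot number is illustrative; pick a free one):
<controller type='pci' index='7' model='pcie-root-port'>
  <model name='pcie-root-port'/>
  <target chassis='7' port='0x15'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</controller>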
-------
Best Regards,
Yalan Zhang
IRC: yalzhang
Internal phone: 8389413
6 years, 3 months
[libvirt-users] Adding a VLAN tag to a libvirt SR-IOV VF network using the "virsh net-update" command
by Yegappan Lakshmanan
Hi all,
How do you add a VLAN tag to a libvirt SR-IOV VF network using the "virsh net-update" command? I couldn't find the right section to pass to "virsh net-update" to add the VLAN tag.
I have the following libvirt network defined for a SR-IOV VF:
<network>
  <name>GE0-0-SRIOV-1</name>
  <uuid>7bc67166-c78e-4bcf-89ee-377dd9086631</uuid>
  <forward mode='hostdev' managed='yes'>
    <driver name='vfio'/>
    <address type='pci' domain='0x0000' bus='0x02' slot='0x10' function='0x0'/>
  </forward>
</network>
I am trying to add VLAN 100 to this network. If I use the following command to add a VLAN tag:
virsh net-update GE0-0-SRIOV-1 modify bridge --xml "<vlan trunk='no'><tag id='100'/></vlan>"
It fails because the "bridge" section doesn't support adding a VLAN tag. I have tried other section values, like "domain", and they all fail.
I am able to add the VLAN to the portgroup section, but then the VLAN configuration is not propagated to the underlying physical PF interface.
If I manually add the VLAN configuration using the "virsh net-edit" command and then connect a VM to this network, the VLAN information is correctly propagated to the underlying physical NIC device. I need a way to automate this configuration; a rough interim script is below.
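In the meantime, the net-edit step can be automated roughly like this (a sketch; the sed expression is illustrative and assumes the XML layout shown above):
# Dump, patch and re-define the network with a top-level <vlan> element
virsh net-dumpxml GE0-0-SRIOV-1 > /tmp/net.xml
sed -i "s|</forward>|</forward>\n  <vlan><tag id='100'/></vlan>|" /tmp/net.xml
virsh net-destroy GE0-0-SRIOV-1
virsh net-define /tmp/net.xml
virsh net-start GE0-0-SRIOV-1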
Any pointers?
Thanks,
Yegappan
6 years, 3 months