[libvirt-users] [virtual interface] detach interface during boot succeed with no changes
by Yalan Zhang
Hi guys,
When I detach an interface from a VM during boot (before the guest has
finished booting), it always fails: virsh reports success, but the
interface is still present. I'm not sure if there is an existing bug. I
have confirmed with someone that disks show similar behavior; is this
also considered acceptable?
# virsh destroy rhel7.2; virsh start rhel7.2 ;sleep 2; virsh
detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; virsh
dumpxml rhel7.2 |grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
<address type='pci' domain='0x0000' bus='0x00' slot='0x06'
function='0x0'/>
</controller>
<interface type='network'>
<mac address='52:54:00:98:c4:a0'/>
<source network='default' bridge='virbr0'/>
<target dev='vnet0'/>
<model type='rtl8139'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03'
function='0x0'/>
</interface>
When I detach after the VM has finished booting (expanding the sleep time to 10 seconds), it succeeds.
# virsh destroy rhel7.2; virsh start rhel7.2 ;sleep 10; virsh
detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; virsh
dumpxml rhel7.2 |grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
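For reference, a hedged workaround sketch (my assumption being that the
guest simply has not finished PCI hotplug handling yet): poll until the
interface is really gone instead of relying on a fixed sleep. The retry
count is arbitrary.
# polls the live XML; stops as soon as the MAC has disappeared
for i in $(seq 1 30); do
    virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0 || true
    virsh dumpxml rhel7.2 | grep -q 52:54:00:98:c4:a0 || break
    sleep 2
done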
-------
Best Regards,
Yalan Zhang
IRC: yalzhang
Internal phone: 8389413
move mount permission denied
by Jiatong Shen
Hello community!
I am facing a mysterious error
(https://gist.github.com/jshen28/5f29eed51e0a1308684214b35f009478) which
says a move mount is not permitted.
We are using libvirt with openstack-helm, which runs libvirt in a
Docker-based Kubernetes environment. /dev/termination-log is a file
created and attached by Kubernetes, and the mount looks like
`/dev/mapper/ubuntu--vg-root on /var/log/termination-log type ext4
(rw,relatime,errors=remount-ro,data=ordered)`
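A hedged guess at one place to look: libvirt by default gives each QEMU
process a private mount namespace and move-mounts device nodes into it,
which can fail in containerized setups. Whether that is the cause here is
an assumption, but it can be tested by disabling the namespace in
/etc/libvirt/qemu.conf and restarting libvirtd:
# default is namespaces = [ "mount" ]; an empty list disables the
# per-domain mount namespace (diagnostic only, it reduces isolation)
namespaces = [ ]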
Any idea why this happens? I'd appreciate the help!
--
Best Regards,
Jiatong Shen
how to use external snapshots with memory state
by Riccardo Ravaioli
Hi all,
Best wishes for 2021! :)
So I've been reading and playing around with live snapshots and still
haven't figured out how to use an external memory snapshot. My goal is to
take a disk+memory snapshot of a running VM and, if possible, save it in
external files.
As far as I understand, I can run:
$ virsh snapshot-create $VM
... and that'll take an *internal* live snapshot of a given VM, consisting
of its disks and memory state, which will be stored in the qcow2 disk(s) of
the VM. In particular, the memory state will be stored in the first disk of
the VM. I can then use the full range of snapshot commands available:
revert, list, current, delete.
Now, an external snapshot can be taken with:
$ virsh snapshot-create-as --domain $VM mysnapshot --diskspec
vda,file=/home/riccardo/disk_mysnapshot.qcow2,snapshot=external --memspec
file=/home/riccardo/mem_mysnapshot.qcow2,snapshot=external
... with as many "--diskspec" as there are disks in the VM.
I've read the virsh manual and the libvirt API documentation, but it's not
clear to me what exactly I can do with an external snapshot afterwards, in
particular with the file containing the memory state. In articles from 7-8
years ago people state that external memory snapshots cannot be reverted...
is that still the case today? If so, what is a typical use for such files?
If not with libvirt, is it possible to revert to an external memory + disk
state in some other way, for instance through qemu commands?
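In case it helps, one manual approach I have seen discussed (a hedged
sketch, not an official revert path): the --memspec file is written in the
same format as a 'virsh save' image, and the memory state corresponds to
the instant the overlays were created (i.e. still empty), so recreating
empty overlays and restoring the memory file should bring the VM back to
the snapshot point. Paths follow the example above; the backing file name
is an assumption:
virsh destroy $VM
# recreate an empty overlay so the disk content matches the snapshot point
qemu-img create -f qcow2 -F qcow2 \
    -b /home/riccardo/original_disk.qcow2 \
    /home/riccardo/disk_mysnapshot.qcow2
# restore the saved memory state
virsh restore /home/riccardo/mem_mysnapshot.qcow2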
Thanks!
Riccardo
Virtual Network API for QEMU
by Radek Simko
Hi,
According to the support matrix at
https://libvirt.org/hvsupport.html#virNetworkDriver
there is no QEMU support for any APIs other than the hypervisor ones;
for example, virConnectNumOfNetworks is not supported.
Is there any particular reason this is not supported? Has any development
in that area been attempted in the past? Would contributions adding support
be welcomed?
Thanks,
Radek Simko
Regarding location of Libvirt library
by shafnamol N
Hi,
I have installed Libvirt 7.1.0.
I configured and built libvirt based on instructions from
https://libvirt.org/compiling.html.
Now I have developed a client program to create a VM from an XML file. The
API for this is *virDomainCreateXML*; I called it, passing the XML, but the
build fails with the following error:
undefined reference to `virDomainCreateXML'.
I included the header files containing the said API declaration, but I also
need to link against the library.
My question is: where is the libvirt library located after it is built?
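A hedged sketch of the usual fix, assuming the default meson prefix of
/usr/local (the library directory may be lib64 on some distributions):
# compile and link against libvirt via pkg-config
gcc -o create-vm create-vm.c $(pkg-config --cflags --libs libvirt)
# if pkg-config cannot find a /usr/local install:
export PKG_CONFIG_PATH=/usr/local/lib/pkgconfig
# and let the runtime linker find libvirt.so:
export LD_LIBRARY_PATH=/usr/local/lib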
Thanks for the help in advance.
virsh dommemstat doesn't update its information
by Lentes, Bernd
Hi,
I'm playing around a bit with my domains and the balloon driver.
To get information about ballooning I use virsh dommemstat,
but I only get very little information:
virsh # dommemstat vm_idcc_devel
actual 1044480
last_update 0
rss 1030144
Also, configuring "dommemstat --domain vm_idcc_devel --period 5 --live"
or "dommemstat --domain vm_idcc_devel --period 5 --current" neither updates nor extends the information.
In vm_idcc_devel virtio_balloon is loaded:
idcc-devel:~ # lsmod|grep balloon
virtio_balloon 22788 0
Guest OS is SLES 10 SP4. Is that too old?
Host OS is SLES 12 SP5.
There are other domains in which the information is updated.
Here is the config from vm_idcc_devel:
virsh # dumpxml vm_idcc_devel
<domain type='kvm' id='7'>
<name>vm_idcc_devel</name>
<uuid>4993009b-42ff-45d9-b1e0-145b8c0c8f82</uuid>
<memory unit='KiB'>2044928</memory>
<currentMemory unit='KiB'>1044480</currentMemory>
<vcpu placement='static'>1</vcpu>
<resource>
<partition>/machine</partition>
</resource>
<os>
<type arch='x86_64' machine='pc-i440fx-1.4'>hvm</type>
<boot dev='hd'/>
<boot dev='cdrom'/>
<bootmenu enable='yes'/>
</os>
<features>
<acpi/>
<apic/>
</features>
<clock offset='localtime'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<emulator>/usr/bin/qemu-kvm</emulator>
<disk type='file' device='disk'>
<driver name='qemu' type='raw' cache='none'/>
<source file='/mnt/ocfs2/vm_idcc_devel.raw'/>
<backingStore/>
<target dev='vdb' bus='ide'/>
<alias name='ide0-0-0'/>
<address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
<disk type='file' device='cdrom'>
<driver name='qemu'/>
<target dev='hdc' bus='ide'/>
<readonly/>
<alias name='ide0-1-0'/>
<address type='drive' controller='0' bus='1' target='0' unit='0'/>
</disk>
<controller type='usb' index='0' model='piix3-uhci'>
<alias name='usb'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
</controller>
<controller type='pci' index='0' model='pci-root'>
<alias name='pci.0'/>
</controller>
<controller type='ide' index='0'>
<alias name='ide'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
</controller>
<interface type='bridge'>
<mac address='52:54:00:37:92:05'/>
<source bridge='br0'/>
<target dev='vnet6'/>
<model type='rtl8139'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
<serial type='pty'>
<source path='/dev/pts/6'/>
<target type='isa-serial' port='0'>
<model name='isa-serial'/>
</target>
<alias name='serial0'/>
</serial>
<console type='pty' tty='/dev/pts/6'>
<source path='/dev/pts/6'/>
<target type='serial' port='0'/>
<alias name='serial0'/>
</console>
<input type='mouse' bus='ps2'>
<alias name='input0'/>
</input>
<input type='keyboard' bus='ps2'>
<alias name='input1'/>
</input>
<graphics type='vnc' port='5902' autoport='no' listen='127.0.0.1'>
<listen type='address' address='127.0.0.1'/>
</graphics>
<video>
<model type='cirrus' vram='16384' heads='1' primary='yes'/>
<alias name='video0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</video>
<memballoon model='virtio'>
<stats period='5'/>
<alias name='balloon0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
</memballoon>
</devices>
</domain>
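A hedged diagnostic, in case it helps: the extra dommemstat fields only
appear when the guest's virtio_balloon driver implements the stats
virtqueue, and a guest kernel as old as SLES 10's may well predate that.
QEMU can be asked directly what the balloon device reports:
# HMP passthrough; shows what the device itself knows
virsh qemu-monitor-command --hmp vm_idcc_devel 'info balloon'
If only "actual=..." comes back here while a newer guest also shows stats
fields, the guest driver is the limiting factor.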
Bernd
--
Bernd Lentes
System Administrator
Institute for Metabolism and Cell Death (MCD)
Building 25 - office 122
HelmholtzZentrum München
bernd.lentes(a)helmholtz-muenchen.de
phone: +49 89 3187 1241
phone: +49 89 3187 3827
fax: +49 89 3187 2294
http://www.helmholtz-muenchen.de/mcd
how to check a virtual disk
by Lentes, Bernd
Hi,
we have a two-node cluster with Pacemaker and a SAN.
The resources are inside virtual domains.
The images of the virtual disks reside on the SAN.
On one domain I have errors from the hard disk in my log:
2021-03-24T21:02:28.416504+01:00 geneious kernel: [2159685.909613] JBD2: Detected IO errors while flushing file data on dm-1-8
2021-03-24T21:02:46.505323+01:00 geneious kernel: [2159704.012213] JBD2: Detected IO errors while flushing file data on dm-1-8
2021-03-24T21:02:55.573149+01:00 geneious kernel: [2159713.078560] JBD2: Detected IO errors while flushing file data on dm-1-8
2021-03-24T21:03:23.702946+01:00 geneious kernel: [2159741.202546] JBD2: Detected IO errors while flushing file data on dm-1-8
2021-03-24T21:03:30.289606+01:00 geneious kernel: [2159747.796192] ------------[ cut here ]------------
2021-03-24T21:03:30.289635+01:00 geneious kernel: [2159747.796207] WARNING: CPU: 0 PID: 457 at ../fs/buffer.c:1108 mark_buffer_dirty+0xe8/0x100
2021-03-24T21:03:30.289637+01:00 geneious kernel: [2159747.796208] Modules linked in: st sr_mod cdrom lp parport_pc ppdev parport xfrm_user xfrm_algo binfmt_misc uinput nf_log_ipv6 xt_comme
nt nf_log_ipv4 nf_log_common xt_LOG xt_limit af_packet iscsi_ibft iscsi_boot_sysfs ip6t_REJECT nf_conntrack_ipv6 nf_defrag_ipv6 ipt_REJECT xt_pkttype xt_tcpudp iptable_filter ip6table_mangl
e nf_conntrack_netbios_ns nf_conntrack_broadcast nf_conntrack_ipv4 nf_defrag_ipv4 ip_tables xt_conntrack nf_conntrack libcrc32c ip6table_filter ip6_tables x_tables joydev virtio_net net_fai
lover failover virtio_balloon i2c_piix4 qemu_fw_cfg pcspkr button ext4 crc16 jbd2 mbcache ata_generic hid_generic usbhid ata_piix sd_mod virtio_rng ahci floppy libahci serio_raw ehci_pci bo
chs_drm drm_kms_helper syscopyarea sysfillrect sysimgblt fb_sys_fops ttm drm uhci_hcd ehci_hcd usbcore virtio_pci
2021-03-24T21:03:30.289637+01:00 geneious kernel: [2159747.796374] drm_panel_orientation_quirks libata dm_mirror dm_region_hash dm_log sg dm_multipath dm_mod scsi_dh_rdac scsi_dh_emc scsi_
dh_alua scsi_mod autofs4 [last unloaded: parport_pc]
2021-03-24T21:03:30.289643+01:00 geneious kernel: [2159747.796400] Supported: Yes
2021-03-24T21:03:30.289644+01:00 geneious kernel: [2159747.796405] CPU: 0 PID: 457 Comm: jbd2/dm-0-8 Not tainted 4.12.14-122.57-default #1 SLE12-SP5
2021-03-24T21:03:30.289644+01:00 geneious kernel: [2159747.796406] Hardware name: QEMU Standard PC (i440FX + PIIX, 1996), BIOS rel-1.12.0-0-ga698c89-rebuilt.suse.com 04/01/2014
2021-03-24T21:03:30.289645+01:00 geneious kernel: [2159747.796407] task: ffff8ba32766c380 task.stack: ffff99954124c000
2021-03-24T21:03:30.289645+01:00 geneious kernel: [2159747.796409] RIP: 0010:mark_buffer_dirty+0xe8/0x100
2021-03-24T21:03:30.289646+01:00 geneious kernel: [2159747.796409] RSP: 0018:ffff99954124fcf0 EFLAGS: 00010246
2021-03-24T21:03:30.289650+01:00 geneious kernel: [2159747.796413] RAX: 0000000000a20828 RBX: ffff8ba209a58d90 RCX: ffff8ba3292d7958
2021-03-24T21:03:30.289651+01:00 geneious kernel: [2159747.796413] RDX: ffff8ba209a585b0 RSI: ffff8ba24270b690 RDI: ffff8ba3292d7958
2021-03-24T21:03:30.289652+01:00 geneious kernel: [2159747.796414] RBP: ffff8ba3292d7958 R08: ffff8ba209a585b0 R09: 0000000000000001
2021-03-24T21:03:30.289652+01:00 geneious kernel: [2159747.796415] R10: ffff8ba328c1c0b0 R11: ffff8ba287805380 R12: ffff8ba3292d795a
2021-03-24T21:03:30.289653+01:00 geneious kernel: [2159747.796415] R13: 0000000000000000 R14: ffff8ba3292d7958 R15: ffff8ba209a58d90
2021-03-24T21:03:30.289653+01:00 geneious kernel: [2159747.796417] FS: 0000000000000000(0000) GS:ffff8ba333c00000(0000) knlGS:0000000000000000
2021-03-24T21:03:30.289654+01:00 geneious kernel: [2159747.796417] CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
2021-03-24T21:03:30.289654+01:00 geneious kernel: [2159747.796418] CR2: 0000000099bff000 CR3: 0000000101b06000 CR4: 00000000000006f0
2021-03-24T21:03:30.289655+01:00 geneious kernel: [2159747.796424] Call Trace:
2021-03-24T21:03:30.289656+01:00 geneious kernel: [2159747.796470] __jbd2_journal_refile_buffer+0xbb/0xe0 [jbd2]
2021-03-24T21:03:30.289656+01:00 geneious kernel: [2159747.796479] jbd2_journal_commit_transaction+0xf1a/0x1870 [jbd2]
2021-03-24T21:03:30.289657+01:00 geneious kernel: [2159747.796489] ? __switch_to_asm+0x41/0x70
2021-03-24T21:03:30.289658+01:00 geneious kernel: [2159747.796490] ? __switch_to_asm+0x35/0x70
2021-03-24T21:03:30.289662+01:00 geneious kernel: [2159747.796493] kjournald2+0xbb/0x230 [jbd2]
2021-03-24T21:03:30.289663+01:00 geneious kernel: [2159747.796499] ? wait_woken+0x80/0x80
2021-03-24T21:03:30.289663+01:00 geneious kernel: [2159747.796503] kthread+0xf6/0x130
2021-03-24T21:03:30.289664+01:00 geneious kernel: [2159747.796508] ? commit_timeout+0x10/0x10 [jbd2]
2021-03-24T21:03:30.289664+01:00 geneious kernel: [2159747.796510] ? kthread_bind+0x10/0x10
2021-03-24T21:03:30.289665+01:00 geneious kernel: [2159747.796511] ret_from_fork+0x35/0x40
2021-03-24T21:03:30.289665+01:00 geneious kernel: [2159747.796517] Code: 1b 48 8b 03 48 8b 7b 08 48 83 c3 18 48 89 ee e8 bf 42 76 00 48 8b 03 48 85 c0 75 e8 e9 3c ff ff ff 48 89 df 5b 5d e9
c8 35 fb ff <0f> 0b e9 26 ff ff ff 48 83 e8 01 e9 5b ff ff ff 0f 1f 84 00 00
2021-03-24T21:03:30.289670+01:00 geneious kernel: [2159747.796533] ---[ end trace db796891c8ff94af ]---
2021-03-24T21:03:46.593225+01:00 geneious kernel: [2159764.100145] JBD2: Detected IO errors while flushing file data on dm-1-8
2021-03-24T21:05:09.372772+01:00 geneious kernel: [2159846.877201] JBD2: Detected IO errors while flushing file data on dm-1-8
2021-03-24T21:06:39.943519+01:00 geneious kernel: [2159937.381068] JBD2: Detected IO errors while flushing file data on dm-1-8
2021-03-24T21:07:42.364311+01:00 geneious kernel: [2159999.793805] JBD2: Detected IO errors while flushing file data on dm-1-8
2021-03-24T21:07:57.822133+01:00 geneious kernel: [2160015.291776] JBD2: Detected IO errors while flushing file data on dm-1-8
First I'm wondering: what is dm-1-8?
I don't have a device with that name.
geneious:~ # find /dev -iname '*dm*'
/dev/dm-1
/dev/dm-0
/dev/disk/by-id/dm-uuid-LVM-a9Cy1cweHgXlAEECqZL5KZBfnuigUG6lq0ntdZJxxLIIp5G8XihsuYrTbx7Rs0vc
/dev/disk/by-id/dm-name-vg_local-lv_var
/dev/disk/by-id/dm-uuid-LVM-a9Cy1cweHgXlAEECqZL5KZBfnuigUG6l3fdsOpBFoDWral3Fa7c6ZeYECmLd6FFj
/dev/disk/by-id/dm-name-vg_local-lv_root
/dev/cpu_dma_latency
I only find /proc/fs/jbd2/dm-1-8.
There is a file /proc/fs/jbd2/dm-1-8/info:
453005 transactions (319055 requested), each up to 8192 blocks
average:
0ms waiting for transaction
12ms request delay
5124ms running transaction
0ms transaction was being locked
0ms flushing data (in ordered mode)
44ms logging transaction
8031us average transaction commit time
64 handles per transaction
5 blocks per transaction
6 logged blocks per transaction
What is that?
The logfile also says something about dm-0-8:
2021-03-24T21:03:30.289644+01:00 geneious kernel: [2159747.796405] CPU: 0 PID: 457 Comm: jbd2/dm-0-8 Not tainted 4.12.14-122.57-default #1 SLE12-SP5
geneious:~ # find / -iname dm-0-8
/proc/fs/jbd2/dm-0-8
geneious:~ # ll /proc/fs/jbd2/dm-0-8
total 0
-r--r--r-- 1 root root 0 Mar 29 12:56 info
geneious:~ # cat /proc/fs/jbd2/dm-0-8/info
7356 transactions (556 requested), each up to 8192 blocks
average:
0ms waiting for transaction
20ms request delay
5628ms running transaction
4ms transaction was being locked
0ms flushing data (in ordered mode)
132ms logging transaction
134769us average transaction commit time
52 handles per transaction
18 blocks per transaction
19 logged blocks per transaction
geneious:~ #
I assume I have a hard disk problem. I'm currently checking the SAN with its own tools, via a web interface.
Afterwards I want to stop the domain, boot it from a live CD, and run badblocks and fsck.ext3.
What else can I do?
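One hedged note: jbd2/dm-1-8 should be the ext3/ext4 journal thread for
the filesystem on /dev/dm-1 (one of vg_local-lv_var or lv_root); the
trailing 8 is the journal's inode number, inode 8 being the ext default,
and the /proc/fs/jbd2/*/info files are just journal transaction
statistics. A short sketch for mapping the names and finding the backing
image on the host ($DOMAIN is a placeholder):
# inside the guest: map dm-N kernel names to LVs and mount points
lsblk -o NAME,KNAME,TYPE,MOUNTPOINT
dmsetup ls
# on the host: find which image or LUN backs the guest's disks
virsh domblklist $DOMAIN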
Bernd
--
Bernd Lentes
System Administrator
Institute for Metabolism and Cell Death (MCD)
Building 25 - office 122
HelmholtzZentrum München
bernd.lentes(a)helmholtz-muenchen.de
phone: +49 89 3187 1241
phone: +49 89 3187 3827
fax: +49 89 3187 2294
http://www.helmholtz-muenchen.de/mcd
vcenter on esxi on KVM on RHEL8
by Nicholas Hardiman
I tried something new over the weekend: I created a lab for Ansible
experiments.
One PC, two ESXi VMs, some nested guests.
Is this a sensible thing to attempt?
   vcenter         ubuntu1  ubuntu2
   -------         -------  -------
      ESXi1             ESXi2
----------------   ----------------
                KVM
------------------------------------
              RHEL 8
------------------------------------
             ASUS PN50
Everything within ESXi1 works fine, but communication with the outside
world is not fine. Before I get into troubleshooting details, I thought
I'd just check if this is a reasonable thing to do, or if I'm heading in
the wrong direction.
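In case it is useful, a hedged checklist for this kind of nesting
(assuming an Intel host):
# on the RHEL 8 host, nested virtualization must be enabled
cat /sys/module/kvm_intel/parameters/nested   # expect 1 or Y
# each ESXi guest needs the host CPU (and thus VT-x) passed through;
# in the domain XML:
#   <cpu mode='host-passthrough'/>
virsh edit ESXi1
For traffic leaving the box, nested ESXi setups also commonly need
promiscuous mode and forged transmits allowed on the ESXi virtual
switches, since the nested guests' MACs differ from the ESXi VM's own;
that would match "inside works, outside does not" symptoms.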
Thanks, Nick
--
Nick Hardiman, RHCA
Senior Consultant
Red Hat <https://www.redhat.com/>
nhardima(a)redhat.com
M: 07585-206195
Upcoming PTO: -
Upcoming Training: Mon 8 - Fri 12 March
Packets dropped by virtual NICs
by Silvia Fichera
Hi all,
I want to use tc qdisc settings in a network composed of several QEMU VMs
connected through bridges and tap interfaces.
I generate traffic with a Spirent. Everything is fine when the scheduling
discipline is not installed, but when I run the command to set up taprio
queues on the VM's NIC, traffic is dropped; I can send at most 1 Mbps.
I think there is something missing in the virtual NIC configuration or
setup. With ethtool I can see that the queues are configured. I've also
noticed that BQL equals 0, unlike on the physical machine (BQL=18600),
where everything works correctly.
I've read that it could be because NIC drivers do not support that setting.
Do you have any suggestions?
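A hedged thought, not a confirmed fix: taprio spreads traffic across
multiple transmit queues, so the guest NIC must expose real multiqueue
support. With virtio-net that is enabled in the domain XML:
# in the domain XML (virsh edit <vm>), on the virtio interface:
#   <model type='virtio'/>
#   <driver name='vhost' queues='4'/>
# then confirm the channels inside the guest:
ethtool -l eth0
A BQL of 0 would also be consistent with the virtual driver simply not
implementing byte queue limits, which is common for paravirtual NICs.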
Thank you all
Silvia
Question about the Qos support status for different type interfaces
by Yalan Zhang
Hi there,
I have a question about the Qos support status for different type
interfaces.
Some types of interface do not support Qos, such as hostdev, user type,
mcast type, but the behavior are different, for hostdev, the guest can not
start with a meaningful error message, but for other types, vm can start
successfully with a warning message in the libvirtd log. I doubt that if it
is necessary to keep the behavior consistent for these different types?
There are 2 history bugs for them, I should have thought further and asked
early when testing the bugs.
Bug 1319044 <https://bugzilla.redhat.com/show_bug.cgi?id=1319044> - log
error when <bandwidth> requested on a <interface type='hostdev'>
Bug 1524230 <https://bugzilla.redhat.com/show_bug.cgi?id=1524230> - vhostuser
type interface do not support bandwidth, but no warning message
Thank you for looking into this; I'd very much appreciate your feedback!
1. Start a VM with a user type interface with QoS configured:
<interface type='user'>
<mac address='52:54:00:3e:ec:14'/>
<bandwidth>
<inbound average='1000' peak='5000' burst='5120'/>
<outbound average='128' peak='256' burst='256'/>
</bandwidth>
<model type='rtl8139'/>
<address type='pci' domain='0x0000' bus='0x07' slot='0x01'
function='0x0'/>
</interface>
# cat /var/log/libvirt/libvirtd.log | grep bandwidth
2021-03-26 10:47:11.452+0000: 20185: warning :
qemuBuildInterfaceCommandLine:8223 : setting bandwidth on interfaces of
type 'user' is not implemented yet
2. Start with a hostdev type interface with QoS configured:
<interface type='hostdev' managed='yes'>
<mac address='52:54:00:07:27:b0'/>
<source>
<address type='pci' domain='0x0000' bus='0x82' slot='0x10'
function='0x6'/>
</source>
<bandwidth>
<inbound average='1000' peak='5000' burst='5120'/>
<outbound average='128' peak='256' burst='256'/>
</bandwidth>
<address type='pci' domain='0x0000' bus='0x01' slot='0x00'
function='0x0'/>
</interface>
# virsh start rh
error: Failed to start domain 'rh'
error: unsupported configuration: interface 52:54:00:07:27:b0 - bandwidth
settings are not supported for hostdev interfaces
-------
Best Regards,
Yalan Zhang
IRC: yalzhang