[libvirt-users] [virtual interface] detach interface during boot succeed with no changes
by Yalan Zhang
Hi guys,
when I detach an interface from a VM during boot (before the guest has
finished booting), it always fails silently: virsh reports success, but the
interface is still present in the live XML, as shown below. I'm not sure
whether there is an existing bug for this. I have confirmed with someone
that disks show similar behavior; is this considered acceptable?
# virsh destroy rhel7.2; virsh start rhel7.2; sleep 2; virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; virsh dumpxml rhel7.2 | grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </controller>
    <interface type='network'>
      <mac address='52:54:00:98:c4:a0'/>
      <source network='default' bridge='virbr0'/>
      <target dev='vnet0'/>
      <model type='rtl8139'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
When I detach after the VM has finished booting (increasing the sleep to 10
seconds), it succeeds and the interface is gone from the XML:
# virsh destroy rhel7.2; virsh start rhel7.2; sleep 10; virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0; sleep 2; virsh dumpxml rhel7.2 | grep /interface -B9
Domain rhel7.2 destroyed
Domain rhel7.2 started
Interface detached successfully
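A possible workaround instead of a fixed sleep: poll until the guest agent
answers before detaching. A minimal sketch; it assumes qemu-guest-agent is
installed and running inside the guest:

# Wait until the guest is actually up, then detach
until virsh qemu-agent-command rhel7.2 '{"execute":"guest-ping"}' >/dev/null 2>&1; do
    sleep 1
done
virsh detach-interface rhel7.2 network 52:54:00:98:c4:a0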
-------
Best Regards,
Yalan Zhang
IRC: yalzhang
Internal phone: 8389413
guest-fsfreeze-freeze freezes all mounted block devices
by Marc Roos
I wonder if anyone here can confirm that
virsh qemu-agent-command domain '{"execute":"guest-fsfreeze-freeze"}'
freezes the filesystems on all mounted block devices. So if I use 4 block
devices, are they all frozen for snapshotting, or just the root fs?
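Related, in case it is useful: the guest agent protocol also has
guest-fsfreeze-freeze-list, which takes an explicit list of mount points.
A minimal sketch (untested here; the mount points are examples):

# Freeze only the listed mount points instead of every mounted filesystem
virsh qemu-agent-command domain '{"execute":"guest-fsfreeze-freeze-list","arguments":{"mountpoints":["/","/data"]}}'
# ... take the snapshot ...
virsh qemu-agent-command domain '{"execute":"guest-fsfreeze-thaw"}'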
[libvirt-users] Question about disabling UFO on guest
by Bao Nguyen
Hello everyone,
I would like to ask a question about disabling UFO on a virtio vNIC in
my guest. I have read the documentation at https://libvirt.org/formatdomain.html:
host
The csum, gso, tso4, tso6, ecn and ufo attributes with possible values
on and off can be used to turn off host offloading options. By default,
the supported offloads are enabled by QEMU. Since 1.2.9 (QEMU only). The
mrg_rxbuf attribute can be used to control mergeable rx buffers on the
host side. Possible values are on (default) and off. Since 1.2.13 (QEMU only)
guest
The csum, tso4, tso6, ecn and ufo attributes with possible values on
and off can be used to turn off guest offloading options. By default,
the supported offloads are enabled by QEMU. Since 1.2.9 (QEMU only)
I then disabled UFO on the guest's vNIC with the following configuration:
<devices>
  <interface type='network'>
    <source network='default'/>
    <target dev='vnet1'/>
    <model type='virtio'/>
    <driver name='vhost' txmode='iothread' ioeventfd='on' event_idx='off'
            queues='5' rx_queue_size='256' tx_queue_size='256'>
      <host gso='off' ufo='off'/>
      <guest ufo='off'/>
    </driver>
  </interface>
</devices>
I then rebooted my node for the change to take effect, and it works.
However, can I disable UFO without touching the host side? Or does it
always have to be disabled on both host and guest like this?
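In case it helps with testing: the offload state can be checked from inside
the guest. A minimal sketch, assuming ethtool is available and the interface
is named eth0 (both are assumptions):

# Inside the guest: check UFO and related offloads on the virtio NIC
ethtool -k eth0 | grep -E 'udp-fragmentation-offload|generic-segmentation-offload'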
Thanks,
Brs,
Natsu
qemu hook: event for source host too
by Guy Godfroy
Hello, this is my first time posting on this mailing list.
I want to suggest an addition to the qemu hook. I will explain it
through my own use case.
I use shared LVM storage as a volume pool between my nodes, with
lvmlockd in sanlock mode to guard against both LVM metadata corruption
and concurrent volume mounting.
When I run a VM on a node, I activate the desired LV with an exclusive
lock (lvchange -aey). When I stop the VM, I deactivate the LV, releasing
the exclusive lock (lvchange -an).
When I migrate a VM (both live and offline), the LV has to be activated
on both source and target nodes, so I have to use a shared lock
(lvchange -asy). That's why I need a hook event on the source host too
(as far as I can tell from my tests, the migration event is only
triggered on the target host).
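To make the request concrete, here is roughly what my hook looks like; a
minimal sketch (operation names as documented for the qemu hook; the
volume-group/LV naming scheme is just my own convention):

#!/bin/sh
# /etc/libvirt/hooks/qemu -- sketch only
DOMAIN="$1"; OP="$2"
LV="/dev/vg_shared/${DOMAIN}"    # example naming scheme

case "$OP" in
    prepare) lvchange -aey "$LV" ;;   # exclusive lock before the VM starts
    release) lvchange -an  "$LV" ;;   # drop the lock once the VM is gone
    migrate) lvchange -asy "$LV" ;;   # shared lock -- but this only fires on the target
esac

With a matching event on the source host, the source could likewise switch
its exclusive lock to a shared one before the migration starts.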
Is such a feature a possibility?
Thanks for your attention.
Guy Godfroy
Re: USB-hotplugging fails with "failed to load cgroup BPF prog: Operation not permitted" on cgroups v2
by Pavel Hrdina
On Mon, Jan 20, 2020 at 09:00:15PM +0100, Pol Van Aubel wrote:
> Hi,
>
> Quoting Pavel Hrdina (2020-01-20 14:29:36)
> > On Sat, Jan 18, 2020 at 11:17:11PM +0100, Pol Van Aubel wrote:
> > > Hi all,
> > >
> > > I've disabled cgroups v1 on my system with the kernel boot option
> > > "systemd.unified_cgroup_hierarchy=1". Since doing so, USB hotplugging
> > > fails to work, seemingly due to a permissions problem with BPF. Please
> > > note that the technique I'm going to describe worked just fine for
> > > hotplugging USB devices to running domains until this change.
> > > Attaching / detaching USB devices when the domain is down still works as
> > > expected.
> > >
> > > I get the same error when attaching a device in virt-manager, as I do
> > > when running the following command:
> > >
> > > sudo virsh attach-device wenger /dev/stdin --persistent <<END
> > > <hostdev mode='subsystem' type='usb' managed='yes'>
> > > <source startupPolicy='optional'>
> > > <vendor id='0x046d' />
> > > <product id='0xc215' />
> > > </source>
> > > </hostdev>
> > > END
> > >
> > > This returns
> > > error: Failed to attach device from /dev/stdin
> > > error: failed to load cgroup BPF prog: Operation not permitted
> > >
> > >
> > > virt-manager returns basically the same error, but for completeness'
> > > sake, here it is:
> > >
> > > failed to load cgroup BPF prog: Operation not permitted
> > >
> > > Traceback (most recent call last):
> > > File "/usr/share/virt-manager/virtManager/addhardware.py", line 1327, in _add_device
> > > self.vm.attach_device(dev)
> > > File "/usr/share/virt-manager/virtManager/object/domain.py", line 920, in attach_device
> > > self._backend.attachDevice(devxml)
> > > File "/usr/lib/python3.8/site-packages/libvirt.py", line 590, in attachDevice
> > > if ret == -1: raise libvirtError ('virDomainAttachDevice() failed', dom=self)
> > > libvirt.libvirtError: failed to load cgroup BPF prog: Operation not permitted
> > >
> > >
> > > Now, libvirtd is running as root, so I don't understand why any
> > > operation on BPF programs is not permitted. I've dug into libvirt's code
> > > a bit to see what is throwing this error and it boils down to
> > > <https://github.com/libvirt/libvirt/blob/7d608469621a3fda72dff2a89308e68cc...>
> > > and
> > > <https://github.com/libvirt/libvirt/blob/02bf7cc68bfc76242f02d23e73cad3661...>
> > > but I have no clue what that syscall is doing, so that's where my
> > > debugging capability basically ends.
> > >
> > > Maybe this is something as simple as setting the right ACL somewhere. I
> > > haven't touched /etc/libvirt/qemu.conf except for setting nvram. There
> > > *is* something about cgroup_device_acl there but afaict that's for
> > > cgroups v1, when there was still a device cgroup controller. Any help
> > > would be greatly appreciated.
> > >
> > >
> > > Domain log files:
> > > Upon execution of the above commands, nothing gets added to the domain
> > > log in /var/log/qemu/wenger.log, so I've decided they're likely
> > > irrelevant to the issue. Please ask for any additional info required.
> > >
> > >
> > > System information:
> > > Arch Linux, (normal) kernel 5.4.11
> > > libvirt 5.10.0
> > > qemu 4.2.0, using KVM.
> > > Host system is x86_64 on an intel 5820k.
> > > Guest system is probably irrelevant, but is Windows 10 on the same.
> > >
> > >
> > > Possibly relevant kernel build options:
> > > $ zgrep BPF /proc/config.gz
> > >
> > > CONFIG_CGROUP_BPF=y
> > > CONFIG_BPF=y
> > > CONFIG_BPF_SYSCALL=y
> > > CONFIG_BPF_JIT_ALWAYS_ON=y
> > > CONFIG_IPV6_SEG6_BPF=y
> > > CONFIG_NETFILTER_XT_MATCH_BPF=m
> > > # CONFIG_BPFILTER is not set
> > > CONFIG_NET_CLS_BPF=m
> > > CONFIG_NET_ACT_BPF=m
> > > CONFIG_BPF_JIT=y
> > > CONFIG_BPF_STREAM_PARSER=y
> > > CONFIG_LWTUNNEL_BPF=y
> > > CONFIG_HAVE_EBPF_JIT=y
> > > CONFIG_BPF_EVENTS=y
> > > # CONFIG_BPF_KPROBE_OVERRIDE is not set
> > > # CONFIG_TEST_BPF is not set
> >
> > Hi
> >
> > I've installed clean archlinux to try this out and it works as expected,
> > I'm able to attach USB device into a VM.
> >
> > My system env is mostly the same as yours except for kernel version:
> >
> > kernel 5.4.13
> > libvirt 5.10.0
> > qemu 4.2.0, using KVM.
> >
> > Please enable libvirt debug logs [1] and share the output with us.
>
> I've updated to 5.4.13 and created a barebones VM without storage to
> reproduce the behaviour. libvirtd debug logs are attached. There appear
> to be two BPF failures of the same BPF program (?). The first is on line
> 23209, which appears to be part of machine startup, and which I don't
> actually notice. The second one is where I manually add the USB device,
> on line 30599.
>
> Thanks,
Thanks for the logs, but they did not help me figure out where the issue
is. I was hoping to see some error output from the syscall, but the line
that should contain it is empty:
2020-01-20 19:47:15.589+0000: 8579: debug : virBPFLoadProg:78 :
Can you please check the system logs and the output of dmesg?
I ran across an article [1] that explains that even if you have all
permissions and no SELinux, you can still be blocked by something called
kernel lockdown, and that this should appear in dmesg.
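A quick way to check, assuming securityfs is mounted at
/sys/kernel/security (true on most recent distributions):

# The active mode is shown in brackets, e.g. "[none] integrity confidentiality"
cat /sys/kernel/security/lockdown
# Lockdown denials are also logged by the kernel
dmesg | grep -i lockdown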
Pavel
[1] <https://gehrcke.de/2019/09/running-an-ebpf-program-may-require-lifting-th...>
kvm presenting wrong CPU Topology for cache
by Satish Patel
Folks,
I am having a major performance issue with my Erlang application running
on an OpenStack KVM hypervisor, and after many tests I found something
wrong with my KVM guest's CPU topology.
This is KVM host - http://paste.openstack.org/show/790120/
This is KVM guest - http://paste.openstack.org/show/790121/
If you look carefully at the output of both host and guest, you can see
that in the guest each thread has its own cache, which is very strange:
L2 L#0 (4096KB) + Core L#0
  L1d L#0 (32KB) + L1i L#0 (32KB) + PU L#0 (P#0)
  L1d L#1 (32KB) + L1i L#1 (32KB) + PU L#1 (P#1)
I believe that because of this Erlang doesn't understand the topology
and misbehaves.
I also have instances on Alibaba Cloud and AWS, and when I compare with
them, they show the correct CPU topology, the same way a physical machine
does; something looks wrong with my KVM setup.
I am running qemu-kvm-2.12 on CentOS 7.6 and I have tuned my KVM as best
I can: CPU pinning, NUMA, and cpu host-passthrough.
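One more thing I plan to try (a guess on my part, not a confirmed fix):
libvirt can pass the host cache topology through to the guest via a
<cache> element under <cpu>, e.g.:

<cpu mode='host-passthrough'>
  <cache mode='passthrough'/>
</cpu>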
Thanks in advance for your help.
libvirtError: Timed out during operation: cannot acquire state change lock (held by monitor=remoteDispatchDomainMigratePerform3Params)
by Blake Anderson
Hi everyone,
I have a question that you may be able to help me with. A live block migration of a qemu-kvm guest failed (initiated via nova); the guest remained running on the source, but if I try to re-initiate the live migration it returns:
libvirtError: Timed out during operation: cannot acquire state change lock (held by monitor=remoteDispatchDomainMigratePerform3Params)
Looking at blockjob --info, I see there is a block copy that has been stuck at 23 % for a few hours now. Is it safe to issue an --abort to the block job without impacting the VM, and will that resolve the lock?
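For reference, this is what I am considering running, with vda as a
placeholder for the actual disk target:

# Inspect the stuck copy job, then cancel it
virsh blockjob <domain> vda --info
virsh blockjob <domain> vda --abort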
Source:
libvirt-4.5.0-23.el7_7.5.x86_64
qemu-kvm-ev-2.10.0-21.el7_5.7.1.x86_64
Destination:
libvirt-4.5.0-23.el7_7.5.x86_64
qemu-kvm-ev-2.10.0-21.el7_5.7.1.x86_64
Thanks,
Blake
Windows guest stalls during reboot
by Benjammin2068
Hey all,
Quick question.
I have a Windows 10 Pro (64-bit) guest on a RHEL/CentOS 7 host that shuts down just fine (i.e. KVM shows the guest as shut down), but when Windows initiates a reboot, KVM Manager shows the guest screen as shut down yet still lists the guest as running, and the system never reboots.
If I do 'virsh list' -- there's no guest running.
If I do 'virsh start MyWin10Guest' -- that starts the guest back up, and things are as they were before.
Am I missing a new setting someplace?
I can't seem to type in the right thing in Google to help me figure this one out.
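The closest lead I have found so far (just a guess on my part) is the
domain's on_reboot action; if it were set to destroy instead of restart,
I would expect exactly this behaviour:

# Check what the domain does on a guest-initiated reboot
virsh dumpxml MyWin10Guest | grep on_reboot
# A guest that should survive reboots wants: <on_reboot>restart</on_reboot>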
Thanks a bunch,
-Ben