virsh snapshot-create-as with quiesce option times out
by David Wells
Hi all!
I'm still working on the live backup of a couple of VMs, and what happens
most of the time is that when I execute virsh snapshot-create-as with
the --quiesce option, the process finishes with an error that reads
> error: Timed out during operation: cannot acquire state change lock
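For reference, the full invocation is roughly the following (the
snapshot name and the --disk-only flag here are illustrative; the part
that triggers the error is --quiesce):
> sudo /usr/sbin/virsh snapshot-create-as --domain slackware-current \
>      --name backup --disk-only --quiesce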
I tried turning up the debug level but found nothing that appears to be
of interest, and I can connect successfully to the qemu guest agent,
since the following command returns the correct value:
> sudo /usr/sbin/virsh domfsinfo slackware-current
>  Mountpoint   Name   Type   Target
> -------------------------------------------------------------------
>  /            vda1   ext4   vda
I also tried running the qemu-ga agent on the guest with debugging
enabled; when I issue this last command I can see the agent talking to
the host, but when I run the snapshot with the --quiesce option the
guest agent shows nothing at all.
Is this by any chance a known bug? Is there something obvious I'm
missing? What else can I provide to help debug this issue?
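For example, I could drive the freeze path by hand through the agent
and capture its debug log while doing so, with something like
> sudo /usr/sbin/virsh qemu-agent-command slackware-current \
>      '{"execute":"guest-fsfreeze-freeze"}'
> sudo /usr/sbin/virsh qemu-agent-command slackware-current \
>      '{"execute":"guest-fsfreeze-thaw"}'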
Thanks in advance!
Best regards,
David Wells.
virtio model type and netfilter masquerade not being applied
by Brad Jennings
Quick question for anyone in the know. I have a fairly basic setup (at
least I think it is?) with an Open vSwitch bridge, where the br0 port has
an IP assigned in the same subnet as the VM to act as a gateway.
         |------ovs-------|
eno2 <-- |--br0           |
         |--vnet0 - VM    |
         |----------------|
I would like the VM (vnet0) to use br0 as a gateway. Local connectivity
seems fine, but internet access is a bit odd: I can ping, for example, the
1.1.1.1 DNS server without any issues, but anything UDP/TCP is a no-go.
I checked the physical host's interface (eno2) and br0 and found that the
VM's packets were successfully reaching br0, but when leaving the physical
host (eno2) the TCP/UDP packets weren't being masqueraded. The rule is
pretty straightforward, and as a test I plugged another device into the
eno1 afxdp port; that device had no connectivity issues and its packets
were being masqueraded fine.
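In case it matters, I was watching the traffic with tcpdump on both
interfaces, along these lines (the tcp filter is just what I used to pick
out the non-masqueraded guest traffic):
sudo tcpdump -ni br0 tcp
sudo tcpdump -ni eno2 tcp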
I tried setting trustGuestRxFilters='yes', but that didn't help and the
same state remained; the only thing that worked was using the "rtl8139"
model type.
I always remember using 'virtio' in the past, so I must be missing
something crucial in the somewhat lengthy libvirt documentation.
It would be super helpful if someone could shed some light on this, and
possibly on whether I should be using virtio or the Realtek driver.
Thanks! (config below)
Iptables:
sudo iptables -t nat -A POSTROUTING -o eno2 -j MASQUERADE
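(For what it's worth, the hit counters on that rule can be inspected to
confirm whether the virtio traffic is matching it at all:)
sudo iptables -t nat -L POSTROUTING -v -n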
ovs-vsctl show
ec13c3e2-6159-4019-984e-36cc90c59075
    Bridge br0
        fail_mode: standalone
        datapath_type: netdev
        Port vnet0
            Interface vnet0
        Port eno1
            Interface eno1
                type: afxdp
        Port br0
            Interface br0
                type: internal
Instance domain XML:
<interface type='bridge'>
  <mac address='52:54:00:77:fc:70'/>
  <source bridge='br0'/>
  <virtualport type='openvswitch'>
    <parameters interfaceid='2124ef39-e244-434c-8339-d2aa04d0d888'/>
  </virtualport>
  <model type='virtio'/> <!-- rtl8139 works -->
  <address type='pci' domain='0x0000' bus='0x02' slot='0x01'
           function='0x0'/>
</interface>
Some confusion about lsilogic controller
by xingchaochao
Hello,
I have recently been confused by the following phenomenon.
Libvirt is built from the master branch, and the VM is CentOS 8.2 (kernel 4.18.0-193.el8.aarch64).
When I hot-plug a SCSI disk into a virtual machine that has no virtio-scsi controller, libvirt automatically generates an lsilogic controller for the SCSI disk.
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none' io='native'/>
  <source file='/Images/xcc/tmp.img'/>
  <backingStore/>
  <target dev='sdt' bus='scsi'/>
  <address type='drive' controller='0' bus='0' target='0' unit='0'/>
</disk>
linux-upcHIq:/Images/xcc # virsh list
 Id   Name   State
-----------------------
 12   g1     running
linux-upcHIq:/Images/xcc # virsh attach-device g1 disk.xml
Device attached successfully
linux-upcHIq:/Images/xcc # virsh dumpxml g1 | grep scsi
<target dev='sdt' bus='scsi'/>
<alias name='scsi0-0-0'/>
<controller type='scsi' index='0' model='lsilogic'>
<alias name='scsi0'/>
But this SCSI disk cannot be seen with the lsblk command inside the virtual machine.
[root@localhost ~]# lsblk
NAME          MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
vda           252:0    0   20G  0 disk
├─vda1        252:1    0  600M  0 part /boot/efi
├─vda2        252:2    0    1G  0 part /boot
└─vda3        252:3    0 18.4G  0 part
  ├─cl-root   253:0    0 16.4G  0 lvm  /
  └─cl-swap   253:1    0    2G  0 lvm  [SWAP]
After hot-unplugging the SCSI disk, I performed a hot-unplug of the lsilogic controller. libvirt reports "Device detached successfully", but in fact the lsilogic controller is removed from neither the live XML nor the persistent XML: through "virsh dumpxml vmname" and "virsh edit vmname" I can see that <controller type='scsi' index='0' model='lsilogic'> is still there.
linux-upcHIq:/Images/xcc # virsh detach-device g1 disk.xml
Device detached successfully
linux-upcHIq:/Images/xcc # virsh dumpxml g1 | grep scsi
<controller type='scsi' index='0' model='lsilogic'>
<alias name='scsi0'/>
linux-upcHIq:/Images/xcc #
linux-upcHIq:/Images/xcc # cat lsi.xml
<controller type='scsi' index='0' model='lsilogic'>
  <alias name='scsi0'/>
  <address type='pci' domain='0x0000' bus='0x03' slot='0x05' function='0x0'/>
</controller>
linux-upcHIq:/Images/xcc # virsh detach-device g1 lsi.xml
Device detached successfully
linux-upcHIq:/Images/xcc # virsh dumpxml g1 | grep scsi
<controller type='scsi' index='0' model='lsilogic'>
<alias name='scsi0'/>
I am confused: why does libvirt choose to generate an lsilogic controller for the SCSI disk when there is no SCSI controller, instead of directly reporting an error and aborting the hot-plug operation? After all, a SCSI disk behind the lsilogic controller is not seen inside the virtual machine, and the lsilogic controller remains behind in the virtual machine's XML.
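For reference, what I expected to have to do was define a virtio-scsi controller myself before the hot-plug, with something along these lines (the index here is an assumption; libvirt would normally assign the PCI address), and only then attach the disk XML above:
<controller type='scsi' index='0' model='virtio-scsi'/>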
Live backups with snapshot-create-as and memspec
by David Wells
Hi all!
I've been using libvirt for some time, and until now I have treated
backups of virtual machines as if they were physical computers,
installing the backup client on the guest. I am now, however, facing the
need to back up a couple of guests at the host level, so I've been
trying to catch up by reading, googling, and trial and error too. Up
to now I've been able to back up a live machine with a command like the
following
> virsh snapshot-create-as --domain test --name backup --atomic
> --diskspec vda,snapshot=external --disk-only
This command creates a file test.backup, and in the meantime I can back up
the original test.qcow2, but from what I saw this disk image is in a
"dirty" state, as if the machine I could restore from this file had been
turned off without a proper shutdown.
I know that I can later restore the machine to its original state by
issuing commands like these
> virsh blockcommit --domain test vda --active --pivot
> virsh snapshot-delete test --metadata backup
I have seen that it is possible to create the snapshot using a memspec
parameter, which would make the backup of the guest as if it were in a
clean state; however, I haven't found the equivalent of blockcommit
for the memory file, so to speak, to be able to restore the guest
to its original state.
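For reference, the memspec variant I've been looking at is roughly the
following (the memory file path is just an example):
> virsh snapshot-create-as --domain test --name backup --atomic \
>   --memspec /backup/test.mem,snapshot=external \
>   --diskspec vda,snapshot=external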
Thank you very much!
Best regards.
David Wells.