General use of zstd instead of zlib compression
by Michael Niehren
Hi,
Currently I use qcow2 images with zlib compression on all VMs. When I do a backup, the backup image
is compressed with zstd level 3 to shrink it for transfer over not-so-fast internet connections.
So why not use zstd compression directly on the images? Are there any reasons not to do that?
As I always use virt-manager for administration, I patched qemu (v9.2.2) to create zstd-compressed images
by default (only one change, in line 3525). Newly created images now have compression type zstd, which works (qemu-img info).
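For comparison, without patching, an image with zstd as its compression type can also be created
explicitly with qemu-img; roughly like this (file name and size are just examples):

  qemu-img create -f qcow2 -o compression_type=zstd disk.qcow2 20G
  qemu-img info disk.qcow2    # should report compression type: zstd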
I see one unusual thing. If I do a qemu-img convert with compression_type=zstd, the size of the converted image
is much smaller than the original file, while "qemu-img info" shows compression type zstd on both. Do they use
different compression levels, maybe?
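The convert I am referring to is roughly along these lines (file names are placeholders; as far as I
understand, the -c flag is what makes convert actually compress the data clusters, while
compression_type only records which algorithm compressed clusters use):

  qemu-img convert -O qcow2 -c -o compression_type=zstd original.qcow2 converted.qcow2
  qemu-img info converted.qcow2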
If I now do a virsh backup-begin <domain>, the backup image also has a bigger size than the original, while
showing zstd as compression type (qemu-img info). If I convert it with the same command as above, both converted images have
nearly the same size. Even if I copy the smaller converted image over the original and boot the VM from the smaller
image, the backup image (after backup-begin) is bigger.
So I am confused. Is there any explanation for the different image sizes, or what's going on here?
best regards
Michael
live migration of SR-IOV vm
by Paul B. Henson
I have a vm using an sr-iov NIC that I'm testing live migration on (Debian
12, OS packages).
Per the documentation, I have the sr-iov link set as transient with a
pointer to the persistent virtio link:
<interface type='network'>
  <mac address='52:54:00:a1:e0:38'/>
  <source network='sr-iov-intel-10G-1'/>
  <vlan>
    <tag id='400'/>
  </vlan>
  <model type='virtio'/>
  <teaming type='transient' persistent='ua-sr-iov-backup'/>
  <address type='pci' domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
</interface>
and a persistent virtio link that is down by default:
<interface type='direct'>
  <mac address='52:54:00:a1:e0:38'/>
  <source dev='eno5np0.400' mode='bridge'/>
  <model type='virtio'/>
  <teaming type='persistent'/>
  <link state='down'/>
  <alias name='ua-sr-iov-backup'/>
  <address type='pci' domain='0x0000' bus='0x07' slot='0x00' function='0x0'/>
</interface>
The failover driver finds this in the vm:
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
3: eth1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel master eth0 state DOWN mode DEFAULT group default qlen 1000
10: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000
and the network works fine. However, during migration, the sr-iov
interface is removed, but the link on the virtio interface is *not*
brought up:
2: eth0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default qlen 1000
3: eth1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel master eth0 state DOWN mode DEFAULT group default qlen 1000
resulting in no network for part of the migration.
Once the box finishes migrating, the replacement sr-iov link is plugged
back in, and all is well once again:
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
3: eth1: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel master eth0 state DOWN mode DEFAULT group default qlen 1000
11: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq master eth0 state UP mode DEFAULT group default qlen 1000
My understanding was that virsh migrate is supposed to automatically
bring up the virtio link when the sr-iov link is removed. Or do I need
to explicitly bring it up myself before the migration and take it down
again afterwards?
If I bring up the link manually before the migration, there are a few
packets lost at that time, but then none are lost during the migration
when the sr-iov link is pulled, or after the migration when I shut that
link down again. Ideally no packets would be lost :), but I realize
that's unlikely in practice...
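(For clarity, the manual workaround I mean is roughly the following, done on the host; whether a
host-side domif-setlink is even the intended mechanism is part of my question. The domain name and
vnet device are examples; since both interfaces share a MAC address, I pick the virtio one by its
target device name from domiflist.)

  # before migration, on the source host: force the standby virtio link up
  virsh domiflist guest1
  virsh domif-setlink guest1 vnet1 up
  # ... virsh migrate --live ...
  # after migration, on the destination host: take the standby link down again
  virsh domif-setlink guest1 vnet1 down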
Thanks...
system cache dropped
by matus valentin
Hi,
I have a setup with multiple virtual machines (VMs), each with a saved
state. All VMs share the same parent image, which is located on a shared drive.
Whenever I restore any VM using virsh restore, the parent is dropped from
the system cache, causing all its data to be downloaded from the shared drive
again. This results in unnecessary network traffic, even though the parent
changes very rarely. However, if I create a child from the parent and call
virsh create to create a new VM, the parent is never dropped from the system
cache.
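(To illustrate what I mean by "dropped from the cache": checking page-cache residency of the parent
image with util-linux's fincore before and after a restore shows the effect; the path below is just
an example.)

  # how much of the backing file is resident in the page cache
  fincore /mnt/shared/parent.qcow2
  # after a virsh restore of a child VM, the RES column drops back to near zero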
Is this expected behavior? Should the parent be retained in the system
cache during a virsh restore operation? Are there any configurations or
settings that can prevent the parent from being dropped from the cache?
thanks
best backup strategy for full backups
by Michael Niehren
Hi all,
currently I only do full backups of my virtual machines.
For the backup I use the "old" strategy (a rough sketch with example paths follows the list):
- virsh snapshot-create-as $vmname overlay --disk-only --atomic --no-metadata --quiesce
- copy the qcow2 image file
- virsh blockcommit $vmname $device --active --wait --pivot
- the guest agent in the VM gets a freeze/thaw interval of about 2 seconds
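(As a sketch, with hypothetical paths and device name, the old procedure as a whole is roughly:)

  vmname=myvm
  device=vda
  virsh snapshot-create-as $vmname overlay --disk-only --atomic --no-metadata --quiesce
  cp /var/lib/libvirt/images/$vmname.qcow2 /backup/$vmname.qcow2
  virsh blockcommit $vmname $device --active --wait --pivot
  # (the temporary overlay file created by the snapshot can be removed afterwards)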
Now I want to switch to the new strategy with "backup-begin":
- virsh backup-begin $vmname
- the guest agent does not get a freeze/thaw signal
As the guest agent gets no signal, is a backup via "backup-begin" still consistent?
Or, to get a consistent backup, do I have to send a virsh domfsfreeze $vmname before starting the backup and a
virsh domfsthaw $vmname when it is finished?
If so, the interval between freeze and thaw would be much more than 2 seconds on a huge disk.
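(What I have in mind, if the explicit freeze turns out to be needed, is roughly the following. The
domain name is an example, and I am assuming the backup's point in time is fixed when the job starts,
so the thaw could follow right after backup-begin rather than after the whole copy; please correct me
if that assumption is wrong.)

  virsh domfsfreeze $vmname
  virsh backup-begin $vmname
  virsh domfsthaw $vmname
  # then wait for the backup job to finish, e.g. by watching virsh domjobinfo $vmname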
So, is the old method currently still the better way if I am only doing full backups?
best regards,
Michael