On 14. 08. 19 13:27, Peter Krempa wrote:
On Tue, Aug 13, 2019 at 15:15:11 +0200, Petr Stodulka wrote:
> Hi guys,
> I had to move to a new laptop a week ago and I messed up the migration of my
> virtual machines. I recovered the virtual machines on the new laptop (virsh
> define) using the backed-up XML files, but I don't have any file with metadata
> about the snapshots. The original storage has already been wiped, so I cannot
> get those files anymore.
>
> Using qemu-img info I can see my snapshots inside the qcow2 images, but libvirt
> doesn't know about them:
> ###########################################
> # virsh snapshot-list rhlvm
> Name Creation Time State
> -------------------------------
>
> # qemu-img info rhlvm.qcow2
> image: rhlvm.qcow2
> file format: qcow2
> virtual size: 25G (26843545600 bytes)
> disk size: 2.9G
> cluster_size: 65536
> Snapshot list:
> ID TAG VM SIZE DATE VM CLOCK
> 1 prepared 0 2018-09-05 11:06:06 00:00:00.000
> Format specific information:
> compat: 1.1
> lazy refcounts: true
> refcount bits: 16
> corrupt: false
>
>
> ###########################################
>
> Is there any nice way to regenerate the snapshot metadata for libvirt from
> the data inside the qcow2 images? I have a bunch of VMs, so if there is a nice
> way to recover that data, you will make me really happy :)
Hi,
libvirt unfortunately does not support metadata-less snapshots, so you
must recreate the metadata to be able to use them. If you didn't modify
the VM's configuration between the time you took the snapshot and now,
it should be fairly straightforward to recover them, but it will require
some manual steps.
The snapshot creation API has a _REDEFINE flag which allows you to create
the snapshot metadata without actually making any changes to the disks
(virsh snapshot-create --redefine).
For this to work you must prepare a definition of the snapshot. I'll
provide an example with annotations on what to update:
<domainsnapshot>
<name>1565701354</name> <--- this must be equal to the 'TAG'
field in the qemu-img output
<state>running</state> <--- state of the VM at snapshot, you'll
probably need to use shutoff as the VM was not running
<creationTime>1565701354</creationTime> <--- convert the "DATE" field
to a unix timestamp (see the conversion example below the XML)
<memory snapshot='internal'/> <----- your snapshot is internal so
this is ok
<disks>
<disk name='hda' snapshot='no'/> <--- entries here depend on your
configuration; you need one line per disk, read-only disks must use
snapshot='no', the others snapshot='internal'
<disk name='vda' snapshot='internal'/>
<disk name='vdb' snapshot='internal'/>
<disk name='vdc' snapshot='internal'/>
</disks>
<domain type='kvm'> <--- this is the domain definition XML obtained
by running virsh dumpxml --migratable --inactive --security-info
(see the command sketch after this example)
<name>upstream</name>
<uuid>841752b8-9452-4078-a62b-8fd9a9af011c</uuid>
[...] (trimmed irrelevant stuff but make sure to use full XML)
<devices>
<emulator>/home/pipo/git/qemu.git/x86_64-softmmu/qemu-system-x86_64</emulator>
<disk type='file' device='cdrom'>
<driver name='qemu' type='raw'/>
<source
file='/var/lib/libvirt/images/systemrescuecd-x86-4.9.5.iso'/>
<backingStore/>
<target dev='hda' bus='ide'/>
<readonly/>
<boot order='1'/>
<address type='drive' controller='0' bus='0'
target='0' unit='0'/>
</disk>
<disk type='file' device='disk'> <---- these correspond to
the <disks> entries above
<driver name='qemu' type='qcow2'/>
<source file='/tmp/pull4.qcow2'/>
<target dev='vda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00'
slot='0x0a' function='0x0'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/tmp/commit4.qcow2'/>
<target dev='vdb' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00'
slot='0x0c' function='0x0'/>
</disk>
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/tmp/copy4.qcow2'/>
<target dev='vdc' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00'
slot='0x0d' function='0x0'/>
</disk>
[...]
</domain>
<cookie>
</cookie>
</domainsnapshot>
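To fill in <creationTime>, the "DATE" column from the qemu-img output can
be converted to a unix timestamp with GNU date, e.g. for the 'prepared'
snapshot shown above:

  date -d '2018-09-05 11:06:06' +%s

(GNU date interprets the value in your local timezone.)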
Save the prepared document to a file and redefine the snapshot with:
virsh snapshot-create $VMNAME snapshot.xml --redefine --current
Use --current only for the most recent snapshot.
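Since you have a bunch of VMs, here is a rough sketch of the per-VM steps
(the file names are just placeholders; the snapshot XML itself still has
to be assembled by hand per the annotations above):

  VM=rhlvm
  IMG=/var/lib/libvirt/images/rhlvm.qcow2

  # snapshot tags and dates recorded inside the image
  qemu-img info "$IMG"

  # full inactive domain XML to embed in <domainsnapshot>
  virsh dumpxml --migratable --inactive --security-info "$VM" > "$VM"-domain.xml

  # after assembling "$VM"-snap.xml as shown above
  virsh snapshot-create "$VM" "$VM"-snap.xml --redefine --current

  # verify that libvirt now knows about the snapshot
  virsh snapshot-list "$VM"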
Thanks! It works as you described. All my snapshots are recovered.
--
Petr Stodulka
Core Services (In-place upgrades and migrations)
Red Hat