[libvirt-users] Help with understanding and solving snapshot problem

Hello

Fairly new to libvirt. I’m hoping to both solve a problem with this question as well as learn more detail about how libvirt works.

Using RHEL 6.4; the libvirt version is 0.10.2 and the qemu-img version is 0.12.1.2.

Using virt-manager I created a VM. Nothing unusual as far as I can see. I then added a disk, so I have a second virtio-based volume which the guest mounts as a separate filesystem. At some stage after that I created a snapshot of the system.

I can’t delete the snapshot. The end result is I’d like to create a snapshot that excludes the second (much larger) disk - that is backed up via conventional backup systems. But the issue is first deleting the snapshot, and my understanding of what it’s saying.

So here is some basic info:

# virsh snapshot-list host1
 Name                 Creation Time             State
------------------------------------------------------------
 snap1-host1          2014-01-19 16:59:10 +1100 shutoff

# virsh snapshot-info host1 --current
Name:           snap1-host1
Domain:         host1
Current:        yes
State:          shutoff
Location:       internal
Parent:         -
Children:       0
Descendants:    0
Metadata:       yes

So I try and delete it:

# virsh snapshot-delete host1 --current
error: Failed to delete snapshot snap1-host1
error: unsupported configuration: deletion of 1 external disk snapshots not supported yet

Why does it say “external” in the above error, when the snapshot-info says its Location is internal? Are they not related ideas?

The img file /var/lib/libvirt/images/host1.img is not being used by KVM; the snapshot files are in use instead:

# virsh domblklist host1
Target     Source
------------------------------------------------
vda        /var/lib/libvirt/snap1-host1.qcow2
vdb        /var/lib/libvirt/snap1-host1-db.qcow2
hdc        -

Querying them:

# qemu-img info /var/lib/libvirt/snap1-host1.qcow2
image: /var/lib/libvirt/snap1-host1.qcow2
file format: qcow2
virtual size: 24G (25769803776 bytes)
disk size: 3.5G
cluster_size: 65536

[root@cocoa libvirt]# qemu-img info /var/lib/libvirt/snap1-host1-db.qcow2
image: /var/lib/libvirt/snap1-host1-db.qcow2
file format: qcow2
virtual size: 1.6T (1800279089664 bytes)
disk size: 325G
cluster_size: 65536

What have I done wrong and how can I correct this so I can discard the existing snapshot and have a snapshot of the system excluding the second disk (and optionally which I can re-take at intervals and discard old ones)?

Many thanks,
rolf.

On 04/10/2014 12:00 AM, rolf wrote:
Hello
Fairly new to libvirt. I’m hoping to both solve a problem with this question as well as learn more detail about how libvirt works.
[Can you convince your mailer to wrap long lines? It makes it easier for other readers]
Using RHEL 6.4 and libvirt version is 0.10.2 and qemu-img version is 0.12.1.2
Have you considered raising this as a support request with Red Hat? From the upstream perspective, 0.10.2 is quite old, and Red Hat may be better equipped to answer questions about what snapshot support they have backported to that version of RHEL. In particular, the fact that you are not using RHEL 6.5 is a bit worrisome, and I also understand that RHEL 6.x tends to not support internal snapshots.
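For reference, a quick way to confirm exactly which builds are in play (the package names here assume a stock RHEL install) is:

# virsh version
# rpm -q libvirt qemu-kvm qemu-img

The first prints the library, API and hypervisor versions libvirt itself reports; the second shows the exact RPM builds including the errata suffix, which is what a Red Hat support ticket will ask for.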
Using virt-manager I created a VM. Nothing unusual as far as I can see. I then added a disk. So I have a second virtio based volume which the guest then mounts as a separate filesystem. At some stage after that I created a snapshot of the system.
I can’t delete the snapshot. The end result is I’d like to create a snapshot that excludes the second (much larger) disk - that is backed up via conventional backup systems. But the issue is first deleting the snapshot, and my understanding of what it’s saying.
So here is some basic info:
# virsh snapshot-list host1
 Name                 Creation Time             State
------------------------------------------------------------
 snap1-host1          2014-01-19 16:59:10 +1100 shutoff
# virsh snapshot-info host1 --current
Name:           snap1-host1
Domain:         host1
Current:        yes
State:          shutoff
Location:       internal
So the fact that you created an internal snapshot may have already put you in unsupported territory for the versions of software that you are using. That said, I can still try to help, and I hope that upstream behaves nicer in this regard, although you have certainly given us enough steps to try and reproduce if this is still a bug in upstream. Or maybe the bug is here, and you really did create an external snapshot but the code is reporting it incorrectly. Can you post the actual command that you used to create the snapshot?
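For comparison, and only as a rough sketch since exact flag support varies across libvirt versions: a bare creation command such as

# virsh snapshot-create-as host1 snap1-host1

asks for an internal snapshot (state stored inside existing qcow2 images), whereas naming a per-disk overlay file requests an external one, along the lines of

# virsh snapshot-create-as host1 snap1-host1 --diskspec vda,snapshot=external,file=/var/lib/libvirt/snap1-host1.qcow2

so knowing which form (or which virt-manager action) was used would tell us a lot.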
Parent:         -
Children:       0
Descendants:    0
Metadata:       yes
So I try and delete it:
# virsh snapshot-delete host1 --current
error: Failed to delete snapshot snap1-host1
error: unsupported configuration: deletion of 1 external disk snapshots not supported yet
Why does it say “external” in the above error, when the snapshot-info says its Location is internal? Are they not related ideas?
It's very likely that you have tripped up on a bug, and perhaps on a bug that has been fixed in the meantime in newer libvirt, which either needs backporting to the RHEL version, or which Red Hat has deemed not worth backporting because of their level of limited snapshot support in RHEL 6. But again, going through a Red Hat support ticket will get faster results than asking upstream.
The img file /var/lib/libvirt/images/host1.img is not being used by KVM; the snapshot files are in use instead:
# virsh domblklist host1
Target     Source
------------------------------------------------
vda        /var/lib/libvirt/snap1-host1.qcow2
vdb        /var/lib/libvirt/snap1-host1-db.qcow2
hdc        -
Querying them:
# qemu-img info /var/lib/libvirt/snap1-host1.qcow2
image: /var/lib/libvirt/snap1-host1.qcow2
file format: qcow2
virtual size: 24G (25769803776 bytes)
disk size: 3.5G
cluster_size: 65536
No external backing file listed; but I'm not sure how this relates to the original snapshot. Maybe you also want to show 'virsh snapshot-dumpxml host1 snap1-host1' to make it more obvious what the domain was using at the time you took the snapshot?
[root@cocoa libvirt]# qemu-img info /var/lib/libvirt/snap1-host1-db.qcow2
image: /var/lib/libvirt/snap1-host1-db.qcow2
file format: qcow2
virtual size: 1.6T (1800279089664 bytes)
disk size: 325G
cluster_size: 65536
What have I done wrong and how can I correct this so I can discard the existing snapshot and have a snapshot of the system excluding the second disk (and optionally which I can re-take at intervals and discard old ones)?
I don't quite have a full picture of how you got into the situation. If you are trying to just get rid of the snapshot, you could always try 'virsh snapshot-delete --metadata host1 snap1-host1', to make libvirt forget about the snapshot without cleaning up any actual data (leaving any external backing chains intact, and not removing internal snapshots from qcow2 files).

--
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org

Hello

Thanks heaps for your suggestions. Responses inline.

On 10 Apr 2014, at 10:34 pm, Eric Blake <eblake@redhat.com> wrote:
[ … ]
[Can you convince your mailer to wrap long lines? It makes it easier for other readers]
I’ll try and keep the lines short. I don’t see any obvious setting to change the wrap.
Using RHEL 6.4 and libvirt version is 0.10.2 and qemu-img version is 0.12.1.2
Have you considered raising this as a support request with Red Hat?
My error - I am using RHEL 6.5. Sorry about the wrong info. I could put this in to Red Hat, but thought the mailing list might be a better initial place.
From the upstream perspective, 0.10.2 is quite old, and Red Hat may be better equipped to answer questions about what snapshot support they have backported to that version of RHEL. In particular, the fact that you are not using RHEL 6.5 is a bit worrisome, and I also understand that RHEL 6.x tends to not support internal snapshots.
[ … ]
# virsh snapshot-list host1
 Name                 Creation Time             State
------------------------------------------------------------
 snap1-host1          2014-01-19 16:59:10 +1100 shutoff
# virsh snapshot-info host1 --current
Name:           snap1-host1
Domain:         host1
Current:        yes
State:          shutoff
Location:       internal
So the fact that you created an internal snapshot may have already put you in unsupported territory for the versions of software that you are using. That said, I can still try to help, and I hope that upstream behaves nicer in this regard, although you have certainly given us enough steps to try and reproduce if this is still a bug in upstream. Or maybe the bug is here, and you really did create an external snapshot but the code is reporting it incorrectly. Can you post the actual command that you used to create the snapshot?
Unfortunately no, as I cannot remember it. I have a feeling that it was a menu item in the virt-manager GUI, but that said I can’t find it. It is most likely to have been the simplest form of the virsh snapshot-create command.
Parent:         -
Children:       0
Descendants:    0
Metadata:       yes
So I try and delete it:
# virsh snapshot-delete host1 --current
error: Failed to delete snapshot snap1-host1
error: unsupported configuration: deletion of 1 external disk snapshots not supported yet
Why does it say “external” in the above error, when the snapshot-info says its Location is internal? Are they not related ideas?
It's very likely that you have tripped up on a bug, and perhaps on a bug that has been fixed in the meantime in newer libvirt, which either needs backporting to the RHEL version, or which Red Hat has deemed not worth backporting because of their level of limited snapshot support in RHEL 6. But again, going through a Red Hat support ticket will get faster results than asking upstream.
Thanks. I just did a yum update to libvirt and got libvirt-0.10.2-29.el6_5.7.x86_64.rpm, which is not a big change. In any event the error persists unchanged when I try to delete the snapshot.
The img file /var/lib/libvirt/images/host1.img is not being used by KVM; the snapshot files are in use instead:
# virsh domblklist host1
Target     Source
------------------------------------------------
vda        /var/lib/libvirt/snap1-host1.qcow2
vdb        /var/lib/libvirt/snap1-host1-db.qcow2
hdc        -
Querying them:
# qemu-img info /var/lib/libvirt/snap1-host1.qcow2
image: /var/lib/libvirt/snap1-host1.qcow2
file format: qcow2
virtual size: 24G (25769803776 bytes)
disk size: 3.5G
cluster_size: 65536
No external backing file listed; but I'm not sure how this relates to the original snapshot. Maybe you also want to show 'virsh snapshot-dumpxml host1 snap1-host1' to make it more obvious what the domain was using at the time you took the snapshot?
ok. That output shows:

# virsh snapshot-dumpxml host1 snap1-host1
<domainsnapshot>
  <name>snap1-host1</name>
  <description>After install completed</description>
  <state>shutoff</state>
  <creationTime>1390111150</creationTime>
  <memory snapshot='no'/>
  <disks>
    <disk name='vda' snapshot='external'>
      <driver type='qcow2'/>
      <source file='/var/lib/libvirt/snap1-host1.qcow2'/>
    </disk>
    <disk name='vdb' snapshot='external'>
      <driver type='qcow2'/>
      <source file='/var/lib/libvirt/snap1-host1-db.qcow2'/>
    </disk>
    <disk name='hdc' snapshot='no'/>
  </disks>
  <domain type='kvm'>
    <name>host1</name>
    <uuid>e1a43a89-af8f-95e2-e242-a42a44afc127</uuid>
    <memory unit='KiB'>16777216</memory>
    <currentMemory unit='KiB'>16777216</currentMemory>
    <vcpu placement='static'>6</vcpu>
    <os>
      <type arch='x86_64' machine='rhel6.5.0'>hvm</type>
      <boot dev='hd'/>
      <bootmenu enable='no'/>
    </os>
    <features>
      <acpi/>
      <apic/>
      <pae/>
    </features>
    <clock offset='utc'/>
    <on_poweroff>destroy</on_poweroff>
    <on_reboot>restart</on_reboot>
    <on_crash>restart</on_crash>
    <devices>
      <emulator>/usr/libexec/qemu-kvm</emulator>
      <disk type='file' device='disk'>
        <driver name='qemu' type='raw' cache='none'/>
        <source file='/var/lib/libvirt/images/host1.img'/>
        <target dev='vda' bus='virtio'/>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
      </disk>
      <disk type='block' device='disk'>
        <driver name='qemu' type='raw' cache='none' io='native'/>
        <source dev='/dev/sdd1'/>
        <target dev='vdb' bus='virtio'/>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
      </disk>
      <disk type='file' device='cdrom'>
        <driver name='qemu' type='raw'/>
        <target dev='hdc' bus='ide'/>
        <readonly/>
        <address type='drive' controller='0' bus='1' target='0' unit='0'/>
      </disk>
      <controller type='ide' index='0'>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
      </controller>
      <controller type='usb' index='0'/>
      <interface type='bridge'>
        <mac address='52:54:00:97:0e:67'/>
        <source bridge='br3'/>
        <model type='virtio'/>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
      </interface>
      <serial type='pty'>
        <target port='0'/>
      </serial>
      <console type='pty'>
        <target type='serial' port='0'/>
      </console>
      <input type='tablet' bus='usb'/>
      <input type='mouse' bus='ps2'/>
      <graphics type='vnc' port='-1' autoport='yes'/>
      <sound model='ich6'>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
      </sound>
      <video>
        <model type='cirrus' vram='9216' heads='1'/>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
      </video>
      <memballoon model='virtio'>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
      </memballoon>
    </devices>
  </domain>
</domainsnapshot>

Not completely sure what to derive from that output?
[root@cocoa libvirt]# qemu-img info /var/lib/libvirt/snap1-host1-db.qcow2
image: /var/lib/libvirt/snap1-host1-db.qcow2
file format: qcow2
virtual size: 1.6T (1800279089664 bytes)
disk size: 325G
cluster_size: 65536
I don't quite have a full picture of how you got into the situation. If you are trying to just get rid of the snapshot, you could always try 'virsh snapshot-delete --metadata host1 snap1-host1', to make libvirt forget about the snapshot without cleaning up any actual data (leaving any external backing chains intact, and not removing internal snapshots from qcow2 files).
ok. What implications does this have long term? Is there then a related step to remove the data of the snapshot that is no longer referenced? Reclaiming the space would be handy.

And as before, given the structure of the VM and its two disks, how is a snapshot created excluding the second disk (vdb in the above XML output)?

Many thanks for your help so far.

regards
rolf.
--
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org

On 04/10/2014 04:38 PM, rolf wrote:
[Can you convince your mailer to wrap long lines? It makes it easier for other readers]
I’ll try and keep the lines short. I don’t see any obvious setting to change the wrap.
Thanks for being accommodating; this was indeed easier to read (alas, many web mailers these days lack settings for automatic wrap, so I end up whacking the Enter key for manual wrap when I'm forced to use a web mail interface for sending to a technical list)
# virsh snapshot-info host1 --current
Name:           snap1-host1
Domain:         host1
Current:        yes
State:          shutoff
Location:       internal
# virsh snapshot-dumpxml host1 snap1-host1
<domainsnapshot>
  <name>snap1-host1</name>
  <description>After install completed</description>
  <state>shutoff</state>
  <creationTime>1390111150</creationTime>
  <memory snapshot='no'/>
  <disks>
    <disk name='vda' snapshot='external'>
Okay, we've definitely demonstrated a bug in at least 'virsh snapshot-list' for that particular build of libvirt; this proves that the snapshot is definitely external, even though the info output claimed it was internal. I didn't search whether a bugzilla entry was already tracking this for RHEL 6.5; it's not a crasher, so it probably won't get fixed until RHEL 6.6. If you'd like, you can open a BZ, (it might get closed as a dup if someone else finds where it was already reported, even though I didn't do that search), to make sure it doesn't get lost. Meanwhile, creating external snapshots is supported in that version of RHEL, but not deleting (at least not via virsh directly), so you'll have to get your hands a bit dirty with qemu-img and virsh edit.
      <driver type='qcow2'/>
      <source file='/var/lib/libvirt/snap1-host1.qcow2'/>
This says that snap1-host1.qcow2 is the wrapper file created at the time of the snapshot, and that...
    </disk>
    <disk name='vdb' snapshot='external'>
      <driver type='qcow2'/>
      <source file='/var/lib/libvirt/snap1-host1-db.qcow2'/>
    </disk>
    <disk name='hdc' snapshot='no'/>
  </disks>
  <domain type='kvm'>
...
    <devices>
      <emulator>/usr/libexec/qemu-kvm</emulator>
      <disk type='file' device='disk'>
        <driver name='qemu' type='raw' cache='none'/>
        <source file='/var/lib/libvirt/images/host1.img'/>
it should have a backing file of /var/lib/libvirt/images/host1.img. Wonder why your 'qemu-img info' output didn't show that fact?
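For comparison (a hand-written sketch, not your actual output), an overlay created as part of an external snapshot would normally report its parent with an extra line in 'qemu-img info', roughly:

# qemu-img info /var/lib/libvirt/snap1-host1.qcow2
image: /var/lib/libvirt/snap1-host1.qcow2
file format: qcow2
...
backing file: /var/lib/libvirt/images/host1.img

If that 'backing file:' line really is absent, qemu is treating the qcow2 file as a standalone image rather than as an overlay on top of host1.img.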
I don't quite have a full picture of how you got into the situation. If you are trying to just get rid of the snapshot, you could always try 'virsh snapshot-delete --metadata host1 snap1-host1', to make libvirt forget about the snapshot without cleaning up any actual data (leaving any external backing chains intact, and not removing internal snapshots from qcow2 files).
ok. What implications does this have long term? Is there then a related step to remove the data of the snapshot that is no longer referenced? Reclaiming the space would be handy.
Are you trying to revert to that state, or just forget that you ever took the snapshot? Are you okay keeping the two files as a backing chain, or do you want to collapse it into one? And if you DO want to collapse into one file (so you can delete the other), do you want the kept file to be snap1-host1.qcow2 (do a blockpull operation) or host1.img (do a commit operation)?
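To make the choice concrete, here is a very rough sketch of those two directions - it assumes the backing chain is actually intact (which your 'qemu-img info' output puts in some doubt) and that you have copies of everything first. To keep snap1-host1.qcow2 and pull the base image's data into it (the domain must be running for this):

# virsh blockpull host1 vda --wait

Or to keep host1.img and fold the overlay's changes back into it (with the guest shut off), then repoint the domain at the base image:

# qemu-img commit /var/lib/libvirt/snap1-host1.qcow2
# virsh edit host1    (change vda's <source file=...> back to /var/lib/libvirt/images/host1.img)

Either way you would finish with 'virsh snapshot-delete --metadata host1 snap1-host1' to drop the stale snapshot record.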
And as before, given the structure of the VM and its two disks, how is a snapshot created excluding the second disk? vdb in the above xml output?
If you create the snapshot via command line, 'virsh snapshot-create-as host1 --name ... --diskspec vda,file=/path/for/disk --diskspec vdb,snapshot=no' should be sufficient to exclude vdb from the snapshot (you can use the --print-xml option to see what those options would actually pass to the virDomainSnapshotCreateXML command).
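As a filled-in illustration (the new snapshot name and overlay path below are made up, so adjust to taste):

# virsh snapshot-create-as host1 --name snap2-host1 \
    --diskspec vda,file=/var/lib/libvirt/images/snap2-host1-vda.qcow2 \
    --diskspec vdb,snapshot=no --print-xml

With --print-xml you only see the <domainsnapshot> XML that would be submitted; drop it to actually create the snapshot.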
Many thanks for your help so far.
Glad to hear it, and hope we can continue to be helpful.

--
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org

Hello

On 11 Apr 2014, at 8:52 am, Eric Blake <eblake@redhat.com> wrote:

[ … ]
# virsh snapshot-info host1 --current
Name:           snap1-host1
Domain:         host1
Current:        yes
State:          shutoff
Location:       internal
# virsh snapshot-dumpxml host1 snap1-host1
<domainsnapshot>
  <name>snap1-host1</name>
  <description>After install completed</description>
  <state>shutoff</state>
  <creationTime>1390111150</creationTime>
  <memory snapshot='no'/>
  <disks>
    <disk name='vda' snapshot='external'>
Okay, we've definitely demonstrated a bug in at least 'virsh snapshot-list' for that particular build of libvirt; this proves that the snapshot is definitely external, even though the info output claimed it was internal. I didn't search whether a bugzilla entry was already tracking this for RHEL 6.5; it's not a crasher, so it probably won't get fixed until RHEL 6.6. If you'd like, you can open a BZ, (it might get closed as a dup if someone else finds where it was already reported, even though I didn't do that search), to make sure it doesn't get lost
Thanks, I’ll sort out getting a Bugzilla report put in, but will search it first.
Meanwhile, creating external snapshots is supported in that version of RHEL, but not deleting (at least not via virsh directly), so you'll have to get your hands a bit dirty with qemu-img and virsh edit.
      <driver type='qcow2'/>
      <source file='/var/lib/libvirt/snap1-host1.qcow2'/>
This says that snap1-host1.qcow2 is the wrapper file created at the time of the snapshot, and that...
    </disk>
    <disk name='vdb' snapshot='external'>
      <driver type='qcow2'/>
      <source file='/var/lib/libvirt/snap1-host1-db.qcow2'/>
    </disk>
    <disk name='hdc' snapshot='no'/>
  </disks>
  <domain type='kvm'>
  ...
    <devices>
      <emulator>/usr/libexec/qemu-kvm</emulator>
      <disk type='file' device='disk'>
        <driver name='qemu' type='raw' cache='none'/>
        <source file='/var/lib/libvirt/images/host1.img'/>
it should have a backing file of /var/lib/libvirt/images/host1.img. Wonder why your 'qemu-img info' output didn't show that fact?
I’m not sure. The file is definitely there but hasn’t been used since creation, it seems. Something else I don’t quite understand.

# ls -l /var/lib/libvirt/images/host1.img
-rw------- 1 qemu qemu 25769803776 Jan 19 16:58 /var/lib/libvirt/images/host1.img

I presume that had I never created a snapshot then it’s this file that would change as the VM is used?
I don't quite have a full picture of how you got into the situation. If you are trying to just get rid of the snapshot, you could always try 'virsh snapshot-delete --metadata host1 snap1-host1', to make libvirt forget about the snapshot without cleaning up any actual data (leaving any external backing chains intact, and not removing internal snapshots from qcow2 files).
ok. What implications does this have long term? Is there then a related step to remove the data of the snapshot that is no longer referenced? Reclaiming the space would be handy.
Are you trying to revert to that state, or just forget that you ever took the snapshot? Are you okay keeping the two files as a backing chain, or do you want to collapse it into one? And if you DO want to collapse into one file (so you can delete the other), do you want the kept file to be snap1-host1.qcow2 (do a blockpull operation) or host1.img (do a commit operation)?
ok. What I’d like is to have a state where the snapshot I took never existed, which I assume means that the img file in /var/lib/libvirt/images becomes the file used by the VM as it runs? Then I’d like to make a snapshot of the guest - memory and disk state - but with the second disk excluded (it’s a separate mount point the guest uses for MySQL data). But if I can get it working reliably, a snapshot of both disks would also be desirable for me.
And as before, given the structure of the VM and its two disks, how is a snapshot created excluding the second disk? vdb in the above xml output?
If you create the snapshot via command line, 'virsh snapshot-create-as host1 --name ... --diskspec vda,file=/path/for/disk --diskspec vdb,snapshot=no' should be sufficient to exclude vdb from the snapshot (you can use the --print-xml option to see what those options would actually pass to the virDomainSnapshotCreateXML command).
ok. In “/path/for/disk” is that a path, or a filename called “disk”, and can it be anything? The default seems to have been /var/lib/libvirt/ - is that not ideal?

Thanks again. This is helping.

regards
rolf.
Many thanks for your help so far.
Glad to hear it, and hope we can continue to be helpful.
--
Eric Blake   eblake redhat com    +1-919-301-3266
Libvirt virtualization library http://libvirt.org