On 07/21/2014 10:01 PM, shyu wrote:
> # rpm -q libvirt qemu-kvm-rhev
> libvirt-1.1.1-29.el7.x86_64
> qemu-kvm-rhev-1.5.3-60.el7ev_0.2.x86_64
These are downstream builds. Can you reproduce your situation with
upstream libvirt 1.2.6 and qemu 2.1-rc2? It may be that you are hitting
behavior that was introduced by downstream backports.
> 1. Check source file
> # qemu-img info /var/lib/libvirt/images/rhel6.img
> image: /var/lib/libvirt/images/rhel6.img
> file format: qcow2
> virtual size: 5.0G (5368709120 bytes)
> disk size: 1.2G
Disk size tracks how much host storage has actually been allocated to
the qcow2 file, NOT how much data the guest has written.
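If you want to see the two views side by side, something like this
should do it (untested here, and 'qemu-img map' requires a reasonably
recent qemu):
# qemu-img map /var/lib/libvirt/images/rhel6.img
lists which guest offsets are actually backed by data and where that
data lives in the host file, while:
# ls -ls /var/lib/libvirt/images/rhel6.img
shows in its first column how many host blocks are actually allocated
to the file, which is roughly what 'disk size' above is reporting.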
> 3. Check destination file's disk size
> # qemu-img info /var/lib/libvirt/images/copy.img
> image: /var/lib/libvirt/images/copy.img
> file format: qcow2
> virtual size: 5.0G (5368709120 bytes)
> disk size: 2.0G
The thing to remember here is that blockcopy defaults to doing a cluster
at a time, even if the guest has not yet touched every sector within the
cluster. It may be that you are hitting cases where the copy operation
ends up writing an entire cluster in the destination where only a
partial cluster had been allocated in the source. But that does not
necessarily mean the copy is flawed, only that the default granularity
was large enough to inflate the destination with redundant all-zero
sectors in the interest of speeding up the operation, or that the
destination is not as sparse as the source. Qemu offers a 'granularity'
parameter to the 'drive-mirror' command to tune this, but libvirt does
not (currently) expose that knob to the user, so for now libvirt just
relies on qemu's defaults.
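For what it's worth, if you are willing to talk QMP yourself (for
example via 'virsh qemu-monitor-command', which libvirt treats as
unsupported use), 'drive-mirror' takes the granularity inline. An
untested sketch, with $dom and the device name made up for the example:
# virsh qemu-monitor-command $dom --pretty '{ "execute": "drive-mirror",
  "arguments": { "device": "drive-virtio-disk0",
                 "target": "/var/lib/libvirt/images/copy.img",
                 "format": "qcow2", "sync": "full",
                 "granularity": 65536 } }'
If you go that route, you are also responsible for completing or
canceling the resulting block job yourself.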
It may also be a factor of how much copy-on-write dirtying is happening.
If the guest is actively hammering on the disk during the copy
operation, the same cluster may be marked dirty multiple times; if qemu
allocates a new destination cluster on each pass through the dirty
bitmap, the result can be some size inflation from clusters that were
written early and then abandoned when a later pass rewrites that data
into a newly allocated cluster. I'm not familiar enough with qemu's
block handling to know whether this is actually happening, or whether
qemu could be patched to do better garbage collection of clusters left
unused when it does.
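If the extra allocation bothers you, you can always re-sparsify the
copy after the fact, once nothing is writing to it; 'qemu-img convert'
skips all-zero clusters by default, so something like this (the
destination name is just an example) should give a more compact image:
# qemu-img convert -O qcow2 /var/lib/libvirt/images/copy.img \
    /var/lib/libvirt/images/copy-compact.img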
There is nothing that libvirt can do about this. I don't think it is a
bug, but you may want to ask on the qemu list, since it is up to qemu
whether or not the copy will be inflated in host size. But inflation is
not a bad thing in itself - the real question is whether the copy
contained the same guest contents as the original at the time the copy
completed. As long as that is the case, the copy is reliable even if
the host sizes differ.
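If your qemu-img has the 'compare' subcommand, you can check that
directly (only while the guest is not modifying either image):
# qemu-img compare /var/lib/libvirt/images/rhel6.img \
    /var/lib/libvirt/images/copy.img
which reports whether the guest-visible contents match, independent of
how much host space each file happens to consume.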
--
Eric Blake eblake redhat com +1-919-301-3266
Libvirt virtualization library
http://libvirt.org