Eric-
Why wouldn't a 'virsh blockcopy --pivot domain src dest' be sufficient to
migrate the volumes to a new storage pool?
-Jamie
On Fri, Feb 1, 2013 at 3:44 PM, Eric Blake <eblake(a)redhat.com> wrote:
On 02/01/2013 11:20 AM, Jamie Fargen wrote:
> I am searching for directions for using live block migration to copy
> running vm's to a different storage pool.
>
>
> Example: VM1 running on Host1, the image(s) for VM1 are stored in
> /var/lib/libvirt/images. I'd like to copy the disk image(s) that VM1
> is using to /nfs/images. Without stopping/pausing/powering down the
> VM.
>
> Do you have any examples or documentation of how to accomplish this
> process?
Unfortunately, we probably need to do a better job on documentation or
examples; there has been a patch proposed that might help:
https://www.redhat.com/archives/libvir-list/2013-January/msg01637.html
But it is indeed possible to migrate storage with qemu 1.3 or later,
with a guest downtime of only a fraction of a second. True avoidance of
guest downtime during a storage migration requires support for
persistent dirty bitmaps, which won't be added until at least qemu 1.5;
likewise, the lack of persistent dirty bitmaps means the current libvirt
will only let you do live storage migration on a transient guest (the
hope is that the persistent bitmap of qemu 1.5 will let libvirt allow
block mirroring across both guest and libvirtd restarts, and therefore I
can add code to libvirt to make blockcopy work for a persistent guest).
Thankfully, it is possible to make a guest transient, do the storage
migration, then make the guest persistent again.
So, the way I have done live block migration is as follows:
# save off the persistent definition for later
virsh dumpxml --inactive $dom > $dom.xml
# make the guest transient
virsh undefine $dom
# remind myself which disks need migration
virsh domblklist $dom
# for each disk (such as "vda"), do a migration
virsh blockcopy $dom $disk /path/to/new --wait --verbose --pivot
# make the guest persistent again
virsh define $dom.xml
Note that while my example blocked on each disk in turn, you can also
migrate multiple disks in parallel: omit the '--wait' parameter to
'virsh blockcopy', and instead use 'virsh blockjob $dom $disk --pivot'
to do the pivots. An attempt to pivot will fail until the disk has
reached mirroring state, so the tradeoff is that you have to set up
your own polling loop to see when each disk is ready.
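Such a polling loop might look like the following sketch; the domain
name, disk targets, destination paths, and the exact text matched in
the 'virsh blockjob --info' output are all assumptions to adapt to
your setup:

```shell
#!/bin/sh
# Hypothetical sketch of parallel storage migration: start a blockcopy
# job per disk without --wait, poll until each job reports mirroring
# (100 %), then pivot.  Names and paths below are placeholders.
dom=VM1
disks="vda vdb"

# Succeeds once the 'virsh blockjob --info' output piped into it
# reports 100 % (the output format may vary between libvirt versions;
# adjust the pattern to match yours).
mirror_reached() {
    grep -q '100 %'
}

if command -v virsh >/dev/null 2>&1; then
    # Kick off all copies concurrently (no --wait).
    for disk in $disks; do
        virsh blockcopy "$dom" "$disk" "/nfs/images/$disk-copy.img"
    done

    # Poll each job; pivot as soon as it is ready.
    for disk in $disks; do
        until virsh blockjob "$dom" "$disk" --info | mirror_reached; do
            sleep 1
        done
        virsh blockjob "$dom" "$disk" --pivot
    done
fi
```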
Also, a shallow block copy can reduce the time spent waiting for a
disk to reach mirroring state. That is, given the following disk layout
prior to the migration (such as one created by 'virsh
snapshot-create-as $dom --disk-only'):
base <- snapshot
then avoiding --shallow has to transfer the entire disk image, creating:
copy
while using --shallow only has to transfer the contents of snapshot,
creating:
base <- copy
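In command form, the shallow variant might look like this sketch
(the domain name, disk target, and destination path are placeholders):

```shell
#!/bin/sh
# Hypothetical sketch: shallow copy of just the top snapshot layer.
# The result is a new leaf at /nfs/images/copy.img whose backing file
# is still the original base image.
dom=VM1

if command -v virsh >/dev/null 2>&1; then
    virsh blockcopy "$dom" vda /nfs/images/copy.img \
        --shallow --wait --verbose --pivot
fi
```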
When using --shallow, libvirt will NOT copy the base file; and since
qemu prefers to create backing chains that use absolute paths to
the base files, the copied leaf image defaults to pointing to the
original base image. If you need to migrate the entire chain including
the base file, then you have to pre-create a blank destination qcow2
file that points to the new desired base image (using qemu-img create),
then use the --reuse-external flag when telling libvirt to do the
blockcopy, and avoid doing the pivot operation until the base file has
been copied to its new location (it is possible to copy the base image
in parallel with waiting for libvirt to reach the full mirroring phase
of the shallow blockcopy of the leaf image, then manually do the pivot
once both operations are complete, regardless of which finished first).
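Put together, that sequence might be sketched as follows; all paths,
the domain name, and the disk target are placeholders, and for
simplicity this version copies the base before starting the mirror
rather than in parallel:

```shell
#!/bin/sh
# Hypothetical sketch of migrating a whole chain (base <- snapshot)
# while still using --shallow for the leaf.
dom=VM1
disk=vda

if command -v virsh >/dev/null 2>&1; then
    # 1. Put the base image at its new location first (this step can
    #    instead run in parallel with step 3 if you skip --wait and
    #    poll the job yourself).
    cp /var/lib/libvirt/images/base.img /nfs/images/base.img

    # 2. Pre-create the destination leaf so it points at the NEW base.
    qemu-img create -f qcow2 -b /nfs/images/base.img /nfs/images/copy.img

    # 3. Shallow copy the leaf into the pre-created file; without
    #    --pivot, --wait returns once the mirroring phase is reached
    #    and leaves the job running.
    virsh blockcopy "$dom" "$disk" /nfs/images/copy.img \
        --shallow --reuse-external --wait --verbose

    # 4. Pivot now that both the mirror and the base copy are done.
    virsh blockjob "$dom" "$disk" --pivot
fi
```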
Further use of blockpull or blockcommit can reduce the length of a
backing chain without any guest downtime (although as of qemu 1.4, while
blockpull can be used to collapse down to a single element, blockcommit
only works for collapsing a chain of 3 or more files down to a minimum
of 2).
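For instance, flattening the "base <- copy" chain left behind by a
shallow copy might look like this sketch (domain and disk names are
placeholders):

```shell
#!/bin/sh
# Hypothetical sketch: collapse "base <- copy" down to a single
# standalone image by pulling the base contents into the active copy.
dom=VM1
disk=vda

if command -v virsh >/dev/null 2>&1; then
    # With no base argument, blockpull pulls the entire backing chain
    # into the active image, leaving it with no backing file.
    virsh blockpull "$dom" "$disk" --wait --verbose
fi
```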
I'm happy to answer more questions you may have.
--
Eric Blake eblake redhat com +1-919-301-3266
Libvirt virtualization library
http://libvirt.org