On 10/16/2012 10:10 PM, Corey Quinn wrote:
> On Oct 16, 2012, at 9:15 PM, Eric Blake <eblake@redhat.com> wrote:
>> On 10/16/2012 04:37 AM, Corey Quinn wrote:
>>> I have a KVM VM that's backed by a logical volume on local disk.
>>>
>>> I'd like to copy / move it to an identically configured host.
>>>
>>> [root@virt2 cquinn]# virsh migrate --copy-storage-all --verbose --persistent
>>> node1.www qemu+ssh://10.102.1.11/system
>>
>> Off-hand, that looks right; I just double-checked
>> http://libvirt.org/migration.html to make sure that it looks like a
>> valid native migration usage. But do be aware that --copy-storage-all
>> is poorly tested, and also that it currently requires you to pre-create
>> a destination file by the same name (there has been talk about patching
>> libvirt to auto-create the files, but that patch is still in the works,
>> and for back-compat reasons, it would require you to pass another option).
>>
> Hmm, then what's the "proper" method to copy the VM's backend logical
> volume? At the moment I'm trying to copy a template VM (used for virt-clone) rather
> than having to build one of them per host-- that's tedious and an obnoxious source of
> variance.
So it sounds like you are talking about thin-provisioning - having a
common backing file, and then a different qcow2 file per guest that all
point back to the common backing file.
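For example (just a sketch - the image paths here are made up), you keep one
read-only template image and give each guest a small qcow2 overlay on top of it:

  qemu-img create -f qcow2 -b /var/lib/libvirt/images/template.img \
      /var/lib/libvirt/images/guest1.qcow2

Each overlay then only stores the blocks that differ from the template, so
provisioning a new guest is cheap and you don't rebuild the template per host.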
> I'm not opposed to dd'ing it via netcat, but the last time I tried
> this there was something "off" enough that the VM hung during boot.
For disk images, dd should work. You might also want to look into
libguestfs; it includes tools such as virt-resize and virt-sparsify that
can be used to do copying more efficiently than dd.
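If you do go the dd route, something along these lines should do it (untested,
and assuming an identically sized LV already exists on the destination - adjust
the VG/LV names, the port, and the listen syntax for your flavor of netcat):

  on the destination: nc -l 12345 | dd of=/dev/vg0/node1.www bs=1M
  on the source:      dd if=/dev/vg0/node1.www bs=1M | nc 10.102.1.11 12345

(or pipe dd through ssh instead of netcat), and make sure the guest stays shut
down for the entire copy so the resulting image is consistent.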
>>>
>>> error: Unable to read from monitor: Connection reset by peer
>>
>> This most likely means that qemu on the receiving end died quite early;
>> perhaps looking at /var/log/libvirt/qemu/node1.www.log will give you
>> more clues why.
> Hmm:
> char device redirected to /dev/pts/2
> inet_listen_opts: bind(ipv4,10.102.1.12,5901): Cannot assign requested address
> inet_listen_opts: FAILED
Do you have firewall rules preventing native migration? Have you tried
peer-to-peer migration instead (virsh migrate --p2p)? It avoids needing
to punch quite so many holes in the firewall.
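In your case that would be roughly (untested):

  virsh migrate --p2p --copy-storage-all --verbose --persistent \
      node1.www qemu+ssh://10.102.1.11/system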
> This is on the destination node, but 10.102.1.12 is assigned to the source host, not the
> destination.
Native migration requires that the destination be able to call home back
to the source - but my guess is the firewall was preventing it from
doing so. You are thus seeing the error message from the destination
qemu saying it can't reach the source qemu. Again, peer-to-peer
migration would avoid this (qemu would then be talking to libvirt,
instead of directly across the network, with the migration data
piggybacked over libvirt's already-open network connection).
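If you also want the migration data itself carried over that libvirt connection
rather than a separate qemu-to-qemu socket, the flag for that is --tunnelled
(it has to be combined with --p2p); I'm not sure off-hand how well it combines
with --copy-storage-all, so test that pairing first:

  virsh migrate --p2p --tunnelled --verbose --persistent \
      node1.www qemu+ssh://10.102.1.11/system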
> Does this need to be an IP that's shared between the two hosts?
> Frankly, I don't much care if the vnc display
Native migration requires that the destination can contact the source host.
I'm not sure what you are asking about shared IP addresses (it doesn't
make sense for two hosts to share a single IP).
> Additionally, the migration refuses to start unless the VM is running, but that's far
> from a requirement from my perspective-- not sure if that's a bug or a feature.
Kind of both - right now, 'virsh migrate' caters only to live migration,
but there's a patch proposed upstream that will allow offline migration
via a single API call:
https://www.redhat.com/archives/libvir-list/2012-October/msg00456.html
Meanwhile, you can already do offline migration with a series of calls:
on source: virsh dumpxml $guest > file
then copy all the disk images and that xml file to the destination
on destination: virsh define file
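Concretely (untested, and assuming the guest's disk is an LV at
/dev/vg0/node1.www that already exists with the same size on the destination),
that could look like:

  virsh dumpxml node1.www > node1.www.xml
  scp node1.www.xml 10.102.1.11:
  dd if=/dev/vg0/node1.www bs=1M | ssh 10.102.1.11 'dd of=/dev/vg0/node1.www bs=1M'

and then, on the destination: virsh define node1.www.xml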
I guess the reason that offline migration hasn't made it into a single
API call yet is that it is so trivial in comparison to live migration.
--
Eric Blake eblake@redhat.com +1-919-301-3266
Libvirt virtualization library
http://libvirt.org