On Fri, Apr 28, 2017 at 12:06:11PM +0200, Daniel Kučera wrote:
> Hi Martin,
>
> in the meantime, I've found a solution which I consider at least
> acceptable:
>
> 1. Create a ZFS snapshot of the domain disk (/dev/zstore/test-volume).
> 2. Save the original XML domain definition.
> 3. Create a snapshot in libvirt like this:
>
> virsh snapshot-create --xmlfile snap.xml --disk-only --no-metadata
> test-domain
>
> snap.xml:
>
> <domainsnapshot>
>   <disks>
>     <disk name='/dev/zstore/test-volume'>
>       <source file='/tmp/test-volume.qcow2'/>
>     </disk>
>   </disks>
> </domainsnapshot>
>
> This creates a qcow2 overlay snapshot on top of my base ZFS volume.
>
> 4. Transfer the ZFS volume (with the last snapshot) to the remote host.
> 5. Run live migration, passing the XML saved in step 2 as the DestXML
> and PersistXML parameters, with the VIR_MIGRATE_NON_SHARED_INC flag.
>
> This creates the domain without a snapshot on the remote host, migrates
> only the data from the qcow2 overlay, and commits it to the ZFS base
> volume.
>
> What I don't like about this is that I need to create and take care of
> the qcow2 image.
>
> The best solution for me would be to only create a dirty block bitmap
> in qemu (block-dirty-bitmap-add) in step 3 and then migrate just the
> changed blocks, without creating a qcow2 snapshot.
>
> Do you think it would be possible to implement it like this?
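[For reference, the five steps above can be sketched as a shell session. This is a rough sketch, not the exact commands from the thread: the snapshot name, destination hostname `dst`, and the assumption that `dst` already has a `zstore` pool are all placeholders.]

```shell
# Step 1: snapshot the ZFS volume backing the domain disk
zfs snapshot zstore/test-volume@premigrate

# Step 2: save the original (inactive) domain definition
virsh dumpxml --inactive test-domain > orig.xml

# Step 3: external disk-only snapshot -> qcow2 overlay on top of the volume
virsh snapshot-create --xmlfile snap.xml --disk-only --no-metadata \
      test-domain

# Step 4: send the base volume to the destination host
zfs send zstore/test-volume@premigrate | ssh dst zfs recv zstore/test-volume

# Step 5: live-migrate; --copy-storage-inc is the virsh equivalent of
# VIR_MIGRATE_NON_SHARED_INC, and --xml/--persistent-xml supply the
# pre-snapshot definition saved in step 2
virsh migrate --live --copy-storage-inc \
      --xml orig.xml --persistent-xml orig.xml \
      test-domain qemu+ssh://dst/system
```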
I'm not sure. What would you mark as dirty? There is no cache in qemu,
because you are probably running with cache='none'. So what you would
have to do is basically create a virtual, memory-based incremental
snapshot on top of the image, and then migrate that...

But as I said before, I'm not that familiar with this part of libvirt,
so maybe others will have a far easier solution.
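[For illustration, the QMP command Daniel mentions can be issued through virsh's monitor passthrough. The node name below is hypothetical — the real one depends on the domain's block graph and can be found with `query-named-block-nodes`:]

```shell
# Hypothetical sketch: add a dirty bitmap to the disk's block node so qemu
# tracks writes from this point on (node name must match the actual graph)
virsh qemu-monitor-command test-domain \
    '{"execute": "block-dirty-bitmap-add",
      "arguments": {"node": "drive-virtio-disk0", "name": "premigrate"}}'
```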
> S pozdravom / Best regards
> Daniel Kucera
>
> 2017-04-28 11:03 GMT+02:00 Martin Kletzander <mkletzan(a)redhat.com>:
> On Tue, Apr 04, 2017 at 12:04:42PM +0200, Daniel Kučera wrote:
>
>> Hi all,
>>
>>
> Hi,
>
> I caught your mail in my Spam folder for some reason, maybe the same
> happened for others. I don't have that deep knowledge of the snapshots,
> but I'm replying so that if someone else has it in Spam and they have
> more insight, they can reply.
>
>
>> I'm using ZFS on Linux block volumes as my VM storage and want to do live
>> migrations between hypervisors.
>>
>> If I create a ZFS snapshot of the used volume on the source host, send
>> it to the destination host (zfs send/recv) and then run live migration
>> with the VIR_MIGRATE_NON_SHARED_DISK flag, the migration works OK.
>>
>> But this procedure copies the whole disk twice which is a huge downside.
>>
>> The best solution would be, if libvirt could send the incremental data
>> since last snapshot itself but this feature is not there (AFAIK).
>>
>> So I am thinking about a workaround:
>> 1. Create a snapshot using: "virsh snapshot-create --xmlfile snap.xml
>> --disk-only --no-metadata test-domain", which will start writing
>> snapshot data into a temporary qcow2 file:
>>
>> <domainsnapshot>
>> <disks>
>> <disk name='/dev/zstore/test-volume'>
>> <source file='/tmp/test-volume.qcow2'/>
>> </disk>
>> </disks>
>> </domainsnapshot>
>>
>>
>> 2. Create snapshot of backing ZFS volume and send it to destination host.
>> 3. Migrate the domain
>>
>> Currently, in step 3 I need to create empty qcow snapshot file on the
>> destination host, otherwise the migration fails with: "Operation not
>> supported: pre-creation of storage targets for incremental storage
>> migration is not supported"
>>
>>
> So apart from this it does exactly what you want, right?
>
>> My question is: Is it possible to do live migration with a blockcommit
>> operation? If not, would it be hard to implement?
>>
>> I imagine starting the migration with some special parameter (e.g.
>> VIR_MIGRATE_NON_SHARED_INC_COMMIT) which would migrate only the data
>> from the qcow snapshot to the destination storage.
>>
>> This would ensure the disk consistency and avoid useless whole disk copy.
>>
>>
> What would be the difference compared to what you are doing now (plus a
> block commit)? VIR_MIGRATE_NON_SHARED_INC should already migrate only
> the topmost disk of the whole chain.
>
> Of course a lot can be added to the project, but I, for one, am not *that*
> welcoming with regards to things that are meant for a pretty narrow use
> case while at the same time being doable already using libvirt's APIs.
>
>> Or do you have any other idea how to solve this?
>>
>> BR.
>> Daniel Kucera.
>>
>
>> _______________________________________________
>> libvirt-users mailing list
>> libvirt-users(a)redhat.com
>>
>> https://www.redhat.com/mailman/listinfo/libvirt-users
>>
>