On 09/07/2010 09:01 AM, Alexander Graf wrote:
> I'm torn here too. Why not expose both? Have a qemu-internal daemon
> available that gets a sleep time as a parameter, and an external "pull
> sectors" command. We'll see which one is more useful, but I don't think
> it's too much code to justify having only one of the two. And the
> internal daemon could be started using a command line parameter, which
> helps non-managed users.
Let me turn it around and ask: how would libvirt do this? Would they
just use a sleep-time parameter and make use of our command, or would
they do something more clever and attempt to detect system idle? Could
we just do that in qemu? Or would they punt to the end user?
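
To make that concrete, here's a rough sketch (in Python) of what the
management tool's side of the external-command approach could look like.
The command name "stream_sector", its arguments, and the socket path are
made up purely for illustration; nothing like this exists in qemu today:

import json, socket, time

def qmp(sock, rfile, cmd, **args):
    # send one command, return the first non-event reply
    sock.sendall(json.dumps({"execute": cmd, "arguments": args}).encode())
    while True:
        msg = json.loads(rfile.readline())
        if "return" in msg or "error" in msg:
            return msg

s = socket.socket(socket.AF_UNIX)
s.connect("/var/run/qemu/vm0.qmp")        # made-up socket path
rfile = s.makefile()
rfile.readline()                          # discard the QMP greeting
qmp(s, rfile, "qmp_capabilities")

offset, sleep_time = 0, 0.01              # the policy knob lives in the tool
while True:
    # "stream_sector" and its arguments are placeholders, not real QMP
    rsp = qmp(s, rfile, "stream_sector", device="ide0-hd0", offset=offset)
    if "error" in rsp or rsp["return"].get("eof"):
        break
    offset = rsp["return"]["next_offset"]
    time.sleep(sleep_time)                # or back off while the guest is busy

The point being that the sleep/idle policy would live entirely in the
tool, and qemu itself only needs the dumb per-cluster command.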
>> A related topic is block migration. Today we support pre-copy
>> migration, which means we transfer the block device and then do a live
>> migration. Another approach is to do the live migration first, run a
>> block server on the source, and use image streaming on the destination
>> to move the device.
>>
>> With QED, to implement this one would:
>>
>> 1) launch qemu-nbd on the source while the guest is running
>> 2) create a qed file on the destination with copy-on-read enabled and a
>>    backing file using nbd: to point to the source qemu-nbd
>> 3) run qemu -incoming on the destination with the qed file
>> 4) execute the migration
>> 5) when migration completes, begin streaming on the destination to
>>    complete the copy
>> 6) when the streaming is complete, shut down the qemu-nbd instance on
>>    the source
>>
>> This is a bit involved, and we could potentially automate some of it in
>> qemu by launching qemu-nbd ourselves and providing commands to drive
>> the individual steps. Again though, I think the question is what type
>> of interfaces libvirt would prefer: low-level interfaces plus recipes
>> for how to do high-level things, or higher-level interfaces?
>>
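To give a feel for how involved that recipe is for a management tool,
here's a rough sketch of steps 1-6 scripted in Python. The host names,
ports, image paths, the qed creation options (copy_on_read, an nbd:
backing file) and the "stream" command in step 5 are all assumptions
about interfaces that either aren't settled or don't exist yet:

# assumes the source guest is already running with
# "-qmp tcp:0:4445,server,nowait" and its disk at IMG
import json, socket, subprocess

SRC, DST = "src-host", "dst-host"
NBD_PORT, MIG_PORT, QMP_PORT = 10809, 4444, 4445
IMG = "/images/guest.qed"

def qmp(host, cmd, **args):
    # minimal one-shot QMP client; skips async events for brevity
    s = socket.create_connection((host, QMP_PORT))
    f = s.makefile()
    f.readline()                          # discard the QMP greeting
    for c, a in (("qmp_capabilities", {}), (cmd, args)):
        s.sendall(json.dumps({"execute": c, "arguments": a}).encode())
        while True:
            msg = json.loads(f.readline())
            if "return" in msg or "error" in msg:
                break
    s.close()
    return msg

# 1) export the running guest's image read-only from the source
subprocess.Popen(["ssh", SRC, "qemu-nbd", "-r", "-t",
                  "-p", str(NBD_PORT), IMG])

# 2) qed file on the destination, copy-on-read, backed by the nbd export
#    (option syntax is assumed, not final)
subprocess.check_call(["ssh", DST, "qemu-img", "create", "-f", "qed", "-o",
                       "backing_file=nbd:%s:%d,copy_on_read=on" % (SRC, NBD_PORT),
                       IMG])

# 3) start the incoming qemu on the destination (most options elided)
subprocess.Popen(["ssh", DST, "qemu", "-drive", "file=" + IMG,
                  "-incoming", "tcp:0:%d" % MIG_PORT,
                  "-qmp", "tcp:0:%d,server,nowait" % QMP_PORT])

# 4) kick off the live migration from the source's monitor
#    (a real tool would poll query-migrate until it completes)
qmp(SRC, "migrate", uri="tcp:%s:%d" % (DST, MIG_PORT))

# 5) once migration completes, start streaming on the destination
#    ("stream" is the command this thread is still designing)
qmp(DST, "stream", device="ide0-hd0")

# 6) when streaming finishes, tear down qemu-nbd on the source
#    (crude; a real tool would track the pid)
subprocess.check_call(["ssh", SRC, "pkill", "-f", "qemu-nbd"])

Whether something like this gets wrapped up inside qemu or left to the
management stack is exactly the low-level-vs-high-level question above.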
> Is there anything keeping us from making the QMP socket multiplexable?
> I was thinking of something like:
>
> { command = "nbd_server" ; block = "qemu_block_name" }
> { result = "done" }
> <qmp socket turns into nbd socket>
>
> This way we don't require yet another port, don't have to care about
> conflicts, and get internal qemu block names for free.
Possibly, but something that complicates life here is that an nbd
session would run source -> destination, while there's no QMP session
between source and destination. Instead, there's a session from source
-> management node and from destination -> management node, so you'd
have to proxy nbd traffic between the two. That gets ugly quickly.
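
Just to spell out what that proxying means: with the multiplexed-QMP
idea the management node is the only party connected to both ends, so
after issuing the nbd_server command on each side it would have to
shuttle every byte of the image itself, along the lines of this
(hypothetical) relay:

import select

def relay(src_sock, dst_sock):
    # shuttle bytes both ways until either side closes; both sockets are
    # assumed to be past the QMP handshake and already speaking nbd
    while True:
        ready, _, _ = select.select([src_sock, dst_sock], [], [])
        for s in ready:
            data = s.recv(65536)
            if not data:
                return
            (dst_sock if s is src_sock else src_sock).sendall(data)

The image crosses the wire twice and the management node sits in the
data path for the whole copy, which is the ugliness I mean.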
Regards,
Anthony Liguori
> Alex