On 08.02.2018 11:46, Kashyap Chamarthy wrote:
> On Wed, Feb 07, 2018 at 11:26:14PM +0100, David Hildenbrand wrote:
>> On 07.02.2018 16:31, Kashyap Chamarthy wrote:
>
> [...]
>
>> Sounds like a similar problem as in
>>
>> https://bugzilla.kernel.org/show_bug.cgi?id=198621
>>
>> In short: there is no (live) migration support for nested VMX yet. So as
>> soon as your guest is using VMX itself ("nVMX"), this is not expected to
>> work.
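
(Side note: whether nVMX is in play at all is easy to check; a minimal
sketch, assuming an Intel L0 host -- the module parameter and CPU flag
below are the standard KVM/Linux ones:)

    # On the L0 host: "Y" (or "1") means nested VMX is enabled
    $ cat /sys/module/kvm_intel/parameters/nested

    # Inside the L1 guest: a non-zero count means VMX is exposed,
    # i.e. the L1 can itself act as a hypervisor
    $ grep -c vmx /proc/cpuinfo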

> Actually, live migration with nVMX _does_ work insofar as you have
> _identical_ CPUs on both source and destination -- i.e. use QEMU's
> '-cpu host' for the L1 guests. At least that's been the case in my
> experience. FWIW, I frequently use that setup in my test environments.

You're mixing use cases. While you talk about migrating an L2, this is
about migrating an L1 that is running an L2.

Migrating an L2 is expected to work just like migrating an L1 that is
not running an L2 (with the usual trouble around CPU models, of course,
but upper layers should check and handle that).
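
(On the CPU-model point: the usual way to guarantee identical vCPUs on
both ends is host passthrough, as mentioned above. A minimal sketch --
the flag and the XML element are the standard QEMU/libvirt ones, the
rest of the domain config is omitted:)

    # QEMU command line: pass the host CPU through to the L1 unchanged
    -cpu host

    # equivalent libvirt domain XML for the L1 guests:
    <cpu mode='host-passthrough'/>

(Note this only helps when source and destination hosts really do have
identical CPUs; otherwise a common named CPU model is the safer choice.)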

> Just to be quadruple sure, I did the test: migrate an L2 guest (with
> non-shared storage), and it worked just fine. (No 'oops'es, no stack
> traces, no "kernel BUG" in `dmesg` or on the serial consoles of the
> L1s. And I can log in to the L2 guest on the destination L1 just
> fine.)
>
> Once you have password-less SSH between source and destination and a
> bit of libvirt config set up, the migrate command is as follows:
>     $ virsh migrate --verbose --copy-storage-all \
>         --live cvm1 qemu+tcp://root@f26-vm2/system
>     Migration: [100 %]
>
>     $ echo $?
>     0
>
> Full details:
> https://kashyapc.fedorapeople.org/virt/Migrate-a-nested-guest-08Feb2018.txt
>
> (At the end of the document above, I also posted the libvirt config and
> the version details across L0, L1 and L2. So this is a fully repeatable
> test.)
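
(For completeness, a quick way to double-check such a migration from the
source side -- domain name and destination URI taken from the command
above:)

    # The domain should be gone from the source L1 ...
    $ virsh list --all

    # ... and show up as "running" on the destination L1
    $ virsh -c qemu+tcp://root@f26-vm2/system list
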
--
Thanks,
David / dhildenb