Hi all,
We've been using libvirt/KVM ever since it was included in Ubuntu 10.04
LTS. At first, live block migration wasn't possible. We got used to
that.
Now, since Ubuntu 12.04 LTS, live block migration is possible. And
we're excited.
There's just one hitch:
Many of our migrations end with the guest successfully migrated but no
longer running. Sometimes it still runs but can't access the network.
Sometimes it just freezes completely, without even showing a kernel
panic on the console. And sometimes the network still works, but the
guest crashes when ICMP messages are sent from within it. Every once in
a while, the guest simply keeps running without any problems.
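In case it matters, we kick off the migrations roughly like this (the
guest and destination names are just placeholders, and I'm quoting the
flags from memory):

    virsh migrate --live --copy-storage-all guestname \
        qemu+ssh://destination-node/system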
Most of our guests use virtio for both networking and storage. The guest
OSes are mostly Ubuntu 12.04, but there is also one 11.04 and several
10.04s. A few older ones are still around as well. The older ones don't
support virtio, though, so for those we use the emulated Intel e1000
network card and IDE storage. The guests are connected to a common
bridge on the node, which provides access to the outside network. There
is no NAT.
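For reference, the relevant parts of a typical virtio guest definition
look roughly like this (bridge name and image path are just examples):

    <disk type='file' device='disk'>
      <driver name='qemu' type='raw'/>
      <source file='/var/lib/libvirt/images/guest.img'/>
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='bridge'>
      <source bridge='br0'/>
      <model type='virtio'/>
    </interface>

The older guests use bus='ide' and model type='e1000' instead.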
Oh, and I read that differing CPU capabilities (flags, etc.) can cause
live migrations to fail. So I used the recommended procedure to
determine the least common denominator of the CPUs on our nodes and
configured all the guests to use only those CPU capabilities. After
restarting the guests, I verified that the guest kernel correctly
identified the configured CPU type and flags.
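Concretely, what I did was roughly the following (this is the
cpu-baseline approach; the file name, model, and feature below are just
examples, not our actual values): I took the <cpu> element from the
"virsh capabilities" output of every node, collected them into one
file, and computed the baseline:

    virsh cpu-baseline all-host-cpus.xml

The resulting definition then went into every guest's XML, along these
lines:

    <cpu match='exact'>
      <model>Westmere</model>
      <feature policy='require' name='ssse3'/>
    </cpu>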
Does anyone have an idea how I can go about debugging the issue? Or has
anyone experienced this issue and found a solution?
Cheers,
Sebastian
--
*Sebastian J. Bronner*
Administrator
D9T GmbH - Magirusstr. 39/1 - D-89077 Ulm
Tel: +49 731 1411 696-0 - Fax: +49 731 3799-220
Geschäftsführer: Daniel Kraft
Sitz und Register: Ulm, HRB 722416
Ust.IdNr: DE 260484638
http://d9t.de - D9T High Performance Hosting
info(a)d9t.de