[libvirt-users] Kernel unresponsive after booting 700+ VMs on a single host

For a research project we are trying to boot a very large number of tiny, custom-built VMs on KVM/Ubuntu. The maximum VM count achieved was 1000, but with substantial slowness and eventual kernel failure, while CPU and memory loads were nowhere near their limits. Where is the likely bottleneck? Any solutions, workarounds, hacks, or dirty tricks?

A few more details here (tumbleweed question), with the possibility of an upvote: http://stackoverflow.com/questions/12243129/kvm-qemu-maximum-vm-count-limit

Any tips would be much appreciated!

Best regards,
Alfred Bratterud
Assistant Professor
Dept. of Computer Science, Oslo and Akershus University College of Applied Sciences
P: (+47) 2245 3263
M: (+47) 4102 0222

On 09/10/2012 07:51 AM, Alfred Bratterud wrote:
> For a research project we are trying to boot a very large number of tiny, custom-built VMs on KVM/Ubuntu. The maximum VM count achieved was 1000, but with substantial slowness and eventual kernel failure, while CPU and memory loads were nowhere near their limits. Where is the likely bottleneck? Any solutions, workarounds, hacks, or dirty tricks?
Are you using cgroups? There have been some known bottlenecks in the kernel cgroup code, where it scales miserably; and since libvirt uses a separate cgroup per VM by default when cgroups are enabled, that might explain part of the problem. Other than that, if you can profile the slowdowns, I'm sure people would be interested in the results.

--
Eric Blake  eblake@redhat.com  +1-919-301-3266
Libvirt virtualization library http://libvirt.org
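[One way to test whether per-VM cgroups are the culprit is to disable libvirt's cgroup use entirely in the QEMU driver configuration. A minimal sketch, assuming the stock Ubuntu path /etc/libvirt/qemu.conf and libvirt's documented cgroup_controllers setting; libvirtd must be restarted afterwards, and exact behavior can vary by libvirt version:]

```
# /etc/libvirt/qemu.conf -- excerpt (assumed default location on Ubuntu)
# An empty list tells the libvirt QEMU driver not to use any cgroup
# controllers, so no per-VM cgroup hierarchy is created at guest start.
cgroup_controllers = [ ]
```

[With this set, libvirt stops creating a cgroup per guest, which removes that per-VM kernel cost at the price of losing libvirt's resource-control features (CPU shares, memory caps, device ACLs). If 700+ guests then boot without the slowdown, the cgroup code is the likely bottleneck.]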