On Tue, Sep 27, 2011 at 08:10:21PM +0200, Reeted wrote:
> I am reposting this, this time also including the libvirt mailing list.
> Info on my libvirt: it's the version in Ubuntu 11.04 Natty, which is
> 0.8.8-1ubuntu6.5. I didn't recompile this one, while the kernel and
> qemu-kvm are vanilla and compiled by hand as described below.
> My original message follows:
> This is really strange.
> I just installed a new host with kernel 3.0.3 and qemu-kvm 0.14.1
> compiled by me.
> I have created the first VM.
> It is on LVM, with virtio, etc. If I run it directly from the bash
> console, it boots in 8 seconds (it's a bare Ubuntu with no
> graphics), while if I boot it under virsh (libvirt) it takes
> 20-22 seconds. This is the time from after GRUB to the login prompt,
> or from after GRUB to the SSH server being up.
> I was almost able to replicate the whole libvirt command line on the
> bash console, and it still goes almost 3x faster when launched from
> bash than with virsh start vmname. The part I wasn't able to
> replicate is the -netdev part, because I still haven't understood
> its semantics.
-netdev is just an alternative way of setting up networking that
avoids QEMU's nasty VLAN concept. Using -netdev allows QEMU to
use more efficient codepaths for networking, which should improve
the network performance.
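As a rough sketch of the difference (the IDs and interface names here are only placeholders), the legacy syntax

  -net nic,model=virtio -net tap,ifname=tap0,script=no,downscript=no

joins the NIC and the tap backend through QEMU's internal "vlan" hub, while the -netdev form pairs them directly:

  -netdev tap,ifname=tap0,script=no,downscript=no,id=net0
  -device virtio-net-pci,netdev=net0

The full equivalent of your libvirt configuration is spelled out further down.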
> This is my bash commandline:
> /opt/qemu-kvm-0.14.1/bin/qemu-system-x86_64 -M pc-0.14 -enable-kvm
> -m 2002 -smp 2,sockets=2,cores=1,threads=1 -name vmname1-1 -uuid
> ee75e28a-3bf3-78d9-3cba-65aa63973380 -nodefconfig -nodefaults
> -chardev
> socket,id=charmonitor,path=/var/lib/libvirt/qemu/vmname1-1.monitor,server,nowait
> -mon chardev=charmonitor,id=monitor,mode=readline -rtc base=utc
> -boot order=dc,menu=on -drive
> file=/dev/mapper/vgPtpVM-lvVM_Vmname1_d1,if=none,id=drive-virtio-disk0,boot=on,format=raw,cache=none,aio=native
> -device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0
> -drive
> if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,cache=none,aio=native
> -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0
> -net nic,model=virtio -net tap,ifname=tap0,script=no,downscript=no
> -usb -vnc 127.0.0.1:0 -vga cirrus -device
> virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
This shows KVM is being requested, but we should validate that KVM is
definitely being activated when under libvirt. You can test this by
doing:
virsh qemu-monitor-command vmname1 'info kvm'
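If KVM really is enabled under libvirt, that should report something like

  kvm support: enabled

(the exact wording can vary a little between QEMU versions). If it says 'disabled' instead, that alone would account for a large slowdown.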
> This was taken from libvirt's command line. The only
> modifications I made to the original libvirt command line (seen
> with ps aux) were:
> - Removed -S
Fine, has no effect on performance.
> - Network was: -netdev tap,fd=17,id=hostnet0,vhost=on,vhostfd=18
> -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:05:36:60,bus=pci.0,addr=0x3
> Has been simplified to: -net nic,model=virtio -net
> tap,ifname=tap0,script=no,downscript=no
> and manual bridging of the tap0 interface.
You could have equivalently used
-netdev tap,ifname=tap0,script=no,downscript=no,id=hostnet0,vhost=on
-device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:05:36:60,bus=pci.0,addr=0x3
That said, I don't expect this has anything to do with the performance,
since booting a guest rarely involves much network I/O unless you're
doing something odd like NFS-root / iSCSI-root.
> At first I thought this could be the fault of VNC: I
> compiled qemu-kvm with no separate VNC thread. I thought that
> libvirt might stay connected to the VNC server at all times, and this
> could have slowed down the whole VM.
> But then I also tried connecting with vncviewer to the KVM machine
> launched directly from bash, and its speed didn't change. So
> no, it doesn't seem to be that.
Yeah, I have never seen VNC be responsible for the kind of slowdown
you describe.
> BTW: does the slowdown from having "no separate vnc thread" only
> take effect when somebody is actually connected to VNC, or is it
> always there?
Probably, but again I don't think it is likely to be relevant here.
> Also, note that the time difference is not visible in dmesg once the
> machine has booted, so it's not a slowdown in detecting devices.
> Devices are always detected within the first 3 seconds according to
> dmesg, and at 3.6 seconds the first ext4 mount begins. It seems to be
> really the OS boot that is slow... it looks like a hard disk
> performance problem.
There are a couple of things that would be different between running the
VM directly, vs via libvirt.
- Security drivers - SELinux/AppArmor
- CGroups
If it was AppArmor causing this slowdown, I don't think you would have
been the first person to complain, so let's ignore that. Which leaves
cgroups as the likely culprit. Do a
grep cgroup /proc/mounts
and if any cgroups are mounted, then for each cgroup mount in turn
(a rough example of the full sequence is sketched after this list),
- Umount the cgroup
- Restart libvirtd
- Test your guest boot performance
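For example, assuming one of the controllers turns out to be mounted at /sys/fs/cgroup/cpu and that libvirtd runs as an ordinary service (on Ubuntu the service is usually called libvirt-bin; both the path and the service name here are just examples, so adjust them to what your system actually reports):

  # see which cgroup controllers are mounted, and where
  grep cgroup /proc/mounts

  # unmount one of the reported mount points (example path)
  umount /sys/fs/cgroup/cpu

  # restart the libvirt daemon so it picks up the change
  service libvirt-bin restart

  # then re-time the guest boot
  virsh start vmname1

Repeat this for each cgroup mount point in turn, timing the guest boot each time.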
Regards,
Daniel
--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|