I am reposting this, this time also including the libvirt mailing list.
Info on my libvirt: it is the version shipped with Ubuntu 11.04 Natty,
0.8.8-1ubuntu6.5. I did not recompile this one, while the kernel and
qemu-kvm are vanilla and compiled by hand as described below.
My original message follows:
This is really strange.
I just installed a new host with kernel 3.0.3 and qemu-kvm 0.14.1,
compiled by myself.
I have created the first VM. It is on LVM, with virtio devices, etc. If
I run it directly from the bash console, it boots in 8 seconds (it's a
bare Ubuntu with no graphics), while if I boot it under virsh (libvirt)
it takes 20-22 seconds. This is the time from after GRUB to the login
prompt, or from after GRUB to the SSH server being up.
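One rough way to script that comparison (the guest IP below is just an
example) is to start a loop like this right after launching the VM:

  time sh -c 'until nc -z 192.168.122.10 22; do sleep 0.5; done'

It overshoots by the GRUB time, but that part is identical in both
cases, so the difference between the two launch methods still shows.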
I was almost able to replicate the whole libvirt command line on the
bash console, and it still runs almost 3x faster when launched from
bash than with virsh start vmname. The part I wasn't able to replicate
is the -netdev part, because I haven't yet understood its semantics
(see the sketch after the list of modifications below).
This is my bash command line:

/opt/qemu-kvm-0.14.1/bin/qemu-system-x86_64 -M pc-0.14 -enable-kvm -m 2002 \
  -smp 2,sockets=2,cores=1,threads=1 -name vmname1-1 \
  -uuid ee75e28a-3bf3-78d9-3cba-65aa63973380 -nodefconfig -nodefaults \
  -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/vmname1-1.monitor,server,nowait \
  -mon chardev=charmonitor,id=monitor,mode=readline \
  -rtc base=utc -boot order=dc,menu=on \
  -drive file=/dev/mapper/vgPtpVM-lvVM_Vmname1_d1,if=none,id=drive-virtio-disk0,boot=on,format=raw,cache=none,aio=native \
  -device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0 \
  -drive if=none,media=cdrom,id=drive-ide0-1-0,readonly=on,format=raw,cache=none,aio=native \
  -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 \
  -net nic,model=virtio -net tap,ifname=tap0,script=no,downscript=no \
  -usb -vnc 127.0.0.1:0 -vga cirrus \
  -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5
This was taken from libvirt's command line. The only modifications I
made to the original libvirt command line (seen with ps aux) were:
- Removed -S
- The network part, which was:
    -netdev tap,fd=17,id=hostnet0,vhost=on,vhostfd=18 \
    -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:05:36:60,bus=pci.0,addr=0x3
  has been simplified to:
    -net nic,model=virtio -net tap,ifname=tap0,script=no,downscript=no
  plus manual bridging of the tap0 interface.
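If I understand the -netdev semantics correctly (untested, so take it
as a sketch), the closest shell equivalent of libvirt's line, without
the pre-opened file descriptors, should be something like:

  ip tuntap add dev tap0 mode tap   # or: tunctl -t tap0
  brctl addif br0 tap0              # br0 = an existing host bridge (example name)
  ip link set tap0 up

with these qemu options instead of the two -net ones:

  -netdev tap,id=hostnet0,ifname=tap0,script=no,downscript=no,vhost=on \
  -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:05:36:60,bus=pci.0,addr=0x3

fd=17 and vhostfd=18 cannot be reproduced verbatim from a shell: they
refer to a tap device and a /dev/vhost-net handle that libvirt opens
itself and passes to the qemu process, hence ifname=... and vhost=on
are used here instead.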
At first I thought this could be the fault of VNC: I compiled qemu-kvm
with no separate VNC thread. I thought that libvirt might be connected
to the VNC server at all times, and that this could slow down the
whole VM.
But then I also tried connecting with vncviewer to the KVM machine
launched directly from bash, and its speed didn't change. So no, it
doesn't seem to be that.
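(For reference, with -vnc 127.0.0.1:0 that test is simply:

  vncviewer 127.0.0.1:0   # display :0 = TCP port 5900

since VNC display numbers map to TCP port 5900 + N.)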
BTW: is the slowdown from "no separate vnc thread" only in effect when
somebody is actually connected to VNC, or always?
Also, note that the time difference is not visible in dmesg once the
machine has booted, so it's not a slowdown in detecting devices:
devices are always detected within the first 3 seconds according to
dmesg, and at 3.6 seconds the first ext4 mount begins. It really seems
to be the OS boot itself that is slow... it looks like a hard disk
performance problem.
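A quick way to test the disk theory would be to compare raw read
throughput from the virtio disk inside the guest under both launch
methods, for example:

  dd if=/dev/vda of=/dev/null bs=1M count=512 iflag=direct

(iflag=direct bypasses the guest page cache; /dev/vda is the name
virtio-blk disks normally get in the guest). If the virsh-started VM
shows clearly lower throughput, that would point at the I/O path.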
Thank you
R.