Dear Jatin,
Maybe it’s a good idea to implement Spice first:
<video>
  <model type='qxl' ram='65536' vram='65536' heads='1'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
</video>
<channel type='spicevmc'>
  <target type='virtio' name='com.redhat.spice.0'/>
  <address type='virtio-serial' controller='0' bus='0' port='1'/>
</channel>
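You also need a Spice graphics element in the domain XML; a minimal sketch (autoport is an assumption, adjust the listen address and port settings to your setup):
<graphics type='spice' autoport='yes'/>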
Spice should be installed on the host.
Do you use virtio?
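If the disk is still on the IDE bus, switching it to virtio usually makes a big difference for I/O. A minimal sketch of a virtio disk definition (the image path and format are placeholders for your own):
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='none'/>
  <source file='/var/lib/libvirt/images/guest.qcow2'/>
  <target dev='vda' bus='virtio'/>
</disk>
The RHEL guest kernel already ships the virtio_blk driver, so nothing extra is needed inside the guest.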
Greetings,
Dominique.
From: Jatin Davey [mailto:jashokda@cisco.com]
Sent: Tuesday, 14 April 2015 12:23
To: libvirt-users(a)redhat.com
Subject: [libvirt-users] VM Performance using KVM Vs. VMware ESXi
Hi All
We are currently testing our product using KVM as the hypervisor. We are not using KVM as
a bare-metal hypervisor; we use it on top of a RHEL installation, so RHEL acts as our host
and we deploy guests on this system using KVM. We have all along tested and shipped our
application image for VMware ESXi installations, so this is the first time we are trying
our application image on a KVM hypervisor.
On this front, I have done some tests to measure our application's response time when
deployed on KVM and to compare it with a VM deployed on VMware ESXi. We have a benchmark
test that loads the application by simulating 100 parallel users logging into the system
and downloading reports. These tests use an HTTP GET query to load the application VM.
In addition, I have taken care to use the same hardware for both tests, one with
RHEL(Host)+KVM and the other with VMware ESXi. All the hardware specifications for both
servers are identical, and the load test is the same for both servers.
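A load of this shape can be approximated with ApacheBench from the httpd-tools package (a sketch, not our actual test harness; the URL is a placeholder):
[root@localhost ~]# ab -c 100 -n 10000 http://<application-vm>/reports
Here -c 100 keeps 100 GET requests in flight concurrently and -n 10000 sets the total number of requests.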
The first observation is that the application's average response time on VMware ESXi is
500 milliseconds, while its average response time when deployed using RHEL(Host)+KVM is
1050 milliseconds. The response time of the application deployed on KVM is roughly twice
that of the deployment on VMware ESXi.
I did a few more tests to find out which subsystem on these servers shows varying metrics.
First, I started with IOzone to find out whether there is any mismatch in the speed with
which data is read from / written to the local disk on the two VMs, and found that the
"Read" speed in the VM deployed using RHEL(Host)+KVM was half that of the VM deployed
using VMware ESXi.
For more on IOzone, please refer to:
http://www.iozone.org/
More specifically, the following IOzone metrics were half of those measured on the server
running VMware ESXi:
Read
Re-read
Reverse-Read
Stride Read
Pread
Note: I ran the IOzone tests inside the VMs on both servers.
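For reference, a representative IOzone invocation (a sketch, not the exact command I used; the file and record sizes are placeholders):
[root@localhost ~]# iozone -a -s 1g -r 64k -f /tmp/iozone.tmp
The -a flag selects the automatic test set, which covers the read, re-read, reverse-read and stride-read cases listed above; individual tests can also be chosen with -i.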
The second observation was the output of the "top" command. I could see that the VM
deployed on RHEL(Host)+KVM was showing higher numbers than the VM deployed on VMware ESXi
for the following metrics:
load averages
%sy (system time) for all the logical processors
%si (software interrupts) for all the logical processors
I debugged further to find out which device is causing the extra interrupts and found it
to be "ide0"; see the output from the /proc/interrupts file below. The other interrupt
counts, apart from ide0, are pretty much similar to the VM deployed using VMware ESXi.
************/proc/interrupts *******************
[root@localhost ~]# cat /proc/interrupts
           CPU0     CPU1     CPU2     CPU3     CPU4     CPU5     CPU6     CPU7
  0:     795827        0        0        0        0        0        0        0   IO-APIC-edge   timer
  1:         65        0        0        0        0        0        0        0   IO-APIC-edge   i8042
  6:          2        0        0        0        0        0        0        0   IO-APIC-edge   floppy
  8:          0        0        0        0        0        0        0        0   IO-APIC-edge   rtc
  9:          0        0        0        0        0        0        0        0   IO-APIC-level  acpi
 10:     425785        0        0        0        0        0        0        0   IO-APIC-level  virtio0, eth0
 11:         47        0        0        0        0        0        0        0   IO-APIC-level  uhci_hcd:usb1, HDA Intel
 12:        730        0        0        0        0        0        0        0   IO-APIC-edge   i8042
 14:     188086        0        0        0        0        0        0        0   IO-APIC-edge   ide0
NMI:          0        0        0        0        0        0        0        0
LOC:     795813   795798   795783   795767   795752   795737   795723   795709
ERR:          0
MIS:          0
*********************************************
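The ide0 interrupts suggest the guest disk is attached on the emulated IDE bus. A quick way to confirm this from the host (the domain name is a placeholder):
[root@localhost ~]# virsh dumpxml <domain> | grep "bus="
A disk on the emulated IDE bus typically shows up as <target dev='hda' bus='ide'/>, while a paravirtualized disk would show bus='virtio'.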
Any pointers to improving the response time of the VM on the RHEL(Host)+KVM installation
would be greatly appreciated.
Thanks
Jatin