Hi,
Thank you for your input.
We already tried several tweaks, but without luck. For example, adding
io='native' did not improve the performance at all; it behaved exactly
the same before and after. I've read somewhere that cache='writethrough'
could also help improve the performance, but we cannot use that because
we take live snapshots to back up the machine while it runs. With
caching enabled, we observed that sometimes an external live snapshot
cannot be merged back with blockcommit without shutting down the guest.
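For reference, here is a sketch of the kind of driver line we have been
experimenting with; the image path, format and device names below are
placeholders, not our exact configuration:

    <disk type='file' device='disk'>
      <!-- io='native' is what we tried; adding cache='writethrough' here
           is the variant we decided against because of the snapshot issue -->
      <driver name='qemu' type='qcow2' io='native'/>
      <source file='/var/lib/libvirt/images/guest.qcow2'/>
      <target dev='hda' bus='ide'/>
    </disk>

The backup flow that keeps us from using the cache setting looks roughly
like this (domain, snapshot and target names are just examples):

    # external, disk-only snapshot taken while the guest keeps running
    virsh snapshot-create-as guest1 backup-snap --disk-only --atomic --no-metadata
    # ... back up the now read-only base image ...
    # merge the overlay back into the base image without stopping the guest
    virsh blockcommit guest1 hda --active --pivot --verbose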
Could you please explain what <cpu mode='host-passthrough'/> is supposed
to do to improve the performance?
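Just to make sure we are talking about the same thing: we assume it would
simply be placed in the domain XML like this (we have not tried it yet):

    <!-- expose the host CPU model directly to the guest instead of a
         generic emulated model -->
    <cpu mode='host-passthrough'/>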
Switching from IDE to virtio basically means that the guest then knows
that it runs on virtualized hardware and can do things differently? But
it also requires modifying the guest with specialized drivers that even
influence the boot process. That feels more like a hack than a solution.
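If we understand the suggestion correctly, the change would look roughly
like this in the domain XML (image path and network name are placeholders):

    <disk type='file' device='disk'>
      <driver name='qemu' type='qcow2' io='native'/>
      <source file='/var/lib/libvirt/images/guest.qcow2'/>
      <!-- vda on the virtio bus instead of hda on IDE -->
      <target dev='vda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='default'/>
      <!-- virtio model for the NIC as well, as suggested -->
      <model type='virtio'/>
    </interface>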
We're astonished that the virtualized IO is so much slower. I could
understand a performance penalty of 10% or even 20%, but a drop from
120 Mb/s to 1.4 Mb/s read throughput looks suspicious to all of us. We
would have expected a read throughput of at least 50 Mb/s, which is
about half of what the hardware can do. Please note that we do not see
the host machine peak at 100% CPU or IO (watching top and iotop) while
the guest is doing IO. Is there lock contention or something else going
on? When we run a guest under VirtualBox, for example, we don't see such
an impact. What does VirtualBox do differently to improve virtualized
IO, and could that help libvirt/qemu/kvm?
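If it helps to reproduce, a simple sequential-read comparison inside the
guest and on the host could look like the following (the test file path
is a placeholder; iflag=direct bypasses the page cache so the numbers
reflect actual disk reads):

    # run once inside the guest and once on the host against comparable storage
    dd if=/var/tmp/testfile of=/dev/null bs=1M count=1024 iflag=direct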
On 2017-06-15 04:08, Andrea Bolognani wrote:
On Wed, 2017-06-14 at 15:32 -0300, Thiago Oliveira wrote:
[...]
> I can see other thing, for example, change the hda=IDE to virtio.
I'd say switching the disk from IDE to virtio should be the
very first step - and while you're at it, you might as well
use virtio for the network interface too.
--
Andrea Bolognani / Red Hat / Virtualization