Daniel P. Berrange wrote:
QEMU defaults to allowing the host OS to cache all disk I/O. This has a
couple of problems:
- It is a waste of memory, because the guest already caches I/O ops
- It is unsafe on host OS crash - all unflushed guest I/O will be
  lost, and there are no ordering guarantees, so metadata updates could
  be flushed to disk while the journal updates were not. Say goodbye
  to your filesystem.
- It makes benchmarking more or less impossible / worthless, because
  what the benchmark thinks are disk writes just sit around in memory,
  so guest disk performance appears to exceed host disk performance.
This patch disables host caching for all QEMU guests. NB, Xen has long done
this for both PV & HVM guests - QEMU only gained this ability when -drive was
introduced, and sadly kept the unsafe cache=on default.
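For illustration, the relevant knob is the cache option on -drive; a
command line with caching disabled might look like the following (the image
path is just an example, and note that later QEMU releases spell the
setting cache=none rather than cache=off):

    qemu -drive file=/var/lib/libvirt/images/guest.img,if=virtio,cache=off

With cache=on (the default), writes land in the host page cache first;
with cache=off the file is opened for direct I/O, bypassing the host cache.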
I'm for this in general, but I'm a little worried about the "performance
regression" aspect of it. People are going to upgrade to 0.4.7 (or whatever),
and suddenly find that their KVM guests perform much more slowly. This is
better in the end for their data, but we might hear loud complaints about it.
Might it be a better idea to make the default "cache=off", but provide a toggle
in the domain XML to turn it back to "cache=on" for the people who really want
it and know what they are doing?
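Such a toggle could be a cache attribute on the disk's driver element -
this is only a sketch of possible syntax, not something libvirt accepts
today:

    <disk type='file' device='disk'>
      <driver name='qemu' cache='on'/>
      <source file='/var/lib/libvirt/images/guest.img'/>
      <target dev='vda' bus='virtio'/>
    </disk>

Leaving the attribute out would then give the safe cache=off behaviour,
while people who understand the trade-off could opt back in per disk.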
--
Chris Lalancette