On Wed, Oct 08, 2008 at 12:03:33PM +0100, Daniel P. Berrange wrote:
> QEMU defaults to allowing the host OS to cache all disk I/O. This has
> a couple of problems:
> - It is a waste of memory because the guest already caches I/O ops
> - It is unsafe on host OS crash - all unflushed guest I/O will be
>   lost, and there are no ordering guarantees, so metadata updates could
>   be flushed to disk while the journal updates were not. Say goodbye
>   to your filesystem.
> - It makes benchmarking more or less impossible / worthless because
>   what the benchmark thinks are disk writes just sit around in memory,
>   so guest disk performance appears to exceed host disk performance.
>
> This patch disables caching on all QEMU guests. NB, Xen has long done this
> for both PV & HVM guests - QEMU only gained this ability when -drive was
> introduced, and sadly kept the default to unsafe cache=on settings.
Right!
I think for integrity reasons we should revert that default at the
libvirt level and switch caching to off. I would not be against a
way to reactivate it optionally, assuming we have a clean way to express
it at the XML level (I don't think we have one currently; maybe an
optional cache="on|off" attribute could be added to device/disk/target),
because in some circumstances, like caching of a read-only device
available to multiple domains, it can make sense to keep caching on
the host OS.
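For illustration only - none of this exists in the schema yet, and the
attribute name and placement are just a sketch of the idea above, with a
made-up disk path - a per-disk override might look something like:

    <disk type='file' device='disk'>
      <source file='/var/lib/libvirt/images/demo.img'/>
      <target dev='hda' cache='on'/>
    </disk>

with the attribute simply omitted (or set to "off") to get the safe
default behaviour.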
So I'm fine with the patch going in as-is, but maybe we need
one patch on top to re-enable the cache on an explicit case-by-case
basis.
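Just to make the effect concrete (disk path and interface type made up
for the example), with the patch applied the generated qemu command line
would carry something along the lines of:

    -drive file=/var/lib/libvirt/images/demo.img,if=ide,cache=off

and a per-disk override in the XML would just turn that back into
cache=on for the disks where host caching is wanted.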
Daniel
P.S.: can you try to generate patches with -p to get the contextual
function names? Without them it's harder to review exactly where
things go, especially when there is a line number shift due to other
pending patches, thanks!
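For example (file names purely illustrative), something like:

    diff -up qemu_conf.c.orig qemu_conf.c

puts the enclosing function name in each hunk header, which makes the
patch much easier to read.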
--
Daniel Veillard      | libxml Gnome XML XSLT toolkit  http://xmlsoft.org/
daniel@veillard.com  | Rpmfind RPM search engine  http://rpmfind.net/
http://veillard.com/ | virtualization library  http://libvirt.org/