* Anthony Liguori (anthony@codemonkey.ws) wrote:
> Mark McLoughlin wrote:
> > This is the bit I really don't buy - we're equating qemu caching to IDE
> > write-back caching and saying the risk of corruption is the same in both
> > cases.
>
> Yes.

I'm with Mark here.

> > But doesn't qemu cache data for far, far longer than a typical IDE disk
> > with write-back caching would do? Doesn't that mean you're far, far more
> > likely to see fs corruption with qemu caching?
>
> It caches more data; I don't know how much longer it caches than a
> typical IDE disk. The guest can crash and that won't cause data loss.
> The only thing that will really cause data loss is the host crashing, so
> it's slightly better than write-back caching in that regard.
>
> > Or put it another way, if we fix it by implementing the disabling of
> > write-back caching ... users running a virtual machine will need to run
> > "hdparm -W 0 /dev/sda" where they would never have run it on baremetal?
>
> I don't see it as something needing to be fixed because I don't see that
> the exposure is significantly greater for a VM than for a real machine.
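
To make the comparison concrete, the two possible fixes look roughly like
this (a sketch only; I'm assuming the -drive cache= syntax, and guest.img
is just a placeholder image name):

  # host side: open the image write-through, so data the guest believes
  # is on disk isn't sitting only in the host page cache
  qemu -drive file=guest.img,cache=writethrough ...

  # guest side: the workaround Mark mentions, turning off the virtual
  # drive's write-back cache from inside the guest
  hdparm -W 0 /dev/sda
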
One host crash corrupting all VMs' data? What is the benefit?

Seems like the benefit of caching is only useful when VMs aren't all that
busy. Once the host is heavily committed - the case where it might
benefit most from the extra caching - the host cache will shrink to
essentially nothing. Also, many folks will be running heterogeneous
guests (or at least not template based), so in that case it's really
just double caching (i.e. memory overhead).
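
For the double caching overhead, what I'd actually want is to bypass the
host page cache entirely. If I have the option name right, cache=none
does that with O_DIRECT, e.g. (guest.img again just a placeholder):

  # sketch: O_DIRECT on the image file, so the guest's page cache is the
  # only cache and host memory isn't spent caching the same blocks twice
  qemu -drive file=guest.img,cache=none ...
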
Seems a no-brainer to me, so I must be confused and/or missing something.

thanks,
-chris