On Thu, Aug 8, 2013 at 9:39 AM, Martin Kletzander <mkletzan(a)redhat.com> wrote:
> At first let me explain that libvirt is not ignoring the cache=none.
> This is propagated to qemu as a parameter for its disk. From qemu's
> POV (anyone feel free to correct me if I'm mistaken) this means the file
> is opened with the O_DIRECT flag; and from the open(2) manual, O_DIRECT
> means "Try to minimize cache effects of the I/O to and from this
> file...", which doesn't necessarily mean there is no cache at all.
Thanks for the explanation.
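
For what it's worth, one way to confirm the flag is really in effect should be to look at the disk image's file descriptor under /proc (the PID and fd below are hypothetical; on x86_64 the octal "flags:" value in fdinfo includes 040000 when O_DIRECT is set):

  # ls -l /proc/<qemu-pid>/fd | grep <disk-image>
  # cat /proc/<qemu-pid>/fdinfo/<fd>    # check the "flags:" line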
> But even if it does, this applies to files used as disks, but those
> disks are not the only files the process is using. You can check what
> other files the process has mapped, opened, etc. from the '/proc'
> filesystem or using the 'lsof' utility. All the other files can (and
> probably will) take some cache and there is nothing wrong with that.
In my case there was 4GB of cache.
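
To see what else the process has open, assuming the qemu process PID (hypothetical here), something like:

  # ls -l /proc/<qemu-pid>/fd
  # lsof -p <qemu-pid>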
Just now, I thrashed one instance with many reads/writes on various
devices, tens of GB of data in total. But the cache (on the host) did
not grow beyond 3MB. I'm not yet able to reproduce the problem.
> Are you trying to resolve an issue or asking just out of curiosity?
> Because this is wanted behavior and there should be no need for anyone
> to minimize this.
Once or twice, one of our VMs was OOM-killed because it hit its cgroup
memory limit (which is set to 1.5 * the VM's memory).
Here is an 8GB instance. Libvirt created a cgroup with a 12.3GB memory
limit, which we have already filled to 98%:
[root@dev-cmp08 ~]# cgget -r memory.limit_in_bytes -r memory.usage_in_bytes libvirt/qemu/i-000009fa
libvirt/qemu/i-000009fa:
memory.limit_in_bytes: 13215727616
memory.usage_in_bytes: 12998287360
The ~4GB difference is the cache. That's why I'm so interested in what
is consuming the cache for a VM which should be doing its caching in
the guest only.
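
The cache/rss split should also be visible in the cgroup's memory.stat; assuming cgroup v1 and the same group path as above, something like:

  [root@dev-cmp08 ~]# cgget -r memory.stat libvirt/qemu/i-000009fa

(or the 'cache' and 'rss' lines in /sys/fs/cgroup/memory/libvirt/qemu/i-000009fa/memory.stat) should show how much of the usage is page cache versus anonymous memory.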
Regards,
Brano Zarnovican