On 08/08/2013 05:03 PM, Brano Zarnovican wrote:
> On Thu, Aug 8, 2013 at 9:39 AM, Martin Kletzander
> <mkletzan@redhat.com> wrote:
>> At first let me explain that libvirt is not ignoring the cache=none.
>> This is propagated to qemu as a parameter for its disk. From qemu's
>> POV (anyone feel free to correct me if I'm mistaken) this means the
>> file is opened with the O_DIRECT flag; and from the open(2) manual,
>> O_DIRECT means "Try to minimize cache effects of the I/O to and from
>> this file...", which doesn't necessarily mean there is no cache at all.
>
> Thanks for the explanation.
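If you want to check that yourself, the open flags of the disk's file
descriptor are visible under /proc. Just a sketch; the binary name, the
fd number and the image name are examples and will differ on your host:

  pid=$(pidof qemu-kvm)      # or qemu-system-x86_64, depending on the distro
  ls -l /proc/$pid/fd        # find the fd pointing at the disk image
  cat /proc/$pid/fdinfo/11   # "flags:" is octal; O_DIRECT is 040000 on x86

If the 040000 bit is set in the "flags:" line, the image really is
opened with O_DIRECT.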
>> But even if it does, this applies to files used as disks, but those
>> disks are not the only files the process is using. You can check what
>> other files the process has mapped, opened, etc. from the '/proc'
>> filesystem or using the 'lsof' utility. All the other files can (and
>> probably will) take some cache and there is nothing wrong with that.
> In my case there was 4GB of cache.
>
> Just now, I have thrashed one instance with many reads/writes on
> various devices, tens of GB of data in total. But the cache (on the
> host) did not grow beyond 3MB. I'm not yet able to reproduce the problem.
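When it happens again, it might be worth looking at everything the
process has open or mapped, not just the disks; something along these
lines (again just a sketch, the binary name may differ):

  lsof -p $(pidof qemu-kvm)           # all open files, sockets, pipes, ...
  cat /proc/$(pidof qemu-kvm)/maps    # everything mapped into the process

Any of those files can be backed by page cache that gets accounted to
the machine's cgroup.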
>> Are you trying to resolve an issue or asking just out of curiosity?
>> Because this is wanted behavior and there should be no need for anyone
>> to minimize this.
> Once or twice, one of our VMs was OOM-killed because it reached the
> 1.5 * memory limit for its cgroup.
Oh, please report this to us. This is one of the problems we will,
unfortunately, be dealing with forever, I guess. The limit is just a
"guess" at how much qemu might take, and we set it to make sure the host
is not overwhelmed in case qemu is faulty or compromised. Since it can
never be set exactly, it has already happened that qemu was killed
thanks to cgroups and we had to increase the limit.
I have Cc'd Michal, who might be the right person to ask about any
further increase.
However, this behavior won't change with caches. The kernel knows that
those are data it can discard, so before killing the process it drops
the unneeded caches; only when there is nothing left to drop does it
fall back to killing the process.
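If you want to see whether the group really is bumping into its limit,
the failure counter and the high-water mark of the same cgroup should
tell; a sketch, reusing the group path from your output below:

  cgget -r memory.failcnt -r memory.max_usage_in_bytes libvirt/qemu/i-000009fa

A non-zero failcnt means the limit was hit at least once, which by
itself is harmless as long as reclaim could still free something.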
> Here is an 8GB instance. Libvirt created a cgroup with a 12.3GB memory
> limit, which we have filled to 98%:
The more of it is filled with caches, the better; but if none of it is
cache then, whoa, the limit should be increased.
> [root@dev-cmp08 ~]# cgget -r memory.limit_in_bytes -r memory.usage_in_bytes libvirt/qemu/i-000009fa
> libvirt/qemu/i-000009fa:
> memory.limit_in_bytes: 13215727616
> memory.usage_in_bytes: 12998287360
You can get rid of these problems by setting your own memory limits.
The default limit gets set only if there is no <memtune> setting in the
domain XML:
http://libvirt.org/formatdomain.html#elementsMemoryTuning
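If you do not want to edit the XML by hand, the same hard_limit can be
set with virsh; a sketch, the domain name and the size (in KiB) are just
examples:

  virsh memtune i-000009fa                                  # show current limits
  virsh memtune i-000009fa --hard-limit 13631488 --config   # ~13 GiB, stored in the XML

Once a hard_limit is present in <memtune>, libvirt uses that instead of
its own estimate.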
> The 4G difference is the cache. That's why I'm so interested in what
> is consuming the cache on a VM which should be caching in the guest
> only.
> Regards,
> Brano Zarnovican
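To see what exactly that is, memory.stat in the same group splits the
usage into cache, rss, mapped_file and so on; something like:

  cgget -r memory.stat libvirt/qemu/i-000009fa

The 'cache' line there is the page cache charged to the group, which
covers any file the process reads or writes with buffering, not just
the disk images.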
Hope this helps,
Martin