On Wed, 2015-06-24 at 12:57 -0600, Alex Williamson wrote:
This patch provides scripts for hugepage allocation, as well as a bit
of infrastructure and a common hook config file that I hope may some
day be enabled by default in libvirt. For now, we place the files in
/usr/share and ask users to install the config file and copy or link
the scripts themselves, treating them more as "contrib" scripts.
Two methods of hugepage allocation are provided, static and dynamic.
The static mechanism allocates pages at libvirt daemon startup and
releases them at shutdown. It allows full size, locality, and policy
configuration. For instance, if I want to allocate a set of 2M pages
exclusively on host NUMA node 1, it can do that, along with plenty
more. This is especially useful for 1G hugepages on x86: although
they can now be allocated dynamically, memory fragmentation makes
that impractical once the host has been running for a while. Systems
dedicated to hosting VMs are also likely to prefer static allocation.
Static allocation requires explicit XML entries in the hook config
file to be activated.
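
For reference, a rough sketch of the per-node sysfs interface that a
static allocation script of this sort would drive is below. This is
not the script from this patch; the node number, page size, and page
count are example values only.

#!/usr/bin/env python3
# Illustrative sketch: reserve 2M hugepages on a specific host NUMA
# node via the kernel's per-node sysfs interface, roughly the kind of
# operation a static allocation script performs at daemon startup.
# NODE, PAGE_KB, and COUNT are example values, not patch defaults.

import sys

NODE = 1          # host NUMA node to allocate on (example)
PAGE_KB = 2048    # 2M hugepages
COUNT = 512       # number of pages to reserve (example)

path = ("/sys/devices/system/node/node%d/hugepages/"
        "hugepages-%dkB/nr_hugepages" % (NODE, PAGE_KB))

try:
    with open(path, "w") as f:
        f.write(str(COUNT))
    # Read back: the kernel may provide fewer pages than requested if
    # enough contiguous memory is not available.
    with open(path) as f:
        got = int(f.read())
    if got < COUNT:
        sys.exit("only %d of %d pages allocated on node %d" %
                 (got, COUNT, NODE))
except IOError as e:
    sys.exit("hugepage allocation failed: %s" % e)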
The dynamic method allocates hugepages only around the instantiation
of the VM. This is enabled by adding an entry for the domain in the
config file and configuring the domain normally for hugepages. The
dynamic hugepage script is activated via the QEMU domain prepare
hook; it reads the domain XML and allocates hugepages as necessary.
On domain shutdown, hugepages are freed via the release hook. This
model is more appropriate for systems that are not dedicated VM
hosts and for guests that use hugepage sizes and quantities that are
likely to be allocatable dynamically as the VM is started.
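
To illustrate the hook mechanism only (this is a hand-written sketch,
not the script in the patch): libvirt runs /etc/libvirt/hooks/qemu
with the guest name and operation as arguments and the full domain
XML on stdin for the prepare operation, so the page-counting step
could look something like the following. Unit handling is simplified
here by assuming KiB throughout, and the nodeset question is exactly
the part called out in the self-NAK below.

#!/usr/bin/env python3
# Illustrative sketch of a QEMU hook that reads the domain XML libvirt
# passes on stdin during the "prepare" operation and works out how
# many hugepages the guest needs. libvirt invokes the hook as:
#   /etc/libvirt/hooks/qemu <guest> <operation> <sub-operation> <extra>

import sys
import xml.etree.ElementTree as ET

def pages_needed(domain_xml):
    root = ET.fromstring(domain_xml)
    mem = root.find("memory")
    if mem is None:
        return []
    # Assumption: <memory> and <page size=...> are both in KiB, which
    # is libvirt's default unit; real code must honor the unit attrs.
    mem_kib = int(mem.text)
    reqs = []
    for page in root.findall("./memoryBacking/hugepages/page"):
        size_kib = int(page.get("size"))
        # Round up to whole pages; whether nodeset names guest or host
        # NUMA nodes is the open question noted in the self-NAK.
        count = -(-mem_kib // size_kib)
        reqs.append((size_kib, page.get("nodeset"), count))
    return reqs

if __name__ == "__main__":
    guest, op = sys.argv[1], sys.argv[2]
    if op == "prepare":
        for size_kib, nodeset, count in pages_needed(sys.stdin.read()):
            print("%s: %d x %dkB pages (nodeset %s)" %
                  (guest, count, size_kib, nodeset))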
In addition to the documentation provided within each script, a README
file is provided with overall instructions and summaries of the
individual scripts.
Signed-off-by: Alex Williamson <alex.williamson@redhat.com>
---
I'm going to self-nak this because I think I've misinterpreted the
nodeset on hugepages/page to be host node rather than guest node. I'll
need to re-jig the algorithm. Thanks,
Alex