On Mon, Oct 21, 2019 at 09:21:04PM +0200, Wim Ten Have wrote:
From: Wim ten Have <wim.ten.have(a)oracle.com>
This patch extends guest domain administration by adding a feature that
creates a guest with a NUMA layout, also referred to as vNUMA (Virtual
NUMA).
Errr, that feature already exists. You can create a guest NUMA layout
with this:
<domain>
  <cpu>
    ...
    <numa>
      <cell id='0' cpus='0-3' memory='512000' unit='KiB' discard='yes'/>
      <cell id='1' cpus='4-7' memory='512000' unit='KiB' memAccess='shared'/>
    </numa>
    ...
  </cpu>
</domain>
[snip]
The changes brought by this patch series add a new libvirt domain
element named <vnuma> that allows for dynamic 'host' or 'node'
partitioning of a guest, where libvirt inspects the host capabilities
and renders a best-fit guest XML layout with a host-matching vNUMA
topology.
<domain>
  ..
  <vnuma mode='host|node'
         distribution='contiguous|siblings|round-robin|interleave'>
    <memory unit='KiB'>524288</memory>
    <partition nodeset="1-4,^3" cells="8"/>
  </vnuma>
  ..
</domain>
The content of this <vnuma> element causes libvirt to dynamically
partition the guest domain XML into a 'host' or 'node' NUMA model.
Under <vnuma mode='host' ...> the guest domain is automatically
partitioned according to the "host" capabilities.
Under <vnuma mode='node' ...> the guest domain is partitioned according
to the nodeset and cells attributes of the <partition> subelement.
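Purely for illustration, and as an assumption about what the proposal
would generate rather than anything taken from the patch series: with
mode='node', cells="8" and nodeset="1-4,^3" as above, one would expect
libvirt to emit a conventional <numa> topology plus <numatune> pinning
to host nodes 1, 2 and 4, roughly along these lines (cell list
truncated; cpu ranges and per-cell memory are assumed values):

  <domain>
    ...
    <cpu>
      <numa>
        <cell id='0' cpus='0-1' memory='65536' unit='KiB'/>
        <cell id='1' cpus='2-3' memory='65536' unit='KiB'/>
        ...
        <cell id='7' cpus='14-15' memory='65536' unit='KiB'/>
      </numa>
    </cpu>
    <numatune>
      <memnode cellid='0' mode='strict' nodeset='1'/>
      <memnode cellid='1' mode='strict' nodeset='2'/>
      <memnode cellid='2' mode='strict' nodeset='4'/>
      ...
    </numatune>
    ...
  </domain>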
The optional distribution='type' attribute of <vnuma> indicates how the
guest NUMA cell cpus are distributed (a short sketch of the effect
follows the list). It can take the following values:
- 'contiguous': cpus are enumerated sequentially across the defined
  NUMA cells.
- 'siblings': cpus are distributed across the NUMA cells following the
  host CPU SMT model.
- 'round-robin': cpus are distributed across the NUMA cells following
  the host CPU topology.
- 'interleave': cpus are interleaved one at a time across the NUMA
  cells.
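For a concrete feel of the difference, consider a guest with 8 vCPUs
and two cells (an illustrative assumption about the intended behaviour,
not output taken from the patch series): 'contiguous' would be expected
to fill the cells in order, while 'interleave' would alternate cells
one cpu at a time:

  <!-- distribution='contiguous' (assumed: 8 vCPUs, 2 cells, 512 MiB each) -->
  <numa>
    <cell id='0' cpus='0-3' memory='524288' unit='KiB'/>
    <cell id='1' cpus='4-7' memory='524288' unit='KiB'/>
  </numa>

  <!-- distribution='interleave' (same assumed guest) -->
  <numa>
    <cell id='0' cpus='0,2,4,6' memory='524288' unit='KiB'/>
    <cell id='1' cpus='1,3,5,7' memory='524288' unit='KiB'/>
  </numa>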
The optional <memory> subelement specifies the memory size used to
dimension the guest's <numa> <cell> sizes. If it is not specified, the
<vnuma> memory is taken from the guest's total memory, i.e. the
<domain> <memory> setting.
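To make that fallback concrete (the figures and the even split are
assumptions chosen for illustration, not values from the patch series):
a guest with a 4194304 KiB <domain> <memory> and no <vnuma> <memory>
would be expected to divide that total over its cells, i.e.
4194304 / 8 = 524288 KiB per cell for cells="8":

  <domain>
    <memory unit='KiB'>4194304</memory>
    ...
    <vnuma mode='node' distribution='interleave'>
      <!-- no <memory> subelement: the 4194304 KiB domain total is used,
           giving 4194304 / 8 = 524288 KiB per cell -->
      <partition nodeset="0-1" cells="8"/>
    </vnuma>
    ...
  </domain>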
This seems to be just implementing some specific policies to
automagically configure the NUMA config. This is all already
possible for the mgmt apps to do with the existing XML configs
we expose AFAIK. Libvirt's goal is to /not/ implement specific
policies like this, but instead expose the mechanism for apps
to use to define policies as they see fit.
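For context, a sketch of how a management application can already
express such a layout with the existing mechanisms (the concrete values
are made up for illustration): it inspects the host, e.g. via
virsh capabilities, and emits explicit <cputune>, <cpu> <numa> and
<numatune> elements itself:

  <domain>
    ...
    <vcpu placement='static'>8</vcpu>
    <cputune>
      <vcpupin vcpu='0' cpuset='8'/>
      <vcpupin vcpu='1' cpuset='9'/>
      ...
    </cputune>
    <cpu>
      ...
      <numa>
        <cell id='0' cpus='0-3' memory='524288' unit='KiB'/>
        <cell id='1' cpus='4-7' memory='524288' unit='KiB'/>
      </numa>
    </cpu>
    <numatune>
      <memory mode='strict' nodeset='1-2'/>
      <memnode cellid='0' mode='strict' nodeset='1'/>
      <memnode cellid='1' mode='strict' nodeset='2'/>
    </numatune>
    ...
  </domain>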
Regards,
Daniel
--
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|