On Mon, Mar 04, 2019 at 05:12:40PM +0100, Michal Privoznik wrote:
> On 3/4/19 4:19 PM, Igor Mammedov wrote:
> > Then I'd guess that most VMs end up with the default '-numa node,mem',
> > which by design can produce only fake NUMA without the ability to manage
> > guest RAM on the host side. So such VMs aren't getting performance benefits,
> > or worse, run with a performance regression (due to wrong sched/mm
> > decisions, as the guest kernel assumes the NUMA topology is a valid one).
>
> Specifying NUMA distances in libvirt XML makes it generate the modern cmd line.
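
For reference, the distances config referred to there is the <distances>/<sibling>
element under each <cell> in the domain XML; a minimal sketch, with made-up
node IDs and distance values:

  <cpu>
    <numa>
      <cell id='0' cpus='0-3' memory='1024000' unit='KiB'>
        <distances>
          <sibling id='0' value='10'/>
          <sibling id='1' value='21'/>
        </distances>
      </cell>
      <cell id='1' cpus='4-7' memory='1024000' unit='KiB'>
        <distances>
          <sibling id='0' value='21'/>
          <sibling id='1' value='10'/>
        </distances>
      </cell>
    </numa>
  </cpu>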
AFAIK, specifying any guest NUMA -> host NUMA affinity makes it use the
modern cmd line. E.g. I just modified a plain 8 CPU / 2 GB RAM guest
with this:
  <numatune>
    <memnode cellid='0' mode='strict' nodeset='0'/>
    <memnode cellid='1' mode='strict' nodeset='1'/>
  </numatune>

  <cpu mode='host-model'>
    <numa>
      <cell id='0' cpus='0-3' memory='1024000' unit='KiB'/>
      <cell id='1' cpus='4-7' memory='1024000' unit='KiB'/>
    </numa>
  </cpu>
and I can see libvirt decided to use memdev:

  -object memory-backend-ram,id=ram-node0,size=1048576000,host-nodes=0,policy=bind
  -numa node,nodeid=0,cpus=0-3,memdev=ram-node0
  -object memory-backend-ram,id=ram-node1,size=1048576000,host-nodes=1,policy=bind
  -numa node,nodeid=1,cpus=4-7,memdev=ram-node1
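
To double-check on the host side that the binding took effect, one can
inspect the per-node RSS of the QEMU process and ask QEMU about its
backends. A rough sketch, assuming the numactl package is installed and
using 'demo' as a placeholder guest name:

  # per-NUMA-node memory usage of the QEMU process
  $ numastat -p $(pgrep -f 'guest=demo')

  # QEMU's own view of the memory backends (host-nodes, policy, size)
  $ virsh qemu-monitor-command demo --pretty '{"execute": "query-memdev"}'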
So unless I'm missing something, we aren't suffering from the problem
described by Igor above even today.
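
For contrast, a sketch of the legacy form Igor is describing, which is
what QEMU gets when no host affinity appears in the XML:

  -numa node,nodeid=0,cpus=0-3,mem=1024M
  -numa node,nodeid=1,cpus=4-7,mem=1024M

i.e. plain 'mem=' with no backend object, leaving nothing for the host
side to bind or migrate.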
Regards,
Daniel
--
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|