On Wed, Apr 21, 2021 at 1:09 PM Daniel P. Berrangé <berrange@redhat.com> wrote:
On Wed, Apr 21, 2021 at 12:53:49PM +0200, Roman Mohr wrote:
> Hi,
>
> I have a question regarding enabling L3 cache emulation on domains. Can
> this also be enabled without CPU pinning, or does it need CPU pinning so
> that the emulated L3 caches match the host CPUs the guest is pinned to?

I presume you're referring to

  <cpu>
    <cache level='3' mode='emulate|passthrough|none'/>
  </cpu>

There is no hard restriction placed on usage of these modes by QEMU.

Conceptually though, you only want to use "passthrough" mode if you
have configured the sockets/cores/threads topology to match the host
CPUs. In turn, you only ever want to set sockets/cores/threads to
match the host if you have done CPU pinning such that the guest
topology actually matches the host CPUs it has been pinned to.

As a rule of thumb

 - If letting CPUs float

     -> Always use sockets=1, cores=num-vCPUs, threads=1
     -> cache==emulate
     -> Always use 1 guest NUMA node (ie the default)
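For illustration, a minimal domain XML fragment following that floating-CPU
rule of thumb could look like the sketch below. The vCPU count of 4 and the
host-model CPU mode are illustrative assumptions, not something from this
thread:

  <vcpu>4</vcpu>
  <cpu mode='host-model'>
    <!-- one socket, one core per vCPU, no SMT exposed to the guest -->
    <topology sockets='1' cores='4' threads='1'/>
    <!-- fake L3 cache data provided by the hypervisor -->
    <cache level='3' mode='emulate'/>
  </cpu>

With no <numa> element the guest gets a single NUMA node by default.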


Is `emulate` also the default in libvirt? If not, do you see any reason, e.g. with migrations in mind, not to always set it when no CPU pinning is done?
 

 - If strictly pinning CPUs 1:1

     -> Use sockets=N, cores=M, threads=P to match the topology
        of the host CPUs that have been pinned to
     -> cache==passthrough
     -> Configure virtual NUMA nodes if the CPU pinning or guest
        RAM spans multiple host NUMA nodes.
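A corresponding sketch for the strictly pinned case, again with made-up
numbers (8 vCPUs pinned 1:1 onto one host socket with 4 cores and 2 threads
per core; the host CPU numbers in cpuset are purely illustrative and depend
on the host's thread-sibling layout):

  <vcpu placement='static'>8</vcpu>
  <cputune>
    <!-- pin guest thread siblings onto host thread siblings -->
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='8'/>
    <vcpupin vcpu='2' cpuset='1'/>
    <vcpupin vcpu='3' cpuset='9'/>
    <vcpupin vcpu='4' cpuset='2'/>
    <vcpupin vcpu='5' cpuset='10'/>
    <vcpupin vcpu='6' cpuset='3'/>
    <vcpupin vcpu='7' cpuset='11'/>
  </cputune>
  <cpu mode='host-passthrough'>
    <!-- topology mirrors the pinned host CPUs -->
    <topology sockets='1' cores='4' threads='2'/>
    <!-- pass the real host cache data through to the guest -->
    <cache mode='passthrough'/>
    <numa>
      <cell id='0' cpus='0-7' memory='8' unit='GiB'/>
    </numa>
  </cpu>

A single <numa> cell is enough here because all pinned host CPUs sit on one
host NUMA node; if the pinning or the guest RAM crossed host nodes you would
add one <cell> per host node and place guest memory with <numatune>.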



Regards,
Daniel
--
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|