Hi Daniel,

Thanks a lot for the quick and detailed explanation. Please see my follow-up query below.

>>In normal usage, the guest vCPUs will be floating arbitrarily across any
>>host physical CPUs.  So trying to match host / guest topology is not only
>>useless, it might actually degrade your performance - eg if you give the
>>guest 1 socket, 1 core and 2 threads, but the vCPUs get scheduled on different
>>host sockets, the guest OS will make very bad decisions.

The above is true only if the host has multiple sockets. As long as the host has a single socket, there will not be any performance degradation, right?

>>If you are willing, however, to assign dedicated host CPUs to each guest
>>CPU, then you can try to match the host + guest topology. That will improve
>>performance, since the guest CPUs will be fixed to specific host CPUs. This
>>isn't suitable as a default config though, hence libvirt/QEMU's default
>>behaviour of using sockets for all vCPUs.
I will try the advanced configuration to match the guest topology to the host. I am essentially looking for a configuration that launches qemu with the "-cpu host" option.
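
If I understand correctly, setting the CPU mode to host-passthrough in the
guest XML is what makes libvirt launch qemu with "-cpu host", e.g. (a minimal
sketch; please correct me if I have this wrong):

    <!-- maps to qemu's "-cpu host"; the guest sees the host CPU model directly -->
    <cpu mode='host-passthrough'/>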

On 5 June 2017 at 21:10, Daniel P. Berrange <berrange@redhat.com> wrote:
On Mon, Jun 05, 2017 at 08:40:19PM +0530, girish kumar wrote:
> Hi All,
>
> I am new here; please warn me if I am not following the proper etiquette of
> this mailing list.
>
> I am racking my brain over why libvirt defines multiple CPU sockets when
> I increase the vCPU count. Whenever I increase the vCPU count in the
> libvirt guest XML, it increases the number of CPU sockets in the qemu
> instance.
>
> " -smp 4,sockets=4,cores=1,threads=1 " instead " -smp
> 4,sockets=1,cores=4,threads=1"
>
> Doesn't this lower the performance of the guest, since the host and guest
> architectures differ in that case?
>
> Also, please suggest a guest configuration that will take advantage of the
> host CPU architecture.

In normal usage, the guest vCPUs will be floating arbitrarily across any
host physical CPUs.  So trying to match host / guest topology is not only
useless, it might actually degrade your performance - e.g. if you give the
guest 1 socket, 1 core and 2 threads, but the vCPUs get scheduled on different
host sockets, the guest OS will make very bad decisions.

By defaulting to giving the guest only sockets, you get fixed, predictable
behaviour from the guest OS, as the host OS moves vCPU threads around.

If you are willing, however, to assign dedicated host CPUs to each guest
CPU, then you can try to match the host + guest topology. That will improve
performance, since the guest CPUs will be fixed to specific host CPUs. This
isn't suitable as a default config though, hence libvirt/QEMU's default
behaviour of using sockets for all vCPUs.
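
For example, something along these lines in the guest XML (illustrative
values, assuming a single-socket host with 4 physical CPUs; adjust the
cpuset and topology values to the real machine):

    <vcpu placement='static'>4</vcpu>
    <cputune>
      <!-- pin each vCPU to a dedicated host pCPU -->
      <vcpupin vcpu='0' cpuset='0'/>
      <vcpupin vcpu='1' cpuset='1'/>
      <vcpupin vcpu='2' cpuset='2'/>
      <vcpupin vcpu='3' cpuset='3'/>
    </cputune>
    <cpu>
      <!-- expose a topology that matches the host: 1 socket, 4 cores -->
      <topology sockets='1' cores='4' threads='1'/>
    </cpu>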

Regards,
Daniel
--
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|



--
Regards,
Girish