
On Fri, Oct 13, 2017 at 04:58:23PM -0400, Waiman Long wrote:
On 10/13/2017 03:01 PM, Eduardo Habkost wrote:
On Wed, Oct 11, 2017 at 04:19:38PM -0400, Waiman Long wrote:
On 10/10/2017 11:50 AM, Eduardo Habkost wrote:
> Yes. Another possibility is to enable it when there is >1 NUMA node
> in the guest. We generally don't do this kind of magic, but higher
> layers (oVirt/OpenStack) do.

Can't the guest make this decision, instead of the host?

By guest, do you mean the guest OS itself or the admin of the guest VM?

It could be either. But even if action is required from the guest admin to get better performance in some cases, I'd argue that the default behavior of a Linux guest shouldn't cause a performance regression if the host stops hiding a feature in CPUID.

On Tue, Oct 10, 2017 at 02:07:25PM -0400, Waiman Long wrote:
I am thinking about maybe adding a kernel boot command line option like "unfair_pvspinlock_cpu_threshold=4" which will instruct the OS to use an unfair spinlock if the number of CPUs is 4 or less, for example. The default value of 0 will keep the same behavior as today. Please let me know what you guys think about that.

If that's implemented, can't Linux choose a reasonable default for unfair_pvspinlock_cpu_threshold that won't require the admin to configure it manually in most cases?

It is hard to have a fixed value, as it depends on the CPUs being used as well as the kind of workloads being run. Besides, using unfair locks has the undesirable side effect of being subject to lock starvation under certain circumstances, so we may not want it to be turned on by default. Customers have to take their own risk if they want that.

On 10/10/2017 03:41 PM, Eduardo Habkost wrote:
Probably I am not seeing all the variables involved, so pardon my confusion. Would unfair_pvspinlock_cpu_threshold > num_cpus just disable usage of kvm_pv_unhalt, or make the guest choose a completely different spinlock implementation?
What I am proposing is that if num_cpus <= unfair_pvspinlock_cpu_threshold, the unfair spinlock will be used even if kvm_pv_unhalt is set.
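For concreteness, here is a minimal sketch of what the proposed knob could look like on the guest side. The parameter is only a proposal at this point, and the pv_use_unfair_lock() helper and its call site are assumptions made for illustration, not existing kernel code:

/*
 * Sketch only: "unfair_pvspinlock_cpu_threshold" is the parameter
 * proposed above, not an existing upstream option, and the
 * pv_use_unfair_lock() helper is assumed for illustration.
 */
#include <linux/init.h>
#include <linux/kernel.h>
#include <linux/cpumask.h>

static unsigned int unfair_threshold;   /* 0 = keep today's behavior */

static int __init parse_unfair_threshold(char *arg)
{
        if (!arg)
                return -EINVAL;
        return kstrtouint(arg, 10, &unfair_threshold);
}
early_param("unfair_pvspinlock_cpu_threshold", parse_unfair_threshold);

/*
 * With num_cpus <= threshold, stay on the unfair (test-and-set) lock
 * even when the host advertises KVM_FEATURE_PV_UNHALT; otherwise fall
 * through to the existing pvqspinlock selection.
 */
static bool pv_use_unfair_lock(void)
{
        return unfair_threshold && num_possible_cpus() <= unfair_threshold;
}

kvm_spinlock_init() could then simply return early when pv_use_unfair_lock() is true, which matches the semantics described above: the guest keeps the test-and-set fallback even though the host advertises kvm_pv_unhalt.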
Is the current default behavior of Linux guests when kvm_pv_unhalt is unavailable a good default? If using kvm_pv_unhalt is not always a good idea, why do Linux guests default to eagerly trying to use it only because the host says it's available?
For kernels with CONFIG_PARAVIRT_SPINLOCKS, the current default is to use the pvqspinlock if kvm_pv_unhalt is enabled, but the unfair spinlock if it is disabled. For kernels with just CONFIG_PARAVIRT but not CONFIG_PARAVIRT_SPINLOCKS, the unfair lock will be used regardless of the setting of kvm_pv_unhalt. Without those config options, the standard qspinlock will be used.
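A condensed sketch of the selection being described, loosely based on the 4.14-era code in arch/x86/kernel/kvm.c and arch/x86/include/asm/qspinlock.h (simplified and abridged, not verbatim):

/*
 * CONFIG_PARAVIRT_SPINLOCKS: the guest only switches to the paravirt
 * qspinlock when the host advertises PV_UNHALT.
 */
void __init kvm_spinlock_init(void)
{
        if (!kvm_para_available())
                return;
        if (!kvm_para_has_feature(KVM_FEATURE_PV_UNHALT))
                return;         /* keep whatever virt_spin_lock() gives us */

        __pv_init_lock_hash();
        pv_lock_ops.queued_spin_lock_slowpath = __pv_queued_spin_lock_slowpath;
        pv_lock_ops.queued_spin_unlock = PV_CALLEE_SAVE(__pv_queued_spin_unlock);
        pv_lock_ops.wait = kvm_wait;    /* halt until kicked by the lock holder */
        pv_lock_ops.kick = kvm_kick_cpu;
}

/*
 * CONFIG_PARAVIRT: whenever the native slowpath runs on a hypervisor
 * (no CONFIG_PARAVIRT_SPINLOCKS, or PV_UNHALT not advertised), it falls
 * back to a plain test-and-set loop -- the "unfair" lock, with no
 * queueing and no fairness guarantee.
 */
static inline bool virt_spin_lock(struct qspinlock *lock)
{
        if (!static_cpu_has(X86_FEATURE_HYPERVISOR))
                return false;   /* bare metal: native qspinlock */

        do {
                while (atomic_read(&lock->val))
                        cpu_relax();
        } while (atomic_cmpxchg(&lock->val, 0, _Q_LOCKED_VAL) != 0);

        return true;
}

So when the host hides kvm_pv_unhalt, a CONFIG_PARAVIRT guest ends up in the test-and-set path above, which is what the unfair-lock performance discussion is about.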
Thanks for the explanation. Now, I don't know yet what's the best default for a guest that has CONFIG_PARAVIRT_SPINLOCKS when it sees a host that supports kvm_pv_unhalt. But I'm arguing that it's the guest's responsibility to choose what to do when it detects such a host, instead of expecting the host to hide features from the guest. The guest and the guest administrator have more information to choose what's best. In other words, if exposing kvm_pv_unhalt in CPUID really makes some guests behave poorly, can we fix the guests instead?

--
Eduardo