On Wed, Jun 26, 2024 at 09:18:37AM -0500, Praveen K Paladugu wrote:
> Hey folks,
>
> My team is working on exposing `cpugroups` to Libvirt while using the
> 'hyperv' hypervisor with cloud-hypervisor (VMM). cpugroups are relevant in
> a specific configuration of hyperv called 'minroot'. In the Minroot
> configuration, the hypervisor artificially restricts Dom0 to run on a
> subset of CPUs (Logical Processors). The rest of the CPUs can be assigned
> to guests.
>
> cpugroups manage the CPUs assigned to guests and their scheduling
> properties. Initially this looks similar to `cpuset` (in cgroups), but the
> controls available with cpugroups don't map easily to those in cgroups.
> For example:
>
> * "IdleLPs" are the number of Logical Processors in a cpugroup that should
>   be reserved for a guest even if they are idle.

Are you saying that "IdleLPs" are host CPUs that are reserved for
a guest, but which are NOT currently going to be used for running
any virtual guest CPUs?

At what point do IdleLPs become used (non-idle) by the guest?

> * "SchedulingPriority", the priority (values between 0..7) with which to
>   schedule CPUs in a cpugroup.
>
>   We currently have
>
>     <vcpusched vcpus='0-4,^3' scheduler='fifo' priority='1'/>
>
>   and "SchedulingPriority" would conceptually map to the 'priority' value.

It sounds like you're saying that the priority applies to /all/
CPUs in the cpugroup. IOW, if we were to re-use <vcpusched> for
this, we would have to require that the 'vcpus' mask always
covers every CPU in the cpugroup.

It is probably better to just declare a new global element:

  <cputune>
    <priority>0..7</priority>
  </cputune>

since we've got precedent there with global elements for
<shares>, <period>, <quota>, etc. setting overall VM policy,
which can optionally be refined per-vCPU by other elements.
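
Purely for illustration, a sketch of how such a global priority could sit
alongside the existing <cputune> children (the <priority> element here is
only the proposal above, not something libvirt supports today):

  <cputune>
    <shares>2048</shares>
    <period>100000</period>
    <quota>-1</quota>
    <priority>6</priority>
    <vcpusched vcpus='0-4,^3' scheduler='fifo' priority='1'/>
  </cputune>

where the <vcpusched> line is the existing per-vCPU element from your
example, refining the global policy for a subset of vCPUs.
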
> As controls like the above don't easily map to anything in cgroups, using
> a driver specific element in the domain XML to configure cpugroups seems
> like the right approach. For example:

I think our general view is that tunable parameters in general are
almost entirely driver specific.

We provide a generic API framework for tunables using the virTypedParameter
arrays. The named tunables listed within the parameter array, though, will
generally be different per-driver. Similarly we have the general <cputune>
element, but the stuff within that is often per-driver.

If there are some parameters which are common to many drivers that's a
bonus, but I wouldn't say that is a required expectation.

IOW, I don't expect the cloud hypervisor driver to use a custom XML
namespace for this task. We should define new XML elements and/or
virTypedParameter constant names as needed, and re-use existing stuff
where sensible.
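
To make that concrete, a purely illustrative sketch (the element names
below are placeholders, not an agreed design) of expressing the same two
settings without a vendor namespace might be:

  <cputune>
    <idle_lps>4</idle_lps>
    <priority>6</priority>
  </cputune>

as opposed to the namespaced form quoted below.
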
>   <ch:cpugroups>
>     <idle_lps value='4'/>
>     <scheduling_priority value='6'/>
>   </ch:cpugroups>
>
> As cpugroups are only relevant while using the minroot configuration on
> hyperv, I don't see any value in generalizing this setting. So, having
> some "ch" driver specific settings seems like a good approach to
> implement this feature.
>
> Question 1: Do you see any concerns with this approach?
>
> The cpugroup settings can be applied/modified using the sysfs interface
> or using a cmdline tool on the host. I see Libvirt uses both these
> mechanisms for various use cases. But, given a choice, the sysfs based
> interface seems like a simpler approach to me. With the sysfs interface
> Libvirt does not have to take install time dependencies on new tools.
>
> Question 2: Of "sysfs" vs "cmdline tool", which is preferred, given a
> choice?

Directly using sysfs is preferable. It has lower overhead, and we can see
directly what fails, allowing clearer error reporting when needed. sysfs is
simple enough that spawning a cmdline tool doesn't reduce our work, and if
anything increases it.

With regards,
Daniel
--
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|