On Thu, Jul 21, 2016 at 05:52:11PM +0200, Andrea Bolognani wrote:
> On Wed, 2016-07-20 at 13:49 +0100, Daniel P. Berrange wrote:
> > > Additionally, this doesn't tell us anything about whether any
> > > host CPU can run a guest CPU: given the above configuration,
> > > on ppc64, we know that CPU 1 can run guest threads even though
> > > it's offline because CPU 0 is online, but the same isn't true
> > > on x86.
> > >
> > > So we would end up needing three new boolean properties:
> > >
> > >   - online - whether the CPU is online
> > >   - can_run_vcpus - whether the CPU can run vCPUs
> > >   - can_pin_vcpus - whether vCPUs can be pinned to the CPU
> >
> > These two can*vcpus props aren't telling the whole story
> > because they are assuming use of KVM - TCG or LXC guests
> > would not follow the same rules for runnability here.
> >
> > This is why I think the host capabilities XML should focus
> > on describing what hardware actually exists and its state,
> > and not say anything about guest runnability.
> >
> > Historically we've let apps figure out the runnability
> > of pCPUs vs guest vCPUs, but even on x86 it isn't as simple
> > as you/we make it out to be.
> >
> > For example, nothing here reflects the fact that the host
> > OS could have /sys/fs/cgroup/cpuset/cpuset.cpus configured
> > to a subset of CPUs. So already today on x86, just because
> > a CPU is listed in the capabilities XML does not mean that
> > you can run a guest on it.
> >
> > So I think there's a gap for exposing information about
> > runnability of guests vs host CPUs, that does not really
> > fit in the host capabilities. Possibly it could be in the
> > guest capabilities, but more likely it would need to be
> > a new API entirely.
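As an aside on the cpuset point above: a management app can already
compute the effective set itself by intersecting the CPUs advertised in
the capabilities XML with the cgroup mask. A minimal sketch, assuming
the cgroup v1 mount point; parse_cpuset() is a hypothetical helper, not
anything libvirt provides:

```python
# Sketch: expand a kernel cpuset mask, such as the contents of
# /sys/fs/cgroup/cpuset/cpuset.cpus, into a set of CPU numbers.
# parse_cpuset() is a hypothetical helper written for this example.

def parse_cpuset(mask):
    """Expand a cpuset string like '0-3,8,10-11' into a set of ints."""
    cpus = set()
    for part in mask.split(','):
        if '-' in part:
            lo, hi = part.split('-')
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(part))
    return cpus

# CPUs advertised in the capabilities XML vs CPUs the cgroup allows:
advertised = parse_cpuset('0-7')
allowed = parse_cpuset('0-3,6')

# Only the intersection can actually run guest vCPUs.
print(sorted(advertised & allowed))  # [0, 1, 2, 3, 6]
```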
> Why wouldn't domcapabilities be a good place? We already
> have some related information (the <vcpu> tag).
Just depends whether we have all the info we need available
in the domcapabilities API. eg I was wondering whether we
would need info about the guest CPU topology (sockets, cores,
threads) too. If we don't, then great, it can be in the
domcapabilities.
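For reference, the <vcpu> element currently exposed by domcapabilities
only reports a maximum count, along these lines (value illustrative):

  <vcpu max='255'/>

so anything about per-CPU runnability or placement would be new ground.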
> In any case, regardless of whether or not it will ultimately
> be part of domcapabilities, I guess a good starting point is
> to sketch out what the XML would look like. I'm thinking of
> something like
>
>   <cpu id='0' runnable='yes' pinnable='yes'
>        run_group='0-3'/>
>   <cpu id='1' runnable='yes' pinnable='no'
>        run_group='0-3'/>
>   <cpu id='2' runnable='yes' pinnable='no'
>        run_group='0-3'/>
>   <cpu id='3' runnable='yes' pinnable='no'
>        run_group='0-3'/>
>
> where 'runnable' tells you whether the CPU can run vCPUs,
> 'pinnable' whether vCPUs can be pinned to it, and 'run_group'
> tells you which CPUs the pinned vCPUs will actually run on? On x86
What's the relationship to guest CPUs and their topology
here? Is this trying to say that all vCPUs placed in a
run_group must be in the same virtual socket?

If so, is the pinnable attribute trying to imply that when
you change pinning for a vCPU on the first pCPU in a run
group, it will automatically change the pinning of the
other vCPUs on that same pCPU run group?
> it would look simpler:
>
>   <cpu id='0' runnable='yes' pinnable='yes'
>        run_group='0'/>
>   <cpu id='1' runnable='yes' pinnable='yes'
>        run_group='1'/>
>
> I think we don't need to add information that can already be
> obtained from existing capabilities, such as the siblings
> list.
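To make the proposed semantics concrete, here's a rough sketch of how a
client might consume these attributes, using the ppc64 example above.
The <cpus> wrapper element is invented here purely so the fragment
parses as a standalone document; none of this is an existing libvirt
schema:

```python
# Sketch only: parse the proposed per-CPU attributes and derive the
# pinnable CPUs plus the run-group membership.
import xml.etree.ElementTree as ET

sketch = """
<cpus>
  <cpu id='0' runnable='yes' pinnable='yes' run_group='0-3'/>
  <cpu id='1' runnable='yes' pinnable='no' run_group='0-3'/>
  <cpu id='2' runnable='yes' pinnable='no' run_group='0-3'/>
  <cpu id='3' runnable='yes' pinnable='no' run_group='0-3'/>
</cpus>
"""

root = ET.fromstring(sketch)

# CPUs a vCPU may be explicitly pinned to.
pinnable = [c.get('id') for c in root.findall('cpu')
            if c.get('pinnable') == 'yes']

# Group CPUs sharing a run_group, i.e. CPUs over which vCPUs pinned
# to any member of the group will actually be scheduled.
groups = {}
for c in root.findall('cpu'):
    groups.setdefault(c.get('run_group'), []).append(c.get('id'))

print(pinnable)  # ['0']
print(groups)    # {'0-3': ['0', '1', '2', '3']}
```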
Yep, it'd be nice to avoid duplicating info already exposed in
the host capabilities, such as host topology.

It feels like 'run_group' is, however, rather duplicating that
info. eg isn't 'run_group' just directly saying which cores are
part of the same socket?

For the sake of clarity, can you just back up again & explain
exactly what the rules are wrt PPC & pCPU / vCPU topology and
placement.
Regards,
Daniel
--
|:
http://berrange.com -o-
http://www.flickr.com/photos/dberrange/ :|
|:
http://libvirt.org -o-
http://virt-manager.org :|
|:
http://autobuild.org -o-
http://search.cpan.org/~danberr/ :|
|:
http://entangle-photo.org -o-
http://live.gnome.org/gtk-vnc :|