On Mon, Feb 12, 2018 at 13:42:02 +0000, Daniel Berrange wrote:
> On Mon, Feb 12, 2018 at 02:31:46PM +0100, Peter Krempa wrote:
> > On Mon, Feb 12, 2018 at 09:39:26 +0000, Daniel Berrange wrote:
> > > On Mon, Feb 12, 2018 at 03:54:21AM -0500, Yi Wang wrote:
> > > > We can't clear the vcpupin settings of the XML once the vcpupin
> > > > command has been run, which is inconvenient in some situations,
> > > > such as migration to a host with fewer CPUs.
> > > >
> > > > This patch introduces a clear feature, which can clear the vcpupin
> > > > settings of the XML using a 'c' option.
> > > >
> > > > Signed-off-by: Yi Wang <wang.yi59(a)zte.com.cn>
> > > > Signed-off-by: Xi Xu <xu.xi8(a)zte.com.cn>
> > > > ---
> > > >  include/libvirt/libvirt-domain.h | 11 +++++++++++
> > > >  src/qemu/qemu_driver.c           | 32 ++++++++++++++++++++++++--------
> > >
> > > I'm not seeing a good reason for these changes. There's no concept of
> > > clearing CPU pinning - the default is simply "pinned" to all possible
> > > CPUs, and the callers should already be able to request all possible
> > > CPUs with the current API design.
> >
> > Well, the thing is that in some cases the default is not pinned to all
> > pCPUs, but rather can be taken from other configuration points, e.g.
> > the global VM pinning bitmap.
> Which global VM pinning bitmap are you referring to?
  <vcpu placement='static' cpuset="1-4,^3,6" current="1">2</vcpu>

The above 'cpuset' is the default pinning for all vcpus, unless a vcpu
specifies its own individual pinning.
> IIRC, when we spawned QEMU we explicitly set it to use all pCPUs, so
> that it doesn't inherit whatever affinity libvirtd itself has. I see
> we slightly tuned that to only include CPUs that are currently online
> (as reported in /sys/devices/system/cpu/online).
>
> Except I see it is even more complicated. We only use the online
> bitmap check if virHostCPUHasBitmap() returns true. We also potentially
> asked numad to give us default placement.
Yes, that is the second possible source of pinning information if the
user requests automatic pinning.
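
For illustration, the spawn-time logic described above is roughly the
following (a sketch only, not the verbatim libvirt code; 'pid' and the
error label are assumed from the surrounding context):

  virBitmapPtr cpumap = NULL;

  if (virHostCPUHasBitmap()) {
      /* pin only to the CPUs currently online, as reported in
       * /sys/devices/system/cpu/online */
      if (!(cpumap = virHostCPUGetOnlineBitmap()))
          goto error;
  } else {
      /* no online bitmap available: fall back to all host CPUs */
      if (!(cpumap = virBitmapNew(virHostCPUGetCount())))
          goto error;
      virBitmapSetAll(cpumap);
  }

  /* apply the affinity to the freshly spawned QEMU process so it
   * does not inherit libvirtd's own affinity */
  if (virProcessSetAffinity(pid, cpumap) < 0)
      goto error;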
> So as implemented this _CLEAR flag is still potentially significantly
> different to what the original affinity was set to, which I think is
> pretty bad semantically.
I did not look at the implementation here, but basically this should do
the same logic as virDomainDefGetVcpuPinInfoHelper does to determine
which bitmap is actually used. The hierarchy is:

1) the specific per-vcpu bitmap
2) automatic pinning from numad, if enabled
3) <vcpu cpuset=''>
4) all physical CPUs present on the host

The above algorithm is used in all the pinning code to determine the
effective bitmap.
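
As a rough sketch of that hierarchy (the function and parameter names
here are illustrative, not the actual virDomainDefGetVcpuPinInfoHelper
internals):

  static virBitmapPtr
  getEffectiveVcpuPin(virDomainDefPtr def,
                      virDomainVcpuDefPtr vcpu,
                      virBitmapPtr autoCpuset, /* from numad, may be NULL */
                      virBitmapPtr hostCpus)   /* all pCPUs on the host */
  {
      /* 1) specific per-vcpu bitmap */
      if (vcpu->cpumask)
          return vcpu->cpumask;

      /* 2) automatic pinning from numad, if enabled */
      if (def->placement_mode == VIR_DOMAIN_CPU_PLACEMENT_MODE_AUTO &&
          autoCpuset)
          return autoCpuset;

      /* 3) <vcpu cpuset=''> */
      if (def->cpumask)
          return def->cpumask;

      /* 4) all physical CPUs present on the host */
      return hostCpus;
  }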
> > As such, the current code does not allow restoring such a state, since
> > pinning to all pCPUs might be a desired legitimate configuration, so I
> > removed the hack that deleted the pinning for such a configuration a
> > long time ago. This means that an explicit removal of the pinning might
> > be a desired behaviour of the API, since abusing any other value to
> > restore the default state is not a good idea.
> True, but I think it is rather a hack to modify this API with a flag
> _CLEAR that essentially means "ignore the bitmap parameter", as opposed
> to creating an API virDomainClearCPUAffinity(dom).
Yes, this is probably a better solution. The API needs a 'vcpu' argument
though:

  virDomainClearVcpuPin(virDomainPtr dom,
                        unsigned int vcpu,
                        unsigned int flags)
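
A hypothetical caller would then look something like this (the API does
not exist yet; the flag semantics are assumed to follow the usual
VIR_DOMAIN_AFFECT_* convention):

  /* hypothetical: clear any explicit pinning of vcpu 0, live and in
   * the persistent config, falling back to the default hierarchy */
  if (virDomainClearVcpuPin(dom, 0,
                            VIR_DOMAIN_AFFECT_LIVE |
                            VIR_DOMAIN_AFFECT_CONFIG) < 0)
      goto error;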