On 04/17/13 11:16, Daniel P. Berrange wrote:
> On Wed, Apr 17, 2013 at 08:47:01AM +0200, Peter Krempa wrote:
> On 04/16/13 19:41, Daniel P. Berrange wrote:
>> On Tue, Apr 16, 2013 at 04:00:10PM +0200, Peter Krempa wrote:
>>> This flag will allow using qemu guest agent commands to disable
>>> (offline) and enable (online) processors in a live guest that has the
>>> guest agent running.
>>
>> How do guest CPU offline/online state changes relate
>> to the offline/online state changes we have traditionally
>> done via the monitor?
>>
>> ie if we offline a CPU with the guest agent, will that
>> now be visible in the state via the monitor? And the
>> reverse?
>
> If you modify the guest state via agent it is not visible to the
> host (except for the VCPU not consuming cpu time).
> So this isn't really VCPU hotplug then. It is just toggling
> whether the guest OS is scheduling things on that VCPU or
> not. As such IMHO, we should not be overloading the existing
> API with this functionality - we should have strictly separate
> APIs for controlling guest OS vCPU usage.
Hmm, okay, that seems fair enough. The virDomainSetVcpus API has the
ideal name for this, but mixing the semantics of disabling CPUs with the
agent and ripping them out of the hypervisor might lead to user
confusion.
In this case we need to design a new API. Here are a few suggestions:
1) virDomainSetGuestVCPU(virDomainPtr dom,
                         unsigned int cpu_id,
                         bool state,
                         unsigned int flags);
This API would be very easy for us, as only one CPU could be modified at
a time, thus no painful error reporting on semi-failed transactions. It
is harder to use in mgmt apps, as they would need to call it multiple
times.
2) virDomainSetGuestVCPUs(virDomainPtr dom,
                          virBitmapPtr to_online,
                          virBitmapPtr to_offline,
                          unsigned int flags);
This doesn't look very nice. Two bitmaps are needed, as CPU indexes are
not guaranteed to be contiguous inside a guest. It is easier to use for
mgmt apps, as only a single call is needed, but libvirt will have to
handle partial failures and maybe even attempt a rollback.
3) virDomainSetGuestVCPUs(virDomainPtr dom,
                          virBitmapPtr cpumap,
                          bool state,
                          unsigned int flags);
A variation of 2): one CPU map and a state flag that determines what
action to take on the CPUs provided in the map, instead of two separate
maps.
Another possibility would be to expose the CPU state in a kind of array,
as the agent monitor functions do right now, but this wouldn't be
expandable and would require users to adopt that new data structure.
Again, the getter functions will need to follow the same design, so that
the user can obtain a map of the system to make decisions about
offlining and onlining processors.
In the future we will also need similar APIs for classic CPU hotplug, as
qemu is probably going to support it that way. With classic hotplug we
probably won't need to deal with sparse CPU IDs.
Any other design or naming suggestions are welcome.
Peter