Corrections on proposal:
1) PinVcpus
Replace:
* @cpumap: pointer to a bit map of real CPUs (format in virVcpuInfo)
* @maplen: length of cpumap, in 8-bit bytes
by:
* @cpumap: pointer to a bit map of real CPUs (in 8-bit bytes).
* Each bit set to 1 means that corresponding CPU is usable.
* Bytes are stored in little-endian order: CPU0-7, 8-15...
* In each byte, lowest CPU number is least significant bit.
* @maplen: number of bytes in cpumap, from 1 up to size of CPU map in
* underlying virtualization system (Xen...).
* If maplen < size, missing bytes are set to zero.
* If maplen > size, failure code is returned.
2) GetVcpu
Add a 4th argument:
* @maplen: number of bytes in cpumap field of virVcpuInfo
    int virDomainGetVcpus(virDomainPtr domain, virVcpuInfoPtr info,
                          int maxinfo, int maplen)
3) Structure VcpuInfo
Remove: #define VIR_MAX_CPUS 256
Replace:
unsigned char cpumap[VIR_MAX_CPUS/8]; /* Bit map of usable real CPUs.
by:
    unsigned char cpumap[];  /* Bit map of usable real CPUs.
                                Variable length: it may be smaller or larger
                                than the size of the CPU map in the underlying
                                virtualization system (Xen...). */
4) Accessor macros: to be defined later.
Please reply to veillard(a)redhat.com
To: michel.ponceau(a)bull.net
cc: libvir-list(a)redhat.com
Subject: Re: Proposal: add 3 functions to Libvirt API, for virtual CPUs
On Fri, Jun 30, 2006 at 04:00:45PM +0200, michel.ponceau(a)bull.net wrote:
For our administration, we need the following actions while the domain
concerned is running:
1) Change the number of virtual CPUs.
2) Change the pinning (affinity) of a virtual CPU on real CPUs.
3) Get detailed information for each virtual CPU.
Currently there is no Libvirt function provided for that. We suggest adding
the following 3 functions (in libvirt.c):
/**
* virDomainSetVcpus:
* @domain: pointer to domain object, or NULL for Domain0
* @nvcpus: the new number of virtual CPUs for this domain
*
* Dynamically change the number of virtual CPUs used by the domain.
 * Note that this call may fail if the underlying virtualization hypervisor
 * does not support it or if growing the number is arbitrarily limited.
 * This function requires privileged access to the hypervisor.
*
* Returns 0 in case of success, -1 in case of failure.
*/
int virDomainSetVcpus(virDomainPtr domain, unsigned int nvcpus)
okay
/**
* virDomainPinVcpu:
* @domain: pointer to domain object, or NULL for Domain0
* @vcpu: virtual CPU number
* @cpumap: pointer to a bit map of real CPUs (format in virVcpuInfo)
* @maplen: length of cpumap, in 8-bit bytes
*
 * Dynamically change the real CPUs which can be allocated to a virtual CPU.
 * This function requires privileged access to the hypervisor.
*
* Returns 0 in case of success, -1 in case of failure.
*/
int virDomainPinVcpu(virDomainPtr domain, unsigned int vcpu,
unsigned char *cpumap, int maplen)
Can you explain more clearly what the format of cpumap is? An example would
be welcome, and that would be needed for the doc and maybe testing. What
would happen if maplen is smaller or larger than the number of CPUs divided
by 8?
/**
* virDomainGetVcpus:
* @domain: pointer to domain object, or NULL for Domain0
* @info: pointer to an array of virVcpuInfo structures
* @maxinfo: number of structures in info array
*
 * Extract information about the virtual CPUs of a domain and store it in
 * the info array.
 *
 * Returns the number of info structures filled in case of success, -1 in
 * case of failure.
*/
int virDomainGetVcpus(virDomainPtr domain, virVcpuInfoPtr info, int maxinfo)
Hum ... now the problem with that API entry point is that we 'burn' the
maximum of 256 processors into the ABI, i.e. if we ever need to go past 256,
clients and servers need to be recompiled. Maybe this is not a real problem
in practice, but it's annoying. Are there existing APIs doing this kind of
thing (in POSIX for example), and what hard limit did they use?
Maybe
    int virDomainGetVcpusNr(virDomainPtr domain, int nr, virVcpuInfoPtr info,
                            int maxCPU);
where maxCPU is defined by the client as the number of real CPUs it allowed
for in its virVcpuInfoPtr; an iteration over the virtual CPUs defined in the
domain is then possible too.
Of course, if the domain uses many virtual CPUs this would become expensive,
but somehow I don't see that being the common use; I would rather guess that
the domains created use a few CPUs even if instantiated on a very large
machine.
This goes with the following structure (in libvirt.h):
/**
 * virVcpuInfo: structure for information about a virtual CPU in a domain.
 */
#define VIR_MAX_CPUS 256
Hum, there are already NUMA machines with more than 256 processors; it's
annoying to define an API limit when you know it is already breakable.
typedef enum {
    VIR_VCPU_OFFLINE = 0,  /* the virtual CPU is offline */
    VIR_VCPU_RUNNING = 1,  /* the virtual CPU is running */
    VIR_VCPU_BLOCKED = 2,  /* the virtual CPU is blocked on resource */
} virVcpuState;
typedef struct _virVcpuInfo virVcpuInfo;
struct _virVcpuInfo {
    unsigned int number;         /* virtual CPU number */
    int state;                   /* value from virVcpuState */
    unsigned long long cpuTime;  /* CPU time used, in nanoseconds */
    int cpu;                     /* real CPU number, or -1 if offline */
    unsigned char cpumap[VIR_MAX_CPUS/8]; /* Bit map of usable real CPUs.
            Each bit set to 1 means that corresponding CPU is usable.
            Bytes are stored in little-endian order: CPU0-7, 8-15...
            In each byte, lowest CPU number is least significant bit. */
};
typedef virVcpuInfo *virVcpuInfoPtr;
Hum, maybe some accessors should be provided in the API; letting the client
code handle access and having to take care of endianness issues doesn't feel
very nice. Something like the FD_CLR/FD_ISSET/FD_SET/FD_ZERO equivalents
used with the select() POSIX call.
I have successfully tried those functions via the Xen hypervisor, except for
the first (SetVcpus), where the hypervisor operation DOM0_MAX_VCPUS fails
(maybe it is not possible on a running domain?). That function was
successful via the Xen daemon.
Maybe that operation requires more than one hypervisor call to actually
enable the processors. The simplest would be to look at the code in xend to
see what's needed there; maybe the kernel needs to be made aware of it, and
I would expect this to be a xenstore operation.
At least we know we can fall back to xend if the hypervisor call doesn't
work directly.
I don't know if virDomainGetVcpus should really be mutated into
virDomainGetVcpusNr; I would certainly like to be able to keep that API
extensible for more than 256 CPUs. Maybe I'm just too cautious there;
I would really like feedback from others on this issue!
In the meantime, sending the current set of patches you developed would
allow us to look closely at the 2 calls that I feel okay with.
Thanks, and sorry it took so long!
Daniel
--
Daniel Veillard      | Red Hat                        http://redhat.com/
veillard(a)redhat.com | libxml GNOME XML XSLT toolkit  http://xmlsoft.org/
http://veillard.com/ | Rpmfind RPM search engine      http://rpmfind.net/