On Mon, Jul 20, 2015 at 18:37:27 +0200, Andrea Bolognani wrote:
Swap out all instances of cpu_set_t and replace them with virBitmap,
which some of the code was already using anyway.
The changes are pretty mechanical, with one notable exception: an
assumption has been added about the maximum value we can run into
while reading either socket_id or core_id.
While this specific assumption was not in place before, we were
already using cpu_set_t improperly: we neither made sure never to set
any bit past CPU_SETSIZE nor explicitly allocated bigger sets. In
fact the default size of a cpu_set_t, 1024 bits (CPU_SETSIZE), is far
too small for our testsuite, which includes core_id values in the 2000s.
---
src/nodeinfo.c | 65 ++++++++++++++++++++++++++++++++++------------------------
1 file changed, 38 insertions(+), 27 deletions(-)
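As a side note (not part of the patch), here is a minimal standalone sketch of the sizing
problem described above: a plain cpu_set_t only covers IDs up to CPU_SETSIZE-1, so handling
core_id values in the 2000s with cpu_set_t would require glibc's dynamically allocated sets,
which the old code never did. The hard-coded core_id of 2047 is a made-up value purely for
illustration.

  /* Sketch: fixed cpu_set_t vs. dynamically allocated CPU sets. */
  #define _GNU_SOURCE
  #include <sched.h>
  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
      unsigned int core_id = 2047;   /* hypothetical ID read from sysfs */

      /* A plain cpu_set_t only represents IDs 0..CPU_SETSIZE-1
       * (CPU_SETSIZE is 1024 in glibc), so setting bit core_id in a
       * fixed-size set would be invalid here. */
      if (core_id >= CPU_SETSIZE)
          printf("core_id %u does not fit in a fixed cpu_set_t (%d bits)\n",
                 core_id, CPU_SETSIZE);

      /* The correct cpu_set_t idiom is a dynamically sized set ... */
      cpu_set_t *dyn = CPU_ALLOC(core_id + 1);
      size_t setsize = CPU_ALLOC_SIZE(core_id + 1);
      if (!dyn)
          return EXIT_FAILURE;

      CPU_ZERO_S(setsize, dyn);
      CPU_SET_S(core_id, setsize, dyn);
      printf("bit %u set in dynamic set: %d\n",
             core_id, CPU_ISSET_S(core_id, setsize, dyn));

      CPU_FREE(dyn);

      /* ... whereas the patch instead switches to libvirt's virBitmap,
       * sized up front from the assumed maximum socket_id/core_id. */
      return EXIT_SUCCESS;
  }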
ACK, I agree that the maximum ID can be unified across libvirt in a
separate patch.