On Fri, Jul 17, 2015 at 18:13:26 +0200, Andrea Bolognani wrote:
Swap out all instances of cpu_set_t and replace them with virBitmap,
which some of the code was already using anyway.
The changes are pretty mechanical, with one notable exception: an
assumption has been added about the maximum value we can run into while
reading either socket_id or core_id.

While this specific assumption was not in place before, we were already
using cpu_set_t improperly: we neither made sure never to set any bit
past CPU_SETSIZE nor explicitly allocated bigger sets. In fact, the
default size of a cpu_set_t, 1024 bits, is way too low to run our
testsuite, which includes core_id values in the 2000s.
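A minimal standalone sketch of the pitfall described above (illustrative
only, not part of the patch): current glibc silently ignores CPU_SET() on
an id >= CPU_SETSIZE, while the man page declares such use undefined, so
either way bits for core ids in the 2000s cannot land in a plain cpu_set_t;
the dynamically sized CPU_ALLOC() API shows what explicit sizing looks like:

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    int core_id = 2000;            /* values like this appear in the testsuite */
    cpu_set_t fixed;

    CPU_ZERO(&fixed);
    CPU_SET(core_id, &fixed);      /* out of range: silently dropped by glibc */
    printf("fixed:   %d\n", CPU_COUNT(&fixed));          /* prints 0 */

    /* explicit allocation, the cpu_set_t equivalent of sizing a bitmap */
    cpu_set_t *dyn = CPU_ALLOC(core_id + 1);
    size_t size = CPU_ALLOC_SIZE(core_id + 1);
    if (!dyn)
        return 1;
    CPU_ZERO_S(size, dyn);
    CPU_SET_S(core_id, size, dyn);
    printf("dynamic: %d\n", CPU_COUNT_S(size, dyn));     /* prints 1 */
    CPU_FREE(dyn);
    return 0;
}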
---
 src/nodeinfo.c | 65 ++++++++++++++++++++++++++++++++++------------------------
 1 file changed, 38 insertions(+), 27 deletions(-)
diff --git a/src/nodeinfo.c b/src/nodeinfo.c
index 44983b8..61a3b33 100644
--- a/src/nodeinfo.c
+++ b/src/nodeinfo.c
@@ -30,7 +30,6 @@
 #include <errno.h>
 #include <dirent.h>
 #include <sys/utsname.h>
-#include <sched.h>
 #include "conf/domain_conf.h"

 #if defined(__FreeBSD__) || defined(__APPLE__)
@@ -388,19 +387,6 @@ virNodeParseSocket(const char *dir,
     return ret;
 }

-# ifndef CPU_COUNT
-static int
-CPU_COUNT(cpu_set_t *set)
-{
-    size_t i, count = 0;
-
-    for (i = 0; i < CPU_SETSIZE; i++)
-        if (CPU_ISSET(i, set))
-            count++;
-    return count;
-}
-# endif /* !CPU_COUNT */
-
 /* parses a node entry, returning number of processors in the node and
  * filling arguments */
 static int
@@ -415,15 +401,18 @@ virNodeParseNode(const char *sysfs_prefix,
                  int *threads,
                  int *offline)
 {
+    /* Biggest value we can expect to be used as either socket id
+     * or core id. Bitmaps will need to be sized accordingly */
+    const int ID_MAX = 4095;
I think this should be a more global setting. We have quite a few places
where we invent arbitrary maximum cpu counts; one of them is
virProcessSetAffinity. (See the sketch after the quoted hunk below.)
     int ret = -1;
     int processors = 0;
     DIR *cpudir = NULL;
     struct dirent *cpudirent = NULL;
     virBitmapPtr present_cpumap = NULL;
+    virBitmapPtr sockets_map = NULL;
+    virBitmapPtr *cores_maps = NULL;
     int sock_max = 0;
-    cpu_set_t sock_map;
     int sock;
-    cpu_set_t *core_maps = NULL;
     int core;
     size_t i;
     int siblings;
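For illustration, the shared limit could look like this (header location
and macro name are invented here, not existing code):

/* In some common utility header, e.g. src/util/virbitmap.h or similar
 * (location and name hypothetical): one limit shared by
 * virNodeParseNode(), virProcessSetAffinity() and friends, instead of
 * a per-file constant like ID_MAX. */
#define VIR_HOST_CPU_ID_MAX 4095

The hunk above would then call virBitmapNew(VIR_HOST_CPU_ID_MAX + 1) and
drop the local constant.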
@@ -445,7 +434,9 @@ virNodeParseNode(const char *sysfs_prefix,
         goto cleanup;

     /* enumerate sockets in the node */
-    CPU_ZERO(&sock_map);
+    if (!(sockets_map = virBitmapNew(ID_MAX + 1)))
+        goto cleanup;
+
     while ((direrr = virDirRead(cpudir, &cpudirent, node)) > 0) {
         if (sscanf(cpudirent->d_name, "cpu%u", &cpu) != 1)
             continue;
@@ -462,7 +453,15 @@ virNodeParseNode(const char *sysfs_prefix,
         /* Parse socket */
         if ((sock = virNodeParseSocket(node, arch, cpu)) < 0)
             goto cleanup;
-        CPU_SET(sock, &sock_map);
+        if (sock > ID_MAX) {
+            virReportError(VIR_ERR_INTERNAL_ERROR,
+                           _("Socket %d can't be handled (max socket is %d)"),
+                           sock, ID_MAX);
+            goto cleanup;
+        }
+
+        if (virBitmapSetBit(sockets_map, sock) < 0)
+            goto cleanup;
Like many other places, this code would benefit from auto-allocating
bitmaps rather than statically sized ones; a sketch follows the quoted
hunk below. Nothing to change here, though.
         if (sock > sock_max)
             sock_max = sock;
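A sketch of the auto-allocating idea (the helper names here are
hypothetical, assuming no self-expanding bitmap exists in the tree at the
time of this review):

    /* start empty and grow on demand, so an unexpectedly large
     * socket id is accommodated instead of rejected */
    if (!(sockets_map = virBitmapNewEmpty()))
        goto cleanup;
    ...
    /* would enlarge the bitmap as needed rather than failing
     * when sock exceeds the current size */
    if (virBitmapSetBitExpand(sockets_map, sock) < 0)
        goto cleanup;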
Otherwise this looks good to me, but I'd really like to avoid multiple
definitions of the same maximum value.
Peter