On systems with humongous pages (16GiB) and a 32-bit unsigned int
it is easy to hit an integer overflow in virNumaGetPages(). What
happens is: inside of virNumaGetPages(), as we process hugepages
for a given NUMA node (e.g. in order to produce capabilities XML),
we keep a running sum of the pool sizes in an unsigned long long
variable (huge_page_sum). In each iteration, the variable is
incremented by 1024 * page_size * page_avail. Now, page_size is
just an unsigned int, so we have:
ULL += U * U * ULL;
and because multiplication is left-associative, U * U is computed
first; since both operands end up with the same unsigned int type,
no conversion to a wider type happens. This means that for
humongous pages (like 16GiB) the multiplication overflows.
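Concretely (illustrative arithmetic, not taken from the code): a
16GiB page is 16777216 KiB, so 1024 * 16777216 = 2^34 =
17179869184, which is larger than UINT_MAX (2^32 - 1) and thus
wraps modulo 2^32, in this case to 0.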
Therefore, move the multiplication by 1024 out of the loop. This
helps in two ways:
1) now we have ULL += U * ULL; which converts the unsigned int
operand to unsigned long long before multiplying,
2) it saves a couple of CPU cycles.
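For reference, a minimal standalone sketch of the two variants
(illustrative values only; the overflow assumes a 32-bit unsigned
int, as described above):

    unsigned int page_size = 16U * 1024 * 1024;  /* 16GiB in KiB */
    unsigned long long page_avail = 1;
    unsigned long long huge_page_sum = 0;

    /* old: 1024 * page_size is evaluated in unsigned int, so
     * 2^34 wraps to 0 before widening to unsigned long long */
    huge_page_sum += 1024 * page_size * page_avail;

    /* new: page_size is converted to unsigned long long first,
     * so the whole expression is evaluated without overflow */
    huge_page_sum = page_size * page_avail;
    huge_page_sum *= 1024;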
Resolves:
https://issues.redhat.com/browse/RHEL-16749
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
---
src/util/virnuma.c | 7 ++++---
1 file changed, 4 insertions(+), 3 deletions(-)
diff --git a/src/util/virnuma.c b/src/util/virnuma.c
index 5053a70c61..9393c20875 100644
--- a/src/util/virnuma.c
+++ b/src/util/virnuma.c
@@ -787,9 +787,7 @@ virNumaGetPages(int node,
tmp_free[ntmp] = page_free;
ntmp++;
- /* page_size is in kibibytes while we want huge_page_sum
- * in just bytes. */
- huge_page_sum += 1024 * page_size * page_avail;
+ huge_page_sum += page_size * page_avail;
}
if (direrr < 0)
@@ -800,6 +798,9 @@ virNumaGetPages(int node,
VIR_REALLOC_N(tmp_avail, ntmp + 1);
VIR_REALLOC_N(tmp_free, ntmp + 1);
+ /* page_size is in kibibytes while we want huge_page_sum in just bytes. */
+ huge_page_sum *= 1024;
+
if (virNumaGetPageInfo(node, system_page_size, huge_page_sum,
&tmp_avail[ntmp], &tmp_free[ntmp]) < 0)
return -1;
--
2.41.0