
On 04/24/2018 10:42 AM, Daniel P. Berrangé wrote:
> On Mon, Apr 23, 2018 at 05:44:38PM +0200, Michal Privoznik wrote:
>> https://bugzilla.redhat.com/show_bug.cgi?id=1569678
>>
>> On some large systems (with ~400GB of RAM) it is possible for an
>> unsigned int to overflow, in which case we report an invalid 4K
>> pages pool size. Switch to unsigned long long.
> It isn't very obvious from the code diff what the actual problem is,
> but it is mentioned in the bug that we hit overflow in
> virNumaGetPages when doing:
>
>     huge_page_sum += 1024 * page_size * page_avail;
>
> because although 'huge_page_sum' is an unsigned long long, page_size
> and page_avail are both unsigned int, so the promotion to unsigned
> long long doesn't happen until the sum has been calculated, by which
> time we've already overflowed. Can you mention that we're
> specifically solving this huge_page_sum overflow in the commit
> message?
>
> Turning page_avail into an unsigned long long is not strictly needed
> until we need the ability to represent more than 2^32 4k pages,
> which IIUC equates to 16 TB of RAM. That's not outside the realm of
> possibility, so it makes sense to change it to unsigned long long to
> avoid future problems. Can you also mention that we're solving this
> limit too in the commit message?
> Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>
Adjusted and pushed. Thank you.

Michal