On 19.06.2014 13:14, Daniel P. Berrange wrote:
> On Thu, Jun 19, 2014 at 12:06:44PM +0100, Daniel P. Berrange wrote:
>> On Mon, Jun 16, 2014 at 05:08:26PM +0200, Michal Privoznik wrote:
>>> +int
>>> +virNumaGetPageInfo(int node,
>>> +                   unsigned int page_size,
>>> +                   unsigned int *page_avail,
>>> +                   unsigned int *page_free)
>>> +{
>>> +    int ret = -1;
>>> +    long system_page_size = sysconf(_SC_PAGESIZE);
>>> +
>>> +    /* sysconf() returns page size in bytes,
>>> +     * the @page_size is however in kibibytes */
>>> +    if (page_size == system_page_size / 1024) {
>>> +        unsigned long long memsize, memfree;
>>> +
>>> +        /* TODO: come up with better algorithm that takes huge pages into
>>> +         * account. The problem is huge pages cut off regular memory. */
>>
>> Hmm, so this code is returning a normal page count that ignores the fact
>> that some pages are not in fact usable because they've been stolen for
>> huge pages? I was thinking that the total memory reported by the kernel
>> was reduced when you allocated huge pages, but testing now, it seems I
>> was mistaken in that belief. So this is a bit of a nasty gotcha, because
>> a user of this API would probably expect that the sum of page size *
>> page count for all page sizes would equal total physical RAM (give or
>> take).
>>
>> I still like the idea of including the default page size in this info,
>> but perhaps we should disable the default system page size for now &
>> revisit later if we can figure out a way to accurately report it,
>> rather than reporting misleading info.
> I should have said: ACK to either #if 0'ing out the system page size, or
> ACK to your previous version of this patch, unless someone has better
> ideas for accurately reporting total + free info for the default page
> size.
>
> Regards,
> Daniel
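
To make the gotcha Daniel describes concrete, here is a tiny standalone
illustration with made-up numbers (a node with 4 GiB of RAM and 512
reserved 2 MiB huge pages); none of these figures come from the patch
or from a real host:

/* Illustration of the accounting gotcha: a node with 4 GiB of RAM
 * where 512 x 2 MiB huge pages have been reserved.  MemTotal still
 * reports the full 4 GiB, so a 4 KiB page count derived from it
 * overstates the ordinary-page pool by the amount stolen for huge
 * pages. */
#include <stdio.h>

int main(void)
{
    unsigned long long mem_total   = 4ULL << 30;  /* MemTotal, in bytes */
    unsigned long long huge_size   = 2ULL << 20;  /* 2 MiB huge page */
    unsigned long long huge_count  = 512;         /* reserved huge pages */
    unsigned long long sys_pg_size = 4096;        /* sysconf(_SC_PAGESIZE) */

    unsigned long long naive    = mem_total / sys_pg_size;
    unsigned long long adjusted = (mem_total - huge_count * huge_size) / sys_pg_size;

    /* Prints naive=1048576 adjusted=786432: summing page_size * count
     * over all sizes with the naive figure double-counts the 1 GiB
     * already set aside for huge pages. */
    printf("naive=%llu adjusted=%llu\n", naive, adjusted);
    return 0;
}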
Okay, I'm squashing this in prior to pushing:
diff --git a/src/util/virnuma.c b/src/util/virnuma.c
index a59feca..c8e7f40 100644
--- a/src/util/virnuma.c
+++ b/src/util/virnuma.c
@@ -663,6 +663,7 @@ virNumaGetPageInfo(int node,
     /* sysconf() returns page size in bytes,
      * the @page_size is however in kibibytes */
     if (page_size == system_page_size / 1024) {
+#if 0
         unsigned long long memsize, memfree;
 
         /* TODO: come up with better algorithm that takes huge pages into
@@ -680,6 +681,11 @@ virNumaGetPageInfo(int node,
 
         if (page_free)
             *page_free = memfree / system_page_size;
+#else
+        virReportError(VIR_ERR_ARGUMENT_UNSUPPORTED, "%s",
+                       _("system page size is not supported yet"));
+        goto cleanup;
+#endif /* 0 */
     } else {
         if (virNumaGetHugePageInfo(node, page_size, page_avail, page_free) < 0)
             goto cleanup;
@@ -727,6 +733,10 @@ virNumaGetPages(int node,
     unsigned int ntmp = 0;
     size_t i;
     bool exchange;
+
+#if 0
+    /* This has to be disabled until the issue in
+     * virNumaGetPageInfo is resolved. Sorry. */
     long system_page_size;
 
     /* sysconf() returns page size in bytes,
@@ -745,6 +755,7 @@ virNumaGetPages(int node,
         goto cleanup;
     tmp_size[ntmp] = system_page_size;
     ntmp++;
+#endif /* 0 */
 
     /* Now that we got ordinary system pages, lets get info on huge pages */
     if (virNumaGetHugePageInfoPath(&path, node, 0, NULL) < 0)
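
For illustration only, and not part of the squashed patch: one direction
the TODO hints at would be to subtract whatever a node has reserved for
huge pages (as exposed under /sys/devices/system/node/nodeN/hugepages)
from its MemTotal before deriving the ordinary-page counts. The sketch
below hardcodes the node's MemTotal and skips most error handling, so
treat it purely as a starting point, not as the eventual fix.

#include <stdio.h>
#include <dirent.h>
#include <unistd.h>

/* Return the number of kibibytes currently reserved for huge pages on
 * @node, or 0 if the sysfs information cannot be read. */
static unsigned long long
hugepageReservedKiB(int node)
{
    char dirpath[256];
    char filepath[768];
    unsigned long long reserved = 0;
    struct dirent *ent;
    DIR *dir;

    snprintf(dirpath, sizeof(dirpath),
             "/sys/devices/system/node/node%d/hugepages", node);
    if (!(dir = opendir(dirpath)))
        return 0;

    while ((ent = readdir(dir))) {
        unsigned long long sizekib, nr = 0;
        FILE *fp;

        /* Each subdirectory is named hugepages-<size>kB */
        if (sscanf(ent->d_name, "hugepages-%llukB", &sizekib) != 1)
            continue;

        snprintf(filepath, sizeof(filepath), "%s/%s/nr_hugepages",
                 dirpath, ent->d_name);
        if ((fp = fopen(filepath, "r"))) {
            if (fscanf(fp, "%llu", &nr) != 1)
                nr = 0;
            fclose(fp);
        }
        reserved += sizekib * nr;
    }
    closedir(dir);
    return reserved;
}

int main(void)
{
    long sys_pg_kib = sysconf(_SC_PAGESIZE) / 1024;
    /* The node's MemTotal would normally come from
     * /sys/devices/system/node/node0/meminfo; hardcoded to 4 GiB here
     * to keep the sketch short. */
    unsigned long long memtotal_kib = 4ULL * 1024 * 1024;
    unsigned long long ordinary_kib = memtotal_kib - hugepageReservedKiB(0);

    printf("node0: ~%llu ordinary pages of %ld KiB\n",
           ordinary_kib / sys_pg_kib, sys_pg_kib);
    return 0;
}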
Hopefully I'll come up with a resolution soon.
Michal