On 06/24/2014 11:56 AM, Michal Privoznik wrote:
> On 24.06.2014 10:15, Ján Tomko wrote:
>> On 06/23/2014 03:59 PM, Michal Privoznik wrote:
>>> + * kernel limitation. The problem is, if there are some huge
>>> + * pages allocated, they are accounted under the 'MemUsed' field
>>> + * in the meminfo file instead of being subtracted from the
>>> + * 'MemTotal'. We must do the subtraction ourselves.
>>> + * If unsure, pass 0.
>>> + *
>>> * If you're interested in just one bit, pass NULL to the other one.
>>> *
>>> * As a special case, if @node == -1, overall info is fetched
>>> @@ -657,6 +666,7 @@ virNumaGetHugePageInfo(int node,
>>> int
>>> virNumaGetPageInfo(int node,
>>> unsigned int page_size,
>>> + unsigned long long huge_page_sum,
>>> unsigned int *page_avail,
>>> unsigned int *page_free)
>>> {
>>> @@ -666,7 +676,6 @@ virNumaGetPageInfo(int node,
>>> /* sysconf() returns page size in bytes,
>>> * the @page_size is however in kibibytes */
>>> if (page_size == system_page_size / 1024) {
>>> -# if 0
>>> unsigned long long memsize, memfree;
>>>
>>> /* TODO: come up with better algorithm that takes huge pages into
>>> @@ -679,16 +688,14 @@ virNumaGetPageInfo(int node,
>>> goto cleanup;
>>> }
>>>
>>> + /* see description above */
>>
>> I think you can just move the comment above here.
> This one I'd really like to keep in the function description. I mean,
> @huge_page_sum is a function argument, which certainly deserves a
> description, and the aim of documenting internal functions is to spare
> other programmers from having to read the function code. So I'm keeping
> it there.
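
For context, the calculation the new @huge_page_sum argument enables is
straightforward: subtract the memory consumed by huge pages from the node's
'MemTotal' before dividing by the system page size, since the kernel does
not do that subtraction itself. Below is a minimal standalone sketch of the
arithmetic; the helper name and sample figures are made up for illustration
and this is not the actual libvirt code:

    #include <stdio.h>

    /* Hypothetical helper, for illustration only: how many regular
     * (system-sized) pages remain once the memory reserved for huge
     * pages is taken out of MemTotal, which still includes it. */
    static unsigned long long
    regular_pages_avail(unsigned long long mem_total,       /* bytes */
                        unsigned long long huge_page_sum,   /* bytes */
                        unsigned long long system_page_size /* bytes */)
    {
        if (huge_page_sum > mem_total)
            return 0;
        return (mem_total - huge_page_sum) / system_page_size;
    }

    int main(void)
    {
        /* Example: 8 GiB node, 2 GiB reserved as huge pages, 4 KiB pages */
        unsigned long long total = 8ULL << 30;
        unsigned long long huge = 2ULL << 30;

        printf("regular pages: %llu\n",
               regular_pages_avail(total, huge, 4096));
        return 0;
    }

With those figures the sketch prints 1572864: the 2 GiB reserved for huge
pages no longer inflates the count of ordinary 4 KiB pages.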
Oh, I thought it was pointing at the TODO comment:

    /* TODO: come up with better algorithm that takes huge pages into
     * account. The problem is huge pages cut off regular memory. */

Anyway, the comment "see description above" seems pointless to me.
Jan