On Thu, Jun 19, 2014 at 12:06:44PM +0100, Daniel P. Berrange wrote:
On Mon, Jun 16, 2014 at 05:08:26PM +0200, Michal Privoznik wrote:
> +int
> +virNumaGetPageInfo(int node,
> +                   unsigned int page_size,
> +                   unsigned int *page_avail,
> +                   unsigned int *page_free)
> +{
> +    int ret = -1;
> +    long system_page_size = sysconf(_SC_PAGESIZE);
> +
> +    /* sysconf() returns page size in bytes,
> +     * the @page_size is however in kibibytes */
> +    if (page_size == system_page_size / 1024) {
> +        unsigned long long memsize, memfree;
> +
> +        /* TODO: come up with better algorithm that takes huge pages into
> +         * account. The problem is huge pages cut off regular memory. */
Hmm, so this code is returning a normal page count that ignores the fact
that some pages are not actually usable because they've been stolen for
huge pages? I was thinking that the total memory reported by the kernel
was reduced when you allocated huge pages, but testing now, it seems I
was mistaken in that belief. So this is a bit of a nasty gotcha, because
a user of this API would probably expect that the sum of page size *
page count across all page sizes equals total physical RAM (give or
take).
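
To make the gotcha concrete, this is roughly the adjustment that would be
needed to preserve that expectation. The helper name and parameters below
are purely illustrative, not anything in libvirt:

    /* Illustrative sketch only: approximate the number of usable
     * default-sized pages by subtracting memory reserved for huge
     * pages from the node's total memory, so that summing
     * page_size * page_count over all page sizes stays close to
     * physical RAM.  Hypothetical helper, not libvirt API. */
    unsigned long long
    exampleApproxDefaultPages(unsigned long long memtotal_kib,  /* node MemTotal in KiB */
                              unsigned long long hugepage_kib,  /* KiB reserved as huge pages */
                              unsigned int sys_page_kib)        /* default page size in KiB */
    {
        if (hugepage_kib >= memtotal_kib)
            return 0;
        return (memtotal_kib - hugepage_kib) / sys_page_kib;
    }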
I still like the idea of including the default page size in this info,
but perhaps we should disable reporting of the default system page size
for now and revisit later if we can figure out a way to report it
accurately, rather than reporting misleading info.
I should have said: ACK to either #if 0'ing the system page size
reporting, or ACK to your previous version of this patch, unless someone
has better ideas for accurately reporting total + free info for the
default page size.
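
In case it helps, the disable-for-now option I have in mind would look
roughly like this against the quoted hunk (untested sketch, just to show
the shape):

    /* Sketch only: keep the default page size out of the report until
     * the counts can account for memory reserved as huge pages. */
    #if 0
        if (page_size == system_page_size / 1024) {
            unsigned long long memsize, memfree;
            /* ... existing default-page-size reporting ... */
        }
    #endif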
Regards,
Daniel
--
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|