On Tue, Aug 08, 2017 at 05:04:15PM +0200, Michal Privoznik wrote:
https://bugzilla.redhat.com/show_bug.cgi?id=1458638
This code is so complicated because we allow enabling the same
bits at many places. Just like in this case: huge pages can be
enabled by global <hugepages/> element under <memoryBacking> or
on per <memory/> basis. To complicate things a bit more, users
are allowed to not specify any specific page size in which case
s/to not specify/not to specify/
reads a bit better, or even
s/to not specify any/to specify no/ or "allowed to omit the page size"
The choice is yours.
Reviewed-by: Martin Kletzander <mkletzan@redhat.com>
One more note that I mentioned to you in person, but I can't resist the urge to
share it more widely. The code is gross because it can't be made much
better as long as we keep the parsed data in the same shape it has in the
XML. I think this could be made nicer if we filled in all the
information during parsing.
When parsing /memory/hugepages we would fill in the missing information
right away. Then, when parsing /cpu/numa/cell we could copy that
information to the specific cells (if it is missing), and because the whole
definition is already available from parsing /memory/hugepages there are
fewer lookups. Finally, when parsing /devices/memory/source we can get the
missing info directly from the NUMA cell if a nodemask is present, or from
the global hugepages if it is not, and we don't have to look up anything,
because the previous parts are already guaranteed to be filled in.
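
Something along these lines, roughly (a sketch only; HugepageDef,
NumaCellDef, MemoryDeviceDef and the *Parse helpers below are made-up
stand-ins, not the real libvirt structures or parsers, just to illustrate
the propagation order):

/* Rough sketch of the parse-time propagation idea; every name here is
 * hypothetical and only mirrors the three XML places discussed above. */

#include <stdbool.h>
#include <stddef.h>

typedef struct {
    unsigned long long page_size;   /* 0 == not specified in the XML */
} HugepageDef;

typedef struct {
    unsigned long long page_size;   /* filled in while parsing the cell */
} NumaCellDef;

typedef struct {
    bool has_nodemask;
    size_t cell;                    /* target cell when a nodemask is present */
    unsigned long long page_size;
} MemoryDeviceDef;

/* 1) /memory/hugepages: fill in the missing bits right away. */
static void
hugepagesParse(HugepageDef *hp, unsigned long long default_page_size)
{
    if (hp->page_size == 0)
        hp->page_size = default_page_size;
}

/* 2) /cpu/numa/cell: copy the already complete global info if missing. */
static void
numaCellParse(NumaCellDef *cell, const HugepageDef *global)
{
    if (cell->page_size == 0)
        cell->page_size = global->page_size;
}

/* 3) /devices/memory/source: take the info from the NUMA cell when a
 *    nodemask is present, otherwise from the global hugepages; nothing
 *    needs to be looked up later because it is already filled in. */
static void
memoryDeviceParse(MemoryDeviceDef *mem,
                  const NumaCellDef *cells,
                  const HugepageDef *global)
{
    if (mem->page_size != 0)
        return;

    if (mem->has_nodemask)
        mem->page_size = cells[mem->cell].page_size;
    else
        mem->page_size = global->page_size;
}
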
I hope it's understandable. What I hope even more is that someone
refactors the code this way =)