
On Thu, 2020-12-03 at 17:04 -0300, Daniel Henrique Barboza wrote:
On 12/3/20 11:37 AM, Andrea Bolognani wrote:
This is where I'm a bit confused. IIUC the new value for <memory>, 1572992 KiB, is exactly 1 GiB (initial NUMA memory) + 512 MiB (NVDIMM guest area size) + 128 KiB (NVDIMM label size). Is that the value we expect users to see in the XML? If the label size were not there I would certainly say yes, but those extra 128 KiB make me pause. Then again, the <target><size> of the <memory model='nvdimm'> element also includes the label size, so perhaps it's all good? I just want to make sure it is intentional :)
This is intentional. The target_size of the NVDIMM must contain the size of the guest-visible area (256 MiB aligned) plus the label_size.
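For reference, here is a minimal sketch of the kind of NVDIMM definition being discussed, placed under <devices> (the backing path and NUMA node are hypothetical; the sizes follow the 512 MiB guest area + 128 KiB label example above):

  <memory model='nvdimm'>
    <source>
      <path>/path/to/nvdimm/backing/file</path>  <!-- hypothetical backing file -->
    </source>
    <target>
      <!-- guest-visible area (512 MiB = 524288 KiB) + label (128 KiB) -->
      <size unit='KiB'>524416</size>
      <node>0</node>
      <label>
        <size unit='KiB'>128</size>
      </label>
    </target>
  </memory>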
The last bit of confusion comes from the fact that the <currentMemory> element is not updated along with the <memory> element. How will that work? Do I understand correctly that the guest will actually get the full <memory> size, but if a memory balloon is also present then the difference between <memory> and <currentMemory> will be (temporarily) returned to the host using that mechanism?
Yes. <memory> is the maximum amount of memory the guest can have at boot time. In our case (pSeries) it consists of the base RAM plus space for the DMA window for VFIO devices, PHBs, and hotplug. This is what is directly impacted by patch 06 and by this series as a whole.
<currentMemory> is represented by our internal value def->mem.cur_balloon. If there is a balloon device, <currentMemory> follows the lead of that device. If there is no RAM ballooning, def->mem.cur_balloon = <memory> = <currentMemory>.
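As an illustration only (a hypothetical sketch based on the numbers quoted in this thread, not output of the patched code), a guest with a balloon device could end up with:

  <memory unit='KiB'>1572992</memory>                 <!-- 1 GiB base + 512 MiB NVDIMM area + 128 KiB label -->
  <currentMemory unit='KiB'>1048576</currentMemory>   <!-- assuming it stays at the initial 1 GiB -->
  <devices>
    <memballoon model='virtio'/>
    <!-- other devices omitted -->
  </devices>

Here the difference between <memory> and <currentMemory> would be returned to the host through the balloon, as described above.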
Thank you for your explanation! It all sounds good :)

--
Andrea Bolognani / Red Hat / Virtualization