On 12/3/20 11:37 AM, Andrea Bolognani wrote:
> On Wed, 2020-12-02 at 15:08 +0100, Andrea Bolognani wrote:
>> On Wed, 2020-12-02 at 09:14 -0300, Daniel Henrique Barboza wrote:
>>> Andrea/Peter, do you want to take a look again to see if there's
>>> something that I missed?
>>
>> Yeah, sorry for not being very responsive, and thanks a lot Michal
>> for picking up the slack! I'll try to give it at least a quick look
>> by the end of the day.
>
> Sorry I didn't manage to get back to you yesterday but, given that
> we managed to get this at least partially wrong in every previous
> iteration, you'll hopefully forgive me for being perhaps a bit
> overcautious O:-)

Always good to try to minimize error :)

> As mentioned elsewhere, in the process of trying to convince myself
> that these changes are in fact correct I found it useful to be able
> to make a direct comparison between the ABI_UPDATE case and the
> vanilla one, and to facilitate that I've produced a couple of simple
> patches[1] that I'm hoping you'll agree to rebase your work on top
> of. I have actually already done that locally, so feel free to
> simply pick up the corresponding branch[2].

That's what I'll end up doing. I'll probably push patches 1, 2 and 4
first, since they got the R-bs and aren't affected by this rebase,
then I'll make more adjustments based on your review and post a v3.

> Anyway, assuming you're working from that branch, here are the
> differences that are introduced by using ABI_UPDATE:
>
> $ diff -Naru tests/qemuxml2xmloutdata/memory-hotplug-ppc64-nonuma.xml tests/qemuxml2xmloutdata/memory-hotplug-ppc64-nonuma-abi-update.xml
> --- tests/qemuxml2xmloutdata/memory-hotplug-ppc64-nonuma.xml	2020-12-03 14:19:21.486145577 +0100
> +++ tests/qemuxml2xmloutdata/memory-hotplug-ppc64-nonuma-abi-update.xml	2020-12-03 14:57:09.857621727 +0100
> @@ -24,13 +24,13 @@
>      <panic model='pseries'/>
>      <memory model='dimm'>
>        <target>
> -        <size unit='KiB'>523264</size>
> +        <size unit='KiB'>524288</size>
>        </target>
>        <address type='dimm' slot='0'/>
>      </memory>
>      <memory model='dimm'>
>        <target>
> -        <size unit='KiB'>524287</size>
> +        <size unit='KiB'>524288</size>
>        </target>
>        <address type='dimm' slot='1'/>
>      </memory>
> $ diff -Naru tests/qemuxml2argvdata/memory-hotplug-ppc64-nonuma.args tests/qemuxml2argvdata/memory-hotplug-ppc64-nonuma-abi-update.args
> --- tests/qemuxml2argvdata/memory-hotplug-ppc64-nonuma.args	2020-12-03 14:19:21.486145577 +0100
> +++ tests/qemuxml2argvdata/memory-hotplug-ppc64-nonuma-abi-update.args	2020-12-03 14:57:09.857621727 +0100
> @@ -11,7 +11,7 @@
>  -name QEMUGuest1 \
>  -S \
>  -machine pseries,accel=kvm,usb=off,dump-guest-core=off \
> --m size=1310720k,slots=16,maxmem=4194304k \
> +-m size=1048576k,slots=16,maxmem=4194304k \
>  -realtime mlock=off \
>  -smp 1,sockets=1,cores=1,threads=1 \
>  -object memory-backend-ram,id=memdimm0,size=536870912 \
> $
>
> You explain very well the command line change in the commit message
> for patch 6/6, and the output XML now reflects the aligned size for
> DIMMs that was used on the command line even before your changes, so
> this all looks good.
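
Right. To make the rounding concrete, here's a minimal standalone
sketch of it (made-up names, not the actual libvirt helper), working
in KiB just like the XML sizes above:

#include <stdio.h>

#define PPC64_ALIGN_KIB (256ULL * 1024)   /* 256 MiB in KiB */

/* Round a size up to the next 256 MiB boundary. */
static unsigned long long
align_up_kib(unsigned long long size_kib)
{
    return (size_kib + PPC64_ALIGN_KIB - 1) / PPC64_ALIGN_KIB
           * PPC64_ALIGN_KIB;
}

int main(void)
{
    /* The two DIMMs from the diff above: both land on 512 MiB. */
    printf("%llu\n", align_up_kib(523264));   /* 524288 */
    printf("%llu\n", align_up_kib(524287));   /* 524288 */
    return 0;
}
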
> $ diff -Naru tests/qemuxml2xmloutdata/memory-hotplug-nvdimm-ppc64.xml tests/qemuxml2xmloutdata/memory-hotplug-nvdimm-ppc64-abi-update.xml
> --- tests/qemuxml2xmloutdata/memory-hotplug-nvdimm-ppc64.xml	2020-12-03 14:57:09.857621727 +0100
> +++ tests/qemuxml2xmloutdata/memory-hotplug-nvdimm-ppc64-abi-update.xml	2020-12-03 14:57:09.857621727 +0100
> @@ -2,7 +2,7 @@
>    <name>QEMUGuest1</name>
>    <uuid>c7a5fdbd-edaf-9455-926a-d65c16db1809</uuid>
>    <maxMemory slots='16' unit='KiB'>1099511627776</maxMemory>
> -  <memory unit='KiB'>1267710</memory>
> +  <memory unit='KiB'>1572992</memory>
>    <currentMemory unit='KiB'>1267710</currentMemory>
>    <vcpu placement='static' cpuset='0-1'>2</vcpu>
>    <os>
> @@ -34,7 +34,7 @@
>          <path>/tmp/nvdimm</path>
>        </source>
>        <target>
> -        <size unit='KiB'>550000</size>
> +        <size unit='KiB'>524416</size>
>          <node>0</node>
>          <label>
>            <size unit='KiB'>128</size>
> $ diff -Naru tests/qemuxml2argvdata/memory-hotplug-nvdimm-ppc64.args tests/qemuxml2argvdata/memory-hotplug-nvdimm-ppc64-abi-update.args
> $
>
> This is where I'm a bit confused. IIUC the new value for <memory>,
> 1572992 KiB, is exactly 1 GiB (initial NUMA memory) + 512 MiB (NVDIMM
> guest area size) + 128 KiB (NVDIMM label size). Is that the value we
> expect users to see in the XML? If the label size were not there I
> would certainly say yes, but those extra 128 KiB make me pause. Then
> again, the <target><size> for the <memory type='nvdimm'> element also
> includes the label size, so perhaps it's all good? I just want to
> make sure it is intentional :)

This is intentional. The target_size of the NVDIMM must contain the
size of the guest-visible area (256 MiB aligned) plus the label_size.
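
Here's the same arithmetic as a standalone sketch (again with made-up
names, not the actual libvirt code; note that, going by the diff, the
guest-visible area is aligned *down*):

#include <stdio.h>

#define PPC64_ALIGN_KIB (256ULL * 1024)   /* 256 MiB in KiB */

int main(void)
{
    unsigned long long label = 128;         /* <label><size> in KiB */
    unsigned long long requested = 550000;  /* original <target><size> */

    /* Guest-visible area = requested size minus label, aligned down
     * to 256 MiB: 549872 -> 524288 KiB. */
    unsigned long long area = (requested - label)
                              / PPC64_ALIGN_KIB * PPC64_ALIGN_KIB;

    /* target_size = area + label = 524416 KiB; <memory> adds the
     * 1 GiB of initial NUMA memory: 1048576 + 524416 = 1572992 KiB. */
    printf("target=%llu memory=%llu\n",
           area + label, 1048576 + area + label);
    return 0;
}
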
> The last bit of confusion is given by the fact that the
> <currentMemory> element is not updated along with the <memory>
> element. How will that work? Do I understand correctly that the guest
> will actually get the full <memory> size, but if a memory balloon is
> also present then the difference between <memory> and <currentMemory>
> will be (temporarily) returned to the host using that mechanism?

Yes. <memory> is the maximum amount of memory the guest can have at
boot time. In our case (pSeries) it consists of the base RAM plus
space for the DMA window for VFIO devices, PHBs and hotplug. This is
what is being directly impacted by patch 06 and this series as a
whole.

<currentMemory> is represented by our internal value of
def->mem.cur_balloon. If there is a balloon device then
<currentMemory> follows the lead of the device. If there is no RAM
ballooning, def->mem.cur_balloon = <memory> = <currentMemory>.
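
In code terms, roughly (a toy illustration only; the field names echo
def->mem but this is not the actual libvirt logic):

#include <stdbool.h>
#include <stdio.h>

struct mem_conf {
    unsigned long long total_kib;    /* <memory> */
    unsigned long long cur_balloon;  /* <currentMemory> */
};

int main(void)
{
    bool has_balloon = true;                   /* assume a balloon device */
    struct mem_conf m = { 1572992, 1267710 };  /* KiB, from the diff */

    if (!has_balloon) {
        /* No ballooning: cur_balloon = <memory> = <currentMemory>. */
        m.cur_balloon = m.total_kib;
    }

    /* With a balloon device, the difference is temporarily returned
     * to the host: here 1572992 - 1267710 = 305282 KiB. */
    printf("ballooned out: %llu KiB\n", m.total_kib - m.cur_balloon);
    return 0;
}
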
Thanks,
DHB

> Sorry again for all the questions and for delaying your work. I'm
> just trying to make sure we don't have to come back to it again in a
> couple of months O:-)
>
> [1] https://www.redhat.com/archives/libvir-list/2020-December/msg00244.html
> [2] https://gitlab.com/abologna/libvirt/-/tree/ppc64-memalign