On 11/13/20 7:30 AM, Andrea Bolognani wrote:
On Wed, 2020-11-11 at 19:07 -0300, Daniel Henrique Barboza wrote:
> > +++ b/tests/qemuxml2argvdata/memory-hotplug-ppc64-nonuma.args
> > @@ -11,7 +11,7 @@ QEMU_AUDIO_DRV=none \
> > -name QEMUGuest1 \
> > -S \
> > -machine pseries,accel=kvm,usb=off,dump-guest-core=off \
> > --m size=1310720k,slots=16,maxmem=4194304k \
> > +-m size=1048576k,slots=16,maxmem=4194304k \
> > -realtime mlock=off \
> > -smp 1,sockets=1,cores=1,threads=1 \
> > -object memory-backend-ram,id=memdimm0,size=536870912 \
>
> This doesn't look right: AFAIK the initial memory size is
> guest-visible, so by changing how the alignment is performed you
> might both change the guest ABI across guest boots (if libvirt is
> upgraded in between them) and break migration (if either the source
> or destination host is running the newer libvirt but the other side
> isn't).
Good point. I failed to consider ABI stability for ppc64 guest migration.
Yes, this will break older guests that happen to have extra memory. In
fact, this can be especially harmful for migration.
This means that I can't proceed with any other ppc64 changes made here.
Aligning ppc64 DIMMs in PostParse would achieve the same result even
without this patch: the DIMMs would already be aligned by the time
qemuDomainAlignMemorySizes() runs, and initialmem would be rounded to a
value closer to 'currentMemory'.
I can think of ways to align ppc64 DIMMs without touching initialmem,
but all of them would require extra state in the domain definition. The
benefit is there (the DIMMs would be aligned in the live XML), but I'm
not sure it's worth the extra code.
Thanks for pointing this out. I'll evaluate whether the x86 bits are
still valuable and re-send them, since they don't touch the initialmem
calculation of x86 guests.
DHB
> Did I miss something that makes this okay?