On 4/26/24 4:07 AM, Daniel P. Berrangé wrote:
> On Thu, Apr 25, 2024 at 04:41:02PM -0600, Jim Fehlig via Devel wrote:
>> On 4/17/24 5:12 PM, Jim Fehlig wrote:
>>> Hi All,
>>>
>>> While Fabiano has been working on improving save/restore performance in
>>> qemu, I've been tinkering with the same in libvirt. The end goal is to
>>> introduce a new VIR_DOMAIN_SAVE_PARALLEL flag for save/restore, along
>>> with a VIR_DOMAIN_SAVE_PARAM_PARALLEL_CONNECTIONS parameter to specify
>>> the number of concurrent channels used for the save/restore. Recall
>>> Claudio previously posted a patch series implementing parallel
>>> save/restore completely in libvirt, using qemu's multifd functionality
>>> [1].
>>>
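For reference, a rough sketch of how a management app might drive that
proposed interface. virDomainSaveParams(), VIR_DOMAIN_SAVE_PARAM_FILE and
the typed-parameter helpers already exist in libvirt; VIR_DOMAIN_SAVE_PARALLEL
and VIR_DOMAIN_SAVE_PARAM_PARALLEL_CONNECTIONS are only the names proposed
above, so treat this as a sketch rather than working code:

#include <libvirt/libvirt.h>

static int
save_parallel(virDomainPtr dom, const char *path, int nchannels)
{
    virTypedParameterPtr params = NULL;
    int nparams = 0;
    int maxparams = 0;
    int ret = -1;

    /* Destination file for the save image (existing parameter). */
    if (virTypedParamsAddString(&params, &nparams, &maxparams,
                                VIR_DOMAIN_SAVE_PARAM_FILE, path) < 0)
        goto cleanup;

    /* Number of concurrent channels -- the parameter proposed above. */
    if (virTypedParamsAddInt(&params, &nparams, &maxparams,
                             VIR_DOMAIN_SAVE_PARAM_PARALLEL_CONNECTIONS,
                             nchannels) < 0)
        goto cleanup;

    /* The flag proposed above, requesting the parallel save path. */
    ret = virDomainSaveParams(dom, params, nparams, VIR_DOMAIN_SAVE_PARALLEL);

 cleanup:
    virTypedParamsFree(params, nparams);
    return ret;
}
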
>>> A good starting point on this journey is supporting the new mapped-ram
>>> capability in qemu 9.0 [2]. Since mapped-ram is a new on-disk format, I
>>> assume we'll need a new QEMU_SAVE_VERSION 3 when using it? Otherwise I'm
>>> not sure how to detect if a saved image is in mapped-ram format vs the
>>> existing, sequential stream format.
>>
>> While hacking on a POC, I discovered the save data cookie and assume the use
>> of mapped-ram could be recorded there?
> The issue with that is the semantics around old libvirt loading
> the new image. Old libvirt won't know to look for 'mapped-ram'
> element/attribute in the XML cookie, so will think it is a
> traditional image with hilariously predictable results :-)
Haha :-). I need to keep reminding myself that we aim to support new-to-old
migration upstream. Downstream we limit the support scope, and this type of
migration scenario is one that falls in the unsupported bucket.
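
For completeness, a rough sketch of the version-gate idea discussed above.
The header layout below is simplified and illustrative, not libvirt's actual
virQEMUSaveHeader; the point is that bumping the on-disk version for
mapped-ram images lets an older libvirt, which only knows version 2, refuse
the image cleanly instead of misreading it as a sequential stream:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Simplified save-image header; field names here are illustrative only. */
typedef struct {
    char magic[16];        /* e.g. "LibvirtQemudSave" */
    uint32_t version;      /* 2 = sequential stream, 3 = mapped-ram (proposed) */
    uint32_t data_len;
    /* ... remaining header fields elided ... */
} save_header;

#define SAVE_VERSION_MAX 3 /* an old libvirt would have 2 here */

static int
check_header(const save_header *hdr)
{
    if (memcmp(hdr->magic, "LibvirtQemudSave", sizeof(hdr->magic)) != 0) {
        fprintf(stderr, "not a libvirt-qemu save image\n");
        return -1;
    }

    /* A libvirt that only knows version 2 fails here on a version-3
     * mapped-ram image, instead of misparsing it as a sequential stream. */
    if (hdr->version > SAVE_VERSION_MAX) {
        fprintf(stderr, "unsupported save image version %u\n",
                (unsigned int)hdr->version);
        return -1;
    }

    return 0;
}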
Regards,
Jim