domain: how long is new xml in saved file

Hi administrators,

I am a cloud compute developer and I need some help with libvirt. My task is to modify an image file saved by virDomainSave() or virDomainSaveFlags(), so virDomainSaveImageGetXMLDesc() and virDomainSaveImageDefineXML() are the APIs I chose to use, because I found this sentence in the documentation: "A save file can be inspected or modified slightly with virDomainSaveImageGetXMLDesc() and virDomainSaveImageDefineXML()." But when I do that, an error occurs:

    libvirt: QEMU Driver error: operation failed: new xml too large to fit in file

I found that the result depends on how much longer the new XML is than the old one:

    if (strlen(xml_new) - strlen(xml_old) <= 29) { /* this succeeds */ }
    if (strlen(xml_new) - strlen(xml_old) >= 50) { /* this fails */ }

But I don't want to just hunt for the exact number experimentally, because I think the threshold is affected by other factors, for example memory alignment, range safety or other rules. The layout seems to be:

    ++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    + strlen(xml_old)                  + free space        +
    ++++++++++++++++++++++++++++++++++++++++++++++++++++++++
    + strlen(xml_new)                                      +
    ++++++++++++++++++++++++++++++++++++++++++++++++++++++++

I really want to know how much free space I can use. Can you convert "slightly" into a precise number? Thank you for taking the time to read my email. Looking forward to your reply.

PS:
OS: CentOS 7.4
libvirt: 4.5.0
hypervisor: KVM

Sincerely,
Vincent Wu
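For reference, a minimal sketch of the inspect-and-redefine sequence Vincent describes; the URI and image path are placeholders and error handling is trimmed, but the libvirt calls are the real APIs:

    #include <stdio.h>
    #include <stdlib.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        /* Placeholder path and URI; adjust for your setup. */
        const char *image = "/var/lib/libvirt/save/guest.save";
        virConnectPtr conn = virConnectOpen("qemu:///system");
        if (!conn)
            return 1;

        /* Fetch the domain XML stored inside the save image. */
        char *xml_old = virDomainSaveImageGetXMLDesc(conn, image, 0);
        if (!xml_old) {
            virConnectClose(conn);
            return 1;
        }

        /* ... edit the XML here; xml_new stands in for the result ... */
        const char *xml_new = xml_old;

        /* Write the edited XML back into the image.  This is the call
         * that fails with "new xml too large to fit in file" once the
         * edited XML no longer fits in the space the old XML used. */
        if (virDomainSaveImageDefineXML(conn, image, xml_new, 0) < 0)
            fprintf(stderr, "redefine failed\n");

        free(xml_old);
        virConnectClose(conn);
        return 0;
    }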

On 4/24/20 6:38 AM, Vincent Wu wrote:
The save format is fragile. At the beginning there is a header which describes the file, then there is a libvirt section (which contains the domain XML and a cookie), and then there is the QEMU section (where QEMU saved the guest memory). Because of this, we have to have the check you are hitting in place, so that we don't accidentally overwrite the QEMU section.

But what you can do is provide the changed XML not to virDomainSaveImageDefineXML() but to virDomainRestoreFlags(), which doesn't check the XML length.

Michal
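A minimal sketch of this workaround, supplying the edited XML at restore time instead of rewriting the save file (the helper name is made up; virDomainRestoreFlags() is the real API):

    #include <libvirt/libvirt.h>

    /* Restore the saved image while substituting the edited XML.
     * Unlike virDomainSaveImageDefineXML(), the new XML is not
     * written back into the file, so its length is not constrained
     * by the space the original XML occupied. */
    int restore_with_xml(virConnectPtr conn, const char *image,
                         const char *xml_new)
    {
        return virDomainRestoreFlags(conn, image, xml_new, 0);
    }

virsh exposes the same path as "virsh restore IMAGE --xml FILE".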

On Fri, Apr 24, 2020 at 02:33:13PM +0200, Michal Privoznik wrote:
> On 4/24/20 6:38 AM, Vincent Wu wrote:
> The save format is fragile. At the beginning there is a header which describes the file, then there is a libvirt section (which contains the domain XML and a cookie), and then there is the QEMU section (where QEMU saved the guest memory). Because of this, we have to have the check you are hitting in place, so that we don't accidentally overwrite the QEMU section.
BTW, does anyone recall why we were so restrictive on the XML length in the first place? I looked at the history and didn't see why we did it this way. It occurs to me that, given typical guest RAM sizes measuring many hundreds of MB, we could easily make the header section have 1 MB of padding, and thus allow essentially arbitrary XML updates without worrying about hitting a size limit.
> But what you can do is provide the changed XML not to virDomainSaveImageDefineXML() but to virDomainRestoreFlags(), which doesn't check the XML length.
>
> Michal
Regards,
Daniel

--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
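For illustration only, a sketch of the padding Daniel suggests when the image is first written; the constant and helper names are made up, and nothing like this exists in the save code as of this thread:

    #include <stddef.h>

    /* 1 MiB boundary, following Daniel's suggested figure. */
    #define SAVE_XML_PAD ((size_t)1024 * 1024)

    /* Space to reserve on disk for the XML so that later, longer
     * versions still fit without moving the QEMU section. */
    static size_t padded_xml_len(size_t xml_len)
    {
        return ((xml_len + SAVE_XML_PAD - 1) / SAVE_XML_PAD) * SAVE_XML_PAD;
    }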

On 4/24/20 7:37 AM, Daniel P. Berrangé wrote:
> On Fri, Apr 24, 2020 at 02:33:13PM +0200, Michal Privoznik wrote:
>> On 4/24/20 6:38 AM, Vincent Wu wrote:
>> The save format is fragile. At the beginning there is a header which describes the file, then there is a libvirt section (which contains the domain XML and a cookie), and then there is the QEMU section (where QEMU saved the guest memory). Because of this, we have to have the check you are hitting in place, so that we don't accidentally overwrite the QEMU section.
>
> BTW, does anyone recall why we were so restrictive on the XML length in the first place? I looked at the history and didn't see why we did it this way.
>
> It occurs to me that, given typical guest RAM sizes measuring many hundreds of MB, we could easily make the header section have 1 MB of padding, and thus allow essentially arbitrary XML updates without worrying about hitting a size limit.
We've had guest XML reaching 1M before, but I agree that the initial saved image creation should include padding to a nice boundary to make future edits less likely to overflow the reserved header.

On new enough Linux, some file systems support fallocate(FALLOC_FL_INSERT_RANGE), which can splice in a hole (all later file contents are shifted to higher offsets); maybe our save code could take advantage of that to repair existing saved images with an insufficient header size more efficiently than by manually shifting the rest of the file contents ourselves.

--
Eric Blake, Principal Software Engineer
Red Hat, Inc. +1-919-301-3226
Virtualization: qemu.org | libvirt.org
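A sketch of the repair Eric describes, assuming a filesystem that supports insert-range (e.g. ext4 or XFS); the helper is hypothetical, and the kernel requires both offset and length to be multiples of the filesystem block size:

    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <linux/falloc.h>

    /* Splice 'grow' bytes of zeroes in at 'offset', shifting
     * everything after it (i.e. the QEMU section) towards the end
     * of the file.  Both values must be multiples of the filesystem
     * block size, and the filesystem must support
     * FALLOC_FL_INSERT_RANGE. */
    static int grow_header(int fd, off_t offset, off_t grow)
    {
        return fallocate(fd, FALLOC_FL_INSERT_RANGE, offset, grow);
    }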

On Fri, 2020-04-24 at 10:20 -0500, Eric Blake wrote:
> On 4/24/20 7:37 AM, Daniel P. Berrangé wrote:
>> On Fri, Apr 24, 2020 at 02:33:13PM +0200, Michal Privoznik wrote:
>>> On 4/24/20 6:38 AM, Vincent Wu wrote:
>>> The save format is fragile. At the beginning there is a header which describes the file, then there is a libvirt section (which contains the domain XML and a cookie), and then there is the QEMU section (where QEMU saved the guest memory). Because of this, we have to have the check you are hitting in place, so that we don't accidentally overwrite the QEMU section.
>>
>> BTW, does anyone recall why we were so restrictive on the XML length in the first place? I looked at the history and didn't see why we did it this way.
>>
>> It occurs to me that, given typical guest RAM sizes measuring many hundreds of MB, we could easily make the header section have 1 MB of padding, and thus allow essentially arbitrary XML updates without worrying about hitting a size limit.
>
> We've had guest XML reaching 1M before, but I agree that the initial saved image creation should include padding to a nice boundary to make future edits less likely to overflow the reserved header.
>
> On new enough Linux, some file systems support fallocate(FALLOC_FL_INSERT_RANGE), which can splice in a hole (all later file contents are shifted to higher offsets); maybe our save code could take advantage of that to repair existing saved images with an insufficient header size more efficiently than by manually shifting the rest of the file contents ourselves.
There's a bug filed for this:

https://bugzilla.redhat.com/show_bug.cgi?id=1229255

Both you and Dan commented on it at some point, but I thought I'd bring it up in case you forgot - it was a while ago :)

--
Andrea Bolognani / Red Hat / Virtualization

On Mon, Apr 27, 2020 at 10:17:52AM +0200, Andrea Bolognani wrote:
> On Fri, 2020-04-24 at 10:20 -0500, Eric Blake wrote:
>> On 4/24/20 7:37 AM, Daniel P. Berrangé wrote:
>>> On Fri, Apr 24, 2020 at 02:33:13PM +0200, Michal Privoznik wrote:
>>>> On 4/24/20 6:38 AM, Vincent Wu wrote:
>>>> The save format is fragile. At the beginning there is a header which describes the file, then there is a libvirt section (which contains the domain XML and a cookie), and then there is the QEMU section (where QEMU saved the guest memory). Because of this, we have to have the check you are hitting in place, so that we don't accidentally overwrite the QEMU section.
>>>
>>> BTW, does anyone recall why we were so restrictive on the XML length in the first place? I looked at the history and didn't see why we did it this way.
>>>
>>> It occurs to me that, given typical guest RAM sizes measuring many hundreds of MB, we could easily make the header section have 1 MB of padding, and thus allow essentially arbitrary XML updates without worrying about hitting a size limit.
>>
>> We've had guest XML reaching 1M before, but I agree that the initial saved image creation should include padding to a nice boundary to make future edits less likely to overflow the reserved header.
>>
>> On new enough Linux, some file systems support fallocate(FALLOC_FL_INSERT_RANGE), which can splice in a hole (all later file contents are shifted to higher offsets); maybe our save code could take advantage of that to repair existing saved images with an insufficient header size more efficiently than by manually shifting the rest of the file contents ourselves.
>
> There's a bug filed for this:
>
> https://bugzilla.redhat.com/show_bug.cgi?id=1229255
>
> Both you and Dan commented on it at some point, but I thought I'd bring it up in case you forgot - it was a while ago :)
Hmm, I was wondering where the "pad = 1024" line referenced in that bz comment #3 went to, and I found it was removed in:

    commit 6b9b21db7079888a05d192b079e68290bdf14a76
    Author: Peter Krempa <pkrempa@redhat.com>
    Date:   Wed Feb 17 13:10:11 2016 +0100

        qemu: Remove unnecessary calculations in qemuDomainSaveMemory

        Now that the file migration doesn't require us to use 'dd' and
        other legacy stuff for too old qemus we don't even have to
        calculate the offsets and other stuff.

So ever since then, we appear to have had ZERO padding present at all. Looking at the saved state image for a VM appears to confirm this. At the end of the guest XML, we have a single NUL, then the cookie XML, a single NUL, and then the QEMU stream marker:

        <seclabel type='dynamic' model='selinux' relabel='yes'/>
    </domain>
    ^@<cookie>
      <slirpHelper/>
    </cookie>
    ^@QEVM^@

So it is no wonder it isn't possible to edit the image to make it longer.

Regards,
Daniel

--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
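To see that layout for yourself, a small standalone sketch that scans the first 1 MiB of a save image for the NUL-prefixed QEMU stream marker shown above; it is naive, assuming the byte sequence doesn't occur earlier by coincidence:

    #include <stdio.h>
    #include <string.h>

    /* Scan the start of a save image for the QEMU stream marker;
     * everything before it is the libvirt header, the domain XML
     * and the cookie, each terminated by a single NUL byte. */
    int main(int argc, char **argv)
    {
        static unsigned char buf[1 << 20];
        FILE *f;
        size_t n, i;

        if (argc < 2 || !(f = fopen(argv[1], "rb")))
            return 1;
        n = fread(buf, 1, sizeof(buf), f);
        fclose(f);

        for (i = 0; i + 5 <= n; i++) {
            if (buf[i] == '\0' && memcmp(buf + i + 1, "QEVM", 4) == 0) {
                printf("QEMU section starts at offset %zu\n", i + 1);
                return 0;
            }
        }
        fprintf(stderr, "marker not found in first %zu bytes\n", n);
        return 1;
    }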
participants (5)
- Andrea Bolognani
- Daniel P. Berrangé
- Eric Blake
- Michal Privoznik
- Vincent Wu