On Mon, May 13, 2019 at 03:25:43PM +0100, Daniel P. Berrangé wrote:
On Mon, May 13, 2019 at 03:28:24PM +0200, Michal Privoznik wrote:
> On 5/13/19 12:17 PM, Daniel P. Berrangé wrote:
> > The background
> > ==============
> >
> > Since the first commit libvirt has attempted to handle out of memory (OOM)
> > errors in the same way it does for any other problem. Namely, a virError will
> > be raised, and the method will jump to its cleanup/error label. The error
> > bubbles up the stack and in theory someone or something will catch this and do
> > something sensible/useful. This has long been considered "best practice" for
> > most C libraries, especially those used for system services. This mail makes
> > the argument that it is in fact /not/ "best practice" to try to handle OOM.
> >
> > OOM handling is very difficult to get correct because it is something that
> > developers almost never encounter and thus the code paths are rarely run. We
> > designed our memory allocation APIs such that we get compile time errors if
> > code forgets to check the return value for failure. This is good as it
> > eliminates an entire class of bugs. Our error handling goto label pattern
> > tries to align OOM handling with other general error handling which is more
> > commonly tested.
> >
> > We have code in the allocators which lets us run unit tests simulating the
> > failure of any allocation that is made during the test. Executing this is
> > extraordinarily time consuming as some of our unit tests have many 1000's or
> > even 10's of 1000's of allocations. The running time to simulate OOM for each
> > allocation is O(N^2) which does not scale well. As a result we've probably
> > only run these OOM tests a handful of times over the years.
>
> Well, I've tried this and so far, the only problems I've found were in tests
> where we're ignoring retvals of VIR_ALLOC() or its friends (e.g. vir*New())
> or we're checking them only after the result has already been used.
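For anyone following along, the pattern in question can be sketched in plain C. The names below are illustrative stand-ins, not libvirt's real macros (those live in viralloc.h); the point is the compile-time check on the return value plus the single cleanup label:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical stand-in for libvirt's VIR_ALLOC_N: the warn_unused_result
 * attribute makes the compiler complain if a caller ignores the return
 * value, which is the compile-time check described above. */
__attribute__((warn_unused_result))
static int example_alloc_n(void **ptr, size_t count, size_t size)
{
    *ptr = calloc(count, size);
    return (*ptr || count == 0 || size == 0) ? 0 : -1;
}

/* The goto-cleanup pattern: every allocation is checked, and failure
 * jumps to a single label so OOM shares the normal error path. */
static char *example_strdup(const char *src)
{
    char *copy = NULL;

    if (example_alloc_n((void **)&copy, strlen(src) + 1, 1) < 0)
        goto cleanup;
    memcpy(copy, src, strlen(src) + 1);

 cleanup:
    return copy; /* NULL on OOM; the caller must check */
}
```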
Hmm, I guess you didn't check the same thing as me, as I've found crashes:
$ VIR_TEST_OOM=1 ./qemuxml2argvtest
....snip lots of test output...
631) QEMU XML-2-ARGV tpm-passthrough ... OK
Test OOM for nalloc=162
...................................................................................................................................Segmentation fault (core dumped)
There are a bunch of earlier messages that aren't causing crashes, but
are showing that we're silently generating unexpected output for cli
args, or are raising bizarre error messages instead of "out of memory".
> >
> > The tests show we generally do remarkably well at OOM handling, but none the
> > less we have *always* found bugs in our handling where we fail to report the
> > OOM (as in, continue to run but with missing data), or we crash when cleaning
> > up from OOM. Our unit tests don't cover anywhere near all of our codebase
> > though. We have great coverage of XML parsing/formatting, and sporadic
> > coverage of everything else.
> >
> > IOW, despite our clever API designs & coding patterns designed to improve
> > our OOM handling, we can not have confidence that we will actually operate
> > correctly in an OOM scenario.
>
> I beg to disagree. The fact that our cleanup paths are in 99% of cases the
> same as the error paths means that if there are no mem-leaks (and once the
> transformation to VIR_AUTOFREE is done the chances are low), then we
> probably do well on OOM too.
I don't think that holds true, as we can see from the weird error codes
reported in OOM scenarios, the crashes, and the incorrect ARGV being
generated silently.
We do remarkably well, but we still fail despite our best efforts. I can
not have confidence in this at runtime in a production world if we're
not even going to report correct error messages, or if we silently create
malformed data.
> > The implementation
> > ==================
> >
> > Assuming a decision to abort on OOM, libvirt can now follow QEMU's lead
> > and make use of the GLib library.
>
> No, please no. Firstly, glib is a new C dialect that only a few of us speak.
Of course not everyone will know the APIs right now, but I'm confident any
of us are easily capable of learning what it provides as we need to. We're
already constantly learning new APIs as we introduce more frameworks in
libvirt. QEMU's adoption of GLib shows it can be done without any real
difficulty & by using it in libvirt too, we'll make it easier for QEMU
contributors to understand libvirt as we won't have reinvented the wheel
so much.
> Secondly, glib adds some padding around its objects => libvirt's memory
> footprint will grow. Thirdly, it's yet another library to link with (on my
> system libvirt links with 53 libraries already).
I don't think the padding on objects is significant enough to be a concern
when placed in context of what libvirt already does. Most of the per-VM
memory usage is around the data we store for that VM, especially the huge
parsed XML configs spanning 100's of structs - that won't change and will
dominate any extra padding on objects. Libvirt memory usage as a whole is
tiny compared to the memory allocated to the actual QEMU VM RAM areas.
QEMU is already linking to it so it will be resident both in the installed
footprint and in memory. The only overhead is thus from runtime allocations.
So I don't think there's any reason to avoid it in terms of memory overhead.
Actually let me correct that a little, as glib is actually split into
multiple ELF libraries libglib, libgobject, libgio, libgmodule, libgthread.
QEMU uses libglib only, not libgobject / libgio. QEMU re-invented
GObject itself, and libgio didn't exist when QEMU first used glib, so it
has reinvented large parts of libgio too. QEMU's min glib version is now
new enough that it should really start using libgio.
So currently the disk install pulls in all of those libraries, but only
libglib is resident in RAM for QEMU.
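To make the proposal concrete: the abort-on-OOM behaviour can be sketched in plain C as a thin wrapper. This is illustrative only, similar in spirit to what GLib's g_malloc does rather than its actual code:

```c
#include <stdio.h>
#include <stdlib.h>

/* Illustrative abort-on-OOM wrapper: allocation failure is treated as
 * fatal, so callers never need an OOM error path, a cleanup label for
 * OOM, or an "out of memory" virError report. */
static void *example_malloc(size_t size)
{
    void *ptr = malloc(size);

    if (!ptr && size != 0) {
        fprintf(stderr, "failed to allocate %zu bytes\n", size);
        abort();
    }
    return ptr;
}
```

The whole argument above reduces to this: once allocation can never return NULL, all the rarely-exercised OOM branches simply disappear from the codebase.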
Regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|