On 09.08.2013 17:58, Anthony Liguori wrote:
> Even if we had an algorithm for calculating memory overhead (we
> don't), glibc will still introduce uncertainty, since malloc(size)
> doesn't translate to allocating size bytes from the kernel. When you
> throw in fragmentation too, it becomes extremely hard to predict.
>
> The only practical way of doing this would be to have QEMU gracefully
> handle malloc() == NULL so that you could set a limit and degrade
> gracefully. We don't, though, so setting a limit is likely to get you
> into trouble.
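The point about malloc(size) not mapping to size bytes from the kernel is easy to demonstrate with glibc's malloc_usable_size(): the allocator routinely grants more usable space than requested, and small requests are carved out of an already-mapped heap arena rather than triggering a kernel allocation at all. A minimal, glibc-specific sketch (illustrative only, not QEMU code):

```c
#include <malloc.h>   /* malloc_usable_size() -- glibc-specific */
#include <stdlib.h>

/* Returns how many usable bytes glibc actually granted for a request.
 * The result is >= request; the difference is per-chunk overhead that
 * any "memory limit" calculation would have to guess at. */
size_t usable_size(size_t request)
{
    void *p = malloc(request);
    size_t usable = p ? malloc_usable_size(p) : 0;
    free(p);
    return usable;
}
```

On 64-bit glibc even a 1-byte request typically comes back with a couple dozen usable bytes, and the exact figure varies with allocator version and build, which is exactly why a precise overhead formula is out of reach.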
FWIW my QOM realize work is targeted at reducing the likelihood that
device_add blows up QEMU due to OOM in object_new(). But before I can
change qdev-monitor.c I still need to tweak core QOM to either get at
TypeImpl::instance_size or to introduce an object_try_new() function
using g_try_malloc0() rather than g_malloc0(). That's where proper child
struct composition comes into play.
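An object_try_new() along those lines could look roughly like the sketch below. This is illustrative only, not the actual QOM code: TypeImpl is private to the QOM core, the struct layout here is invented, and calloc() stands in for g_try_malloc0(), which likewise returns NULL on allocation failure instead of aborting the way g_malloc0() does:

```c
#include <stdlib.h>

/* Hypothetical, simplified stand-ins for QOM internals. */
typedef struct TypeImpl {
    size_t instance_size;
} TypeImpl;

typedef struct Object {
    TypeImpl *type;
} Object;

/* Like object_new(), but fails gracefully on OOM so that callers such
 * as device_add can report an error instead of killing the VM.
 * calloc() models g_try_malloc0(): zeroed memory, NULL on failure. */
Object *object_try_new(TypeImpl *ti)
{
    Object *obj = calloc(1, ti->instance_size);
    if (!obj) {
        return NULL;    /* caller turns this into a monitor error */
    }
    obj->type = ti;
    return obj;
}
```

The key difference from object_new() is only the allocator: g_malloc0() aborts the process on failure, so the try-variant is what makes a graceful device_add error path possible at all.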
The major variance in runtime memory consumption has so far been
attributed to block and network I/O, though without ever obtaining
exact proof points...
Regards,
Andreas
--
SUSE LINUX Products GmbH, Maxfeldstr. 5, 90409 Nürnberg, Germany
GF: Jeff Hawn, Jennifer Guild, Felix Imendörffer; HRB 16746 AG Nürnberg