On Fri, Apr 04, 2014 at 01:56:09PM +0100, Ian Campbell wrote:
> On Fri, 2014-04-04 at 14:51 +0200, Daniel P. Berrange wrote:
> On Fri, Apr 04, 2014 at 11:34:17AM +0100, Ian Campbell wrote:
> > On Fri, 2014-04-04 at 12:31 +0200, Stefan Bader wrote:
> > > On 04.04.2014 11:48, Ian Campbell wrote:
> > > > On Fri, 2014-04-04 at 11:36 +0200, Stefan Bader wrote:
> > > > >> +    /*
> > > > >> +     * Take the first defined video device (graphics card) to display
> > > > >> +     * on the first graphics device (display).
> > > > >> +     * Right now only type and vram info is used and anything beside
> > > > >> +     * type xen and vga is mapped to cirrus.
> > > > >> +     */
> > > > >> +    if (def->nvideos) {
> > > > >> +        unsigned int min_vram = 8 * 1024;
> > > > >> +
> > > > >> +        switch (def->videos[0]->type) {
> > > > >> +            case VIR_DOMAIN_VIDEO_TYPE_VGA:
> > > > >> +            case VIR_DOMAIN_VIDEO_TYPE_XEN:
> > > > >> +                b_info->u.hvm.vga.kind = LIBXL_VGA_INTERFACE_TYPE_STD;
> > > > >> +                /*
> > > > >> +                 * Libxl enforces a minimal VRAM size of 8M when using
> > > > >> +                 * LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN_TRADITIONAL or
> > > > >> +                 * 16M for LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN.
> > > > >> +                 * Avoid build failures and go with the minimum if less
> > > > >> +                 * is specified.
> > > > >
> > > > > Build failures? Do you mean "domain build" rather than "libvirt build"?
> > > > >
> > > > > I'm not sure about this fixing up the GIGO from the user side, but
> > > > > that's a libvirt policy decision I suppose? If it were me I would just
> > > > > let libxl provide the error and expect people to fix their domain config
> > > > > rather than silently giving them something other than what they asked
> > > > > for. What if increasing the VRAM causes a cascading failure due e.g. to
> > > > > lack of memory? That's going to be tricky to debug I think!
> > > >
> > > > In the end it's starting a domain with such a config, which seems to be what
> > > > I would end up with in my testing with an admittedly older version of
> > > > virt-manager connecting to a libvirtd running an initial version of this
> > > > patch without that part.
> > > > The error seen on the front-end was something along the lines of "failed to
> > > > get enough memory to start the guest" (the libxl log on the other side had
> > > > the better error message). And the gui always reduces the memory below the
> > > > minimum for both the options (VGA and "Xen").
> > > > That is the reason I went for "meh, go for the minimum anyways".
> > >
> > > Does the libvirt protocol require the client to provide all the sizes
> > > etc with no provision for asking the server side to pick a sane default?
> >
> > The XML does not have to include the VGA ram size. If it is omitted then
> > we fill in a default value after initial parsing is done.
>
> I guess the issue is that whatever client Stefan is using is including
> the VGA ram size, with a value which it turns out is not allowed.

Yeah, that's probably a historical accident in virt-manager trying to guess
the default values itself rather than delegating.
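
To make the delegation point concrete, here is a rough sketch in domain XML (the
numbers and model type are made up for illustration, not taken from the patch or
from virt-manager); a client can either pin the video RAM itself or omit it and
let libvirtd pick the default:

    <!-- client supplies vram itself (value in KiB); 4 MiB here would be
         below the 8/16 MiB floor that libxl enforces -->
    <video>
      <model type='vga' vram='4096'/>
    </video>

    <!-- vram omitted; libvirtd fills in a default after initial parsing -->
    <video>
      <model type='vga'/>
    </video>

A client that only cares about the model type is better off with the second form,
leaving the sizing policy to the daemon.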
Regards,
Daniel
--
|: http://berrange.com      -o-    http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org              -o-             http://virt-manager.org :|
|: http://autobuild.org       -o-         http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org       -o-       http://live.gnome.org/gtk-vnc :|