Daniel P. Berrange wrote:
On Fri, Nov 21, 2008 at 11:13:04PM +0100, Guido Günther wrote:
> Hi,
> I just ran across these oddities when using a bit more libvirt+xen:
>
> 1.) virsh setmaxmem:
>
> On a running domain:
> # virsh setmaxmem domain 256000
> completes, but virsh dumpxml as well as the config.sxp still show the
> old amount of memory. Looks as if the set_maxmem hypercall simply gets
> ignored. xm mem-max works as expected. Smells like a bug in the ioctl?
>
The setmaxmem API is not performance critical, so it sounds like we
should first try setting it via XenD, and use the hypervisor as the
fallback instead.
I have a patch for 0.4.6 in the SUSE packages to do just this. Going
through xend, you also get the value changed in the domain config.
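
For reference, a minimal standalone check against the public API (not
the driver internals) looks roughly like this; the domain name "domain"
and the 256000 KiB figure are just the values from the original report,
and the xen:/// URI is assumed:

#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn;
    virDomainPtr dom;

    conn = virConnectOpen("xen:///");
    if (conn == NULL)
        return 1;

    /* "domain" and the 256000 KiB value come from the original report. */
    dom = virDomainLookupByName(conn, "domain");
    if (dom == NULL) {
        virConnectClose(conn);
        return 1;
    }

    if (virDomainSetMaxMemory(dom, 256000) < 0)
        fprintf(stderr, "virDomainSetMaxMemory failed\n");

    /* With the bug, this still reports the old maximum. */
    printf("max memory now: %lu KiB\n", virDomainGetMaxMemory(dom));

    virDomainFree(dom);
    virConnectClose(conn);
    return 0;
}

Build it with something like gcc -o setmaxmem-check setmaxmem-check.c
-lvirt; if the hypercall is being ignored, the value printed at the end
still shows the old maximum.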
> 2.) virsh list:
>
> Sometimes (I haven't found a pattern yet) when shutting down a running
> domain and restarting it I'm seeing:
>
> Id Name                 State
> ----------------------------------
>  0 Domain-0             running
>  2 foo                  idle
> libvir: Xen Daemon error : GET operation failed: xend_get: error from xen daemon:
> libvir: Xen Daemon error : GET operation failed: xend_get: error from xen daemon:
> libvir: Xen Daemon error : GET operation failed: xend_get: error from xen daemon:
> libvir: Xen Daemon error : GET operation failed: xend_get: error from xen daemon:
>  7 bar                  idle
>
> Note that the number of errors corresponds to the number of
> shutdowns. VirXen_getdomaininfolist returns 7 in the above case.
> virDomainLookupByID later on fails for these "additional" domains.
>
This is basically a XenD bug. What's happening is that the domain
has been shut down, and got most of the way through cleanup, as far
as the hypervisor is concerned. But something is still hanging around
keeping the domain from being completely terminated. In this case
XenD takes the dubious approach of just pretending the domain does
not exist. So libvirt sees it exists in the hypervisor, but when
asking XenD for more data, it gets that error. This really really
sucks.
I spent some time looking into this bug as well. I found that we ask the
HV for the number of domains and get back more than actually exist. We
subsequently query xend about those domains and get the error messages
noted above. It turned out to be a 'dead domain' memory leak in Xen
itself. Jan Beulich plugged the hole and sent a patch upstream, but I
can't seem to find the relevant c/s now :-(. Anyhow, with Jan's fix I no
longer see these error messages.
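
For anyone wanting to see the stale entries without virsh, a rough
read-only sketch against the public API (the xen:/// URI and the fixed
64-entry buffer are just assumptions for the example):

#include <stdio.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn;
    int ids[64];
    int i, n;

    conn = virConnectOpenReadOnly("xen:///");
    if (conn == NULL)
        return 1;

    /* The ID list is backed by the hypervisor and may still contain
       the dying domains. */
    n = virConnectListDomains(conn, ids, 64);

    for (i = 0; i < n; i++) {
        virDomainPtr dom = virDomainLookupByID(conn, ids[i]);
        if (dom == NULL) {
            /* XenD no longer knows about this ID even though the
               hypervisor still reports it; this is where the
               "GET operation failed" errors come from. */
            fprintf(stderr, "id %d known to the HV but not to XenD\n",
                    ids[i]);
            continue;
        }
        printf("%3d %s\n", ids[i], virDomainGetName(dom));
        virDomainFree(dom);
    }

    virConnectClose(conn);
    return 0;
}

The IDs that fail the virDomainLookupByID() call here are exactly the
'dead domain' entries discussed above.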
Cheers,
Jim