[libvirt] [RFC PATCH] lxc: don't return error on GetInfo when cgroups not yet set up

Nova (openstack) calls libvirt to create a container, then periodically checks using GetInfo to see whether the container is up. If it does this too quickly, then libvirt returns an error, which in libvirt.py causes an exception to be raised, the same type as if the container was bad. This may not be the best way to handle it, but with this patch, we assume that a -ENOENT return from virCgroupForDomain means the cgroups are not yet set up, and so we return the same values for cpu and memory usage as if the domain was not active.

Signed-off-by: Serge Hallyn <serge.hallyn@canonical.com>
---
 src/lxc/lxc_driver.c | 37 +++++++++++++++++++++----------------
 1 files changed, 21 insertions(+), 16 deletions(-)

diff --git a/src/lxc/lxc_driver.c b/src/lxc/lxc_driver.c
index 4b62600..a68b8e7 100644
--- a/src/lxc/lxc_driver.c
+++ b/src/lxc/lxc_driver.c
@@ -542,26 +542,31 @@ static int lxcDomainGetInfo(virDomainPtr dom,
         info->cpuTime = 0;
         info->memory = vm->def->mem.cur_balloon;
     } else {
-        if (virCgroupForDomain(driver->cgroup, vm->def->name, &cgroup, 0) != 0) {
+        int ret = virCgroupForDomain(driver->cgroup, vm->def->name, &cgroup, 0);
+        if (ret == -ENOENT) {
+            /* cgroups are not set up yet */
+            info->cpuTime = 0;
+            info->memory = vm->def->mem.cur_balloon;
+        } else if (ret != 0) {
             lxcError(VIR_ERR_INTERNAL_ERROR,
                      _("Unable to get cgroup for %s"), vm->def->name);
             goto cleanup;
-        }
-
-        if (virCgroupGetCpuacctUsage(cgroup, &(info->cpuTime)) < 0) {
-            lxcError(VIR_ERR_OPERATION_FAILED,
-                     "%s", _("Cannot read cputime for domain"));
-            goto cleanup;
-        }
-        if ((rc = virCgroupGetMemoryUsage(cgroup, &(info->memory))) < 0) {
-            lxcError(VIR_ERR_OPERATION_FAILED,
-                     "%s", _("Cannot read memory usage for domain"));
-            if (rc == -ENOENT) {
-                /* Don't fail if we can't read memory usage due to a lack of
-                 * kernel support */
-                info->memory = 0;
-            } else
+        } else {
+            if (virCgroupGetCpuacctUsage(cgroup, &(info->cpuTime)) < 0) {
+                lxcError(VIR_ERR_OPERATION_FAILED,
+                         "%s", _("Cannot read cputime for domain"));
                 goto cleanup;
+            }
+            if ((rc = virCgroupGetMemoryUsage(cgroup, &(info->memory))) < 0) {
+                lxcError(VIR_ERR_OPERATION_FAILED,
+                         "%s", _("Cannot read memory usage for domain"));
+                if (rc == -ENOENT) {
+                    /* Don't fail if we can't read memory usage due to a lack of
+                     * kernel support */
+                    info->memory = 0;
+                } else
+                    goto cleanup;
+            }
         }
     }
--
1.7.5.4
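For context, here is a minimal sketch of the kind of polling loop that hits this window. It is written against the C API rather than Nova's actual code (Nova goes through the Python bindings, libvirt.py, where the failure surfaces as a libvirtError exception); the lxc:/// URI, the domain name and the timings are placeholders.

/*
 * Minimal sketch of the polling described above.  Not Nova's code; the
 * URI, domain name and timings are placeholders.  Build with: gcc poll.c -lvirt
 */
#include <stdio.h>
#include <unistd.h>
#include <libvirt/libvirt.h>

int main(void)
{
    virConnectPtr conn = virConnectOpen("lxc:///");
    if (!conn)
        return 1;

    virDomainPtr dom = virDomainLookupByName(conn, "helloworld");
    if (!dom) {
        virConnectClose(conn);
        return 1;
    }

    /* Poll GetInfo until the container reports itself as running.  In the
     * window before the cgroups exist, 0.9.2 fails this call, and the
     * Python bindings turn that failure into a libvirtError exception. */
    for (int i = 0; i < 30; i++) {
        virDomainInfo info;
        if (virDomainGetInfo(dom, &info) < 0) {
            virErrorPtr err = virGetLastError();
            fprintf(stderr, "GetInfo failed: %s\n",
                    err && err->message ? err->message : "unknown error");
        } else if (info.state == VIR_DOMAIN_RUNNING) {
            printf("container is up (memory=%lu kB)\n", info.memory);
            break;
        }
        sleep(1);
    }

    virDomainFree(dom);
    virConnectClose(conn);
    return 0;
}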

On Wed, Sep 28, 2011 at 02:14:52PM -0500, Serge E. Hallyn wrote:
Nova (openstack) calls libvirt to create a container, then periodically checks using GetInfo to see whether the container is up. If it does this too quickly, then libvirt returns an error, which in libvirt.py causes an exception to be raised, the same type as if the container was bad.
lxcDomainGetInfo(), holds a mutex on 'dom' for the duration of its execution. It checks for virDomainObjIsActive() before trying to use the cgroups.

lxcDomainStart(), holds the mutex on 'dom' for the duration of its execution, and does not return until the container is running and cgroups are present.

Similarly when we delete the cgroups, we again hold the lock on 'dom'.

Thus any time virDomainObjIsActive() returns true, AFAICT, we have guaranteed that the cgroup does in fact exist. So I can't see how control gets to the 'else' part of this condition if the cgroups don't exist like you describe.

    if (!virDomainObjIsActive(vm) || driver->cgroup == NULL) {
        info->cpuTime = 0;
        info->memory = vm->def->mem.cur_balloon;
    } else {
        if (virCgroupForDomain(driver->cgroup, vm->def->name, &cgroup, 0) != 0) {
            lxcError(VIR_ERR_INTERNAL_ERROR,
                     _("Unable to get cgroup for %s"), vm->def->name);
            goto cleanup;
        }

What libvirt version were you seeing this behaviour with?

Regards,
Daniel
--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|

Quoting Daniel P. Berrange (berrange@redhat.com):
On Wed, Sep 28, 2011 at 02:14:52PM -0500, Serge E. Hallyn wrote:
Nova (openstack) calls libvirt to create a container, then periodically checks using GetInfo to see whether the container is up. If it does this too quickly, then libvirt returns an error, which in libvirt.py causes an exception to be raised, the same type as if the container was bad.
lxcDomainGetInfo(), holds a mutex on 'dom' for the duration of its execution. It checks for virDomainObjIsActive() before trying to use the cgroups.
lxcDomainStart(), holds the mutex on 'dom' for the duration of its execution, and does not return until the container is running and cgroups are present.
Yup, now that you mention it, I do see that. So this shouldn't be happening. Can't explain it, but copious fprintf debugging still suggests it is :) Is it possible that vm->def->id is not being set to -1 when it is first defined, and I'm catching it between define and start? I would think that would show up as much more broken, though I'm not seeing where vm->def->id gets set to -1 during domain definition. Well, I'll keep digging then. Thanks for setting me straight on the mutex!
Similarly when we delete the cgroups, we again hold the lock on 'dom'.
Thus any time virDomainObjIsActive() returns true, AFAICT, we have guaranteed that the cgroup does in fact exist.
So can't see how control gets to the 'else' part of this condition if the cgroups don't exist like you describe.
    if (!virDomainObjIsActive(vm) || driver->cgroup == NULL) {
        info->cpuTime = 0;
        info->memory = vm->def->mem.cur_balloon;
    } else {
        if (virCgroupForDomain(driver->cgroup, vm->def->name, &cgroup, 0) != 0) {
            lxcError(VIR_ERR_INTERNAL_ERROR,
                     _("Unable to get cgroup for %s"), vm->def->name);
            goto cleanup;
        }
What libvirt version were you seeing this behaviour with ?
0.9.2

thanks,
-serge

Quoting Daniel P. Berrange (berrange@redhat.com):
On Wed, Sep 28, 2011 at 02:14:52PM -0500, Serge E. Hallyn wrote:
Nova (openstack) calls libvirt to create a container, then periodically checks using GetInfo to see whether the container is up. If it does this too quickly, then libvirt returns an error, which in libvirt.py causes an exception to be raised, the same type as if the container was bad.
lxcDomainGetInfo(), holds a mutex on 'dom' for the duration of its execution. It checks for virDomainObjIsActive() before trying to use the cgroups.
Yes, it does, but
lxcDomainStart(), holds the mutex on 'dom' for the duration of its execution, and does not return until the container is running and cgroups are present.
No. It calls the lxc_controller with --background. The controller main task in turn exits before the cgroups have been set up. There is the race. -serge

Quoting Serge E. Hallyn (serge.hallyn@canonical.com):
Quoting Daniel P. Berrange (berrange@redhat.com):
On Wed, Sep 28, 2011 at 02:14:52PM -0500, Serge E. Hallyn wrote:
Nova (openstack) calls libvirt to create a container, then periodically checks using GetInfo to see whether the container is up. If it does this too quickly, then libvirt returns an error, which in libvirt.py causes an exception to be raised, the same type as if the container was bad.
lxcDomainGetInfo(), holds a mutex on 'dom' for the duration of its execution. It checks for virDomainObjIsActive() before trying to use the cgroups.
Yes, it does, but
lxcDomainStart(), holds the mutex on 'dom' for the duration of its execution, and does not return until the container is running and cgroups are present.
No. It calls the lxc_controller with --background. The controller main task in turn exits before the cgroups have been set up. There is the race.
So what is the right fix here? Should the controller write out another file when it is past the part which should be locked, and the driver wait for that file to exist before it drops the driver mutex? If we do that, do we risk having the driver hang when the controller has hung?

-serge
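As a rough illustration of the sentinel-file idea in the question above, the sketch below has the driver poll for a marker file that the controller would create once the cgroups exist, with a timeout so a hung controller cannot hang the driver forever. The path and the timeout are invented; this only illustrates the proposal, not what libvirt ended up doing.

/*
 * Sketch of the "sentinel file" idea: poll for a file the controller would
 * create after cgroup setup, with a timeout.  Path and timeout are invented.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/stat.h>

static int wait_for_sentinel(const char *path, int timeout_sec)
{
    struct stat st;
    for (int waited = 0; waited < timeout_sec * 10; waited++) {
        if (stat(path, &st) == 0)
            return 0;               /* controller finished its setup */
        usleep(100 * 1000);         /* poll every 100 ms */
    }
    return -1;                      /* give up: controller presumed hung */
}

int main(void)
{
    /* hypothetical marker file the controller would create */
    if (wait_for_sentinel("/var/run/libvirt/lxc/demo.cgroups-ready", 5) < 0) {
        fprintf(stderr, "timed out waiting for controller\n");
        return 1;
    }
    printf("cgroups are set up; safe to release the driver lock\n");
    return 0;
}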

On Thu, Sep 29, 2011 at 10:12:17PM -0500, Serge E. Hallyn wrote:
Quoting Daniel P. Berrange (berrange@redhat.com):
On Wed, Sep 28, 2011 at 02:14:52PM -0500, Serge E. Hallyn wrote:
Nova (openstack) calls libvirt to create a container, then periodically checks using GetInfo to see whether the container is up. If it does this too quickly, then libvirt returns an error, which in libvirt.py causes an exception to be raised, the same type as if the container was bad.
lxcDomainGetInfo(), holds a mutex on 'dom' for the duration of its execution. It checks for virDomainObjIsActive() before trying to use the cgroups.
Yes, it does, but
lxcDomainStart(), holds the mutex on 'dom' for the duration of its execution, and does not return until the container is running and cgroups are present.
No. It calls the lxc_controller with --background. The controller main task in turn exits before the cgroups have been set up. There is the race.
The lxcDomainStart() method isn't actually waiting on the child pid directly, so the --background flag ought not to matter. We have a pipe that we pass into the controller, which we wait on for a notification after running the process. The controller does not notify the 'handshake' FD until after cgroups have been set up, unless I'm mis-interpreting our code.

Daniel
--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
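The notification Daniel describes is the classic parent/child pipe handshake: the child does its setup and then writes a byte, and the parent blocks on a read until that byte arrives. The sketch below is a stripped-down illustration of that pattern only; the function names and the sleep() standing in for the real setup work are invented, it is not libvirt's code.

/*
 * Stripped-down illustration of the pipe handshake pattern.  Function names
 * and the sleep() standing in for real setup work are invented.
 */
#include <stdio.h>
#include <unistd.h>
#include <sys/types.h>
#include <sys/wait.h>

static int send_continue(int fd)          /* child: "setup finished" */
{
    char msg = '1';
    return write(fd, &msg, 1) == 1 ? 0 : -1;
}

static int wait_for_continue(int fd)      /* parent: block until told */
{
    char msg;
    return read(fd, &msg, 1) == 1 ? 0 : -1;
}

int main(void)
{
    int handshake[2];
    if (pipe(handshake) < 0) {
        perror("pipe");
        return 1;
    }

    pid_t pid = fork();
    if (pid < 0) {
        perror("fork");
        return 1;
    }

    if (pid == 0) {                       /* plays the controller */
        close(handshake[0]);
        sleep(2);                         /* stands in for cgroup setup etc. */
        if (send_continue(handshake[1]) < 0)
            _exit(1);
        close(handshake[1]);
        _exit(0);
    }

    /* plays the driver: do not proceed until the controller says so */
    close(handshake[1]);
    if (wait_for_continue(handshake[0]) < 0) {
        fprintf(stderr, "controller never signalled readiness\n");
        return 1;
    }
    printf("setup complete; safe to touch the cgroups now\n");
    close(handshake[0]);
    waitpid(pid, NULL, 0);
    return 0;
}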

Quoting Daniel P. Berrange (berrange@redhat.com):
On Thu, Sep 29, 2011 at 10:12:17PM -0500, Serge E. Hallyn wrote:
Quoting Daniel P. Berrange (berrange@redhat.com):
On Wed, Sep 28, 2011 at 02:14:52PM -0500, Serge E. Hallyn wrote:
Nova (openstack) calls libvirt to create a container, then periodically checks using GetInfo to see whether the container is up. If it does this too quickly, then libvirt returns an error, which in libvirt.py causes an exception to be raised, the same type as if the container was bad.
lxcDomainGetInfo(), holds a mutex on 'dom' for the duration of its execution. It checks for virDomainObjIsActive() before trying to use the cgroups.
Yes, it does, but
lxcDomainStart(), holds the mutex on 'dom' for the duration of its execution, and does not return until the container is running and cgroups are present.
No. It calls the lxc_controller with --background. The controller main task in turn exits before the cgroups have been set up. There is the race.
The lxcDomainStart() method isn't actually waiting on the child pid directly, so the --background flag ought not to matter. We have a pipe that we pass into the controller, which we wait on for a notification after running the process. The controller does not notify the 'handshake' FD until after cgroups have been set up, unless I'm mis-interpreting our code.
That's the call to lxcContainerWaitForContinue(), right? If so, that's done by lxcContainerChild(), which is called by the lxc_controller. AFAICS there is nothing in the lxc_driver which will wait on that before dropping the driver->lock mutex. -serge

Haven't tested this, but I think the following patch should fix the race, by forcing lxc_driver to hang on lxcMonitorClient() until after the lxc_controller has set up the cgroups, ensuring that that happens before the driver is unlocked. (I'll test tomorrow)

Index: libvirt-0.9.2/src/lxc/lxc_controller.c
===================================================================
--- libvirt-0.9.2.orig/src/lxc/lxc_controller.c	2011-10-02 20:30:23.988539174 -0500
+++ libvirt-0.9.2/src/lxc/lxc_controller.c	2011-10-02 20:30:34.392538998 -0500
@@ -611,7 +611,6 @@
                              unsigned int nveths,
                              char **veths,
                              int monitor,
-                             int client,
                              int appPty)
 {
     int rc = -1;
@@ -622,6 +621,7 @@
     virDomainFSDefPtr root;
     char *devpts = NULL;
     char *devptmx = NULL;
+    int client;
 
     if (socketpair(PF_UNIX, SOCK_STREAM, 0, control) < 0) {
         virReportSystemError(errno, "%s",
@@ -634,6 +634,13 @@
     if (lxcSetContainerResources(def) < 0)
         goto cleanup;
 
+    /* Accept initial client which is the libvirtd daemon */
+    if ((client = accept(monitor, NULL, 0)) < 0) {
+        virReportSystemError(errno, "%s",
+                             _("Failed to accept a connection from driver"));
+        goto cleanup;
+    }
+
     /*
      * If doing a chroot style setup, we need to prepare
      * a private /dev/pts for the child now, which they
@@ -922,14 +929,7 @@
     /* Initialize logging */
     virLogSetFromEnv();
 
-    /* Accept initial client which is the libvirtd daemon */
-    if ((client = accept(monitor, NULL, 0)) < 0) {
-        virReportSystemError(errno, "%s",
-                             _("Failed to accept a connection from driver"));
-        goto cleanup;
-    }
-
-    rc = lxcControllerRun(def, nveths, veths, monitor, client, appPty);
+    rc = lxcControllerRun(def, nveths, veths, monitor, appPty);
 
 cleanup:

On Fri, Sep 30, 2011 at 11:00:53AM -0500, Serge Hallyn wrote:
Quoting Daniel P. Berrange (berrange@redhat.com):
On Thu, Sep 29, 2011 at 10:12:17PM -0500, Serge E. Hallyn wrote:
Quoting Daniel P. Berrange (berrange@redhat.com):
On Wed, Sep 28, 2011 at 02:14:52PM -0500, Serge E. Hallyn wrote:
Nova (openstack) calls libvirt to create a container, then periodically checks using GetInfo to see whether the container is up. If it does this too quickly, then libvirt returns an error, which in libvirt.py causes an exception to be raised, the same type as if the container was bad.
lxcDomainGetInfo(), holds a mutex on 'dom' for the duration of its execution. It checks for virDomainObjIsActive() before trying to use the cgroups.
Yes, it does, but
lxcDomainStart(), holds the mutex on 'dom' for the duration of its execution, and does not return until the container is running and cgroups are present.
No. It calls the lxc_controller with --background. The controller main task in turn exits before the cgroups have been set up. There is the race.
The lxcDomainStart() method isn't actually waiting on the child pid directly, so the --background flag ought not to matter. We have a pipe that we pass into the controller, which we wait on for a notification after running the process. The controller does not notify the 'handshake' FD until after cgroups have been set up, unless I'm mis-interpreting our code.
That's the call to lxcContainerWaitForContinue(), right? If so, that's done by lxcContainerChild(), which is called by the lxc_controller. AFAICS there is nothing in the lxc_driver which will wait on that before dropping the driver->lock mutex.
In lxcVmStart(), which runs while driver->lock is held, we have the following section of code in play:

    ....
    if (virCommandRun(cmd, NULL) < 0)
        goto cleanup;

    if (VIR_CLOSE(handshakefds[1]) < 0) {
        virReportSystemError(errno, "%s",
                             _("could not close handshake fd"));
        goto cleanup;
    }

    /* Connect to the controller as a client *first* because
     * this will block until the child has written their
     * pid file out to disk */
    if ((priv->monitor = lxcMonitorClient(driver, vm)) < 0)
        goto cleanup;

    /* And get its pid */
    if ((r = virPidFileRead(driver->stateDir, vm->def->name, &vm->pid)) < 0) {
        virReportSystemError(-r,
                             _("Failed to read pid file %s/%s.pid"),
                             driver->stateDir, vm->def->name);
        goto cleanup;
    }

    vm->def->id = vm->pid;
    virDomainObjSetState(vm, VIR_DOMAIN_RUNNING, reason);

    if (lxcContainerWaitForContinue(handshakefds[0]) < 0) {
    ....

The 'virCommandRun' is where the libvirt_lxc controller is forked (in the background). The main libvirt LXC driver code then blocks on this 'lxcContainerWaitForContinue(handshakefds[0])' line, for the controller to finish initializing.

The LXC controller 'main' method receives the handshake FD and invokes lxcControllerRun(). This method does various setup tasks, in particular the following:

    ....
    if (lxcSetContainerResources(def) < 0)
        goto cleanup;
    ...
    if ((container = lxcContainerStart(def, nveths, veths, control[1],
                                       containerhandshake[1],
                                       containerPtyPath)) < 0)
        goto cleanup;
    VIR_FORCE_CLOSE(control[1]);
    VIR_FORCE_CLOSE(containerhandshake[1]);

    if (lxcControllerMoveInterfaces(nveths, veths, container) < 0)
        goto cleanup;

    if (lxcContainerSendContinue(control[0]) < 0) {
        virReportSystemError(errno, "%s",
                             _("Unable to send container continue message"));
        goto cleanup;
    }

    if (lxcContainerWaitForContinue(containerhandshake[0]) < 0) {
        virReportSystemError(errno, "%s",
                             _("error receiving signal from container"));
        goto cleanup;
    }

    /* Now the container is fully setup... */
    ....
    /* ...and reduce our privileges */
    if (lxcControllerClearCapabilities() < 0)
        goto cleanup;

    if (lxcContainerSendContinue(handshakefd) < 0) {
        virReportSystemError(errno, "%s",
                             _("error sending continue signal to parent"));
        goto cleanup;
    }
    VIR_FORCE_CLOSE(handshakefd);

lxcSetContainerResources() is what creates the cgroups. The very last thing we do, once everything is configured and the 'init' process is finally running, is to notify the handshake FD with 'lxcContainerSendContinue(handshakefd)', which finally allows the libvirtd LXC code in 'lxcVmStart' to continue running and return to the client. Only at that point is it possible for other API calls to be made which touch the cgroups.

I'm trying to reproduce the kind of race condition problem scenario you describe using the following sequence of commands:

    # virsh -c lxc:/// start helloworld ; virsh -c lxc:/// dominfo helloworld ; virsh -c lxc:/// destroy helloworld
    Domain helloworld started

    Id:             7015
    Name:           helloworld
    UUID:           a099376e-a803-ca94-f99c-d9a8f9a30088
    OS Type:        exe
    State:          running
    CPU(s):         1
    CPU time:       0.0s
    Max memory:     102400 kB
    Used memory:    280 kB
    Persistent:     yes
    Autostart:      disable
    Managed save:   unknown

    Domain helloworld destroyed

Even if I add a 'sleep(10)' statement as the first line of the lxcSetContainerResources() method, I can't seem to trigger any race wrt cgroup creation & virsh dominfo. Is there a better test case you can show me to reproduce what you're seeing?
Regards,
Daniel
--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|

Quoting Daniel P. Berrange (berrange@redhat.com):
The LXC controller 'main' method received the handshake FD and invokes lxcControllerRun(). This method does various setup tasks, in particular the following:
.... if (lxcSetContainerResources(def) < 0) goto cleanup; ... ... if (lxcContainerSendContinue(handshakefd) < 0) { virReportSystemError(errno, "%s", _("error sending continue signal to parent")); goto cleanup; } VIR_FORCE_CLOSE(handshakefd);
Thanks, Daniel. You're right! This is fixed in git, by the patch 'lxc: controller: Improve container error reporting' (which does much more than it says :). The following patch is how I had just fixed 0.9.2 this morning. It'll be nicer if I can get the git commit cherrypicked. I can't wait till I can upgrade!

thanks,
-serge

Description: Make lxc driver hold sem until controller is far enough
 The lxc driver currently does not wait until the container has set up its
 cgroups before dropping the driver mutex.  First, move the controller's
 accept of the monitor socket until it has set up the cgroups.  Second,
 because a connect does not actually wait for an accept to happen, force
 the driver to wait with a silly two-way read/write handshake.  Since the
 monitor socket is also used elsewhere, make this handshake happen
 everywhere.
Author: Serge Hallyn <serge.hallyn@canonical.com>
Bug-Ubuntu: https://bugs.launchpad.net/ubuntu/+source/nova/+bug/842845
Forwarded: not-needed

Index: libvirt-0.9.2/src/lxc/lxc_controller.c
===================================================================
--- libvirt-0.9.2.orig/src/lxc/lxc_controller.c	2011-10-03 13:10:31.098934902 -0500
+++ libvirt-0.9.2/src/lxc/lxc_controller.c	2011-10-03 13:10:53.823679619 -0500
@@ -432,6 +432,8 @@
         numEvents = epoll_wait(epollFd, &epollEvent, 1, timeout);
         if (numEvents > 0) {
             if (epollEvent.data.fd == monitor) {
+                int ret;
+                char go[4];
                 int fd = accept(monitor, NULL, 0);
                 if (fd < 0) {
                     /* First reflex may be simply to declare accept failure
@@ -457,6 +459,17 @@
                                          _("epoll_ctl(client) failed"));
                     goto cleanup;
                 }
+                ret = read(client, go, 3);
+                if (ret < 0) {
+                    virReportSystemError(errno, "%s",
+                                         _("Failed to read 'go' from driver"));
+                    goto cleanup;
+                }
+                ret = write(client, "go", 3);
+                if (ret < 0) {
+                    virReportSystemError(errno, "%s", _("Failed to write 'go' to container"));
+                    goto cleanup;
+                }
             } else if (client != -1 && epollEvent.data.fd == client) {
                 if (0 > epoll_ctl(epollFd, EPOLL_CTL_DEL, client, &epollEvent)) {
                     virReportSystemError(errno, "%s",
@@ -611,10 +624,9 @@
                              unsigned int nveths,
                              char **veths,
                              int monitor,
-                             int client,
                              int appPty)
 {
-    int rc = -1;
+    int ret, rc = -1;
     int control[2] = { -1, -1};
     int containerPty = -1;
     char *containerPtyPath = NULL;
@@ -622,6 +634,8 @@
     virDomainFSDefPtr root;
     char *devpts = NULL;
     char *devptmx = NULL;
+    char go[4];
+    int client;
 
     if (socketpair(PF_UNIX, SOCK_STREAM, 0, control) < 0) {
         virReportSystemError(errno, "%s",
@@ -631,8 +645,29 @@
 
     root = virDomainGetRootFilesystem(def);
 
+    VIR_DEBUG("About to set resources and cgroups\n");
     if (lxcSetContainerResources(def) < 0)
         goto cleanup;
+    VIR_DEBUG("Done setting resources and cgroups\n");
+
+    /* Accept initial client which is the libvirtd daemon */
+    if ((client = accept(monitor, NULL, 0)) < 0) {
+        virReportSystemError(errno, "%s",
+                             _("Failed to accept a connection from driver"));
+        goto cleanup;
+    }
+    VIR_DEBUG("Accepted monitor fd from driver\n");
+    ret = read(client, go, 3);
+    if (ret < 0) {
+        virReportSystemError(errno, "%s",
+                             _("Failed to read 'go' from driver"));
+        goto cleanup;
+    }
+    ret = write(client, "go", 3);
+    if (ret < 0) {
+        virReportSystemError(errno, "%s", _("Failed to write 'go' to container"));
+        goto cleanup;
+    }
 
     /*
      * If doing a chroot style setup, we need to prepare
@@ -765,7 +800,6 @@
 {
     pid_t pid;
     int rc = 1;
-    int client;
     char *name = NULL;
     int nveths = 0;
     char **veths = NULL;
@@ -922,14 +956,7 @@
     /* Initialize logging */
     virLogSetFromEnv();
 
-    /* Accept initial client which is the libvirtd daemon */
-    if ((client = accept(monitor, NULL, 0)) < 0) {
-        virReportSystemError(errno, "%s",
-                             _("Failed to accept a connection from driver"));
-        goto cleanup;
-    }
-
-    rc = lxcControllerRun(def, nveths, veths, monitor, client, appPty);
+    rc = lxcControllerRun(def, nveths, veths, monitor, appPty);
 
 cleanup:
Index: libvirt-0.9.2/src/lxc/lxc_driver.c
===================================================================
--- libvirt-0.9.2.orig/src/lxc/lxc_driver.c	2011-10-03 13:10:27.608663571 -0500
+++ libvirt-0.9.2/src/lxc/lxc_driver.c	2011-10-03 13:10:53.823679619 -0500
@@ -1160,8 +1160,9 @@
                             virDomainObjPtr vm)
 {
     char *sockpath = NULL;
-    int fd;
+    int fd, r;
     struct sockaddr_un addr;
+    char go[4];
 
     if (virAsprintf(&sockpath, "%s/%s.sock",
                     driver->stateDir, vm->def->name) < 0) {
@@ -1189,6 +1190,17 @@
         goto error;
     }
 
+    r = write(fd, "go", 3);
+    if (r < 0) {
+        virReportSystemError(errno, "%s", _("Failed to write 'go' to container"));
+        goto error;
+    }
+    r = read(fd, go, 3);
+    if (r < 0) {
+        virReportSystemError(errno, "%s", _("Failed to read 'go' from container"));
+        goto error;
+    }
+
     VIR_FREE(sockpath);
     return fd;
 
@@ -1491,6 +1503,7 @@
      * pid file out to disk */
     if ((priv->monitor = lxcMonitorClient(driver, vm)) < 0)
         goto cleanup;
+    VIR_DEBUG("driver: got the monitor socket from client\n");
 
     /* And get its pid */
     if ((r = virFileReadPid(driver->stateDir, vm->def->name,
                             &vm->pid)) != 0) {
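The reason the patch needs the extra read/write round-trip, rather than treating a successful connect() as proof that the controller is ready, is that connecting to a Unix stream socket succeeds as soon as a listener exists, before accept() is ever called. The stand-alone sketch below, with a made-up socket path, demonstrates that behaviour; it is not libvirt code.

/*
 * Demonstrates that connect() on a Unix stream socket succeeds as soon as
 * a listener exists, before accept() is ever called.  Socket path is made up.
 */
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/socket.h>
#include <sys/un.h>

int main(void)
{
    const char *path = "/tmp/handshake-demo.sock";
    struct sockaddr_un addr;
    memset(&addr, 0, sizeof(addr));
    addr.sun_family = AF_UNIX;
    strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
    unlink(path);

    /* "controller" side: bind and listen, but deliberately never accept */
    int srv = socket(AF_UNIX, SOCK_STREAM, 0);
    if (srv < 0 ||
        bind(srv, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        listen(srv, 5) < 0) {
        perror("listener setup");
        return 1;
    }

    /* "driver" side: this connect completes immediately, so by itself it
     * says nothing about how far the controller has got with its setup */
    int cli = socket(AF_UNIX, SOCK_STREAM, 0);
    if (connect(cli, (struct sockaddr *)&addr, sizeof(addr)) == 0)
        printf("connect() succeeded although accept() was never called\n");

    /* In the patch above the driver therefore does a blocking read() next;
     * that read cannot return until the controller has reached its accept()
     * and written a reply, which only happens after the cgroups are set up. */
    close(cli);
    close(srv);
    unlink(path);
    return 0;
}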

On Mon, Oct 03, 2011 at 02:03:18PM -0500, Serge E. Hallyn wrote:
Quoting Daniel P. Berrange (berrange@redhat.com):
The LXC controller 'main' method received the handshake FD and invokes lxcControllerRun(). This method does various setup tasks, in particular the following:
.... if (lxcSetContainerResources(def) < 0) goto cleanup; ... ... if (lxcContainerSendContinue(handshakefd) < 0) { virReportSystemError(errno, "%s", _("error sending continue signal to parent")); goto cleanup; } VIR_FORCE_CLOSE(handshakefd);
Thanks, Daniel. You're right! This is fixed in git, by the patch 'lxc: controller: Improve container error reporting' (which does much more than it says :). The following patch is how I had just fixed 0.9.2 this morning. It'll be nicer if I can get the git commit cherrypicked. I can't wait till I can upgrade!
Ahhhhh, that explains it :-) Good to know it's fixed then.

Daniel
--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
participants (3): Daniel P. Berrange, Serge E. Hallyn, Serge Hallyn