
On Tue, Jun 19, 2018 at 05:03 PM +0200, "Daniel P. Berrangé" <berrange@redhat.com> wrote:
On Tue, Jun 19, 2018 at 03:55:09PM +0100, Daniel P. Berrangé wrote:
On Tue, Jun 19, 2018 at 04:51:13PM +0200, Erik Skultety wrote:
On Mon, Jun 18, 2018 at 12:38:06PM +0200, Marc Hartmayer wrote:
On Mon, Jun 18, 2018 at 09:47 AM +0200, Erik Skultety <eskultet@redhat.com> wrote:
On Fri, Jun 15, 2018 at 03:31:34PM +0200, Marc Hartmayer wrote:
On Fri, Jun 15, 2018 at 01:39 PM +0200, Marc Hartmayer <mhartmay@linux.ibm.com> wrote:
> If srv->workers is a NULL pointer, as is the case if there are no
> workers, then don't try to dereference it.
>
> Signed-off-by: Marc Hartmayer <mhartmay@linux.ibm.com>
> Reviewed-by: Boris Fiuczynski <fiuczy@linux.ibm.com>
> ---
>  src/rpc/virnetserver.c | 22 +++++++++++++++-------
>  1 file changed, 15 insertions(+), 7 deletions(-)
>
> diff --git a/src/rpc/virnetserver.c b/src/rpc/virnetserver.c
> index 5ae809e372..be6f610880 100644
> --- a/src/rpc/virnetserver.c
> +++ b/src/rpc/virnetserver.c
> @@ -933,13 +933,21 @@ virNetServerGetThreadPoolParameters(virNetServerPtr srv,
>                                      size_t *jobQueueDepth)
>  {
>      virObjectLock(srv);
> -
> -    *minWorkers = virThreadPoolGetMinWorkers(srv->workers);
> -    *maxWorkers = virThreadPoolGetMaxWorkers(srv->workers);
> -    *freeWorkers = virThreadPoolGetFreeWorkers(srv->workers);
> -    *nWorkers = virThreadPoolGetCurrentWorkers(srv->workers);
> -    *nPrioWorkers = virThreadPoolGetPriorityWorkers(srv->workers);
> -    *jobQueueDepth = virThreadPoolGetJobQueueDepth(srv->workers);
> +    if (srv->workers) {
> +        *minWorkers = virThreadPoolGetMinWorkers(srv->workers);
> +        *maxWorkers = virThreadPoolGetMaxWorkers(srv->workers);
> +        *freeWorkers = virThreadPoolGetFreeWorkers(srv->workers);
> +        *nWorkers = virThreadPoolGetCurrentWorkers(srv->workers);
> +        *nPrioWorkers = virThreadPoolGetPriorityWorkers(srv->workers);
> +        *jobQueueDepth = virThreadPoolGetJobQueueDepth(srv->workers);
> +    } else {
> +        *minWorkers = 0;
> +        *maxWorkers = 0;
> +        *freeWorkers = 0;
> +        *nWorkers = 0;
> +        *nPrioWorkers = 0;
> +        *jobQueueDepth = 0;
> +    }
>
>      virObjectUnlock(srv);
>      return 0;
> --
> 2.13.6
After thinking again, it probably makes more sense (and makes the code more beautiful) to initialize the worker pool even for maxworkers=0 (within …).
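A minimal sketch of that suggestion, based on the existing pool setup in virNetServerNew(); it assumes virThreadPoolNew() copes with maxWorkers == 0, which is not guaranteed:

    /* Sketch only: create the pool unconditionally in virNetServerNew()
     * instead of skipping it when max_workers == 0, so srv->workers is
     * never NULL for the getters/setters.
     * Assumption: virThreadPoolNew() accepts maxWorkers == 0. */
    if (!(srv->workers = virThreadPoolNew(min_workers, max_workers,
                                          priority_workers,
                                          virNetServerHandleJob,
                                          srv)))
        goto error;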
I don't understand why we should do that.
Because right now several things are broken: there are segmentation faults in virNetServerGet/SetThreadPoolParameters, and it's not possible to start with maxworkers=0 and then change it at runtime via …
Naturally, since no workers means no one to process the request, that is IMHO the expected behaviour.
Yes, a daemon should either run with no workers, or should run with 1 or more workers. It is not valid to change between these two modes.
So if there's a codepath that lets you change from 0 -> 1 workers, or the reverse, we should make sure it reports an error.
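A hedged sketch of how virNetServerSetThreadPoolParameters() could report that error (the error code and message wording are illustrative, not the actual patch):

    /* Sketch: refuse runtime tuning when the server was created without a
     * worker pool, rather than dereferencing a NULL srv->workers. */
    int
    virNetServerSetThreadPoolParameters(virNetServerPtr srv,
                                        long long int minWorkers,
                                        long long int maxWorkers,
                                        long long int prioWorkers)
    {
        int ret = -1;

        virObjectLock(srv);
        if (!srv->workers) {
            virReportError(VIR_ERR_INVALID_ARG, "%s",
                           _("cannot change workers of a server started "
                             "without a worker pool"));
            goto cleanup;
        }
        ret = virThreadPoolSetParameters(srv->workers, minWorkers,
                                         maxWorkers, prioWorkers);
     cleanup:
        virObjectUnlock(srv);
        return ret;
    }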
Essentially, workers=0 is only intended for things like virtlockd or virtlogd, which don't need to be multithreaded, or indeed must *never* be multithreaded to avoid tickling kernel bugs, as virtlockd did in the past.
Also note that workers=0 will cause libvirtd to deadlock, because the QEMU driver (and others too) assumes that it runs in a separate thread from the main event loop.
Shall we then disallow this value for the option "max_workers" in libvirtd.conf?
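If so, a minimal sketch of such a check at configuration-load time (the exact placement, the config->max_workers field name, and the error code are assumptions, not the actual libvirtd code):

    /* Sketch: reject max_workers < 1 while parsing libvirtd.conf so the
     * daemon never starts without a worker pool. */
    if (config->max_workers < 1) {
        virReportError(VIR_ERR_CONF_SYNTAX, "%s",
                       _("'max_workers' must be greater than 0"));
        return -1;
    }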