[libvirt] [PATCH 0/2] Rework client connection handling

There are a few cases where users don't want to raise 'max_clients' but are making many concurrent connections and don't want them to fail either. However, we currently accept() the incoming request even though we have reached the limit. In that case an error is reported and the connection is thrown away. Gross. What about leaving the requests we know we can't handle yet in the listen() queue and accepting only those we know we can handle?

For more info see: https://bugzilla.redhat.com/show_bug.cgi?id=981729

Michal Privoznik (2):
  RPC: Don't accept client if it would overcommit max_clients
  Introduce max_queued_clients

 daemon/libvirtd-config.c      |  1 +
 daemon/libvirtd-config.h      |  1 +
 daemon/libvirtd.aug           |  1 +
 daemon/libvirtd.c             |  4 ++++
 daemon/libvirtd.conf          |  6 ++++++
 src/locking/lock_daemon.c     |  2 +-
 src/lxc/lxc_controller.c      |  1 +
 src/rpc/virnetserver.c        | 40 ++++++++++++++++++++++++++++++++++++++++
 src/rpc/virnetserverservice.c | 15 +++++++++++++--
 src/rpc/virnetserverservice.h |  6 ++++++
 10 files changed, 74 insertions(+), 3 deletions(-)

-- 
1.8.1.5
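The approach relies on ordinary listen(2) semantics: a connection that nobody accept()s is not rejected, it simply waits in the kernel's backlog. A minimal stand-alone sketch of that behaviour (plain BSD sockets, not libvirt code; the function name is invented for illustration):

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Returns 0 if a client connect() completes even though the server
 * never calls accept() -- i.e. the connection waits in the backlog. */
int demo_backlog(void)
{
    struct sockaddr_in addr;
    socklen_t len = sizeof(addr);
    int srv, cli, ret = -1;

    if ((srv = socket(AF_INET, SOCK_STREAM, 0)) < 0)
        return -1;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = 0;                    /* kernel picks a free port */

    if (bind(srv, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        listen(srv, 5) < 0 ||             /* room for 5 queued clients */
        getsockname(srv, (struct sockaddr *)&addr, &len) < 0)
        goto cleanup;

    /* The "client": connect() completes although nobody accept()s,
     * because the kernel parks the connection in the listen() queue. */
    if ((cli = socket(AF_INET, SOCK_STREAM, 0)) < 0)
        goto cleanup;
    ret = connect(cli, (struct sockaddr *)&addr, sizeof(addr));
    close(cli);
cleanup:
    close(srv);
    return ret;
}
```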

Currently, even if the max_clients limit is hit, we accept() the incoming connection request but close it immediately. This has the disadvantage of not using the listen() queue. We should accept() only those clients we know we can serve and let all others wait in the (limited) queue.
---
 src/rpc/virnetserver.c        | 40 ++++++++++++++++++++++++++++++++++++++++
 src/rpc/virnetserverservice.c |  9 +++++++++
 src/rpc/virnetserverservice.h |  4 ++++
 3 files changed, 53 insertions(+)

diff --git a/src/rpc/virnetserver.c b/src/rpc/virnetserver.c
index cb770c3..7ea1707 100644
--- a/src/rpc/virnetserver.c
+++ b/src/rpc/virnetserver.c
@@ -315,6 +315,34 @@ static int virNetServerDispatchNewClient(virNetServerServicePtr svc,
 }
 
+static int
+virNetServerDispatchCheck(virNetServerServicePtr svc,
+                          virNetSocketPtr socket,
+                          void *opaque)
+{
+    virNetServerPtr srv = opaque;
+    int ret = -1;
+
+    VIR_DEBUG("svc=%p socket=%p opaque=%p", svc, socket, opaque);
+
+    if (srv->nclients >= srv->nclients_max) {
+        VIR_DEBUG("Not accepting client now due to max_clients setting (%zu)",
+                  srv->nclients_max);
+
+        /* We need to temporarily disable the server services in order to
+         * prevent the event loop from calling us over and over again. */
+        virNetServerUpdateServices(srv, false);
+
+        goto cleanup;
+    }
+
+    ret = 0;
+cleanup:
+    return ret;
+}
+
+
 static void
 virNetServerFatalSignal(int sig, siginfo_t *siginfo ATTRIBUTE_UNUSED,
                         void *context ATTRIBUTE_UNUSED)
@@ -981,6 +1009,7 @@ int virNetServerAddService(virNetServerPtr srv,
 
     virNetServerServiceSetDispatcher(svc,
                                      virNetServerDispatchNewClient,
+                                     virNetServerDispatchCheck,
                                      srv);
 
     virObjectUnlock(srv);
@@ -1110,6 +1139,8 @@ void virNetServerRun(virNetServerPtr srv)
                 virNetServerClientClose(srv->clients[i]);
             if (virNetServerClientIsClosed(srv->clients[i])) {
                 virNetServerClientPtr client = srv->clients[i];
+                bool update_services;
+
                 if (srv->nclients > 1) {
                     memmove(srv->clients + i,
                             srv->clients + i + 1,
@@ -1120,7 +1151,16 @@ void virNetServerRun(virNetServerPtr srv)
                     srv->nclients = 0;
                 }
 
+                /* Re-enable services if we can accept a new client, i.e.
+                 * if this close just dropped us below the limit. */
+                update_services = srv->nclients == srv->nclients_max - 1;
+
                 virObjectUnlock(srv);
+                if (update_services) {
+                    /* Now it makes sense to accept() a new client. */
+                    VIR_DEBUG("Re-enabling services");
+                    virNetServerUpdateServices(srv, true);
+                }
                 virObjectUnref(client);
                 virObjectLock(srv);
diff --git a/src/rpc/virnetserverservice.c b/src/rpc/virnetserverservice.c
index 632f03d..a47c8f8 100644
--- a/src/rpc/virnetserverservice.c
+++ b/src/rpc/virnetserverservice.c
@@ -46,6 +46,7 @@ struct _virNetServerService {
 #endif
 
     virNetServerServiceDispatchFunc dispatchFunc;
+    virNetServerServiceDispatchCheckFunc dispatchCheckFunc;
     void *dispatchOpaque;
 };
 
@@ -74,6 +75,12 @@ static void virNetServerServiceAccept(virNetSocketPtr sock,
     virNetServerServicePtr svc = opaque;
     virNetSocketPtr clientsock = NULL;
 
+    if (svc->dispatchCheckFunc &&
+        svc->dispatchCheckFunc(svc, sock, svc->dispatchOpaque) < 0) {
+        /* Accept declined */
+        goto cleanup;
+    }
+
     if (virNetSocketAccept(sock, &clientsock) < 0)
         goto cleanup;
 
@@ -419,9 +426,11 @@ virNetTLSContextPtr virNetServerServiceGetTLSContext(virNetServerServicePtr svc)
 
 void virNetServerServiceSetDispatcher(virNetServerServicePtr svc,
                                       virNetServerServiceDispatchFunc func,
+                                      virNetServerServiceDispatchCheckFunc checkFunc,
                                       void *opaque)
 {
     svc->dispatchFunc = func;
+    svc->dispatchCheckFunc = checkFunc;
     svc->dispatchOpaque = opaque;
 }
 
diff --git a/src/rpc/virnetserverservice.h b/src/rpc/virnetserverservice.h
index 1ece503..e9e5389 100644
--- a/src/rpc/virnetserverservice.h
+++ b/src/rpc/virnetserverservice.h
@@ -36,6 +36,9 @@ enum {
 typedef int (*virNetServerServiceDispatchFunc)(virNetServerServicePtr svc,
                                                virNetSocketPtr sock,
                                                void *opaque);
+typedef int (*virNetServerServiceDispatchCheckFunc)(virNetServerServicePtr svc,
+                                                    virNetSocketPtr sock,
+                                                    void *opaque);
 
 virNetServerServicePtr virNetServerServiceNewTCP(const char *nodename,
                                                  const char *service,
@@ -77,6 +80,7 @@ virNetTLSContextPtr virNetServerServiceGetTLSContext(virNetServerServicePtr svc)
 
 void virNetServerServiceSetDispatcher(virNetServerServicePtr svc,
                                       virNetServerServiceDispatchFunc func,
+                                      virNetServerServiceDispatchCheckFunc checkFunc,
                                       void *opaque);
 
 void virNetServerServiceToggle(virNetServerServicePtr svc,
-- 
1.8.1.5
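Condensed to its control flow, the patch's scheme looks like this (a toy model with invented names: `listen_enabled` stands in for virNetServerUpdateServices(), and the real code additionally takes object locks and logs via VIR_DEBUG):

```c
#include <stddef.h>

/* Toy model of the check-before-accept gating in this patch. */
typedef struct {
    size_t nclients;
    size_t nclients_max;
    int listen_enabled;   /* stands in for virNetServerUpdateServices() */
} server;

/* Called before accept(): decline and stop polling when full, so the
 * pending client stays in the kernel's listen() queue. */
int server_dispatch_check(server *srv)
{
    if (srv->nclients >= srv->nclients_max) {
        srv->listen_enabled = 0;   /* keep the event loop from re-calling us */
        return -1;                 /* decline the accept */
    }
    return 0;
}

void server_add_client(server *srv)
{
    srv->nclients++;
}

/* Called when a client disconnects: re-arm accept() exactly when we
 * drop from the limit back to limit - 1. */
void server_remove_client(server *srv)
{
    srv->nclients--;
    if (srv->nclients == srv->nclients_max - 1)
        srv->listen_enabled = 1;
}
```

Note the asymmetry the review below picks at: polling is disabled only when the (max + 1)th client shows up, not when the max-th one is admitted.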

On Thu, Jul 25, 2013 at 04:23:32PM +0200, Michal Privoznik wrote:
Currently, even if the max_clients limit is hit, we accept() the incoming connection request but close it immediately. This has the disadvantage of not using the listen() queue. We should accept() only those clients we know we can serve and let all others wait in the (limited) queue.
---
 src/rpc/virnetserver.c        | 40 ++++++++++++++++++++++++++++++++++++++++
 src/rpc/virnetserverservice.c |  9 +++++++++
 src/rpc/virnetserverservice.h |  4 ++++
 3 files changed, 53 insertions(+)
+    if (svc->dispatchCheckFunc &&
+        svc->dispatchCheckFunc(svc, sock, svc->dispatchOpaque) < 0) {
+        /* Accept declined */
+        goto cleanup;
+    }
+
Rather than having this callback, can we not simply change virNetServerAddClient() to call virNetServerUpdateServices(srv, false) when a new client causes us to hit the max limit?

Daniel

-- 
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|

On 25.07.2013 16:37, Daniel P. Berrange wrote:
On Thu, Jul 25, 2013 at 04:23:32PM +0200, Michal Privoznik wrote:
+    if (svc->dispatchCheckFunc &&
+        svc->dispatchCheckFunc(svc, sock, svc->dispatchOpaque) < 0) {
+        /* Accept declined */
+        goto cleanup;
+    }
+
Rather than having this callback, can we not simply change virNetServerAddClient() to call virNetServerUpdateServices(srv, false); when a new client causes us to hit the max limit ?
No, because that callback is called *after* accept(), which I am trying to avoid.

Michal
Daniel

On Thu, Jul 25, 2013 at 04:43:52PM +0200, Michal Privoznik wrote:
On 25.07.2013 16:37, Daniel P. Berrange wrote:
Rather than having this callback, can we not simply change virNetServerAddClient() to call virNetServerUpdateServices(srv, false) when a new client causes us to hit the max limit?
No, because that callback is called *after* accept(), which I am trying to avoid.
I'm not sure I see what the problem with doing that is. IIUC, with your patch, if we have max clients == 30 and 30 currently active clients, and a 31st comes in, virNetServerServiceAccept is invoked. We then call dispatchCheckFunc, see that we're at the limit, don't accept() the connection, and disable the event polling from then on.

If we gave responsibility to the virNetServerAddClient callback, then when the 30th client comes in, we accept that and call virNetServerAddClient(), which sees we're at the limit and so disables event polling.

So it's the difference between disabling event polling at the time the max client is accepted, or disabling it when the max + 1 client arrives and is about to be accepted. IMHO the former makes more sense.

Daniel

This configuration knob lets the user set the length of the queue of connection requests waiting to be accept()-ed by the daemon. IOW, it just controls the @backlog passed to listen:

  int listen(int sockfd, int backlog);
---
 daemon/libvirtd-config.c      | 1 +
 daemon/libvirtd-config.h      | 1 +
 daemon/libvirtd.aug           | 1 +
 daemon/libvirtd.c             | 4 ++++
 daemon/libvirtd.conf          | 6 ++++++
 src/locking/lock_daemon.c     | 2 +-
 src/lxc/lxc_controller.c      | 1 +
 src/rpc/virnetserverservice.c | 6 ++++--
 src/rpc/virnetserverservice.h | 2 ++
 9 files changed, 21 insertions(+), 3 deletions(-)

diff --git a/daemon/libvirtd-config.c b/daemon/libvirtd-config.c
index 107a9cf..c816fda 100644
--- a/daemon/libvirtd-config.c
+++ b/daemon/libvirtd-config.c
@@ -414,6 +414,7 @@ daemonConfigLoadOptions(struct daemonConfig *data,
     GET_CONF_INT(conf, filename, min_workers);
     GET_CONF_INT(conf, filename, max_workers);
     GET_CONF_INT(conf, filename, max_clients);
+    GET_CONF_INT(conf, filename, max_queued_clients);
 
     GET_CONF_INT(conf, filename, prio_workers);
 
diff --git a/daemon/libvirtd-config.h b/daemon/libvirtd-config.h
index 973e0ea..a24d5d2 100644
--- a/daemon/libvirtd-config.h
+++ b/daemon/libvirtd-config.h
@@ -63,6 +63,7 @@ struct daemonConfig {
     int min_workers;
     int max_workers;
     int max_clients;
+    int max_queued_clients;
 
     int prio_workers;
 
diff --git a/daemon/libvirtd.aug b/daemon/libvirtd.aug
index 7c56a41..70fce5c 100644
--- a/daemon/libvirtd.aug
+++ b/daemon/libvirtd.aug
@@ -56,6 +56,7 @@ module Libvirtd =
    let processing_entry = int_entry "min_workers"
                         | int_entry "max_workers"
                         | int_entry "max_clients"
+                        | int_entry "max_queued_clients"
                         | int_entry "max_requests"
                         | int_entry "max_client_requests"
                         | int_entry "prio_workers"
diff --git a/daemon/libvirtd.c b/daemon/libvirtd.c
index 9f7fd8a..402b494 100644
--- a/daemon/libvirtd.c
+++ b/daemon/libvirtd.c
@@ -485,6 +485,7 @@ static int daemonSetupNetworking(virNetServerPtr srv,
                                            NULL,
 #endif
                                            false,
+                                           config->max_queued_clients,
                                            config->max_client_requests)))
         goto error;
     if (sock_path_ro) {
@@ -497,6 +498,7 @@ static int daemonSetupNetworking(virNetServerPtr srv,
                                                NULL,
 #endif
                                                true,
+                                               config->max_queued_clients,
                                                config->max_client_requests)))
             goto error;
     }
@@ -522,6 +524,7 @@ static int daemonSetupNetworking(virNetServerPtr srv,
                                                NULL,
 #endif
                                                false,
+                                               config->max_queued_clients,
                                                config->max_client_requests)))
             goto error;
 
@@ -562,6 +565,7 @@ static int daemonSetupNetworking(virNetServerPtr srv,
                                                    config->auth_tls,
                                                    ctxt,
                                                    false,
+                                                   config->max_queued_clients,
                                                    config->max_client_requests))) {
                 virObjectUnref(ctxt);
                 goto error;
diff --git a/daemon/libvirtd.conf b/daemon/libvirtd.conf
index af4493e..5353927 100644
--- a/daemon/libvirtd.conf
+++ b/daemon/libvirtd.conf
@@ -257,6 +257,12 @@
 # over all sockets combined.
 #max_clients = 20
 
+# The maximum length of queue of connections waiting to be
+# accepted by the daemon. Note that some protocols supporting
+# retransmission may obey this so that a later reattempt at
+# connection succeeds.
+#max_queued_clients = 1000
+
 # The minimum limit sets the number of workers to start up
 # initially.
diff --git a/src/locking/lock_daemon.c b/src/locking/lock_daemon.c
index c4c1727..c45f45c 100644
--- a/src/locking/lock_daemon.c
+++ b/src/locking/lock_daemon.c
@@ -671,7 +671,7 @@ virLockDaemonSetupNetworkingNative(virNetServerPtr srv, const char *sock_path)
 #if WITH_GNUTLS
                                            NULL,
 #endif
-                                           false, 1)))
+                                           false, 0, 1)))
         return -1;
 
     if (virNetServerAddService(srv, svc, NULL) < 0) {
diff --git a/src/lxc/lxc_controller.c b/src/lxc/lxc_controller.c
index 124ab19..ed73ab0 100644
--- a/src/lxc/lxc_controller.c
+++ b/src/lxc/lxc_controller.c
@@ -745,6 +745,7 @@ static int virLXCControllerSetupServer(virLXCControllerPtr ctrl)
                                            NULL,
 #endif
                                            false,
+                                           0,
                                            5)))
         goto error;
 
diff --git a/src/rpc/virnetserverservice.c b/src/rpc/virnetserverservice.c
index a47c8f8..1739aa5 100644
--- a/src/rpc/virnetserverservice.c
+++ b/src/rpc/virnetserverservice.c
@@ -104,6 +104,7 @@ virNetServerServicePtr virNetServerServiceNewTCP(const char *nodename,
                                                  virNetTLSContextPtr tls,
 #endif
                                                  bool readonly,
+                                                 size_t max_queued_clients,
                                                  size_t nrequests_client_max)
 {
     virNetServerServicePtr svc;
@@ -129,7 +130,7 @@ virNetServerServicePtr virNetServerServiceNewTCP(const char *nodename,
         goto error;
 
     for (i = 0; i < svc->nsocks; i++) {
-        if (virNetSocketListen(svc->socks[i], 0) < 0)
+        if (virNetSocketListen(svc->socks[i], max_queued_clients) < 0)
             goto error;
 
         /* IO callback is initially disabled, until we're ready
@@ -162,6 +163,7 @@ virNetServerServicePtr virNetServerServiceNewUNIX(const char *path,
                                                   virNetTLSContextPtr tls,
 #endif
                                                   bool readonly,
+                                                  size_t max_queued_clients,
                                                   size_t nrequests_client_max)
 {
     virNetServerServicePtr svc;
@@ -192,7 +194,7 @@ virNetServerServicePtr virNetServerServiceNewUNIX(const char *path,
         goto error;
 
     for (i = 0; i < svc->nsocks; i++) {
-        if (virNetSocketListen(svc->socks[i], 0) < 0)
+        if (virNetSocketListen(svc->socks[i], max_queued_clients) < 0)
             goto error;
 
         /* IO callback is initially disabled, until we're ready
diff --git a/src/rpc/virnetserverservice.h b/src/rpc/virnetserverservice.h
index e9e5389..1143f4a 100644
--- a/src/rpc/virnetserverservice.h
+++ b/src/rpc/virnetserverservice.h
@@ -47,6 +47,7 @@ virNetServerServicePtr virNetServerServiceNewTCP(const char *nodename,
                                                  virNetTLSContextPtr tls,
 # endif
                                                  bool readonly,
+                                                 size_t max_queued_clients,
                                                  size_t nrequests_client_max);
 virNetServerServicePtr virNetServerServiceNewUNIX(const char *path,
                                                   mode_t mask,
@@ -56,6 +57,7 @@ virNetServerServicePtr virNetServerServiceNewUNIX(const char *path,
                                                   virNetTLSContextPtr tls,
 # endif
                                                   bool readonly,
+                                                  size_t max_queued_clients,
                                                   size_t nrequests_client_max);
 virNetServerServicePtr virNetServerServiceNewFD(int fd,
                                                 int auth,
-- 
1.8.1.5
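Stripped of the libvirt plumbing, the change amounts to threading one integer from the config file down to listen(2), where it was previously hard-coded to 0. A self-contained sketch with abbreviated, invented names (not the real libvirt API):

```c
#include <arpa/inet.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

/* Invented stand-in for the parsed libvirtd.conf values. */
struct daemon_config {
    int max_clients;
    int max_queued_clients;   /* new knob: the listen() backlog */
};

/* Returns a listening fd on success, -1 on error. The config value is
 * passed straight through as the @backlog argument of listen(2). */
int service_listen(const struct daemon_config *cfg)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr;

    if (fd < 0)
        return -1;

    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = 0;                     /* kernel picks a free port */

    if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
        listen(fd, cfg->max_queued_clients) < 0) {
        close(fd);
        return -1;
    }
    return fd;
}
```

Note that the kernel silently clamps the backlog to SOMAXCONN, so a very large max_queued_clients may not take full effect.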

On Thu, Jul 25, 2013 at 04:23:33PM +0200, Michal Privoznik wrote:
This configuration knob lets the user set the length of the queue of connection requests waiting to be accept()-ed by the daemon. IOW, it just controls the @backlog passed to listen:
  int listen(int sockfd, int backlog);
---
 daemon/libvirtd-config.c      | 1 +
 daemon/libvirtd-config.h      | 1 +
 daemon/libvirtd.aug           | 1 +
 daemon/libvirtd.c             | 4 ++++
 daemon/libvirtd.conf          | 6 ++++++
 src/locking/lock_daemon.c     | 2 +-
 src/lxc/lxc_controller.c      | 1 +
 src/rpc/virnetserverservice.c | 6 ++++--
 src/rpc/virnetserverservice.h | 2 ++
 9 files changed, 21 insertions(+), 3 deletions(-)
diff --git a/src/locking/lock_daemon.c b/src/locking/lock_daemon.c
index c4c1727..c45f45c 100644
--- a/src/locking/lock_daemon.c
+++ b/src/locking/lock_daemon.c
@@ -671,7 +671,7 @@ virLockDaemonSetupNetworkingNative(virNetServerPtr srv, const char *sock_path)
 #if WITH_GNUTLS
                                            NULL,
 #endif
-                                           false, 1)))
+                                           false, 0, 1)))
         return -1;
 
     if (virNetServerAddService(srv, svc, NULL) < 0) {
I think we probably want to have a non-zero backlog for virtlockd, otherwise we could get ECONNREFUSED errors when multiple VMs start in parallel, since each one will trigger a connect(). Probably worth making it a config file parameter too, while you're at it.
Regards,
Daniel

On 25.07.2013 16:34, Daniel P. Berrange wrote:
On Thu, Jul 25, 2013 at 04:23:33PM +0200, Michal Privoznik wrote:
This configuration knob lets the user set the length of the queue of connection requests waiting to be accept()-ed by the daemon. IOW, it just controls the @backlog passed to listen:
  int listen(int sockfd, int backlog);
diff --git a/src/locking/lock_daemon.c b/src/locking/lock_daemon.c
index c4c1727..c45f45c 100644
--- a/src/locking/lock_daemon.c
+++ b/src/locking/lock_daemon.c
@@ -671,7 +671,7 @@ virLockDaemonSetupNetworkingNative(virNetServerPtr srv, const char *sock_path)
 #if WITH_GNUTLS
                                            NULL,
 #endif
-                                           false, 1)))
+                                           false, 0, 1)))
         return -1;
 
     if (virNetServerAddService(srv, svc, NULL) < 0) {
I think we probably want to have a non-zero backlog for virtlockd, otherwise we could get ECONNREFUSED errors when multiple VMs start in parallel, since each one will trigger a connect(). Probably worth making it a config file parameter too, while you're at it.
In fact, 0 gets translated in virNetSocketListen to 30.

Michal
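That translation is a one-liner; a pure sketch of it, with the default of 30 assumed from Michal's remark (it may differ in other libvirt versions, and the function name here is invented):

```c
/* Default backlog used when the caller passes 0, per the thread above. */
#define DEFAULT_BACKLOG 30

/* Model of virNetSocketListen's backlog handling: 0 means "use the
 * built-in default" rather than a zero-length queue. */
int effective_backlog(int requested)
{
    return requested == 0 ? DEFAULT_BACKLOG : requested;
}
```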

On Thu, Jul 25, 2013 at 04:44:57PM +0200, Michal Privoznik wrote:
On 25.07.2013 16:34, Daniel P. Berrange wrote:
On Thu, Jul 25, 2013 at 04:23:33PM +0200, Michal Privoznik wrote:
This configuration knob lets the user set the length of the queue of connection requests waiting to be accept()-ed by the daemon. IOW, it just controls the @backlog passed to listen:
  int listen(int sockfd, int backlog);
diff --git a/src/locking/lock_daemon.c b/src/locking/lock_daemon.c
index c4c1727..c45f45c 100644
--- a/src/locking/lock_daemon.c
+++ b/src/locking/lock_daemon.c
@@ -671,7 +671,7 @@ virLockDaemonSetupNetworkingNative(virNetServerPtr srv, const char *sock_path)
 #if WITH_GNUTLS
                                            NULL,
 #endif
-                                           false, 1)))
+                                           false, 0, 1)))
         return -1;
 
     if (virNetServerAddService(srv, svc, NULL) < 0) {
I think we probably want to have a non-zero backlog for virtlockd, otherwise we could get ECONNREFUSED errors when multiple VMs start in parallel, since each one will trigger a connect(). Probably worth making it a config file parameter too, while you're at it.
In fact, 0 gets translated in virNetSocketListen to 30.
Ah ok, ACK then. Daniel -- |: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :| |: http://libvirt.org -o- http://virt-manager.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
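The "0 gets translated to 30" behaviour Michal describes can be sketched as follows; `effective_backlog` and `DEFAULT_LISTEN_BACKLOG` are hypothetical names for illustration, not libvirt's actual code:

```c
/* Sketch of "backlog 0 means use the default" handling, assuming a
 * built-in default of 30 as stated in the thread. */
#define DEFAULT_LISTEN_BACKLOG 30  /* assumed default, per the thread */

/* Return the backlog value that would actually be passed to listen(2):
 * a caller-supplied 0 falls back to the built-in default. */
static int effective_backlog(int requested)
{
    return requested > 0 ? requested : DEFAULT_LISTEN_BACKLOG;
}
```

This is why passing 0 from virtlockd is safe in practice: the queue is never literally zero-length, so parallel VM starts do not immediately hit ECONNREFUSED.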

Currently, even if the max_clients limit is hit, we accept() the incoming connection request but close it immediately. This has the disadvantage of not using the listen() queue. We should accept() only those clients we know we can serve and let all the others wait in the (limited) queue.
---
diff to v1:
-moved logic from virNetServerDispatchCheck to virNetServerAddClient

 src/rpc/virnetserver.c | 33 ++++++++++++++++++++++++++++-----
 1 file changed, 28 insertions(+), 5 deletions(-)

diff --git a/src/rpc/virnetserver.c b/src/rpc/virnetserver.c
index cb770c3..2306e10 100644
--- a/src/rpc/virnetserver.c
+++ b/src/rpc/virnetserver.c
@@ -115,6 +115,8 @@ struct _virNetServer {
 
 static virClassPtr virNetServerClass;
 static void virNetServerDispose(void *obj);
+static void virNetServerUpdateServicesLocked(virNetServerPtr srv,
+                                             bool enabled);
 
 static int virNetServerOnceInit(void)
 {
@@ -253,30 +255,36 @@ cleanup:
 
 static int virNetServerAddClient(virNetServerPtr srv,
                                  virNetServerClientPtr client)
 {
     virObjectLock(srv);
 
     if (srv->nclients >= srv->nclients_max) {
         virReportError(VIR_ERR_RPC,
                        _("Too many active clients (%zu), dropping connection from %s"),
                        srv->nclients_max,
                        virNetServerClientRemoteAddrString(client));
         goto error;
     }
 
     if (virNetServerClientInit(client) < 0)
         goto error;
 
     if (VIR_EXPAND_N(srv->clients, srv->nclients, 1) < 0)
         goto error;
     srv->clients[srv->nclients-1] = client;
     virObjectRef(client);
 
+    if (srv->nclients == srv->nclients_max) {
+        /* Temporarily stop accepting new clients */
+        VIR_DEBUG("Temporarily suspending services due to max_clients");
+        virNetServerUpdateServicesLocked(srv, false);
+    }
+
     virNetServerClientSetDispatcher(client,
                                     virNetServerDispatchNewMessage,
                                     srv);
 
     virNetServerClientInitKeepAlive(client, srv->keepaliveInterval,
                                     srv->keepaliveCount);
 
     virObjectUnlock(srv);
     return 0;
@@ -1034,98 +1042,113 @@ static void virNetServerAutoShutdownTimer(int timerid ATTRIBUTE_UNUSED,
 }
 
+static void
+virNetServerUpdateServicesLocked(virNetServerPtr srv,
+                                 bool enabled)
+{
+    size_t i;
+
+    for (i = 0; i < srv->nservices; i++)
+        virNetServerServiceToggle(srv->services[i], enabled);
+}
+
+
 void virNetServerUpdateServices(virNetServerPtr srv,
                                 bool enabled)
 {
-    size_t i;
-
     virObjectLock(srv);
-    for (i = 0; i < srv->nservices; i++)
-        virNetServerServiceToggle(srv->services[i], enabled);
-
+    virNetServerUpdateServicesLocked(srv, enabled);
     virObjectUnlock(srv);
 }
 
 void virNetServerRun(virNetServerPtr srv)
 {
     int timerid = -1;
     bool timerActive = false;
     size_t i;
 
     virObjectLock(srv);
 
     if (srv->mdns &&
         virNetServerMDNSStart(srv->mdns) < 0)
         goto cleanup;
 
     srv->quit = 0;
 
     if (srv->autoShutdownTimeout &&
         (timerid = virEventAddTimeout(-1,
                                       virNetServerAutoShutdownTimer,
                                       srv, NULL)) < 0) {
         virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
                        _("Failed to register shutdown timeout"));
         goto cleanup;
     }
 
     VIR_DEBUG("srv=%p quit=%d", srv, srv->quit);
     while (!srv->quit) {
         /* A shutdown timeout is specified, so check
          * if any drivers have active state, if not
          * shutdown after timeout seconds
          */
         if (srv->autoShutdownTimeout) {
             if (timerActive) {
                 if (srv->clients) {
                     VIR_DEBUG("Deactivating shutdown timer %d", timerid);
                     virEventUpdateTimeout(timerid, -1);
                     timerActive = false;
                 }
             } else {
                 if (!srv->clients) {
                     VIR_DEBUG("Activating shutdown timer %d", timerid);
                     virEventUpdateTimeout(timerid,
                                           srv->autoShutdownTimeout * 1000);
                     timerActive = true;
                 }
             }
         }
 
         virObjectUnlock(srv);
         if (virEventRunDefaultImpl() < 0) {
             virObjectLock(srv);
             VIR_DEBUG("Loop iteration error, exiting");
             break;
         }
         virObjectLock(srv);
 
     reprocess:
         for (i = 0; i < srv->nclients; i++) {
             /* Coverity 5.3.0 couldn't see that srv->clients is non-NULL
              * if srv->nclients is non-zero.
              */
             sa_assert(srv->clients);
             if (virNetServerClientWantClose(srv->clients[i]))
                 virNetServerClientClose(srv->clients[i]);
             if (virNetServerClientIsClosed(srv->clients[i])) {
                 virNetServerClientPtr client = srv->clients[i];
 
                 if (srv->nclients > 1) {
                     memmove(srv->clients + i,
                             srv->clients + i + 1,
                             sizeof(*srv->clients) * (srv->nclients - (i + 1)));
                     VIR_SHRINK_N(srv->clients, srv->nclients, 1);
                 } else {
                     VIR_FREE(srv->clients);
                     srv->nclients = 0;
                 }
 
+                /* Re-enable services if a new client can now be
+                 * accepted, i.e. we just dropped from the limit
+                 * back below it. */
+                if (srv->nclients == srv->nclients_max - 1) {
+                    VIR_DEBUG("Re-enabling services");
+                    virNetServerUpdateServicesLocked(srv, true);
+                }
+
                 virObjectUnlock(srv);
                 virObjectUnref(client);
                 virObjectLock(srv);
                 goto reprocess;
             }
         }
     }
-- 
1.8.1.5
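The overall accept-throttling pattern of the patch — suspend the listener services on the transition up to the limit, resume on the transition back below it — can be modelled in miniature as follows; the `Server` type and function names are illustrative stand-ins, not libvirt's real API:

```c
#include <stdbool.h>
#include <stddef.h>

/* Minimal model of virNetServer's client bookkeeping: a client count,
 * a limit, and a flag standing in for the event-loop listener watches. */
typedef struct {
    size_t nclients;
    size_t nclients_max;
    bool accepting;   /* models the enabled/disabled listener services */
} Server;

/* Stands in for virNetServerUpdateServicesLocked(). */
static void server_update_services(Server *srv, bool enabled)
{
    srv->accepting = enabled;
}

/* Models virNetServerAddClient(): reject at the limit, and stop
 * accept()ing on the transition *to* the limit. */
static int server_add_client(Server *srv)
{
    if (srv->nclients >= srv->nclients_max)
        return -1;                           /* would overcommit */
    srv->nclients++;
    if (srv->nclients == srv->nclients_max)
        server_update_services(srv, false);  /* suspend listeners */
    return 0;
}

/* Models the close-handling loop in virNetServerRun(): re-enable
 * listeners only on the max -> max-1 transition. */
static void server_remove_client(Server *srv)
{
    srv->nclients--;
    if (srv->nclients == srv->nclients_max - 1)
        server_update_services(srv, true);   /* resume listeners */
}
```

Because the toggles fire only on the two boundary transitions, the listener watches are not churned on every connect/disconnect; clients beyond the limit simply wait in the kernel's listen() queue until a slot opens.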

On 07/26/2013 01:13 AM, Michal Privoznik wrote:
Currently, even if the max_clients limit is hit, we accept() the incoming connection request but close it immediately. This has the disadvantage of not using the listen() queue. We should accept() only those clients we know we can serve and let all the others wait in the (limited) queue. ---
diff to v1: -moved logic from virNetServerDispatchCheck to virNetServerAddClient
src/rpc/virnetserver.c | 33 ++++++++++++++++++++++++++++----- 1 file changed, 28 insertions(+), 5 deletions(-)
Looks like a better location to me. You may still want to wait for Dan's review, but you have my ACK, and I think this is probably safe for 1.1.1 as a bug-fix. -- Eric Blake eblake redhat com +1-919-301-3266 Libvirt virtualization library http://libvirt.org

On Fri, Jul 26, 2013 at 09:13:43AM +0200, Michal Privoznik wrote:
Currently, even if the max_clients limit is hit, we accept() the incoming connection request but close it immediately. This has the disadvantage of not using the listen() queue. We should accept() only those clients we know we can serve and let all the others wait in the (limited) queue. ---
diff to v1: -moved logic from virNetServerDispatchCheck to virNetServerAddClient
src/rpc/virnetserver.c | 33 ++++++++++++++++++++++++++++----- 1 file changed, 28 insertions(+), 5 deletions(-)
ACK, I like this version more. NB, I think we must wait till post-1.1.1 to push this, since it is really a new feature and so not suitable for freeze policy. Daniel