[libvirt] [PATCH 0/5] Fix misc problems in the RPC code

This series of patches fixes a couple of memory leaks, one crash, and a missing reply to the final RPC message when a connection is closed.

From: "Daniel P. Berrange" <berrange@redhat.com>

If a client disconnects while it has a stream active, there is a race
condition which could see libvirtd crash. This is because the client
struct may be freed before the last stream event has triggered. THis is
trivially solved by holding an extra reference on the client for the
stream callback

* daemon/stream.c: Acquire reference on client when adding the stream
  callback
---
 daemon/stream.c |   13 ++++++++++++-
 1 files changed, 12 insertions(+), 1 deletions(-)

diff --git a/daemon/stream.c b/daemon/stream.c
index 56d79c2..28f6c32 100644
--- a/daemon/stream.c
+++ b/daemon/stream.c
@@ -104,6 +104,15 @@ daemonStreamMessageFinished(virNetMessagePtr msg,
     daemonStreamUpdateEvents(stream);
 }
 
+
+static void
+daemonStreamEventFreeFunc(void *opaque)
+{
+    virNetServerClientPtr client = opaque;
+
+    virNetServerClientFree(client);
+}
+
 /*
  * Callback that gets invoked when a stream becomes writable/readable
  */
@@ -361,9 +370,11 @@ int daemonAddClientStream(virNetServerClientPtr client,
     }
 
     if (virStreamEventAddCallback(stream->st, 0,
-                                  daemonStreamEvent, client, NULL) < 0)
+                                  daemonStreamEvent, client,
+                                  daemonStreamEventFreeFunc) < 0)
         return -1;
 
+    virNetServerClientRef(client);
     if ((stream->filterID = virNetServerClientAddFilter(client,
                                                         daemonStreamFilter,
                                                         stream)) < 0) {
-- 
1.7.6

On 07/08/2011 05:57 AM, Daniel P. Berrange wrote:
> From: "Daniel P. Berrange" <berrange@redhat.com>
>
> If a client disconnects while it has a stream active, there is a race condition which could see libvirtd crash. This is because the client struct may be freed before the last stream event has triggered. THis is trivially solved by holding an extra reference

s/THis/This/

> on the client for the stream callback
>
> * daemon/stream.c: Acquire reference on client when adding the stream
>   callback
> ---
>  daemon/stream.c |   13 ++++++++++++-
>  1 files changed, 12 insertions(+), 1 deletions(-)
ACK.

-- 
Eric Blake   eblake@redhat.com    +1-801-349-2682
Libvirt virtualization library http://libvirt.org

From: "Daniel P. Berrange" <berrange@redhat.com>

In one exit path we forgot to free the virNetMessage object causing
a large memory leak for streams which send alot of data. Some other
paths were calling VIR_FREE directly instead of virNetMessageFree
although this was (currently) harmless.

* src/rpc/virnetclientstream.c: Fix leak of msg object
* src/rpc/virnetclientprogram.c: Call virNetMessageFree instead of
  VIR_FREE
---
 src/rpc/virnetclientprogram.c |    4 ++--
 src/rpc/virnetclientstream.c  |    3 ++-
 2 files changed, 4 insertions(+), 3 deletions(-)

diff --git a/src/rpc/virnetclientprogram.c b/src/rpc/virnetclientprogram.c
index 8414ad8..c39520a 100644
--- a/src/rpc/virnetclientprogram.c
+++ b/src/rpc/virnetclientprogram.c
@@ -329,11 +329,11 @@ int virNetClientProgramCall(virNetClientProgramPtr prog,
         goto error;
     }
 
-    VIR_FREE(msg);
+    virNetMessageFree(msg);
 
     return 0;
 
 error:
-    VIR_FREE(msg);
+    virNetMessageFree(msg);
     return -1;
 }
 
diff --git a/src/rpc/virnetclientstream.c b/src/rpc/virnetclientstream.c
index d5efab1..fe15acd 100644
--- a/src/rpc/virnetclientstream.c
+++ b/src/rpc/virnetclientstream.c
@@ -361,11 +361,12 @@ int virNetClientStreamSendPacket(virNetClientStreamPtr st,
     if (virNetClientSend(client, msg, wantReply) < 0)
         goto error;
 
+    virNetMessageFree(msg);
     return nbytes;
 
 error:
-    VIR_FREE(msg);
+    virNetMessageFree(msg);
     return -1;
 }
-- 
1.7.6

On 07/08/2011 05:57 AM, Daniel P. Berrange wrote:
> From: "Daniel P. Berrange" <berrange@redhat.com>
>
> In one exit path we forgot to free the virNetMessage object causing a large memory leak for streams which send alot of data. Some other

s/alot/a lot/

> paths were calling VIR_FREE directly instead of virNetMessageFree although this was (currently) harmless.
>
> * src/rpc/virnetclientstream.c: Fix leak of msg object
> * src/rpc/virnetclientprogram.c: Call virNetMessageFree instead of VIR_FREE
> ---
>  src/rpc/virnetclientprogram.c |    4 ++--
>  src/rpc/virnetclientstream.c  |    3 ++-
>  2 files changed, 4 insertions(+), 3 deletions(-)
ACK.

From: "Daniel P. Berrange" <berrange@redhat.com>

When sending back the final OK or ERROR message on completion of
a stream, we were not decrementing the 'nrequests' tracker on the
client. With the default requests limit of '5', this meant once a
client had created 5 streams, it was unable to process any further
RPC calls.

There was also a bug when handling an error from decoding a message
length header, which meant a client connection would not immediately
be closed.

* src/rpc/virnetserverclient.c: Fix release of request after stream
  completion & mark client for close on error
---
 src/rpc/virnetserverclient.c |    8 ++++++--
 1 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/src/rpc/virnetserverclient.c b/src/rpc/virnetserverclient.c
index 30d7fcb..6aeb3a4 100644
--- a/src/rpc/virnetserverclient.c
+++ b/src/rpc/virnetserverclient.c
@@ -700,8 +700,10 @@ readmore:
 
     /* Either done with length word header */
     if (client->rx->bufferLength == VIR_NET_MESSAGE_LEN_MAX) {
-        if (virNetMessageDecodeLength(client->rx) < 0)
+        if (virNetMessageDecodeLength(client->rx) < 0) {
+            client->wantClose = true;
             return;
+        }
 
         virNetServerClientUpdateEvent(client);
 
@@ -831,7 +833,9 @@ virNetServerClientDispatchWrite(virNetServerClientPtr client)
         /* Get finished msg from head of tx queue */
         msg = virNetMessageQueueServe(&client->tx);
 
-        if (msg->header.type == VIR_NET_REPLY) {
+        if (msg->header.type == VIR_NET_REPLY ||
+            (msg->header.type == VIR_NET_STREAM &&
+             msg->header.status != VIR_NET_CONTINUE)) {
             client->nrequests--;
             /* See if the recv queue is currently throttled */
             if (!client->rx &&
-- 
1.7.6

On 07/08/2011 05:57 AM, Daniel P. Berrange wrote:
> From: "Daniel P. Berrange" <berrange@redhat.com>
>
> When sending back the final OK or ERROR message on completion of a stream, we were not decrementing the 'nrequests' tracker on the client. With the default requests limit of '5', this meant once a client had created 5 streams, it was unable to process any further RPC calls.
>
> There was also a bug when handling an error from decoding a message length header, which meant a client connection would not immediately be closed.
>
> * src/rpc/virnetserverclient.c: Fix release of request after stream
>   completion & mark client for close on error
> ---
>  src/rpc/virnetserverclient.c |    8 ++++++--
>  1 files changed, 6 insertions(+), 2 deletions(-)
ACK.

From: "Daniel P. Berrange" <berrange@redhat.com>

When closing a remote connection we issue a (fairly pointless) 'CLOSE'
RPC call to the daemon. If this fails we skip all the cleanup of
private data, but the virConnectPtr object still gets released as
normal. This causes a memory leak. Since the CLOSE RPC call is pretty
pointless, just carry on freeing the remote driver if it fails.

* src/remote/remote_driver.c: Ignore failure to issue CLOSE RPC call
---
 src/remote/remote_driver.c |    6 ++++--
 1 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/src/remote/remote_driver.c b/src/remote/remote_driver.c
index 4eea3a4..da04a93 100644
--- a/src/remote/remote_driver.c
+++ b/src/remote/remote_driver.c
@@ -824,10 +824,12 @@ get_transport_from_scheme (char *scheme)
 static int
 doRemoteClose (virConnectPtr conn, struct private_data *priv)
 {
+    int ret = 0;
+
     if (call (conn, priv, 0, REMOTE_PROC_CLOSE,
               (xdrproc_t) xdr_void, (char *) NULL,
               (xdrproc_t) xdr_void, (char *) NULL) == -1)
-        return -1;
+        ret = -1;
 
     virNetTLSContextFree(priv->tls);
     priv->tls = NULL;
@@ -846,7 +848,7 @@ doRemoteClose (virConnectPtr conn, struct private_data *priv)
     virDomainEventStateFree(priv->domainEventState);
     priv->domainEventState = NULL;
 
-    return 0;
+    return ret;
 }
 
 static int
-- 
1.7.6

On 07/08/2011 05:57 AM, Daniel P. Berrange wrote:
> From: "Daniel P. Berrange" <berrange@redhat.com>
>
> When closing a remote connection we issue a (fairly pointless) 'CLOSE' RPC call to the daemon. If this fails we skip all the cleanup of private data, but the virConnectPtr object still gets released as normal. This causes a memory leak. Since the CLOSE RPC call is pretty pointless, just carry on freeing the remote driver if it fails.
>
> * src/remote/remote_driver.c: Ignore failure to issue CLOSE RPC call
> ---
>  src/remote/remote_driver.c |    6 ++++--
>  1 files changed, 4 insertions(+), 2 deletions(-)
ACK.

From: "Daniel P. Berrange" <berrange@redhat.com>

The dispatch for the CLOSE RPC call was invoking the method
virNetServerClientClose(). This caused the client connection to be
immediately terminated. This meant the reply to the final RPC message
was never sent. Prior to the RPC rewrite we merely flagged the
connection for closing, and actually closed it when the next RPC call
dispatch had completed.

* daemon/remote.c: Flag connection for a delayed close
* daemon/stream.c: Update to use new API for closing failed connection
* src/rpc/virnetserverclient.c, src/rpc/virnetserverclient.h: Add
  support for a delayed connection close. Rename the
  virNetServerClientMarkClose method to virNetServerClientImmediateClose
  to clarify its semantics
---
 daemon/remote.c              |    4 ++--
 daemon/stream.c              |    6 +++---
 src/rpc/virnetserverclient.c |   13 ++++++++++++-
 src/rpc/virnetserverclient.h |    5 +++--
 4 files changed, 20 insertions(+), 8 deletions(-)

diff --git a/daemon/remote.c b/daemon/remote.c
index 2889908..a2e79ef 100644
--- a/daemon/remote.c
+++ b/daemon/remote.c
@@ -483,11 +483,11 @@ cleanup:
 
 static int
 remoteDispatchClose(virNetServerPtr server ATTRIBUTE_UNUSED,
-                    virNetServerClientPtr client,
+                    virNetServerClientPtr client ATTRIBUTE_UNUSED,
                     virNetMessageHeaderPtr hdr ATTRIBUTE_UNUSED,
                     virNetMessageErrorPtr rerr ATTRIBUTE_UNUSED)
 {
-    virNetServerClientClose(client);
+    virNetServerClientDelayedClose(client);
     return 0;
 }
 
diff --git a/daemon/stream.c b/daemon/stream.c
index 28f6c32..4a8f1ee 100644
--- a/daemon/stream.c
+++ b/daemon/stream.c
@@ -338,7 +338,7 @@ int daemonFreeClientStream(virNetServerClientPtr client,
         memset(msg, 0, sizeof(*msg));
         msg->header.type = VIR_NET_REPLY;
         if (virNetServerClientSendMessage(client, msg) < 0) {
-            virNetServerClientMarkClose(client);
+            virNetServerClientImmediateClose(client);
             virNetMessageFree(msg);
             ret = -1;
         }
@@ -608,7 +608,7 @@ daemonStreamHandleWrite(virNetServerClientPtr client,
         virNetMessageQueueServe(&stream->rx);
         if (ret < 0) {
             virNetMessageFree(msg);
-            virNetServerClientMarkClose(client);
+            virNetServerClientImmediateClose(client);
             return -1;
         }
 
@@ -623,7 +623,7 @@ daemonStreamHandleWrite(virNetServerClientPtr client,
             msg->header.type = VIR_NET_REPLY;
             if (virNetServerClientSendMessage(client, msg) < 0) {
                 virNetMessageFree(msg);
-                virNetServerClientMarkClose(client);
+                virNetServerClientImmediateClose(client);
                 return -1;
             }
         }
diff --git a/src/rpc/virnetserverclient.c b/src/rpc/virnetserverclient.c
index 6aeb3a4..742c3a4 100644
--- a/src/rpc/virnetserverclient.c
+++ b/src/rpc/virnetserverclient.c
@@ -61,6 +61,7 @@ struct _virNetServerClient {
     int refs;
     bool wantClose;
+    bool delayedClose;
     virMutex lock;
     virNetSocketPtr sock;
     int auth;
@@ -587,7 +588,14 @@ bool virNetServerClientIsClosed(virNetServerClientPtr client)
     return closed;
 }
 
-void virNetServerClientMarkClose(virNetServerClientPtr client)
+void virNetServerClientDelayedClose(virNetServerClientPtr client)
+{
+    virNetServerClientLock(client);
+    client->delayedClose = true;
+    virNetServerClientUnlock(client);
+}
+
+void virNetServerClientImmediateClose(virNetServerClientPtr client)
 {
     virNetServerClientLock(client);
     client->wantClose = true;
@@ -852,6 +860,9 @@ virNetServerClientDispatchWrite(virNetServerClientPtr client)
             virNetMessageFree(msg);
 
             virNetServerClientUpdateEvent(client);
+
+            if (client->delayedClose)
+                client->wantClose = true;
         }
     }
 }
diff --git a/src/rpc/virnetserverclient.h b/src/rpc/virnetserverclient.h
index 66510c3..3d2e1fb 100644
--- a/src/rpc/virnetserverclient.h
+++ b/src/rpc/virnetserverclient.h
@@ -86,9 +86,10 @@ void virNetServerClientSetDispatcher(virNetServerClientPtr client,
                                      virNetServerClientDispatchFunc func,
                                      void *opaque);
 void virNetServerClientClose(virNetServerClientPtr client);
-
 bool virNetServerClientIsClosed(virNetServerClientPtr client);
-void virNetServerClientMarkClose(virNetServerClientPtr client);
+
+void virNetServerClientDelayedClose(virNetServerClientPtr client);
+void virNetServerClientImmediateClose(virNetServerClientPtr client);
 bool virNetServerClientWantClose(virNetServerClientPtr client);
 
 int virNetServerClientInit(virNetServerClientPtr client);
-- 
1.7.6

On 07/08/2011 05:57 AM, Daniel P. Berrange wrote:
> From: "Daniel P. Berrange" <berrange@redhat.com>
>
> The dispatch for the CLOSE RPC call was invoking the method virNetServerClientClose(). This caused the client connection to be immediately terminated. This meant the reply to the final RPC message was never sent. Prior to the RPC rewrite we merely flagged the connection for closing, and actually closed it when the next RPC call dispatch had completed.
>
> * daemon/remote.c: Flag connection for a delayed close
> * daemon/stream.c: Update to use new API for closing failed connection
> * src/rpc/virnetserverclient.c, src/rpc/virnetserverclient.h: Add
>   support for a delayed connection close. Rename the
>   virNetServerClientMarkClose method to virNetServerClientImmediateClose
>   to clarify its semantics
> ---
>  daemon/remote.c              |    4 ++--
>  daemon/stream.c              |    6 +++---
>  src/rpc/virnetserverclient.c |   13 ++++++++++++-
>  src/rpc/virnetserverclient.h |    5 +++--
>  4 files changed, 20 insertions(+), 8 deletions(-)
ACK.