
On 08/02/2011 04:41 AM, Daniel P. Berrange wrote:
> On Mon, Aug 01, 2011 at 01:53:19PM -0600, Eric Blake wrote:
>> Steps to reproduce this problem (vm1 is not running):
>>   for i in `seq 50`; do virsh managedsave vm1& done; killall virsh
>>
>> Pre-patch, virNetServerClientClose could end up setting client->sock
>> to NULL prior to other cleanup functions trying to use client->sock.
>> This fixes things by checking for NULL in more places, and by
>> deferring the cleanup until after all queued messages have been
>> served.
>>
>> @@ -570,10 +577,7 @@ void virNetServerClientClose(virNetServerClientPtr client)
>>          virNetTLSSessionFree(client->tls);
>>          client->tls = NULL;
>>      }
>> -    if (client->sock) {
>> -        virNetSocketFree(client->sock);
>> -        client->sock = NULL;
>> -    }
>> +    client->wantClose = true;
>>
>>      while (client->rx) {
>>          virNetMessagePtr msg
>>
>> @@ -586,6 +590,11 @@ void virNetServerClientClose(virNetServerClientPtr client)
>>          virNetMessageFree(msg);
>>      }
>>
>> +    if (client->sock) {
>> +        virNetSocketFree(client->sock);
>> +        client->sock = NULL;
>> +    }
>> +
>>      virNetServerClientUnlock(client);
>>  }
> I'm curious why you have these last 2 hunks of the patch?  The
> entirety of virNetServerClientClose() is executed under the mutex,
> so AFAICT moving these 4 lines to later in the function can have no
> effect.  The assignment to wantClose ought not to be needed at this
> point either, since this flag is used by the server to decide to
> invoke the virNetServerClientClose method.

I did it because virNetMessageFree calls a callback, and I didn't want to audit whether that callback might temporarily drop the mutex or reference client->sock.
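
To make the concern concrete, here is a minimal, self-contained sketch
of the ordering hazard (illustrative only, not the actual libvirt
code: the Sock/Msg/Client types and the done callback are invented
stand-ins for virNetSocket, virNetMessage, and a message-release
callback).  If freeing a queued message runs a callback that touches
the socket, draining the queue before freeing the socket keeps that
callback safe:

/* sketch.c: illustrative only; types and the callback are invented,
 * not the real virNetServerClient/virNetMessage API. */
#include <stdio.h>
#include <stdlib.h>

typedef struct Sock { int fd; } Sock;

typedef struct Msg {
    struct Msg *next;
    void (*done)(Sock *sock);     /* fired when the message is freed */
} Msg;

typedef struct Client {
    Sock *sock;
    Msg *rx;                      /* queue of unprocessed messages */
} Client;

/* A callback that dereferences the socket; with the pre-patch
 * ordering, client->sock would already be NULL by the time it ran. */
static void notify_done(Sock *sock)
{
    if (sock)
        printf("completing message on fd %d\n", sock->fd);
    else
        printf("socket already gone!\n");
}

static void client_close(Client *client)
{
    /* Post-patch ordering: serve and free every queued message first,
     * while client->sock is still valid for any callback ... */
    while (client->rx) {
        Msg *msg = client->rx;
        client->rx = msg->next;
        if (msg->done)
            msg->done(client->sock);
        free(msg);
    }

    /* ... and only then release the socket. */
    if (client->sock) {
        free(client->sock);
        client->sock = NULL;
    }
}

int main(void)
{
    Client c = { .sock = malloc(sizeof(Sock)), .rx = NULL };
    Msg *m = calloc(1, sizeof(Msg));

    c.sock->fd = 7;
    m->done = notify_done;
    c.rx = m;

    client_close(&c);             /* prints: completing message on fd 7 */
    return 0;
}

With the pre-patch ordering (socket freed and NULLed first), the same
callback would see a NULL socket; deferring the free makes the order
harmless no matter what the callback does.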

> That all said, these 2 hunks are at worst harmless, so ACK to the
> patch.

Fair enough, so I pushed as-is.

--
Eric Blake   eblake@redhat.com    +1-801-349-2682
Libvirt virtualization library http://libvirt.org