On Fri, Oct 08, 2021 at 01:56:24PM +0200, Michal Prívozník wrote:
On 10/8/21 1:36 PM, Daniel P. Berrangé wrote:
> On Fri, Oct 08, 2021 at 01:12:49PM +0200, Michal Privoznik wrote:
>> When a server decides to close a client,
>> virNetServerClientCloseLocked() is called. Here various cleanup
>> steps are taken, but the most important part (from this commit's
>> POV at least) is the way the socket is closed. First, removal of
>> the client's socket from the event loop is signalled, and then the
>> socket is unrefed. The socket is not closed just yet though,
>> because the event loop still holds a reference to it. That
>> reference is released as soon as the event loop wakes up and
>> starts issuing callbacks (in this case virNetSocketEventFree()).
>>
>> So far, this is how things usually work. But if the daemon hits
>> its limit on the number of open files, things start to work
>> differently.
>>
>> If the RLIMIT_NOFILE limit is reached and there's a client that
>> wants to connect, then the event loop wakes up, sees POLLIN on
>> the socket and calls virNetServerServiceAccept(), which in turn
>> calls virNetSocketAccept(). But because of the limit, accept()
>> fails with EMFILE, leaving the POLLIN event unhandled. The
>> dispatch then continues to the next FDs with events on them. BUT,
>> it will NOT call the socket removal callback
>> (virNetSocketEventFree()) because that has a low priority
>> (G_PRIORITY_DEFAULT_IDLE). Per glib's documentation:
>
> if virNetSocketAccept fails, there's no client socket to
> remove/close. AFAIK, we are not removing the server
> socket either.
>
> So I'm not following which socket is supposedly being
> removed in this scenario ?
Right, I didn't phrase it properly. Let's call
/var/run/libvirt/libvirt-sock SockA and the accepted (i.e. per-client)
socket SockB.

Let the NOFILE limit be just the right value so that both SockA and
SockB can exist, but nothing more. Now, let another client try to
connect(). The event loop wakes up because there's an event on SockA,
but the connection can't be accept()-ed. Alright, so let the already
connected client (that is, SockB) disconnect. This results in:

  g_idle_add(virEventGLibHandleRemoveIdle, SockB);

(I've made a shortcut here; in reality there's a data variable that
points to SockB, but you get the idea.)

Now, because we still haven't handled the connection attempt on SockA,
the idle source won't ever be dispatched, because it has a lower
priority. Therefore, the event loop is stuck in an endless loop trying
to accept() a connection on SockA.
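The same starvation is easy to show with a tiny standalone glib
program (just a sketch for illustration, not libvirt code, assuming
glib-2.0 is available): an fd source at G_PRIORITY_DEFAULT that is
never drained keeps the loop busy, so a callback added with
g_idle_add() at G_PRIORITY_DEFAULT_IDLE is never dispatched:

  /* Build with: gcc demo.c $(pkg-config --cflags --libs glib-2.0) */
  #include <glib.h>
  #include <glib-unix.h>
  #include <unistd.h>
  #include <stdio.h>

  static gboolean
  on_fd_ready(gint fd, GIOCondition cond, gpointer opaque)
  {
      /* Plays the role of SockA: the pending byte is never read, so
       * POLLIN fires again on every loop iteration. */
      fprintf(stderr, "fd ready, not draining it\n");
      return G_SOURCE_CONTINUE;
  }

  static gboolean
  on_idle(gpointer opaque)
  {
      /* Plays the role of virEventGLibHandleRemoveIdle() closing
       * SockB; with the fd source above always pending, this never
       * runs. */
      fprintf(stderr, "idle callback ran\n");
      return G_SOURCE_REMOVE;
  }

  int
  main(void)
  {
      GMainLoop *loop = g_main_loop_new(NULL, FALSE);
      int fds[2];

      /* A pipe with one unread byte stands in for the listening
       * socket with a connection we never accept(). */
      if (pipe(fds) < 0 || write(fds[1], "x", 1) != 1)
          return 1;

      g_unix_fd_add(fds[0], G_IO_IN, on_fd_ready, NULL); /* G_PRIORITY_DEFAULT */
      g_idle_add(on_idle, NULL);                         /* G_PRIORITY_DEFAULT_IDLE */

      g_main_loop_run(loop);
      return 0;
  }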
Ah, I see: the "idle" callbacks aren't called because the event loop
isn't idle.
So with this change, we'll let the idle callback run, which will close
the FD, which will let libvirtd accept the client, which gets us out
of the 100% CPU burn loop..... unless there were two pending
clients....
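For reference, in glib G_PRIORITY_HIGH is -100, G_PRIORITY_DEFAULT is
0 and G_PRIORITY_DEFAULT_IDLE is 200, so the gist of the change is
presumably along these lines (a sketch only, not the literal patch,
with "data" standing in for whatever opaque pointer the real code
passes):

  /* Before: the removal callback is starved while any default
   * priority source (e.g. the listening socket) keeps reporting
   * events. */
  g_idle_add(virEventGLibHandleRemoveIdle, data);

  /* After: schedule the removal at a higher priority than the fd
   * sources, so it is dispatched before them. */
  g_idle_add_full(G_PRIORITY_HIGH, virEventGLibHandleRemoveIdle,
                  data, NULL);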
So, ACK to this patch as a quick fix, but it feels like we
ultimately have a more serious problem.
I wonder if we should temporarily stop watching events on the incoming
server socket the first time we get EMFILE on accept, and set a timer
to re-enable events 60 seconds later. Rinse, repeat.
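Purely as a sketch of that idea (server_fd, on_incoming() and
reenable_accept() are made-up names; the real code would go through
libvirt's own event APIs rather than raw glib calls, and the loopback
port here is arbitrary):

  #include <glib.h>
  #include <glib-unix.h>
  #include <sys/socket.h>
  #include <netinet/in.h>
  #include <arpa/inet.h>
  #include <unistd.h>
  #include <errno.h>

  static int server_fd = -1;

  static gboolean on_incoming(gint fd, GIOCondition cond, gpointer opaque);

  static gboolean
  reenable_accept(gpointer opaque)
  {
      /* Timer fired: start watching the listening socket again. */
      g_unix_fd_add(server_fd, G_IO_IN, on_incoming, NULL);
      return G_SOURCE_REMOVE;
  }

  static gboolean
  on_incoming(gint fd, GIOCondition cond, gpointer opaque)
  {
      int client_fd = accept(fd, NULL, NULL);

      if (client_fd < 0) {
          if (errno == EMFILE || errno == ENFILE) {
              /* Stop polling the listening socket so the loop doesn't
               * spin on POLLIN, and retry in 60 seconds. */
              g_timeout_add_seconds(60, reenable_accept, NULL);
              return G_SOURCE_REMOVE;  /* drops this fd watch */
          }
          return G_SOURCE_CONTINUE;
      }

      /* A real server would hand client_fd off here; the demo just
       * closes it. */
      close(client_fd);
      return G_SOURCE_CONTINUE;
  }

  int
  main(void)
  {
      GMainLoop *loop = g_main_loop_new(NULL, FALSE);
      struct sockaddr_in addr = { 0 };

      addr.sin_family = AF_INET;
      addr.sin_port = htons(12345);
      addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

      server_fd = socket(AF_INET, SOCK_STREAM, 0);
      if (server_fd < 0 ||
          bind(server_fd, (struct sockaddr *)&addr, sizeof(addr)) < 0 ||
          listen(server_fd, SOMAXCONN) < 0)
          return 1;

      g_unix_fd_add(server_fd, G_IO_IN, on_incoming, NULL);
      g_main_loop_run(loop);
      return 0;
  }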
>> * Each event source is assigned a priority. The default priority,
>> * %G_PRIORITY_DEFAULT, is 0. Values less than 0 denote higher priorities.
>> * Values greater than 0 denote lower priorities. Events from high priority
>> * sources are always processed before events from lower priority sources.
>>
>> and per g_idle_add() documentation:
>>
>> * Adds a function to be called whenever there are no higher priority
>> * events pending to the default main loop. The function is given the
>> * default idle priority, %G_PRIORITY_DEFAULT_IDLE.
>>
>> Now, because we did not accept() the client, we are constantly
>> seeing POLLIN on the main socket and thus the removal of the
>> client socket won't ever happen.
>>
>> The fix is to give the removal callback at least the same priority
>> as the other sources, but since all we want is to close an FD,
>> let's give it the highest priority and call it before handling
>> other events.
>>
>> This issue can be easily reproduced, for instance:
>>
>> # ulimit -S -n 40 (tweak this number if needed)
>> # ./src/libvirtd
>>
>> from another terminal:
>>
>> # for ((i=0; i<100; i++)); do virsh list & done; virsh list
>>
>> The last `virsh list` must not get stuck.
>
> The bug below describes libvirt core dumps rather than
> hangs. Do you know why it dumped ?
Because of glib. We called some glib function which internally wanted
to open a pipe; that failed, so glib aborted. I don't remember the
exact function, but I can look it up if you want.
Oh, so this patch is not actually fixing the core dump scenario then.
Even after we fix the bug that prevents clients disconnecting, it will
still dump if this API is called at a point where we are in an EMFILE
state?
Regards,
Daniel
--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|