Quite a long mail, I'm skipping all but the original report here.
From: Ihar Smertsin <I.Smertsin(a)sam-solutions.net>
To: Aliaksandr Chabatar <A.Chabatar(a)sam-solutions.net>
Date: Tue, 8 Feb 2011 15:05:59 +0200
Subject: Libvirt issues in windows
Hello Alexandr,
We have detected the following errors in the Windows build of the client library:
1. Calling certain library functions causes the number of resource handles to grow.
The growth of handles can be observed as follows:
start Task Manager;
start the virsh utility;
connect it to a server running the libvirtd service (virsh -c qemu+tcp://192.168.117.107/system);
execute any command, for example list;
if we keep running the list command, Task Manager shows the handle count
increasing by about 5-6 each time.
Exactly the same happens whenever certain library functions are called, such as
virConnectListDomains, virConnectNumOfDomains, virDomainLookupByID,
virDomainLookupByName and others.
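The same growth can also be observed directly from the C API without virsh.
A minimal sketch, assuming a Windows build of the libvirt client library and
using the test host address from above (error handling kept to a minimum):

  #define _WIN32_WINNT 0x0501  /* GetProcessHandleCount needs XP SP1 or later */
  #include <stdio.h>
  #include <windows.h>
  #include <libvirt/libvirt.h>

  int main(void)
  {
      virConnectPtr conn;
      DWORD handles;
      int ids[64];
      int i;

      conn = virConnectOpen("qemu+tcp://192.168.117.107/system");
      if (conn == NULL) {
          fprintf(stderr, "failed to connect\n");
          return 1;
      }

      for (i = 0; i < 10; i++) {
          /* Each pair of calls triggers remote communication; with the
           * leak present the process handle count keeps climbing. */
          virConnectNumOfDomains(conn);
          virConnectListDomains(conn, ids, 64);

          handles = 0;
          GetProcessHandleCount(GetCurrentProcess(), &handles);
          printf("iteration %d: %lu handles\n", i, (unsigned long)handles);
      }

      virConnectClose(conn);
      return 0;
  }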
I was able to reproduce this problem and have fixed it. See
https://www.redhat.com/archives/libvir-list/2011-March/msg00809.html
for the patch. The next libvirt release, scheduled for the end of March
(IIRC), will contain it.
The problem was that libvirt uses a condition variable during remote
communication. This condition variable wasn't freed correctly, which
leaked one handle per remote call. virsh commands like list perform
multiple remote calls and therefore leak multiple handles at once.
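To illustrate the shape of the bug (this is not the actual libvirt code, just
the general Win32 pattern involved): a condition-variable-style primitive is
backed by an event handle that is created for a remote call, and the cleanup
step was missing.

  #include <windows.h>

  /* Illustrative sketch only, not libvirt source: a per-call event
   * standing in for the condition variable's backing handle. */
  static void remote_call(void)
  {
      HANDLE ev = CreateEvent(NULL, FALSE, FALSE, NULL);
      if (ev == NULL)
          return;

      /* ... wait on ev until the reply has been processed ... */

      /* The missing cleanup: without CloseHandle(ev) every remote call
       * leaves one more handle behind, which is the growth visible in
       * Task Manager. */
      CloseHandle(ev);
  }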
2. In some cases CPU usage grows. This happened in the following
situation:
We have a service that detects different types of virtualization systems, including KVM.
This service uses the libvirt client library. Our network has a host running
Windows 2008, which supports Hyper-V virtualization.
Our service tries to determine the virtualization type by attempting to connect to the
system over different protocols.
When it is KVM's turn, the following happens. We call virConnectOpen
with the URI qemu+tcp://192.168.117.178:135/system and it fails.
(Error: internal error received hangup/error event on socket)
After that our service starts consuming 20-25% of the CPU.
I didn't look into this one in detail yet. It could be that something
is not properly cleaned up when this error occurs, and this results in the
high CPU load. This will need some further investigation.
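For whoever picks this up, a minimal standalone probe should be enough to
reproduce the failing open. The URI is taken from the report; everything else
is an assumption about what such a detection probe might look like, not the
reporter's actual service code:

  #include <stdio.h>
  #include <libvirt/libvirt.h>
  #include <libvirt/virterror.h>

  /* Try to open one candidate URI; report the libvirt error on failure. */
  static int probe(const char *uri)
  {
      virConnectPtr conn = virConnectOpen(uri);
      virErrorPtr err;

      if (conn == NULL) {
          err = virGetLastError();
          fprintf(stderr, "probe of %s failed: %s\n",
                  uri, (err && err->message) ? err->message : "unknown error");
          return 0;
      }

      virConnectClose(conn);
      return 1;
  }

  int main(void)
  {
      /* Port 135 on the Windows 2008 host is the RPC endpoint mapper,
       * not libvirtd, so this open is expected to fail; the question is
       * why the failure leaves the calling process burning CPU. */
      probe("qemu+tcp://192.168.117.178:135/system");
      return 0;
  }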
Matthias