[libvirt] Problem of host CPU topology parsing

Hi,

We have a problem with host CPU topology parsing on certain platforms (common platforms are fine). E.g. on an AMD machine with 48 CPUs [1] (4 sockets and 6 cores in fact [2]), VIR_NODEINFO_MAXCPUS [3] always returns 24 as the total CPU number.

As a result, a domain without "cpuset" or "placement='auto'" (which drives numad) is pinned to only part of the host CPUs (on the 48-CPU AMD machine, the domain process is pinned to only the first 24 CPUs), which causes a performance loss. It is actually a functional bug too, as the "cpuset" specified by the user can be truncated. And if a domain uses "placement='auto'" and the advisory nodeset returned from numad contains node(s) beyond the wrong max CPU number, the domain fails to start, because the bitmask passed to sched_setaffinity ends up filled entirely with zeros.

"nodeinfo.cpus" does hold the right number, but I'm not sure whether it should be used as the max CPU number. VIR_NODEINFO_MAXCPUS is used in many places, and I don't want to fix something here only to break other things there.

Does anyone have thoughts on how to sort this funky topology out?

[1] http://fpaste.org/MTtz/
[2] http://fpaste.org/mtoA/, http://fpaste.org/EPLd/
[3] #define VIR_NODEINFO_MAXCPUS(nodeinfo) ((nodeinfo).nodes*(nodeinfo).sockets*(nodeinfo).cores*(nodeinfo).threads)

Regards,
Osier
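For context, a minimal standalone sketch of how the macro from [3] multiplies out. The field values below are an assumption pieced together from the pastes and the discussion that follows (duplicate-ID filtering yields 4 sockets and 6 cores, the sockets-per-node workaround forces nodes to 1, and there is no hyperthreading); they are not values dumped from libvirt:

    #include <stdio.h>

    /* Same definition as the one quoted in [3] above. */
    #define VIR_NODEINFO_MAXCPUS(nodeinfo) \
        ((nodeinfo).nodes * (nodeinfo).sockets * (nodeinfo).cores * (nodeinfo).threads)

    struct demo_nodeinfo {
        unsigned int nodes, sockets, cores, threads;
    };

    int main(void)
    {
        /* Assumed detection results for the 48-CPU AMD box discussed here. */
        struct demo_nodeinfo ni = { .nodes = 1, .sockets = 4, .cores = 6, .threads = 1 };

        printf("VIR_NODEINFO_MAXCPUS = %u\n", VIR_NODEINFO_MAXCPUS(ni)); /* prints 24, not 48 */
        return 0;
    }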

On Fri, May 11, 2012 at 04:21:48PM +0800, Osier Yang wrote:
Hi,
We have a problem with host CPU topology parsing on certain platforms (common platforms are fine). E.g.
on an AMD machine with 48 CPUs [1] (4 sockets and 6 cores in fact [2]), VIR_NODEINFO_MAXCPUS [3] always returns 24 as the total CPU number.
If it is returning 24, then surely we have the 'nodes' value wrong in the virNodeInfo? It sounds like it should have been set to 2 (4 * 6 * 2 => 48).
[3] #define VIR_NODEINFO_MAXCPUS(nodeinfo) ((nodeinfo).nodes*(nodeinfo).sockets*(nodeinfo).cores*(nodeinfo).threads)
Regards,
Daniel

On 2012-05-11 16:35, Daniel P. Berrange wrote:
On Fri, May 11, 2012 at 04:21:48PM +0800, Osier Yang wrote:
Hi,
We have a problem with host CPU topology parsing on certain platforms (common platforms are fine). E.g.
on an AMD machine with 48 CPUs [1] (4 sockets and 6 cores in fact [2]), VIR_NODEINFO_MAXCPUS [3] always returns 24 as the total CPU number.
If it is returning 24, then surely we have the 'nodes' value wrong in the virNodeInfo? It sounds like it should have been set to 2 (4 * 6 * 2 => 48).

    /* nodeinfo->sockets is supposed to be a number of sockets per NUMA node,
     * however if NUMA nodes are not composed of whole sockets, we just lie
     * about the number of NUMA nodes and force apps to check capabilities XML
     * for the actual NUMA topology. */
    if (nodeinfo->sockets % nodeinfo->nodes == 0)
        nodeinfo->sockets /= nodeinfo->nodes;
    else
        nodeinfo->nodes = 1;

Jirka said this was added as a fix, but I don't quite understand it. What does "nodeinfo.nodes" actually mean? Shouldn't it be 8 (for the 48-CPU machine) instead? But then we would be wrong again when using VIR_NODEINFO_MAXCPUS.
[3] #define VIR_NODEINFO_MAXCPUS(nodeinfo) ((nodeinfo).nodes*(nodeinfo).sockets*(nodeinfo).cores*(nodeinfo).threads)
Regards,
Daniel

On 2012-05-11 16:40, Osier Yang wrote:
On 2012-05-11 16:35, Daniel P. Berrange wrote:
On Fri, May 11, 2012 at 04:21:48PM +0800, Osier Yang wrote:
Hi,
We have a problem with host CPU topology parsing on certain platforms (common platforms are fine). E.g.
on an AMD machine with 48 CPUs [1] (4 sockets and 6 cores in fact [2]), VIR_NODEINFO_MAXCPUS [3] always returns 24 as the total CPU number.
If it is returning 24, then surely we have the 'nodes' value wrong in the virNodeInfo? It sounds like it should have been set to 2 (4 * 6 * 2 => 48).

    /* nodeinfo->sockets is supposed to be a number of sockets per NUMA node,
     * however if NUMA nodes are not composed of whole sockets, we just lie
     * about the number of NUMA nodes and force apps to check capabilities XML
     * for the actual NUMA topology. */
    if (nodeinfo->sockets % nodeinfo->nodes == 0)
        nodeinfo->sockets /= nodeinfo->nodes;
    else
        nodeinfo->nodes = 1;

Jirka said this was added as a fix, but I don't quite understand it. What does "nodeinfo.nodes" actually mean? Shouldn't it be 8 (for the 48-CPU machine) instead? But then we would be wrong again when using VIR_NODEINFO_MAXCPUS.
The capabilities output: http://fpaste.org/0SG5/
[3] #define VIR_NODEINFO_MAXCPUS(nodeinfo) ((nodeinfo).nodes*(nodeinfo).sockets*(nodeinfo).cores*(nodeinfo).threads)
Regards,
Daniel

On 11.05.2012 10:40, Osier Yang wrote:
On 2012-05-11 16:35, Daniel P. Berrange wrote:
On Fri, May 11, 2012 at 04:21:48PM +0800, Osier Yang wrote:
Hi,
We have a problem with host CPU topology parsing on certain platforms (common platforms are fine). E.g.
on an AMD machine with 48 CPUs [1] (4 sockets and 6 cores in fact [2]), VIR_NODEINFO_MAXCPUS [3] always returns 24 as the total CPU number.
If it is returning 24, then surely we have the 'nodes' value wrong in the virNodeInfo? It sounds like it should have been set to 2 (4 * 6 * 2 => 48).

    /* nodeinfo->sockets is supposed to be a number of sockets per NUMA node,
     * however if NUMA nodes are not composed of whole sockets, we just lie
     * about the number of NUMA nodes and force apps to check capabilities XML
     * for the actual NUMA topology. */
    if (nodeinfo->sockets % nodeinfo->nodes == 0)
        nodeinfo->sockets /= nodeinfo->nodes;
    else
        nodeinfo->nodes = 1;

Jirka said this was added as a fix, but I don't quite understand it. What does "nodeinfo.nodes" actually mean? Shouldn't it be 8 (for the 48-CPU machine) instead? But then we would be wrong again when using VIR_NODEINFO_MAXCPUS.
Why do you think it will be wrong? My understanding is that VIR_NODEINFO_MAXCPUS just tells the maximum number of possible CPUs, not the actual number. So if it's over 48 we are safe.

Btw: the code above seems like a hack to me.

Michal

On Fri, May 11, 2012 at 10:47:06 +0200, Michal Privoznik wrote:
On 11.05.2012 10:40, Osier Yang wrote:
    /* nodeinfo->sockets is supposed to be a number of sockets per NUMA node,
     * however if NUMA nodes are not composed of whole sockets, we just lie
     * about the number of NUMA nodes and force apps to check capabilities XML
     * for the actual NUMA topology. */
    if (nodeinfo->sockets % nodeinfo->nodes == 0)
        nodeinfo->sockets /= nodeinfo->nodes;
    else
        nodeinfo->nodes = 1;

Jirka said this was added as a fix, but I don't quite understand it. What does "nodeinfo.nodes" actually mean? Shouldn't it be 8 (for the 48-CPU machine) instead? But then we would be wrong again when using VIR_NODEINFO_MAXCPUS.
Why do you think it will be wrong? My understanding is that VIR_NODEINFO_MAXCPUS just tells the maximum number of possible CPUs, not the actual number. So if it's over 48 we are safe.
Not really, the macro should count exactly the number of CPUs available to the host, otherwise lots of other issues (incl. backward compatibility) appear. It is just a badly named macro that should never have existed, but we can't do anything about it since it is our public API.
Btw: the code above seems like a hack to me.
Yes, it is a hack, but it's unfortunately required because we can't change the macro.

Anyway, I agree with Daniel that the bug most likely lies somewhere in the code that populates the nodeinfo structure.

Jirka

On 2012-05-11 17:01, Jiri Denemark wrote:
On Fri, May 11, 2012 at 10:47:06 +0200, Michal Privoznik wrote:
On 11.05.2012 10:40, Osier Yang wrote:
    /* nodeinfo->sockets is supposed to be a number of sockets per NUMA node,
     * however if NUMA nodes are not composed of whole sockets, we just lie
     * about the number of NUMA nodes and force apps to check capabilities XML
     * for the actual NUMA topology. */
    if (nodeinfo->sockets % nodeinfo->nodes == 0)
        nodeinfo->sockets /= nodeinfo->nodes;
    else
        nodeinfo->nodes = 1;

Jirka said this was added as a fix, but I don't quite understand it. What does "nodeinfo.nodes" actually mean? Shouldn't it be 8 (for the 48-CPU machine) instead? But then we would be wrong again when using VIR_NODEINFO_MAXCPUS.
Why do you think it will be wrong? My understanding is that VIR_NODEINFO_MAXCPUS just tells the maximum number of possible CPUs, not the actual number. So if it's over 48 we are safe.
Not really, the macro should count exactly the number of CPUs available to the host, otherwise lots of other issues (incl. backward compatibility) appear. It is just a badly named macro that should never have existed, but we can't do anything about it since it is our public API.
Btw: the code above seems like a hack to me.
Yes, it is a hack, but it's unfortunately required because we can't change the macro.
Anyway, I agree with Daniel that the bug most likely lies somewhere in the code that populates the nodeinfo structure.
Jirka
In /proc/cpuinfo:

<snip>
cpu cores : 12
</snip>

However, there are only 6 core IDs, as shown in http://fpaste.org/mtoA/. And we parse the core_id file of each CPU as:

    core = parse_core(cpu);
    if (!CPU_ISSET(core, &core_mask)) {
        CPU_SET(core, &core_mask);
        nodeinfo->cores++;
    }

and thus get only 6 cores. I don't know how the 12 in /proc/cpuinfo is figured out, but could it be a clue?

Regards,
Osier
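For readers following along, a minimal sketch of what this kind of per-CPU sysfs parsing looks like. The path is the standard Linux /sys/devices/system/cpu/cpuN/topology/core_id; the helper name and error handling here are illustrative stand-ins, not libvirt's actual parse_core():

    #include <stdio.h>

    /* Illustrative stand-in: read the core ID of a given logical CPU from
     * sysfs. Returns the core ID, or -1 on error. */
    static int demo_parse_core(unsigned int cpu)
    {
        char path[256];
        FILE *f;
        int core_id = -1;

        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/cpu%u/topology/core_id", cpu);
        if (!(f = fopen(path, "r")))
            return -1;
        if (fscanf(f, "%d", &core_id) != 1)
            core_id = -1;
        fclose(f);
        return core_id;
    }

    int main(void)
    {
        /* On the box discussed here, only 6 distinct core IDs show up
         * across all 48 logical CPUs, so deduplicating on this value
         * alone stops at 6. */
        for (unsigned int cpu = 0; cpu < 4; cpu++)
            printf("cpu%u core_id = %d\n", cpu, demo_parse_core(cpu));
        return 0;
    }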

On Fri, May 11, 2012 at 05:42:34PM +0800, Osier Yang wrote:
On 2012-05-11 17:01, Jiri Denemark wrote:
On Fri, May 11, 2012 at 10:47:06 +0200, Michal Privoznik wrote:
On 11.05.2012 10:40, Osier Yang wrote:
    /* nodeinfo->sockets is supposed to be a number of sockets per NUMA node,
     * however if NUMA nodes are not composed of whole sockets, we just lie
     * about the number of NUMA nodes and force apps to check capabilities XML
     * for the actual NUMA topology. */
    if (nodeinfo->sockets % nodeinfo->nodes == 0)
        nodeinfo->sockets /= nodeinfo->nodes;
    else
        nodeinfo->nodes = 1;

Jirka said this was added as a fix, but I don't quite understand it. What does "nodeinfo.nodes" actually mean? Shouldn't it be 8 (for the 48-CPU machine) instead? But then we would be wrong again when using VIR_NODEINFO_MAXCPUS.
Why do you think it will be wrong? My understanding is that VIR_NODEINFO_MAXCPUS just tells the maximum number of possible CPUs, not the actual number. So if it's over 48 we are safe.
Not really, the macro should count exactly the number of CPUs available to the host, otherwise lots of other issues (incl. backward compatibility) appear. It is just a badly named macro that should never have existed, but we can't do anything about it since it is our public API.
Btw: the code above seems like a hack to me.
Yes, it is a hack, but it's unfortunately required because we can't change the macro.
Anyway, I agree with Daniel that the bug most likely lies somewhere in the code that populates the nodeinfo structure.
Jirka
In /proc/cpuinfo:
<snip>
cpu cores : 12
</snip>
However, there are only 6 core IDs, as shown in http://fpaste.org/mtoA/. And we parse the core_id file of each CPU as:

    core = parse_core(cpu);
    if (!CPU_ISSET(core, &core_mask)) {
        CPU_SET(core, &core_mask);
        nodeinfo->cores++;
    }

and thus get only 6 cores. I don't know how the 12 in /proc/cpuinfo is figured out, but could it be a clue?
Ahhh. The AMD 12 "core" CPUs are in fact a pair of 6-core CPUs with 2 NUMA nodes in the CPU package itself.

http://frankdenneman.nl/2011/01/amd-magny-cours-and-esx/

"Instead of developing one CPU with 12 cores, the Magny Cours is actually two 6 core "Bulldozer" CPUs combined in to one package."

So we need to take this into account when calculating the NUMA nodes.

Daniel
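Put as arithmetic (a sketch assuming the layout described in the article, not anything libvirt reports): 4 physical packages * 2 six-core dies per package gives 8 NUMA nodes, and 8 nodes * 6 cores * 1 thread gives the 48 logical CPUs, which matches the 8 nodes seen in the capabilities XML:

    #include <stdio.h>

    int main(void)
    {
        /* Assumed Magny-Cours layout: each package holds two 6-core dies,
         * and each die is its own NUMA node. */
        unsigned int packages = 4, dies_per_package = 2, cores_per_die = 6, threads = 1;

        unsigned int nodes = packages * dies_per_package;     /* 8  */
        unsigned int cpus  = nodes * cores_per_die * threads; /* 48 */

        printf("NUMA nodes: %u, logical CPUs: %u\n", nodes, cpus);
        return 0;
    }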

On Fri, May 11, 2012 at 10:53:12 +0100, Daniel P. Berrange wrote:
On Fri, May 11, 2012 at 05:42:34PM +0800, Osier Yang wrote:
On 2012-05-11 17:01, Jiri Denemark wrote:
On Fri, May 11, 2012 at 10:47:06 +0200, Michal Privoznik wrote:
On 11.05.2012 10:40, Osier Yang wrote:
    /* nodeinfo->sockets is supposed to be a number of sockets per NUMA node,
     * however if NUMA nodes are not composed of whole sockets, we just lie
     * about the number of NUMA nodes and force apps to check capabilities XML
     * for the actual NUMA topology. */
    if (nodeinfo->sockets % nodeinfo->nodes == 0)
        nodeinfo->sockets /= nodeinfo->nodes;
    else
        nodeinfo->nodes = 1;

Jirka said this was added as a fix, but I don't quite understand it. What does "nodeinfo.nodes" actually mean? Shouldn't it be 8 (for the 48-CPU machine) instead? But then we would be wrong again when using VIR_NODEINFO_MAXCPUS.
Why do you think it will be wrong? My understanding is that VIR_NODEINFO_MAXCPUS just tells the maximum number of possible CPUs, not the actual number. So if it's over 48 we are safe.
Not really, the macro should count exactly the number of CPUs available to the host, otherwise lots of other issues (incl. backward compatibility) appear. It is just a badly named macro that should never have existed, but we can't do anything about it since it is our public API.
Btw: the code above seems like a hack to me.
Yes, it is a hack, but it's unfortunately required because we can't change the macro.
Anyway, I agree with Daniel that the bug most likely lies somewhere in the code that populates the nodeinfo structure.
Jirka
In /proc/cpuinfo:
<snip>
cpu cores : 12
</snip>
However, there are only 6 core IDs, as shown in http://fpaste.org/mtoA/. And we parse the core_id file of each CPU as:

    core = parse_core(cpu);
    if (!CPU_ISSET(core, &core_mask)) {
        CPU_SET(core, &core_mask);
        nodeinfo->cores++;
    }

and thus get only 6 cores. I don't know how the 12 in /proc/cpuinfo is figured out, but could it be a clue?
Ahhh. The AMD 12 "core" CPUs are in fact a pair of 6 core CPUs with 2 NUMA nodes in the CPU itself.
Oh, so the problem is that two 6-core CPUs share the same socket and thus have the same physical ID. So it's either 8 6-core CPUs or 4 12-core CPUs. Not sure which one is better to present. The first one is the real thing and the second one is how AMD presents the reality :-)

Anyway, we should do something with

    /* Parse core */
    core = parse_core(cpu);
    if (!CPU_ISSET(core, &core_mask)) {
        CPU_SET(core, &core_mask);
        nodeinfo->cores++;
    }

    /* Parse socket */
    sock = parse_socket(cpu);
    if (!CPU_ISSET(sock, &socket_mask)) {
        CPU_SET(sock, &socket_mask);
        nodeinfo->sockets++;
    }

which just ignores duplicate physical/core IDs. I feel like this was added there for some reason, though...

Jirka
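To make the effect concrete, here is a small self-contained sketch. The layout below, including the reuse of core IDs 0-5 on each die, is an assumption modelled on the pastes rather than real parser output; it compares deduplication on the bare core ID with deduplication keyed on the (node, socket, core) triple:

    #include <stdbool.h>
    #include <stdio.h>

    #define NCPUS 48

    /* Synthetic model of the 48-CPU box: 4 packages, 2 dies per package,
     * 6 cores per die, one NUMA node per die, core IDs reused per die. */
    static int cpu_node(unsigned int cpu)   { return cpu / 6; }  /* 0..7 */
    static int cpu_socket(unsigned int cpu) { return cpu / 12; } /* 0..3 */
    static int cpu_core(unsigned int cpu)   { return cpu % 6; }  /* 0..5 */

    int main(void)
    {
        static bool seen_core_id[64];      /* zero-initialized */
        static bool seen_triple[8][4][6];  /* zero-initialized */
        unsigned int by_core_id = 0, by_triple = 0;

        for (unsigned int cpu = 0; cpu < NCPUS; cpu++) {
            int node = cpu_node(cpu), sock = cpu_socket(cpu), core = cpu_core(cpu);

            if (!seen_core_id[core]) {            /* what the current code does */
                seen_core_id[core] = true;
                by_core_id++;
            }
            if (!seen_triple[node][sock][core]) { /* keyed on (node, socket, core) */
                seen_triple[node][sock][core] = true;
                by_triple++;
            }
        }

        printf("distinct core IDs: %u\n", by_core_id);            /* 6  */
        printf("distinct (node, socket, core): %u\n", by_triple); /* 48 */
        return 0;
    }

With the bare-core-ID scheme the count stops at 6, while the triple-keyed scheme reaches 48, matching the number of logical CPUs on the box (assuming no hyperthreading).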

On 2012-05-11 18:07, Jiri Denemark wrote:
On Fri, May 11, 2012 at 10:53:12 +0100, Daniel P. Berrange wrote:
On Fri, May 11, 2012 at 05:42:34PM +0800, Osier Yang wrote:
On 2012-05-11 17:01, Jiri Denemark wrote:
On Fri, May 11, 2012 at 10:47:06 +0200, Michal Privoznik wrote:
On 11.05.2012 10:40, Osier Yang wrote:
    /* nodeinfo->sockets is supposed to be a number of sockets per NUMA node,
     * however if NUMA nodes are not composed of whole sockets, we just lie
     * about the number of NUMA nodes and force apps to check capabilities XML
     * for the actual NUMA topology. */
    if (nodeinfo->sockets % nodeinfo->nodes == 0)
        nodeinfo->sockets /= nodeinfo->nodes;
    else
        nodeinfo->nodes = 1;

Jirka said this was added as a fix, but I don't quite understand it. What does "nodeinfo.nodes" actually mean? Shouldn't it be 8 (for the 48-CPU machine) instead? But then we would be wrong again when using VIR_NODEINFO_MAXCPUS.
Why do you think it will be wrong? My understanding is that VIR_NODEINFO_MAXCPUS just tells the maximum number of possible CPUs, not the actual number. So if it's over 48 we are safe.
Not really, the macro should count exactly the number of CPUs available to the host, otherwise lots of other issues (incl. backward compatibility) appear. It is just a badly named macro that should never have existed, but we can't do anything about it since it is our public API.
Btw: the code above seems like a hack to me.
Yes, it is a hack, but it's unfortunately required because we can't change the macro.
Anyway, I agree with Daniel that the bug most likely lies somewhere in the code that populates the nodeinfo structure.
Jirka
In /proc/cpuinfo:
<snip>
cpu cores : 12
</snip>
However, there are only 6 core IDs, as shown in http://fpaste.org/mtoA/. And we parse the core_id file of each CPU as:

    core = parse_core(cpu);
    if (!CPU_ISSET(core, &core_mask)) {
        CPU_SET(core, &core_mask);
        nodeinfo->cores++;
    }

and thus get only 6 cores. I don't know how the 12 in /proc/cpuinfo is figured out, but could it be a clue?
Ahhh. The AMD 12 "core" CPUs are in fact a pair of 6 core CPUs with 2 NUMA nodes in the CPU itself.
Oh, so the problem is that two 6-core CPUs share the same socket and thus have the same physical ID. So it's either 8 6-core CPUs or 4 12-core CPUs. Not sure which one is better to present. The first one is the real thing and the second one is how AMD presents the reality :-) Anyway, we should do something with
    /* Parse core */
    core = parse_core(cpu);
    if (!CPU_ISSET(core, &core_mask)) {
        CPU_SET(core, &core_mask);
        nodeinfo->cores++;
    }

    /* Parse socket */
    sock = parse_socket(cpu);
    if (!CPU_ISSET(sock, &socket_mask)) {
        CPU_SET(sock, &socket_mask);
        nodeinfo->sockets++;
    }
which just ignores duplicate physical/core IDs. I feel like this was added there for some reason, though...
Do you mean removing the check for duplicate physical/core IDs? If so, we will get both nodeinfo->cores and nodeinfo->sockets with the value 48.

Regards,
Osier

On 2012-05-11 16:47, Michal Privoznik wrote:
On 11.05.2012 10:40, Osier Yang wrote:
On 2012-05-11 16:35, Daniel P. Berrange wrote:
On Fri, May 11, 2012 at 04:21:48PM +0800, Osier Yang wrote:
Hi,
We have a problem with host CPU topology parsing on certain platforms (common platforms are fine). E.g.
on an AMD machine with 48 CPUs [1] (4 sockets and 6 cores in fact [2]), VIR_NODEINFO_MAXCPUS [3] always returns 24 as the total CPU number.
If it is returning 24, then surely we have the 'nodes' value wrong in the virNodeInfo? It sounds like it should have been set to 2 (4 * 6 * 2 => 48).

    /* nodeinfo->sockets is supposed to be a number of sockets per NUMA node,
     * however if NUMA nodes are not composed of whole sockets, we just lie
     * about the number of NUMA nodes and force apps to check capabilities XML
     * for the actual NUMA topology. */
    if (nodeinfo->sockets % nodeinfo->nodes == 0)
        nodeinfo->sockets /= nodeinfo->nodes;
    else
        nodeinfo->nodes = 1;

Jirka said this was added as a fix, but I don't quite understand it. What does "nodeinfo.nodes" actually mean? Shouldn't it be 8 (for the 48-CPU machine) instead? But then we would be wrong again when using VIR_NODEINFO_MAXCPUS.
Why do you think it will be wrong? My understanding is that VIR_NODEINFO_MAXCPUS just tells the maximum number of possible CPUs, not the actual number. So if it's over 48 we are safe.
No. One example of the potential problems is the "virsh vcpuinfo" command: the current problem is that the CPU display is truncated, and if we change it to 8, then it is extended instead.
Btw: the code above seems like a hack to me.
Michal
Regards, Osier

On Fri, May 11, 2012 at 04:40:08PM +0800, Osier Yang wrote:
On 2012-05-11 16:35, Daniel P. Berrange wrote:
On Fri, May 11, 2012 at 04:21:48PM +0800, Osier Yang wrote:
Hi,
We have a problem with host CPU topology parsing on certain platforms (common platforms are fine). E.g.
on an AMD machine with 48 CPUs [1] (4 sockets and 6 cores in fact [2]), VIR_NODEINFO_MAXCPUS [3] always returns 24 as the total CPU number.
If it is returning 24, then surely we have the 'nodes' value wrong in the virNodeInfo? It sounds like it should have been set to 2 (4 * 6 * 2 => 48).

    /* nodeinfo->sockets is supposed to be a number of sockets per NUMA node,
     * however if NUMA nodes are not composed of whole sockets, we just lie
     * about the number of NUMA nodes and force apps to check capabilities XML
     * for the actual NUMA topology. */
    if (nodeinfo->sockets % nodeinfo->nodes == 0)
        nodeinfo->sockets /= nodeinfo->nodes;
    else
        nodeinfo->nodes = 1;

Jirka said this was added as a fix, but I don't quite understand it. What does "nodeinfo.nodes" actually mean? Shouldn't it be 8 (for the 48-CPU machine) instead? But then we would be wrong again when using VIR_NODEINFO_MAXCPUS.
In the capabilities XML you posted, all the nodes have the same number of sockets, so that workaround should not have been coming into effect. I think there's a flaw in the code that populates 'nodeinfo' earlier than this.

Daniel

On Fri, May 11, 2012 at 09:35:48 +0100, Daniel P. Berrange wrote:
On Fri, May 11, 2012 at 04:21:48PM +0800, Osier Yang wrote:
Hi,
We have a problem with host CPU topology parsing on certain platforms (common platforms are fine). E.g.
on an AMD machine with 48 CPUs [1] (4 sockets and 6 cores in fact [2]), VIR_NODEINFO_MAXCPUS [3] always returns 24 as the total CPU number.
If it is returning 24, then surely we have the 'nodes' value wrong in the virNodeInfo? It sounds like it should have been set to 2 (4 * 6 * 2 => 48).
According to the capabilities XML, there are 8 NUMA nodes, so setting them to 2 wouldn't make much sense. It would be cool to know the correct hardware topology, but I'm not sure how to get that so that we can detect which part of our detection mechanism is wrong. /proc/cpuinfo lists 48 CPUs but with only 4 distinct physical IDs and 6 distinct core IDs, which looks suspicious to me.

Jirka

On Fri, May 11, 2012 at 10:46:26AM +0200, Jiri Denemark wrote:
On Fri, May 11, 2012 at 09:35:48 +0100, Daniel P. Berrange wrote:
On Fri, May 11, 2012 at 04:21:48PM +0800, Osier Yang wrote:
Hi,
We have a problem with host CPU topology parsing on certain platforms (common platforms are fine). E.g.
on an AMD machine with 48 CPUs [1] (4 sockets and 6 cores in fact [2]), VIR_NODEINFO_MAXCPUS [3] always returns 24 as the total CPU number.
If it is returning 24, then surely we have the 'nodes' value wrong in the virNodeInfo? It sounds like it should have been set to 2 (4 * 6 * 2 => 48).
According to the capabilities XML, there are 8 NUMA nodes, so setting them to 2 wouldn't make much sense. It would be cool to know the correct hardware topology, but I'm not sure how to get that so that we can detect which part of our detection mechanism is wrong. /proc/cpuinfo lists 48 CPUs but with only 4 distinct physical IDs and 6 distinct core IDs, which looks suspicious to me.
We probably want to grab the sysfs/proc files from this machine (and a few other different machines) and use them to extend the nodeinfotest test case data, since this is clearly a troublesome bit of code.

Daniel
participants (4):
- Daniel P. Berrange
- Jiri Denemark
- Michal Privoznik
- Osier Yang