[libvirt] [PATCH 0/7 v2] CPU tuning improvements

As proposed in https://www.redhat.com/archives/libvir-list/2012-October/msg00522.html, this patch set tries to resolve the conflicts between <vcpu>, <vcpupin>, and <emulatorpin> when doing CPU tuning for the domain process or vcpu threads. PATCH 1/7 changes all the docs together, as the 3 elements are too tightly related to each other; splitting the changes into separate patches would just make things obscure and hard to review. See the commit log of PATCH 1/7 for an overview of and details about the solution.

Osier Yang (7):
  doc: Clarify the relationship between <vcpu>, <vcpupin>, and <emulatorpin>
  conf: Do not allow vcpupin's cpuid to exceed current vcpus number
  conf: Initialize the pinning policy for vcpus
  qemu: Create or remove cgroup when doing vcpu hotplugging
  qemu: Initialize cpuset for hotplugged vcpu as def->cpuset
  conf: Prohibit emulatorpin if vcpu placement is auto
  qemu: Ignore def->cpumask if emulatorpin is specified

 docs/formatdomain.html.in |  42 +++++++++++------
 src/conf/domain_conf.c    | 103 ++++++++++++++++++++++++++++++++++++++++-
 src/conf/domain_conf.h    |   3 +
 src/libvirt_private.syms  |   1 +
 src/qemu/qemu_driver.c    | 110 +++++++++++++++++++++++++++++++++++++++++++++
 src/qemu/qemu_process.c   |   3 +-
 6 files changed, 243 insertions(+), 19 deletions(-)

--
1.7.7.6

Regards,
Osier

These 3 elements conflict with each other in either the docs or the underlying code. Current problems:

Problem 1:
The doc shouldn't simply say "These settings are superseded by CPU tuning." for element <vcpu>. Besides the tuning, <vcpu> allows specifying the current and maximum vcpu numbers. Apart from that, <vcpu> also allows specifying the placement as "auto", which binds the domain process to the advisory nodeset from numad.

Problem 2:
The doc for <vcpu> says its "cpuset" specifies the physical CPUs that the vcpus can be pinned to. But that is not true: it actually only pins the domain process to the specified physical CPUs. So it is either a documentation bug or a code bug.

Problem 3:
The doc for <vcpupin> says it supersedes "cpuset" of <vcpu>. That is not quite correct, as each <vcpupin> specifies the pinning policy for only one vcpu. What about the vcpus which don't have <vcpupin> specified? The doc says such a vcpu will be pinned to all available physical CPUs, but then what is the meaning of attribute "cpuset" of <vcpu>?

Problem 4:
The doc for <emulatorpin> says it pins the emulator threads (the domain process in other contexts; perhaps a follow-up patch to clean up the inconsistency is needed) to the physical CPUs specified by its attribute "cpuset", which conflicts with <vcpu>'s "cpuset". And in the underlying code, the affinity for the domain process is actually set twice if both "cpuset" of <vcpu> and <emulatorpin> are specified, with <emulatorpin>'s pinning overriding <vcpu>'s.

Problem 5:
When "placement" of <vcpu> is "auto" (i.e. numad is used to get the advisory nodeset to which the domain process is pinned), it will also be overridden by <emulatorpin>.

This patch sorts out the conflicts and bugs by:

1) Not saying <vcpu> is superseded by <cputune>.

2) Keeping the semantics of "cpuset" of <vcpu> (i.e. still saying it specifies the physical CPUs the virtual CPUs are pinned to).
But modify it to mention that it also sets the pinning policy for the domain process: the CPU placement of the domain process specified by "cpuset" of <vcpu> is ignored if <emulatorpin> is specified, and similarly the CPU placement of a vcpu thread is ignored if it has <vcpupin> specified; a vcpu which doesn't have <vcpupin> specified inherits "cpuset" of <vcpu>.

3) Not saying <vcpu> is superseded by <vcpupin>. If neither <vcpupin> nor "cpuset" of <vcpu> is specified, the vcpu is pinned to all available pCPUs.

4) If neither <emulatorpin> nor "cpuset" of <vcpu> is specified, the domain process (the emulator threads in this context) is pinned to all available pCPUs.

5) If "placement" of <vcpu> is "auto", <emulatorpin> is not allowed.

6) Hotplugged vcpus also inherit "cpuset" of <vcpu>.

Code changes according to the above document changes:

1) Inherit def->cpumask, during parsing, for each vcpu which doesn't have <vcpupin> specified.

2) Pin the vcpus which don't have <vcpupin> specified to def->cpumask, by either cgroup or sched_setaffinity(2); this is actually done by 1).

3) Error out if "placement" == "auto" and <emulatorpin> is specified. Otherwise, <emulatorpin> is honored and "cpuset" of <vcpu> is ignored.

4) Set up a cgroup for each hotplugged vcpu, and set the pinning policy by either cgroup or sched_setaffinity(2).

5) Remove the cgroup and <vcpupin> for each hot-unplugged vcpu.

Patches are following.
---
 docs/formatdomain.html.in | 42 +++++++++++++++++++++++++++---------------
 1 files changed, 27 insertions(+), 15 deletions(-)

diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in
index d664e7e..1ae8cf4 100644
--- a/docs/formatdomain.html.in
+++ b/docs/formatdomain.html.in
@@ -357,8 +357,18 @@
         the maximum supported by the hypervisor.
         <span class="since">Since 0.4.4</span>, this element can contain an
         optional <code>cpuset</code> attribute, which is a comma-separated
-        list of physical CPU numbers that virtual CPUs can be pinned
-        to. Each element in that list is either a single CPU number,
+        list of physical CPU numbers that domain process and virtual CPUs
+        can be pinned to by default. (NB: The pinning policy of domain
+        process and virtual CPUs can be specified separately by
+        <code>cputune</code>. If attribute <code>emulatorpin</code>
+        of <code>cputune</code> is specified, <code>cpuset</code>
+        specified by <code>vcpu</code> here will be ingored; Similarly,
+        For virtual CPUs which has <code>vcpupin</code> specified,
+        <code>cpuset</code> specified by <code>cpuset</code> here
+        will be ignored; For virtual CPUs which doesn't have
+        <code>vcpupin</code> specified, it will be pinned to the physical
+        CPUs specified by <code>cpuset</code> here).
+        Each element in that list is either a single CPU number,
         a range of CPU numbers, or a caret followed by a CPU number to
         be excluded from a previous range.
         <span class="since">Since 0.8.5</span>, the optional attribute
         <code>current</code> can
@@ -374,8 +384,7 @@
         if it's specified. If both <code>cpuset</code> and
         <code>placement</code> are not specified, or if
         <code>placement</code> is "static", but no <code>cpuset</code> is
         specified, the domain process will be pinned to
-        all the available physical CPUs. These settings are superseded
-        by <a href="#elementsCPUTuning">CPU tuning</a>.
+        all the available physical CPUs.
       </dd>
     </dl>
@@ -411,23 +420,26 @@
     <dt><code>vcpupin</code></dt>
     <dd>
       The optional <code>vcpupin</code> element specifies which of host's
-      physical CPUs the domain VCPU will be pinned to. This setting supersedes
-      previous VCPU placement specified in <a href="#elementsCPUAllocation">CPU
-      Allocation</a> using <code>vcpu</code> element. If this is omitted,
-      each VCPU is pinned to all the physical CPUs by default. It contains two
-      required attributes, the attribute <code>vcpu</code> specifies vcpu id,
-      and the attribute <code>cpuset</code> is same as
-      attribute <code>cpuset</code>
-      of element <code>vcpu</code>. (NB: Only qemu driver support)
+      physical CPUs the domain VCPU will be pinned to. If this is omitted,
+      and attribute <code>cpuset</code> of element <code>vcpu</code> is
+      not specified, the vCPU is pinned to all the physical CPUs by default.
+      It contains two required attributes, the attribute <code>vcpu</code>
+      specifies vcpu id, and the attribute <code>cpuset</code> is same as
+      attribute <code>cpuset</code> of element <code>vcpu</code>.
+      (NB: Only qemu driver support)
      <span class="since">Since 0.9.0</span>
     </dd>
     <dt><code>emulatorpin</code></dt>
     <dd>
       The optional <code>emulatorpin</code> element specifies which of host
       physical CPUs the "emulator", a subset of a domain not including vcpu,
-      will be pinned to. If this is omitted, "emulator" is pinned to all
-      the physical CPUs by default. It contains one required attribute
-      <code>cpuset</code> specifying which physical CPUs to pin to.
+      will be pinned to. If this is omitted, and attribute
+      <code>cpuset</code> of element <code>vcpu</code> is not specified,
+      "emulator" is pinned to all the physical CPUs by default. It contains
+      one required attribute <code>cpuset</code> specifying which physical
+      CPUs to pin to. NB, <code>emulatorpin</code> is not allowed if
+      attribute <code>placement</code> of element <code>vcpu</code> is
+      "auto".
     </dd>
     <dt><code>shares</code></dt>
     <dd>
--
1.7.7.6
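To make the precedence rules above concrete, here is a hypothetical domain XML fragment (the values are invented for illustration) showing how the attributes interact under the semantics this patch documents:

```xml
<vcpu placement='static' cpuset='0-3' current='2'>4</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='0'/>
  <emulatorpin cpuset='2-3'/>
</cputune>
```

Under the documented rules, vcpu 0 is pinned to pCPU 0 by its explicit <vcpupin>; vcpu 1 has no <vcpupin>, so it inherits cpuset='0-3' from <vcpu>; and the emulator threads are pinned to pCPUs 2-3, since <emulatorpin> overrides the <vcpu> cpuset for the domain process.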

On 10/12/2012 11:50 AM, Osier Yang wrote:
These 3 elements conflict with each other in either the docs or the underlying code.
Current problems:
Problem 1:
The doc shouldn't simply say "These settings are superseded by CPU tuning." for element <vcpu>. Besides the tuning, <vcpu> allows specifying the current and maximum vcpu numbers. Apart from that, <vcpu> also allows specifying the placement as "auto", which binds the domain process to the advisory nodeset from numad.
Problem 2:
The doc for <vcpu> says its "cpuset" specifies the physical CPUs that the vcpus can be pinned to. But that is not true: it actually only pins the domain process to the specified physical CPUs. So it is either a documentation bug or a code bug.
Problem 3:
The doc for <vcpupin> says it supersedes "cpuset" of <vcpu>. That is not quite correct, as each <vcpupin> specifies the pinning policy for only one vcpu. What about the vcpus which don't have <vcpupin> specified? The doc says such a vcpu will be pinned to all available physical CPUs, but then what is the meaning of attribute "cpuset" of <vcpu>?
Problem 4:
The doc for <emulatorpin> says it pins the emulator threads (the domain process in other contexts; perhaps a follow-up patch to clean up the inconsistency is needed) to the physical CPUs specified by its attribute "cpuset", which conflicts with <vcpu>'s "cpuset". And in the underlying code, the affinity for the domain process is actually set twice if both "cpuset" of <vcpu> and <emulatorpin> are specified, with <emulatorpin>'s pinning overriding <vcpu>'s.
Problem 5:
When "placement" of <vcpu> is "auto" (i.e. numad is used to get the advisory nodeset to which the domain process is pinned), it will also be overridden by <emulatorpin>.
This patch sorts out the conflicts and bugs by:
1) Not saying <vcpu> is superseded by <cputune>.
2) Keeping the semantics of "cpuset" of <vcpu> (i.e. still saying it specifies the physical CPUs the virtual CPUs are pinned to), but modifying it to mention that it also sets the pinning policy for the domain process: the CPU placement of the domain process specified by "cpuset" of <vcpu> is ignored if <emulatorpin> is specified, and similarly the CPU placement of a vcpu thread is ignored if it has <vcpupin> specified; a vcpu which doesn't have <vcpupin> specified inherits "cpuset" of <vcpu>.
3) Not saying <vcpu> is superseded by <vcpupin>. If neither <vcpupin> nor "cpuset" of <vcpu> is specified, the vcpu is pinned to all available pCPUs.
4) If neither <emulatorpin> nor "cpuset" of <vcpu> is specified, the domain process (the emulator threads in this context) is pinned to all available pCPUs.
5) If "placement" of <vcpu> is "auto", <emulatorpin> is not allowed.
6) Hotplugged vcpus also inherit "cpuset" of <vcpu>.
Code changes according to the above document changes:
1) Inherit def->cpumask, during parsing, for each vcpu which doesn't have <vcpupin> specified.
2) Pin the vcpus which don't have <vcpupin> specified to def->cpumask, by either cgroup or sched_setaffinity(2); this is actually done by 1).
3) Error out if "placement" == "auto" and <emulatorpin> is specified. Otherwise, <emulatorpin> is honored and "cpuset" of <vcpu> is ignored.
4) Set up a cgroup for each hotplugged vcpu, and set the pinning policy by either cgroup or sched_setaffinity(2).
5) Remove the cgroup and <vcpupin> for each hot-unplugged vcpu.
Patches are following.
---
 docs/formatdomain.html.in | 42 +++++++++++++++++++++++++++---------------
 1 files changed, 27 insertions(+), 15 deletions(-)

diff --git a/docs/formatdomain.html.in b/docs/formatdomain.html.in
index d664e7e..1ae8cf4 100644
--- a/docs/formatdomain.html.in
+++ b/docs/formatdomain.html.in
@@ -357,8 +357,18 @@
         the maximum supported by the hypervisor.
         <span class="since">Since 0.4.4</span>, this element can contain an
         optional <code>cpuset</code> attribute, which is a comma-separated
-        list of physical CPU numbers that virtual CPUs can be pinned
-        to. Each element in that list is either a single CPU number,
+        list of physical CPU numbers that domain process and virtual CPUs
+        can be pinned to by default. (NB: The pinning policy of domain
+        process and virtual CPUs can be specified separately by
+        <code>cputune</code>. If attribute <code>emulatorpin</code>
+        of <code>cputune</code> is specified, <code>cpuset</code>
+        specified by <code>vcpu</code> here will be ingored; Similarly,
s/ingored/ignored/
+ For virtual CPUs which has <code>vcpupin</code> specified,
s/F/f/; s/has/have/
+ <code>cpuset</code> specified by <code>cpuset</code> here
The second 'cpuset' should be 'vcpu', shouldn't it?
+ will be ignored; For virtual CPUs which doesn't have
s/doesn't/don't/
+ <code>vcpupin</code> specified, it will be pinned to the physical + CPUs specified by <code>cpuset</code> here).
I think you can leave out the sentence from '; For' onwards since it is explained already in previous sentences.
+        Each element in that list is either a single CPU number,
         a range of CPU numbers, or a caret followed by a CPU number to
         be excluded from a previous range.
         <span class="since">Since 0.8.5</span>, the optional attribute
         <code>current</code> can
@@ -374,8 +384,7 @@
         if it's specified. If both <code>cpuset</code> and
         <code>placement</code> are not specified, or if
         <code>placement</code> is "static", but no <code>cpuset</code> is
         specified, the domain process will be pinned to
-        all the available physical CPUs. These settings are superseded
-        by <a href="#elementsCPUTuning">CPU tuning</a>.
+        all the available physical CPUs.
       </dd>
     </dl>
@@ -411,23 +420,26 @@
     <dt><code>vcpupin</code></dt>
     <dd>
       The optional <code>vcpupin</code> element specifies which of host's
-      physical CPUs the domain VCPU will be pinned to. This setting supersedes
-      previous VCPU placement specified in <a href="#elementsCPUAllocation">CPU
-      Allocation</a> using <code>vcpu</code> element. If this is omitted,
-      each VCPU is pinned to all the physical CPUs by default. It contains two
-      required attributes, the attribute <code>vcpu</code> specifies vcpu id,
-      and the attribute <code>cpuset</code> is same as
-      attribute <code>cpuset</code>
-      of element <code>vcpu</code>. (NB: Only qemu driver support)
+      physical CPUs the domain VCPU will be pinned to. If this is omitted,
+      and attribute <code>cpuset</code> of element <code>vcpu</code> is
+      not specified, the vCPU is pinned to all the physical CPUs by default.
+      It contains two required attributes, the attribute <code>vcpu</code>
+      specifies vcpu id, and the attribute <code>cpuset</code> is same as
+      attribute <code>cpuset</code> of element <code>vcpu</code>.
+      (NB: Only qemu driver support)
      <span class="since">Since 0.9.0</span>
     </dd>
     <dt><code>emulatorpin</code></dt>
     <dd>
       The optional <code>emulatorpin</code> element specifies which of host
       physical CPUs the "emulator", a subset of a domain not including vcpu,
-      will be pinned to. If this is omitted, "emulator" is pinned to all
-      the physical CPUs by default. It contains one required attribute
-      <code>cpuset</code> specifying which physical CPUs to pin to.
+      will be pinned to. If this is omitted, and attribute
+      <code>cpuset</code> of element <code>vcpu</code> is not specified,
+      "emulator" is pinned to all the physical CPUs by default. It contains
+      one required attribute <code>cpuset</code> specifying which physical
+      CPUs to pin to. NB, <code>emulatorpin</code> is not allowed if
+      attribute <code>placement</code> of element <code>vcpu</code> is
+      "auto".
     </dd>
     <dt><code>shares</code></dt>
     <dd>
ACK with those fixes. Martin
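The cpuset list syntax discussed in the hunks above ("a single CPU number, a range of CPU numbers, or a caret followed by a CPU number to be excluded from a previous range") can be illustrated with a small standalone C sketch. This is not libvirt's parser (libvirt uses its own bitmap helpers); parse_cpuset and the 64-CPU limit are invented here for the example:

```c
#include <stdint.h>
#include <stdlib.h>

/* Hypothetical helper: parse a libvirt-style cpuset string such as
 * "1-4,^3" into a 64-bit mask, one bit per pCPU.  Elements are
 * processed left to right: N sets bit N, A-B sets bits A..B, and
 * ^N clears a bit set by a previous element. */
static uint64_t
parse_cpuset(const char *s)
{
    uint64_t mask = 0;

    while (*s) {
        int exclude = 0;
        char *end;
        long a, b;

        if (*s == '^') {      /* ^N excludes N from a previous range */
            exclude = 1;
            s++;
        }
        a = strtol(s, &end, 10);
        b = a;
        if (*end == '-')      /* A-B is an inclusive range */
            b = strtol(end + 1, &end, 10);
        for (long i = a; i <= b && i < 64; i++) {
            if (exclude)
                mask &= ~(UINT64_C(1) << i);
            else
                mask |= UINT64_C(1) << i;
        }
        s = (*end == ',') ? end + 1 : end;
    }
    return mask;
}
```

So "1-4,^3" selects pCPUs 1, 2, and 4, which is the shape of value the cpuset attributes above accept.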

Setting a pinning policy for a vcpu which exceeds the current vcpus number makes no sense, and it could cause problems for APIs which associate the vcpu thread id with a cgroup.
---
 src/conf/domain_conf.c | 4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 3c3d0ae..d8ea8ce 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -8784,9 +8784,9 @@ static virDomainDefPtr virDomainDefParseXML(virCapsPtr caps,
     if (n && VIR_ALLOC_N(def->cputune.vcpupin, n) < 0)
         goto no_memory;

-    if (n > def->maxvcpus) {
+    if (n > def->vcpus) {
         virReportError(VIR_ERR_INTERNAL_ERROR,
-                       "%s", _("vcpupin nodes must be less than maxvcpus"));
+                       "%s", _("vcpupin nodes must be less than current vcpus"));
         goto error;
     }
--
1.7.7.6

On 10/12/2012 11:50 AM, Osier Yang wrote:
Setting a pinning policy for a vcpu which exceeds the current vcpus number makes no sense, and it could cause problems for APIs which associate the vcpu thread id with a cgroup.
---
 src/conf/domain_conf.c | 4 ++--
 1 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 3c3d0ae..d8ea8ce 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -8784,9 +8784,9 @@ static virDomainDefPtr virDomainDefParseXML(virCapsPtr caps,
     if (n && VIR_ALLOC_N(def->cputune.vcpupin, n) < 0)
         goto no_memory;

-    if (n > def->maxvcpus) {
+    if (n > def->vcpus) {
         virReportError(VIR_ERR_INTERNAL_ERROR,
-                       "%s", _("vcpupin nodes must be less than maxvcpus"));
+                       "%s", _("vcpupin nodes must be less than current vcpus"));
         goto error;
     }
Existing domains wouldn't be loaded by newer libvirt with this change. I propose silently ignoring any pinning beyond def->vcpus, and erroring out only with the same condition as now. Martin

Setting a pinning policy for a vcpu which exceeds the current vcpus number just makes no sense; moreover, it can cause various problems. E.g.:

<vcpu current='1'>4</vcpu>
<cputune>
  <vcpupin vcpuid='3' cpuset='4'/>
</cputune>

% virsh start linux
error: Failed to start domain linux
error: cannot set CPU affinity on process 32534: No such process

We must have some odd code underneath which produces the "on process 32534", but the point is: why not prevent it earlier, when parsing? Note that this is only one of the problems it could cause. This patch ignores the <vcpupin> for vcpus that are not online.
---
 src/conf/domain_conf.c | 10 +++++++++-
 1 files changed, 9 insertions(+), 1 deletions(-)

diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 3c3d0ae..5141be6 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -8806,7 +8806,15 @@ static virDomainDefPtr virDomainDefParseXML(virCapsPtr caps,
             goto error;
         }

-        def->cputune.vcpupin[def->cputune.nvcpupin++] = vcpupin;
+        if (vcpupin->vcpuid >= def->vcpus)
+            /* To avoid the regression when daemon loading
+             * domain confs, we can't simply error out if
+             * <vcpupin> nodes greater than current vcpus,
+             * ignoring them instead.
+             */
+            VIR_WARN("Ignore vcpupin for not onlined vcpus");
+        else
+            def->cputune.vcpupin[def->cputune.nvcpupin++] = vcpupin;
     }
     VIR_FREE(nodes);
--
1.7.7.6

On 10/12/2012 03:52 PM, Osier Yang wrote:
Setting a pinning policy for a vcpu which exceeds the current vcpus number just makes no sense; moreover, it can cause various problems. E.g.:

<vcpu current='1'>4</vcpu>
<cputune>
  <vcpupin vcpuid='3' cpuset='4'/>
</cputune>

% virsh start linux
error: Failed to start domain linux
error: cannot set CPU affinity on process 32534: No such process
Well pointed out.
We must have some odd code underneath which produces the "on process 32534", but the point is: why not prevent it earlier, when parsing? Note that this is only one of the problems it could cause.
We should definitely fix it there as well. However I *think* it is caused just by us trying to set the affinity multiple times, which would be fixed by this. But that's just an idea.
This patch ignores the <vcpupin> for vcpus that are not online.
---
 src/conf/domain_conf.c | 10 +++++++++-
 1 files changed, 9 insertions(+), 1 deletions(-)

diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index 3c3d0ae..5141be6 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -8806,7 +8806,15 @@ static virDomainDefPtr virDomainDefParseXML(virCapsPtr caps,
             goto error;
         }

-        def->cputune.vcpupin[def->cputune.nvcpupin++] = vcpupin;
+        if (vcpupin->vcpuid >= def->vcpus)
+            /* To avoid the regression when daemon loading
+             * domain confs, we can't simply error out if
+             * <vcpupin> nodes greater than current vcpus,
+             * ignoring them instead.
+             */
+            VIR_WARN("Ignore vcpupin for not onlined vcpus");
+        else
+            def->cputune.vcpupin[def->cputune.nvcpupin++] = vcpupin;
     }
     VIR_FREE(nodes);
ACK, Martin

The documentation for <vcpu>'s "cpuset" says:

  Since 0.4.4, this element can contain an optional cpuset attribute,
  which is a comma-separated list of physical CPU numbers that virtual
  CPUs can be pinned to.

However, that is not true: libvirt actually pins the domain process to the pCPUs specified by "cpuset" of <vcpu>, and a vcpu thread is pinned to all available pCPUs if no <vcpupin> is specified for it.

PATCH 1/7 provides more details on how this set resolves the confusing relationship between <vcpu> and <vcpupin>. This patch implements the code to inherit <vcpu>'s "cpuset" for each vcpu that doesn't have <vcpupin> specified; <vcpupin> for these vcpus is ignored when formatting. The underlying driver implementation makes sure the vcpu threads are pinned to the correct pCPUs.
---
 src/conf/domain_conf.c | 71 +++++++++++++++++++++++++++++++++++++++++++++++-
 1 files changed, 70 insertions(+), 1 deletions(-)

diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index d8ea8ce..e404a68 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -8810,6 +8810,41 @@ static virDomainDefPtr virDomainDefParseXML(virCapsPtr caps,
     }
     VIR_FREE(nodes);

+    /* Initialize the pinning policy for vcpus which doesn't has
+     * the policy specified explicitly as def->cpuset.
+     */
+    if (def->cpumask) {
+        if (!def->cputune.vcpupin) {
+            if (VIR_ALLOC_N(def->cputune.vcpupin, def->vcpus) < 0) {
+                virReportOOMError();
+                goto error;
+            }
+        } else {
+            if (VIR_REALLOC_N(def->cputune.vcpupin, def->vcpus) < 0) {
+                virReportOOMError();
+                goto error;
+            }
+        }
+
+        for (i = 0; i < def->vcpus; i++) {
+            if (!virDomainVcpuPinIsDuplicate(def->cputune.vcpupin,
+                                             def->cputune.nvcpupin,
+                                             i)) {
+                virDomainVcpuPinDefPtr vcpupin = NULL;
+
+                if (VIR_ALLOC(vcpupin) < 0) {
+                    virReportOOMError();
+                    goto error;
+                }
+
+                vcpupin->cpumask = virBitmapNew(VIR_DOMAIN_CPUMASK_LEN);
+                virBitmapCopy(vcpupin->cpumask, def->cpumask);
+                vcpupin->vcpuid = i;
+                def->cputune.vcpupin[def->cputune.nvcpupin++] = vcpupin;
+            }
+        }
+    }
+
     if ((n = virXPathNodeSet("./cputune/emulatorpin", ctxt, &nodes)) < 0) {
         virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
                        _("cannot extract emulatorpin nodes"));
@@ -13320,6 +13355,31 @@ virDomainHubDefFormat(virBufferPtr buf,
     return 0;
 }

+/*
+ * Return true if no <vcpupin> specified in domain XML
+ * (I.e. all vcpus inherit the cpuset from "cpuset" of
+ * <vcpu>). Or false otherwise.
+ */
+static bool
+virDomainIsAllVcpupinInherited(virDomainDefPtr def)
+{
+    int i;
+
+    if (!def->cpumask) {
+        if (!def->cputune.vcpupin)
+            return true;
+        else
+            return false;
+    } else {
+        for (i = 0; i < def->cputune.nvcpupin; i++) {
+            if (!virBitmapEqual(def->cputune.vcpupin[i]->cpumask,
+                                def->cpumask))
+                return false;
+        }
+
+        return true;
+    }
+}

 #define DUMPXML_FLAGS \
     (VIR_DOMAIN_XML_SECURE | \
@@ -13493,7 +13553,8 @@ virDomainDefFormatInternal(virDomainDefPtr def,
         virBufferAsprintf(buf, " current='%u'", def->vcpus);
     virBufferAsprintf(buf, ">%u</vcpu>\n", def->maxvcpus);

-    if (def->cputune.shares || def->cputune.vcpupin ||
+    if (def->cputune.shares ||
+        (def->cputune.vcpupin && !virDomainIsAllVcpupinInherited(def)) ||
         def->cputune.period || def->cputune.quota ||
         def->cputune.emulatorpin ||
         def->cputune.emulator_period || def->cputune.emulator_quota)
@@ -13521,6 +13582,14 @@ virDomainDefFormatInternal(virDomainDefPtr def,
     if (def->cputune.vcpupin) {
         for (i = 0; i < def->cputune.nvcpupin; i++) {
+            /* Ignore the vcpupin which inherit from "cpuset"
+             * of "<vcpu>."
+             */
+            if (def->cpumask &&
+                virBitmapEqual(def->cpumask,
+                               def->cputune.vcpupin[i]->cpumask))
+                continue;
+
             virBufferAsprintf(buf, "    <vcpupin vcpu='%u' ",
                               def->cputune.vcpupin[i]->vcpuid);
--
1.7.7.6

On 10/12/2012 11:50 AM, Osier Yang wrote:
Document for <vcpu>'s "cpuset" says:
Since 0.4.4, this element can contain an optional cpuset attribute, which is a comma-separated list of physical CPU numbers that virtual CPUs can be pinned to.
However, that is not true: libvirt actually pins the domain process to the pCPUs specified by "cpuset" of <vcpu>, and a vcpu thread is pinned to all available pCPUs if no <vcpupin> is specified for it.
PATCH 1/7 provides more details on how this set resolves the confusing relationship between <vcpu> and <vcpupin>. This patch implements the code to inherit <vcpu>'s "cpuset" for each vcpu that doesn't have <vcpupin> specified; <vcpupin> for these vcpus is ignored when formatting. The underlying driver implementation makes sure the vcpu threads are pinned to the correct pCPUs.
---
 src/conf/domain_conf.c | 71 +++++++++++++++++++++++++++++++++++++++++++++++-
 1 files changed, 70 insertions(+), 1 deletions(-)

diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index d8ea8ce..e404a68 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -8810,6 +8810,41 @@ static virDomainDefPtr virDomainDefParseXML(virCapsPtr caps,
     }
     VIR_FREE(nodes);

+    /* Initialize the pinning policy for vcpus which doesn't has
+     * the policy specified explicitly as def->cpuset.
+     */
+    if (def->cpumask) {
+        if (!def->cputune.vcpupin) {
+            if (VIR_ALLOC_N(def->cputune.vcpupin, def->vcpus) < 0) {
+                virReportOOMError();
+                goto error;
+            }
+        } else {
+            if (VIR_REALLOC_N(def->cputune.vcpupin, def->vcpus) < 0) {
+                virReportOOMError();
+                goto error;
+            }
+        }
You can always use VIR_REALLOC_N, as I mentioned with v1, no need for the condition.
+
+        for (i = 0; i < def->vcpus; i++) {
+            if (!virDomainVcpuPinIsDuplicate(def->cputune.vcpupin,
+                                             def->cputune.nvcpupin,
+                                             i)) {
+                virDomainVcpuPinDefPtr vcpupin = NULL;
+
+                if (VIR_ALLOC(vcpupin) < 0) {
+                    virReportOOMError();
+                    goto error;
+                }
+
+                vcpupin->cpumask = virBitmapNew(VIR_DOMAIN_CPUMASK_LEN);
+                virBitmapCopy(vcpupin->cpumask, def->cpumask);
+                vcpupin->vcpuid = i;
+                def->cputune.vcpupin[def->cputune.nvcpupin++] = vcpupin;
+            }
+        }
+    }
+
     if ((n = virXPathNodeSet("./cputune/emulatorpin", ctxt, &nodes)) < 0) {
         virReportError(VIR_ERR_INTERNAL_ERROR, "%s",
                        _("cannot extract emulatorpin nodes"));
[...]
Not exactly what I had in mind, but it fixes all the things I was worried about, so ACK (with the REALLOC fixed). Martin

Various APIs use cgroups to either set or get statistics of the host or guest. Hotplugging or hot-unplugging vcpus without creating or removing the cgroups for those vcpus causes problems for those APIs. E.g.:

% virsh vcpucount dom
 maximum      config        10
 maximum      live          10
 current      config         1
 current      live           1

% virsh setvcpu dom 2

% virsh schedinfo dom --set vcpu_quota=1000
Scheduler      : posix
error: Unable to find vcpu cgroup for rhel6.2(vcpu: 1): No such file or directory

This patch fixes the problem by creating a cgroup for each onlined vcpu, and destroying the cgroup for each offlined vcpu.
---
 src/qemu/qemu_driver.c | 53 ++++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 53 insertions(+), 0 deletions(-)

diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 0787039..6622a35 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -3601,6 +3601,8 @@ static int qemudDomainHotplugVcpus(struct qemud_driver *driver,
     int vcpus = oldvcpus;
     pid_t *cpupids = NULL;
     int ncpupids;
+    virCgroupPtr cgroup = NULL;
+    virCgroupPtr cgroup_vcpu = NULL;

     qemuDomainObjEnterMonitor(driver, vm);
@@ -3654,6 +3656,53 @@ static int qemudDomainHotplugVcpus(struct qemud_driver *driver,
         goto cleanup;
     }

+    if (virCgroupForDomain(driver->cgroup, vm->def->name, &cgroup, 0) == 0) {
+        int rv = -1;
+
+        if (nvcpus > oldvcpus) {
+            for (i = oldvcpus; i < nvcpus; i++) {
+                /* Create cgroup for the onlined vcpu */
+                rv = virCgroupForVcpu(cgroup, i, &cgroup_vcpu, 1);
+                if (rv < 0) {
+                    virReportSystemError(-rv,
+                                         _("Unable to create vcpu cgroup for %s(vcpu:"
+                                           " %d)"),
+                                         vm->def->name, i);
+                    goto cleanup;
+                }
+
+                /* Add vcpu thread to the cgroup */
+                rv = virCgroupAddTask(cgroup_vcpu, cpupids[i]);
+                if (rv < 0) {
+                    virReportSystemError(-rv,
+                                         _("unable to add vcpu %d task %d to cgroup"),
+                                         i, cpupids[i]);
+                    virCgroupRemove(cgroup_vcpu);
+                    goto cleanup;
+                }
+
+                virCgroupFree(&cgroup_vcpu);
+            }
+        } else {
+            for (i = oldvcpus - 1; i >= nvcpus; i--) {
+                rv = virCgroupForVcpu(cgroup, i, &cgroup_vcpu, 0);
+                if (rv < 0) {
+                    virReportSystemError(-rv,
+                                         _("Unable to access vcpu cgroup for %s(vcpu:"
+                                           " %d)"),
+                                         vm->def->name, i);
+                    goto cleanup;
+                }
+
+                /* Remove cgroup for the offlined vcpu */
+                virCgroupRemove(cgroup_vcpu);
+                virCgroupFree(&cgroup_vcpu);
+            }
+        }
+
+        virCgroupFree(&cgroup);
+    }
+
     priv->nvcpupids = ncpupids;
     VIR_FREE(priv->vcpupids);
     priv->vcpupids = cpupids;
@@ -3664,6 +3713,10 @@ cleanup:
     vm->def->vcpus = vcpus;
     VIR_FREE(cpupids);
     virDomainAuditVcpu(vm, oldvcpus, nvcpus, "update", rc == 1);
+    if (cgroup)
+        virCgroupFree(&cgroup);
+    if (cgroup_vcpu)
+        virCgroupFree(&cgroup_vcpu);
     return ret;

 unsupported:
--
1.7.7.6

On 10/12/2012 11:50 AM, Osier Yang wrote:
Various APIs use cgroups to either set or get statistics of the host or guest. Hotplugging or hot-unplugging vcpus without creating or removing the cgroups for those vcpus causes problems for those APIs. E.g.:

% virsh vcpucount dom
 maximum      config        10
 maximum      live          10
 current      config         1
 current      live           1

% virsh setvcpu dom 2

% virsh schedinfo dom --set vcpu_quota=1000
Scheduler      : posix
error: Unable to find vcpu cgroup for rhel6.2(vcpu: 1): No such file or directory

This patch fixes the problem by creating a cgroup for each onlined vcpu, and destroying the cgroup for each offlined vcpu.
---
No change, so my previous ACK stands. Martin

An onlined vcpu's pinning policy should inherit def->cpuset if it's not specified explicitly, and the affinity should be set in this case. Conversely, an offlined vcpu's pinning policy should be freed.
---
 src/conf/domain_conf.c   | 21 +++++++++++++
 src/conf/domain_conf.h   |  3 ++
 src/libvirt_private.syms |  1 +
 src/qemu/qemu_driver.c   | 75 ++++++++++++++++++++++++++++++++++++++++-----
 4 files changed, 91 insertions(+), 9 deletions(-)

diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c
index e404a68..17254e3 100644
--- a/src/conf/domain_conf.c
+++ b/src/conf/domain_conf.c
@@ -8398,6 +8398,27 @@ error:
     goto cleanup;
 }

+/*
+ * Return the vcpupin related with the vcpu id on SUCCESS, or
+ * NULL on failure.
+ */
+virDomainVcpuPinDefPtr
+virDomainLookupVcpuPin(virDomainDefPtr def,
+                       int vcpuid)
+{
+    int i;
+
+    if (!def->cputune.vcpupin)
+        return NULL;
+
+    for (i = 0; i < def->cputune.nvcpupin; i++) {
+        if (def->cputune.vcpupin[i]->vcpuid == vcpuid)
+            return def->cputune.vcpupin[i];
+    }
+
+    return NULL;
+}
+
 static int
 virDomainDefMaybeAddController(virDomainDefPtr def,
                                int type, int idx)
diff --git a/src/conf/domain_conf.h b/src/conf/domain_conf.h
index cc63da1..5ca1820 100644
--- a/src/conf/domain_conf.h
+++ b/src/conf/domain_conf.h
@@ -2302,4 +2302,7 @@ virDomainNetDefPtr virDomainNetFind(virDomainDefPtr def,
 int virDomainList(virConnectPtr conn, virHashTablePtr domobjs,
                   virDomainPtr **domains, unsigned int flags);

+virDomainVcpuPinDefPtr virDomainLookupVcpuPin(virDomainDefPtr def,
+                                              int vcpuid);
+
 #endif /* __DOMAIN_CONF_H */
diff --git a/src/libvirt_private.syms b/src/libvirt_private.syms
index fe31bbe..6ea1308 100644
--- a/src/libvirt_private.syms
+++ b/src/libvirt_private.syms
@@ -423,6 +423,7 @@
 virDomainLiveConfigHelperMethod;
 virDomainLoadAllConfigs;
 virDomainLockFailureTypeFromString;
 virDomainLockFailureTypeToString;
+virDomainLookupVcpuPin;
 virDomainMemballoonModelTypeFromString;
 virDomainMemballoonModelTypeToString;
 virDomainMemDumpTypeFromString;
diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
index 6622a35..b3e8424 100644
--- a/src/qemu/qemu_driver.c
+++ b/src/qemu/qemu_driver.c
@@ -3603,6 +3603,7 @@ static int qemudDomainHotplugVcpus(struct qemud_driver *driver,
     int ncpupids;
     virCgroupPtr cgroup = NULL;
     virCgroupPtr cgroup_vcpu = NULL;
+    bool cgroup_available = false;

     qemuDomainObjEnterMonitor(driver, vm);
@@ -3656,11 +3657,13 @@ static int qemudDomainHotplugVcpus(struct qemud_driver *driver,
         goto cleanup;
     }

-    if (virCgroupForDomain(driver->cgroup, vm->def->name, &cgroup, 0) == 0) {
-        int rv = -1;
+    cgroup_available = (virCgroupForDomain(driver->cgroup, vm->def->name,
+                                           &cgroup, 0) == 0);

-        if (nvcpus > oldvcpus) {
-            for (i = oldvcpus; i < nvcpus; i++) {
+    if (nvcpus > oldvcpus) {
+        for (i = oldvcpus; i < nvcpus; i++) {
+            if (cgroup_available) {
+                int rv = -1;
                 /* Create cgroup for the onlined vcpu */
                 rv = virCgroupForVcpu(cgroup, i, &cgroup_vcpu, 1);
                 if (rv < 0) {
@@ -3680,11 +3683,62 @@
                     virCgroupRemove(cgroup_vcpu);
                     goto cleanup;
                 }
+            }

-            virCgroupFree(&cgroup_vcpu);
+            /* Inherit def->cpuset */
+            if (vm->def->cpumask) {
+                /* vm->def->cputune.vcpupin can't be NULL if
+                 * vm->def->cpumask is not NULL.
+                 */
+                virDomainVcpuPinDefPtr vcpupin = NULL;
+
+                if (VIR_REALLOC_N(vm->def->cputune.vcpupin,
+                                  vm->def->cputune.nvcpupin + 1) < 0) {
+                    virReportOOMError();
+                    goto cleanup;
+                }
+
+                if (VIR_ALLOC(vcpupin) < 0) {
+                    virReportOOMError();
+                    goto cleanup;
+                }
+
+                vcpupin->cpumask = virBitmapNew(VIR_DOMAIN_CPUMASK_LEN);
+                virBitmapCopy(vcpupin->cpumask, vm->def->cpumask);
+                vcpupin->vcpuid = i;
+                vm->def->cputune.vcpupin[vm->def->cputune.nvcpupin++] = vcpupin;
+
+                if (cgroup_available) {
+                    if (qemuSetupCgroupVcpuPin(cgroup_vcpu,
+                                               vm->def->cputune.vcpupin,
+                                               vm->def->cputune.nvcpupin, i) < 0) {
+                        virReportError(VIR_ERR_OPERATION_INVALID,
+                                       _("failed to set cpuset.cpus in cgroup"
+                                         " for vcpu %d"), i);
+                        ret = -1;
+                        goto cleanup;
+                    }
+                } else {
+                    if (virProcessInfoSetAffinity(cpupids[i],
+                                                  vcpupin->cpumask) < 0) {
+                        virReportError(VIR_ERR_SYSTEM_ERROR,
+                                       _("failed to set cpu affinity for vcpu %d"),
+                                       i);
+                        ret = -1;
+                        goto cleanup;
+                    }
+                }
             }
-        } else {
-            for (i = oldvcpus - 1; i >= nvcpus; i--) {
+
+            virCgroupFree(&cgroup_vcpu);
+        }
+    } else {
+        for (i = oldvcpus - 1; i >= nvcpus; i--) {
+            virDomainVcpuPinDefPtr vcpupin = NULL;
+
+            if (cgroup_available) {
+                int rv = -1;
+
                 rv = virCgroupForVcpu(cgroup, i, &cgroup_vcpu, 0);
                 if (rv < 0) {
                     virReportSystemError(-rv,
@@ -3698,9 +3752,12 @@
                 virCgroupRemove(cgroup_vcpu);
                 virCgroupFree(&cgroup_vcpu);
             }
-        }

-        virCgroupFree(&cgroup);
+            /* Free vcpupin setting */
+            if ((vcpupin = virDomainLookupVcpuPin(vm->def, i))) {
+                VIR_FREE(vcpupin);
+            }
+        }
     }

     priv->nvcpupids = ncpupids;
--
1.7.7.6

On 10/12/2012 11:50 AM, Osier Yang wrote:
The onlined vcpu's pinning policy should inherit def->cpuset if it's not specified explicitly, and the affinity should be set in this case. Conversely, the offlined vcpu's pinning policy should be free()'ed. --- src/conf/domain_conf.c | 21 +++++++++++++ src/conf/domain_conf.h | 3 ++ src/libvirt_private.syms | 1 + src/qemu/qemu_driver.c | 75 ++++++++++++++++++++++++++++++++++++++++----- 4 files changed, 91 insertions(+), 9 deletions(-)
ACK, Martin

When vcpu placement is "auto", the domain process will be pinned to the advisory nodeset queried from numad, while emulatorpin will override that pinning. That means both of them set the pinning policy for the domain process, but they conflict with each other. This patch prohibits specifying emulatorpin if vcpu placement is "auto". See more details in PATCH 1/7. --- src/conf/domain_conf.c | 7 +++++++ 1 files changed, 7 insertions(+), 0 deletions(-) diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c index 17254e3..dda3a12 100644 --- a/src/conf/domain_conf.c +++ b/src/conf/domain_conf.c @@ -8873,6 +8873,13 @@ static virDomainDefPtr virDomainDefParseXML(virCapsPtr caps, } if (n) { + if (def->placement_mode == VIR_DOMAIN_CPU_PLACEMENT_MODE_AUTO) { + virReportError(VIR_ERR_XML_ERROR, "%s", + _("emulatorpin is not allowed when vcpu " + "placement is auto")); + goto error; + } + if (n > 1) { virReportError(VIR_ERR_XML_ERROR, "%s", _("only one emulatorpin is supported")); -- 1.7.7.6

On 10/12/2012 11:50 AM, Osier Yang wrote:
When vcpu placement is "auto", the domain process will be pinned to the advisory nodeset queried from numad, while emulatorpin will override that pinning. That means both of them set the pinning policy for the domain process, but they conflict with each other.
This patch prohibits specifying emulatorpin if vcpu placement is "auto". See more details in PATCH 1/7. --- src/conf/domain_conf.c | 7 +++++++ 1 files changed, 7 insertions(+), 0 deletions(-)
diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c index 17254e3..dda3a12 100644 --- a/src/conf/domain_conf.c +++ b/src/conf/domain_conf.c @@ -8873,6 +8873,13 @@ static virDomainDefPtr virDomainDefParseXML(virCapsPtr caps, }
if (n) { + if (def->placement_mode == VIR_DOMAIN_CPU_PLACEMENT_MODE_AUTO) { + virReportError(VIR_ERR_XML_ERROR, "%s", + _("emulatorpin is not allowed when vcpu " + "placement is auto")) + goto error; + } + if (n > 1) { virReportError(VIR_ERR_XML_ERROR, "%s", _("only one emulatorpin is supported"));
This is the same thing as with those vcpupins before: we cannot make old machines unloadable. Martin

When vcpu placement is "auto", the domain process will be pinned to the advisory nodeset queried from numad, while emulatorpin will override that pinning. That means both of them set the pinning policy for the domain process, but they conflict with each other. This patch ignores emulatorpin if vcpu placement is "auto", because <vcpu> placement can't be simply ignored, as <numatune> placement could default to it. See more details in PATCH 1/7. --- src/conf/domain_conf.c | 7 ++++++- 1 files changed, 6 insertions(+), 1 deletions(-) diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c index b0b1d08..6668819 100644 --- a/src/conf/domain_conf.c +++ b/src/conf/domain_conf.c @@ -8872,7 +8872,12 @@ static virDomainDefPtr virDomainDefParseXML(virCapsPtr caps, goto error; } - if (n) { + /* Ignore emulatorpin if <vcpu> placement is "auto", they + * conflicts with each other, and <vcpu> placement can't be + * simply ignored, as <numatune>'s placement defaults to it. + */ + if (n && + (def->placement_mode != VIR_DOMAIN_CPU_PLACEMENT_MODE_AUTO)) { if (n > 1) { virReportError(VIR_ERR_XML_ERROR, "%s", _("only one emulatorpin is supported")); -- 1.7.7.6

On 10/12/2012 04:44 PM, Osier Yang wrote:
When vcpu placement is "auto", the domain process will be pinned to the advisory nodeset queried from numad, while emulatorpin will override that pinning. That means both of them set the pinning policy for the domain process, but they conflict with each other.
This patch ingore emulatorpin if vcpu placement is "auto", because
s/ingore/ignores/
<vcpu> placement can't be simply ignored for <numatune> placement could default to it. See more details in PATCH 1/7. --- src/conf/domain_conf.c | 7 ++++++- 1 files changed, 6 insertions(+), 1 deletions(-)
diff --git a/src/conf/domain_conf.c b/src/conf/domain_conf.c index b0b1d08..6668819 100644 --- a/src/conf/domain_conf.c +++ b/src/conf/domain_conf.c @@ -8872,7 +8872,12 @@ static virDomainDefPtr virDomainDefParseXML(virCapsPtr caps, goto error; }
- if (n) { + /* Ignore emulatorpin if <vcpu> placement is "auto", they + * conflicts with each other, and <vcpu> placement can't be + * simply ignored, as <numatune>'s placement defaults to it. + */ + if (n && + (def->placement_mode != VIR_DOMAIN_CPU_PLACEMENT_MODE_AUTO)) { if (n > 1) { virReportError(VIR_ERR_XML_ERROR, "%s", _("only one emulatorpin is supported"));
I neglected to mention that I wanted a VIR_WARN there (as with 2/7), sorry for that. I still think it should be there, though, so ACK, but with that warning added. Martin

If the vcpu placement is "static", it's just fine to ignore the def->cpumask if emulatorpin is specified. See more details in PATCH 1/7. --- src/qemu/qemu_process.c | 3 ++- 1 files changed, 2 insertions(+), 1 deletions(-) diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c index f8a2bfd..63a51c4 100644 --- a/src/qemu/qemu_process.c +++ b/src/qemu/qemu_process.c @@ -2618,7 +2618,8 @@ static int qemuProcessHook(void *data) /* This must be done after cgroup placement to avoid resetting CPU * affinity */ - if (qemuProcessInitCpuAffinity(h->driver, h->vm, h->nodemask) < 0) + if (!def->cputune.emulatorpin && + qemuProcessInitCpuAffinity(h->driver, h->vm, h->nodemask) < 0) goto cleanup; if (qemuProcessInitNumaMemoryPolicy(h->vm, h->nodemask) < 0) -- 1.7.7.6

On 10/12/2012 11:50 AM, Osier Yang wrote:
If the vcpu placement is "static", it's just fine to ignore the def->cpumask if emulatorpin is specified. See more details in PATCH 1/7. --- src/qemu/qemu_process.c | 3 ++- 1 files changed, 2 insertions(+), 1 deletions(-)
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c index f8a2bfd..63a51c4 100644 --- a/src/qemu/qemu_process.c +++ b/src/qemu/qemu_process.c @@ -2618,7 +2618,8 @@ static int qemuProcessHook(void *data)
/* This must be done after cgroup placement to avoid resetting CPU * affinity */ - if (qemuProcessInitCpuAffinity(h->driver, h->vm, h->nodemask) < 0) + if (!def->cputune.emulatorpin && + qemuProcessInitCpuAffinity(h->driver, h->vm, h->nodemask) < 0) goto cleanup;
if (qemuProcessInitNumaMemoryPolicy(h->vm, h->nodemask) < 0)
I think this would be more readable inside qemuProcessInitCpuAffinity; it makes more sense to me. What do you think? Martin

On 10/12/2012 04:16 PM, Martin Kletzander wrote:
On 10/12/2012 11:50 AM, Osier Yang wrote:
If the vcpu placement is "static", it's just fine to ignore the def->cpumask if emulatorpin is specified. See more details in PATCH 1/7. --- src/qemu/qemu_process.c | 3 ++- 1 files changed, 2 insertions(+), 1 deletions(-)
diff --git a/src/qemu/qemu_process.c b/src/qemu/qemu_process.c index f8a2bfd..63a51c4 100644 --- a/src/qemu/qemu_process.c +++ b/src/qemu/qemu_process.c @@ -2618,7 +2618,8 @@ static int qemuProcessHook(void *data)
/* This must be done after cgroup placement to avoid resetting CPU * affinity */ - if (qemuProcessInitCpuAffinity(h->driver, h->vm, h->nodemask) < 0) + if (!def->cputune.emulatorpin && + qemuProcessInitCpuAffinity(h->driver, h->vm, h->nodemask) < 0) goto cleanup;
if (qemuProcessInitNumaMemoryPolicy(h->vm, h->nodemask) < 0)
I think it is more readable in qemuProcessInitCpuAffinity, makes more sense to me, what do you think?
But anyway, no problem here, so ACK, because we are sure that if emulatorpin is specified, then there is no placement='auto', and there is no need to check cpumask (that is handled with cgroups already). Martin

On 10/12/2012 05:50 PM, Osier Yang wrote:
As proposed in https://www.redhat.com/archives/libvir-list/2012-October/msg00522.html,
This patch set is trying to resolve the conflicts between <vcpu>, <vcpupin>, and <emulatorpin> when doing CPU tuning for the domain process or vcpu threads.
PATCH 1/7 changes all the docs together, as the three elements are too tightly related to each other; splitting the changes into separate patches would just make things obscure and hard to review. See the commit log of PATCH 1/7 for an overview and details of the solution.
Osier Yang (7): doc: Clarify the relationship between <vcpu>, <vcpupin>, and <emulatorpin> conf: Do not allow vcpupin's cpuid exceed current vcpus number conf: Initialize the pinning policy for vcpus qemu: Create or remove cgroup when doing vcpu hotpluging qemu: Initialize cpuset for hotplugged vcpu as def->cpuset conf: Prohibit emulatorpin if vcpu placement is auto qemu: Ignore def->cpumask if emulatorpin is specified
docs/formatdomain.html.in | 42 +++++++++++------ src/conf/domain_conf.c | 103 ++++++++++++++++++++++++++++++++++++++++- src/conf/domain_conf.h | 3 + src/libvirt_private.syms | 1 + src/qemu/qemu_driver.c | 110 +++++++++++++++++++++++++++++++++++++++++++++ src/qemu/qemu_process.c | 3 +- 6 files changed, 243 insertions(+), 19 deletions(-)
Now pushed the set after more testing (with the nit on 6/7 fixed). Regards, Osier
Participants (2): Martin Kletzander, Osier Yang