[libvirt] [PATCH] docs: Add pit timer policy change description to news.xml
by Maxim Nestratov
Add a bug fix description for applying the correct pit timer policy.
---
docs/news.xml | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/docs/news.xml b/docs/news.xml
index e341ba2..d616ebd 100644
--- a/docs/news.xml
+++ b/docs/news.xml
@@ -184,6 +184,17 @@
libvirt can recognize.
</description>
</change>
+ <change>
+ <summary>
+ qemu: Fix pit timer tick policy
+ </summary>
+ <description>
+ Due to a mistake, when the <code>policy=delay</code> pit timer tick
+ policy was specified, <code>policy=discard</code> was used instead. Now
+ it is possible to use both <code>discard</code> and <code>delay</code>
+ policies correctly.
+ </description>
+ </change>
</section>
</release>
<release version="v2.5.0" date="2016-12-04">
--
2.4.11
[libvirt] [V3] RFC for support cache tune in libvirt
by Qiao, Liyong
Add support for cache allocation.
Thanks Martin for the comments on the previous version; this is v3 of the RFC. I have some PoC code [2]. The following changes are partly finished in the PoC.
# Proposed Changes
## virsh command line
1. Extend the output of nodeinfo to expose the L3 (last level) cache size.
This will expose how much cache a host has available for allocation.
root@s2600wt:~/linux# virsh nodeinfo | grep L3
L3 cache size: 56320 KiB
2. Extend the capabilities output.
virsh capabilities | grep resctrl
<cpu>
...
<resctrl name='L3' unit='KiB' cache_size='56320' cache_unit='2816'/>
</cpu>
This tells us that the host has resctrl enabled (you can find it in /sys/fs/resctrl)
and that it supports allocating 'L3' cache: the total 'L3' cache size is 56320 KiB, and the minimum unit size of 'L3' cache is 2816 KiB.
P.S. The cache unit size is the minimum amount of L3 cache that can be allocated; it is hardware dependent and cannot be changed.
3. Add a new virsh command 'nodecachestats':
This API exposes how much cache resource is left on each piece of hardware (CPU socket).
It is formatted as:
<resource_type>.<resource_id>: remaining size in KiB
For example, on a host with 2 CPU sockets and only the cat_l3 feature enabled:
root@s2600wt:~/linux# virsh nodecachestats
L3.0 : 56320 KiB
L3.1 : 56320 KiB
P.S. resource_type can be L3, L3DATA, L3CODE, L2 for now.
4. Add a new interface to manage how much cache can be allocated to a domain
root@s2600wt:~/linux# virsh cachetune kvm02 --l3.count 2
root@s2600wt:~/linux# virsh cachetune kvm02
l3.count : 2
This will allocate 2 units (2 * 2816 = 5632 KiB) of L3 cache for domain kvm02.
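As an illustration of the unit math (a hypothetical helper, not part of the PoC or the proposed API): the count passed to --l3.count is the requested cache size divided by the cache_unit reported in the capabilities, rounded up to whole units.

#include <stdio.h>

/* Hypothetical helper (not part of the PoC): convert a requested cache
 * size in KiB into the unit count expected by 'cachetune --l3.count',
 * given the cache_unit advertised in the capabilities (2816 KiB here).
 * The result is rounded up to whole units. */
static unsigned int l3_units_for_size(unsigned int size_kib,
                                      unsigned int unit_kib)
{
    return (size_kib + unit_kib - 1) / unit_kib;
}

int main(void)
{
    /* Asking for 5000 KiB on this host needs 2 units (2 * 2816 = 5632 KiB). */
    printf("%u units\n", l3_units_for_size(5000, 2816));
    return 0;
}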
## Domain XML changes
Cache Tuning
<domain>
...
<cachetune>
<l3_cache_count>2</l3_cache_count>
</cachetune>
...
</domain>
## Restrictions for using cache tune on a multi-socket host
The L3 cache is a per-socket resource and the kernel needs to know what the affinity looks like, so a VM running on a multi-socket host must have a NUMA setting or a vcpu pinning setting, otherwise cache tune will fail.
[1] kernel support https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git/tree/arch/x86/ke...
[2] libvirt PoC(not finished yet) https://github.com/taget/libvirt/commits/cat_new
Best Regards
Eli Qiao(乔立勇)OpenStack Core team OTC Intel.
--
[libvirt] Query regarding sharing of vcpu's (cpu alloc ratio 1.0) between VM's
by BharaniKumar Gedela
Hi,
I have a question about sharing a vCPU across VMs (CPU allocation ratio 1.0 and HT
not enabled).
I have a use case where a VM needs vCPUs and one of the vCPUs is dedicated
to the VM for some traffic processing. We want the other vCPU, which is used
for control processing, to be shareable with a similar VM's control
processing.
Is this possible and supported in OpenStack/libvirt/KVM?
If so, could you please advise how to test it?
Regards,
Bharani..
[libvirt] Regarding Migration Statistics
by Anubhav Guleria
Greetings,
I am writing code using the libvirt API to migrate a VM between two physical
hosts (QEMU/KVM), say some n number of times.
1) Right now I am using virDomainPtr virDomainMigrate (.......) and to
calculate the total migration time I am using something like this:
clock_gettime(CLOCK_MONOTONIC_RAW, &begin);
migrate(domainToMigrate, nodeToMigrate);
clock_gettime(CLOCK_MONOTONIC_RAW, &end);
Total Migration Time = end.tv_sec - begin.tv_sec
Is this the correct way to calculate total migration time? And is there some
way to calculate the downtime (not how to set it)?
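For what it's worth, a minimal sketch of that measurement, assuming dconn is an
already open virConnectPtr to the destination host (the timed_migrate name is
made up for illustration); including tv_nsec keeps the sub-second part:

#include <time.h>
#include <libvirt/libvirt.h>

/* Wall-clock duration of one live migration, in seconds.  'dconn' is
 * assumed to be an open connection to the destination host. */
static double timed_migrate(virDomainPtr dom, virConnectPtr dconn)
{
    struct timespec begin, end;
    virDomainPtr migrated;

    clock_gettime(CLOCK_MONOTONIC_RAW, &begin);
    migrated = virDomainMigrate(dom, dconn, VIR_MIGRATE_LIVE, NULL, NULL, 0);
    clock_gettime(CLOCK_MONOTONIC_RAW, &end);

    if (!migrated)
        return -1.0;             /* migration failed */
    virDomainFree(migrated);     /* drop the handle to the migrated domain */

    return (end.tv_sec - begin.tv_sec) +
           (end.tv_nsec - begin.tv_nsec) / 1e9;
}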
2) I am interested in identifying, in particular, other migration statistics
like: number of iterations in pre-copy, memory transferred in each iteration,
etc.
I was going through the API and found the virDomainJobInfo
<http://libvirt.org/html/libvirt-libvirt-domain.html#virDomainJobInfo> and
virDomainGetJobStats
<http://libvirt.org/html/libvirt-libvirt-domain.html#virDomainGetJobStats>
functions, but how to use them is not very clear. Can anyone point me to the
right place to achieve this objective?
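As a hedged sketch of one possible approach (not a definitive recipe):
virDomainGetJobStats with the VIR_DOMAIN_JOB_STATS_COMPLETED flag can be queried
on the source domain right after the migration finishes; which fields (e.g.
downtime, memory_iteration) are actually reported depends on the libvirt and
QEMU versions involved.

#include <stdio.h>
#include <libvirt/libvirt.h>

/* Print a few statistics of the most recently completed job (call this on
 * the source domain right after the migration finishes).  Fields the
 * hypervisor did not report are simply left at 0. */
static void print_migration_stats(virDomainPtr dom)
{
    virTypedParameterPtr params = NULL;
    int nparams = 0;
    int type = 0;
    unsigned long long elapsed = 0, downtime = 0, iterations = 0;

    if (virDomainGetJobStats(dom, &type, &params, &nparams,
                             VIR_DOMAIN_JOB_STATS_COMPLETED) < 0)
        return;

    virTypedParamsGetULLong(params, nparams, VIR_DOMAIN_JOB_TIME_ELAPSED,
                            &elapsed);
    virTypedParamsGetULLong(params, nparams, VIR_DOMAIN_JOB_DOWNTIME,
                            &downtime);
    virTypedParamsGetULLong(params, nparams, VIR_DOMAIN_JOB_MEMORY_ITERATION,
                            &iterations);

    printf("total: %llu ms, downtime: %llu ms, pre-copy iterations: %llu\n",
           elapsed, downtime, iterations);

    virTypedParamsFree(params, nparams);
}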
Thanks in advance.
And sorry if that was too silly to ask.
Anubhav
[libvirt] OpenStack/libvirt CAT interface
by Marcelo Tosatti
There have been queries about the OpenStack interface
for CAT:
http://bugzilla.redhat.com/show_bug.cgi?id=1299678
Comment 2 says:
Sahid Ferdjaoui 2016-01-19 10:58:48 EST
A spec will have to be addressed, after a first look this feature needs
some work in several components of Nova to maintain/schedule/consume
host's cache. I can work on that spec and implementation it when libvirt
will provides information about cache and feature to use it for guests.
I could add a comment about parameters to resctrltool, but since
this depends on the libvirt interface, it would be good to know
what the libvirt interface exposes first.
I believe it should be essentially similar to OpenStack's
"reserved_host_memory_mb":
Set the reserved_host_memory_mb to reserve RAM for host processes. For
the purposes of testing I am going to use the default of 512 MB:
reserved_host_memory_mb=512
But rather use:
rdt_cat_cache_reservation=type=code/data/both,size=10mb,cacheid=2;
type=code/data/both,size=2mb,cacheid=1;...
(per-vcpu).
Where cache-id is optional.
What is cache-id (from Documentation/x86/intel_rdt_ui.txt on recent
kernel sources):
Cache IDs
---------
On current generation systems there is one L3 cache per socket and L2
caches are generally just shared by the hyperthreads on a core, but this
isn't an architectural requirement. We could have multiple separate L3
caches on a socket, multiple cores could share an L2 cache. So instead
of using "socket" or "core" to define the set of logical cpus sharing
a resource we use a "Cache ID". At a given cache level this will be a
unique number across the whole system (but it isn't guaranteed to be a
contiguous sequence, there may be gaps). To find the ID for each logical
CPU look in /sys/devices/system/cpu/cpu*/cache/index*/id
WHAT THE USER NEEDS TO SPECIFY FOR VIRTUALIZATION (KVM-RT)
==========================================================
For virtualization the following scenario is desired,
on a given socket:
* VM-A with VCPUs VM-A.vcpu-1, VM-A.vcpu-2.
* VM-B with VCPUs VM-B.vcpu-1, VM-B.vcpu-2.
With one realtime workload on each vcpu-2.
Assume VM-A.vcpu-2 on pcpu 3.
Assume VM-B.vcpu-2 on pcpu 5.
Assume pcpus 0-5 on cacheid 0.
We want VM-A.vcpu-2 to have a certain region of cache reserved,
and VM-B.vcpu-2 as well. vcpu-1 for both VMs can use the default group
(that is, without reserved L3 cache).
This translates to the following resctrltool-style reservations:
res.vm-a.vcpu-2
type=both,size=VM-A-RESSIZE,cache-id=0
res.vm-b.vcpu-2
type=both,size=VM-B-RESSIZE,cache-id=0
Which translates to the following in resctrlfs:
res.vm-a.vcpu-2
type=both,size=VM-A-RESSIZE,cache-id=0
type=both,size=default-size,cache-id=1
...
res.vm-b.vcpu-2
type=both,size=VM-B-RESSIZE,cache-id=0
type=both,size=default-size,cache-id=1
...
Which is what we want, since the VCPUs are pinned.
res.vm-a.vcpu-1 and res.vm-b.vcpu-1 don't need to
be assigned to any reservation, which means they'll
remain on the default group.
RESTRICTIONS TO THE SYNTAX ABOVE
================================
Rules for the parameters:
* type=code must be paired with a type=data entry.
ABOUT THE LIST INTERFACE
========================
About an interface for listing the reservations
of the system to OpenStack.
I think that what OpenStack needs is to check, before
starting a guest on a given host, that there is sufficient
space available for the reservation.
To do that, it can:
1) resctrltool list (the end of the output mentions
how much free space is available), or
via resctrlfs directly (it has to lock the filesystem,
read each directory and each schemata, and count
the number of zero bits; see the sketch below).
2) Via libvirt
BTW, resctrltool/the API should be fixed to list the amount of contiguous
free space.
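A rough sketch of the zero-bit counting mentioned in option 1 (a hypothetical
helper, not resctrltool or libvirt code): OR together the CBMs read from every
group's schemata for one cache-id and count the bits left unset; each unset
bit corresponds to total_cache_size / cbm_len KiB of free cache.

#include <stdio.h>

/* Hypothetical helper: 'cbms' holds the CBM of every resctrl group for a
 * single cache-id (parsed from the schemata files).  Bits set in no group
 * are free; each free bit is worth total_kib / cbm_len KiB. */
static unsigned int free_kib(const unsigned long *cbms, size_t ngroups,
                             unsigned int cbm_len, unsigned int total_kib)
{
    unsigned long unionmask = 0;
    unsigned int used = 0, i;
    size_t g;

    for (g = 0; g < ngroups; g++)
        unionmask |= cbms[g];

    for (i = 0; i < cbm_len; i++)
        if (unionmask & (1UL << i))
            used++;

    return (cbm_len - used) * (total_kib / cbm_len);
}

int main(void)
{
    /* Example: 20-bit CBM, 56320 KiB L3, two groups using 0xf and 0xf0. */
    unsigned long cbms[] = { 0xf, 0xf0 };
    printf("%u KiB free\n", free_kib(cbms, 2, 20, 56320));
    return 0;
}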
[libvirt] RFC: Use __attribute__ ((cleanup) in libvirt ?
by Daniel P. Berrange
For those who don't already know, GCC and CLang both implement a C language
extension that enables automatic free'ing of resources when variables go
out of scope. This is done by annotating the variable with the "cleanup"
attribute, pointing to a function the compiler will wire up a call to when
unwinding the stack. Since the annotation points to an arbitrary user
defined function, you're not limited to simple free()-like semantics. The
cleanup function could unlock a mutex, or decrement a reference count, etc.
This annotation is used extensively by systemd, and libguestfs, amongst
other projects. This obviously doesn't bring full garbage collection to
C, but it does enable the code to be simplified. By removing the need to
put in many free() (or equiv) calls to cleanup state, the "interesting"
logic in the code stands out more, not being obscured by cleanup calls
and goto jumps.
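For anyone who hasn't used it, a minimal standalone sketch of the extension
itself (not libvirt code, and not the libguestfs macros linked below); the
cleanup function receives a pointer to the annotated variable and runs on
every exit path from its scope:

#include <stdio.h>
#include <stdlib.h>

/* The cleanup function receives a pointer to the annotated variable. */
static void free_charp(char **ptr)
{
    free(*ptr);
}

int main(void)
{
    /* 'buf' is freed automatically whenever it goes out of scope,
     * on every return path, with no explicit free() or 'goto cleanup'. */
    char *buf __attribute__((cleanup(free_charp))) = malloc(64);

    if (!buf)
        return 1;

    snprintf(buf, 64, "hello");
    printf("%s\n", buf);
    return 0;
}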
I'm wondering what people think of making use of this in libvirt?
To my mind the only real reason to *not* use it would be to maintain
code portability to non-GCC/non-CLang compilers. OS-X, *BSD and *Linux
all use GCC or CLang or both, so it's a non-issue there. So the only place
this could cause pain is people building libvirt on Win32, who are using
the Microsoft compilers instead of GCC.
IMHO, it is perfectly valid for us to declare that MSVC is unsupported
with Libvirt and users must use GCC to build on Windows, either natively
via cygwin, or cross-build from Linux hosts.
As an example of what it would involve...
This commit enables the basic helper macros in libguestfs:
https://github.com/libguestfs/libguestfs/commit/98b64650c852ccc9a8eef8b96...
These commits make use of them
https://github.com/libguestfs/libguestfs/commit/61162bdce1a00a921a47eb3e7...
https://github.com/libguestfs/libguestfs/commit/5a3da366268825b26b470cde3...
https://github.com/libguestfs/libguestfs/commit/791ad3e9e600ef528e3e5a8d5...
Finally, I'm absolutely *not* volunteering to actually implement this
idea myself, as I don't have the free time. I just want to raise it
as a discussion item, and if we agree it's something we'd do, then we
can make it a GSoC idea, or let any other interested person hack
on it at will. There's no need for a "big bang" convert everything
approach, we can do it incrementally.
Regards,
Daniel
--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://entangle-photo.org -o- http://search.cpan.org/~danberr/ :|
[libvirt] Question about hypervisor <features> that are not tristate
by Jim Fehlig
Happy new year!
Nearly a year ago I reported an issue with the <hap> hypervisor feature on Xen
[1] and am now seeing a similar issue with the <pae> feature. The default
setting of pae changed between xend and libxl. When not specified, xend
would enable pae for HVM domains. Clients such as xm and the old libvirt driver
did not have to explicitly enable it. In libxl, the pae field within
libxl_domain_build_info is initialized to 0. Clients must enable pae, and indeed
xl will do so if pae=1 is not specified in the xl.cfg.
The xend behavior prevents libvirt from disabling pae, whereas the libxl behavior
causes a guest ABI change (config that worked with libvirt+xend doesn't with
libvirt+libxl). The libxl behavior also forces management software (e.g.
OpenStack nova) to add <pae> where it wasn't needed before.
To solve this problem for <hap>, it was changed to a tristate [2], allowing
it to be turned off with an explicit <hap state='off'/>, and on if not specified
or with <hap/> or <hap state='on'/>. Should <pae> (and the remaining hypervisor features
that are not tristate) be converted to tristate similar to <hap>? Alternatively,
I could simply set pae=1 for all HVM domains in the libxl driver. Like the old
libvirt+xend behavior it couldn't be turned off, but I don't think there is a
practical use-case to do so. At least no one has complained over all the years
of libvirt+xend use.
Regards,
Jim
[1] https://www.redhat.com/archives/libvir-list/2016-February/msg00197.html
[2] https://www.redhat.com/archives/libvir-list/2016-March/msg00001.html
[libvirt] [PATCH] news: Reflect hugepages patch
by Michal Privoznik
In f55afd8 I made libvirt construct the hugepage path on a
per-domain basis. However, this change was not reflected in
the NEWS file.
Signed-off-by: Michal Privoznik <mprivozn(a)redhat.com>
---
I'd push this right away, but I'd rather wait for our NEWS
police^Wvolunteer to check it.
docs/news.xml | 11 +++++++++++
1 file changed, 11 insertions(+)
diff --git a/docs/news.xml b/docs/news.xml
index e341ba248..f6f17d55f 100644
--- a/docs/news.xml
+++ b/docs/news.xml
@@ -184,6 +184,17 @@
libvirt can recognize.
</description>
</change>
+ <change>
+ <summary>
+ qemu: Create hugepage path on per domain basis
+ </summary>
+ <description>
+ Historically, all hugepage enabled domains shared the same path under
+ hugetlbfs. This left libvirt unable to correctly set security labels
+ on it. With this release, however, each domain is put into a
+ separate path which is also correctly labeled.
+ </description>
+ </change>
</section>
</release>
<release version="v2.5.0" date="2016-12-04">
--
2.11.0
[libvirt] [PATCH] qemuDomainSetupAllInputs: Update debug message
by Michal Privoznik
Due to a copy-paste error, the debug message reads:
Setting up disks
It should have been:
Setting up inputs.
Signed-off-by: Michal Privoznik <mprivozn(a)redhat.com>
---
Pushed under trivial rule.
src/qemu/qemu_domain.c | 4 ++--
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/src/qemu/qemu_domain.c b/src/qemu/qemu_domain.c
index b26c02bda..473d0c1a2 100644
--- a/src/qemu/qemu_domain.c
+++ b/src/qemu/qemu_domain.c
@@ -7275,14 +7275,14 @@ qemuDomainSetupAllInputs(virQEMUDriverPtr driver,
{
size_t i;
- VIR_DEBUG("Setting up disks");
+ VIR_DEBUG("Setting up inputs");
for (i = 0; i < vm->def->ninputs; i++) {
if (qemuDomainSetupInput(driver,
vm->def->inputs[i],
devPath) < 0)
return -1;
}
- VIR_DEBUG("Setup all disks");
+ VIR_DEBUG("Setup all inputs");
return 0;
}
--
2.11.0
[libvirt] [PATCH for 3.0.0 0/2] Fix qemuMonitorJSONParseCPUModelProperty
by Jiri Denemark
qemuMonitorJSONParseCPUModelProperty was made a little bit too strict,
which could cause compatibility issues between libvirt 3.0.0 and QEMU
2.9.0 (or newer depending on when query-cpu-model-expansion is
implemented for x86).
Jiri Denemark (2):
qemu: Don't check CPU model property key
qemu: Ignore non-boolean CPU model properties
src/qemu/qemu_monitor_json.c | 15 +++------------
1 file changed, 3 insertions(+), 12 deletions(-)
--
2.11.0