
On Tuesday, 7 March 2017 at 5:33 AM, Marcelo Tosatti wrote:
On Mon, Mar 06, 2017 at 05:50:43PM +0800, Eli Qiao wrote:
Add a new virsh command `nodecachestats` to expose the cache usage on a node.
Signed-off-by: Eli Qiao <liyong.qiao@intel.com>
---
 src/libvirt_private.syms |  3 ++-
 src/qemu/qemu_driver.c   | 12 ++++++++++
 src/util/virresctrl.c    | 62 ++++++++++++++++++++++++++++++++++++++++++++++++
 src/util/virresctrl.h    |  8 +++++++
 tools/virsh-host.c       | 49 ++++++++++++++++++++++++++++++++++++++
 5 files changed, 133 insertions(+), 1 deletion(-)
It would perhaps be good to also report the largest contiguous region available, so that if the space is fragmented, management software can detect the situation beforehand.
Like this perhaps:
# LD_LIBRARY_PATH=/root/git/libvirt/src/.libs/
# /root/git/virshnodegetcachestats
ret=0 nparams=10
L3.0: free=11520, max_contiguous=X.
L3.1: free=11520, max_contiguous=X.
Marcelo, thanks for your testing and comments.

Actually, for now I only expose the contiguous cache which can be allocated to VMs, so free == max_contiguous.

Otherwise there would be a case like this:

  schemata 1 = 1000 0000
  schemata 2 = 0010 0000

The default schemata should be 0001 1111 (a schemata must be contiguous, so it can't be 0101 1111), and free_size = 5 * min_cache_unit. `virsh nodecachestats` will report free = 5 * min_cache_unit.

Next, if a new VM requires cache of 1 * min_cache_unit:

  schemata 1 = 1000 0000
  schemata 2 = 0010 0000
  new VM's schemata = 0100 0000

The default schemata should still be 0001 1111, so free_size is still 5 * min_cache_unit, and `virsh nodecachestats` will report free = 5 * min_cache_unit.

I am not sure whether it would be meaningful to users if we exposed (fragmented) cache which cannot be allocated to VMs.
Or:
L3.0.free: 11520
L3.0.max_contiguous: X
L3.1.free: 11520
L3.1.max_contiguous: X
Not sure what is the preferred way to do this in libvirt.
Otherwise, testing looks good now.