On Fri, Jan 08, 2016 at 10:49:37 +0100, Jiri Denemark wrote:
memory_dirty_rate corresponds to dirty-pages-rate in QEMU and
memory_iteration is what QEMU reports in dirty-sync-count.
Signed-off-by: Jiri Denemark <jdenemar(a)redhat.com>
This looks more like a feature than just a "cleanup".
---
 include/libvirt/libvirt-domain.h | 19 +++++++++++++++++++
 src/qemu/qemu_domain.c           |  8 ++++++++
 src/qemu/qemu_migration.c        | 12 ++++++++++++
 src/qemu/qemu_monitor.h          |  2 ++
 src/qemu/qemu_monitor_json.c     |  4 ++++
 tools/virsh-domain.c             | 16 ++++++++++++++++
6 files changed, 61 insertions(+)
diff --git a/include/libvirt/libvirt-domain.h b/include/libvirt/libvirt-domain.h
index a1ea6a5..d26faa5 100644
--- a/include/libvirt/libvirt-domain.h
+++ b/include/libvirt/libvirt-domain.h
@@ -2724,6 +2724,25 @@ int virDomainAbortJob(virDomainPtr dom);
*/
# define VIR_DOMAIN_JOB_MEMORY_BPS "memory_bps"
+/**
+ * VIR_DOMAIN_JOB_MEMORY_DIRTY_RATE:
+ *
+ * virDomainGetJobStats field: number of memory pages dirtied by the guest
+ * per second, as VIR_TYPED_PARAM_ULLONG. This statistic makes sense only
+ * when live migration is running.
Do we document anywhere how to convert page counts into memory sizes?
That is, how big the pages are, and whether hugepages count as one page
or more?
+ */
+# define VIR_DOMAIN_JOB_MEMORY_DIRTY_RATE "memory_dirty_rate"
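For context, this is how a management app could consume the new field
once this lands; just a sketch built on the existing
virDomainGetJobStats/virTypedParamsGetULLong APIs, with most error
handling elided (printDirtyRate is a made-up helper name):

#include <stdio.h>
#include <libvirt/libvirt.h>

static void
printDirtyRate(virDomainPtr dom)
{
    virTypedParameterPtr params = NULL;
    int nparams = 0;
    int type;
    unsigned long long rate = 0;

    if (virDomainGetJobStats(dom, &type, &params, &nparams, 0) < 0)
        return;

    /* The field is only present/meaningful while a live migration
     * job is running. */
    if (virTypedParamsGetULLong(params, nparams,
                                VIR_DOMAIN_JOB_MEMORY_DIRTY_RATE,
                                &rate) == 1)
        printf("dirty rate: %llu pages/s\n", rate);

    virTypedParamsFree(params, nparams);
}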
+
+/**
+ * VIR_DOMAIN_JOB_MEMORY_ITERATION:
+ *
+ * virDomainGetJobStats field: current iteration over the domain's memory
+ * during live migration, as VIR_TYPED_PARAM_ULLONG. This is set to zero
+ * when memory starts to be transferred and the value is increased by one
+ * every time a new iteration is started to transfer memory pages dirtied
+ * since the last iteration.
+ */
+# define VIR_DOMAIN_JOB_MEMORY_ITERATION "memory_iteration"
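Similarly, a client could combine the two new fields with the existing
memory_bps to guess whether a migration is converging. Everything below
is hypothetical, not part of this patch: migrationLooksStuck, pageSize,
and the iteration threshold are made up, and the pageSize parameter is
exactly why my page size question above matters.

/* Reuses the includes from the sketch above. */
static int
migrationLooksStuck(virTypedParameterPtr params, int nparams,
                    unsigned long long pageSize)
{
    unsigned long long dirtyRate = 0; /* pages/s */
    unsigned long long bps = 0;       /* bytes/s */
    unsigned long long iteration = 0;

    if (virTypedParamsGetULLong(params, nparams,
                                VIR_DOMAIN_JOB_MEMORY_DIRTY_RATE,
                                &dirtyRate) != 1 ||
        virTypedParamsGetULLong(params, nparams,
                                VIR_DOMAIN_JOB_MEMORY_BPS,
                                &bps) != 1 ||
        virTypedParamsGetULLong(params, nparams,
                                VIR_DOMAIN_JOB_MEMORY_ITERATION,
                                &iteration) != 1)
        return 0;

    /* The guest keeps dirtying memory faster than we transfer it even
     * after several passes; 5 is an arbitrary example threshold. */
    return iteration > 5 && dirtyRate * pageSize > bps;
}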
ACK