On 12/16/14 09:04, Eric Blake wrote:
Wire up backing chain recursion. Note that for now, we just use
the same allocation numbers for read-only backing files as
offline domains would report. These are not the correct
allocation numbers for qcow2 over block devices during
block-commit, and they miss out on the fact that qemu also
reports read statistics on backing files that are worth knowing
about (seriously - for a thin-provisioned setup, it would be
nice to easily get at a count of how many reads were serviced
from the backing file in relation to reads serviced by the
active layer). But it is at least sufficient to prove that the
algorithm is working, and to let other people start coding to
the interface while waiting for later patches that get the
correct information.
For a running domain, where one of the two images has a backing
file, I see the traditional output:
$ virsh domstats --block testvm2
Domain: 'testvm2'
block.count=2
block.0.name=vda
block.0.path=/tmp/wrapper.qcow2
block.0.rd.reqs=1
block.0.rd.bytes=512
block.0.rd.times=28858
block.0.wr.reqs=0
block.0.wr.bytes=0
block.0.wr.times=0
block.0.fl.reqs=0
block.0.fl.times=0
block.0.allocation=0
block.0.capacity=1310720000
block.0.physical=200704
block.1.name=vdb
block.1.path=/dev/sda7
block.1.rd.reqs=0
block.1.rd.bytes=0
block.1.rd.times=0
block.1.wr.reqs=0
block.1.wr.bytes=0
block.1.wr.times=0
block.1.fl.reqs=0
block.1.fl.times=0
block.1.allocation=0
block.1.capacity=1310720000
vs. the new output:
$ virsh domstats --block --backing testvm2
Domain: 'testvm2'
block.count=3
block.0.name=vda
block.0.path=/tmp/wrapper.qcow2
block.0.rd.reqs=1
block.0.rd.bytes=512
block.0.rd.times=28858
block.0.wr.reqs=0
block.0.wr.bytes=0
block.0.wr.times=0
block.0.fl.reqs=0
block.0.fl.times=0
block.0.allocation=0
block.0.capacity=1310720000
block.0.physical=200704
block.1.name=vda
block.1.path=/dev/sda6
block.1.backingIndex=1
block.1.allocation=1073741824
block.1.capacity=1310720000
block.1.physical=1073741824
block.2.name=vdb
block.2.path=/dev/sda7
block.2.rd.reqs=0
block.2.rd.bytes=0
block.2.rd.times=0
block.2.wr.reqs=0
block.2.wr.bytes=0
block.2.wr.times=0
block.2.fl.reqs=0
block.2.fl.times=0
block.2.allocation=0
block.2.capacity=1310720000
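
(Not part of this patch, but for anyone who wants to start coding to
the interface: a minimal client-side sketch of walking this output
through virConnectGetAllDomainStats(). The
VIR_CONNECT_GET_ALL_DOMAINS_STATS_BACKING flag is the one this series
introduces, so treat its use here as an assumption until the series is
in.)

/* Minimal sketch: dump the block.* typed parameters, including
 * backing chain entries, for every domain.  Assumes the new
 * VIR_CONNECT_GET_ALL_DOMAINS_STATS_BACKING flag from this series. */
#include <stdio.h>
#include <stdlib.h>
#include <libvirt/libvirt.h>

int
main(void)
{
    virConnectPtr conn = virConnectOpenReadOnly("qemu:///system");
    virDomainStatsRecordPtr *records = NULL;
    int nrecords;
    int i, j;
    int ret = EXIT_FAILURE;

    if (!conn)
        return ret;

    /* request only the block stats group, but include backing images */
    nrecords = virConnectGetAllDomainStats(conn, VIR_DOMAIN_STATS_BLOCK,
                                           &records,
                                           VIR_CONNECT_GET_ALL_DOMAINS_STATS_BACKING);
    if (nrecords < 0)
        goto cleanup;

    for (i = 0; i < nrecords; i++) {
        printf("Domain: '%s'\n", virDomainGetName(records[i]->dom));
        for (j = 0; j < records[i]->nparams; j++) {
            virTypedParameterPtr p = &records[i]->params[j];

            /* block.* stats are strings, uints and ullongs */
            switch (p->type) {
            case VIR_TYPED_PARAM_STRING:
                printf("  %s=%s\n", p->field, p->value.s);
                break;
            case VIR_TYPED_PARAM_UINT:
                printf("  %s=%u\n", p->field, p->value.ui);
                break;
            case VIR_TYPED_PARAM_ULLONG:
                printf("  %s=%llu\n", p->field, p->value.ul);
                break;
            default:
                break;
            }
        }
    }
    ret = EXIT_SUCCESS;

 cleanup:
    virDomainStatsRecordListFree(records);
    virConnectClose(conn);
    return ret;
}
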
* src/qemu/qemu_driver.c (QEMU_DOMAIN_STATS_BACKING): New internal
enum bit.
(qemuConnectGetAllDomainStats): Recognize new user flag, and pass
details to...
(qemuDomainGetStatsBlock): ...here, where we can do longer recursion.
(qemuDomainGetStatsOneBlock): Output new field.
* src/qemu/qemu_domain.c (qemuDomainStorageAlias): Tolerate NULL
alias input for offline domain.
Signed-off-by: Eric Blake <eblake(a)redhat.com>
---
src/qemu/qemu_domain.c | 3 +++
src/qemu/qemu_driver.c | 57 +++++++++++++++++++++++++++++++++++++++-----------
2 files changed, 48 insertions(+), 12 deletions(-)
@@ -18651,18 +18673,25 @@ qemuDomainGetStatsBlock(virQEMUDriverPtr driver,
     for (; i < dom->def->ndisks; i++) {
         virDomainDiskDefPtr disk = dom->def->disks[i];
+        virStorageSourcePtr src = disk->src;
+        unsigned int backing_idx = 0;
-        if (qemuDomainGetStatsOneBlock(driver, cfg, dom, record, maxparams,
-                                       disk, disk->src, i, abbreviated,
-                                       stats) < 0)
-            goto cleanup;
+        while (src && (!backing_idx || backing)) {
I'd make the special case of the first element a bit more obvious by
spelling it as "backing_idx == 0" rather than using the negation
operator, which is usually reserved for pointers.
+            if (qemuDomainGetStatsOneBlock(driver, cfg, dom, record, maxparams,
+                                           disk, src, visited, backing_idx,
+                                           abbreviated, stats) < 0)
+                goto cleanup;
+            visited++;
+            backing_idx++;
+            src = src->backingStore;
+        }
     }
     ret = 0;
As an additional thought: the stats output is now extremely long for
VMs with a long backing chain, and quite a few of the stats tend to
always be 0. Perhaps we could start skipping those (at least for the
non-top devices).
ACK.
Peter