
On 25.07.2018 at 17:57, Peter Krempa wrote:
This series adds support for starting VMs with, and hotplugging, disks via -blockdev/blockdev-add.
Blockjobs are not supported, so the last patch should not be applied yet, as some refactoring of the jobs is required first.
At the beginning of the series there are a few cleanup patches which may be pushed even at this point.
The main reason this is in RFC state is that block stats reporting does not work.
The following command:
{"execute":"query-blockstats","arguments":{"query-nodes":true}}
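For reference, the command above can be issued from a script over QEMU's QMP socket. A minimal sketch, not part of the original report; the socket path and the way QEMU exposes the monitor (e.g. -qmp unix:PATH,server,nowait) are assumptions here:

```python
import json
import socket

def blockstats_cmd(query_nodes=True):
    # Build exactly the QMP command shown above.
    return {"execute": "query-blockstats",
            "arguments": {"query-nodes": query_nodes}}

def query_blockstats(qmp_socket_path, query_nodes=True):
    # Connect to the QMP UNIX socket (path is an assumption), perform the
    # mandatory capabilities negotiation, then send the command and return
    # the 'return' member of the reply.
    with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as sock:
        sock.connect(qmp_socket_path)
        f = sock.makefile("rw")
        f.readline()                                          # QMP greeting
        f.write(json.dumps({"execute": "qmp_capabilities"}) + "\n")
        f.flush()
        f.readline()                                          # capabilities reply
        f.write(json.dumps(blockstats_cmd(query_nodes)) + "\n")
        f.flush()
        return json.loads(f.readline())["return"]
```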
query-nodes was added in commit f71eaa74c0b by Fam and Max (CCed). I'm not sure what it was needed for, and the commit message doesn't help with that, but I suppose the addition was related to wr_highest_offset (see below).
Returns no reasonable data:
{ "stats": { "flush_total_time_ns": 0, "wr_highest_offset": 0, "wr_total_time_ns": 0,
    "failed_wr_operations": 0, "failed_rd_operations": 0, "wr_merged": 0, "wr_bytes": 0,
    "timed_stats": [], "failed_flush_operations": 0, "account_invalid": false,
    "rd_total_time_ns": 0, "flush_operations": 0, "wr_operations": 0, "rd_merged": 0,
    "rd_bytes": 0, "invalid_flush_operations": 0, "account_failed": false,
    "rd_operations": 0, "invalid_wr_operations": 0, "invalid_rd_operations": 0 },
  "node-name": "libvirt-7-storage" },
{ "parent": { "stats": { "flush_total_time_ns": 0, "wr_highest_offset": 0, "wr_total_time_ns": 0,
      "failed_wr_operations": 0, "failed_rd_operations": 0, "wr_merged": 0, "wr_bytes": 0,
      "timed_stats": [], "failed_flush_operations": 0, "account_invalid": false,
      "rd_total_time_ns": 0, "flush_operations": 0, "wr_operations": 0, "rd_merged": 0,
      "rd_bytes": 0, "invalid_flush_operations": 0, "account_failed": false,
      "rd_operations": 0, "invalid_wr_operations": 0, "invalid_rd_operations": 0 },
    "node-name": "libvirt-7-storage" },
  "stats": { "flush_total_time_ns": 0, "wr_highest_offset": 0, "wr_total_time_ns": 0,
    "failed_wr_operations": 0, "failed_rd_operations": 0, "wr_merged": 0, "wr_bytes": 0,
    "timed_stats": [], "failed_flush_operations": 0, "account_invalid": false,
    "rd_total_time_ns": 0, "flush_operations": 0, "wr_operations": 0, "rd_merged": 0,
    "rd_bytes": 0, "invalid_flush_operations": 0, "account_failed": false,
    "rd_operations": 0, "invalid_wr_operations": 0, "invalid_rd_operations": 0 },
  "node-name": "libvirt-7-format" },
The 'libvirt-7-storage' and 'libvirt-7-format' nodes represent the ISO image backing the CDROM used to boot the VM, so reads were certainly executed.
The only value that even exists at the node level is wr_highest_offset. All the other stats are accounted only at the device level, so we can't return anything meaningful here.
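To illustrate, a short sketch that pulls out the one per-node counter from a query-nodes=true reply. The reply below is an abbreviation of the output above, trimmed to a few fields for readability (an assumption for brevity; the real reply carries all the counters listed):

```python
import json

# Abbreviated sample of the query-nodes=true reply shown above.
reply = json.loads("""
[
  {"node-name": "libvirt-7-storage",
   "stats": {"wr_highest_offset": 0, "rd_bytes": 0, "rd_operations": 0}},
  {"node-name": "libvirt-7-format",
   "stats": {"wr_highest_offset": 0, "rd_bytes": 0, "rd_operations": 0}}
]
""")

# wr_highest_offset is the only counter kept per node; all the other
# counters are accounted at the device level, hence the zeroes here.
per_node = {entry["node-name"]: entry["stats"]["wr_highest_offset"]
            for entry in reply}
print(per_node)   # {'libvirt-7-storage': 0, 'libvirt-7-format': 0}
```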
In the old approach, when we use -drive and 'query-nodes' is false, the output looks like this:
{ "device": "drive-ide0-0-0",
  "parent": { "stats": { "flush_total_time_ns": 0, "wr_highest_offset": 0, "wr_total_time_ns": 0,
      "failed_wr_operations": 0, "failed_rd_operations": 0, "wr_merged": 0, "wr_bytes": 0,
      "timed_stats": [], "failed_flush_operations": 0, "account_invalid": false,
      "rd_total_time_ns": 0, "flush_operations": 0, "wr_operations": 0, "rd_merged": 0,
      "rd_bytes": 0, "invalid_flush_operations": 0, "account_failed": false,
      "rd_operations": 0, "invalid_wr_operations": 0, "invalid_rd_operations": 0 },
    "node-name": "#block080" },
  "stats": { "flush_total_time_ns": 0, "wr_highest_offset": 0, "wr_total_time_ns": 0,
    "failed_wr_operations": 0, "failed_rd_operations": 0, "wr_merged": 0, "wr_bytes": 0,
    "timed_stats": [], "failed_flush_operations": 0, "account_invalid": true,
    "rd_total_time_ns": 204236271, "flush_operations": 0, "wr_operations": 0,
    "rd_merged": 0, "rd_bytes": 30046628, "invalid_flush_operations": 0,
    "account_failed": true, "idle_time_ns": 18766797619, "rd_operations": 14680,
    "invalid_wr_operations": 0, "invalid_rd_operations": 0 },
  "node-name": "#block152" },
I also get all zeroes when I use 'query-nodes': true on a machine started with -drive.
Without these stats we unfortunately can't achieve feature parity.
Kevin, could you please have a look?
As the information you're interested in is device-level information, keep using query-nodes=false and identify which device it is for with the 'device' field (the result contains a 'node-name', but this is not suitable for identifying the device when there are two users of the same node).

Kevin
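Kevin's suggestion can be sketched as follows: key the query-nodes=false results by the 'device' field rather than by 'node-name'. The reply below is trimmed from the -drive example earlier in the thread to the fields that matter for identification (an assumption for brevity):

```python
import json

# Abbreviated sample of a query-nodes=false reply, trimmed from the
# -drive example in the thread.
reply = json.loads("""
[
  {"device": "drive-ide0-0-0",
   "node-name": "#block152",
   "stats": {"rd_bytes": 30046628, "rd_operations": 14680},
   "parent": {"node-name": "#block080",
              "stats": {"rd_bytes": 0, "rd_operations": 0}}}
]
""")

# Key entries by the 'device' field: 'node-name' is ambiguous when two
# users share the same node, 'device' is not.
by_device = {entry["device"]: entry["stats"] for entry in reply}
print(by_device["drive-ide0-0-0"]["rd_bytes"])   # 30046628
```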