[libvirt] strange behavior when using iotune

Hi. I'm trying to throttle a disk via total_iops_sec with libvirt 1.2.10 and qemu 2.0.0.

First, when I start the VM with a predefined <total_iops_sec>5000</total_iops_sec>, I measure around 11000 iops (dd if=/dev/sda bs=512K of=/dev/null). Then I try to set --total_iops_sec 10 via virsh to minimize I/O, but nothing changes. After that I reboot the VM with <total_iops_sec>10</total_iops_sec> and get very slow I/O, which is expected, but libvirt says I have around 600 iops.

My questions are: why can't I change total_iops_sec at run time, and why don't the values I enter match the values reported by libvirt?

Thanks for any suggestions and any help.

-- Vasiliy Tolstov, e-mail: v.tolstov@selfip.ru jabber: vase@selfip.ru
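[For reference, a run-time change is normally done with virsh blkdeviotune against the running domain; a minimal sketch, assuming the domain is 11151 and the disk target is sda (both are assumptions, adjust to your setup), and that --live is what makes the change apply to the running guest:

# show the current throttling values for the assumed disk target "sda"
virsh blkdeviotune 11151 sda
# apply the new limit to the running guest; older virsh builds spell the
# option --total_iops_sec instead of --total-iops-sec
virsh blkdeviotune 11151 sda --total-iops-sec 10 --live

If the original attempt instead went through a mechanism that only touches the persistent XML (for example virsh edit on a running domain), the new value would only take effect after the next boot, which would match the behaviour described above.]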

On Mon, Nov 24, 2014 at 3:02 PM, Vasiliy Tolstov <v.tolstov@selfip.ru> wrote:
Hi. I'm trying to throttle a disk via total_iops_sec with libvirt 1.2.10 and qemu 2.0.0.
First, when I start the VM with a predefined <total_iops_sec>5000</total_iops_sec>, I measure around 11000 iops (dd if=/dev/sda bs=512K of=/dev/null). Then I try to set --total_iops_sec 10 via virsh to minimize I/O, but nothing changes. After that I reboot the VM with <total_iops_sec>10</total_iops_sec> and get very slow I/O, which is expected, but libvirt says I have around 600 iops.
My questions are: why can't I change total_iops_sec at run time, and why don't the values I enter match the values reported by libvirt?
Thanks for any suggestions and any help.
-- Vasiliy Tolstov, e-mail: v.tolstov@selfip.ru jabber: vase@selfip.ru
Hello Vasiliy, can you please check the actual values via qemu-monitor-command domid '{ "execute": "query-block"}', just to be sure whether the potential problem can be pinned on the emulator itself?

2014-11-24 16:57 GMT+03:00 Andrey Korolyov <andrey@xdel.ru>:
Hello Vasiliy,
can you please check the actual values via qemu-monitor-command domid '{ "execute": "query-block"}', just to be sure whether the potential problem can be pinned on the emulator itself?
virsh qemu-monitor-command 11151 '{ "execute": "query-block"}' | jq '.'
{
  "return": [
    {
      "io-status": "ok",
      "device": "drive-scsi0-0-0-0",
      "locked": false,
      "removable": false,
      "inserted": {
        "iops_rd": 0,
        "image": {
          "virtual-size": 21474836480,
          "filename": "/dev/vg3/11151",
          "format": "raw",
          "actual-size": 0,
          "dirty-flag": false
        },
        "iops_wr": 0,
        "ro": false,
        "backing_file_depth": 0,
        "drv": "raw",
        "iops": 5000,
        "bps_wr": 0,
        "encrypted": false,
        "bps": 0,
        "bps_rd": 0,
        "iops_max": 500,
        "file": "/dev/vg3/11151",
        "encryption_key_missing": false
      },
      "type": "unknown"
    }
  ],
  "id": "libvirt-22"
}

I used this site: http://www.ssdfreaks.com/content/599/how-to-convert-mbps-to-iops-or-calculat...

root@11151:~# dd if=/dev/sda bs=4K of=/dev/null
5242880+0 records in
5242880+0 records out
21474836480 bytes (21 GB) copied, 45.2557 s, 475 MB/s

So with a 5000 iops limit I should only get around 19-20 MB/s.

-- Vasiliy Tolstov, e-mail: v.tolstov@selfip.ru jabber: vase@selfip.ru
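[As a side note, the same query can be narrowed down to just the throttling-related fields with a short jq filter; a minimal sketch, again assuming domain id 11151:

# print only the per-device throttle settings reported by QEMU
virsh qemu-monitor-command 11151 '{ "execute": "query-block" }' \
  | jq '.return[] | {device, iops: .inserted.iops, iops_rd: .inserted.iops_rd, iops_wr: .inserted.iops_wr, bps: .inserted.bps}'
]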

On Mon, Nov 24, 2014 at 5:09 PM, Vasiliy Tolstov <v.tolstov@selfip.ru> wrote:
2014-11-24 16:57 GMT+03:00 Andrey Korolyov <andrey@xdel.ru>:
Hello Vasiliy,
can you please check the actual values via qemu-monitor-command domid '{ "execute": "query-block"}', just to be sure whether the potential problem can be pinned on the emulator itself?
virsh qemu-monitor-command 11151 '{ "execute": "query-block"}' | jq '.'
{
  "return": [
    {
      "io-status": "ok",
      "device": "drive-scsi0-0-0-0",
      "locked": false,
      "removable": false,
      "inserted": {
        "iops_rd": 0,
        "image": {
          "virtual-size": 21474836480,
          "filename": "/dev/vg3/11151",
          "format": "raw",
          "actual-size": 0,
          "dirty-flag": false
        },
        "iops_wr": 0,
        "ro": false,
        "backing_file_depth": 0,
        "drv": "raw",
        "iops": 5000,
        "bps_wr": 0,
        "encrypted": false,
        "bps": 0,
        "bps_rd": 0,
        "iops_max": 500,
        "file": "/dev/vg3/11151",
        "encryption_key_missing": false
      },
      "type": "unknown"
    }
  ],
  "id": "libvirt-22"
}

I used this site: http://www.ssdfreaks.com/content/599/how-to-convert-mbps-to-iops-or-calculat...

root@11151:~# dd if=/dev/sda bs=4K of=/dev/null
5242880+0 records in
5242880+0 records out
21474836480 bytes (21 GB) copied, 45.2557 s, 475 MB/s

So with a 5000 iops limit I should only get around 19-20 MB/s.
-- Vasiliy Tolstov, e-mail: v.tolstov@selfip.ru jabber: vase@selfip.ru
I am not sure how well dd results can be interpreted against the new leaky-bucket throttling mechanism; its numbers can be a little confusing even for fio (in a long-running test, every operation above the limit ends up with ~250 ms latency, which drags down the scores in popular benchmarks like UnixBench). Also, without direct/sync options these results are almost meaningless. Maybe fio with direct=1 (or fsync=1 for a filesystem) will give more appropriate numbers in your case.
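[To illustrate, a minimal fio invocation along those lines, assuming the device seen inside the guest is /dev/sda and the libaio engine is available (a read-only job, so it does not touch the data on the disk):

# random 4 KiB reads with O_DIRECT, so the page cache does not hide the throttle
fio --name=iops-check --filename=/dev/sda --rw=randread --bs=4k \
    --direct=1 --ioengine=libaio --iodepth=8 --runtime=30 --time_based

With a total_iops_sec limit of 5000, the reported read IOPS should settle at roughly that value.]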

2014-11-24 17:18 GMT+03:00 Andrey Korolyov <andrey@xdel.ru>:
I am not sure how well dd results can be interpreted against the new leaky-bucket throttling mechanism; its numbers can be a little confusing even for fio (in a long-running test, every operation above the limit ends up with ~250 ms latency, which drags down the scores in popular benchmarks like UnixBench). Also, without direct/sync options these results are almost meaningless. Maybe fio with direct=1 (or fsync=1 for a filesystem) will give more appropriate numbers in your case.
My fault, I forgot to add iflag=direct to dd. Now everything is fine: I get around 20 MB/s, which corresponds to 5000 iops. Thanks. -- Vasiliy Tolstov, e-mail: v.tolstov@selfip.ru jabber: vase@selfip.ru
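[For completeness, the corrected command and the back-of-the-envelope check, using the same device path as earlier in the thread:

# O_DIRECT reads, so every request actually hits the throttled virtual disk
dd if=/dev/sda bs=4K of=/dev/null iflag=direct
# sanity check: 5000 iops * 4 KiB per request ~= 20 MB/s, matching the observed rate
]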