Hi,
I'd like to resurrect this thread:
https://www.redhat.com/archives/libvir-list/2017-February/msg01084.html
Recent benchmarks have demonstrated that using large values for
poll-max-ns significantly decreases the latency perceived in the guest,
at the expense of the iothread using more CPU:
- virtio-blk+iothread, 16 vCPUs, null_blk=200us and default poll-max-ns

randread: (groupid=0, jobs=4): err= 0: pid=1314: Thu Feb 15 06:24:06 2018
   read: IOPS=15.0k, BW=58.7MiB/s (61.6MB/s)(587MiB/10001msec)
    clat (usec): min=98, max=2016, avg=257.98, stdev=22.91
     lat (usec): min=100, max=2017, avg=259.37, stdev=22.99

randread: (groupid=0, jobs=8): err= 0: pid=1359: Thu Feb 15 06:25:03 2018
   read: IOPS=29.8k, BW=117MiB/s (122MB/s)(1166MiB/10002msec)
    clat (usec): min=33, max=3818, avg=260.92, stdev=32.02
     lat (usec): min=34, max=3819, avg=262.14, stdev=32.02

randread: (groupid=0, jobs=16): err= 0: pid=1339: Thu Feb 15 06:24:41 2018
   read: IOPS=55.9k, BW=218MiB/s (229MB/s)(2182MiB/10002msec)
    clat (usec): min=37, max=3390, avg=279.19, stdev=34.53
     lat (usec): min=38, max=3391, avg=280.41, stdev=34.54
- virtio-blk+iothread, 16 vCPUs, null_blk=200us and poll-max-ns=1000000

randread: (groupid=0, jobs=4): err= 0: pid=1361: Thu Feb 15 06:31:47 2018
   read: IOPS=16.2k, BW=63.3MiB/s (66.3MB/s)(633MiB/10001msec)
    clat (usec): min=72, max=2790, avg=240.12, stdev=22.28
     lat (usec): min=73, max=2791, avg=241.30, stdev=22.28

randread: (groupid=0, jobs=8): err= 0: pid=1342: Thu Feb 15 06:30:51 2018
   read: IOPS=32.1k, BW=125MiB/s (132MB/s)(1255MiB/10001msec)
    clat (usec): min=30, max=5474, avg=242.14, stdev=46.24
     lat (usec): min=31, max=5475, avg=243.33, stdev=46.25

randread: (groupid=0, jobs=16): err= 0: pid=1324: Thu Feb 15 06:30:11 2018
   read: IOPS=61.8k, BW=241MiB/s (253MB/s)(2413MiB/10002msec)
    clat (usec): min=26, max=2931, avg=251.89, stdev=38.37
     lat (usec): min=27, max=2932, avg=253.11, stdev=38.38
I think this trade-off should be the user's decision. Layered products
may consider abstracting this configuration behind simplified VM tuning
attributes.
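
For context, QEMU already exposes this knob as a property of the
iothread object, which is presumably what a libvirt XML attribute would
map onto. A sketch of how it can be set today (the object id
"iothread0" is just an example name):

    # QEMU command line: create an iothread with adaptive polling
    # capped at 1 ms (1000000 ns)
    -object iothread,id=iothread0,poll-max-ns=1000000

    # The same property can also be changed at runtime via QMP:
    { "execute": "qom-set",
      "arguments": { "path": "/objects/iothread0",
                     "property": "poll-max-ns",
                     "value": 1000000 } }

The iothread object also has poll-grow and poll-shrink properties
controlling how the adaptive algorithm adjusts the polling interval
within that cap.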
Sergio.