* Claudio Fontana (cfontana(a)suse.de) wrote:
On 3/17/22 2:41 PM, Claudio Fontana wrote:
> On 3/17/22 11:25 AM, Daniel P. Berrangé wrote:
>> On Thu, Mar 17, 2022 at 11:12:11AM +0100, Claudio Fontana wrote:
>>> On 3/16/22 1:17 PM, Claudio Fontana wrote:
>>>> On 3/14/22 6:48 PM, Daniel P. Berrangé wrote:
>>>>> On Mon, Mar 14, 2022 at 06:38:31PM +0100, Claudio Fontana wrote:
>>>>>> On 3/14/22 6:17 PM, Daniel P. Berrangé wrote:
>>>>>>> On Sat, Mar 12, 2022 at 05:30:01PM +0100, Claudio Fontana wrote:
>>>>>>>> the first user is the qemu driver,
>>>>>>>>
>>>>>>>> virsh save/resume would slow to a crawl with a default pipe
>>>>>>>> size (64k).
>>>>>>>>
>>>>>>>> This improves the situation by 400%.
>>>>>>>>
>>>>>>>> Going through io_helper still seems to incur some penalty
>>>>>>>> (~15%-ish) compared with direct qemu migration to an nc socket
>>>>>>>> writing to a file.
>>>>>>>>
>>>>>>>> Signed-off-by: Claudio Fontana <cfontana(a)suse.de>
>>>>>>>> ---
>>>>>>>>  src/qemu/qemu_driver.c    |  6 +++---
>>>>>>>>  src/qemu/qemu_saveimage.c | 11 ++++++-----
>>>>>>>>  src/util/virfile.c        | 12 ++++++++++++
>>>>>>>>  src/util/virfile.h        |  1 +
>>>>>>>>  4 files changed, 22 insertions(+), 8 deletions(-)
>>>>>>>>
>>>>>>>> Hello, I initially thought this to be a qemu performance issue,
>>>>>>>> so you can find the discussion about this in qemu-devel:
>>>>>>>>
>>>>>>>> "Re: bad virsh save /dev/null performance (600 MiB/s max)"
>>>>>>>>
>>>>>>>> https://lists.gnu.org/archive/html/qemu-devel/2022-03/msg03142.html
>>
>>> Current results show these experimental average maximum throughputs
>>> migrating to /dev/null for each FdWrapper pipe size (as per QEMU QMP
>>> "query-migrate"; tests repeated 5 times each).
>>> VM size is 60G, with most of the memory effectively touched before
>>> migration by a user application allocating and touching all memory
>>> with pseudorandom data.
>>>
>>>  64K:  5200 Mbps (current situation)
>>> 128K:  5800 Mbps
>>> 256K: 20900 Mbps
>>> 512K: 21600 Mbps
>>>   1M: 22800 Mbps
>>>   2M: 22800 Mbps
>>>   4M: 22400 Mbps
>>>   8M: 22500 Mbps
>>>  16M: 22800 Mbps
>>>  32M: 22900 Mbps
>>>  64M: 22900 Mbps
>>> 128M: 22800 Mbps
>>>
>>> The above is the throughput out of patched libvirt with multiple pipe
>>> sizes for the FdWrapper.
>>
>> Ok, it's bouncing around with noise after 1 MB. So I'd suggest that
>> libvirt attempt to raise the pipe limit to 1 MB by default, but
>> not try to go higher.
>>
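As an aside, for anyone wanting to experiment with this outside libvirt:
on Linux (>= 2.6.35) growing a pipe is a single fcntl. A minimal
standalone sketch, with a hypothetical helper name, not the actual patch:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* grow_pipe() is a hypothetical helper, not libvirt code: ask the
 * kernel to enlarge the pipe buffer.  The request is rounded up to a
 * power of two; unprivileged processes are capped at
 * /proc/sys/fs/pipe-max-size (1 MiB by default). */
static int grow_pipe(int fd, int bytes)
{
    int actual = fcntl(fd, F_SETPIPE_SZ, bytes);
    if (actual < 0)
        perror("fcntl(F_SETPIPE_SZ)");   /* caller keeps the 64k default */
    return actual;                       /* size actually granted, or -1 */
}

int main(void)
{
    int fds[2];
    if (pipe(fds) < 0)
        return 1;
    printf("granted: %d bytes\n", grow_pipe(fds[1], 1024 * 1024));
    close(fds[0]);
    close(fds[1]);
    return 0;
}

Conveniently, the unprivileged cap /proc/sys/fs/pipe-max-size defaults
to 1 MiB, so a 1 MB default would not need any sysctl tuning.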
>>> As for the theoretical limit for the libvirt architecture,
>>> I ran a qemu migration directly issuing the appropriate QMP
>>> commands, setting the same migration parameters as per libvirt,
>>> and then migrating to a socket netcatted to /dev/null via
>>> {"execute": "migrate", "arguments": {
"uri", "unix:///tmp/netcat.sock" } } :
>>>
>>> QMP: 37000 Mbps
>>
>>> So although the pipe size improves things (in particular the large
>>> jump is at the 256K size, though 1M seems a very good value),
>>> there is still a second bottleneck in there somewhere that
>>> accounts for a loss of ~14200 Mbps in throughput.
Interesting addition: I tested quickly on a system with faster cpus and
larger VM sizes, up to 200GB, and the difference in throughput libvirt
vs qemu is basically the same, ~14500 Mbps:

~50000 Mbps qemu to netcat socket to /dev/null
~35500 Mbps virsh save to /dev/null

Seems it is not proportional to cpu speed by the looks of it (not a
totally fair comparison, because the VM sizes are different). It might
be limited by RAM or cache bandwidth though, given the extra copy.
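That's cheap to sanity-check: the unexplained delta (~14500 Mbps, i.e.
roughly 1.8 GBytes/s) should then be in the same ballpark as the cost
of one extra copy of the stream. A rough probe, a sketch rather than a
rigorous benchmark:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

/* Copy 1 GiB (far larger than cache) a few times and report the
 * achieved copy bandwidth, to compare against the ~1.8 GB/s delta. */
int main(void)
{
    size_t len = 1UL << 30;
    char *src = malloc(len), *dst = malloc(len);
    if (!src || !dst)
        return 1;
    memset(src, 0x5a, len);   /* fault all pages in before timing */
    memset(dst, 0xa5, len);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < 8; i++)
        memcpy(dst, src, len);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%.1f GB/s\n", 8.0 * len / sec / 1e9);
    return 0;
}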
Dave
Ciao,
C
>>
>> In the above tests with libvirt, were you using the
>> --bypass-cache flag or not ?
>
> No, I was not. Tests with a ramdisk did not show a notable difference for me,
>
> but tests with /dev/null were not possible, since the command line is not accepted:
>
> # virsh save centos7 /dev/null
> Domain 'centos7' saved to /dev/null
> [OK]
>
> # virsh save centos7 /dev/null --bypass-cache
> error: Failed to save domain 'centos7' to /dev/null
> error: Failed to create file '/dev/null': Invalid argument
>
>
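My guess is that the EINVAL is O_DIRECT itself: as far as I know, Linux
rejects O_DIRECT at open() time for files whose mapping has no
direct-I/O support, and /dev/null is one of those. Easy to check in
isolation:

#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>

/* Expectation (an assumption, worth verifying): open fails with
 * EINVAL, matching the virsh --bypass-cache error above. */
int main(void)
{
    int fd = open("/dev/null", O_WRONLY | O_DIRECT);
    if (fd < 0)
        printf("open(/dev/null, O_DIRECT): %s\n", strerror(errno));
    else
        printf("unexpectedly succeeded (fd=%d)\n", fd);
    return 0;
}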
>>
>> Hopefully use of O_DIRECT doesn't make a difference for
>> /dev/null, since the I/O is being immediately thrown
>> away and so ought to never go into I/O cache.
>>
>> In terms of the comparison, we still have libvirt iohelper
>> giving QEMU a pipe, while your test above gives QEMU a
>> UNIX socket.
>>
>> So I still wonder if the delta is caused by the pipe vs socket
>> difference, as opposed to netcat vs libvirt iohelper code.
>
> I'll look into this aspect, thanks!
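One way to isolate the pipe vs socket variable without libvirt in the
loop is to pump the same amount of data through a pipe and through a
socketpair under otherwise identical conditions. A rough sketch
(assuming Linux, blocking fds, and that full-sized writes complete,
which is good enough for a ballpark number):

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <time.h>
#include <unistd.h>

#define CHUNK (1 << 20)          /* 1 MiB per write */
#define TOTAL (8UL << 30)        /* move 8 GiB end to end */

static char buf[CHUNK];

/* Fork a reader that drains fds[0] and discards the data; write TOTAL
 * bytes into fds[1]; report end-to-end throughput in Gbit/s. */
static void pump(int fds[2], const char *label)
{
    pid_t pid = fork();
    if (pid == 0) {              /* child: the "netcat" side */
        close(fds[1]);
        while (read(fds[0], buf, sizeof(buf)) > 0)
            ;
        _exit(0);
    }
    close(fds[0]);

    struct timespec t0, t1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t done = 0; done < TOTAL; done += CHUNK)
        if (write(fds[1], buf, CHUNK) < 0)
            break;
    close(fds[1]);               /* EOF lets the reader exit */
    waitpid(pid, NULL, 0);
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double sec = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    printf("%-12s %.1f Gbit/s\n", label, 8.0 * TOTAL / sec / 1e9);
}

int main(void)
{
    int p[2], s[2];

    if (pipe(p) == 0) {
        fcntl(p[1], F_SETPIPE_SZ, 1024 * 1024);   /* match patched libvirt */
        pump(p, "pipe:");
    }
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, s) == 0)
        pump(s, "socketpair:");
    return 0;
}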
--
Dr. David Alan Gilbert / dgilbert(a)redhat.com / Manchester, UK