On 3/17/22 4:03 PM, Dr. David Alan Gilbert wrote:
> * Claudio Fontana (cfontana(a)suse.de) wrote:
>> On 3/17/22 2:41 PM, Claudio Fontana wrote:
>>> On 3/17/22 11:25 AM, Daniel P. Berrangé wrote:
>>>> On Thu, Mar 17, 2022 at 11:12:11AM +0100, Claudio Fontana wrote:
>>>>> On 3/16/22 1:17 PM, Claudio Fontana wrote:
>>>>>> On 3/14/22 6:48 PM, Daniel P. Berrangé wrote:
>>>>>>> On Mon, Mar 14, 2022 at 06:38:31PM +0100, Claudio Fontana wrote:
>>>>>>>> On 3/14/22 6:17 PM, Daniel P. Berrangé wrote:
>>>>>>>>> On Sat, Mar 12, 2022 at 05:30:01PM +0100, Claudio Fontana wrote:
>>>>>>>>>> the first user is the qemu driver,
>>>>>>>>>>
>>>>>>>>>> virsh save/resume would slow to a crawl with a default pipe size (64k).
>>>>>>>>>>
>>>>>>>>>> This improves the situation by 400%.
>>>>>>>>>>
>>>>>>>>>> Going through io_helper still seems to incur in some penalty (~15%-ish)
>>>>>>>>>> compared with direct qemu migration to a nc socket to a file.
>>>>>>>>>>
>>>>>>>>>> Signed-off-by: Claudio Fontana <cfontana(a)suse.de>
>>>>>>>>>> ---
>>>>>>>>>>  src/qemu/qemu_driver.c    |  6 +++---
>>>>>>>>>>  src/qemu/qemu_saveimage.c | 11 ++++++-----
>>>>>>>>>>  src/util/virfile.c        | 12 ++++++++++++
>>>>>>>>>>  src/util/virfile.h        |  1 +
>>>>>>>>>>  4 files changed, 22 insertions(+), 8 deletions(-)
>>>>>>>>>>
>>>>>>>>>> Hello, I initially thought this to be a qemu performance issue,
>>>>>>>>>> so you can find the discussion about this in qemu-devel:
>>>>>>>>>>
>>>>>>>>>> "Re: bad virsh save /dev/null performance (600 MiB/s max)"
>>>>>>>>>>
>>>>>>>>>> https://lists.gnu.org/archive/html/qemu-devel/2022-03/msg03142.html
>>>>
>>>>
>>>>> Current results show these experimental averages maximum throughput
>>>>> migrating to /dev/null per each FdWrapper Pipe Size (as per QEMU QMP
>>>>> "query-migrate", tests repeated 5 times for each).
>>>>> VM Size is 60G, most of the memory effectively touched before migration,
>>>>> through user application allocating and touching all memory with
>>>>> pseudorandom data.
>>>>>
>>>>> 64K:   5200 Mbps (current situation)
>>>>> 128K:  5800 Mbps
>>>>> 256K:  20900 Mbps
>>>>> 512K:  21600 Mbps
>>>>> 1M:    22800 Mbps
>>>>> 2M:    22800 Mbps
>>>>> 4M:    22400 Mbps
>>>>> 8M:    22500 Mbps
>>>>> 16M:   22800 Mbps
>>>>> 32M:   22900 Mbps
>>>>> 64M:   22900 Mbps
>>>>> 128M:  22800 Mbps
>>>>>
>>>>> This above is the throughput out of patched libvirt with multiple Pipe Sizes for the FDWrapper.
>>>>
>>>> Ok, its bouncing around with noise after 1 MB. So I'd suggest that
>>>> libvirt attempt to raise the pipe limit to 1 MB by default, but
>>>> not try to go higher.
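
As a reference point, and purely as an illustration on my side rather than the actual libvirt change: on Linux the pipe capacity can be raised with the F_SETPIPE_SZ fcntl, roughly as below. The 1 MiB constant is just the default suggested above; the kernel may round the value up, and unprivileged processes are capped by /proc/sys/fs/pipe-max-size.

/* Minimal standalone sketch, not the patch itself: grow a pipe buffer
 * to 1 MiB with the Linux-specific F_SETPIPE_SZ fcntl. */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static int set_pipe_size(int fd, int size)
{
    int granted = fcntl(fd, F_SETPIPE_SZ, size);
    if (granted < 0)
        perror("fcntl(F_SETPIPE_SZ)");
    return granted;   /* on success: the capacity actually granted */
}

int main(void)
{
    int p[2];
    if (pipe(p) < 0)
        return 1;
    printf("pipe capacity now: %d bytes\n", set_pipe_size(p[1], 1024 * 1024));
    close(p[0]);
    close(p[1]);
    return 0;
}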

>>>>
>>>>> As for the theoretical limit for the libvirt architecture,
>>>>> I ran a qemu migration directly issuing the appropriate QMP
>>>>> commands, setting the same migration parameters as per libvirt,
>>>>> and then migrating to a socket netcatted to /dev/null via
>>>>> {"execute": "migrate", "arguments": { "uri", "unix:///tmp/netcat.sock" } } :
>>>>>
>>>>> QMP: 37000 Mbps
>>>>
>>>>> So although the Pipe size improves things (in particular the
>>>>> large jump is for the 256K size, although 1M seems a very good value),
>>>>> there is still a second bottleneck in there somewhere that
>>>>> accounts for a loss of ~14200 Mbps in throughput.
>>
>>
>> Interesting addition: I tested quickly on a system with faster cpus and larger VM sizes, up to 200GB,
>> and the difference in throughput libvirt vs qemu is basically the same ~14500 Mbps.
>>
>> ~50000 mbps qemu to netcat socket to /dev/null
>> ~35500 mbps virsh save to /dev/null
>>
>> Seems it is not proportional to cpu speed by the looks of it (not a totally fair comparison because the VM sizes are different).
>
> It might be closer to RAM or cache bandwidth limited though; for an extra copy.

I was thinking about sendfile(2) in iohelper, but that probably can't work, as the input fd is a socket; I am getting EINVAL.
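
My understanding (an assumption on my part, not something I verified in the iohelper code) is that sendfile(2) insists on a mmap-capable regular file as its input fd, which is why a socket or pipe as in_fd returns EINVAL, while splice(2) is the variant that accepts a pipe on one side. A rough sketch of what a splice-based copy loop might look like:

/* Illustrative sketch only (not the iohelper code): sendfile(2) needs a
 * mmap-capable regular file as in_fd, hence EINVAL when the source is a
 * socket or pipe.  splice(2) accepts a pipe on one side, so copying from
 * a pipe to the output fd could look roughly like this. */
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

static int copy_pipe_to_fd(int pipe_fd, int out_fd)
{
    for (;;) {
        /* offsets must be NULL for the pipe end */
        ssize_t n = splice(pipe_fd, NULL, out_fd, NULL,
                           1024 * 1024, SPLICE_F_MOVE);
        if (n == 0)
            return 0;           /* writer closed the pipe */
        if (n < 0) {
            if (errno == EINTR)
                continue;
            perror("splice");
            return -1;
        }
    }
}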

One thing that I noticed is:

commit afe6e58aedcd5e27ea16184fed90b338569bd042
Author: Jiri Denemark <jdenemar(a)redhat.com>
Date:   Mon Feb 6 14:40:48 2012 +0100

    util: Generalize virFileDirectFd

    virFileDirectFd was used for accessing files opened with O_DIRECT using
    libvirt_iohelper. We will want to use the helper for accessing files
    regardless on O_DIRECT and thus virFileDirectFd was generalized and
    renamed to virFileWrapperFd.

And in particular the comment in src/util/virFile.c:

/* XXX support posix_fadvise rather than O_DIRECT, if the kernel support
* for that is decent enough. In that case, we will also need to
* explicitly support VIR_FILE_WRAPPER_NON_BLOCKING since
* VIR_FILE_WRAPPER_BYPASS_CACHE alone will no longer require spawning
* iohelper.
*/
by Jiri Denemark.
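
For what it's worth, my reading of that comment (an assumption about the intent, not code taken from libvirt) is that the posix_fadvise() route would mean writing through the page cache normally and then telling the kernel the data will not be reused, roughly:

/* Sketch of the posix_fadvise() idea from the XXX comment above: write
 * through the page cache, then ask the kernel to drop the pages, which
 * avoids O_DIRECT's alignment constraints (and possibly the iohelper). */
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static int drop_written_pages(int fd)
{
    /* flush dirty pages first; DONTNEED only drops clean pages */
    if (fdatasync(fd) < 0) {
        perror("fdatasync");
        return -1;
    }

    /* len == 0 means "to the end of the file" */
    int err = posix_fadvise(fd, 0, 0, POSIX_FADV_DONTNEED);
    if (err != 0) {
        fprintf(stderr, "posix_fadvise: %s\n", strerror(err));
        return -1;
    }
    return 0;
}

POSIX_FADV_DONTNEED is used in the sketch only because, as far as I know, POSIX_FADV_NOREUSE has no effect on Linux, which is what questions 2 and 4 below are about.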

I have lots of questions here, and I tried to involve Jiri and Andrea Righi, who a long time ago proposed a POSIX_FADV_NOREUSE implementation:

1) What is the reason iohelper was introduced?

2) Was Jiri's comment about the missing Linux implementation of POSIX_FADV_NOREUSE?

3) If using O_DIRECT is the only reason for iohelper to exist (...?), would replacing it with posix_fadvise remove the need for iohelper?

4) What has stopped Andrea's or another POSIX_FADV_NOREUSE implementation in the kernel?

Lots of questions...
Thanks for all your insight,
Claudio

> Dave
>
>> Ciao,
>>
>> C
>>
>>>>
>>>> In the above tests with libvirt, were you using the
>>>> --bypass-cache flag or not ?
>>>
>>> No, I do not. Tests with ramdisk did not show a notable difference for me,
>>>
>>> but tests with /dev/null were not possible, since the command line is not accepted:
>>>
>>> # virsh save centos7 /dev/null
>>> Domain 'centos7' saved to /dev/null
>>> [OK]
>>>
>>> # virsh save centos7 /dev/null --bypass-cache
>>> error: Failed to save domain 'centos7' to /dev/null
>>> error: Failed to create file '/dev/null': Invalid argument
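
An educated guess rather than something I traced through the code: the EINVAL most likely comes from the open() itself, since /dev/null is a character device and does not accept O_DIRECT. A quick standalone check:

/* quick standalone check, not libvirt code */
#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>

int main(void)
{
    int fd = open("/dev/null", O_WRONLY | O_DIRECT);
    if (fd < 0)
        perror("open(/dev/null, O_DIRECT)");  /* expected: Invalid argument */
    return 0;
}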
>>>
>>>
>>>>
>>>> Hopefully use of O_DIRECT doesn't make a difference for
>>>> /dev/null, since the I/O is being immediately thrown
>>>> away and so ought to never go into I/O cache.
>>>>
>>>> In terms of the comparison, we still have libvirt iohelper
>>>> giving QEMU a pipe, while your test above gives QEMU a
>>>> UNIX socket.
>>>>
>>>> So I still wonder if the delta is caused by the pipe vs socket
>>>> difference, as opposed to netcat vs libvirt iohelper code.
>>>
>>> I'll look into this aspect, thanks!
>>