On 3/14/22 6:48 PM, Daniel P. Berrangé wrote:
> On Mon, Mar 14, 2022 at 06:38:31PM +0100, Claudio Fontana wrote:
>> On 3/14/22 6:17 PM, Daniel P. Berrangé wrote:
>>> On Sat, Mar 12, 2022 at 05:30:01PM +0100, Claudio Fontana wrote:
>>>> the first user is the qemu driver,
>>>>
>>>> virsh save/restore would slow to a crawl with the default pipe size (64k).
>>>>
>>>> This improves the situation by 400%.
>>>>
>>>> Going through io_helper still seems to incur some penalty (~15%-ish)
>>>> compared with direct qemu migration to an nc socket redirected to a file.
>>>>
>>>> Signed-off-by: Claudio Fontana <cfontana(a)suse.de>
>>>> ---
>>>> src/qemu/qemu_driver.c | 6 +++---
>>>> src/qemu/qemu_saveimage.c | 11 ++++++-----
>>>> src/util/virfile.c | 12 ++++++++++++
>>>> src/util/virfile.h | 1 +
>>>> 4 files changed, 22 insertions(+), 8 deletions(-)
>>>>
>>>> Hello, I initially thought this to be a qemu performance issue,
>>>> so you can find the discussion about this in qemu-devel:
>>>>
>>>> "Re: bad virsh save /dev/null performance (600 MiB/s max)"
>>>>
>>>>
>>>> https://lists.gnu.org/archive/html/qemu-devel/2022-03/msg03142.html
>>>>
>>>> RFC since the idea still needs validating, and it is only lightly tested:
>>>>
>>>> save - about 400% benefit in throughput, getting around 20 Gbps to /dev/null,
>>>> and around 13 Gbps to a ramdisk.
>>>> By comparison, direct qemu migration to a nc socket is around 24 Gbps.
>>>>
>>>> restore - not tested, _should_ also benefit in the "bypass_cache" case
>>>> coredump - not tested, _should_ also benefit like for save
>>>>
>>>> Thanks for your comments and review,
>>>>
>>>> Claudio
>>>>
>>>>
>>>> diff --git a/src/qemu/qemu_driver.c b/src/qemu/qemu_driver.c
>>>> index c1b3bd8536..be248c1e92 100644
>>>> --- a/src/qemu/qemu_driver.c
>>>> +++ b/src/qemu/qemu_driver.c
>>>> @@ -3044,7 +3044,7 @@ doCoreDump(virQEMUDriver *driver,
>>>> virFileWrapperFd *wrapperFd = NULL;
>>>> int directFlag = 0;
>>>> bool needUnlink = false;
>>>> - unsigned int flags = VIR_FILE_WRAPPER_NON_BLOCKING;
>>>> + unsigned int wrapperFlags = VIR_FILE_WRAPPER_NON_BLOCKING | VIR_FILE_WRAPPER_BIG_PIPE;
>>>> const char *memory_dump_format = NULL;
>>>> g_autoptr(virQEMUDriverConfig) cfg = virQEMUDriverGetConfig(driver);
>>>> g_autoptr(virCommand) compressor = NULL;
>>>> @@ -3059,7 +3059,7 @@ doCoreDump(virQEMUDriver *driver,
>>>>
>>>> /* Create an empty file with appropriate ownership. */
>>>> if (dump_flags & VIR_DUMP_BYPASS_CACHE) {
>>>> - flags |= VIR_FILE_WRAPPER_BYPASS_CACHE;
>>>> + wrapperFlags |= VIR_FILE_WRAPPER_BYPASS_CACHE;
>>>> directFlag = virFileDirectFdFlag();
>>>> if (directFlag < 0) {
>>>> virReportError(VIR_ERR_OPERATION_FAILED, "%s",
>>>> @@ -3072,7 +3072,7 @@ doCoreDump(virQEMUDriver *driver,
>>>> &needUnlink)) < 0)
>>>> goto cleanup;
>>>>
>>>> - if (!(wrapperFd = virFileWrapperFdNew(&fd, path, flags)))
>>>> + if (!(wrapperFd = virFileWrapperFdNew(&fd, path, wrapperFlags)))
>>>> goto cleanup;
>>>>
>>>> if (dump_flags & VIR_DUMP_MEMORY_ONLY) {
>>>> diff --git a/src/qemu/qemu_saveimage.c b/src/qemu/qemu_saveimage.c
>>>> index c0139041eb..1b522a1542 100644
>>>> --- a/src/qemu/qemu_saveimage.c
>>>> +++ b/src/qemu/qemu_saveimage.c
>>>> @@ -267,7 +267,7 @@ qemuSaveImageCreate(virQEMUDriver *driver,
>>>> int fd = -1;
>>>> int directFlag = 0;
>>>> virFileWrapperFd *wrapperFd = NULL;
>>>> - unsigned int wrapperFlags = VIR_FILE_WRAPPER_NON_BLOCKING;
>>>> + unsigned int wrapperFlags = VIR_FILE_WRAPPER_NON_BLOCKING | VIR_FILE_WRAPPER_BIG_PIPE;
>>>>
>>>> /* Obtain the file handle. */
>>>> if ((flags & VIR_DOMAIN_SAVE_BYPASS_CACHE)) {
>>>> @@ -463,10 +463,11 @@ qemuSaveImageOpen(virQEMUDriver *driver,
>>>> if ((fd = qemuDomainOpenFile(cfg, NULL, path, oflags, NULL)) < 0)
>>>> return -1;
>>>>
>>>> - if (bypass_cache &&
>>>> - !(*wrapperFd = virFileWrapperFdNew(&fd, path,
>>>> - VIR_FILE_WRAPPER_BYPASS_CACHE)))
>>>> - return -1;
>>>> + if (bypass_cache) {
>>>> + unsigned int wrapperFlags = VIR_FILE_WRAPPER_BYPASS_CACHE | VIR_FILE_WRAPPER_BIG_PIPE;
>>>> + if (!(*wrapperFd = virFileWrapperFdNew(&fd, path, wrapperFlags)))
>>>> + return -1;
>>>> + }
>>>>
>>>> data = g_new0(virQEMUSaveData, 1);
>>>>
>>>> diff --git a/src/util/virfile.c b/src/util/virfile.c
>>>> index a04f888e06..fdacd17890 100644
>>>> --- a/src/util/virfile.c
>>>> +++ b/src/util/virfile.c
>>>> @@ -282,6 +282,18 @@ virFileWrapperFdNew(int *fd, const char *name, unsigned int flags)
>>>>
>>>> ret->cmd = virCommandNewArgList(iohelper_path, name, NULL);
>>>>
>>>> + if (flags & VIR_FILE_WRAPPER_BIG_PIPE) {
>>>> + /*
>>>> + * virsh save/restore would slow to a crawl with a default pipe size (usually 64k).
>>>> + * This improves the situation by 400%, although going through io_helper still incurs
>>>> + * a performance penalty compared with a direct qemu migration to a socket.
>>>> + */
>>>> + int pipe_sz, rv = virFileReadValueInt(&pipe_sz, "/proc/sys/fs/pipe-max-size");
>>>
>>> This is fine as an experiment but I don't think it is that safe
>>> to use in the real world. There could be a variety of reasons why
>>> an admin might have enlarged this value, and we shouldn't assume the
>>> max size is sensible for libvirt/QEMU to use.
>>>
>>> I very much suspect there are diminishing returns here in terms
>>> of buffer sizes.
>>>
>>> 64k is obviously too small, but 1 MB may be sufficiently large
>>> that the bottleneck is then elsewhere in our code. IOW, if the
>>> pipe max size is 100 MB, we shouldn't blindly use it. Can you
>>> do a few tests with varying sizes to see where a sensible
>>> tradeoff falls?
>>
>>
>> Hi Daniel,
>>
>> this is a very good point. Actually I see sharply diminishing returns beyond
>> the default pipe-max-size (1MB).
>>
>> The idea was that, beyond allowing a larger size, the admin could also have set
>> a _smaller_ pipe-max-size, in which case we want to use that; otherwise an
>> attempt to use 1MB would result in EPERM if the process does not have
>> CAP_SYS_RESOURCE or CAP_SYS_ADMIN.
>> I am not sure whether, when used with Kubevirt for example, CAP_SYS_RESOURCE
>> or CAP_SYS_ADMIN would be available...?
>>
>> So maybe one idea could be to use the minimum of /proc/sys/fs/pipe-max-size and,
>> for example, 1MB, but I will do more testing to see where the actual break
>> point is.
>
> That's reasonable.
>
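
To make the idea concrete, here is a rough sketch of the "minimum of pipe-max-size and
a fixed cap" approach discussed above, reusing virFileReadValueInt() as the patch does;
the constant name and the 1MB cap are placeholders, pending the measurements below.

/* Sketch of the clamping policy discussed above; BIG_PIPE_CAP is a
 * hypothetical placeholder, not an existing libvirt constant. */
#include "virfile.h"    /* libvirt-internal header for virFileReadValueInt() */

#define BIG_PIPE_CAP (1024 * 1024)

static int sketchPickPipeSize(void)
{
    int pipe_max = 0;

    /* Respect a pipe-max-size the admin may have lowered, so that the
     * later F_SETPIPE_SZ does not fail with EPERM when the process lacks
     * CAP_SYS_RESOURCE/CAP_SYS_ADMIN (e.g. possibly under Kubevirt). */
    if (virFileReadValueInt(&pipe_max, "/proc/sys/fs/pipe-max-size") < 0)
        return BIG_PIPE_CAP;

    return pipe_max < BIG_PIPE_CAP ? pipe_max : BIG_PIPE_CAP;
}

Whether 1MB really is the right cap is what the numbers below are meant to settle.
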
Just as an update: I am still running tests with various combinations and larger
VMs (to RAM, to a slow disk, and now to NVMe).
No clear winner yet. There already seems to be a significant benefit going from
1MB (my previous default) to 2MB,
but anything beyond 16MB does not seem to improve things at all.
I just need to do more testing, more runs.
Thanks,
Claudio

Current results show the following experimental average maximum throughput when
migrating to /dev/null, for each FdWrapper pipe size (as reported by QEMU QMP
"query-migrate"; tests repeated 5 times for each size).
VM size is 60G, with most of the memory effectively touched before migration by a
user application that allocates and fills all memory with pseudorandom data.

64K: 5200 Mbps (current situation)
128K: 5800 Mbps
256K: 20900 Mbps
512K: 21600 Mbps
1M: 22800 Mbps
2M: 22800 Mbps
4M: 22400 Mbps
8M: 22500 Mbps
16M: 22800 Mbps
32M: 22900 Mbps
64M: 22900 Mbps
128M: 22800 Mbps

The above is the throughput out of patched libvirt with multiple pipe sizes for
the FdWrapper.

As for the theoretical limit of the libvirt architecture,
I ran a qemu migration directly, issuing the appropriate QMP commands, setting the
same migration parameters as libvirt does, and then migrating to a socket netcatted
to /dev/null via
{ "execute": "migrate", "arguments": { "uri": "unix:///tmp/netcat.sock" } } :

QMP: 37000 Mbps
---
So although the pipe size improves things (in particular, the big jump happens at
the 256K size, though 1M seems a very good value),
there is still a second bottleneck in there somewhere that accounts for a loss of
~14200 Mbps in throughput.
Thanks,
Claudio