Hi Peter, Lei, Jinpu,
On 2024/5/8 0:28, Peter Xu wrote:
On Tue, May 07, 2024 at 01:50:43AM +0000, Gonglei (Arei) wrote:
> Hello,
>
>> -----Original Message-----
>> From: Peter Xu [mailto:peterx@redhat.com]
>> Sent: Monday, May 6, 2024 11:18 PM
>> To: Gonglei (Arei) <arei.gonglei(a)huawei.com>
>> Cc: Daniel P. Berrangé <berrange(a)redhat.com>; Markus Armbruster
>> <armbru(a)redhat.com>; Michael Galaxy <mgalaxy(a)akamai.com>; Yu Zhang
>> <yu.zhang(a)ionos.com>; Zhijian Li (Fujitsu) <lizhijian(a)fujitsu.com>;
>> Jinpu Wang
>> <jinpu.wang(a)ionos.com>; Elmar Gerdes <elmar.gerdes(a)ionos.com>;
>> qemu-devel(a)nongnu.org; Yuval Shaia <yuval.shaia.ml(a)gmail.com>; Kevin Wolf
>> <kwolf(a)redhat.com>; Prasanna Kumar Kalever
>> <prasanna.kalever(a)redhat.com>; Cornelia Huck <cohuck(a)redhat.com>;
>> Michael Roth <michael.roth(a)amd.com>; Prasanna Kumar Kalever
>> <prasanna4324(a)gmail.com>; integration(a)gluster.org; Paolo Bonzini
>> <pbonzini(a)redhat.com>; qemu-block(a)nongnu.org; devel(a)lists.libvirt.org;
>> Hanna Reitz <hreitz(a)redhat.com>; Michael S. Tsirkin
>> <mst(a)redhat.com>;
>> Thomas Huth <thuth(a)redhat.com>; Eric Blake <eblake(a)redhat.com>; Song
>> Gao <gaosong(a)loongson.cn>; Marc-André Lureau
>> <marcandre.lureau(a)redhat.com>; Alex Bennée <alex.bennee(a)linaro.org>;
>> Wainer dos Santos Moschetta <wainersm(a)redhat.com>; Beraldo Leal
>> <bleal(a)redhat.com>; Pannengyuan <pannengyuan(a)huawei.com>;
>> Xiexiangyou <xiexiangyou(a)huawei.com>
>> Subject: Re: [PATCH-for-9.1 v2 2/3] migration: Remove RDMA protocol handling
>>
>> On Mon, May 06, 2024 at 02:06:28AM +0000, Gonglei (Arei) wrote:
>>> Hi, Peter
>>
>> Hey, Lei,
>>
>> Happy to see you around again after years.
>>
> Haha, me too.
>
>>> RDMA features high bandwidth, low latency (in non-blocking lossless
>>> network), and direct remote memory access by bypassing the CPU (As you
>>> know, CPU resources are expensive for cloud vendors, which is one of
>>> the reasons why we introduced offload cards.), which TCP does not have.
>>
>> It's another cost to use offload cards, vs. preparing more CPU resources?
>>
> Software and hardware offload converged architecture is the way to go for all cloud vendors
> (including comprehensive benefits in terms of performance, cost, security, and
> innovation speed); it's not just a matter of adding the resource of a DPU card.
>
>>> RDMA is very useful in some scenarios where fast live migration is needed
>>> (extremely short interruption duration and migration duration). To this
>>> end, we have also developed RDMA support for multifd.
>>
>> Will any of you upstream that work?  I'm curious how intrusive it would be
>> when adding it to multifd; if it can keep only 5 exported functions like what
>> rdma.h does right now, it'll be pretty nice.  We also want to make sure it works
>> with arbitrarily sized loads and buffers, e.g. vfio is considering adding IO
>> loads to multifd channels too.
>>
>
> In fact, we sent the patchset to the community in 2021. Pls see:
>
> https://lore.kernel.org/all/20210203185906.GT2950@work-vm/T/
Yes, I sent the patchset of multifd support for RDMA migration, taking over from my
colleague, and I am sorry for not keeping up this work at the time, for various reasons.
I also strongly agree with Lei that the RDMA protocol has some special advantages
over TCP in certain scenarios, and we do indeed use it in our product.
I wasn't aware of that for sure in the past..
Multifd has changed quite a bit in the recent 9.0 release, so that may not apply
anymore.  One thing to mention: please look at Dan's comment on the possible
use of rsocket.h:
https://lore.kernel.org/all/ZjJm6rcqS5EhoKgK@redhat.com/
And Jinpu did help provide an initial test result over the library:
https://lore.kernel.org/qemu-devel/CAMGffEk8wiKNQmoUYxcaTHGtiEm2dwoCF_W7T...
It looks like we have a chance to apply that in QEMU.
>
>
>> One thing to note is that the question here is not about a pure performance
>> comparison between rdma and nics only.  It's about helping us make a decision
>> on whether to drop rdma; iow, even if rdma performs well, the community still
>> has the right to drop it if nobody can actively work on and maintain it.
>> It's just that if nics can perform as well, that's more of a reason to drop, unless
>> companies can help to provide good support and work together.
>>
>
> We are happy to provide the necessary review and maintenance work for RDMA
> if the community needs it.
>
> CC'ing Chuan Zheng.
I'm not sure whether you and Jinpu's team would like to work together and
provide a final solution for rdma over multifd. It could be much simpler
than the original 2021 proposal if the rsocket API will work out.
Thanks,
That's good news, to see a socket abstraction for RDMA!
When I developed the series above, the biggest pain point was that RDMA migration
had no QIOChannel abstraction, and I had to use a 'fake channel'
for it, which was awkward in the code.
So, as far as I can see, we can do this by:
i. First, evaluate whether rsocket is good enough to satisfy our
fundamental QIOChannel abstraction.
ii. If it works, see whether it gives us the opportunity to hide the details
of the RDMA protocol inside rsocket, by removing most of the code in rdma.c and
also some hacks in the migration main process.
iii. Implement the advanced features, like multifd and multi-URI, for RDMA migration.
Since I am not familiar with rsocket, I need some time to look at it and do a quick
verification of RDMA migration based on rsocket.
But yes, I am willing to be involved in this refactoring work, and to see if we can
make this migration feature better :)
--
Regards,
Chuan