One thing to keep in mind here (though I don't have any hardware to
test with) is that one of the original goals of the RDMA implementation
was not simply raw throughput or raw latency, but the lack of CPU
utilization in kernel space thanks to the offload. While it is entirely
possible that newer hardware with TCP might compete, the significant
reduction in CPU usage in the TCP/IP stack was a big win at the time.
Just something to consider while you're doing the testing.
I just noticed this thread; some random notes from a somewhat
fragmented memory of this:
a) Long long ago, I also tried rsocket;
as I remember the library was quite flaky at the time.
b) A lot of the complexity in the rdma migration code comes from
emulating a stream to carry the migration control data and interleaving
that with the actual RAM copy. I believe the original design used
a separate TCP socket for the control data and just used RDMA
for the data - that should be a lot simpler (but alas it was rejected
in review early on).
c) I can't remember the last benchmarks I did; but I think I did
manage to beat RDMA with multifd; but yes, multifd does eat host CPU
whereas RDMA barely uses a whisper.
d) The 'zero-copy-send' option in migrate may well get some of that
CPU time back; but if I remember correctly we were still bottlenecked
on the receive side. (I can't remember if zero-copy-send worked with
multifd?)
e) Someone made a good suggestion (sorry, can't remember who) - that the
RDMA migration structure was the wrong way around - it should be the
destination which initiates an RDMA read, rather than the source
doing a write; then things might become a LOT simpler; you just need
to send page ranges to the destination and it can pull them (rough
verbs-level sketch below). That might work nicely for postcopy.
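
As a very rough illustration of that pull model with plain libibverbs
(the function name and the out-of-band page-range exchange are
hypothetical; real code would live behind QEMU's RDMA channel):

#include <stdint.h>
#include <infiniband/verbs.h>

/* Hypothetical sketch: the destination pulls one page range that the
 * source advertised.  Assumes an established RC queue pair (qp), a
 * registered local buffer (local_mr), and the source's remote_addr/rkey
 * received out of band (e.g. over the control stream).
 */
static int pull_page_range(struct ibv_qp *qp, struct ibv_mr *local_mr,
                           void *local_buf, uint64_t remote_addr,
                           uint32_t rkey, uint32_t len)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)local_buf,
        .length = len,
        .lkey   = local_mr->lkey,
    };
    struct ibv_send_wr wr = {
        .wr_id      = (uintptr_t)local_buf,
        .sg_list    = &sge,
        .num_sge    = 1,
        .opcode     = IBV_WR_RDMA_READ,  /* destination reads from the source */
        .send_flags = IBV_SEND_SIGNALED,
    };
    struct ibv_send_wr *bad_wr;

    wr.wr.rdma.remote_addr = remote_addr;
    wr.wr.rdma.rkey        = rkey;

    /* The completion is reaped from the send CQ elsewhere. */
    return ibv_post_send(qp, &wr, &bad_wr);
}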
Dave
- Michael
On 5/9/24 03:58, Zheng Chuan wrote:
> Hi Peter, Lei, Jinpu.
>
> On 2024/5/8 0:28, Peter Xu wrote:
> > On Tue, May 07, 2024 at 01:50:43AM +0000, Gonglei (Arei) wrote:
> > > Hello,
> > >
> > > > -----Original Message-----
> > > > From: Peter Xu [mailto:peterx@redhat.com]
> > > > Sent: Monday, May 6, 2024 11:18 PM
> > > > To: Gonglei (Arei) <arei.gonglei@huawei.com>
> > > > Cc: Daniel P. Berrangé <berrange@redhat.com>; Markus Armbruster
> > > > <armbru@redhat.com>; Michael Galaxy <mgalaxy@akamai.com>; Yu Zhang
> > > > <yu.zhang@ionos.com>; Zhijian Li (Fujitsu) <lizhijian@fujitsu.com>;
> > > > Jinpu Wang <jinpu.wang@ionos.com>; Elmar Gerdes <elmar.gerdes@ionos.com>;
> > > > qemu-devel@nongnu.org; Yuval Shaia <yuval.shaia.ml@gmail.com>; Kevin Wolf
> > > > <kwolf@redhat.com>; Prasanna Kumar Kalever <prasanna.kalever@redhat.com>;
> > > > Cornelia Huck <cohuck@redhat.com>; Michael Roth <michael.roth@amd.com>;
> > > > Prasanna Kumar Kalever <prasanna4324@gmail.com>; integration@gluster.org;
> > > > Paolo Bonzini <pbonzini@redhat.com>; qemu-block@nongnu.org;
> > > > devel@lists.libvirt.org; Hanna Reitz <hreitz@redhat.com>; Michael S.
> > > > Tsirkin <mst@redhat.com>; Thomas Huth <thuth@redhat.com>; Eric Blake
> > > > <eblake@redhat.com>; Song Gao <gaosong@loongson.cn>; Marc-André Lureau
> > > > <marcandre.lureau@redhat.com>; Alex Bennée <alex.bennee@linaro.org>;
> > > > Wainer dos Santos Moschetta <wainersm@redhat.com>; Beraldo Leal
> > > > <bleal@redhat.com>; Pannengyuan <pannengyuan@huawei.com>;
> > > > Xiexiangyou <xiexiangyou@huawei.com>
> > > > Subject: Re: [PATCH-for-9.1 v2 2/3] migration: Remove RDMA protocol
> > > > handling
> > > >
> > > > On Mon, May 06, 2024 at 02:06:28AM +0000, Gonglei (Arei) wrote:
> > > > > Hi, Peter
> > > > Hey, Lei,
> > > >
> > > > Happy to see you around again after years.
> > > >
> > > Haha, me too.
> > >
> > > > > RDMA features high bandwidth, low latency (in a non-blocking
> > > > > lossless network), and direct remote memory access by bypassing
> > > > > the CPU (as you know, CPU resources are expensive for cloud
> > > > > vendors, which is one of the reasons why we introduced offload
> > > > > cards), which TCP does not have.
> > > > It's another cost to use offload cards, vs. preparing more CPU
> > > > resources?
> > > >
> > > A converged software and hardware offload architecture is the way to go
> > > for all cloud vendors (considering the overall benefits in performance,
> > > cost, security, and speed of innovation); it's not just a matter of
> > > adding the resources of a DPU card.
> > >
> > > > > It is very useful in scenarios where fast live migration is needed
> > > > > (extremely short interruption and migration duration). To this end,
> > > > > we have also developed RDMA support for multifd.
> > > > Will any of you upstream that work?  I'm curious how intrusive it
> > > > would be to add it to multifd; if it can keep to only 5 exported
> > > > functions like rdma.h does right now, it'll be pretty nice.  We also
> > > > want to make sure it works with arbitrarily sized loads and buffers,
> > > > e.g. VFIO is considering adding I/O loads to multifd channels too.
> > > >
> > > In fact, we sent the patchset to the community in 2021. Please see:
> > > https://lore.kernel.org/all/20210203185906.GT...
> Yes, I sent the patchset adding multifd support for RDMA migration, taking it
> over from my colleague, and I'm sorry for not continuing that work at the time
> for various reasons.
> I also strongly agree with Lei that the RDMA protocol has some special
> advantages over TCP in certain scenarios, and we do indeed use it in our
> product.
>
> > I wasn't aware of that for sure in the past.
> >
> > Multifd has changed quite a bit in the last 9.0 release, so that may not
> > apply anymore. One thing to mention is please look at Dan's comment on the
> > possible use of rsocket.h:
> >
> > https://lore.kernel.org/all/ZjJm6rcqS5EhoKgK@...
> >
> > And Jinpu did help provide an initial test result over the library:
> >
> > https://lore.kernel.org/qemu-devel/CAMGffEk8w...
> >
> > It looks like we have a chance to apply that in QEMU.
> >
> > >
> > > > One thing to note is that the question here is not purely about a
> > > > performance comparison between RDMA and NICs.  It's about helping us
> > > > make a decision on whether to drop RDMA; in other words, even if RDMA
> > > > performs well, the community still has the right to drop it if nobody
> > > > can actively work on and maintain it.
> > > > It's just that if NICs can perform as well, that's more of a reason to
> > > > drop it, unless companies can help to provide good support and work
> > > > together.
> > > >
> > > We are happy to provide the necessary review and maintenance work for
> > > RDMA if the community needs it.
> > >
> > > CC'ing Chuan Zheng.
> > I'm not sure whether you and Jinpu's team would like to work together and
> > provide a final solution for rdma over multifd.  It could be much simpler
> > than the original 2021 proposal if the rsocket API works out.
> >
> > Thanks,
> >
> It's good news to see a socket abstraction for RDMA!
> When I developed the series above, the biggest pain was that the RDMA
> migration had no QIOChannel abstraction, so I had to use a 'fake channel'
> for it, which was awkward in the code.
> So, as far as I can see, we can do this in steps:
> i. first, evaluate whether rsocket is good enough to satisfy our fundamental
> QIOChannel abstraction;
> ii. if it works, see whether it gives us the opportunity to hide the details
> of the RDMA protocol inside rsocket by removing most of the code in rdma.c
> and also some hacks in the migration main process (a rough sketch of what
> such a channel's write path could look like is below);
> iii. implement the advanced features like multifd and multi-URI for RDMA
> migration.
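>
> A very rough sketch for step ii, just to show the shape: only the r*()
> calls come from librdmacm's <rdma/rsocket.h>; the wrapper name and error
> handling are made up here and are not the real QIOChannel callback
> signature.
>
> #include <errno.h>
> #include <sys/uio.h>
> #include <rdma/rsocket.h>
>
> /* Hypothetical write path of an rsocket-backed channel: rwritev() has
>  * the same semantics as writev(2), so the channel code would not need
>  * to know anything about the RDMA protocol underneath. */
> static ssize_t rsocket_channel_writev(int rfd, const struct iovec *iov,
>                                       int niov)
> {
>     ssize_t done;
>
>     do {
>         done = rwritev(rfd, iov, niov);
>     } while (done < 0 && errno == EINTR);
>
>     return done; /* caller retries on EAGAIN for non-blocking channels */
> }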
>
> Since I am not familiar with rsocket, I need some time to look at it and do a
> quick verification of RDMA migration based on rsocket (something like the
> minimal sketch below).
> But yes, I am willing to be involved in this refactoring work and to see if
> we can make this migration feature better. :)
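>
> For that quick verification, the nice thing is that the rsocket calls mirror
> the BSD socket API one for one; a minimal, untested connect-and-send sketch
> (the helper is hypothetical, error handling simplified):
>
> #include <netdb.h>
> #include <sys/socket.h>
> #include <rdma/rsocket.h>
>
> /* Resolve, connect and send one buffer over an rsocket. */
> static int rsocket_send_once(const char *host, const char *port,
>                              const void *buf, size_t len)
> {
>     struct addrinfo hints = { .ai_socktype = SOCK_STREAM };
>     struct addrinfo *res;
>     int rfd, ret = -1;
>
>     if (getaddrinfo(host, port, &hints, &res) != 0) {
>         return -1;
>     }
>
>     rfd = rsocket(res->ai_family, res->ai_socktype, res->ai_protocol);
>     if (rfd >= 0) {
>         if (rconnect(rfd, res->ai_addr, res->ai_addrlen) == 0) {
>             ret = rsend(rfd, buf, len, 0) == (ssize_t)len ? 0 : -1;
>         }
>         rclose(rfd);
>     }
>
>     freeaddrinfo(res);
>     return ret;
> }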
>
>
--
-----Open up your eyes, open up your mind, open up your code -------
/ Dr. David Alan Gilbert | Running GNU/Linux | Happy \
\ dave @