
Hi,
-----Original Message----- From: Peter Xu [mailto:peterx@redhat.com] Sent: Thursday, June 6, 2024 5:19 AM To: Dr. David Alan Gilbert <dave@treblig.org> Cc: Michael Galaxy <mgalaxy@akamai.com>; zhengchuan <zhengchuan@huawei.com>; Gonglei (Arei) <arei.gonglei@huawei.com>; Daniel P. Berrangé <berrange@redhat.com>; Markus Armbruster <armbru@redhat.com>; Yu Zhang <yu.zhang@ionos.com>; Zhijian Li (Fujitsu) <lizhijian@fujitsu.com>; Jinpu Wang <jinpu.wang@ionos.com>; Elmar Gerdes <elmar.gerdes@ionos.com>; qemu-devel@nongnu.org; Yuval Shaia <yuval.shaia.ml@gmail.com>; Kevin Wolf <kwolf@redhat.com>; Prasanna Kumar Kalever <prasanna.kalever@redhat.com>; Cornelia Huck <cohuck@redhat.com>; Michael Roth <michael.roth@amd.com>; Prasanna Kumar Kalever <prasanna4324@gmail.com>; integration@gluster.org; Paolo Bonzini <pbonzini@redhat.com>; qemu-block@nongnu.org; devel@lists.libvirt.org; Hanna Reitz <hreitz@redhat.com>; Michael S. Tsirkin <mst@redhat.com>; Thomas Huth <thuth@redhat.com>; Eric Blake <eblake@redhat.com>; Song Gao <gaosong@loongson.cn>; Marc-André Lureau <marcandre.lureau@redhat.com>; Alex Bennée <alex.bennee@linaro.org>; Wainer dos Santos Moschetta <wainersm@redhat.com>; Beraldo Leal <bleal@redhat.com>; Pannengyuan <pannengyuan@huawei.com>; Xiexiangyou <xiexiangyou@huawei.com> Subject: Re: [PATCH-for-9.1 v2 2/3] migration: Remove RDMA protocol handling
On Wed, Jun 05, 2024 at 08:48:28PM +0000, Dr. David Alan Gilbert wrote:
I just noticed this thread; some random notes from a somewhat fragmented memory of this:
a) Long long ago, I also tried rsocket;
https://lists.gnu.org/archive/html/qemu-devel/2015-01/msg02040.html
as I remember the library was quite flaky at the time.
Hmm, interesting. It also looks like there's a thread doing rpoll().
Yeh, I can't actually remember much more about what I did back then!
Heh, that's understandable and fair. :)
I hope Lei and his team have tested >4G mem; otherwise it's definitely worth checking. Lei also mentioned in the cover letter that they found rsocket bugs, but I'm not sure what that's about.
It would probably be a good idea to keep track of what bugs are in flight with it, and try it on a few RDMA cards to see what problems get triggered. I think I reported a few at the time, but I gave up after feeling it was getting very hacky.
Agreed. Maybe we can have a list of those in the cover letter or even QEMU's migration/rdma doc page.
Lei, if you think that makes sense please do so in your upcoming posts. There'll need to be a list of the things you encountered in the kernel driver, and it'll be even better if there are further links to read on each problem.
OK, no problem. There are two bugs:

Bug 1: https://github.com/linux-rdma/rdma-core/commit/23985e25aebb559b761872313f8ca...
This commit introduces a bug that causes QEMU to hang: when the timeout parameter of rpoll() is not -1 or 0, the program occasionally gets stuck.

Problem analysis: during the first rpoll(), rs_poll_enter() at line 3297 does pollcnt++, so pollcnt is now 1. At line 3302 the timeout expires and the function returns; note that rs_poll_exit() is not called here, so pollcnt is not decremented and stays at 1. During the second rpoll(), rs_poll_enter() at line 3297 does pollcnt++ again, so pollcnt is now 2. If the timeout does not expire and poll() returns a value greater than 0, rs_poll_stop() is executed; because the if (--pollcnt) condition is false, suspendpoll = 1 is set. Control then goes back to the do-while loop inside rpoll() and rs_poll_enter() is called again; now the if (suspendpoll) condition is true, so it calls pthread_yield() and returns -EBUSY, and we are back in the do-while loop of rpoll(). Because if (rs_poll_enter()) is true, the loop continues and rs_poll_enter() is called yet again, and so on. As a result, the program hangs.

Root cause: at line 3302, rs_poll_exit() is not called before the function returns on timeout.

Bug 2: In rsocket.c there is a receive queue, int accept_queue[2], implemented with a socketpair. The listen_svc thread in rsocket.c is responsible for accepting connections and writing them to accept_queue[1]; when raccept() is called, a connection is read from accept_queue[0]. In the test case, qio_channel_wait(QIO_CHANNEL(lioc), G_IO_IN); waits for a readable event (i.e. for an incoming connection), and rpoll() is supposed to check whether accept_queue[0] has a readable event. However, this poll does not include accept_queue[0]; only after the timeout expires does rpoll() pick up the readable event on accept_queue[0] from rs_poll_arm() again.

Impact: the accept operation only completes after 5000 ms. Of course, we can shorten this time by echoing a smaller millisecond value into /etc/rdma/rsocket/wake_up_interval.

Regards,
-Gonglei
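P.S. To make the Bug 1 control flow easier to follow, here is a condensed, purely illustrative sketch of the enter/exit pairing described above. The names (pollcnt, suspendpoll, rs_poll_enter/exit/stop) mirror rsocket.c, but the bodies are a simplification of the reported behaviour, not the actual rdma-core source:

#include <errno.h>
#include <poll.h>
#include <sched.h>

static int pollcnt;        /* number of threads currently inside poll() */
static int suspendpoll;    /* asks active pollers to back off */

static int rs_poll_enter(void)
{
    if (suspendpoll) {
        sched_yield();             /* pthread_yield() in the report */
        return -EBUSY;             /* caller retries the loop */
    }
    pollcnt++;
    return 0;
}

static void rs_poll_exit(void)
{
    pollcnt--;                     /* must pair with every successful enter */
}

static void rs_poll_stop(void)
{
    if (--pollcnt)
        suspendpoll = 1;           /* other pollers apparently still active */
    else
        suspendpoll = 0;
}

/* rpoll(), heavily condensed */
static int rpoll_sketch(struct pollfd *fds, nfds_t nfds, int timeout)
{
    int ret;

    do {
        if (rs_poll_enter())
            continue;              /* -EBUSY: yield and retry */

        ret = poll(fds, nfds, timeout);
        if (ret > 0) {
            rs_poll_stop();
            return ret;
        }
        if (ret == 0) {
            /*
             * BUG (as reported): returning on timeout without calling
             * rs_poll_exit() leaves pollcnt elevated.  A later
             * rs_poll_stop() then sees a stale count, sets
             * suspendpoll = 1, and every subsequent rs_poll_enter()
             * spins on -EBUSY: the caller hangs.
             */
            return 0;
        }
        rs_poll_exit();            /* error path pairs enter/exit correctly */
        return ret;
    } while (1);
}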
e) Someone made a good suggestion (sorry, can't remember who) - that the RDMA migration structure was the wrong way around - it should be the destination which initiates an RDMA read, rather than the source doing a write; then things might become a LOT simpler; you just need to send page ranges to the destination and it can pull it. That might work nicely for postcopy.
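(For illustration only: a destination-initiated pull via the verbs API could look roughly like the sketch below. The ibv_* structures and the IBV_WR_RDMA_READ opcode are standard libibverbs; the queue pair and the remote_addr/rkey advertisement are assumed to come from the migration handshake, and none of this reflects QEMU's actual rdma.c code.)

#include <infiniband/verbs.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Hypothetical sketch: the destination pulls one guest page with an RDMA
 * READ instead of the source pushing it with a WRITE.  Completion is
 * assumed to be polled elsewhere on the send CQ.
 */
static int pull_page(struct ibv_qp *qp, void *local_buf, uint32_t lkey,
                     uint64_t remote_addr, uint32_t rkey, size_t page_size)
{
    struct ibv_sge sge = {
        .addr   = (uintptr_t)local_buf,
        .length = page_size,
        .lkey   = lkey,
    };
    struct ibv_send_wr wr = {
        .wr_id               = (uintptr_t)local_buf,
        .sg_list             = &sge,
        .num_sge             = 1,
        .opcode              = IBV_WR_RDMA_READ,   /* dest fetches the page */
        .send_flags          = IBV_SEND_SIGNALED,
        .wr.rdma.remote_addr = remote_addr,
        .wr.rdma.rkey        = rkey,
    };
    struct ibv_send_wr *bad_wr;

    return ibv_post_send(qp, &wr, &bad_wr);
}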
I'm not sure whether it'll still be a problem if the rdma recv side is based on zero-copy. It would be a matter of whether atomicity can be guaranteed, so that the guest vCPUs don't see a partially copied page during in-flight DMAs. UFFDIO_COPY (or a friend) is currently the only solution for that.
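(A minimal sketch of that atomicity guarantee, assuming a userfaultfd has already been opened and registered for the guest RAM range; the helper name and surrounding plumbing are hypothetical, only the UFFDIO_COPY ioctl itself is the real kernel interface:)

#include <linux/userfaultfd.h>
#include <sys/ioctl.h>
#include <stddef.h>
#include <stdint.h>

/*
 * Place one page atomically: the kernel copies 'src' into the faulting
 * address and only then wakes the blocked vCPU thread, so the guest never
 * observes a partially copied page.
 */
static int place_page(int uffd, void *guest_addr, void *src, size_t page_size)
{
    struct uffdio_copy copy = {
        .dst  = (uintptr_t)guest_addr,
        .src  = (uintptr_t)src,
        .len  = page_size,
        .mode = 0,                 /* wake the faulting thread on completion */
    };

    return ioctl(uffd, UFFDIO_COPY, &copy);
}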
Yes, but even ignoring that (and the UFFDIO_CONTINUE idea you mention), if the destination can issue an RDMA read itself, it doesn't need to send messages to the source to ask for a page fetch; it just goes and grabs it itself, that's got to be good for latency.
Oh, that's pretty internal RDMA stuff to me and beyond my knowledge... but from what I can tell it sounds very reasonable indeed!
Thanks!
-- Peter Xu