On Fri, May 17, 2024 at 03:01:59PM +0200, Yu Zhang wrote:
> Hello Michael and Peter,

Hi,

> Exactly, not so compelling, as I did it first only on servers widely
> used for production in our data center. The network adapters are:
>
> Ethernet controller: Broadcom Inc. and subsidiaries NetXtreme BCM5720
> 2-port Gigabit Ethernet PCIe

Hmm... I definitely think Jinpu's Mellanox ConnectX-6 looks more
reasonable:

https://lore.kernel.org/qemu-devel/CAMGffEn-DKpMZ4tA71MJYdyemg0Zda15wVAqk...

Many thanks to everyone helping with the testing.

> InfiniBand controller: Mellanox Technologies MT27800 Family
> [ConnectX-5]
>
> which doesn't meet our purpose. I can choose RDMA or TCP for VM
> migration. RDMA traffic goes through InfiniBand and TCP through
> Ethernet on these two hosts. One is standby while the other is active.
>
> Now I'll try on a server with more recent Ethernet and InfiniBand
> network adapters. One of them has:
>
> BCM57414 NetXtreme-E 10Gb/25Gb RDMA Ethernet Controller (rev 01)

The comparison between RDMA and TCP on the same NIC could make more sense.

It looks to me like NICs are powerful now, but again, as I mentioned, I
don't think that alone is a reason to deprecate RDMA, especially if
QEMU's RDMA migration has a chance to be refactored using rsocket.
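
(For anyone who hasn't looked at it: rsocket is the librdmacm wrapper
that mirrors the BSD socket calls almost one for one, which is what makes
the refactor attractive. Below is a rough standalone sketch, not QEMU
code, with a made-up destination address, of what a sender-side data path
could look like.)

  /*
   * Rough sketch only, not QEMU code: it shows how the rsocket API from
   * librdmacm (<rdma/rsocket.h>) mirrors the plain socket calls, which is
   * what makes a socket-style refactor of the RDMA migration path
   * thinkable. The destination address/port below are made up.
   * Build with: gcc -o rsend rsend.c -lrdmacm
   */
  #include <stdio.h>
  #include <string.h>
  #include <sys/types.h>
  #include <sys/socket.h>
  #include <netdb.h>
  #include <rdma/rsocket.h>

  int main(void)
  {
      struct addrinfo hints = { .ai_socktype = SOCK_STREAM };
      struct addrinfo *res;
      const char buf[] = "migration stream bytes would go here";

      int rc = getaddrinfo("192.168.1.2", "4444", &hints, &res);
      if (rc) {
          fprintf(stderr, "getaddrinfo: %s\n", gai_strerror(rc));
          return 1;
      }

      /* Same shape as socket()/connect()/send(), but carried over RDMA. */
      int fd = rsocket(res->ai_family, res->ai_socktype, res->ai_protocol);
      if (fd < 0) {
          perror("rsocket");
          return 1;
      }
      if (rconnect(fd, res->ai_addr, res->ai_addrlen) < 0) {
          perror("rconnect");
          return 1;
      }
      if (rsend(fd, buf, sizeof(buf), 0) < 0) {
          perror("rsend");
      }

      rclose(fd);
      freeaddrinfo(res);
      return 0;
  }

The destination side would follow the same pattern with
rbind()/rlisten()/raccept(), so the socket-based migration channel code
could in principle keep its current structure with the r-prefixed calls
behind it.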

Has anyone started looking in that direction? Would it make sense to
start a PoC now?

Thanks,

--
Peter Xu