[libvirt] high outage times for qemu virtio network links during live migration, trying to debug

Hi,

I'm using libvirt (1.2.12) with qemu (2.2.0) in the context of OpenStack. If I live-migrate a guest with virtio network interfaces, I see a ~1200 msec delay in processing the network packets, and several hundred of them get dropped. The dropped packets I can understand, but I'm not sure why the delay is there.

I instrumented qemu and libvirt, and the strange thing is that this delay seems to happen before qemu actually starts doing any migration-related work (i.e. before qmp_migrate() is called).

Looking at my timestamps, the start of the glitch seems to coincide with libvirtd calling qemuDomainMigratePrepareTunnel3Params(), and the end of the glitch occurs when the migration is complete and we're up and running on the destination.

My question is, why doesn't qemu continue processing virtio packets while the dirty page scanning and memory transfer over the network are proceeding?

Thanks,
Chris

(Please CC me on responses, I'm not subscribed to the lists.)

On Tue, Jan 26, 2016 at 10:41:12AM -0600, Chris Friesen wrote:
> Hi,
> I'm using libvirt (1.2.12) with qemu (2.2.0) in the context of OpenStack.
> If I live-migrate a guest with virtio network interfaces, I see a ~1200msec delay in processing the network packets, and several hundred of them get dropped. I get the dropped packets, but I'm not sure why the delay is there.
> I instrumented qemu and libvirt, and the strange thing is that this delay seems to happen before qemu actually starts doing any migration-related work. (i.e. before qmp_migrate() is called)
> Looking at my timestamps, the start of the glitch seems to coincide with libvirtd calling qemuDomainMigratePrepareTunnel3Params(), and the end of the glitch occurs when the migration is complete and we're up and running on the destination.
> My question is, why doesn't qemu continue processing virtio packets while the dirty page scanning and memory transfer over the network is proceeding?
The qemuDomainMigratePrepareTunnel3Params() method is responsible for starting the QEMU process on the target host. This should not normally have any impact on host networking connectivity, since the CPUs on that target QEMU wouldn't be running at that point. Perhaps the mere act of starting QEMU and plugging the TAP dev into the network on the target host causes some issue though? E.g. are you using a bridge that is doing STP or something like that?

Regards,
Daniel

--
|: http://berrange.com -o- http://www.flickr.com/photos/dberrange/ :|
|: http://libvirt.org -o- http://virt-manager.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: http://entangle-photo.org -o- http://live.gnome.org/gtk-vnc :|
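One quick way to check the bridge/STP hypothesis on the destination host is to read the bridge's STP settings from sysfs. A minimal sketch (not from the thread; the bridge name `br0` is a placeholder, and sysfs reports `forward_delay` in centiseconds):

```python
def parse_bridge_stp(stp_state: str, forward_delay: str) -> dict:
    """Interpret /sys/class/net/<bridge>/bridge/{stp_state,forward_delay}.

    stp_state is "0" (off) or nonzero (on); forward_delay is in
    centiseconds (1500 = the 15 s kernel default).
    """
    return {
        "stp_enabled": stp_state.strip() != "0",
        "forward_delay_s": int(forward_delay.strip()) / 100.0,
    }

def check_bridge(name: str = "br0") -> dict:
    # With STP enabled, a newly plugged TAP port sits in the listening and
    # learning states (2 x forward_delay) before it forwards any traffic.
    base = "/sys/class/net/%s/bridge" % name
    with open(base + "/stp_state") as s, open(base + "/forward_delay") as d:
        return parse_bridge_stp(s.read(), d.read())
```

If `stp_enabled` comes back true, the ~30 s worth of listening/learning on the newly plugged TAP port would be a plausible source of a connectivity glitch around QEMU startup.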

On 01/26/2016 10:45 AM, Daniel P. Berrange wrote:
> On Tue, Jan 26, 2016 at 10:41:12AM -0600, Chris Friesen wrote:
>> My question is, why doesn't qemu continue processing virtio packets while the dirty page scanning and memory transfer over the network is proceeding?
> The qemuDomainMigratePrepareTunnel3Params() method is responsible for starting the QEMU process on the target host. This should not normally have any impact on host networking connectivity, since the CPUs on that target QEMU wouldn't be running at that point. Perhaps the mere act of starting QEMU and plugging the TAP dev into the network on the target host causes some issue though ? eg are you using a bridge that is doing STP or something like that.
Well, looks like your suspicions were correct. Our fast-path backend was mistakenly sending out a GARP when the backend was initialized as part of creating the qemu process on the target host. Oops.

Thanks for your help.

Chris
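For readers hitting something similar: a gratuitous ARP is an ARP request whose sender and target IP are both the announced address, so switches and neighbours update their tables for it immediately. If the destination host emits one before cutover, traffic for the guest is steered toward a port with nothing live behind it yet, which matches the observed stall. A sketch of the frame layout (MAC and IP values are made up):

```python
import struct

def build_garp(mac: bytes, ip: bytes) -> bytes:
    """Build a gratuitous ARP request frame (broadcast; sender IP == target IP)."""
    assert len(mac) == 6 and len(ip) == 4
    eth = b"\xff" * 6 + mac + struct.pack("!H", 0x0806)  # dst broadcast, src MAC, EtherType=ARP
    arp = struct.pack("!HHBBH", 1, 0x0800, 6, 4, 1)      # htype=Ethernet, ptype=IPv4, opcode=request
    arp += mac + ip                                      # sender MAC / sender IP
    arp += b"\x00" * 6 + ip                              # target MAC unknown, target IP == sender IP
    return eth + arp

frame = build_garp(b"\x52\x54\x00\x12\x34\x56", bytes([10, 0, 0, 5]))
```

The fix in this thread was simply to defer that announcement until the guest is actually running on the destination (which is what qemu's own `announce_self` does after migration completes).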

On 26/01/2016 17:41, Chris Friesen wrote:
> I'm using libvirt (1.2.12) with qemu (2.2.0) in the context of OpenStack.
> If I live-migrate a guest with virtio network interfaces, I see a ~1200msec delay in processing the network packets, and several hundred of them get dropped. I get the dropped packets, but I'm not sure why the delay is there.
> I instrumented qemu and libvirt, and the strange thing is that this delay seems to happen before qemu actually starts doing any migration-related work. (i.e. before qmp_migrate() is called)
> Looking at my timestamps, the start of the glitch seems to coincide with libvirtd calling qemuDomainMigratePrepareTunnel3Params(), and the end of the glitch occurs when the migration is complete and we're up and running on the destination.
> My question is, why doesn't qemu continue processing virtio packets while the dirty page scanning and memory transfer over the network is proceeding?
QEMU (or vhost) _are_ processing virtio traffic, because otherwise you'd have no delay---only dropped packets. Or am I missing something?

Paolo

On 01/26/2016 10:50 AM, Paolo Bonzini wrote:
> On 26/01/2016 17:41, Chris Friesen wrote:
>> I'm using libvirt (1.2.12) with qemu (2.2.0) in the context of OpenStack.
>> If I live-migrate a guest with virtio network interfaces, I see a ~1200msec delay in processing the network packets, and several hundred of them get dropped. I get the dropped packets, but I'm not sure why the delay is there.
>> I instrumented qemu and libvirt, and the strange thing is that this delay seems to happen before qemu actually starts doing any migration-related work. (i.e. before qmp_migrate() is called)
>> Looking at my timestamps, the start of the glitch seems to coincide with libvirtd calling qemuDomainMigratePrepareTunnel3Params(), and the end of the glitch occurs when the migration is complete and we're up and running on the destination.
>> My question is, why doesn't qemu continue processing virtio packets while the dirty page scanning and memory transfer over the network is proceeding?
> QEMU (or vhost) _are_ processing virtio traffic, because otherwise you'd have no delay---only dropped packets. Or am I missing something?
I have separate timestamps embedded in the packet for when it was sent and when it was echoed back by the target (which is the one being migrated). What I'm seeing is that packets to the guest are being sent every msec, but they get delayed somewhere for over a second on the way to the destination VM while the migration is in progress.

Once the migration is over, a bunch of packets get delivered to the app in the guest and are then processed all at once and echoed back to the sender in a big burst (and a bunch of packets are dropped, presumably due to a buffer overflowing somewhere).

For comparison, we have a DPDK-based fastpath NIC type that we added (sort of like vhost-net), and it continues to process packets while the dirty page scanning is going on. Only the actual cutover affects it.

Chris
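A measurement like the one described can be reproduced with a simple probe that embeds a send timestamp in each UDP packet and has the guest echo it back. A sketch only: the echo service in the guest, host, and port are assumptions, and unlike the 1 ms paced stream described above this version waits for each echo before sending the next:

```python
import socket, struct, time

def make_probe(seq: int, now_ns: int) -> bytes:
    # payload: 4-byte sequence number + 8-byte send timestamp (monotonic ns)
    return struct.pack("!Iq", seq, now_ns)

def parse_probe(data: bytes, now_ns: int):
    seq, sent_ns = struct.unpack("!Iq", data[:12])
    return seq, (now_ns - sent_ns) / 1e6  # round-trip time in ms

def probe(host: str, port: int, count: int = 1000, interval: float = 0.001):
    """Send paced probes to a UDP echo service in the guest; None marks a drop."""
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.settimeout(2.0)
    rtts = []
    for seq in range(count):
        s.sendto(make_probe(seq, time.monotonic_ns()), (host, port))
        try:
            data, _ = s.recvfrom(64)
            rtts.append(parse_probe(data, time.monotonic_ns())[1])
        except socket.timeout:
            rtts.append(None)
        time.sleep(interval)
    return rtts
```

A migration glitch like the one in this thread would show up as a run of `None` entries followed by a burst of abnormally large RTTs once the backlog is flushed.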

On 26/01/2016 18:21, Chris Friesen wrote:
>>> My question is, why doesn't qemu continue processing virtio packets while the dirty page scanning and memory transfer over the network is proceeding?
>> QEMU (or vhost) _are_ processing virtio traffic, because otherwise you'd have no delay---only dropped packets. Or am I missing something?
> I have separate timestamps embedded in the packet for when it was sent and when it was echoed back by the target (which is the one being migrated). What I'm seeing is that packets to the guest are being sent every msec, but they get delayed somewhere for over a second on the way to the destination VM while the migration is in progress. Once the migration is over, a bunch of packets get delivered to the app in the guest and are then processed all at once and echoed back to the sender in a big burst (and a bunch of packets are dropped, presumably due to a buffer overflowing somewhere).
That doesn't exclude a bug somewhere in net/ code. It doesn't pinpoint it to QEMU or vhost-net.

In any case, what I would do is to use tracing at all levels (guest kernel, QEMU, host kernel) for packet rx and tx, and find out at which layer the hiccup appears.

Paolo
> For comparison, we have a DPDK-based fastpath NIC type that we added (sort of like vhost-net), and it continues to process packets while the dirty page scanning is going on. Only the actual cutover affects it.

On 01/26/2016 11:31 AM, Paolo Bonzini wrote:
> On 26/01/2016 18:21, Chris Friesen wrote:
>>>> My question is, why doesn't qemu continue processing virtio packets while the dirty page scanning and memory transfer over the network is proceeding?
>>> QEMU (or vhost) _are_ processing virtio traffic, because otherwise you'd have no delay---only dropped packets. Or am I missing something?
>> I have separate timestamps embedded in the packet for when it was sent and when it was echoed back by the target (which is the one being migrated). What I'm seeing is that packets to the guest are being sent every msec, but they get delayed somewhere for over a second on the way to the destination VM while the migration is in progress. Once the migration is over, a bunch of packets get delivered to the app in the guest and are then processed all at once and echoed back to the sender in a big burst (and a bunch of packets are dropped, presumably due to a buffer overflowing somewhere).
> That doesn't exclude a bug somewhere in net/ code. It doesn't pinpoint it to QEMU or vhost-net.
> In any case, what I would do is to use tracing at all levels (guest kernel, QEMU, host kernel) for packet rx and tx, and find out at which layer the hiccup appears.
Is there a straightforward way to trace packet processing in qemu (preferably with millisecond-accurate timestamps)?

Chris

On 26/01/2016 18:49, Chris Friesen wrote:
>> That doesn't exclude a bug somewhere in net/ code. It doesn't pinpoint it to QEMU or vhost-net.
>> In any case, what I would do is to use tracing at all levels (guest kernel, QEMU, host kernel) for packet rx and tx, and find out at which layer the hiccup appears.
> Is there a straightforward way to trace packet processing in qemu (preferably with millisecond-accurate timestamps)?
You can use tracing (docs/tracing.txt). There are two possibilities:

1) use existing low-level virtio tracepoints: virtqueue_fill (end of tx and rx operation) and virtqueue_pop (beginning of tx operation);

2) add tracepoints to hw/net/virtio-net.c (virtio_net_flush_tx, virtio_net_tx_complete, virtio_net_receive) or net/tap.c (tap_receive, tap_receive_iov, tap_send).

Paolo
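To put numbers on option 1, one approach (an editor's sketch, not from the thread) is to run QEMU with the simple trace backend, convert the binary log with QEMU's scripts/simpletrace.py, and scan the text output for long stalls between virtio events. This assumes the simpletrace output format where the second column is the delta to the previous event in microseconds:

```python
def find_gaps(lines, threshold_ms=100.0):
    """Return (line_no, event_name, gap_ms) for events whose gap to the
    previous trace record exceeds threshold_ms.

    Assumes each line looks like 'event_name delta_us field=val ...',
    i.e. the second column is the inter-event delta in microseconds.
    """
    gaps = []
    for n, line in enumerate(lines, 1):
        parts = line.split()
        if len(parts) < 2:
            continue
        try:
            delta_ms = float(parts[1]) / 1000.0
        except ValueError:
            continue  # header or malformed line
        if delta_ms >= threshold_ms:
            gaps.append((n, parts[0], delta_ms))
    return gaps
```

Running this over the virtqueue_fill/virtqueue_pop stream during a migration should show whether the >1 s hiccup appears between virtio events inside QEMU, or whether QEMU's events are evenly spaced and the stall is in the guest or host kernel instead.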
participants (3): Chris Friesen, Daniel P. Berrange, Paolo Bonzini