
[...]
it appears that OpenStack uses a Linux bridge in conjunction with an OVS bridge:
There are four distinct types of virtual networking devices: TAP devices, veth pairs, Linux bridges, and Open vSwitch bridges. For an Ethernet frame to travel from eth0 of virtual machine vm01 to the physical network, it must pass through nine devices inside of the host: TAP vnet0, Linux bridge qbrXXX, veth pair (qvbXXX, qvoXXX), Open vSwitch bridge br-int, veth pair (int-br-eth1, phy-br-eth1), Open vSwitch bridge br-eth1, and, finally, the physical network interface card eth1.
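The per-guest part of that chain can be reproduced by hand; here's a rough sketch of the plumbing (the names qbr0/qvb0/qvo0 are illustrative, not what OpenStack would actually generate, and the commands assume root plus the iproute2 and openvswitch tools):

```shell
# Linux bridge that the guest's TAP device gets plugged into
ip link add name qbr0 type bridge

# veth pair linking the Linux bridge to the OVS integration bridge
ip link add qvb0 type veth peer name qvo0
ip link set qvb0 master qbr0

# OVS integration bridge; qvo0 becomes one of its ports
ovs-vsctl add-br br-int
ovs-vsctl add-port br-int qvo0

# bring the devices up
ip link set qbr0 up
ip link set qvb0 up
ip link set qvo0 up
```

Every one of those hops is a device the frame has to traverse before it even reaches the provider bridge and the physical NIC.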
That depends on how you configure OpenStack to operate. The reason OpenStack links OVS to a Linux bridge is that you can't set up iptables rules on an OVS bridge.
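Concretely, iptables can match traffic on ports of a Linux bridge via the physdev module, which is roughly the mechanism OpenStack's security groups rely on. A hedged example (the TAP name vnet0 and the MAC address are illustrative, not anything OpenStack guarantees):

```shell
# Drop frames entering the bridge from the guest's TAP device unless
# they carry the guest's assigned source MAC (basic anti-spoofing,
# in the spirit of what the security-group rules implement).
iptables -A FORWARD -m physdev --physdev-in vnet0 \
    -m mac ! --mac-source 52:54:00:12:34:56 -j DROP
```

Since OVS ports don't pass through the bridge netfilter hooks this way, the extra Linux bridge is what gives iptables something to attach to.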
That's a useful insight; I didn't know about it.
So for each guest, OpenStack creates a separate bridge + veth pair, and then sets iptables rules on that. This is pretty undesirable from a performance POV, due to the number of devices the traffic must traverse :-( So I wouldn't take OpenStack's usage as an example of good practice here.
Noted. I see your recommendation: from a libvirt guest's POV, an OVS bridge connected to the physical eth0 has the least overhead. I was wondering whether an OVS bridge performs any better than a Linux bridge. I had a brief chat with Thomas Graf; w.r.t. performance, he mentions that in some numbers they did, OVS did /slightly/ better. However, he hasn't seen any numbers that would indicate OVS is clearly better in performance, but hasn't seen anything the other way around either. -- /kashyap
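For reference, the lower-overhead setup recommended above (guest attached directly to an OVS bridge that owns the physical NIC) would look roughly like this; the bridge name br0 is illustrative, and note that adding eth0 to the bridge will cut off any connectivity currently configured on it:

```shell
# OVS bridge wired straight to the physical NIC; libvirt adds the
# guest's TAP device to br0 itself, with no per-guest Linux bridge
# or veth pair in between
ovs-vsctl add-br br0
ovs-vsctl add-port br0 eth0

# corresponding <interface> element in the libvirt domain XML:
cat <<'EOF'
<interface type='bridge'>
  <source bridge='br0'/>
  <virtualport type='openvswitch'/>
  <model type='virtio'/>
</interface>
EOF
```

With this layout a frame from the guest crosses only the TAP device, br0, and eth1/eth0, instead of the nine-device chain quoted earlier.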