On 1/3/19 9:23 AM, Marc Haber wrote:
> Hi Laine,
>
> thanks for your answer, I really appreciate that.
>
> On Wed, Jan 02, 2019 at 11:34:30AM -0500, Laine Stump wrote:
>> On 12/16/18 4:59 PM, Marc Haber wrote:
>>> I would like to run a network firewall as a VM on a KVM host. There are
>>> ~25 VLANs delivered to the KVM host on three dedicated links, no LACP
>>> or other things. I have VLANs 100-180 on the host's enp1s0, VLANs
>>> 200-280 on the host's enp2s0, and VLANs 300-380 on the host's enp3s0.
>>>
>>> To save myself from configuring all VLANs on the KVM host, I'd like to
>>> hand the entire ethernet link to the VM and to have the VLAN interfaces
>>> there. Using classical Linux bridges (brctl), things work fine.
>>
>> When I asked the person I go to with questions about macvtap (because he
>> knows the internals), his response was "if a Linux host bridge works, then
>> he should use that". In other words, he was skeptical that what you want
>> to do could be made to work with macvtap.
> I see.
>
> A Linux host bridge is what I build with brctl?
Yes, although I wouldn't use the brctl command directly if I were you -
it's much simpler and more stable to set it up in your host's system
network config, and let initscripts/NetworkManager/whatever your distro
uses take care of creating the bridge and attaching your physical
ethernet each time the host boots.
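For example (just a sketch - this assumes a Debian-style ifupdown setup
with the bridge-utils integration installed, and reuses your enp1s0;
NetworkManager, systemd-networkd etc. have their own equivalents), the
bridge could be declared in /etc/network/interfaces like this:

auto br0
iface br0 inet dhcp
    bridge_ports enp1s0
    bridge_stp off

The guest then attaches to br0 instead of to enp1s0 directly.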
>> Is there a specific reason you need to use macvtap rather than a
>> Linux host bridge?
> I somehow got the impression that using macvtap is the more "modern"
> and also more performant approach to bringing networking to VMs. Since
> the VM in question is a firewall, I'd love to have the performance
> impact caused by virtualization minimized[1].
>
> If this is a misconception, it might have been partially caused by some
> colleagues at my last customer's site who were very vocal about
> deprecating the classical brctl bridges in favor of macvtap/macvlan,
Not really. macvtap is useful because it's simple to set up (it doesn't
require a change to the host's network config), but it is also
problematic in some ways, e.g. it doesn't allow host<->guest
communication (unless you have a switch that reflects all traffic back
to the sender). A long time ago people were theorizing that macvtap
would provide much better performance than a host bridge connection, but
I don't think that has been the case in practice (it may be somewhat
better in some situations, but not really in others).

These days use of macvtap for guest connections is more the exception
than the rule.
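(For reference, a macvtap connection is what libvirt calls an interface
of type 'direct' - a minimal sketch, reusing your enp1s0 as the source
device:

<interface type='direct'>
  <source dev='enp1s0' mode='bridge'/>
  <model type='virtio'/>
</interface>

Note that mode='bridge' here is macvtap's own bridge mode, not a Linux
host bridge.)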
> and the fact
> that virt-manager uses macvtap by default and needs to be massaged into
> allowing a classic brctl bridge.
On my host that has a bridge named "br0", *that* is what is offered for
connection of a new guest interface by default, not any macvtap
interface. And on another host that has no br0 device, libvirt's own
"default" network is the default selection for new network devices,
followed in the list by other libvirt virtual networks. In all cases,
the selections for macvtap connections to the host's physical ethernets
are included all the way at the bottom of the list, below both libvirt
virtual networks *and* host bridges.
> Greetings
> Marc
>
> [1] The transfer rate of a tunneled IPv6 link with a dedicated VM
> handling the tunnel and a dedicated VM handling firewalling with brctl
> bridges (ingress packet - hypervisor - firewall VM - hypervisor - tunnel
> VM - hypervisor - firewall VM - hypervisor - egress packet) maxes out at
> about 15 Mbit/s on the APU device being used,
1) I guess this APU is something other than x86? 15 Mbit/s is
*glacially* slow, regardless of what's used for the connection.

2) Have you tried the same setup with macvtap (since you can't get vlans
working, maybe just try an apples-to-apples comparison of traffic with
no vlans on both setups) and seen markedly better performance?
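(Something like iperf3 between the guest and an external host would give
comparable numbers for the two setups - a sketch, assuming iperf3 is
installed on both ends and 192.0.2.1 is the external endpoint:

# on the external host
iperf3 -s

# in the guest, once with the bridge config, once with macvtap
iperf3 -c 192.0.2.1 -t 30

Then compare the reported throughput of the two runs.)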
3) You should be able to get some amount of improvement with host
bridges if you define a libvirt network pointing at the bridge you're
using, with macTableManager='libvirt' in the definition. For example, if
you have a bridge in your host network config called "br0", define this
network:
<network>
  <name>nolearn</name>
  <bridge name='br0' macTableManager='libvirt'/>
  <forward mode='bridge'/>
</network>
then use this for your guest interfaces:
<interface type='network'>
  <source network='nolearn'/>
  ...
</interface>
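(Assuming the network XML above is saved in a file called nolearn.xml -
the filename is arbitrary - it can be loaded and activated with:

virsh net-define nolearn.xml
virsh net-start nolearn
virsh net-autostart nolearn

so that it is also started automatically on each host boot.)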
Setting macTableManager='libvirt' causes libvirt to disable learning on
the bridge ports and to manually update the FDB (forwarding database) of
the bridge with the MAC addresses of the guest interfaces. If learning
is disabled on all ports except one (that "one" being the physical
ethernet of the host that is attached to the bridge), then the kernel
also disables promiscuous mode on the physical ethernet. The combination
of disabling learning and disabling promiscuous mode on the physdev
should have a noticeable impact on performance (although I'm not certain
if this panned out as nicely in practice as it did in the minds of the
kernel network developers either :-)
(Note that if you play around with this, you'll need to understand that
only traffic sent to the broadcast MAC, or to the specific MAC of the
guest as specified in its libvirt config, will actually make it to the
guest - no playing tricks by modifying the MAC address only inside the
guest!)
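(To verify that this took effect, iproute2 can show the relevant state -
a sketch; exact output varies with kernel and iproute2 versions:

bridge -d link show       # guest ports should report "learning off"
bridge fdb show br br0    # the static FDB entries added by libvirt
ip -d link show enp1s0    # look for the "promiscuity" counter

Promiscuity on enp1s0 should drop to 0 once learning is disabled on all
of the other ports.)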
> with negligible load on the
> two VMs, and the hypervisor spending a non-negligible amount of its
> time inside the kernel, which I interpret as the context switches
> killing the machine.