On Wed, Mar 13, 2019 at 11:40:51PM +0100, Lars Lindstrom wrote:
> On 3/13/19 2:26 PM, Martin Kletzander wrote:
>> IIUC, you are using the tap0 device, but it is not plugged anywhere. By
>> that I mean there is one end that you created and passed through into the
>> VM, but there is no other end of that. I can think of some complicated
>> ways to do what you are trying to, but hopefully the above explanation
>> will move you forward and you'll figure out something better than what I'm
>> thinking about right now. What usually helps me is to think of a way this
>> would be done with hardware and replicate that, as most of the technology
>> is modelled after HW anyway. Or someone else will have a better idea.
>>
>> Before sending it I just thought: wouldn't it be possible to just have a
>> veth pair instead of the tap device? One end would go to the VM and the
>> other one would be used for the containers' macvtaps...
> What I am trying to achieve is the most performant way to connect a set of
> containers to the KVM guest while having proper isolation. As the Linux
> bridge does not support port isolation, I started with 'bridge' networking
> and MACVLAN, using a VLAN for each container, but this comes at the cost of
> bridging and the VLAN trunk on the KVM side. The simplest (and hopefully
> therefore most performant) solution I could come up with was using a
> 'virtio' NIC in the KVM guest, with a 'direct' connection in 'vepa' mode to
> 'some other end' on the host, a TAP in its simplest form, which Docker then
> uses for its MACVLAN network.
I hope I'm not misunderstanding something, but the way I understand it is the
following:

A TAP device is one device, a virtual ethernet card, an emulated network
card. Usually (VPNs and such) some user process binds to the device; it can
read the data that would normally go out on the wire and can write data that
will look like it came in on the wire from the outside. This is "one end of
the device"; the software is there instead of a wire connected to it. The
"other end" shows up in the OS as a network device that it can use for
getting an IP address, sending packets, etc.
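Just to illustrate (untested, and the device name is made up): with iproute2
you can create a persistent tap device, but until some process actually opens
it, the "wire side" is left dangling:

  # creates the device; its "OS end" shows up as a normal interface
  ip tuntap add dev tap0 mode tap
  ip link set tap0 up
  # nothing has attached to tap0 via /dev/net/tun yet, so the link stays
  # NO-CARRIER and anything sent to it is simply dropped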
I think we both understand that, I just want to make sure we are on the same
page and also to have something to reference with my highly technical term (like
"other end") =)
Then, when you create the VM and give it VEPA access to the tap device, it
essentially just "duplicates" that device (the part that's in the OS), and
whatever the VM tries to send will be sent out over the wire.
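I assume that corresponds to something like this in your domain XML (the
device name is just an example):

  <interface type='direct'>
    <source dev='tap0' mode='vepa'/>
    <model type='virtio'/>
  </interface>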
When you create the containers with MACVLAN (macvtap) it does a similar
thing: it creates a virtual interface on top of the tap device, and whatever
the container tries to send ends up being sent over the wire.
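On the Docker side that would be, I guess, something along the lines of (the
subnet and the network name are made up):

  docker network create -d macvlan --subnet=192.168.1.0/24 \
      -o parent=tap0 containers-net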
That is why I think this cannot work for you (unless you have a program that
binds to the tap device and hairpins all the packets, but that's not what you
want I think).
> I am not quite sure if I understood you correctly with the 'other end'.
Yeah, sorry, I was not sure what this is actually called. I found out,
however, that the "other end" is whatever _binds_ to the device.
> With the given configuration I would expect that one end of the TAP device
> is connected to the NIC in the KVM guest (and it actually is, it has an IP
> address assigned in the KVM guest and is serving the web configurator) and
> the other end is connected to the MACVLAN network of Docker.

From how I understand it, you are plugging the same end into all of them.

> If this is not how TAP works, how do I then provide a 'simple virtual NIC'
> which has one end in the KVM guest itself and the other on the host
> (without using bridging or alike)?

This is what a veth pair does, IIUC. I imagine it as a patch cable.

> I always thought that when using 'bridge' networking libvirt does exactly
> that: it creates a TAP device on the host and assigns it to a bridge.
Yes, but it is slightly more complicated. There is one device connected to
the bridge which stays on the host, so that the host can communicate with it.
However, when you start a VM, a new TAP device is created, plugged into the
bridge and passed to the VM. That is done for each VM (talking about the
default network).
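In iproute2 terms it is roughly this (simplified; 'virbr0' and 'vnet0' are
the names the default network typically uses):

  # the bridge itself, created once when the network starts
  ip link add virbr0 type bridge
  # a per-VM tap, created when the VM starts and handed to QEMU
  ip tuntap add dev vnet0 mode tap
  ip link set vnet0 master virbr0
  ip link set vnet0 up

(libvirt creates the tap through /dev/net/tun directly rather than calling
these commands, but the effect is the same)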
> According to the man page I have to specify both interfaces when creating
> the veth device, but how would I do that on the host with one end being in
> the KVM guest?
I cannot try this right now, but I would try something like this:

  ip link add dev veth-vm type veth peer name veth-cont

and then put veth-vm in the VM (type='direct' would work, but I can imagine
type='ethernet' might be even faster) and start the containers with macvtap
using veth-cont.
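Untested, but filling in the rest of the picture (the subnet and the network
name are, again, made up): first bring both ends up,

  ip link set veth-vm up
  ip link set veth-cont up

then the VM side could look like

  <interface type='direct'>
    <source dev='veth-vm' mode='vepa'/>
    <model type='virtio'/>
  </interface>

and the Docker side like

  docker network create -d macvlan --subnet=192.168.1.0/24 \
      -o parent=veth-cont containers-net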
I am in no position to even estimate what the performance would be, let alone
compare it to anything. I would, however, imagine this could be a pretty
low-overhead solution.
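If you want hard numbers, a quick iperf3 run between the VM and one of the
containers would tell you more than my guessing (the address is an example):

  # in the VM
  iperf3 -s
  # in one of the containers
  iperf3 -c 192.168.1.10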
Let me know if that works or if I made any sense, I'd love to hear that.
Have a nice day,
Martin