I have to say that I wouldn't do the networking that way - in fact, in the
clusters I manage, we haven't done the networking that way :-). Rather
than layer 3 routing between VMs, we've chosen to use layer 2 virtual
switching (yes, using openvswitch). We have the luxury of multiple 10G
NICs between our hosts, so we've separated out the management network from
the guest network, simply to ensure that we retain administrative access to
the hosts via ssh. If you want to live a little more dangerously, you
could use VLAN or VXLAN on one NIC - or you could spend a few dollars on an
extra network card on each host for the peace of mind!
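If you do go the single-NIC route with VLANs, one simple pattern is a VLAN sub-interface for guest traffic while management stays on the untagged NIC. A rough sketch, assuming VLAN 100 for guests and a switch port trunked accordingly (names and IDs are only placeholders):

    # Carry guest traffic tagged as VLAN 100 over the shared NIC;
    # management traffic stays on untagged eth0.
    ip link add link eth0 name eth0.100 type vlan id 100
    ip link set eth0.100 up

eth0.100 then takes the place of a dedicated guest NIC as the uplink port on the guest bridge described below.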
For the project that's been live for two years: we presently run four hosts
on the lab's production network (another two on its acceptance-test
network, and another one as a "kickaround" host for playing about with
configs). Guest VMs on all four production hosts share 192.168.59.0/24
(why "59" is a story for another day), on an OVS virtual switch on each
host named br-guest, with the guest-specific NIC also set as a port on the
virtual switch. Guest traffic is therefore sent transparently between the
hosts where needed, and we can live-migrate a guest from one host to
another with no need to change the guest's IP address. Because we share a
common guest network and IP range between all hosts, it's trivial to add
(or remove) hosts - no host needs to know anything about routing to another
host, and in fact only our management layer cares how many hosts we have.
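For reference, the per-host plumbing boils down to something like this - a rough sketch, where eth1 stands in for whatever the guest-dedicated NIC is actually called:

    # Create the guest bridge and attach the dedicated guest NIC as its uplink
    ovs-vsctl add-br br-guest
    ovs-vsctl add-port br-guest eth1

    # Tell libvirt about the OVS bridge so guests can attach to it
    cat > br-guest.xml <<'EOF'
    <network>
      <name>br-guest</name>
      <forward mode='bridge'/>
      <bridge name='br-guest'/>
      <virtualport type='openvswitch'/>
    </network>
    EOF
    virsh net-define br-guest.xml
    virsh net-start br-guest
    virsh net-autostart br-guest

Guest domains then use <interface type='network'> with <source network='br-guest'/>, and libvirt plugs their tap devices into the switch for you.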
We happen to have "controller" nodes that run redundant DHCP servers with
non-overlapping scopes, but the exact location is not a requirement of this
setup. We could equally well set up a DHCP service on the guest network on
each host, allowing allocation of e.g. 192.168.59.1 to .100 on one host,
.101 to .200 on another host. Guests will typically receive offers from
each DHCP server and can choose, which is fine as they're all on the same
network. This provides redundancy in case one DHCP server fills up or fails, which your routed-network approach wouldn't offer without some careful DHCP relay or forwarding work.
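To make the split-scope idea concrete - purely an illustration, assuming dnsmasq as the DHCP server and treating the ranges and lease time as placeholders:

    # Host1
    cat > /etc/dnsmasq.d/guest-dhcp.conf <<'EOF'
    interface=br-guest
    dhcp-range=192.168.59.1,192.168.59.100,255.255.255.0,12h
    EOF

    # Host2
    cat > /etc/dnsmasq.d/guest-dhcp.conf <<'EOF'
    interface=br-guest
    dhcp-range=192.168.59.101,192.168.59.200,255.255.255.0,12h
    EOF

As long as the scopes don't overlap, both servers can answer on the shared segment and a guest doesn't care which one its lease comes from.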
We happen to base our hosts on CentOS 7, but I manage other Debian-derived
systems and can probably remember enough about their network setup to help
with Ubuntu. Certainly I can help with OVS weirdnesses; it took some time
to get my head round exactly how it works. That said, I've never set up a
KVM host on Debian.
Good luck; happy to provide further pointers if useful.
Cheers,
- Peter
On 30 May 2018 at 15:32, Cobin Bluth <cbluth(a)gmail.com> wrote:
Hello Libvirt Users,
I would like to set up a two-node bare-metal cluster, and I need guidance on
the network configuration. I have attached a small diagram; the same
diagram can be seen here:
https://i.imgur.com/SOk6a6G.png
I would like to configure the following details:
- Each node has a DHCP-enabled guest network where VMs will run (e.g.,
*192.168.1.0/24* for Host1 and *192.168.2.0/24* for Host2)
- Any guest in Host1 should be able to ping guests in Host2, and vice
versa.
- All guests have routes to reach the open internet (so that '*yum update*'
will work "out-of-the-box")
- Each node will be able to operate fully if the other physical node
fails. (no central DHCP server, etc)
- I would like to *add more physical nodes later* when I need the
resources.
This is what I have done so far:
- Installed the latest Ubuntu 18.04, with the latest versions of libvirt and
supporting software from Ubuntu's apt repo.
- Each node can reach the other via its own eth0.
- Each node has a working vxlan0, which can ping the other via its vxlan0,
so it looks like the vxlan config is working. (I used *ip link add vxlan0
type vxlan...*)
- Configured route on Host1 like so: *ip route add 192.168.2.0/24 via 172.20.0.1*
- Configured route on Host2 also: *ip route add 192.168.1.0/24 via 172.20.0.2*
- All guests on Host1 (and Host2) can ping eth0 and vxlan0 on the other
host, and vice versa, yay.
- Guests on Host1 *cannot* ping guests on Host2, I suspect because of the
default NAT config of the libvirt network.
So, at this point I started to search for tutorials or more
information/documentation, but I am a little overwhelmed by the sheer
amount of information, as well as a lot of "stale" information on blogs etc.
I have learned that I can *virsh net-edit default* and then change it to
an "open" network: *<forward mode='open'/>*
After doing this, the guests cannot reach outside their own network, nor
reach the internet, so I assume that I would need to add some routes, or
something else, to get the network functioning like I want. There is also
*<forward mode='route'/>*, but I don't fully understand the scenarios where
one would need an *open* or a *route* forward mode. I have also shied away
from using openvswitch, and have opted for ifupdown2.
(I have taken most of my inspiration from this blog post:
https://joejulian.name/post/how-to-configure-linux-vxlans-with-multiple-unicast-endpoints/ )
Some questions that I have for the mailing list, any help would be greatly
appreciated:
- Is my target configuration of a KVM cluster uncommon? Do you see
drawbacks of this setup, or does it go against "typical convention"?
- Would my scenario be better suited for an "*open*" network or a "*route*" network?
- What would be the approach to complete this setup?