On 10/06/2012 10:34 AM, Gene Czarcinski wrote:
As I have mentioned in other messages, I am interested in having full
support for IPv6 in libvirt. To me this includes having dhcp6 for
IPv6 address assignment and using RA (radvd) to establish the default
route. This is what I am using on my real LANs.
Before starting in on adding dhcp6 support to libvirt, I wanted to see
just how things work with the current software. First of all, it appears
that, when nat or routed forwarding is specified for IPv4, the IPv6 side
is simply routed.
At the time I added IPv6 support, there was no NAT for IPv6, and even
now I don't know if it's clear that it will be accepted (truthfully I
haven't been following it). So the choice was to either not allow IPv6
at all on NATed networks, or to just route the IPv6 in these cases. We
chose the latter.
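For reference, the sort of network definition I have in mind here is
something like the following (the network name, bridge name, and IPv4
address below are made up; the IPv6 prefix is just one of the ones from
this thread). With a definition like this, the IPv4 side is NATed while
the IPv6 subnet is simply routed:

  <network>
    <name>net17-6</name>
    <forward mode='nat'/>
    <bridge name='virbr6'/>
    <ip address='192.168.117.1' netmask='255.255.255.0'/>
    <ip family='ipv6' address='fd00:face:17:6::1' prefix='64'/>
  </network>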
If it is an isolated/private network, then it can only work with
other guests on that network. The iptables and ip6tables settings
corresponded and were as expected. On the virtualization host, both
IPv4 and IPv6 forwarding are enabled.
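As a quick sanity check, the forwarding status can be confirmed on the
host with:

  sysctl net.ipv4.ip_forward            # should print "... = 1"
  sysctl net.ipv6.conf.all.forwarding   # should print "... = 1"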
While I can easily do stuff like ping6 and ssh -6 from virtual guests
to the virtualization host, I have been unable to do anything with
external hosts ... unless I add a static route for the virtual IPv6
network on the target host back to the virtualization host.
... or on the machine acting as the default router. Or if you advertise a
route to the virtual network's subnet with a routing protocol.
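For example, assuming the virtual network is fd00:face:17:6::/64 and the
virtualization host's address on the physical LAN is fd00:dead:beef:17::5
(that host address is just an example), the static route on the target
host or the default router would look something like:

  ip -6 route add fd00:face:17:6::/64 via fd00:dead:beef:17::5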
This is the only way I have gotten anything to work. To complicate
things, it seems that "everything" wants the IPv6 network to have
prefix=64 or things do not work correctly.
Not totally. The one thing I'm aware of that requires a prefix=64
network is that IPv6 autoconf *by definition* will only work on networks
with a 64 bit prefix. I'm not really sure why this is so (I might have
read the reason a long time ago, but if so I've long forgotten what it
was), but it is, and there's nothing to be done about it :-/
The real systems use fd00:dead:beef:17::/64 for their network. The
virtual networks all use fd00:face:17:xx::/64 for their networks.
The network traffic on the virtualization host is forwarded to the
target host ... I can see the packets with wireshark on the target host.
On the target host I tried specifying a static route for network
fd00:face:17::/48 ... well, that really screwed things up, resulting in
some "redirects" from the virtualization host saying that it had been
sent a malformed packet ... it took a reboot to clean things up.
OK, so leave the fd00:face:17:6::/64 static route on the target host
but subnet this network on the virtualization host using networks
like fd00:face:17:6:8::/80 and fd00:face:17:6:9::/80. This works if I
manually configure IPv6 on the virtual guest. Since radvd is "upset"
by a non-prefix=64 network, I was not surprised when the guest's
automatic IPv6 address/network was not configured.
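By "manually configure" I mean something along these lines on the guest
(the specific addresses are just for illustration):

  ip -6 addr add fd00:face:17:6:8::10/80 dev eth0
  ip -6 route add default via fd00:face:17:6:8::1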
OK, what am I missing? What don't I understand?
It really is unfortunate that autoconf only works for prefix=64, as I
can easily imagine a case where the entire address space available to
the admin is a single prefix=64, so they would be forced to use
prefix=(64+n) for each virtual network (and autoconf then wouldn't work
on any of them).
If IPv6 is going to be useful in virtualization, then there must be
some "easy" way to have other systems understand that the
virtualization host is acting as a router for the virtual IPv6
networks it runs.
That's what routing protocols do, and configuration of a routing
protocol daemon to advertise out to the physical network is outside the
scope of libvirt. This is nothing unique to IPv6 - people using IPv4
networks in <forward mode='route'/> mode have the same problem all the
time. It's not even unique to virtualization - you would have the same
problem if you created a physical subnet behind one of your hosts.
While being able to go between the virtualization hosts and the
virtual guests is very useful, I do not consider this sufficient.
I have googled and found some stuff, and read more RFCs than I wanted
to, but I cannot find anything that addresses this issue.
IIRC, I did find something in a libvirt document indicating that
"routed" will be used for some kind of subnetworking.
Sigh. That one comment has done more to make me feel old than anything
else in a while :-/ Some history - The last time I needed to run a
routing protocol was > 10 years ago, and back then I was running NetBSD
and had just spent a bunch of time working on a router product based on
BSD4.3; at that time, the standard program to use for RIP (the "routing
information protocol") was called "routed" (i.e. the "route daemon").
There was another project called "gated" (gateway daemon) that
implemented OSPF, EGP, and BGP, and that was about it.
Cut to today (well, Sunday anyway - I got a bit distracted by work and
didn't complete this reply in a timely fashion) and me seeing your
comment about "routed" in a manner that implied "what is this?"
I first naively ran "yum provides */routed" to see which package
contained routed ("Surely it must still be the go-to daemon for routing
information..."), and am told that no packages have it. So I search
around and find no mention of it on Linux, then finally realize that
it's under the BSD license, and there is no exact GPLed equivalent as
there is for most standard system utilities. More searching and I see
that the gated project is apparently completely defunct, that another
project called "zebra" had taken up those reins, and it too is now
defunct, but a fork of zebra called quagga is now around, and has a
package in Fedora.
So, the result of this trip: what seems to be the current in-use routing
protocol daemon on Linux systems is 2 generations removed from what I
last used.
Now GET OFF MY LAWN!!!!!! :-)
Anyway, I guess try quagga (unless someone else more in the know has a
better suggestion). You'll need to figure out what routing protocol is
in use on your network (if any) and go along with that.
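If it turns out to be RIPng, a minimal quagga sketch would be something
like the following in /etc/quagga/ripngd.conf (untested on my part, and
the interface name is just an example):

  router ripng
   network eth0
   redistribute connected

That would speak RIPng on eth0 and advertise the host's directly
connected subnets (including the virbrN networks) to the rest of the LAN.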
Or you can just add a static route for the virtual network's subnet on
the default router of the network your host's physical adapter is
connected to.
Does libvirt need an IPv6 "NAT" to make this work?
One really nice thing about IPv6 is the elimination of NAT. One of the
biggest headaches about IPv6 for some network admins is the lack of NAT.
:-/ I personally would love to see NAT never exist in IPv6, but it
appears that won't happen. If the proposed NAT for IPv6 has "landed",
then we could certainly entertain patches to support it for <forward
mode='nat'/>, but only if it is standards track and stable. (I seriously
don't know how far along it is)
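For what it's worth, if/when it does land, I'd expect the host-side
plumbing to be roughly an ip6tables rule along these lines (this assumes
a kernel and ip6tables build that actually provide an IPv6 "nat" table,
which I haven't verified exists yet):

  ip6tables -t nat -A POSTROUTING -s fd00:face:17:6::/64 ! -d fd00:face:17:6::/64 -j MASQUERADE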
Comments? Suggestions?
In general, you may be helped by running a routing protocol daemon on
your host (as long as your routing infrastructure is also running a
routing protocol).