
Hello,

in a common scenario where there aren't enough public IPv4 addresses for all domains, I have elaborated this workaround:

- the host operates a sixxs.net IPv6 tunnel with aiccu.
- the virbr0 interface is manually configured with an IPv6 address within a /64 subnet delegated by sixxs.net (I do this from /etc/rc.local for lack of a better place).
- radvd runs on the host to autoconfigure IPv6 for the guests on virbr0 and to advertise the host as a gateway.

With this setup, all machines are globally addressable from the IPv6 internet, which is still quite useful for backstage services such as a build farm.

In order to automate this setup, libvirt should support configuring an IPv6 address on bridged interfaces, and possibly multiple addresses for dual-stack setups. Automatically running radvd would make a nice goodie.

--
 // Bernie Innocenti - http://codewiz.org/
 \X/  Sugar Labs      - http://sugarlabs.org/
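[For reference, the manual step of assigning the delegated address to virbr0 from /etc/rc.local might look like the fragment below; the 2001:db8::/64 prefix is a documentation placeholder, not the actual sixxs.net delegation.]

    # /etc/rc.local fragment (sketch): give virbr0 an address from the
    # delegated /64. 2001:db8:0:1::1/64 is a placeholder prefix.
    ip -6 addr add 2001:db8:0:1::1/64 dev virbr0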

On Thu, Jun 04, 2009 at 07:26:05PM +0200, Bernie Innocenti wrote:
Hello,
in a common scenario where there aren't enough public IPv4 addresses for all domains, I have elaborated this workaround:
- the host operates a sixxs.net IPv6 tunnel with aiccu.
- the virbr0 interface is manually configured with an IPv6 address within a /64 subnet delegated by sixxs.net (I do this from /etc/rc.local for lack of a better place).
- radvd runs on the host to autoconfigure IPv6 for the guests on virbr0 and advertise the host as a gateway
With this setup, all machines are globally addressable from the IPv6 internet, which is still quite useful for backstage services such as a build farm.
In order to automate this setup, libvirt should support configuring an IPv6 address on bridged interfaces, and possibly multiple addresses for dual-stack setups. Automatically running radvd would make a nice goodie.
I'm not sure that we should automatically run radvd, because this has potential implications for the host as a whole. It is hard to restrict scope to just the virbr0 interface, as we do with IPv4 using NAT.

We should definitely allow multiple <ip> elements, allow both IPv4 and IPv6, and configure interfaces accordingly. Annoyingly, we used the attribute 'netmask'. We really should have used 'prefix', since the netmask as a concept is deprecated in the IPv6 world. I'd suggest we allow continued use of 'netmask' for IPv4 addresses, but recommend use of 'prefix' in the future. If they give a netmask, then automatically generate a prefix attribute, and vice versa.

    <ip address="192.168.122.1" netmask="255.255.255.0">
      <dhcp>
        <range start="192.168.122.2" end="192.168.122.254"/>
      </dhcp>
    </ip>

    <ip address="2001:200:0:8002:203:47ff:fea5:3083" prefix="64"/>

In theory we should also allow <dhcp> for IPv6, but I'm not sure that the dnsmasq daemon supports offering DHCPv6 addresses.

To do this properly we'll need to:

- extend the parser to allow multiple addresses
- change the string -> address code to use getaddrinfo, not inet_aton
- change the interface bring-up code to add multiple addresses, IPv4 & IPv6
- add support for ip6tables
- add rules for ip6tables as appropriate for the <forward/> rule

Daniel

--
|: Red Hat, Engineering, London -o- http://people.redhat.com/berrange/ :|
|: http://libvirt.org -o- http://virt-manager.org -o- http://ovirt.org :|
|: http://autobuild.org -o- http://search.cpan.org/~danberr/ :|
|: GnuPG: 7D3B9505 -o- F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|
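[The netmask <-> prefix conversion described above is mechanical; a minimal sketch in Python, not libvirt's actual C implementation (which would work on parsed addresses via getaddrinfo):]

```python
import socket
import struct

def netmask_to_prefix(netmask):
    """Count the one-bits of a dotted-quad IPv4 netmask, e.g. 255.255.255.0 -> 24."""
    bits, = struct.unpack("!I", socket.inet_aton(netmask))
    return bin(bits).count("1")

def prefix_to_netmask(prefix):
    """Build the dotted-quad IPv4 netmask for a prefix length, e.g. 24 -> 255.255.255.0."""
    bits = (0xffffffff << (32 - prefix)) & 0xffffffff
    return socket.inet_ntoa(struct.pack("!I", bits))

print(netmask_to_prefix("255.255.255.0"))  # 24
print(prefix_to_netmask(16))               # 255.255.0.0
```

(This assumes a well-formed netmask with contiguous one-bits, which is the only kind that maps to a prefix.)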

On 06/05/09 12:43, Daniel P. Berrange wrote:
I'm not sure that we should automatically run radvd, because this has potential implications for the host as a whole. It is hard to restrict scope to just the virbr0 interface, as we do with IPv4 using NAT.
We could run a separate instance of radvd using a custom configuration file (-C option), limiting it to the bridge interface, like so:

    interface virbr0
    {
        IgnoreIfMissing on;
        AdvSendAdvert on;
        MinRtrAdvInterval 30;
        MaxRtrAdvInterval 100;
        AdvDefaultPreference low;
        AdvHomeAgentFlag off;

        # bernie: we actually have a /48, thus 65536 /64 subnets
        prefix 2001:4978:243:1::0/64
        {
            AdvOnLink on;
            AdvAutonomous on;
        };
    };

--
 // Bernie Innocenti - http://codewiz.org/
 \X/  Sugar Labs      - http://sugarlabs.org/
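[Such a dedicated instance could be launched with a command along these lines; the config and pid file paths are examples, not anything libvirt actually uses:]

    radvd -C /etc/radvd-virbr0.conf -p /var/run/radvd-virbr0.pid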

On Fri, Jun 05, 2009 at 02:29:24PM +0200, Bernie Innocenti wrote:
On 06/05/09 12:43, Daniel P. Berrange wrote:
I'm not sure that we should automatically run radvd, because this has potential implications for the host as a whole. It is hard to restrict scope to just the virbr0 interface, as we do with IPv4 using NAT.
We could run a separate instance of radvd using a custom configuration file (-C option), limiting it to the bridge interface, like so:
Oh sure, what I meant was that simply running radvd and setting an IPv6 address on virbr0 isn't sufficient in all cases. You need to turn on IPv6 forwarding, but you can't do this if the primary LAN interfaces are using IPv6 auto-config. So in most 'out of the box' scenarios you won't be able to get routable IPv6 configs working on virbr0. It'll only work if the admin has configured their host with non-autoconf addresses and enabled forwarding.
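[Concretely, the sysctls involved look like the fragment below; 'eth0' is an example uplink name. The accept_ra = 2 setting, which keeps autoconf working on an interface even with forwarding enabled, only exists on newer kernels, which is why the conflict described above holds out of the box:]

    # /etc/sysctl.conf fragment (sketch)
    net.ipv6.conf.all.forwarding = 1
    # With forwarding on, the kernel ignores router advertisements by default;
    # accept_ra = 2 (newer kernels) re-enables autoconf on the uplink.
    net.ipv6.conf.eth0.accept_ra = 2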
    interface virbr0
    {
        IgnoreIfMissing on;
        AdvSendAdvert on;
        MinRtrAdvInterval 30;
        MaxRtrAdvInterval 100;
        AdvDefaultPreference low;
        AdvHomeAgentFlag off;

        # bernie: we actually have a /48, thus 65536 /64 subnets
        prefix 2001:4978:243:1::0/64
        {
            AdvOnLink on;
            AdvAutonomous on;
        };
    };
Regards, Daniel -- |: Red Hat, Engineering, London -o- http://people.redhat.com/berrange/ :| |: http://libvirt.org -o- http://virt-manager.org -o- http://ovirt.org :| |: http://autobuild.org -o- http://search.cpan.org/~danberr/ :| |: GnuPG: 7D3B9505 -o- F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 :|
participants (2)
- Bernie Innocenti
- Daniel P. Berrange