On Wed, Aug 15, 2007 at 10:35:28AM -0600, Bruce Evans wrote:
> Hello,
>
> I am now starting to investigate the networking driver callback
> functions in libvirt.
>
>   struct _virNetworkDriver {
>       virDrvOpen open;
>       virDrvClose close;
>       virDrvNumOfNetworks numOfNetworks;
>       virDrvListNetworks listNetworks;
>       virDrvNumOfDefinedNetworks numOfDefinedNetworks;
>       virDrvListDefinedNetworks listDefinedNetworks;
>       virDrvNetworkLookupByUUID networkLookupByUUID;
>       virDrvNetworkLookupByName networkLookupByName;
>       virDrvNetworkCreateXML networkCreateXML;
>       virDrvNetworkDefineXML networkDefineXML;
>       virDrvNetworkUndefine networkUndefine;
>       virDrvNetworkCreate networkCreate;
>       virDrvNetworkDestroy networkDestroy;
>       virDrvNetworkDumpXML networkDumpXML;
>       virDrvNetworkGetBridgeName networkGetBridgeName;
>       virDrvNetworkGetAutostart networkGetAutostart;
>       virDrvNetworkSetAutostart networkSetAutostart;
>   };
>
> My question is from the point of view of the Virtual Manager GUI.
> From the controls available in the GUI, what actions are intended
> to be done via the network callback functions?
Not sure if you've come across the screenshots ...
http://virt-manager.org/screenshots/networking.html
In the UI we refer to two concepts:

  'Virtual network'        - the NAT based networking using the APIs above
  'Shared physical device' - bridging a guest directly onto the LAN
> Are the actions only client related? Create a vnet, vdisk, etc?
> Or are they also intended for server actions - create a switch service,
> disk service, etc?
Not entirely sure what you mean by client vs server related. The APIs are
basically for defining a virtual network on a host (aka Dom0). You can then
connect a guest to the virtual network by using guest XML that looks like
  <interface type="network">
    <source network="default"/>
  </interface>
> There is no example in the test.c program, so I don't have any example
> to look at.
>
> Also, is there any documentation describing the network callback functions?
The original design/concept doc is
http://www.gnome.org/~markmc/virtual-networking.html
The high level concept for virtual networks is that they provide a means
to connect guests to the network without having to bridge them onto the
LAN. This is primarily to handle the laptop use case:

 - Often disconnected from both wired & wireless networks - guests should
   still be able to communicate with the host & with each other regardless
 - The active network device can change on-the-fly - eg NetworkManager
   switching from wired to wireless. Whichever device is active, the guests
   should be able to reach the internet via that device.
 - The active device is not necessarily 802.11 based - eg ppp, or a VPN -
   so we cannot bridge guests directly to the physical device.
So to satisfy these requirements a virtual network is defined to consist
of:
- An isolated bridge device with no physical devices attached
- The bridge device is configured with an IP address, at this time usually
  from one of the IPv4 private ranges
- Optionally a daemon on the host provides DHCP addresses from
  one or more IP ranges
- Optionally the network can be configured to allow traffic to the outside
  world using NAT; this may be restricted to traffic via a specific
  physical device
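As a concrete illustration, a network definition matching that description
looks roughly like this (element names follow libvirt's network XML format;
the addresses, bridge name and forward device here are only illustrative):

```xml
<network>
  <name>default</name>
  <!-- isolated bridge the guests attach to; no physical NICs enslaved -->
  <bridge name="virbr0"/>
  <!-- optionally restrict NAT'd traffic to one physical device -->
  <forward dev="eth0"/>
  <!-- bridge address from an IPv4 private range, plus a DHCP range -->
  <ip address="192.168.122.1" netmask="255.255.255.0">
    <dhcp>
      <range start="192.168.122.2" end="192.168.122.254"/>
    </dhcp>
  </ip>
</network>
```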
The virtual network is not Xen or QEMU specific - you can mix & match the
guests you connect to a virtual network on a single host, so allowing a
Xen guest to talk to a QEMU guest.
So, in terms of connecting a Xen guest to the virtual network, this works
in just the same way as regular Xen bridging - we still use vif-bridge for
this task. For QEMU guests, we use tun/tap devices.
There is no connectivity from off-host into the guests - this is obviously
a consequence of NAT. We're basically providing a managed equivalent of Xen's
network-nat/vif-nat scripts. We may consider allowing a proxy_arp routed
configuration for virtual networks instead of NAT in the future, but that's
TBD (it would be closer to Xen's network-route/vif-route scripts).
For DHCP + DNS services we provide the dnsmasq daemon which binds to the
bridge device associated with the virtual network. NB there can be many
virtual networks each with their own bridge device & DHCP/DNS daemon on
each host.
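For the record, the dnsmasq instance for a network is spawned with arguments
along these lines (a sketch only - the exact flags are built by the network
driver, and the addresses/paths here are just illustrative values for the
'default' network):

```shell
# One dnsmasq per virtual network, bound to that network's bridge
# address only so multiple networks don't conflict:
dnsmasq --strict-order \
        --bind-interfaces \
        --listen-address 192.168.122.1 \
        --except-interface lo \
        --dhcp-range 192.168.122.2,192.168.122.254 \
        --conf-file= \
        --pid-file=/var/run/libvirt_dnsmasq_default.pid
```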
On Linux we use 'brctl' & a couple of Linux ioctl()'s for setting up the
bridges. This obviously needs an alternate impl for Solaris. The code is
all well isolated in src/bridge.c
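What bridge.c does via brctl & the ioctl()s is roughly equivalent to the
following commands (a sketch of the net effect, not the actual code path;
device name and address are illustrative):

```shell
# Create an isolated bridge with no physical device enslaved,
# give it an address from a private range, and bring it up:
brctl addbr virbr0
ifconfig virbr0 192.168.122.1 netmask 255.255.255.0 up
```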
On Linux we use iptables for setting up the NAT rules, so again an alternate
impl is needed for Solaris. The code is in src/iptables.c (a bad name really,
I guess).
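The rules iptables.c sets up amount to something like the following (an
illustrative subset - the real rule set is built per-network, and the
addresses/device names are just examples):

```shell
# Masquerade traffic leaving the virtual network for the outside world
iptables -t nat -A POSTROUTING -s 192.168.122.0/24 -j MASQUERADE
# Allow guests' outbound traffic to be forwarded off the bridge...
iptables -A FORWARD -s 192.168.122.0/24 -i virbr0 -j ACCEPT
# ...and replies to established connections back in
iptables -A FORWARD -d 192.168.122.0/24 -o virbr0 \
         -m state --state ESTABLISHED,RELATED -j ACCEPT
```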
Finally there is defined to be one virtual network 'out of the box' called
'default'. This is configured with 192.168.122.0/255.255.255.0, and does
NAT to the outside world. This ensures that a base install will always have
a means to connect a guest to the network.
So the two core pieces I think you'd need to look at are the bridge.c file
and the iptables.c file, providing impls for Solaris. Most of the rest of
the networking code shouldn't have any serious portability problems.
Regards,
Dan.
--
|=- Red Hat, Engineering, Emerging Technologies, Boston. +1 978 392 2496 -=|
|=- Perl modules: http://search.cpan.org/~danberr/ -=|
|=- Projects: http://freshmeat.net/~danielpb/ -=|
|=- GnuPG: 7D3B9505 F3C9 553F A1DA 4AC2 5648 23C1 B3DF F742 7D3B 9505 -=|