(Sorry about the resend, missed the list and DV the first time.)
Daniel P. Berrange wrote:
> On Wed, Oct 31, 2007 at 02:24:58PM +0100, Daniel Hokka Zakrisson wrote:
>> Daniel P. Berrange wrote:
>>> On Tue, Oct 30, 2007 at 04:28:59PM +0100, Daniel Hokka Zakrisson wrote:
>>>> This is an initial stab at adding Linux-VServer support to libvirt.
>>>> There are still a couple of things missing, like scheduler support in
>>>> the XML parsing, and proper network support.
>>> Great to see interest in adding more drivers! I've not had time to look
>>> at your patch in great detail yet, but I'll give some more detailed
>>> feedback in the next day or so. My first comment, though: why the
>>> <xid>123</xid> field in the XML? The unique integer ID for a domain
>>> should really be in the 'id' attribute, <domain id='123'>. There are a
>>> couple of other small XML format consistency issues like that to check
>>> up on.
>> Yeah, the only reason I did it with a separate element was that I really
>> don't know XPath, so I hadn't figured out how to get the id in that case.
>>
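(For anyone else as lost in XPath as I was: an attribute is addressed
with @, so the id in <domain id='123'> is reachable as
string(/domain/@id). A rough, uncompiled sketch using plain libxml2 -
the XML string and names are just examples:)

    #include <stdio.h>
    #include <string.h>
    #include <libxml/parser.h>
    #include <libxml/xpath.h>

    int main(void)
    {
        const char *xml =
            "<domain type='vserver' id='123'><name>guest</name></domain>";
        xmlDocPtr doc = xmlParseMemory(xml, strlen(xml));
        xmlXPathContextPtr ctxt = xmlXPathNewContext(doc);
        /* '@id' selects the attribute; string() coerces it to a string */
        xmlXPathObjectPtr obj =
            xmlXPathEvalExpression(BAD_CAST "string(/domain/@id)", ctxt);
        if (obj != NULL && obj->type == XPATH_STRING)
            printf("id = %s\n", (const char *)obj->stringval);
        xmlXPathFreeObject(obj);
        xmlXPathFreeContext(ctxt);
        xmlFreeDoc(doc);
        return 0;
    }
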
>>> NB, in most cases there is no need to implement network support in each
>>> driver. The libvirt networking APIs happen to be implemented in the QEMU
>>> driver codebase, but they're not really dependent on QEMU - the same
>>> functionality is shared across all drivers. If you connect to the Xen
>>> hypervisor, libvirt will auto-magically hook up the networking APIs to
>>> go via the QEMU driver. The same should work OK for VServer without you
>>> needing to do anything special.
>> Well, Linux-VServer is different from most (all?) other virtualization
>> technologies in that we do IP isolation, rather than virtualizing the
>> network stack. This means that guests are merely limited to a subset of
>> the IP addresses assigned to the host, so there's no routing or bridging
>> involved.
> Ok, well the virNetwork APIs have pretty well defined semantics:
>  - It provides a bridge device in the host with an IP from an
>    admin-defined range (typically an IPv4 private range, but not
>    restricted)
>  - The bridge is connected to the public network via NAT
>  - Guest VMs have a tap device enslaved in the bridge which is
>    connected to the emulated NIC inside the guest.
>  - The guest NIC gets assigned an IP via DHCP
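(To make those semantics concrete, here is roughly what defining such a
network through the public API looks like - the name, bridge and address
range are invented for illustration, and I haven't compiled this:)

    #include <libvirt/libvirt.h>

    void define_example_network(virConnectPtr conn)
    {
        /* A bridge with an admin-defined private range, NATed outwards,
         * guests getting their addresses from the DHCP range. */
        const char *xml =
            "<network>"
            "  <name>example</name>"
            "  <bridge name='virbr1'/>"
            "  <forward/>"
            "  <ip address='192.168.150.1' netmask='255.255.255.0'>"
            "    <dhcp>"
            "      <range start='192.168.150.2' end='192.168.150.254'/>"
            "    </dhcp>"
            "  </ip>"
            "</network>";
        virNetworkPtr net = virNetworkCreateXML(conn, xml);
        if (net != NULL)
            virNetworkFree(net);
    }
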
> Some container technologies enable each container to have private 'tap'
> style interfaces which are plumbed through to the host networking. If
> this is the case for VServer, then it would be possible to connect a
> VServer container to one or more of the libvirt virtual networks.
No, everything is shared with the host; the guest is just restricted to
a subset of the host's assigned IP addresses. We don't deal with
interfaces at all, other than bringing up the necessary addresses on the
right one when the guest is started (if requested).
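(In case that's unclear: starting a guest involves nothing more exotic
than the moral equivalent of the following. The address, device and
alias label are invented, this is uncompiled, and the actual confinement
of the guest to that address is done by the VServer kernel's network
contexts, not by anything shown here:)

    #include <string.h>
    #include <sys/ioctl.h>
    #include <sys/socket.h>
    #include <net/if.h>
    #include <netinet/in.h>
    #include <arpa/inet.h>
    #include <unistd.h>

    /* Bring up one of the host's addresses on an alias of eth0 so the
     * guest can be confined to it. */
    int bring_up_guest_address(void)
    {
        struct ifreq ifr;
        struct sockaddr_in *sin = (struct sockaddr_in *)&ifr.ifr_addr;
        int s = socket(AF_INET, SOCK_DGRAM, 0);

        if (s < 0)
            return -1;
        memset(&ifr, 0, sizeof(ifr));
        strncpy(ifr.ifr_name, "eth0:1", IFNAMSIZ - 1);
        sin->sin_family = AF_INET;
        inet_pton(AF_INET, "192.0.2.10", &sin->sin_addr);
        if (ioctl(s, SIOCSIFADDR, &ifr) < 0) {
            close(s);
            return -1;
        }
        close(s);
        return 0;
    }
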
> In any case there is no need to implement the virNetwork* APIs
> specifically within the VServer driver to do the guest <-> network
> connection. The virNetwork* APIs are merely concerned with configuring
> & managing the bridge device in the host. The drivers take care of
> adding the tap device per guest themselves as part of the
> virDomainCreate/Start APIs.
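(Side note for drivers that do want the bridged setup: that division of
labour means the driver only has to look up the bridge and enslave its
own tap device, roughly like this - uncompiled, with the tap creation
and error handling left out, and the function name my own invention:)

    #include <stdlib.h>
    #include <libvirt/libvirt.h>

    /* Driver-side start routine: find the host bridge backing a virtual
     * network; creating the tap device and enslaving it in the bridge
     * is the driver's job and is omitted here. */
    void connect_guest_nic(virConnectPtr conn, const char *netname)
    {
        virNetworkPtr net = virNetworkLookupByName(conn, netname);
        char *brname;

        if (net == NULL)
            return;
        brname = virNetworkGetBridgeName(net);
        /* ... create a tap device, add it to brname, hand it to the
         * guest ... */
        free(brname);
        virNetworkFree(net);
    }
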
Okay, so I'll just remove all of that from the patch.
> Dan.
--
Daniel Hokka Zakrisson