Hello Daniel,
I started working on bossonvz a little more than a year ago. As far as I remember, back then I wanted to base my work on top of the existing openvz libvirt driver, but I was disappointed by the state of the driver. Our goal was to have a driver that supports online migration and mount point management, and after toying with the openvz driver for a while we decided to start from scratch. Personally, I also do not like the idea of managing domains by just calling the vz tools binaries and parsing their output.
API stability is a very good question. Of course, by circumventing the vz tools and talking to the kernel directly, we had to accept the fact that the kernel API might change in the future. And it most definitely will. This is not based on any formal proclamation, but it is true that in the past year I have not had to change my code because of any incompatibility in the vzkernel API (and I have tested the code with various vz kernels ranging from 042stab068.8 to 042stab084.26). At the moment, since the vz tools are the only user of the API, the vz developers can pretty much do whatever they like with the kernel API. I believe it could be beneficial to them as well to have another user of their API.
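
To give a sense of how small the footprint outside that API is: below is a minimal sketch (my own illustration, not code taken from the bossonvz sources) of how a driver can at least check that it is running on a vz kernel at all, using the /proc/vz directory that vz kernels expose.

    #include <stdbool.h>
    #include <stdio.h>
    #include <unistd.h>

    /* Illustrative probe: vz kernels expose a /proc/vz directory,
     * mainline kernels do not.  A stateful driver can refuse to
     * register itself when this probe fails. */
    static bool
    probeVzKernel(void)
    {
        return access("/proc/vz", F_OK) == 0;
    }

    int main(void)
    {
        printf("vz kernel: %s\n", probeVzKernel() ? "yes" : "no");
        return 0;
    }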
The implementations (vz tools and bossonvz) are independent, but they share the same kernel. This means that if I start a bossonvz domain, I can use vzlist and I will see the corresponding container. The opposite does not hold, because bossonvz is a stateful driver: if I start a container with vzctl, I will not see anything in virsh list. I would, however, not be able to start a bossonvz domain with the same veid; libvirt won't let me do that because the kernel API call fails.
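
To make the stateful behaviour concrete, this is roughly what it looks like from the client side through the public libvirt API. The connection URI "bossonvz:///system" is my guess at the scheme, not something taken from the driver.

    #include <stdio.h>
    #include <stdlib.h>
    #include <libvirt/libvirt.h>

    int main(void)
    {
        /* Only domains started through the driver are listed here; a
         * container started with vzctl stays invisible because the
         * driver keeps its own state. */
        virConnectPtr conn = virConnectOpenReadOnly("bossonvz:///system");
        if (!conn)
            return EXIT_FAILURE;

        virDomainPtr *doms = NULL;
        int n = virConnectListAllDomains(conn, &doms,
                                         VIR_CONNECT_LIST_DOMAINS_ACTIVE);
        for (int i = 0; i < n; i++) {
            printf("%s\n", virDomainGetName(doms[i]));
            virDomainFree(doms[i]);
        }
        free(doms);
        virConnectClose(conn);
        return EXIT_SUCCESS;
    }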
By default, bossonvz also does not touch any vz tools config files (mainly veid.conf), so vz tools that rely on the existence of veid.conf for a running container (e.g. vzctl enter) will refuse to co-operate. This is solved by the bossonvz hook, which can create stub veid.conf files to silence vzctl. After that, it is no problem to access the domain with vzctl enter (we use it regularly, since entering the domain with virsh is not implemented yet). To sum up, the implementations influence one another to some extent, but neither is necessary for the other and both can run simultaneously side by side.
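
For illustration, such a stub can be as small as the paths vzctl needs to locate the container. The exact keys and the /etc/vz/conf path below are my assumption of what a minimal stub could look like, not a copy of what the bossonvz hook writes.

    #include <stdio.h>

    /* Write a minimal <veid>.conf so that vzctl can find the
     * container's root and private area.  Keys and paths are
     * assumptions made for the sake of the example. */
    static int
    writeStubConf(unsigned int veid, const char *root, const char *priv)
    {
        char path[256];
        FILE *fp;

        snprintf(path, sizeof(path), "/etc/vz/conf/%u.conf", veid);
        if (!(fp = fopen(path, "w")))
            return -1;

        fprintf(fp, "VE_ROOT=\"%s\"\n", root);
        fprintf(fp, "VE_PRIVATE=\"%s\"\n", priv);
        fclose(fp);
        return 0;
    }

    int main(void)
    {
        return writeStubConf(101, "/vz/root/101", "/vz/private/101");
    }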
One of the goals we would like to reach is a uniform management interface for our clients. This means that the virtualization technology (openvz, qemu/kvm) should be abstracted away as much as possible. And because all qemu/libvirt management tools allow the user to see the "screen", we wanted the same for bossonvz.
The implementation is located in the bossonvz_vnc_* files. It is a stand-alone binary that is started as a separate process on domain startup by a domain controller (similar to the LXC domain controller). The VNC server binary opens a tty device in the container via a vzkernel API call, but instead of passing the I/O through to a host pty device (as the LXC driver does), it intercepts the terminal sequences and renders them to an in-memory framebuffer (all in user space). On top of that sits a custom VNC server implementation that serves the updated chunks to VNC clients. Everything is done in user space; no modifications to the upstream vzkernel are necessary. The code heavily uses the libvirt util functions as well as uthash and some functions from libcrypto.
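
In very condensed form, the server's main loop looks conceptually like the sketch below. The helper functions (termFeed, fbRender, rfbSendUpdate) are placeholders of my own, not names from the bossonvz sources.

    #include <poll.h>
    #include <unistd.h>

    /* Placeholder stubs standing in for the real terminal emulator,
     * framebuffer and RFB code. */
    static void termFeed(const char *buf, ssize_t n) { (void)buf; (void)n; }
    static void fbRender(void) { }
    static void rfbSendUpdate(int clientFd) { (void)clientFd; }

    static void
    vncServeLoop(int ttyFd, int clientFd)
    {
        char buf[4096];
        struct pollfd pfd = { .fd = ttyFd, .events = POLLIN };

        while (poll(&pfd, 1, -1) > 0) {
            ssize_t n = read(ttyFd, buf, sizeof(buf)); /* container tty output */
            if (n <= 0)
                break;
            termFeed(buf, n);         /* interpret terminal sequences       */
            fbRender();               /* update the user-space framebuffer  */
            rfbSendUpdate(clientFd);  /* push updated chunks to VNC clients */
        }
    }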
Since the VNC server runs on the host, it is invisible to the user in the container. If for some reason the VNC server crashes, the domain controller can start it again without influencing the state of the domain itself. During migration (or a suspend/resume cycle), the state of the server is serialized to a file and stored on the domain's root partition. From there, it is deserialized and loaded when the domain is resumed.
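
Because the server's state is plain data, the save/load pair can be very simple. The record layout and calling conventions below are made up for illustration, not the format bossonvz actually uses.

    #include <stdint.h>
    #include <stdio.h>

    /* Made-up state record: the real format would cover the whole
     * terminal and framebuffer state of the server. */
    struct fbState {
        uint32_t width;
        uint32_t height;
        uint32_t cursorX;
        uint32_t cursorY;
        /* pixel data would follow in the real layout */
    };

    static int
    fbStateSave(const struct fbState *st, const char *path)
    {
        FILE *fp = fopen(path, "w");
        if (!fp)
            return -1;
        int ok = fwrite(st, sizeof(*st), 1, fp) == 1;
        fclose(fp);
        return ok ? 0 : -1;
    }

    static int
    fbStateLoad(struct fbState *st, const char *path)
    {
        FILE *fp = fopen(path, "r");
        if (!fp)
            return -1;
        int ok = fread(st, sizeof(*st), 1, fp) == 1;
        fclose(fp);
        return ok ? 0 : -1;
    }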
Since there is no dependency on the vzkernel API in the VNC server (apart from opening the tty device), there should be no problem reusing the code in the LXC driver.
As far as maintenance is concerned: our business is centered around the OpenVZ and Qemu/kvm technologies. Primarily, we are prepared to provide long-term support of the bossonvz driver for every Red Hat libvirt release and every stable vzkernel release, because we already use the driver in production on Red Hat based distros. Should the driver be accepted into libvirt master with all the features that we need in the company, we are ready to provide support there as well.
You are correct that LXC and OpenVZ cover the same thing, container virtualization. However, until LXC offers the same level of features as OpenVZ, we will keep using our bossonvz driver in the company.
--
David Fabian
Cluster Design, s.r.o.
On Monday, 30 June 2014 10:31:26, Daniel P. Berrange wrote:
> On Fri, Jun 27, 2014 at 03:16:51PM +0200, Bosson VZ wrote:
> > Hello,
> >
> > in the company I work for, we use openvz and qemu/kvm on our clusters
> > side-by-side. To manage our domains, we used libvirt/qemu for qemu/kvm
> > domains and vz tools for openvz domains in the past. This was very
> > inconvenient since the management differed in many ways. So we have
> > decided to unify our management and use libvirt exclusively. Since the
> > openvz driver already included in libvirt lacks features that we need, we
> > have implemented a new libvirt backend driver for openvz called the
> > bossonvz driver.
>
> Are there any particular reasons why you were unable to extend the
> existing openvz driver to do what you needed ? I know openvz driver
> has been mostly unmaintained and its code isn't too pleasant, but
> it'd still be nice to know what show-stopper problems you saw with
> using it as a base for more work.
>
> > Unlike the openvz driver, bossonvz is a complete, feature-rich stateful
> > libvirt driver. It uses the libvirt driver API exclusively and
> > communicates with the kernel directly, much like the LXC driver. The
> > code is hugely inspired by the LXC driver and the Qemu driver, but adds
> > a bit of functionality to the libvirt core too. More details and the
> > source code can be found at
> >
> > http://bossonvz.bosson.eu/
> >
> > The driver is completely independent of vz tools, it needs only a
> > running vz kernel.
>
> That's very interesting. Do you know if there is any statement of
> support for the OpenVZ kernel <-> userspace API. I know the mainline
> kernel aims to maintain userspace API compatibility, but are all the
> custom additions that the OpenVZ fork has maintained in a compatible
> manner as OpenVZ provides new versions ?
>
> eg is there any significant risk that when a new OpenVZ release
> comes out, kernel changes might break this new driver, or are they
> careful to maintain compat for the mgmt layer ?
>
> Also is there interoperability between this driver and the openvz
> tools. eg if openvz tools launch a guest, can it be seen and managed
> by this driver, and vice versa ? Or are they completely separate
> views and non-interacting ?
>
> > One of the things, we are most proud of, is the
> > possibility to access the domain's tty via VNC (hurray to uniform
> > management web interfaces).
>
> That's nice - care to elaborate on the technical architecture
> of this ? Presumably when you say 'tty' you mean the VNC is just
> a frontend to the text-mode console, it isn't providing any kind
> of virtualized graphical framebuffer. Is this VNC support you
> have part of the libvirt patches, or a standalone component that
> just interacts with it ?
>
> Basically I'm wondering whether this VNC support is something we
> can leverage in the LXC driver too ?
>
> > Since the code was developed in-house (primarily for internal
> > purposes), it is based on an older libvirt release (1.1.2). There
> > are also some modifications to the libvirt core (virCommand) and
> > the domain file (mount options syntax) which might need some
> > redesign.
> >
> > At the moment I would like to get some feedback on the driver. In
> > the future, I would like to see the driver merged upstream, and
> > am prepared to do the work but I need to know if there is any
> > interest in doing so.
>
> As you've probably already figured out, we generally welcome support for
> new virtualization drivers. The two big questions to me are
>
> - Are you able to provide ongoing maintenance and development of
> this driver for the future ? We're already suffering from a lack
> of people able to maintain the existing openvz driver and the
> related parallels isn't too active either. Basically we'd like
> to see a commitment to be an active maintainer of the code.
>
> - What do you see as the long term future of the driver ? I've
> already asked whether there's any risk from future OpenVZ releases
> potentially breaking things. Historically the idea with the kernel's
> namespace support was that the OpenVZ forked kernel would be able
> to go away. IIUC, the openvz userspace can already run on top of a
> plain mainline kernel, albeit with reduced features compared to when
> it runs on openvz kernel fork. If we assume the trend of merging
> OpenVZ features into mainline continues, we might get to a point
> where OpenVZ kernel fork no longer exists. At that point I wonder
> what would be the compelling differences between BossonVZ and the
> LXC driver ? ie why would we benefit from 2 container drivers for
> the same kernel functionality.
>
> Regards,
> Daniel
>