On Mon, Mar 30, 2020 at 12:11:19PM +0200, Andrea Bolognani wrote:
> On Fri, 2020-03-27 at 17:20 +0000, Daniel P. Berrangé wrote:
> > On Fri, Mar 27, 2020 at 03:59:51PM +0100, Andrea Bolognani wrote:
> > > Another question is, once we start doing cascading builds, what to do
> > > with stuff like (from the .bashrc template used by lcitool)
> >
> > I don't think we will do cascading builds in the way we've done
> > in Jenkins, because there was a lot of pointless redundancy in
> > our setup, resulting in us testing the wrong things.
> >
> > Take the Go binding for example. Go doesn't have the same kind of
> > portability issues that C does, so testing the compile across the
> > many distros is not directly needed. Similarly we only ever tested
> > it against the latest libvirt git master, despite the code being
> > able to compile against many older versions.
> >
> > So the two dimensions for Go that we actually need are testing against
> > multiple Go versions, and testing against multiple libvirt versions.
> >
> > Testing against multiple distros is a crude indirect way of testing
> > several Go versions, without us actually understanding which versions
> > we really are testing.
> Agreed that we could be smarter and more comprehensive in what we
> test, especially when it comes to language bindings; at the same
> time it's useful to test against the latest codebase for the various
> dependencies, so we should make sure we don't lose that coverage.
> > What we did in the Travis config for Go was much more useful in
> > what dimensions it tested:
> >
> >   https://gitlab.com/libvirt/libvirt-go/-/blob/master/.travis.yml
> >
> > The same applies for the other language bindings too.
> >
> > The other reason to not try to chain up builds is that it doesn't
> > align with the forking model of contribution. If someone does a
> > fork of the libvirt-go binding, they want to be able to run tests
> > on that in isolation. They shouldn't have to first do a fork of
> > libvirt and run a build, in order to then run builds on the Go
> > binding.
> Of course that wouldn't be acceptable.
>
> So far I'm aware of two approaches for chaining, one of which is
> currently in use and the other one which IIUC was prototyped but
> never actually deployed:
>
>   * the CI job for each project includes build instructions for all
>     projects it depends on, eg. the libvirt-dbus job would start by
>     fetching, building and installing libvirt, then moving on to
>     doing the same for libvirt-glib, then finally getting to building
>     and testing libvirt-dbus itself. This is the approach libosinfo
>     is currently using;
>
>   * the CI job for each project would result in a container image
>     that has the same contents as the one used for building, plus
>     a complete installation of the project itself, eg. the libvirt
>     job would generate an image that has libvirt installed, the
>     libvirt-glib job would use that image and generate one that has
>     both libvirt and libvirt-glib installed, and finally libvirt-dbus
>     would use this last image as its build environment.
>
> If I understand correctly, you're suggesting a third approach:
>
>   * the CI job for each project uses an image that contains all its
>     dependencies, including the ones that are maintained under the
>     libvirt umbrella, installed from distro packages.
>
> Did I get that right? Or did you have something else in mind?
I'm suggesting both option 1 and/or 3 depending on the support scenario.
In the cases where the project needs to test against libvirt git master,
it should clone and build libvirt.git, and then build itself against that.
In the case where the project needs to test against existing releases in
distros, it should have container images that include the pre-built libvirt.
The Perl binding only supports building against libvirt Git, so option 1
is sufficient. The Go & Python bindings support building against historic
versions, so option 1 & 3 are both needed.
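
To make option 1 concrete, each binding's .gitlab-ci.yml could carry a
job along these lines. This is only a rough sketch, not a tested
configuration: the distro image and the libvirt build commands are
illustrative assumptions, and each project would substitute its own
build step at the end:

```yaml
# Sketch only: build libvirt.git master first, then the binding against it.
# The image name and libvirt build invocation are assumptions, not a
# verified recipe.
build-against-git-master:
  image: fedora:31
  script:
    - git clone --depth 1 https://gitlab.com/libvirt/libvirt.git
    - cd libvirt && ./autogen.sh --prefix=/usr && make -j4 && make install
    - cd ..
    # finally build & test the binding itself against what we just installed
    - make && make check
```
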
> > Where we do need chaining is to trigger these builds. ie, when
> > libvirt changes hit master, we want to trigger pipelines in
> > any dependant projects to validate that they're not seeing a
> > regression. GitLab has a way to configure pipeline triggers
> > to do this.
>
> I'm not sure how this part would fit into the rest, but let's just
> ignore it for the moment O:-)
Consider the builds to be self-contained: libvirt-python CI gets triggered
when a change is committed to libvirt-python.git. We also need to have
CI triggered in libvirt-python when a change is committed to libvirt.git,
so we need to use the GitLab triggers for this.
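
Concretely, GitLab's multi-project pipelines let the upstream project
declare which downstream pipelines to kick. A hypothetical fragment for
libvirt.git's own .gitlab-ci.yml (project paths and stage name assumed,
untested):

```yaml
# Sketch: when a pipeline on libvirt.git master succeeds, trigger the
# binding's pipeline so it re-tests against the new master. Assumes a
# "trigger" stage is declared in the stages list.
trigger-python-binding:
  stage: trigger
  trigger:
    project: libvirt/libvirt-python
    branch: master
  only:
    - master
```
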
> > > In some cases that'll make things easier; in other cases, you're
> > > still going to have to change the libvirt-jenkins-ci repository to
> > > eg. alter the build environment in some way, then rebuild the images
> > > and change the build steps accordingly, except instead of having
> > > changes to the build environment and build recipe appear as two
> > > subsequent commits in the same repository, now they will be dozens
> > > of commits spread across as many repositories.
> >
> > Eventually I'd like to get the container image builds into the main
> > repos too. ie instead of libvirt-dockerfiles.git, we should commit
> > the dockerfiles into each project's git repo. The GitLab CI job can
> > generate (and cache) the container images directly, avoiding a need
> > for us to send builds via quay.io separately.
> This will again result in the situation where a single update to
> lcitool might result in a couple dozen commits to a couple dozen
> repositories, but since it will be entirely mechanical and likely
> fall under the same "Dockerfile update rule" as pushes to the
> libvirt-dockerfiles repo currently fall under, I think it should
> be reasonably manageable.
>
> Will the container images built this way be made available outside of
> the GitLab CI infrastructure? We still want people to be able to
> run 'make ci-build@...' locally.
I believe that container images built in GitLab are made publicly
accessible, but I've not validated this myself yet. Agreed on your
point that we need to continue supporting local builds like this.
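
As an illustration of how that could look, a job of roughly this shape
(untested sketch; the Dockerfile location and image name are assumptions,
while the $CI_REGISTRY* variables are GitLab's predefined CI variables)
would publish the build environment to the project's own registry:

```yaml
# Sketch: build the Dockerfile kept in the project repo and push the result
# to the project's GitLab container registry, using docker-in-docker.
container-buildenv:
  image: docker:stable
  services:
    - docker:dind
  script:
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    - docker build -t "$CI_REGISTRY_IMAGE/buildenv:latest" ci/containers/
    - docker push "$CI_REGISTRY_IMAGE/buildenv:latest"
```

On a public project the registry allows anonymous pulls, so a local build
could then start from a plain 'docker pull' of that image instead of going
via quay.io.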
> Will the GitLab registry allow us to store a lot of images? We
> currently have 38 for libvirt alone, and if we're going to build
> new ones for all the sub-projects then we'll get to the hundreds
> really quickly...
Again I've not proved anything, but in general the GitLab.com instance
does not appear to have applied any limits to projects that are
made public under OSS licenses. If we did hit any container limit
then we'd have to continue with quay.io for this purpose.
Regards,
Daniel
--
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|