On Fri, Mar 27, 2020 at 03:59:51PM +0100, Andrea Bolognani wrote:
> Anyway, what I do *not* want to see is something along the lines of
>
>   x86-freebsd-12:
>     variables:
>       MAKE: gmake
>       CC: clang
>
> in every .gitlab-ci.yml for every project under the libvirt umbrella:
> not only would that be ugly and obfuscate the actual build steps for
> the software, but it's nigh unmaintainable, as it would take dozens
> of commits spread across as many repositories to roll out even a
> completely trivial change.
>
> Another question is, once we start doing cascading builds, what to do
> with stuff like (from the .bashrc template used by lcitool)
I don't think we will do cascading builds in the way we've done them
in Jenkins, because there was a lot of pointless redundancy in
our setup, resulting in us testing the wrong things.
Take the Go binding for example. Go doesn't have the same kind of
portability issues that C does, so testing the compile across the
many distros is not directly needed. Similarly, we only ever tested
it against the latest libvirt git master, despite the code being
able to compile against many older versions.
So the two dimensions for Go that we actually need are testing against
multiple Go versions, and testing against multiple libvirt versions.
Testing against multiple distros is a crude indirect way of testing
several Go versions, without us actually understanding which versions
we really are testing.
What we did in the Travis config for Go was much more useful in
the dimensions it tested:
https://gitlab.com/libvirt/libvirt-go/-/blob/master/.travis.yml
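In GitLab CI that kind of matrix just becomes one job per Go/libvirt
version pair. As a rough sketch only (this is not the real libvirt-go
config, and the install helper script is something we'd have to
invent):

  .go_test: &go_test
    script:
      # hypothetical helper that installs libvirt at the requested version
      - ./ci/install-libvirt.sh "$LIBVIRT_VERSION"
      - go build -v
      - go test -v

  go1.13-libvirt-5.0:
    <<: *go_test
    image: golang:1.13
    variables:
      LIBVIRT_VERSION: "5.0.0"

  go1.14-libvirt-6.1:
    <<: *go_test
    image: golang:1.14
    variables:
      LIBVIRT_VERSION: "6.1.0"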
The same applies for the other language bindings too.
The other reason to not try to chain up builds is that it doesn't
align with the forking model of contribution. If someone does a
fork of the libvirt-go binding, they want to be able to run tests
on that in isolation. They shouldn't have to first fork libvirt
and run its build in order to then run builds on the Go binding.
So each .gitlab-ci.yml for a project needs to be independent of
other projects / self-contained in what it builds / tests.
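As a minimal sketch of what self-contained means here (again not the
real config; the golang images happen to be Debian based, so the
libvirt dependency can come straight from the distro):

  test:
    image: golang:1.14
    before_script:
      # pull the libvirt dev package rather than consuming artifacts
      # from some other project's pipeline
      - apt-get update && apt-get install -y libvirt-dev
    script:
      - go build -v
      - go test -v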
Where we do need chaining is to trigger these builds, i.e. when
a libvirt change hits master, we want to trigger pipelines in
any dependent projects to validate that they're not seeing a
regression. GitLab has a way to configure pipeline triggers
to do this.
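As an illustration only (the token variable and project ID are
placeholders, not anything we've set up), a job at the end of
libvirt's own pipeline could poke the trigger API of each binding:

  trigger-libvirt-go:
    only:
      - master
    script:
      # POST to the downstream project's pipeline trigger endpoint
      - curl --request POST
        --form "token=$LIBVIRT_GO_TRIGGER_TOKEN"
        --form "ref=master"
        "https://gitlab.com/api/v4/projects/$LIBVIRT_GO_PROJECT_ID/trigger/pipeline"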
> It might not matter that much, because the build instructions are
> going to be simpler, but we might also consider an approach similar
> to
>
> https://salsa.debian.org/salsa-ci-team/pipeline#basic-use
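(For context, the Salsa model essentially boils down to every project
including a centrally maintained pipeline definition, something along
the lines of:

  include:
    - https://salsa.debian.org/salsa-ci-team/pipeline/raw/master/salsa-ci.yml

which keeps each per-project file tiny, at the cost of moving all the
logic into one central repo.)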
> > This will be good as it eliminates more
> > areas where we need to lock-step change the stuff in
> > libvirt-jenkins-ci repo vs the main code repos.
> In some cases that'll make things easier; in other cases, you're
> still going to have to change the libvirt-jenkins-ci repository to
> e.g. alter the build environment in some way, then rebuild the images
> and change the build steps accordingly, except instead of having
> changes to the build environment and build recipe appear as two
> subsequent commits in the same repository, now they will be dozens
> of commits spread across as many repositories.
Eventually I'd like to get the container image builds into the main
repos too. ie instead of libvirt-dockerfiles.git, we should commit
the dockerfiles into each project's git repo. The GitLab CI job can
generate (and cache) the container images directly, avoiding a need
for us to send builds via quay.io separately.
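A sketch of what such a job could look like (the dockerfile path and
image name here are invented; the CI_REGISTRY_* variables are ones
GitLab predefines for every job):

  build-ci-image:
    image: docker:stable
    services:
      - docker:dind
    script:
      # push the built image into the project's own GitLab registry
      - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
      - docker build -t "$CI_REGISTRY_IMAGE/ci-fedora-31" -f ci/fedora-31.Dockerfile .
      - docker push "$CI_REGISTRY_IMAGE/ci-fedora-31"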
Regards,
Daniel
--
|: https://berrange.com      -o-    https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-            https://fstop138.berrange.com :|
|: https://entangle-photo.org    -o-    https://www.instagram.com/dberrange :|