On Fri, 2020-03-27 at 10:47 +0000, Daniel P. Berrangé wrote:
> On Thu, Mar 26, 2020 at 05:38:47PM +0100, Andrea Bolognani wrote:
> > On Thu, 2020-03-26 at 14:05 +0000, Daniel P. Berrangé wrote:
> > > On Thu, Mar 26, 2020 at 02:50:48PM +0100, Andrea Bolognani wrote:
> > > > Ultimately I think we need to take a cue from what lcitool does when
> > > > configuring VMs and generate a simple environment file that is baked
> > > > into images and can be sourced from jobs with a single line.
> > >
> > > I much prefer to have the job configuration all in the gitlab config file
> > > rather than split between the gitlab config and the images, as it lets you
> > > understand the full setup.
> >
> > I agree in theory, but 1) that specific ship has sailed when we
> > started adding stuff like $LANG, $ABI and $CONFIGURE_OPTS in the
> > container image's environment and 2) doing it at the .gitlab-ci.yml
> > level will result in duplicating a lot of the logic that we already
> > have in lcitool.
>
> Setting $LANG makes sense because the container image build
> decides what locales are installed and so knows what $LANG
> must be used.
>
> Similarly $ABI makes sense as that again is directly based
> off which compiler toolchain packages were installed.
>
> In retrospect $CONFIGURE_OPTS was a mistake, because that
> only makes sense in the context of autotools usage and of
> decisions about how the application will be built. So I'd
> remove this one too.

Yeah, I was never fond of $CONFIGURE_OPTS, and in fact I believe I
argued against its inclusion at the time - although I have to admit
that its presence makes some of the CI scaffolding much more terse.
Anyway, what I do *not* want to see is something along the lines of

  x86-freebsd-12:
    variables:
      MAKE: gmake
      CC: clang

in every .gitlab-ci.yml for every project under the libvirt umbrella:
not only would that be ugly and obfuscate the actual build steps for
the software, it would also be nigh unmaintainable, as rolling out
even a completely trivial change would take dozens of commits spread
across as many repositories.
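
What I'd much rather see is the variables baked into the image by
lcitool, with at most a single line needed in the job definition;
something along these lines, where the /etc/ci-environment path is
purely hypothetical:

  x86-freebsd-12:
    before_script:
      # hypothetical file generated by lcitool at image build time,
      # exporting MAKE=gmake, CC=clang and so on
      - . /etc/ci-environment
    script:
      - $MAKE
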
Another question is, once we start doing cascading builds, what to do
with stuff like this (from the .bashrc template used by lcitool):

# These search paths need to encode the OS architecture in some way
# in order to work, so use the appropriate tool to obtain this
# information and adjust them accordingly
package_format="{{ package_format }}"
if test "$package_format" = "deb"; then
    multilib=$(dpkg-architecture -q DEB_TARGET_MULTIARCH)
    export LD_LIBRARY_PATH="$VIRT_PREFIX/lib/$multilib:$LD_LIBRARY_PATH"
    export PKG_CONFIG_PATH="$VIRT_PREFIX/lib/$multilib/pkgconfig:$PKG_CONFIG_PATH"
    export GI_TYPELIB_PATH="$VIRT_PREFIX/lib/$multilib/girepository-1.0:$GI_TYPELIB_PATH"
elif test "$package_format" = "rpm"; then
    multilib=$(rpm --eval '%{_lib}')
    export LD_LIBRARY_PATH="$VIRT_PREFIX/$multilib:$LD_LIBRARY_PATH"
    export PKG_CONFIG_PATH="$VIRT_PREFIX/$multilib/pkgconfig:$PKG_CONFIG_PATH"
    export GI_TYPELIB_PATH="$VIRT_PREFIX/$multilib/girepository-1.0:$GI_TYPELIB_PATH"
fi

# We need to ask Perl for this information, since it's used to
# construct installation paths
plarch=$(perl -e 'use Config; print $Config{archname}')
export PERL5LIB="$VIRT_PREFIX/lib/perl5/$plarch"

# For Python we need the version number (major and minor) and
# to know whether "lib64" paths are searched
pylib=lib
if $PYTHON -c 'import sys; print("\n".join(sys.path))' | grep -q lib64; then
    pylib=lib64
fi
pyver=$($PYTHON -c 'import sys; print(".".join(map(lambda x: str(sys.version_info[x]), [0,1])))')
export PYTHONPATH="$VIRT_PREFIX/$pylib/python$pyver/site-packages"

Will we just run all builds with --prefix=/usr and install stuff
into the system search paths? If so, we'd have to ensure that the
user running the build inside the container has write access to
those global paths, and also think about what this means for the
FreeBSD runner.

> WRT duplication of logic, we only have that because we
> use the libvirt-jenkins-ci repo/tools to store the Jenkins
> job configuration separately from the application repos.
>
> As we phase out & eventually eliminate Jenkins, we will
> no longer have a need to store any build recipes in
> the libvirt-jenkins-ci repo/tools - they can focus
> exclusively on container/image build mgmt, and all the
> logic for actually building apps can live in the repos
> of those apps.

I was thinking more in terms of duplicating the logic that decides,
for example, the name under which the ninja build tool needs to be
invoked on a specific target, but you make a good point about the
build rules: right now we have a set of shared templates that we
reuse for all projects with similar build systems, but with the move
to GitLab CI we'll end up duplicating a lot of that.

It might not matter that much, because the build instructions are
going to be simpler, but we might also consider an approach similar
to the one at

  https://salsa.debian.org/salsa-ci-team/pipeline#basic-use
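
That is, each project would pull in a shared pipeline definition
through GitLab's include mechanism; a rough sketch, with the project
path and template name invented for the sake of the example:

  include:
    - project: libvirt/libvirt-ci
      file: /templates/autotools.yml
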
> This will be good as it eliminates more
> areas where we need to lock-step change the stuff in
> libvirt-jenkins-ci repo vs the main code repos.

In some cases that'll make things easier; in other cases, you're
still going to have to change the libvirt-jenkins-ci repository to
e.g. alter the build environment in some way, then rebuild the images
and change the build steps accordingly, except instead of the changes
to the build environment and build recipe appearing as two subsequent
commits in the same repository, they will now be dozens of commits
spread across as many repositories.

--
Andrea Bolognani / Red Hat / Virtualization