On Tue, Feb 22, 2022 at 05:04:02PM +0100, Erik Skultety wrote:
> So, we're offloading as much CI configuration/workflow stuff to lcitool
> as possible. We can generate config files, install/update machines
> (local or remote), dump Dockerfiles... but we can't build and run those
> containers locally. Instead, we have a CI helper script in the libvirt
> repo which essentially just wraps a Makefile that pulls a GitLab
> container for you and:
> - gives you a shell, or
> - runs the build and tests
> I'm not sure how many people actually know we have that helper script,
> let alone use it. I've been playing with the idea that we could
> integrate what's done in the Makefile into lcitool, utilizing either
> the podman library [1] or the docker library [2]. Apart from
> consolidating all CI service-related efforts in lcitool, the other
> benefit would be that we could gain the ability to run and debug in a
> project-specific container in other libvirt projects as well, not just
> the main libvirt one.
> So, I thought this could be a nice project for GSoC. Ideas?
I've been meaning to replace the make-based logic used for spawning
local containers with a Python equivalent and get rid of ci/Makefile
entirely. There was some progress last year, but I got sidetracked
and never managed to finish the job. So obviously I'm in favor of
doing more work in that area, especially if it's someone else doing
it O:-)
I'm not entirely convinced that requiring the use of lcitool for this
task is necessarily the best idea though. Ideally, it should be
possible for developers to reproduce and debug CI failures locally
without having to clone a second repository. It's fine to require
lcitool for tasks that are performed by maintainers, but casual
contributors should find all they need in libvirt.git itself.
Another thing that has been bothering me is that neither 'ci/helper
build' nor 'lcitool build' will actually perform the exact same build
steps as the corresponding CI job, making them only mildly effective
as debugging tools for CI failures. And of course each of these build
recipes is maintained separately, so we have three similar but not
quite identical scripts floating around.
Here's my (fairly hand-wavy :) idea of how things should work.
Each project's repository should contain its build steps in the form
of a ci/build script. The exact calling interface for this script
will have to be determined, but based on existing usage at the very
least we need to be able to specify the build type (regular, cross
and with dependencies built from git) and target-specific settings
(for example whether to skip building RPMs). So I can imagine the
invocation looking something like
$ TYPE=native RPM=skip ci/build
The build steps that currently live in the .native_build_job,
.cross_build_job and .native_git_build_job templates in
.gitlab-ci.yml will all be moved to the ci/build script. It shouldn't
be necessary to alter them significantly in the process.
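To make that interface a bit more concrete, here's a minimal sketch of
what such a ci/build dispatcher could look like. Everything in it is an
assumption for illustration: the TYPE/RPM/CROSS variables follow the
examples in this mail, the meson/ninja steps only loosely resemble what
the real .gitlab-ci.yml templates do, and it echoes the commands instead
of executing them so the dispatch logic stands on its own:

```shell
#!/bin/sh
# Hypothetical ci/build sketch (dry run: commands are echoed, not
# executed). TYPE selects the build flavour; RPM=skip disables the
# RPM step; CROSS would name a target architecture (assumption).

ci_build() {
    type="${TYPE:-native}"
    rpm="${RPM:-build}"
    meson_args=""

    case "$type" in
        native)
            ;;
        cross)
            meson_args="--cross-file=ci/$CROSS.cross"  # path is illustrative
            ;;
        native-git)
            : # building dependencies from git would be set up here (elided)
            ;;
        *)
            echo "unknown build type: $type" >&2
            return 1
            ;;
    esac

    echo meson setup build $meson_args
    echo ninja -C build
    echo ninja -C build test
    if test "$rpm" != skip; then
        echo "rpm build step"  # actual rpmbuild invocation elided
    fi
}

# the example invocation from above:
TYPE=native RPM=skip ci_build
```

The point being that the same script runs identically in a CI job, in a
local container, or on a developer's bare machine.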
With this interface defined, we can change 'lcitool manifest' so that
the jobs it generates go from
x86_64-almalinux-8-clang:
  extends: .native_build_job
  needs:
    - x86_64-almalinux-8-container
  variables:
    CC: clang
    NAME: almalinux-8
    RPM: skip
to
x86_64-almalinux-8-clang:
  needs:
    - x86_64-almalinux-8-container
  script:
    - TYPE=native NAME=almalinux-8 CC=clang RPM=skip ci/build
Of course it would still be possible to tell lcitool to use a custom
template for the job instead, which might be necessary for setting up
CI caching and the like. For simple cases though, you'd be able to
use the default implementation.
'lcitool build' would also be changed so that it invokes the ci/build
script in the project's repository.
The last missing piece would then be to finish converting the ci/helper
script that exists in libvirt to Python, and to make it so that calling
that entry point also ultimately results in running the ci/build
script.
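In container terms, what the helper would boil down to is roughly the
following. This is a sketch only: the image name and mount paths are
made up for illustration (the real image name would come from the
project's GitLab container registry), and the command is printed rather
than executed:

```shell
# Illustrative only: image name and paths are assumptions, and the
# podman invocation is echoed instead of run.
image="registry.gitlab.com/libvirt/libvirt/ci-almalinux-8:latest"

cmd="podman run --rm --volume $PWD:/src --workdir /src"
cmd="$cmd --env TYPE=native --env NAME=almalinux-8 --env CC=clang --env RPM=skip"
cmd="$cmd $image ci/build"

echo "$cmd"
```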
Of course having multiple copies of the ci/helper script and its
logic for listing / running containers around is problematic because
they would get out of sync over time... Perhaps we can maintain that
script as part of libvirt-ci.git, and generate a local copy to be
stored in each project's repository at the same time as other
CI-related files such as Dockerfiles are generated? That sounds like
it could work, but admittedly this is the fuzziest part of the entire
plan :)
--
Andrea Bolognani / Red Hat / Virtualization