[libvirt PATCH 0/5] ci: Use GitLab container registry

Branch: https://gitlab.com/abologna/libvirt/-/tree/ci-full-gitlab-registry
Pipeline: https://gitlab.com/abologna/libvirt/pipelines/150891361

This is what we're already doing with the subprojects we've migrated to
GitLab CI and, as of earlier today, all projects under the libosinfo
umbrella.

Once this is merged, we can stop publishing container images on Quay and
archive the libvirt-dockerfiles repository.

Patch 3/5 has been trimmed in order to comply with the size limits of the
mailing list. You can grab the unabridged version with

  $ git fetch https://gitlab.com/abologna/libvirt ci-full-gitlab-registry

Andrea Bolognani (5):
  ci: Use variables to build image names
  ci: Add 'other' stage
  ci: Use GitLab container registry
  ci: Update build system integration
  ci: Improve CI_IMAGE_TAG handling

 .gitlab-ci.yml | 314 ++++++++++++++++--
 ci/Makefile | 23 +-
 ci/containers/README.rst | 14 +
 ci/containers/ci-centos-7.Dockerfile | 137 ++++++++
 ci/containers/ci-centos-8.Dockerfile | 108 ++++++
 .../ci-debian-10-cross-aarch64.Dockerfile | 122 +++++++
 .../ci-debian-10-cross-armv6l.Dockerfile | 120 +++++++
 .../ci-debian-10-cross-armv7l.Dockerfile | 121 +++++++
 .../ci-debian-10-cross-i686.Dockerfile | 121 +++++++
 .../ci-debian-10-cross-mips.Dockerfile | 121 +++++++
 .../ci-debian-10-cross-mips64el.Dockerfile | 121 +++++++
 .../ci-debian-10-cross-mipsel.Dockerfile | 121 +++++++
 .../ci-debian-10-cross-ppc64le.Dockerfile | 121 +++++++
 .../ci-debian-10-cross-s390x.Dockerfile | 121 +++++++
 ci/containers/ci-debian-10.Dockerfile | 112 +++++++
 .../ci-debian-9-cross-aarch64.Dockerfile | 126 +++++++
 .../ci-debian-9-cross-armv6l.Dockerfile | 124 +++++++
 .../ci-debian-9-cross-armv7l.Dockerfile | 125 +++++++
 .../ci-debian-9-cross-mips.Dockerfile | 125 +++++++
 .../ci-debian-9-cross-mips64el.Dockerfile | 125 +++++++
 .../ci-debian-9-cross-mipsel.Dockerfile | 125 +++++++
 .../ci-debian-9-cross-ppc64le.Dockerfile | 125 +++++++
 .../ci-debian-9-cross-s390x.Dockerfile | 125 +++++++
 ci/containers/ci-debian-9.Dockerfile | 116 +++++++
 .../ci-debian-sid-cross-aarch64.Dockerfile | 122 +++++++
 .../ci-debian-sid-cross-armv6l.Dockerfile | 120 +++++++
 .../ci-debian-sid-cross-armv7l.Dockerfile | 121 +++++++
 .../ci-debian-sid-cross-i686.Dockerfile | 121 +++++++
 .../ci-debian-sid-cross-mips.Dockerfile | 121 +++++++
 .../ci-debian-sid-cross-mips64el.Dockerfile | 121 +++++++
 .../ci-debian-sid-cross-mipsel.Dockerfile | 120 +++++++
 .../ci-debian-sid-cross-ppc64le.Dockerfile | 121 +++++++
 .../ci-debian-sid-cross-s390x.Dockerfile | 121 +++++++
 ci/containers/ci-debian-sid.Dockerfile | 112 +++++++
 ci/containers/ci-fedora-31.Dockerfile | 109 ++++++
 ci/containers/ci-fedora-32.Dockerfile | 109 ++++++
 ...ci-fedora-rawhide-cross-mingw32.Dockerfile | 129 +++++++
 ...ci-fedora-rawhide-cross-mingw64.Dockerfile | 129 +++++++
 ci/containers/ci-fedora-rawhide.Dockerfile | 110 ++++++
 ci/containers/ci-opensuse-151.Dockerfile | 109 ++++++
 ci/containers/ci-ubuntu-1804.Dockerfile | 117 +++++++
 ci/containers/ci-ubuntu-2004.Dockerfile | 113 +++++++
 ci/containers/refresh | 43 +++
 ci/list-images.sh | 24 +-
 44 files changed, 5054 insertions(+), 51 deletions(-)
 create mode 100644 ci/containers/README.rst
 create mode 100644 ci/containers/ci-centos-7.Dockerfile
 create mode 100644 ci/containers/ci-centos-8.Dockerfile
 create mode 100644 ci/containers/ci-debian-10-cross-aarch64.Dockerfile
 create mode 100644 ci/containers/ci-debian-10-cross-armv6l.Dockerfile
 create mode 100644 ci/containers/ci-debian-10-cross-armv7l.Dockerfile
 create mode 100644 ci/containers/ci-debian-10-cross-i686.Dockerfile
 create mode 100644 ci/containers/ci-debian-10-cross-mips.Dockerfile
 create mode 100644 ci/containers/ci-debian-10-cross-mips64el.Dockerfile
 create mode 100644 ci/containers/ci-debian-10-cross-mipsel.Dockerfile
 create mode 100644 ci/containers/ci-debian-10-cross-ppc64le.Dockerfile
 create mode 100644 ci/containers/ci-debian-10-cross-s390x.Dockerfile
 create mode 100644 ci/containers/ci-debian-10.Dockerfile
 create mode 100644 ci/containers/ci-debian-9-cross-aarch64.Dockerfile
 create mode 100644 ci/containers/ci-debian-9-cross-armv6l.Dockerfile
 create mode 100644 ci/containers/ci-debian-9-cross-armv7l.Dockerfile
 create mode 100644 ci/containers/ci-debian-9-cross-mips.Dockerfile
 create mode 100644 ci/containers/ci-debian-9-cross-mips64el.Dockerfile
 create mode 100644 ci/containers/ci-debian-9-cross-mipsel.Dockerfile
 create mode 100644 ci/containers/ci-debian-9-cross-ppc64le.Dockerfile
 create mode 100644 ci/containers/ci-debian-9-cross-s390x.Dockerfile
 create mode 100644 ci/containers/ci-debian-9.Dockerfile
 create mode 100644 ci/containers/ci-debian-sid-cross-aarch64.Dockerfile
 create mode 100644 ci/containers/ci-debian-sid-cross-armv6l.Dockerfile
 create mode 100644 ci/containers/ci-debian-sid-cross-armv7l.Dockerfile
 create mode 100644 ci/containers/ci-debian-sid-cross-i686.Dockerfile
 create mode 100644 ci/containers/ci-debian-sid-cross-mips.Dockerfile
 create mode 100644 ci/containers/ci-debian-sid-cross-mips64el.Dockerfile
 create mode 100644 ci/containers/ci-debian-sid-cross-mipsel.Dockerfile
 create mode 100644 ci/containers/ci-debian-sid-cross-ppc64le.Dockerfile
 create mode 100644 ci/containers/ci-debian-sid-cross-s390x.Dockerfile
 create mode 100644 ci/containers/ci-debian-sid.Dockerfile
 create mode 100644 ci/containers/ci-fedora-31.Dockerfile
 create mode 100644 ci/containers/ci-fedora-32.Dockerfile
 create mode 100644 ci/containers/ci-fedora-rawhide-cross-mingw32.Dockerfile
 create mode 100644 ci/containers/ci-fedora-rawhide-cross-mingw64.Dockerfile
 create mode 100644 ci/containers/ci-fedora-rawhide.Dockerfile
 create mode 100644 ci/containers/ci-opensuse-151.Dockerfile
 create mode 100644 ci/containers/ci-ubuntu-1804.Dockerfile
 create mode 100644 ci/containers/ci-ubuntu-2004.Dockerfile
 create mode 100755 ci/containers/refresh

-- 
2.25.4

This removes a lot of repetition and makes the configuration much easier to read. Signed-off-by: Andrea Bolognani <abologna@redhat.com> --- .gitlab-ci.yml | 79 ++++++++++++++++++++++++++++++++++++-------------- 1 file changed, 57 insertions(+), 22 deletions(-) diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml index 149334ed6f..35895a4931 100644 --- a/.gitlab-ci.yml +++ b/.gitlab-ci.yml @@ -18,6 +18,7 @@ stages: # Default native build jobs that are always run .native_build_default_job_template: &native_build_default_job_definition stage: native_build + image: quay.io/libvirt/buildenv-libvirt-$NAME:latest cache: paths: - ccache/ @@ -42,6 +43,7 @@ stages: # Default cross build jobs that are always run .cross_build_default_job_template: &cross_build_default_job_definition stage: cross_build + image: quay.io/libvirt/buildenv-libvirt-$NAME-cross-$CROSS:latest cache: paths: - ccache/ @@ -67,94 +69,127 @@ stages: x64-debian-9: <<: *native_build_extra_job_definition - image: quay.io/libvirt/buildenv-libvirt-debian-9:latest + variables: + NAME: debian-9 x64-debian-10: <<: *native_build_default_job_definition - image: quay.io/libvirt/buildenv-libvirt-debian-10:latest + variables: + NAME: debian-10 x64-debian-sid: <<: *native_build_extra_job_definition - image: quay.io/libvirt/buildenv-libvirt-debian-sid:latest + variables: + NAME: debian-sid x64-centos-7: <<: *native_build_default_job_definition - image: quay.io/libvirt/buildenv-libvirt-centos-7:latest + variables: + NAME: centos-7 x64-centos-8: <<: *native_build_extra_job_definition - image: quay.io/libvirt/buildenv-libvirt-centos-8:latest + variables: + NAME: centos-8 x64-fedora-31: <<: *native_build_extra_job_definition - image: quay.io/libvirt/buildenv-libvirt-fedora-31:latest + variables: + NAME: fedora-31 x64-fedora-32: <<: *native_build_default_job_definition - image: quay.io/libvirt/buildenv-libvirt-fedora-32:latest + variables: + NAME: fedora-32 x64-fedora-rawhide: <<: *native_build_default_job_definition - image: quay.io/libvirt/buildenv-libvirt-fedora-rawhide:latest + variables: + NAME: fedora-rawhide x64-opensuse-151: <<: *native_build_default_job_definition - image: quay.io/libvirt/buildenv-libvirt-opensuse-151:latest + variables: + NAME: opensuse-151 x64-ubuntu-1804: <<: *native_build_extra_job_definition - image: quay.io/libvirt/buildenv-libvirt-ubuntu-1804:latest + variables: + NAME: ubuntu-1804 x64-ubuntu-2004: <<: *native_build_default_job_definition - image: quay.io/libvirt/buildenv-libvirt-ubuntu-2004:latest + variables: + NAME: ubuntu-2004 # Cross compiled build jobs armv6l-debian-9: <<: *cross_build_extra_job_definition - image: quay.io/libvirt/buildenv-libvirt-debian-9-cross-armv6l:latest + variables: + NAME: debian-9 + CROSS: armv6l mips64el-debian-9: <<: *cross_build_extra_job_definition - image: quay.io/libvirt/buildenv-libvirt-debian-9-cross-mips64el:latest + variables: + NAME: debian-9 + CROSS: mips64el mips-debian-9: <<: *cross_build_extra_job_definition - image: quay.io/libvirt/buildenv-libvirt-debian-9-cross-mips:latest + variables: + NAME: debian-9 + CROSS: mips aarch64-debian-10: <<: *cross_build_extra_job_definition - image: quay.io/libvirt/buildenv-libvirt-debian-10-cross-aarch64:latest + variables: + NAME: debian-10 + CROSS: aarch64 ppc64le-debian-10: <<: *cross_build_extra_job_definition - image: quay.io/libvirt/buildenv-libvirt-debian-10-cross-ppc64le:latest + variables: + NAME: debian-10 + CROSS: ppc64le s390x-debian-10: <<: *cross_build_default_job_definition - image: 
quay.io/libvirt/buildenv-libvirt-debian-10-cross-s390x:latest + variables: + NAME: debian-10 + CROSS: s390x armv7l-debian-sid: <<: *cross_build_default_job_definition - image: quay.io/libvirt/buildenv-libvirt-debian-sid-cross-armv7l:latest + variables: + NAME: debian-sid + CROSS: armv7l i686-debian-sid: <<: *cross_build_extra_job_definition - image: quay.io/libvirt/buildenv-libvirt-debian-sid-cross-i686:latest + variables: + NAME: debian-sid + CROSS: i686 mipsel-debian-sid: <<: *cross_build_extra_job_definition - image: quay.io/libvirt/buildenv-libvirt-debian-sid-cross-mipsel:latest + variables: + NAME: debian-sid + CROSS: mipsel mingw32-fedora-rawhide: <<: *cross_build_default_job_definition - image: quay.io/libvirt/buildenv-libvirt-fedora-rawhide-cross-mingw32:latest + variables: + NAME: fedora-rawhide + CROSS: mingw32 mingw64-fedora-rawhide: <<: *cross_build_default_job_definition - image: quay.io/libvirt/buildenv-libvirt-fedora-rawhide-cross-mingw64:latest + variables: + NAME: fedora-rawhide + CROSS: mingw64 # This artifact published by this job is downloaded by libvirt.org to -- 2.25.4
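In short, each job stops hard-coding the full image name and only declares a
NAME variable (plus CROSS for the cross builds) that the shared templates
expand. One representative hunk from the diff above, re-wrapped here for
readability:

 x64-centos-7:
   <<: *native_build_default_job_definition
-  image: quay.io/libvirt/buildenv-libvirt-centos-7:latest
+  variables:
+    NAME: centos-7

with the templates themselves gaining
"image: quay.io/libvirt/buildenv-libvirt-$NAME:latest" and
"image: quay.io/libvirt/buildenv-libvirt-$NAME-cross-$CROSS:latest"
respectively.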

On Fri, May 29, 2020 at 03:00:40PM +0200, Andrea Bolognani wrote:
This removes a lot of repetition and makes the configuration much easier to read.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
---
 .gitlab-ci.yml | 79 ++++++++++++++++++++++++++++++++++++--------------
 1 file changed, 57 insertions(+), 22 deletions(-)
Reviewed-by: Daniel P. Berrangé <berrange@redhat.com>

Regards,
Daniel

We're going to build container images as part of the CI pipeline soon, which
means that we need to move all jobs that run in a container image which is
not provided by an external project (such as the one that we use for DCO
checking) later in the pipeline or they will fail.

Signed-off-by: Andrea Bolognani <abologna@redhat.com>
---
 .gitlab-ci.yml | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index 35895a4931..8a5b3372de 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -5,6 +5,7 @@ stages:
   - prebuild
   - native_build
   - cross_build
+  - other

 .script_variables: &script_variables |
   export MAKEFLAGS="-j$(getconf _NPROCESSORS_ONLN)"
@@ -196,7 +197,7 @@ mingw64-fedora-rawhide:
 # be deployed to the web root:
 # https://gitlab.com/libvirt/libvirt/-/jobs/artifacts/master/download?job=webs...
 website:
-  stage: prebuild
+  stage: other
   before_script:
     - *script_variables
   script:
@@ -218,7 +219,7 @@ website:


 codestyle:
-  stage: prebuild
+  stage: other
   before_script:
     - *script_variables
   script:
-- 
2.25.4

On Fri, May 29, 2020 at 03:00:41PM +0200, Andrea Bolognani wrote:
We're going to build container images as part of the CI pipeline soon, which means that we need to move all jobs that run in a container image which is not provided by an external project (such as the one that we use for DCO checking) later in the pipeline or they will fail.
Signed-off-by: Andrea Bolognani <abologna@redhat.com> --- .gitlab-ci.yml | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-)
diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml index 35895a4931..8a5b3372de 100644 --- a/.gitlab-ci.yml +++ b/.gitlab-ci.yml @@ -5,6 +5,7 @@ stages: - prebuild - native_build - cross_build + - other
Can we keep this before the native_build? I want to see quick reports of code style mistakes.

Regards,
Daniel

On Tue, 2020-06-02 at 11:26 +0100, Daniel P. Berrangé wrote:
On Fri, May 29, 2020 at 03:00:41PM +0200, Andrea Bolognani wrote:
+++ b/.gitlab-ci.yml @@ -5,6 +5,7 @@ stages: - prebuild - native_build - cross_build + - other
Can we keep this before the native_build, as I wanted to see quick reports of code style mistakes.
Sure, that should work as long as container builds are still performed earlier. Do you have a good name for the stage in mind? O:-)
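For illustration only, one ordering that would satisfy both requirements —
with "sanity_checks" as a purely hypothetical stage name, not something
settled in this thread — would be:

  stages:
    - prebuild
    - containers
    - sanity_checks   # hypothetical name for the quick-check stage
    - native_build
    - cross_build

i.e. the quick checks still run in a freshly built container, but ahead of
the expensive native and cross builds.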

Instead of using pre-built containers hosted on Quay, build containers as part of the GitLab CI pipeline and upload them to the GitLab container registry for later use. This will not significantly slow down builds, because containers are only rebuilt when the corresponding Dockerfile has been modified. Signed-off-by: Andrea Bolognani <abologna@redhat.com> --- .gitlab-ci.yml | 234 +++++++++++++++++- ci/containers/README.rst | 14 ++ ci/containers/ci-centos-7.Dockerfile | 137 ++++++++++ ci/containers/ci-centos-8.Dockerfile | 108 ++++++++ .../ci-debian-10-cross-aarch64.Dockerfile | 122 +++++++++ .../ci-debian-10-cross-armv6l.Dockerfile | 120 +++++++++ .../ci-debian-10-cross-armv7l.Dockerfile | 121 +++++++++ .../ci-debian-10-cross-i686.Dockerfile | 121 +++++++++ .../ci-debian-10-cross-mips.Dockerfile | 121 +++++++++ .../ci-debian-10-cross-mips64el.Dockerfile | 121 +++++++++ .../ci-debian-10-cross-mipsel.Dockerfile | 121 +++++++++ .../ci-debian-10-cross-ppc64le.Dockerfile | 121 +++++++++ .../ci-debian-10-cross-s390x.Dockerfile | 121 +++++++++ ci/containers/ci-debian-10.Dockerfile | 112 +++++++++ .../ci-debian-9-cross-aarch64.Dockerfile | 126 ++++++++++ .../ci-debian-9-cross-armv6l.Dockerfile | 124 ++++++++++ .../ci-debian-9-cross-armv7l.Dockerfile | 125 ++++++++++ .../ci-debian-9-cross-mips.Dockerfile | 125 ++++++++++ .../ci-debian-9-cross-mips64el.Dockerfile | 125 ++++++++++ .../ci-debian-9-cross-mipsel.Dockerfile | 125 ++++++++++ .../ci-debian-9-cross-ppc64le.Dockerfile | 125 ++++++++++ .../ci-debian-9-cross-s390x.Dockerfile | 125 ++++++++++ ci/containers/ci-debian-9.Dockerfile | 116 +++++++++ .../ci-debian-sid-cross-aarch64.Dockerfile | 122 +++++++++ .../ci-debian-sid-cross-armv6l.Dockerfile | 120 +++++++++ .../ci-debian-sid-cross-armv7l.Dockerfile | 121 +++++++++ .../ci-debian-sid-cross-i686.Dockerfile | 121 +++++++++ .../ci-debian-sid-cross-mips.Dockerfile | 121 +++++++++ .../ci-debian-sid-cross-mips64el.Dockerfile | 121 +++++++++ .../ci-debian-sid-cross-mipsel.Dockerfile | 120 +++++++++ .../ci-debian-sid-cross-ppc64le.Dockerfile | 121 +++++++++ .../ci-debian-sid-cross-s390x.Dockerfile | 121 +++++++++ ci/containers/ci-debian-sid.Dockerfile | 112 +++++++++ ci/containers/ci-fedora-31.Dockerfile | 109 ++++++++ ci/containers/ci-fedora-32.Dockerfile | 109 ++++++++ ...ci-fedora-rawhide-cross-mingw32.Dockerfile | 129 ++++++++++ ...ci-fedora-rawhide-cross-mingw64.Dockerfile | 129 ++++++++++ ci/containers/ci-fedora-rawhide.Dockerfile | 110 ++++++++ ci/containers/ci-opensuse-151.Dockerfile | 109 ++++++++ ci/containers/ci-ubuntu-1804.Dockerfile | 117 +++++++++ ci/containers/ci-ubuntu-2004.Dockerfile | 113 +++++++++ ci/containers/refresh | 43 ++++ 42 files changed, 4973 insertions(+), 5 deletions(-) create mode 100644 ci/containers/README.rst create mode 100644 ci/containers/ci-centos-7.Dockerfile create mode 100644 ci/containers/ci-centos-8.Dockerfile create mode 100644 ci/containers/ci-debian-10-cross-aarch64.Dockerfile create mode 100644 ci/containers/ci-debian-10-cross-armv6l.Dockerfile create mode 100644 ci/containers/ci-debian-10-cross-armv7l.Dockerfile create mode 100644 ci/containers/ci-debian-10-cross-i686.Dockerfile create mode 100644 ci/containers/ci-debian-10-cross-mips.Dockerfile create mode 100644 ci/containers/ci-debian-10-cross-mips64el.Dockerfile create mode 100644 ci/containers/ci-debian-10-cross-mipsel.Dockerfile create mode 100644 ci/containers/ci-debian-10-cross-ppc64le.Dockerfile create mode 100644 ci/containers/ci-debian-10-cross-s390x.Dockerfile create mode 100644 
ci/containers/ci-debian-10.Dockerfile create mode 100644 ci/containers/ci-debian-9-cross-aarch64.Dockerfile create mode 100644 ci/containers/ci-debian-9-cross-armv6l.Dockerfile create mode 100644 ci/containers/ci-debian-9-cross-armv7l.Dockerfile create mode 100644 ci/containers/ci-debian-9-cross-mips.Dockerfile create mode 100644 ci/containers/ci-debian-9-cross-mips64el.Dockerfile create mode 100644 ci/containers/ci-debian-9-cross-mipsel.Dockerfile create mode 100644 ci/containers/ci-debian-9-cross-ppc64le.Dockerfile create mode 100644 ci/containers/ci-debian-9-cross-s390x.Dockerfile create mode 100644 ci/containers/ci-debian-9.Dockerfile create mode 100644 ci/containers/ci-debian-sid-cross-aarch64.Dockerfile create mode 100644 ci/containers/ci-debian-sid-cross-armv6l.Dockerfile create mode 100644 ci/containers/ci-debian-sid-cross-armv7l.Dockerfile create mode 100644 ci/containers/ci-debian-sid-cross-i686.Dockerfile create mode 100644 ci/containers/ci-debian-sid-cross-mips.Dockerfile create mode 100644 ci/containers/ci-debian-sid-cross-mips64el.Dockerfile create mode 100644 ci/containers/ci-debian-sid-cross-mipsel.Dockerfile create mode 100644 ci/containers/ci-debian-sid-cross-ppc64le.Dockerfile create mode 100644 ci/containers/ci-debian-sid-cross-s390x.Dockerfile create mode 100644 ci/containers/ci-debian-sid.Dockerfile create mode 100644 ci/containers/ci-fedora-31.Dockerfile create mode 100644 ci/containers/ci-fedora-32.Dockerfile create mode 100644 ci/containers/ci-fedora-rawhide-cross-mingw32.Dockerfile create mode 100644 ci/containers/ci-fedora-rawhide-cross-mingw64.Dockerfile create mode 100644 ci/containers/ci-fedora-rawhide.Dockerfile create mode 100644 ci/containers/ci-opensuse-151.Dockerfile create mode 100644 ci/containers/ci-ubuntu-1804.Dockerfile create mode 100644 ci/containers/ci-ubuntu-2004.Dockerfile create mode 100755 ci/containers/refresh diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml index 8a5b3372de..0e7917d6cd 100644 --- a/.gitlab-ci.yml +++ b/.gitlab-ci.yml @@ -3,6 +3,7 @@ variables: stages: - prebuild + - containers - native_build - cross_build - other @@ -16,10 +17,35 @@ stages: # Common templates +.container_job_template: &container_job_definition + image: docker:stable + stage: containers + services: + - docker:dind + before_script: + - export TAG="$CI_REGISTRY_IMAGE/ci-$NAME:latest" + - export COMMON_TAG="$CI_REGISTRY/libvirt/libvirt/ci-$NAME:latest" + - docker info + - docker login registry.gitlab.com -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" + script: + - docker pull "$TAG" || docker pull "$COMMON_TAG" || true + - docker build --cache-from "$TAG" --cache-from "$COMMON_TAG" --tag "$TAG" -f "ci/containers/ci-$NAME.Dockerfile" ci/containers + - docker push "$TAG" + after_script: + - docker logout + +# We build many containers which can be useful to debug problems but are not +# needed for the pipeline itself to complete: those sometimes fail, and when +# that happens it's mostly because of temporary issues with Debian sid. 
We +# don't want those failures to affect the overall pipeline status +.container_optional_job_template: &container_optional_job_definition + <<: *container_job_definition + allow_failure: true + # Default native build jobs that are always run .native_build_default_job_template: &native_build_default_job_definition stage: native_build - image: quay.io/libvirt/buildenv-libvirt-$NAME:latest + image: $CI_REGISTRY_IMAGE/ci-$NAME:latest cache: paths: - ccache/ @@ -44,7 +70,7 @@ stages: # Default cross build jobs that are always run .cross_build_default_job_template: &cross_build_default_job_definition stage: cross_build - image: quay.io/libvirt/buildenv-libvirt-$NAME-cross-$CROSS:latest + image: $CI_REGISTRY_IMAGE/ci-$NAME-cross-$CROSS:latest cache: paths: - ccache/ @@ -66,6 +92,204 @@ stages: - /^ci-full-.*$/ +# Container build jobs + +centos-7-container: + <<: *container_job_definition + variables: + NAME: centos-7 + +centos-8-container: + <<: *container_job_definition + variables: + NAME: centos-8 + +debian-9-container: + <<: *container_job_definition + variables: + NAME: debian-9 + +debian-9-cross-aarch64-container: + <<: *container_optional_job_definition + variables: + NAME: debian-9-cross-aarch64 + +debian-9-cross-armv6l-container: + <<: *container_job_definition + variables: + NAME: debian-9-cross-armv6l + +debian-9-cross-armv7l-container: + <<: *container_optional_job_definition + variables: + NAME: debian-9-cross-armv7l + +debian-9-cross-mips-container: + <<: *container_job_definition + variables: + NAME: debian-9-cross-mips + +debian-9-cross-mips64el-container: + <<: *container_job_definition + variables: + NAME: debian-9-cross-mips64el + +debian-9-cross-mipsel-container: + <<: *container_optional_job_definition + variables: + NAME: debian-9-cross-mipsel + +debian-9-cross-ppc64le-container: + <<: *container_optional_job_definition + variables: + NAME: debian-9-cross-ppc64le + +debian-9-cross-s390x-container: + <<: *container_optional_job_definition + variables: + NAME: debian-9-cross-s390x + +debian-10-container: + <<: *container_job_definition + variables: + NAME: debian-10 + +debian-10-cross-aarch64-container: + <<: *container_job_definition + variables: + NAME: debian-10-cross-aarch64 + +debian-10-cross-armv6l-container: + <<: *container_optional_job_definition + variables: + NAME: debian-10-cross-armv6l + +debian-10-cross-armv7l-container: + <<: *container_optional_job_definition + variables: + NAME: debian-10-cross-armv7l + +debian-10-cross-i686-container: + <<: *container_optional_job_definition + variables: + NAME: debian-10-cross-i686 + +debian-10-cross-mips-container: + <<: *container_optional_job_definition + variables: + NAME: debian-10-cross-mips + +debian-10-cross-mips64el-container: + <<: *container_optional_job_definition + variables: + NAME: debian-10-cross-mips64el + +debian-10-cross-mipsel-container: + <<: *container_optional_job_definition + variables: + NAME: debian-10-cross-mipsel + +debian-10-cross-ppc64le-container: + <<: *container_job_definition + variables: + NAME: debian-10-cross-ppc64le + +debian-10-cross-s390x-container: + <<: *container_job_definition + variables: + NAME: debian-10-cross-s390x + +debian-sid-container: + <<: *container_job_definition + variables: + NAME: debian-sid + +debian-sid-cross-aarch64-container: + <<: *container_optional_job_definition + variables: + NAME: debian-sid-cross-aarch64 + +debian-sid-cross-armv6l-container: + <<: *container_optional_job_definition + variables: + NAME: debian-sid-cross-armv6l + 
+debian-sid-cross-armv7l-container: + <<: *container_job_definition + variables: + NAME: debian-sid-cross-armv7l + +debian-sid-cross-i686-container: + <<: *container_job_definition + variables: + NAME: debian-sid-cross-i686 + +debian-sid-cross-mips-container: + <<: *container_optional_job_definition + variables: + NAME: debian-sid-cross-mips + +debian-sid-cross-mips64el-container: + <<: *container_optional_job_definition + variables: + NAME: debian-sid-cross-mips64el + +debian-sid-cross-mipsel-container: + <<: *container_job_definition + variables: + NAME: debian-sid-cross-mipsel + +debian-sid-cross-ppc64le-container: + <<: *container_optional_job_definition + variables: + NAME: debian-sid-cross-ppc64le + +debian-sid-cross-s390x-container: + <<: *container_optional_job_definition + variables: + NAME: debian-sid-cross-s390x + +fedora-31-container: + <<: *container_job_definition + variables: + NAME: fedora-31 + +fedora-32-container: + <<: *container_job_definition + variables: + NAME: fedora-32 + +fedora-rawhide-container: + <<: *container_job_definition + variables: + NAME: fedora-rawhide + +fedora-rawhide-cross-mingw32-container: + <<: *container_job_definition + variables: + NAME: fedora-rawhide-cross-mingw32 + +fedora-rawhide-cross-mingw64-container: + <<: *container_job_definition + variables: + NAME: fedora-rawhide-cross-mingw64 + +opensuse-151-container: + <<: *container_job_definition + variables: + NAME: opensuse-151 + RPM: skip + +ubuntu-1804-container: + <<: *container_job_definition + variables: + NAME: ubuntu-1804 + +ubuntu-2004-container: + <<: *container_job_definition + variables: + NAME: ubuntu-2004 + # Native architecture build + test jobs x64-debian-9: @@ -198,6 +422,7 @@ mingw64-fedora-rawhide: # https://gitlab.com/libvirt/libvirt/-/jobs/artifacts/master/download?job=webs... website: stage: other + image: $CI_REGISTRY_IMAGE/ci-centos-8:latest before_script: - *script_variables script: @@ -208,7 +433,6 @@ website: - $MAKE -C docs install - cd .. - mv vroot/share/doc/libvirt/html/ website - image: quay.io/libvirt/buildenv-libvirt-centos-8:latest artifacts: expose_as: 'Website' name: 'website' @@ -220,6 +444,7 @@ website: codestyle: stage: other + image: $CI_REGISTRY_IMAGE/ci-centos-8:latest before_script: - *script_variables script: @@ -227,7 +452,6 @@ codestyle: - cd build - ../autogen.sh || (cat config.log && exit 1) - $MAKE syntax-check - image: quay.io/libvirt/buildenv-libvirt-centos-8:latest # This artifact published by this job is downloaded to push to Weblate @@ -235,6 +459,7 @@ codestyle: # https://gitlab.com/libvirt/libvirt/-/jobs/artifacts/master/download?job=potf... potfile: stage: prebuild + image: $CI_REGISTRY_IMAGE/ci-centos-8:latest only: - master before_script: @@ -247,7 +472,6 @@ potfile: - $MAKE -C po libvirt.pot - cd .. - mv build/po/libvirt.pot libvirt.pot - image: quay.io/libvirt/buildenv-libvirt-centos-8:latest artifacts: expose_as: 'Potfile' name: 'potfile' diff --git a/ci/containers/README.rst b/ci/containers/README.rst new file mode 100644 index 0000000000..530897e311 --- /dev/null +++ b/ci/containers/README.rst @@ -0,0 +1,14 @@ +CI job assets +============= + +This directory contains assets used in the automated CI jobs, most +notably the Dockerfiles used to build container images in which the +CI jobs then run. 
+ +The ``refresh`` script is used to re-create the Dockerfiles using the +``lcitool`` command that is provided by repo +https://gitlab.com/libvirt/libvirt-ci + +The containers are built during the CI process and cached in the GitLab +container registry of the project doing the build. The cached containers +can be deleted at any time and will be correctly rebuilt. diff --git a/ci/containers/ci-centos-7.Dockerfile b/ci/containers/ci-centos-7.Dockerfile new file mode 100644 index 0000000000..abbbdcc47a --- /dev/null +++ b/ci/containers/ci-centos-7.Dockerfile @@ -0,0 +1,137 @@ +FROM centos:7 + +RUN echo -e '[openvz]\n\ +name=OpenVZ addons\n\ +baseurl=https://download.openvz.org/virtuozzo/releases/openvz-7.0.11-235/x86_64/os/\n\ +enabled=1\n\ +gpgcheck=1\n\ +skip_if_unavailable=0\n\ +metadata_expire=6h\n\ +priority=90\n\ +includepkgs=libprl*' > /etc/yum.repos.d/openvz.repo && \ + echo -e '-----BEGIN PGP PUBLIC KEY BLOCK-----\n\ +Version: GnuPG v2.0.22 (GNU/Linux)\n\ +\n\ +mI0EVl80nQEEAKrEeyeTCwrzS9kYedZ/sAc/GUqlb81C7pA9SaR3fyck5mVw1Ogk\n\ +YdmNBPM2kY7QDxR9F0EpSpnxSCAXZXugsQ8KzZ0DRLVeBDQyGs9IGK5hI0zzxIil\n\ +BzfvIexLiQQhLy7YlIi8Jt/uUqKkW0pIMNMGcduY97VATtczpncpkmSzABEBAAG0\n\ +SFZpcnR1b3p6byBUZWFtIChHUEcga2V5IHNpZ25hdHVyZSBmb3IgcGFja2FnZXMp\n\ +IDxzZWN1cml0eUB2aXJ0dW96em8uY29tPoi5BBMBAgAjBQJWXzSdAhsDBwsJCAcD\n\ +AgEGFQgCCQoLBBYCAwECHgECF4AACgkQygt9GUTNrSruIgP/er70Eyo73A1gfrjv\n\ +oPUkyo4rslVRZu3qqCwoMFtJc/Z/UxWgEka1buorlcGLa6eO/EZ49c0n+KGa4Kvt\n\ +EUboIq0yEu5i0FyAj92ifm+hNhoAbGfm0cZ4/fD0oGr3l8OsQo4+iHX4xAPwFe7Y\n\ +zABuB8I1ZDZ4OIp5tDfTTuF2LT24jQRWXzSdAQQAog2Aqb+Ptl68O7cQhWLjVGkj\n\ +yyigZrdeReLx3HloKJPBeQ/kA6uvMJc/IYS3uppMWXv9v+QenS6uhP1TUJ2k9FvM\n\ +t94MQZfALN7Vpf8AF+UeWu4Ru+y4BNzcFhrPhIFNFChOR2QqW6FkgE57D9I177NC\n\ +oJMyrlNe8wcGa178An8AEQEAAYifBBgBAgAJBQJWXzSdAhsMAAoJEMoLfRlEza0q\n\ +bKwD/3+OFVIEXnIv5XgdGRNX5fHggsUN1bb8gva7HANRlKdd4LD8foDM3F/yv/3V\n\ +igG14D5EjKz56SaBDNgiI4++hOzb2M8jhAsR86jxkXFrrP1U3ZNRKg6av9DPFAPS\n\ +WEiJKtQrZDJloqtyi/mmRa1VsV7RYR0VPJjhK/R8EQ7Ysshy\n\ +=fRMg\n\ +-----END PGP PUBLIC KEY BLOCK-----' > /etc/pki/rpm-gpg/RPM-GPG-KEY-OpenVZ && \ + rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-OpenVZ && \ + yum install -y epel-release && \ + yum update -y && \ + yum install -y \ + audit-libs-devel \ + augeas \ + autoconf \ + automake \ + avahi-devel \ + bash \ + bash-completion \ + ca-certificates \ + ccache \ + chrony \ + cyrus-sasl-devel \ + dbus-devel \ + device-mapper-devel \ + dnsmasq \ + dwarves \ + ebtables \ + fuse-devel \ + gcc \ + gdb \ + gettext \ + gettext-devel \ + git \ + glib2-devel \ + glibc-common \ + glibc-devel \ + glusterfs-api-devel \ + gnutls-devel \ + iproute \ + iscsi-initiator-utils \ + kmod \ + libacl-devel \ + libattr-devel \ + libblkid-devel \ + libcap-ng-devel \ + libcurl-devel \ + libiscsi-devel \ + libnl3-devel \ + libpcap-devel \ + libpciaccess-devel \ + libprlsdk-devel \ + librbd1-devel \ + libselinux-devel \ + libssh-devel \ + libssh2-devel \ + libtirpc-devel \ + libtool \ + libudev-devel \ + libwsman-devel \ + libxml2 \ + libxml2-devel \ + libxslt \ + lsof \ + lvm2 \ + make \ + ncurses-devel \ + net-tools \ + netcf-devel \ + nfs-utils \ + ninja-build \ + numactl-devel \ + numad \ + parted \ + parted-devel \ + patch \ + perl \ + pkgconfig \ + polkit \ + python3 \ + python3-pip \ + python3-setuptools \ + python3-wheel \ + python36-docutils \ + qemu-img \ + radvd \ + readline-devel \ + rpm-build \ + sanlock-devel \ + screen \ + scrub \ + strace \ + sudo \ + systemtap-sdt-devel \ + vim \ + wireshark-devel \ + xfsprogs-devel \ + yajl-devel && \ + yum 
autoremove -y && \ + yum clean all -y && \ + mkdir -p /usr/libexec/ccache-wrappers && \ + ln -s /usr/bin/ccache /usr/libexec/ccache-wrappers/cc && \ + ln -s /usr/bin/ccache /usr/libexec/ccache-wrappers/$(basename /usr/bin/gcc) + +RUN pip3 install \ + meson==0.49.0 + +ENV LANG "en_US.UTF-8" + +ENV MAKE "/usr/bin/make" +ENV NINJA "/usr/bin/ninja-build" +ENV PYTHON "/usr/bin/python3" + +ENV CCACHE_WRAPPERSDIR "/usr/libexec/ccache-wrappers" [... many more Dockerfiles here ...] diff --git a/ci/containers/refresh b/ci/containers/refresh new file mode 100755 index 0000000000..8c00363ae1 --- /dev/null +++ b/ci/containers/refresh @@ -0,0 +1,43 @@ +#!/bin/sh + +if test -z "$1" +then + echo "syntax: $0 PATH-TO-LCITOOL" + exit 1 +fi + +LCITOOL=$1 + +if ! test -x "$LCITOOL" +then + echo "$LCITOOL is not executable" + exit 1 +fi + +HOSTS=$($LCITOOL hosts | grep -v freebsd) + +for host in $HOSTS +do + name=${host#libvirt-} + + case "$name" in + fedora-rawhide) + for cross in mingw32 mingw64 + do + $LCITOOL dockerfile $host libvirt --cross $cross >ci-$name-cross-$cross.Dockerfile + done + ;; + debian-*) + for cross in aarch64 armv6l armv7l i686 mips mips64el mipsel ppc64le s390x + do + if test "$name" = "debian-9" && test "$cross" = "i686" + then + continue + fi + $LCITOOL dockerfile $host libvirt --cross $cross >ci-$name-cross-$cross.Dockerfile + done + ;; + esac + + $LCITOOL dockerfile $host libvirt >ci-$name.Dockerfile +done -- 2.25.4
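The heart of the change is the new container job template, quoted again from
the diff above and re-wrapped here for readability: it pulls the previously
published image to prime the layer cache, rebuilds from the checked-in
Dockerfile, and pushes the result to the project's GitLab container registry.

  .container_job_template: &container_job_definition
    image: docker:stable
    stage: containers
    services:
      - docker:dind
    before_script:
      - export TAG="$CI_REGISTRY_IMAGE/ci-$NAME:latest"
      - export COMMON_TAG="$CI_REGISTRY/libvirt/libvirt/ci-$NAME:latest"
      - docker info
      - docker login registry.gitlab.com -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD"
    script:
      - docker pull "$TAG" || docker pull "$COMMON_TAG" || true
      - docker build --cache-from "$TAG" --cache-from "$COMMON_TAG" --tag "$TAG" -f "ci/containers/ci-$NAME.Dockerfile" ci/containers
      - docker push "$TAG"
    after_script:
      - docker logout

The build and check jobs then point their image at
"$CI_REGISTRY_IMAGE/ci-$NAME:latest" instead of the Quay images, and the
--cache-from layer reuse is what keeps rebuilds cheap when the Dockerfiles
are unchanged.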

On Fri, May 29, 2020 at 03:00:42PM +0200, Andrea Bolognani wrote:
Instead of using pre-built containers hosted on Quay, build containers as part of the GitLab CI pipeline and upload them to the GitLab container registry for later use.
This will not significantly slow down builds, because containers are only rebuilt when the corresponding Dockerfile has been modified.
Signed-off-by: Andrea Bolognani <abologna@redhat.com> --- .gitlab-ci.yml | 234 +++++++++++++++++- ci/containers/README.rst | 14 ++ ci/containers/ci-centos-7.Dockerfile | 137 ++++++++++ ci/containers/ci-centos-8.Dockerfile | 108 ++++++++ .../ci-debian-10-cross-aarch64.Dockerfile | 122 +++++++++ .../ci-debian-10-cross-armv6l.Dockerfile | 120 +++++++++ .../ci-debian-10-cross-armv7l.Dockerfile | 121 +++++++++ .../ci-debian-10-cross-i686.Dockerfile | 121 +++++++++ .../ci-debian-10-cross-mips.Dockerfile | 121 +++++++++ .../ci-debian-10-cross-mips64el.Dockerfile | 121 +++++++++ .../ci-debian-10-cross-mipsel.Dockerfile | 121 +++++++++ .../ci-debian-10-cross-ppc64le.Dockerfile | 121 +++++++++ .../ci-debian-10-cross-s390x.Dockerfile | 121 +++++++++ ci/containers/ci-debian-10.Dockerfile | 112 +++++++++ .../ci-debian-9-cross-aarch64.Dockerfile | 126 ++++++++++ .../ci-debian-9-cross-armv6l.Dockerfile | 124 ++++++++++ .../ci-debian-9-cross-armv7l.Dockerfile | 125 ++++++++++ .../ci-debian-9-cross-mips.Dockerfile | 125 ++++++++++ .../ci-debian-9-cross-mips64el.Dockerfile | 125 ++++++++++ .../ci-debian-9-cross-mipsel.Dockerfile | 125 ++++++++++ .../ci-debian-9-cross-ppc64le.Dockerfile | 125 ++++++++++ .../ci-debian-9-cross-s390x.Dockerfile | 125 ++++++++++ ci/containers/ci-debian-9.Dockerfile | 116 +++++++++ .../ci-debian-sid-cross-aarch64.Dockerfile | 122 +++++++++ .../ci-debian-sid-cross-armv6l.Dockerfile | 120 +++++++++ .../ci-debian-sid-cross-armv7l.Dockerfile | 121 +++++++++ .../ci-debian-sid-cross-i686.Dockerfile | 121 +++++++++ .../ci-debian-sid-cross-mips.Dockerfile | 121 +++++++++ .../ci-debian-sid-cross-mips64el.Dockerfile | 121 +++++++++ .../ci-debian-sid-cross-mipsel.Dockerfile | 120 +++++++++ .../ci-debian-sid-cross-ppc64le.Dockerfile | 121 +++++++++ .../ci-debian-sid-cross-s390x.Dockerfile | 121 +++++++++ ci/containers/ci-debian-sid.Dockerfile | 112 +++++++++ ci/containers/ci-fedora-31.Dockerfile | 109 ++++++++ ci/containers/ci-fedora-32.Dockerfile | 109 ++++++++ ...ci-fedora-rawhide-cross-mingw32.Dockerfile | 129 ++++++++++ ...ci-fedora-rawhide-cross-mingw64.Dockerfile | 129 ++++++++++ ci/containers/ci-fedora-rawhide.Dockerfile | 110 ++++++++ ci/containers/ci-opensuse-151.Dockerfile | 109 ++++++++ ci/containers/ci-ubuntu-1804.Dockerfile | 117 +++++++++ ci/containers/ci-ubuntu-2004.Dockerfile | 113 +++++++++ ci/containers/refresh | 43 ++++ 42 files changed, 4973 insertions(+), 5 deletions(-) create mode 100644 ci/containers/README.rst create mode 100644 ci/containers/ci-centos-7.Dockerfile create mode 100644 ci/containers/ci-centos-8.Dockerfile create mode 100644 ci/containers/ci-debian-10-cross-aarch64.Dockerfile create mode 100644 ci/containers/ci-debian-10-cross-armv6l.Dockerfile create mode 100644 ci/containers/ci-debian-10-cross-armv7l.Dockerfile create mode 100644 ci/containers/ci-debian-10-cross-i686.Dockerfile create mode 100644 ci/containers/ci-debian-10-cross-mips.Dockerfile create mode 100644 ci/containers/ci-debian-10-cross-mips64el.Dockerfile create mode 100644 ci/containers/ci-debian-10-cross-mipsel.Dockerfile create mode 100644 ci/containers/ci-debian-10-cross-ppc64le.Dockerfile create mode 100644 ci/containers/ci-debian-10-cross-s390x.Dockerfile create mode 100644 ci/containers/ci-debian-10.Dockerfile create mode 100644 ci/containers/ci-debian-9-cross-aarch64.Dockerfile create mode 100644 ci/containers/ci-debian-9-cross-armv6l.Dockerfile create mode 100644 ci/containers/ci-debian-9-cross-armv7l.Dockerfile create mode 100644 ci/containers/ci-debian-9-cross-mips.Dockerfile 
create mode 100644 ci/containers/ci-debian-9-cross-mips64el.Dockerfile create mode 100644 ci/containers/ci-debian-9-cross-mipsel.Dockerfile create mode 100644 ci/containers/ci-debian-9-cross-ppc64le.Dockerfile create mode 100644 ci/containers/ci-debian-9-cross-s390x.Dockerfile create mode 100644 ci/containers/ci-debian-9.Dockerfile create mode 100644 ci/containers/ci-debian-sid-cross-aarch64.Dockerfile create mode 100644 ci/containers/ci-debian-sid-cross-armv6l.Dockerfile create mode 100644 ci/containers/ci-debian-sid-cross-armv7l.Dockerfile create mode 100644 ci/containers/ci-debian-sid-cross-i686.Dockerfile create mode 100644 ci/containers/ci-debian-sid-cross-mips.Dockerfile create mode 100644 ci/containers/ci-debian-sid-cross-mips64el.Dockerfile create mode 100644 ci/containers/ci-debian-sid-cross-mipsel.Dockerfile create mode 100644 ci/containers/ci-debian-sid-cross-ppc64le.Dockerfile create mode 100644 ci/containers/ci-debian-sid-cross-s390x.Dockerfile create mode 100644 ci/containers/ci-debian-sid.Dockerfile create mode 100644 ci/containers/ci-fedora-31.Dockerfile create mode 100644 ci/containers/ci-fedora-32.Dockerfile create mode 100644 ci/containers/ci-fedora-rawhide-cross-mingw32.Dockerfile create mode 100644 ci/containers/ci-fedora-rawhide-cross-mingw64.Dockerfile create mode 100644 ci/containers/ci-fedora-rawhide.Dockerfile create mode 100644 ci/containers/ci-opensuse-151.Dockerfile create mode 100644 ci/containers/ci-ubuntu-1804.Dockerfile create mode 100644 ci/containers/ci-ubuntu-2004.Dockerfile create mode 100755 ci/containers/refresh
diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml index 8a5b3372de..0e7917d6cd 100644 --- a/.gitlab-ci.yml +++ b/.gitlab-ci.yml @@ -3,6 +3,7 @@ variables:
stages: - prebuild + - containers - native_build - cross_build - other @@ -16,10 +17,35 @@ stages:
# Common templates
+.container_job_template: &container_job_definition + image: docker:stable + stage: containers + services: + - docker:dind + before_script: + - export TAG="$CI_REGISTRY_IMAGE/ci-$NAME:latest" + - export COMMON_TAG="$CI_REGISTRY/libvirt/libvirt/ci-$NAME:latest" + - docker info + - docker login registry.gitlab.com -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" + script: + - docker pull "$TAG" || docker pull "$COMMON_TAG" || true + - docker build --cache-from "$TAG" --cache-from "$COMMON_TAG" --tag "$TAG" -f "ci/containers/ci-$NAME.Dockerfile" ci/containers + - docker push "$TAG" + after_script: + - docker logout + +# We build many containers which can be useful to debug problems but are not +# needed for the pipeline itself to complete: those sometimes fail, and when +# that happens it's mostly because of temporary issues with Debian sid. We +# don't want those failures to affect the overall pipeline status +.container_optional_job_template: &container_optional_job_definition + <<: *container_job_definition + allow_failure: true
I don't think we should be building container images that we're not going to be using in any of the jobs, as it can only ever slow down the build overall.
+ # Default native build jobs that are always run .native_build_default_job_template: &native_build_default_job_definition stage: native_build - image: quay.io/libvirt/buildenv-libvirt-$NAME:latest + image: $CI_REGISTRY_IMAGE/ci-$NAME:latest cache: paths: - ccache/ @@ -44,7 +70,7 @@ stages: # Default cross build jobs that are always run .cross_build_default_job_template: &cross_build_default_job_definition stage: cross_build - image: quay.io/libvirt/buildenv-libvirt-$NAME-cross-$CROSS:latest + image: $CI_REGISTRY_IMAGE/ci-$NAME-cross-$CROSS:latest cache: paths: - ccache/ @@ -66,6 +92,204 @@ stages: - /^ci-full-.*$/
+# Container build jobs + +centos-7-container:
IMHO we should name these to match the build job, e.g. arch, then distro:

  x64-centos-7-container
+ <<: *container_job_definition + variables: + NAME: centos-7 + +centos-8-container: + <<: *container_job_definition + variables: + NAME: centos-8 + +debian-9-container: + <<: *container_job_definition + variables: + NAME: debian-9 + +debian-9-cross-aarch64-container: + <<: *container_optional_job_definition + variables: + NAME: debian-9-cross-aarch64 + +debian-9-cross-armv6l-container: + <<: *container_job_definition + variables: + NAME: debian-9-cross-armv6l
This container, and many others, are only used by the "extra" build jobs, so they should be subject to the same filtering.
+ +debian-9-cross-armv7l-container: + <<: *container_optional_job_definition + variables: + NAME: debian-9-cross-armv7l + +debian-9-cross-mips-container: + <<: *container_job_definition + variables: + NAME: debian-9-cross-mips + +debian-9-cross-mips64el-container: + <<: *container_job_definition + variables: + NAME: debian-9-cross-mips64el + +debian-9-cross-mipsel-container: + <<: *container_optional_job_definition + variables: + NAME: debian-9-cross-mipsel + +debian-9-cross-ppc64le-container: + <<: *container_optional_job_definition + variables: + NAME: debian-9-cross-ppc64le + +debian-9-cross-s390x-container: + <<: *container_optional_job_definition + variables: + NAME: debian-9-cross-s390x + +debian-10-container: + <<: *container_job_definition + variables: + NAME: debian-10 + +debian-10-cross-aarch64-container: + <<: *container_job_definition + variables: + NAME: debian-10-cross-aarch64 + +debian-10-cross-armv6l-container: + <<: *container_optional_job_definition + variables: + NAME: debian-10-cross-armv6l + +debian-10-cross-armv7l-container: + <<: *container_optional_job_definition + variables: + NAME: debian-10-cross-armv7l + +debian-10-cross-i686-container: + <<: *container_optional_job_definition + variables: + NAME: debian-10-cross-i686 + +debian-10-cross-mips-container: + <<: *container_optional_job_definition + variables: + NAME: debian-10-cross-mips + +debian-10-cross-mips64el-container: + <<: *container_optional_job_definition + variables: + NAME: debian-10-cross-mips64el + +debian-10-cross-mipsel-container: + <<: *container_optional_job_definition + variables: + NAME: debian-10-cross-mipsel + +debian-10-cross-ppc64le-container: + <<: *container_job_definition + variables: + NAME: debian-10-cross-ppc64le + +debian-10-cross-s390x-container: + <<: *container_job_definition + variables: + NAME: debian-10-cross-s390x + +debian-sid-container: + <<: *container_job_definition + variables: + NAME: debian-sid + +debian-sid-cross-aarch64-container: + <<: *container_optional_job_definition + variables: + NAME: debian-sid-cross-aarch64 + +debian-sid-cross-armv6l-container: + <<: *container_optional_job_definition + variables: + NAME: debian-sid-cross-armv6l + +debian-sid-cross-armv7l-container: + <<: *container_job_definition + variables: + NAME: debian-sid-cross-armv7l + +debian-sid-cross-i686-container: + <<: *container_job_definition + variables: + NAME: debian-sid-cross-i686 + +debian-sid-cross-mips-container: + <<: *container_optional_job_definition + variables: + NAME: debian-sid-cross-mips + +debian-sid-cross-mips64el-container: + <<: *container_optional_job_definition + variables: + NAME: debian-sid-cross-mips64el + +debian-sid-cross-mipsel-container: + <<: *container_job_definition + variables: + NAME: debian-sid-cross-mipsel + +debian-sid-cross-ppc64le-container: + <<: *container_optional_job_definition + variables: + NAME: debian-sid-cross-ppc64le + +debian-sid-cross-s390x-container: + <<: *container_optional_job_definition + variables: + NAME: debian-sid-cross-s390x + +fedora-31-container: + <<: *container_job_definition + variables: + NAME: fedora-31 + +fedora-32-container: + <<: *container_job_definition + variables: + NAME: fedora-32 + +fedora-rawhide-container: + <<: *container_job_definition + variables: + NAME: fedora-rawhide + +fedora-rawhide-cross-mingw32-container: + <<: *container_job_definition + variables: + NAME: fedora-rawhide-cross-mingw32 + +fedora-rawhide-cross-mingw64-container: + <<: *container_job_definition + variables: + NAME: fedora-rawhide-cross-mingw64 
+ +opensuse-151-container: + <<: *container_job_definition + variables: + NAME: opensuse-151 + RPM: skip + +ubuntu-1804-container: + <<: *container_job_definition + variables: + NAME: ubuntu-1804 + +ubuntu-2004-container: + <<: *container_job_definition + variables: + NAME: ubuntu-2004 + # Native architecture build + test jobs
x64-debian-9: @@ -198,6 +422,7 @@ mingw64-fedora-rawhide: # https://gitlab.com/libvirt/libvirt/-/jobs/artifacts/master/download?job=webs... website: stage: other + image: $CI_REGISTRY_IMAGE/ci-centos-8:latest before_script: - *script_variables script: @@ -208,7 +433,6 @@ website: - $MAKE -C docs install - cd .. - mv vroot/share/doc/libvirt/html/ website - image: quay.io/libvirt/buildenv-libvirt-centos-8:latest artifacts: expose_as: 'Website' name: 'website' @@ -220,6 +444,7 @@ website:
codestyle: stage: other + image: $CI_REGISTRY_IMAGE/ci-centos-8:latest before_script: - *script_variables script: @@ -227,7 +452,6 @@ codestyle: - cd build - ../autogen.sh || (cat config.log && exit 1) - $MAKE syntax-check - image: quay.io/libvirt/buildenv-libvirt-centos-8:latest
# This artifact published by this job is downloaded to push to Weblate @@ -235,6 +459,7 @@ codestyle: # https://gitlab.com/libvirt/libvirt/-/jobs/artifacts/master/download?job=potf... potfile: stage: prebuild + image: $CI_REGISTRY_IMAGE/ci-centos-8:latest only: - master before_script: @@ -247,7 +472,6 @@ potfile: - $MAKE -C po libvirt.pot - cd .. - mv build/po/libvirt.pot libvirt.pot - image: quay.io/libvirt/buildenv-libvirt-centos-8:latest artifacts: expose_as: 'Potfile' name: 'potfile'
Regards,
Daniel

On Tue, 2020-06-02 at 11:33 +0100, Daniel P. Berrangé wrote:
On Fri, May 29, 2020 at 03:00:42PM +0200, Andrea Bolognani wrote:
+# We build many containers which can be useful to debug problems but are not +# needed for the pipeline itself to complete: those sometimes fail, and when +# that happens it's mostly because of temporary issues with Debian sid. We +# don't want those failures to affect the overall pipeline status +.container_optional_job_template: &container_optional_job_definition + <<: *container_job_definition + allow_failure: true
I don't think we should be building container images that we're not going to be using in any of the jobs, as it can only ever slow down the build overall.
These same containers are also available for use outside of CI, e.g. with 'make ci-build', so I think we should keep building them.

As for slowing down builds, that still only applies to the first build after Dockerfiles are updated, so I don't think it ultimately matters very much.
+# Container build jobs + +centos-7-container:
IMHO we should name these to match the build job. eg arch, then distro
x64-centos-7-container
Okay.
+debian-9-cross-armv6l-container: + <<: *container_job_definition + variables: + NAME: debian-9-cross-armv6l
This container, and many others are only used by the "extra" build jobs, so should be subject to the same filtering.
Okay, even though, as we discussed separately, the whole idea of splitting jobs between regular and extra might be more trouble than it's worth, and more confusing than helpful.

On Tue, Jun 02, 2020 at 01:10:08PM +0200, Andrea Bolognani wrote:
On Tue, 2020-06-02 at 11:33 +0100, Daniel P. Berrangé wrote:
On Fri, May 29, 2020 at 03:00:42PM +0200, Andrea Bolognani wrote:
+# We build many containers which can be useful to debug problems but are not +# needed for the pipeline itself to complete: those sometimes fail, and when +# that happens it's mostly because of temporary issues with Debian sid. We +# don't want those failures to affect the overall pipeline status +.container_optional_job_template: &container_optional_job_definition + <<: *container_job_definition + allow_failure: true
I don't think we should be building container images that we're not going to be using in any of the jobs, as it can only ever slow down the build overall.
These same containers are also available for use outside of CI, eg. with 'make ci-build', so I think we should keep building them.
That only needs them built on the master branch of the main repo though, not every branch in every fork.
As for slowing down builds, that still only applies to the first build after Dockerfiles are updated, so I don't think it ultimately matters very much.
I'd expect a rebuild if the distro base image changes, which could be fairly often for rawhide-like distros.
+# Container build jobs + +centos-7-container:
IMHO we should name these to match the build job. eg arch, then distro
x64-centos-7-container
Okay.
+debian-9-cross-armv6l-container: + <<: *container_job_definition + variables: + NAME: debian-9-cross-armv6l
This container, and many others are only used by the "extra" build jobs, so should be subject to the same filtering.
Okay, even though as we discussed separately the whole idea of splitting jobs between regular and extra might be more trouble than it's worth and be more confusing than helpful.
Regards,
Daniel

On Tue, 2020-06-02 at 12:23 +0100, Daniel P. Berrangé wrote:
On Tue, Jun 02, 2020 at 01:10:08PM +0200, Andrea Bolognani wrote:
On Tue, 2020-06-02 at 11:33 +0100, Daniel P. Berrangé wrote:
I don't think we should be building container images that we're not going to be using in any of the jobs, as it can only ever slow down the build overall.
These same containers are also available for use outside of CI, eg. with 'make ci-build', so I think we should keep building them.
That only needs them built on the master branch of the main repo though, not every branch in every fork
Fair enough. So what you're suggesting is something like

  .container_optional_job_template: &container_optional_job_definition
    <<: *container_job_definition
    allow_failure: true
    except:
      variables:
        - $CI_PROJECT_NAMESPACE != libvirt
    only:
      - master

correct?
As for slowing down builds, that still only applies to the first build after Dockerfiles are updated, so I don't think it ultimately matters very much.
I'd expect a rebiuld if the distro base image changes which could be fairly often for the rawhide like distros.
This advantage might be cancelled out by the fact that only a limited number of shared runners is available, so if for example we have access to 5 runners, whether we run 6 or 10 jobs will make no difference in terms of total pipeline completion time.

On Tue, Jun 02, 2020 at 02:45:30PM +0200, Andrea Bolognani wrote:
On Tue, 2020-06-02 at 12:23 +0100, Daniel P. Berrangé wrote:
On Tue, Jun 02, 2020 at 01:10:08PM +0200, Andrea Bolognani wrote:
On Tue, 2020-06-02 at 11:33 +0100, Daniel P. Berrangé wrote:
I don't think we should be building container images that we're not going to be using in any of the jobs, as it can only ever slow down the build overall.
These same containers are also available for use outside of CI, eg. with 'make ci-build', so I think we should keep building them.
That only needs them built on the master branch of the main repo though, not every branch in every fork
Fair enough. So what you're suggesting is something like
  .container_optional_job_template: &container_optional_job_definition
    <<: *container_job_definition
    allow_failure: true
    except:
      variables:
        - $CI_PROJECT_NAMESPACE != libvirt
    only:
      - master
correct?
Perhaps just matching what we do with extra builds:

  only:
    - master
    - /^ci-full-.*$/

so users can still get the full set of builds in their fork if they push to a certain branch.

Regards,
Daniel
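Concretely — a sketch only, with ".container_extra_job_template" as a
placeholder name rather than anything from the patch — the optional container
jobs could then inherit something like:

  .container_extra_job_template: &container_extra_job_definition   # placeholder name
    <<: *container_job_definition
    only:
      - master
      - /^ci-full-.*$/

so they are skipped on ordinary branches in forks, but still built on master
and on ci-full-* branches, matching the existing filtering of the extra build
jobs.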

The ci-* targets need to know where our container images are stored and how they are called to work, so now that we use the GitLab container registry instead of Quay some changes are necessary. Signed-off-by: Andrea Bolognani <abologna@redhat.com> --- ci/Makefile | 16 ++++++++++++---- ci/list-images.sh | 24 ++++++------------------ 2 files changed, 18 insertions(+), 22 deletions(-) diff --git a/ci/Makefile b/ci/Makefile index bc1dac11e3..e1a5faaba6 100644 --- a/ci/Makefile +++ b/ci/Makefile @@ -47,10 +47,13 @@ CI_PREPARE_SCRIPT = $(CI_ROOTDIR)/prepare.sh # Script containing build instructions CI_BUILD_SCRIPT = $(CI_ROOTDIR)/build.sh +# Registry where container images are stored +CI_IMAGE_REGISTRY = registry.gitlab.com + # Location of the container images we're going to pull # Can be useful to overridde to use a locally built # image instead -CI_IMAGE_PREFIX = quay.io/libvirt/buildenv-libvirt- +CI_IMAGE_PREFIX = libvirt/libvirt/ci- # The default tag is ':latest' but if the container # repo above uses different conventions this can override it @@ -213,7 +216,12 @@ ci-prepare-tree: ci-check-engine fi ci-run-command@%: ci-prepare-tree - $(CI_ENGINE) run $(CI_ENGINE_ARGS) $(CI_IMAGE_PREFIX)$*$(CI_IMAGE_TAG) \ + image=; \ + if test "$(CI_IMAGE_REGISTRY)"; then \ + image="$${image}$(CI_IMAGE_REGISTRY)/"; \ + fi; \ + image="$${image}$(CI_IMAGE_PREFIX)$*$(CI_IMAGE_TAG)"; \ + $(CI_ENGINE) run $(CI_ENGINE_ARGS) "$$image" \ /bin/bash -c ' \ $(CI_USER_HOME)/prepare || exit 1; \ sudo \ @@ -243,11 +251,11 @@ ci-list-images: @echo @echo "Available x86 container images:" @echo - @sh list-images.sh "$(CI_ENGINE)" "$(CI_IMAGE_PREFIX)" | grep -v cross + @sh list-images.sh "$(CI_IMAGE_PREFIX)" | grep -v cross @echo @echo "Available cross-compiler container images:" @echo - @sh list-images.sh "$(CI_ENGINE)" "$(CI_IMAGE_PREFIX)" | grep cross + @sh list-images.sh "$(CI_IMAGE_PREFIX)" | grep cross @echo ci-help: diff --git a/ci/list-images.sh b/ci/list-images.sh index 35efdb6982..9ae2f60a95 100644 --- a/ci/list-images.sh +++ b/ci/list-images.sh @@ -1,26 +1,14 @@ #!/bin/sh -engine="$1" -prefix="$2" +prefix="$1" -do_podman() { - # Podman freaks out if the search term ends with a dash, which ours - # by default does, so let's strip it. The repository name is the - # second field in the output, and it already starts with the registry - podman search --limit 100 "${prefix%-}" | while read _ repo _; do - echo "$repo" - done -} +PROJECT_ID=192693 -do_docker() { - # Docker doesn't include the registry name in the output, so we have - # to add it. The repository name is the first field in the output - registry="${prefix%%/*}" - docker search --limit 100 "$prefix" | while read repo _; do - echo "$registry/$repo" - done +all_repos() { + curl -s "https://gitlab.com/api/v4/projects/$PROJECT_ID/registry/repositories?per_pag..." \ + | tr , '\n' | grep '"path":' | sed 's,"path":",,g;s,"$,,g' } -"do_$engine" | grep "^$prefix" | sed "s,^$prefix,,g" | while read repo; do +all_repos | grep "^$prefix" | sed "s,^$prefix,,g" | while read repo; do echo " $repo" done | sort -u -- 2.25.4

On Fri, May 29, 2020 at 03:00:43PM +0200, Andrea Bolognani wrote:
The ci-* targets need to know where our container images are stored and how they are called to work, so now that we use the GitLab container registry instead of Quay some changes are necessary.
Signed-off-by: Andrea Bolognani <abologna@redhat.com> --- ci/Makefile | 16 ++++++++++++---- ci/list-images.sh | 24 ++++++------------------ 2 files changed, 18 insertions(+), 22 deletions(-)
diff --git a/ci/Makefile b/ci/Makefile index bc1dac11e3..e1a5faaba6 100644 --- a/ci/Makefile +++ b/ci/Makefile @@ -47,10 +47,13 @@ CI_PREPARE_SCRIPT = $(CI_ROOTDIR)/prepare.sh # Script containing build instructions CI_BUILD_SCRIPT = $(CI_ROOTDIR)/build.sh
+# Registry where container images are stored +CI_IMAGE_REGISTRY = registry.gitlab.com + # Location of the container images we're going to pull # Can be useful to overridde to use a locally built # image instead -CI_IMAGE_PREFIX = quay.io/libvirt/buildenv-libvirt- +CI_IMAGE_PREFIX = libvirt/libvirt/ci-
# The default tag is ':latest' but if the container # repo above uses different conventions this can override it @@ -213,7 +216,12 @@ ci-prepare-tree: ci-check-engine fi
ci-run-command@%: ci-prepare-tree - $(CI_ENGINE) run $(CI_ENGINE_ARGS) $(CI_IMAGE_PREFIX)$*$(CI_IMAGE_TAG) \ + image=; \ + if test "$(CI_IMAGE_REGISTRY)"; then \
What condition is this expected to be testing?
+ image="$${image}$(CI_IMAGE_REGISTRY)/"; \ + fi; \ + image="$${image}$(CI_IMAGE_PREFIX)$*$(CI_IMAGE_TAG)"; \ + $(CI_ENGINE) run $(CI_ENGINE_ARGS) "$$image" \ /bin/bash -c ' \ $(CI_USER_HOME)/prepare || exit 1; \ sudo \
Regards,
Daniel

On Tue, 2020-06-02 at 11:36 +0100, Daniel P. Berrangé wrote:
On Fri, May 29, 2020 at 03:00:43PM +0200, Andrea Bolognani wrote:
+# Registry where container images are stored
+CI_IMAGE_REGISTRY = registry.gitlab.com
+
 # Location of the container images we're going to pull
 # Can be useful to overridde to use a locally built
 # image instead
-CI_IMAGE_PREFIX = quay.io/libvirt/buildenv-libvirt-
+CI_IMAGE_PREFIX = libvirt/libvirt/ci-
[...]
 ci-run-command@%: ci-prepare-tree
-	$(CI_ENGINE) run $(CI_ENGINE_ARGS) $(CI_IMAGE_PREFIX)$*$(CI_IMAGE_TAG) \
+	image=; \
+	if test "$(CI_IMAGE_REGISTRY)"; then \

What condition is this expected to be testing?
The case where someone built a (possibly custom) image locally and wants to use it with something like

  $ make ci-build@centos-8 \
         CI_IMAGE_REGISTRY= \
         CI_IMAGE_PREFIX=my-

This usage scenario is explicitly called out in the comment for the CI_IMAGE_PREFIX variable.

But, I just realized I can avoid introducing CI_IMAGE_PREFIX by changing the code slightly in ci/list-images.sh, so I'll do that instead and avoid the extra complexity.

--
Andrea Bolognani / Red Hat / Virtualization

On Tue, Jun 02, 2020 at 01:22:50PM +0200, Andrea Bolognani wrote:
On Tue, 2020-06-02 at 11:36 +0100, Daniel P. Berrangé wrote:
On Fri, May 29, 2020 at 03:00:43PM +0200, Andrea Bolognani wrote:
+# Registry where container images are stored
+CI_IMAGE_REGISTRY = registry.gitlab.com
+
 # Location of the container images we're going to pull
 # Can be useful to overridde to use a locally built
 # image instead
-CI_IMAGE_PREFIX = quay.io/libvirt/buildenv-libvirt-
+CI_IMAGE_PREFIX = libvirt/libvirt/ci-
[...]
 ci-run-command@%: ci-prepare-tree
-	$(CI_ENGINE) run $(CI_ENGINE_ARGS) $(CI_IMAGE_PREFIX)$*$(CI_IMAGE_TAG) \
+	image=; \
+	if test "$(CI_IMAGE_REGISTRY)"; then \

What condition is this expected to be testing?
The case where someone built a (possibly custom) image locally and wants to use it with something like
  $ make ci-build@centos-8 \
         CI_IMAGE_REGISTRY= \
         CI_IMAGE_PREFIX=my-

This usage scenario is explicitly called out in the comment for the CI_IMAGE_PREFIX variable.
This needs to be "test -n" then, to validate a non-zero-length variable.

Regards,
Daniel

--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|
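(A standalone sketch of the distinction being discussed, not taken from the patch: with the operand quoted the two spellings behave identically, so "test -n" is mainly about stating the non-empty check explicitly.)

  #!/bin/sh
  # Both forms are false for an empty value and true otherwise;
  # "test -n" simply spells out the non-empty check.
  CI_IMAGE_REGISTRY=registry.gitlab.com

  if test "$CI_IMAGE_REGISTRY"; then
      echo "registry set (implicit non-empty check)"
  fi
  if test -n "$CI_IMAGE_REGISTRY"; then
      echo "registry set (explicit non-empty check)"
  fi

  # With CI_IMAGE_REGISTRY= (empty) neither branch runs, and the image
  # reference is built without a registry component.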

Since we're already building the full container image reference
dynamically at this point, we can finally get rid of the annoying
requirement to include ":" in CI_IMAGE_TAG.

Signed-off-by: Andrea Bolognani <abologna@redhat.com>
---
 ci/Makefile | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/ci/Makefile b/ci/Makefile
index e1a5faaba6..96e4d62611 100644
--- a/ci/Makefile
+++ b/ci/Makefile
@@ -55,9 +55,9 @@ CI_IMAGE_REGISTRY = registry.gitlab.com
 # image instead
 CI_IMAGE_PREFIX = libvirt/libvirt/ci-

-# The default tag is ':latest' but if the container
+# The default tag is 'latest' but if the container
 # repo above uses different conventions this can override it
-CI_IMAGE_TAG = :latest
+CI_IMAGE_TAG = latest

 # We delete the virtual root after completion, set
 # to 0 if you need to keep it around for debugging
@@ -220,7 +220,10 @@ ci-run-command@%: ci-prepare-tree
 	if test "$(CI_IMAGE_REGISTRY)"; then \
 		image="$${image}$(CI_IMAGE_REGISTRY)/"; \
 	fi; \
-	image="$${image}$(CI_IMAGE_PREFIX)$*$(CI_IMAGE_TAG)"; \
+	image="$${image}$(CI_IMAGE_PREFIX)$*"; \
+	if test "$(CI_IMAGE_TAG)"; then \
+		image="$${image}:$(CI_IMAGE_TAG)"; \
+	fi; \
 	$(CI_ENGINE) run $(CI_ENGINE_ARGS) "$$image" \
 	    /bin/bash -c ' \
 	        $(CI_USER_HOME)/prepare || exit 1; \
--
2.25.4
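(To make the end result easier to picture, here is the same assembly rewritten as plain shell; the variable values are examples only, and the make-specific $(...)/$* expansions are replaced by ordinary shell variables.)

  #!/bin/sh
  # Sketch of the image reference built by ci-run-command@%, outside of make.
  CI_IMAGE_REGISTRY=registry.gitlab.com
  CI_IMAGE_PREFIX=libvirt/libvirt/ci-
  CI_IMAGE_TAG=latest
  target=centos-8      # stands in for the $* stem

  image=
  if test -n "$CI_IMAGE_REGISTRY"; then
      image="${image}${CI_IMAGE_REGISTRY}/"
  fi
  image="${image}${CI_IMAGE_PREFIX}${target}"
  if test -n "$CI_IMAGE_TAG"; then
      image="${image}:${CI_IMAGE_TAG}"
  fi

  # Prints: registry.gitlab.com/libvirt/libvirt/ci-centos-8:latest
  # With CI_IMAGE_REGISTRY= and CI_IMAGE_PREFIX=my- it would print my-centos-8:latest
  echo "$image"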

On Fri, May 29, 2020 at 03:00:44PM +0200, Andrea Bolognani wrote:
Since we're already building the full container image reference dynamically at this point, we can finally get rid of the annoying requirement to include ":" in CI_IMAGE_TAG.
Signed-off-by: Andrea Bolognani <abologna@redhat.com>
---
 ci/Makefile | 9 ++++++---
 1 file changed, 6 insertions(+), 3 deletions(-)

diff --git a/ci/Makefile b/ci/Makefile
index e1a5faaba6..96e4d62611 100644
--- a/ci/Makefile
+++ b/ci/Makefile
@@ -55,9 +55,9 @@ CI_IMAGE_REGISTRY = registry.gitlab.com
 # image instead
 CI_IMAGE_PREFIX = libvirt/libvirt/ci-

-# The default tag is ':latest' but if the container
+# The default tag is 'latest' but if the container
 # repo above uses different conventions this can override it
-CI_IMAGE_TAG = :latest
+CI_IMAGE_TAG = latest

 # We delete the virtual root after completion, set
 # to 0 if you need to keep it around for debugging
@@ -220,7 +220,10 @@ ci-run-command@%: ci-prepare-tree
 	if test "$(CI_IMAGE_REGISTRY)"; then \
 		image="$${image}$(CI_IMAGE_REGISTRY)/"; \
 	fi; \
-	image="$${image}$(CI_IMAGE_PREFIX)$*$(CI_IMAGE_TAG)"; \
+	image="$${image}$(CI_IMAGE_PREFIX)$*"; \
+	if test "$(CI_IMAGE_TAG)"; then \
+		image="$${image}:$(CI_IMAGE_TAG)"; \
+	fi; \
Again, I'm not seeing what this test is for
 	$(CI_ENGINE) run $(CI_ENGINE_ARGS) "$$image" \
 	    /bin/bash -c ' \
 	        $(CI_USER_HOME)/prepare || exit 1; \
--
2.25.4

Regards,
Daniel

--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|

On Tue, 2020-06-02 at 11:37 +0100, Daniel P. Berrangé wrote:
On Fri, May 29, 2020 at 03:00:44PM +0200, Andrea Bolognani wrote:
-# The default tag is ':latest' but if the container
+# The default tag is 'latest' but if the container
 # repo above uses different conventions this can override it
-CI_IMAGE_TAG = :latest
+CI_IMAGE_TAG = latest

 # We delete the virtual root after completion, set
 # to 0 if you need to keep it around for debugging
@@ -220,7 +220,10 @@ ci-run-command@%: ci-prepare-tree
 	if test "$(CI_IMAGE_REGISTRY)"; then \
 		image="$${image}$(CI_IMAGE_REGISTRY)/"; \
 	fi; \
-	image="$${image}$(CI_IMAGE_PREFIX)$*$(CI_IMAGE_TAG)"; \
+	image="$${image}$(CI_IMAGE_PREFIX)$*"; \
+	if test "$(CI_IMAGE_TAG)"; then \
+		image="$${image}:$(CI_IMAGE_TAG)"; \
+	fi; \
Again, I'm not seeing what this test is for
It was intended to account for the possibility of the user passing an empty CI_IMAGE_TAG, but now that I think about it a bit more, that would be quite a silly thing to do, and erroring out in that case is perfectly fine. I'll drop this commit.

--
Andrea Bolognani / Red Hat / Virtualization

On Fri, May 29, 2020 at 03:00:39PM +0200, Andrea Bolognani wrote:
Branch: https://gitlab.com/abologna/libvirt/-/tree/ci-full-gitlab-registry Pipeline: https://gitlab.com/abologna/libvirt/pipelines/150891361
This is what we're already doing with the subprojects we've migrated to GitLab CI and, as of earlier today, all projects under the libosinfo umbrella.
Once this is merged, we can stop publishing container images on Quay and archive the libvirt-dockerfiles repository.
Patch 3/5 has been trimmed in order to comply with the size limits of the mailing list. You can grab the unabridged version with
$ git fetch https://gitlab.com/abologna/libvirt ci-full-gitlab-registry
This is a lot of files and lines of text/code. I was wondering about building the dockerfiles as part of the container_job_definition.

To me it seems like a lot of duplication and a lot of noise in the future if we decide to change the dockerfile generation. The main difference that I can think of is that with files in the repository we need to regenerate all the dockerfiles to apply changes made in libvirt-ci, but with automatic generation we would have that for free.

Both approaches have some benefits and drawbacks. I guess we could have some variable to prevent automatic generation of dockerfiles, to make sure that unwanted changes in libvirt-ci don't affect CI for all libvirt repositories; on the other hand, it would automatically check that changes to libvirt-ci don't break anything.

I personally don't like the need to introduce 5000+ lines just for compilation testing.

Pavel

On Mon, Jun 01, 2020 at 04:51:19PM +0200, Pavel Hrdina wrote:
On Fri, May 29, 2020 at 03:00:39PM +0200, Andrea Bolognani wrote:
Branch: https://gitlab.com/abologna/libvirt/-/tree/ci-full-gitlab-registry Pipeline: https://gitlab.com/abologna/libvirt/pipelines/150891361
This is what we're already doing with the subprojects we've migrated to GitLab CI and, as of earlier today, all projects under the libosinfo umbrella.
Once this is merged, we can stop publishing container images on Quay and archive the libvirt-dockerfiles repository.
Patch 3/5 has been trimmed in order to comply with the size limits of the mailing list. You can grab the unabridged version with
$ git fetch https://gitlab.com/abologna/libvirt ci-full-gitlab-registry
This is a lot of files and lines of text/code. I was wondering about building the dockerfiles as part of the container_job_definition.
To me it seems like a lot of duplication and a lot of noise in the future if we decide to change the dockerfile generation. The main difference that I can think of is that with files in the repository we need to regenerate all the dockerfiles to apply changes made in libvirt-ci, but with automatic generation we would have that for free.
The key reason for keeping the dockerfiles in the libvirt.git repo and NOT auto-generating them on the fly is to ensure the CI process is self-contained, with no dependency on external moving parts in other git repos.

If you automatically generated dockerfiles from libvirt-ci.git, then you end up with unstable CI when changes in libvirt.git need to be made in lock-step with changes in libvirt-ci.git. If you change libvirt.git first, CI will break if it runs before libvirt-ci.git is updated. If you change libvirt-ci.git first, then CI will break if it runs before libvirt.git is updated. This is a no-win situation.

This is especially painful when you consider that a user's fork of libvirt.git may not be updated to current master. Or consider a user who needs to make changes to libvirt.git that require updated dockerfiles and needs to be able to test them before any change in libvirt-ci.git is present. We've seen these problems many times with our current Jenkins setup, where CI breaks for a period when we have to do matching updates between both libvirt-ci.git & libvirt.git.

With dockerfiles kept in libvirt.git we know that the containers we're building will always contain exactly what we need. This also makes it easy for users to experiment with changes, as they can modify the dockerfiles directly to add/remove pieces. Such changes can be propagated back to libvirt-ci.git out of band.
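(A sketch of the "modify the dockerfiles directly" flow described above; the branch and remote names are placeholders, and the only assumption is that pushing to a personal fork triggers the GitLab pipeline, which rebuilds the containers from the in-tree dockerfiles.)

  #!/bin/sh
  # Hypothetical example: add a build dependency for one target by editing
  # the committed dockerfile in libvirt.git, with no libvirt-ci.git change
  # needed up front.
  git checkout -b centos8-new-dep
  $EDITOR ci/containers/ci-centos-8.Dockerfile
  git add ci/containers/ci-centos-8.Dockerfile
  git commit -s -m "ci: pull in extra build dependency on CentOS 8"

  # Pushing to a personal fork runs the pipeline against the updated image;
  # the matching lcitool change can follow in libvirt-ci.git out of band.
  git push my-fork centos8-new-dep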
Both approaches have some benefits and drawbacks. I guess we could have some variable to prevent automatic generation of dockerfiles, to make sure that unwanted changes in libvirt-ci don't affect CI for all libvirt repositories; on the other hand, it would automatically check that changes to libvirt-ci don't break anything.
Changes to libvirt-ci that affect the dockerfiles should come with a URL pointer to a merge request against the affected project(s), showing the successful CI run with updated dockerfiles.
I personally don't like the need to introduce 5000+ lines just for compilation testing.
That's a tiny proportion of the code we have in libvirt.git, so IMHO it is not worth worrying about. The benefits of having the CI self-contained far outweigh the downside. Essentially we are prioritizing the main libvirt.git, as that is the primary content from the POV of most contributors.

Regards,
Daniel

--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|

On Mon, 2020-06-01 at 16:51 +0200, Pavel Hrdina wrote:
On Fri, May 29, 2020 at 03:00:39PM +0200, Andrea Bolognani wrote:
Branch: https://gitlab.com/abologna/libvirt/-/tree/ci-full-gitlab-registry Pipeline: https://gitlab.com/abologna/libvirt/pipelines/150891361
This is what we're already doing with the subprojects we've migrated to GitLab CI and, as of earlier today, all projects under the libosinfo umbrella.
Once this is merged, we can stop publishing container images on Quay and archive the libvirt-dockerfiles repository.
Patch 3/5 has been trimmed in order to comply with the size limits of the mailing list. You can grab the unabridged version with
$ git fetch https://gitlab.com/abologna/libvirt ci-full-gitlab-registry
This is a lot of files and lines of text/code. I was wondering about building the dockerfiles as part of the container_job_definition.
To me it seems like a lot of duplication and a lot of noise in the future if we decide to change the dockerfile generation. The main difference that I can think of is that with files in the repository we need to regenerate all the dockerfiles to apply changes made in libvirt-ci, but with automatic generation we would have that for free.
Both approaches have some benefits and drawbacks. I guess we could have some variable to prevent automatic generation of dockerfiles, to make sure that unwanted changes in libvirt-ci don't affect CI for all libvirt repositories; on the other hand, it would automatically check that changes to libvirt-ci don't break anything.
I personally don't like the need to introduce 5000+ lines just for compilation testing.
To prevent unwanted changes from slipping in, we could make libvirt-ci a submodule and only bump the hash when we specifically want to update something.

Overall I'd be perfectly okay with the approach you suggest, though I reserve the right to change my mind about this after having tried to implement it :)

Adding Dan to the conversation so that he can weigh in.

--
Andrea Bolognani / Red Hat / Virtualization
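(For completeness, a sketch of what the submodule variant being floated here could look like; the ci/libvirt-ci path and the commit placeholder are made up for illustration, nothing like this exists in the tree.)

  #!/bin/sh
  # Hypothetical submodule setup: pin libvirt-ci at a known commit and only
  # move the pin deliberately.
  git submodule add https://gitlab.com/libvirt/libvirt-ci.git ci/libvirt-ci
  git commit -s -m "ci: track libvirt-ci as a submodule"

  # Later, bump the pinned hash when we actually want new lcitool behaviour
  git -C ci/libvirt-ci fetch origin
  git -C ci/libvirt-ci checkout <commit>
  git add ci/libvirt-ci
  git commit -s -m "ci: bump libvirt-ci submodule"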

On Mon, Jun 01, 2020 at 05:31:45PM +0200, Andrea Bolognani wrote:
On Mon, 2020-06-01 at 16:51 +0200, Pavel Hrdina wrote:
On Fri, May 29, 2020 at 03:00:39PM +0200, Andrea Bolognani wrote:
Branch: https://gitlab.com/abologna/libvirt/-/tree/ci-full-gitlab-registry Pipeline: https://gitlab.com/abologna/libvirt/pipelines/150891361
This is what we're already doing with the subprojects we've migrated to GitLab CI and, as of earlier today, all projects under the libosinfo umbrella.
Once this is merged, we can stop publishing container images on Quay and archive the libvirt-dockerfiles repository.
Patch 3/5 has been trimmed in order to comply with the size limits of the mailing list. You can grab the unabridged version with
$ git fetch https://gitlab.com/abologna/libvirt ci-full-gitlab-registry
This is a lot of files and lines of text/code. I was wondering about building the dockerfiles as part of the container_job_definition.
To me it seems like a lot of duplication and a lot of noise in the future if we decide to change the dockerfile generation. The main difference that I can think of is that with files in the repository we need to regenerate all the dockerfiles to apply changes made in libvirt-ci, but with automatic generation we would have that for free.
Both approaches have some benefits and drawbacks. I guess we could have some variable to prevent automatic generation of dockerfiles, to make sure that unwanted changes in libvirt-ci don't affect CI for all libvirt repositories; on the other hand, it would automatically check that changes to libvirt-ci don't break anything.
I personally don't like the need to introduce 5000+ lines just for compilation testing.
To prevent unwanted changes from slipping in, we could make libvirt-ci a submodule and only bump the hash when we specifically want to update something.
Overall I'd be perfectly okay with the approach you suggest, though I reserve the right to change my mind about this after having tried to implement it :) Adding Dan to the conversation so that he can weigh in.
Submodules introduce an extra layer of pain for people whose changes involve modifications to the dockerfile. Per my other reply, the goal here is that someone changing libvirt.git should be able to modify the dockerfile and have CI jobs "just work" for their merge request. They should only need to touch libvirt-ci.git once they have got everything working for their libvirt.git changes & CI.

Involving libvirt-ci.git makes the update workflow something like this:

 - Fork libvirt.git
 - Fork libvirt-ci.git
 - Make code changes that need dockerfile update in libvirt.git
 - Update forked libvirt.git to point to fork of libvirt-ci.git
 - Make lcitool related changes in libvirt-ci.git
 - Commit lcitool related changes in libvirt-ci.git
 - Update forked libvirt.git submodule hash to point to libvirt-ci.git update
 - If CI fails, repeat last three steps
 - Submit merge request for libvirt-ci.git
 - Submit merge request for libvirt.git
 - If libvirt.git change is approved, then merge libvirt-ci.git change
 - Update forked libvirt.git to point back to main libvirt-ci.git
 - Refresh merge request for libvirt.git
 - Approve and merge libvirt.git merge request

By committing dockerfiles we have a simpler life:

 - Fork libvirt.git
 - Fork libvirt-ci.git
 - Make code changes that need dockerfile update in libvirt.git
 - Make lcitool related changes in libvirt-ci.git
 - Re-generate dockerfiles with lcitool from your fork
 - If CI fails, repeat last two steps
 - Commit lcitool related changes in libvirt-ci.git
 - Submit merge request for libvirt.git
 - Submit merge request for libvirt-ci.git
 - Approve and merge libvirt.git merge request
 - Approve and merge libvirt-ci.git merge request

In the second case, the person updating libvirt-ci.git doesn't even have to be the same as the person who submits the libvirt.git updates, as it can be done out of band to some extent. E.g. we can do this:

 - Fork libvirt.git
 - Make code changes that need dockerfile update in libvirt.git
 - Edit dockerfiles with needed changes
 - If CI fails, repeat last step
 - Submit merge request for libvirt.git
 - Approve and merge libvirt.git merge request

and

 - Fork libvirt-ci.git
 - Make lcitool related changes in libvirt-ci.git
 - Commit lcitool related changes in libvirt-ci.git
 - Submit merge request for libvirt-ci.git
 - Approve and merge libvirt-ci.git merge request

Regards,
Daniel

--
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|

On Mon, 2020-06-01 at 16:55 +0100, Daniel P. Berrangé wrote:
By committing dockerfiles we have a simpler life
 - Fork libvirt.git
 - Fork libvirt-ci.git
 - Make code changes that need dockerfile update in libvirt.git
 - Make lcitool related changes in libvirt-ci.git
 - Re-generate dockerfiles with lcitool from your fork
 - If CI fails, repeat last two steps
 - Commit lcitool related changes in libvirt-ci.git
 - Submit merge request for libvirt.git
 - Submit merge request for libvirt-ci.git
 - Approve and merge libvirt.git merge request
 - Approve and merge libvirt-ci.git merge request

In the second case, the person updating libvirt-ci.git doesn't even have to be the same as the person who submits the libvirt.git updates, as it can be done out of band to some extent. E.g. we can do this:

 - Fork libvirt.git
 - Make code changes that need dockerfile update in libvirt.git
 - Edit dockerfiles with needed changes
 - If CI fails, repeat last step
 - Submit merge request for libvirt.git
 - Approve and merge libvirt.git merge request

and

 - Fork libvirt-ci.git
 - Make lcitool related changes in libvirt-ci.git
 - Commit lcitool related changes in libvirt-ci.git
 - Submit merge request for libvirt-ci.git
 - Approve and merge libvirt-ci.git merge request
These, and the ones made in the other message, are very solid points.

I have to say, however, that I'm not a fan of the idea of updating the per-repository Dockerfiles before the corresponding code change has made its way into libvirt-ci.git: ideally, it would work the other way around, and the libvirt.git commit would contain a reference to the corresponding libvirt-ci.git commit, just like we're doing right now in libvirt-dockerfiles.git.

--
Andrea Bolognani / Red Hat / Virtualization

On Mon, Jun 01, 2020 at 06:45:25PM +0200, Andrea Bolognani wrote:
On Mon, 2020-06-01 at 16:55 +0100, Daniel P. Berrangé wrote:
By committing dockerfiles we have a simpler life
 - Fork libvirt.git
 - Fork libvirt-ci.git
 - Make code changes that need dockerfile update in libvirt.git
 - Make lcitool related changes in libvirt-ci.git
 - Re-generate dockerfiles with lcitool from your fork
 - If CI fails, repeat last two steps
 - Commit lcitool related changes in libvirt-ci.git
 - Submit merge request for libvirt.git
 - Submit merge request for libvirt-ci.git
 - Approve and merge libvirt.git merge request
 - Approve and merge libvirt-ci.git merge request

In the second case, the person updating libvirt-ci.git doesn't even have to be the same as the person who submits the libvirt.git updates, as it can be done out of band to some extent. E.g. we can do this:

 - Fork libvirt.git
 - Make code changes that need dockerfile update in libvirt.git
 - Edit dockerfiles with needed changes
 - If CI fails, repeat last step
 - Submit merge request for libvirt.git
 - Approve and merge libvirt.git merge request

and

 - Fork libvirt-ci.git
 - Make lcitool related changes in libvirt-ci.git
 - Commit lcitool related changes in libvirt-ci.git
 - Submit merge request for libvirt-ci.git
 - Approve and merge libvirt-ci.git merge request
These, and the ones made in the other message, are very solid points.
I have to say, however, that I'm not a fan of the idea of updating the per-repository Dockerfiles before the corresponding code change has made its way into libvirt-ci.git: ideally, it would work the other way around, and the libvirt.git commit would contain a reference to the corresponding libvirt-ci.git commit, just like we're doing right now in libvirt-dockerfiles.git.
I have to agree that all of it makes sense. Doesn't change the fact that I don't like it :)

Thanks,
Pavel