[libvirt PATCH v3 00/12] gitlab: expand the CI job coverage

The main goals with this series are:

- Introduce a minimal job building the website and publishing an
  artifact which can be deployed onto libvirt.org
- Introduce a minimal job building the libvirt.pot for import into
  Weblate (only runs on the git master branch)
- Introduce a signed-off-by check. We can add a rule to block direct
  pushes without a SoB, but this will be needed to block merge
  requests without a SoB.
- Expand CI jobs to get coverage closer to Travis/Jenkins
- Reduce cross-build jobs to just interesting variants, since the
  full set hasn't shown value in detecting failures

The Linux native job coverage is now a superset of that achieved by
Travis/Jenkins.

For post-merge testing the full set of jobs is run on git master.
This measured approx 50 minutes total duration.

For pre-merge testing the job count is reduced for quicker turnaround
time for developers. Measured ~35 minutes total duration for a cold
cache, vs ~20 minutes for a warm cache.

Changed in v3:

- Add job for validating Signed-off-by
- Add job for validating code style
- Use ccache for native/cross build jobs
- Eliminated the extra build stage and put all native jobs in the
  same place, but with filters
- Keep all cross build jobs, but filter them based on branch

Changed in v2:

- Add more native test jobs to run on git master
- Restrict git clone depth
- Use variable for "make" command name
- Add test job to build the master pot file
- Remove extra configure args for website job
- Re-ordered patches to reduce repeated changes

Daniel P. Berrangé (12):
  gitlab: add variable for make command name
  gitlab: restrict git history to 100 commits
  gitlab: create an explicit stage for cross build jobs
  gitlab: use CI for building website contents
  gitlab: reduce number of cross build jobs run by default
  gitlab: rename the cross build jobs
  gitlab: add mingw cross build CI jobs
  gitlab: add x86_64 native CI jobs
  gitlab: add job for building latest potfile
  gitlab: introduce use of ccache for speeding up rebuilds
  gitlab: introduce a check to validate DCO sign-off
  gitlab: add explicit early job for syntax-check

 .gitlab-ci.yml         | 234 +++++++++++++++++++++++++++++++++++++----
 scripts/require-dco.py |  96 +++++++++++++++++
 2 files changed, 307 insertions(+), 23 deletions(-)
 create mode 100755 scripts/require-dco.py

-- 
2.24.1

To facilitate future jobs that will use FreeBSD

Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
---
 .gitlab-ci.yml | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index ea49c6178b..6f77ab55ba 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -1,9 +1,12 @@
+variables:
+  MAKE: make
+
 .job_template: &job_definition
   script:
     - mkdir build
     - cd build
     - ../autogen.sh $CONFIGURE_OPTS || (cat config.log && exit 1)
-    - make -j $(getconf _NPROCESSORS_ONLN)
+    - $MAKE -j $(getconf _NPROCESSORS_ONLN)
 
 # We could run every arch on every versions, but it is a little
 # overkill. Instead we split jobs evenly across 9, 10 and sid
-- 
2.24.1

We don't need the full git history when running CI jobs. From a code
POV we only need the most recent commit, but we want to be able to
run checks on the commits too, in particular to validate the DCO
sign-off for each commit.

Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
---
 .gitlab-ci.yml | 1 +
 1 file changed, 1 insertion(+)

diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index 6f77ab55ba..4438f51a6a 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -1,5 +1,6 @@
 variables:
   MAKE: make
+  GIT_DEPTH: 100
 
 .job_template: &job_definition
   script:
-- 
2.24.1
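A later patch in this series adds scripts/require-dco.py to perform this per-commit check. As a rough illustration of the kind of validation involved (the function name and structure below are hypothetical, not taken from the actual script), a DCO check reduces to scanning each commit message for a Signed-off-by trailer:

```python
import re

# Hypothetical helper: True if a commit message carries a
# Developer Certificate of Origin sign-off trailer.
SOB_RE = re.compile(r"^Signed-off-by: .+ <.+@.+>$", re.MULTILINE)

def has_dco_signoff(commit_message: str) -> bool:
    return bool(SOB_RE.search(commit_message))

# Sample commit messages, as strings (in a real check these would
# come from the shallow-cloned git history).
msgs = [
    "gitlab: add variable for make command name\n\n"
    "Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>\n",
    "gitlab: restrict git history to 100 commits\n",
]
# Collect the subject lines of commits lacking a sign-off
missing = [m.splitlines()[0] for m in msgs if not has_dco_signoff(m)]
print(missing)
```

With GIT_DEPTH set to 100, such a check can walk the recent commits of the branch being tested without needing the full repository history.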

As we introduce more build jobs, it will be useful to have a grouping
of jobs to more easily visualize the results and potentially control
build ordering.

Reviewed-by: Erik Skultety <eskultet@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
---
 .gitlab-ci.yml | 25 +++++++++++++++----------
 1 file changed, 15 insertions(+), 10 deletions(-)

diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index 4438f51a6a..8d22706bd4 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -2,7 +2,12 @@ variables:
   MAKE: make
   GIT_DEPTH: 100
 
-.job_template: &job_definition
+stages:
+  - cross_build
+
+
+.cross_build_default_job_template: &cross_build_default_job_definition
+  stage: cross_build
   script:
     - mkdir build
     - cd build
@@ -14,37 +19,37 @@ variables:
 # to achieve reasonable cross-coverage.
 
 debian-9-cross-armv6l:
-  <<: *job_definition
+  <<: *cross_build_default_job_definition
   image: quay.io/libvirt/buildenv-libvirt-debian-9-cross-armv6l:latest
 
 debian-9-cross-mips64el:
-  <<: *job_definition
+  <<: *cross_build_default_job_definition
   image: quay.io/libvirt/buildenv-libvirt-debian-9-cross-mips64el:latest
 
 debian-9-cross-mips:
-  <<: *job_definition
+  <<: *cross_build_default_job_definition
   image: quay.io/libvirt/buildenv-libvirt-debian-9-cross-mips:latest
 
 debian-10-cross-aarch64:
-  <<: *job_definition
+  <<: *cross_build_default_job_definition
   image: quay.io/libvirt/buildenv-libvirt-debian-10-cross-aarch64:latest
 
 debian-10-cross-ppc64le:
-  <<: *job_definition
+  <<: *cross_build_default_job_definition
   image: quay.io/libvirt/buildenv-libvirt-debian-10-cross-ppc64le:latest
 
 debian-10-cross-s390x:
-  <<: *job_definition
+  <<: *cross_build_default_job_definition
   image: quay.io/libvirt/buildenv-libvirt-debian-10-cross-s390x:latest
 
 debian-sid-cross-armv7l:
-  <<: *job_definition
+  <<: *cross_build_default_job_definition
   image: quay.io/libvirt/buildenv-libvirt-debian-sid-cross-armv7l:latest
 
 debian-sid-cross-i686:
-  <<: *job_definition
+  <<: *cross_build_default_job_definition
   image: quay.io/libvirt/buildenv-libvirt-debian-sid-cross-i686:latest
 
 debian-sid-cross-mipsel:
-  <<: *job_definition
+  <<: *cross_build_default_job_definition
   image: quay.io/libvirt/buildenv-libvirt-debian-sid-cross-mipsel:latest
-- 
2.24.1

On Thu, 2020-03-26 at 12:35 +0000, Daniel P. Berrangé wrote:
As we introduce more build jobs, it will be useful to have a grouping of jobs to more easily visualize the results and potentially control build ordering.
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
---
 .gitlab-ci.yml | 25 +++++++++++++++----------
 1 file changed, 15 insertions(+), 10 deletions(-)
Reviewed-by: Andrea Bolognani <abologna@redhat.com>

-- 
Andrea Bolognani / Red Hat / Virtualization

Run the bare minimum build that is possible to create the docs,
avoiding compiling code which other jobs will deal with.

The generated website is published as an artifact and thus is
browsable by developers on build completion and can be downloaded as
a zip file.

Reviewed-by: Erik Skultety <eskultet@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
---
 .gitlab-ci.yml | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)

diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index 8d22706bd4..b79d9a2b77 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -3,6 +3,7 @@ variables:
   GIT_DEPTH: 100
 
 stages:
+  - prebuild
   - cross_build
 
 
@@ -53,3 +54,25 @@ debian-sid-cross-i686:
 debian-sid-cross-mipsel:
   <<: *cross_build_default_job_definition
   image: quay.io/libvirt/buildenv-libvirt-debian-sid-cross-mipsel:latest
+
+# This artifact published by this job is downloaded by libvirt.org to
+# be deployed to the web root:
+# https://gitlab.com/libvirt/libvirt/-/jobs/artifacts/master/download?job=webs...
+website:
+  stage: prebuild
+  script:
+    - mkdir build
+    - cd build
+    - ../autogen.sh --prefix=$(pwd)/../vroot || (cat config.log && exit 1)
+    - $MAKE -j $(getconf _NPROCESSORS_ONLN) -C docs
+    - $MAKE -j $(getconf _NPROCESSORS_ONLN) -C docs install
+    - cd ..
+    - mv vroot/share/doc/libvirt/html/ website
+  image: quay.io/libvirt/buildenv-libvirt-fedora-31:latest
+  artifacts:
+    expose_as: 'Website'
+    name: 'website'
+    when: on_success
+    expire_in: 30 days
+    paths:
+      - website
-- 
2.24.1

On Thu, 2020-03-26 at 12:35 +0000, Daniel P. Berrangé wrote:
Run the bare minimum build that is possible to create the docs, avoiding compiling code which other jobs will deal with.
The generated website is published as an artifact and thus is browsable by developers on build completion and can be downloaded as a zip file.
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
---
 .gitlab-ci.yml | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)
Reviewed-by: Andrea Bolognani <abologna@redhat.com>

-- 
Andrea Bolognani / Red Hat / Virtualization

Currently we nine different cross build jobs, but as we introduce
more native jobs this is going to result in a very long CI execution
time.

For developers testing their personal branches under development it
is generally sufficient to just look at a couple of interesting
scenarios, namely 32-bit and big endian.

This splits the cross build jobs so that by default only the armv7
and s390x archs are built. The remaining archs are set up so that
they are only built for code on the master branch, which will have
the effect of doing post-merge testing. Developers can opt-in to full
testing of their pre-merge code by pushing it to a branch with a name
prefix of "ci-extra-".

Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
---
 .gitlab-ci.yml | 26 ++++++++++++++++----------
 1 file changed, 16 insertions(+), 10 deletions(-)

diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index b79d9a2b77..5fa80a0458 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -7,6 +7,7 @@ stages:
   - cross_build
 
 
+# Default cross build jobs that are always run
 .cross_build_default_job_template: &cross_build_default_job_definition
   stage: cross_build
   script:
@@ -15,28 +16,33 @@ stages:
     - ../autogen.sh $CONFIGURE_OPTS || (cat config.log && exit 1)
     - $MAKE -j $(getconf _NPROCESSORS_ONLN)
 
-# We could run every arch on every versions, but it is a little
-# overkill. Instead we split jobs evenly across 9, 10 and sid
-# to achieve reasonable cross-coverage.
+# Extra cross build jobs that are only run post-merge, or
+# when code is pushed to a branch with "ci-extra-" name prefix
+.cross_build_extra_job_template: &cross_build_extra_job_definition
+  <<: *cross_build_default_job_definition
+  only:
+    - master
+    - /^ci-extra-.*$/
+
 
 debian-9-cross-armv6l:
-  <<: *cross_build_default_job_definition
+  <<: *cross_build_extra_job_definition
   image: quay.io/libvirt/buildenv-libvirt-debian-9-cross-armv6l:latest
 
 debian-9-cross-mips64el:
-  <<: *cross_build_default_job_definition
+  <<: *cross_build_extra_job_definition
   image: quay.io/libvirt/buildenv-libvirt-debian-9-cross-mips64el:latest
 
 debian-9-cross-mips:
-  <<: *cross_build_default_job_definition
+  <<: *cross_build_extra_job_definition
   image: quay.io/libvirt/buildenv-libvirt-debian-9-cross-mips:latest
 
 debian-10-cross-aarch64:
-  <<: *cross_build_default_job_definition
+  <<: *cross_build_extra_job_definition
   image: quay.io/libvirt/buildenv-libvirt-debian-10-cross-aarch64:latest
 
 debian-10-cross-ppc64le:
-  <<: *cross_build_default_job_definition
+  <<: *cross_build_extra_job_definition
   image: quay.io/libvirt/buildenv-libvirt-debian-10-cross-ppc64le:latest
 
 debian-10-cross-s390x:
@@ -48,11 +54,11 @@ debian-sid-cross-armv7l:
   image: quay.io/libvirt/buildenv-libvirt-debian-sid-cross-armv7l:latest
 
 debian-sid-cross-i686:
-  <<: *cross_build_default_job_definition
+  <<: *cross_build_extra_job_definition
   image: quay.io/libvirt/buildenv-libvirt-debian-sid-cross-i686:latest
 
 debian-sid-cross-mipsel:
-  <<: *cross_build_default_job_definition
+  <<: *cross_build_extra_job_definition
   image: quay.io/libvirt/buildenv-libvirt-debian-sid-cross-mipsel:latest
 
 # This artifact published by this job is downloaded by libvirt.org to
-- 
2.24.1

On Thu, 2020-03-26 at 12:35 +0000, Daniel P. Berrangé wrote:
Currently we nine different cross build jobs
You accidentally a verb here :) [...]
+# Extra cross build jobs that are only run post-merge, or
+# when code is pushed to a branch with "ci-extra-" name prefix
+.cross_build_extra_job_template: &cross_build_extra_job_definition
+  <<: *cross_build_default_job_definition
+  only:
+    - master
+    - /^ci-extra-.*$/
Can I suggest changing the expected prefix from ci-extra to ci-full?
Because when you push to that branch you get the full set of build
jobs, not just the extra ones.

Regardless,

Reviewed-by: Andrea Bolognani <abologna@redhat.com>

-- 
Andrea Bolognani / Red Hat / Virtualization

On Thu, Mar 26, 2020 at 02:40:22PM +0100, Andrea Bolognani wrote:
On Thu, 2020-03-26 at 12:35 +0000, Daniel P. Berrangé wrote:
Currently we nine different cross build jobs
You accidentally a verb here :)
heh, s/we/we have/
[...]
+# Extra cross build jobs that are only run post-merge, or
+# when code is pushed to a branch with "ci-extra-" name prefix
+.cross_build_extra_job_template: &cross_build_extra_job_definition
+  <<: *cross_build_default_job_definition
+  only:
+    - master
+    - /^ci-extra-.*$/
Can I suggest changing the expected prefix from ci-extra to ci-full? Because when you push to that branch you get the full set of build jobs, not just the extra ones.
Sure, works for me.
Regardless,
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
-- Andrea Bolognani / Red Hat / Virtualization
Regards,
Daniel

-- 
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|

On Thu, Mar 26, 2020 at 02:40:22PM +0100, Andrea Bolognani wrote:
On Thu, 2020-03-26 at 12:35 +0000, Daniel P. Berrangé wrote:
Currently we nine different cross build jobs
You accidentally a verb here :)
[...]
+# Extra cross build jobs that are only run post-merge, or
+# when code is pushed to a branch with "ci-extra-" name prefix
+.cross_build_extra_job_template: &cross_build_extra_job_definition
+  <<: *cross_build_default_job_definition
+  only:
+    - master
+    - /^ci-extra-.*$/
Can I suggest changing the expected prefix from ci-extra to ci-full? Because when you push to that branch you get the full set of build jobs, not just the extra ones.
Regardless,
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>

The pipeline UI will truncate the names of jobs after about 15
characters. As a result with the cross-builds, we truncate the most
important part of the job name. Putting the most important part first
is robust against truncation, and we can drop the redundant "-cross"
stub.

Reviewed-by: Erik Skultety <skultety.erik@gmail.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
---
 .gitlab-ci.yml | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)

diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index 5fa80a0458..7de450e37d 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -25,39 +25,39 @@ stages:
     - /^ci-extra-.*$/
 
 
-debian-9-cross-armv6l:
+armv6l-debian-9:
   <<: *cross_build_extra_job_definition
   image: quay.io/libvirt/buildenv-libvirt-debian-9-cross-armv6l:latest
 
-debian-9-cross-mips64el:
+mips64el-debian-9:
   <<: *cross_build_extra_job_definition
   image: quay.io/libvirt/buildenv-libvirt-debian-9-cross-mips64el:latest
 
-debian-9-cross-mips:
+mips-debian-9:
   <<: *cross_build_extra_job_definition
   image: quay.io/libvirt/buildenv-libvirt-debian-9-cross-mips:latest
 
-debian-10-cross-aarch64:
+aarch64-debian-10:
   <<: *cross_build_extra_job_definition
   image: quay.io/libvirt/buildenv-libvirt-debian-10-cross-aarch64:latest
 
-debian-10-cross-ppc64le:
+ppc64le-debian-10:
   <<: *cross_build_extra_job_definition
   image: quay.io/libvirt/buildenv-libvirt-debian-10-cross-ppc64le:latest
 
-debian-10-cross-s390x:
+s390x-debian-10:
   <<: *cross_build_default_job_definition
   image: quay.io/libvirt/buildenv-libvirt-debian-10-cross-s390x:latest
 
-debian-sid-cross-armv7l:
+armv7l-debian-sid:
   <<: *cross_build_default_job_definition
   image: quay.io/libvirt/buildenv-libvirt-debian-sid-cross-armv7l:latest
 
-debian-sid-cross-i686:
+i686-debian-sid:
   <<: *cross_build_extra_job_definition
   image: quay.io/libvirt/buildenv-libvirt-debian-sid-cross-i686:latest
 
-debian-sid-cross-mipsel:
+mipsel-debian-sid:
   <<: *cross_build_extra_job_definition
   image: quay.io/libvirt/buildenv-libvirt-debian-sid-cross-mipsel:latest
-- 
2.24.1
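The truncation argument can be seen with a toy sketch (the ~15-character cut-off is approximate, as the commit message says): with the arch first, the truncated label still identifies the build.

```python
LIMIT = 15  # approximate pipeline UI truncation point

def shown(name: str) -> str:
    """What the UI would display for a job name, roughly."""
    return name[:LIMIT]

# Old naming: the arch (the interesting part) is what gets cut off.
# New naming: the arch survives; only the distro tail is lost.
print(shown("debian-sid-cross-mips64el"))  # arch lost
print(shown("mips64el-debian-sid"))        # arch visible
```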

On Thu, 2020-03-26 at 12:35 +0000, Daniel P. Berrangé wrote:
The pipeline UI will truncate the names of jobs after about 15 characters. As a result with the cross-builds, we truncate the most important part of the job name. Putting the most important part first is robust against truncation, and we can drop the redundant "-cross" stub.
Reviewed-by: Erik Skultety <skultety.erik@gmail.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
---
 .gitlab-ci.yml | 18 +++++++++---------
 1 file changed, 9 insertions(+), 9 deletions(-)
Reviewed-by: Andrea Bolognani <abologna@redhat.com>

-- 
Andrea Bolognani / Red Hat / Virtualization

This pulls in the mingw cross build jobs using Fedora 30 as a base,
matching what is done on Jenkins and Travis.

Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
---
 .gitlab-ci.yml | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index 7de450e37d..631c447793 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -61,6 +61,15 @@ mipsel-debian-sid:
   <<: *cross_build_extra_job_definition
   image: quay.io/libvirt/buildenv-libvirt-debian-sid-cross-mipsel:latest
 
+mingw32-fedora-30:
+  <<: *cross_build_default_job_definition
+  image: quay.io/libvirt/buildenv-libvirt-fedora-30-cross-mingw32:latest
+
+mingw64-fedora-30:
+  <<: *cross_build_default_job_definition
+  image: quay.io/libvirt/buildenv-libvirt-fedora-30-cross-mingw64:latest
+
+
 # This artifact published by this job is downloaded by libvirt.org to
 # be deployed to the web root:
 # https://gitlab.com/libvirt/libvirt/-/jobs/artifacts/master/download?job=webs...
-- 
2.24.1

On Thu, 2020-03-26 at 12:35 +0000, Daniel P. Berrangé wrote:
This pulls in the mingw cross build jobs using Fedora 30 as a base, matching what is done on Jenkins and Travis.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
---
 .gitlab-ci.yml | 9 +++++++++
 1 file changed, 9 insertions(+)
Reviewed-by: Andrea Bolognani <abologna@redhat.com>

-- 
Andrea Bolognani / Red Hat / Virtualization

On Thu, Mar 26, 2020 at 12:35:33PM +0000, Daniel P. Berrangé wrote:
This pulls in the mingw cross build jobs using Fedora 30 as a base, matching what is done on Jenkins and Travis.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
---

Reviewed-by: Erik Skultety <eskultet@redhat.com>

This patch adds x86_64 native CI jobs for all distros that we
currently build container images for. This is a superset of the Linux
jobs run on current Jenkins and Travis platforms.

The remaining missing platforms are FreeBSD and macOS, neither of
which can use the shared runner container based infrastructure.

We may add further native jobs in the future which are not x86_64
based, if we get access to suitable hardware, thus the jobs all have
an arch prefix in their name, just like the cross-built jobs do.

As with the cross-arch builds, the native jobs are split into two
groups. One group is run in all situations, while the other group is
only run on the master branch, or branches with a name prefix
'ci-extra-'. This avoids the build time getting too long when
developers are testing their code prior to submission, while keeping
full coverage of code that is merged.

Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
---
 .gitlab-ci.yml | 70 ++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 70 insertions(+)

diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index 631c447793..85ab8424e1 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -4,9 +4,30 @@ variables:
 
 stages:
   - prebuild
+  - native_build
   - cross_build
 
 
+# Common templates
+
+# Default native build jobs that are always run
+.native_build_default_job_template: &native_build_default_job_definition
+  stage: native_build
+  script:
+    - mkdir build
+    - cd build
+    - ../autogen.sh $CONFIGURE_OPTS || (cat config.log && exit 1)
+    - $MAKE -j $(getconf _NPROCESSORS_ONLN) distcheck
+
+# Extra native build jobs that are only run post-merge, or
+# when code is pushed to a branch with "ci-extra-" name prefix
+.native_build_extra_job_template: &native_build_extra_job_definition
+  <<: *native_build_default_job_definition
+  only:
+    - master
+    - /^ci-extra-.*$/
+
+
 # Default cross build jobs that are always run
 .cross_build_default_job_template: &cross_build_default_job_definition
   stage: cross_build
@@ -25,6 +46,55 @@ stages:
     - /^ci-extra-.*$/
 
 
+# Native architecture build + test jobs
+
+x64-debian-9:
+  <<: *native_build_extra_job_definition
+  image: quay.io/libvirt/buildenv-libvirt-debian-9:latest
+
+x64-debian-10:
+  <<: *native_build_default_job_definition
+  image: quay.io/libvirt/buildenv-libvirt-debian-10:latest
+
+x64-debian-sid:
+  <<: *native_build_extra_job_definition
+  image: quay.io/libvirt/buildenv-libvirt-debian-sid:latest
+
+x64-centos-7:
+  <<: *native_build_default_job_definition
+  image: quay.io/libvirt/buildenv-libvirt-centos-7:latest
+
+x64-centos-8:
+  <<: *native_build_extra_job_definition
+  image: quay.io/libvirt/buildenv-libvirt-centos-8:latest
+
+x64-fedora-30:
+  <<: *native_build_default_job_definition
+  image: quay.io/libvirt/buildenv-libvirt-fedora-30:latest
+
+x64-fedora-31:
+  <<: *native_build_extra_job_definition
+  image: quay.io/libvirt/buildenv-libvirt-fedora-31:latest
+
+x64-fedora-rawhide:
+  <<: *native_build_default_job_definition
+  image: quay.io/libvirt/buildenv-libvirt-fedora-rawhide:latest
+
+x64-opensuse-151:
+  <<: *native_build_default_job_definition
+  image: quay.io/libvirt/buildenv-libvirt-opensuse-151:latest
+
+x64-ubuntu-1604:
+  <<: *native_build_default_job_definition
+  image: quay.io/libvirt/buildenv-libvirt-ubuntu-1604:latest
+
+x64-ubuntu-1804:
+  <<: *native_build_extra_job_definition
+  image: quay.io/libvirt/buildenv-libvirt-ubuntu-1804:latest
+
+
+# Cross compiled build jobs
+
 armv6l-debian-9:
   <<: *cross_build_extra_job_definition
   image: quay.io/libvirt/buildenv-libvirt-debian-9-cross-armv6l:latest
-- 
2.24.1

On Thu, Mar 26, 2020 at 12:35:34PM +0000, Daniel P. Berrangé wrote:
This patch adds x86_64 native CI jobs for all distros that we currently build container images for. This is a superset of the Linux jobs run on current Jenkins and Travis platforms.
The remaining missing platforms are FreeBSD and macOS, neither of which can use the shared runner container based infrastructure.
We may add further native jobs in the future which are not x86_64 based, if we get access to suitable hardware, thus the jobs all have an arch prefix in their name, just like the cross-built jobs do.
As with the cross-arch builds, the native jobs are split into two groups. One group is run in all situations, while the other group is only run on the master branch, or branches with a name prefix 'ci-extra-'. This avoids the build time getting too long when developers are testing their code prior to submission, while keeping full coverage of code that is merged.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
---
 .gitlab-ci.yml | 70 ++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 70 insertions(+)
diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index 631c447793..85ab8424e1 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -4,9 +4,30 @@ variables:
 stages:
   - prebuild
+  - native_build
   - cross_build
+# Common templates
+
+# Default native build jobs that are always run
+.native_build_default_job_template: &native_build_default_job_definition
+  stage: native_build
+  script:
+    - mkdir build
+    - cd build
+    - ../autogen.sh $CONFIGURE_OPTS || (cat config.log && exit 1)
+    - $MAKE -j $(getconf _NPROCESSORS_ONLN) distcheck
+
+# Extra native build jobs that are only run post-merge, or
+# when code is pushed to a branch with "ci-extra-" name prefix
+.native_build_extra_job_template: &native_build_extra_job_definition
+  <<: *native_build_default_job_definition
+  only:
+    - master
+    - /^ci-extra-.*$/
As Andrea commented a few patches back, ci-full is probably a better prefix.
+
+
 # Default cross build jobs that are always run
 .cross_build_default_job_template: &cross_build_default_job_definition
   stage: cross_build
@@ -25,6 +46,55 @@ stages:
     - /^ci-extra-.*$/
+# Native architecture build + test jobs
+
+x64-debian-9:
+  <<: *native_build_extra_job_definition
+  image: quay.io/libvirt/buildenv-libvirt-debian-9:latest
+
+x64-debian-10:
+  <<: *native_build_default_job_definition
+  image: quay.io/libvirt/buildenv-libvirt-debian-10:latest
+
+x64-debian-sid:
+  <<: *native_build_extra_job_definition
+  image: quay.io/libvirt/buildenv-libvirt-debian-sid:latest
+
+x64-centos-7:
+  <<: *native_build_default_job_definition
+  image: quay.io/libvirt/buildenv-libvirt-centos-7:latest
+
+x64-centos-8:
+  <<: *native_build_extra_job_definition
+  image: quay.io/libvirt/buildenv-libvirt-centos-8:latest
Shouldn't we actually prefer the newer distros over the older ones in terms of what runs on all branches vs what runs only on master + a dedicated ci- prefixed branch? At least that makes much more sense from the upstream POV. Everything else is just a nice-to-have.
+
+x64-fedora-30:
+  <<: *native_build_default_job_definition
+  image: quay.io/libvirt/buildenv-libvirt-fedora-30:latest
+
+x64-fedora-31:
+  <<: *native_build_extra_job_definition
+  image: quay.io/libvirt/buildenv-libvirt-fedora-31:latest
Same here...
+
+x64-fedora-rawhide:
+  <<: *native_build_default_job_definition
+  image: quay.io/libvirt/buildenv-libvirt-fedora-rawhide:latest
+
+x64-opensuse-151:
+  <<: *native_build_default_job_definition
+  image: quay.io/libvirt/buildenv-libvirt-opensuse-151:latest
+
+x64-ubuntu-1604:
+  <<: *native_build_default_job_definition
+  image: quay.io/libvirt/buildenv-libvirt-ubuntu-1604:latest
+
+x64-ubuntu-1804:
+  <<: *native_build_extra_job_definition
+  image: quay.io/libvirt/buildenv-libvirt-ubuntu-1804:latest
...and here...

With the distro versions swapped across the job definitions:

Reviewed-by: Erik Skultety <eskultet@redhat.com>

On Thu, 2020-03-26 at 17:31 +0100, Erik Skultety wrote:
Shouldn't we actually prefer the newer distros over the older ones in terms of what runs on all branches vs what runs only on master + a dedicated ci- prefixed branch? At least that makes much more sense from the upstream POV. Everything else is just a nice-to-have.
The idea is to have a decent mix of old (CentOS 7, Ubuntu 16.04), not so old (Debian 10, Fedora 30, openSUSE 15.1) and bleeding edge (Fedora Rawhide, plus Debian sid for one of the cross builds), as well as cover both native and cross (Linux and MinGW) builds.

Building on something like Fedora 31 is not as interesting as far as catching bugs early goes, because it's safe to assume that the developer is already using a fairly modern OS to build locally.

-- 
Andrea Bolognani / Red Hat / Virtualization

On Thu, Mar 26, 2020 at 05:31:48PM +0100, Erik Skultety wrote:
On Thu, Mar 26, 2020 at 12:35:34PM +0000, Daniel P. Berrangé wrote:
This patch adds x86_64 native CI jobs for all distros that we currently build container images for. This is a superset of the Linux jobs run on current Jenkins and Travis platforms.
The remaining missing platforms are FreeBSD and macOS, neither of which can use the shared runner container based infrastructure.
We may add further native jobs in the future which are not x86_64 based, if we get access to suitable hardware, thus the jobs all have an arch prefix in their name, just like the cross-built jobs do.
As with the cross-arch builds, the native jobs are split into two groups. One group is run in all situations, while the other group is only run on the master branch, or branches with a name prefix 'ci-extra-'. This avoids the build time getting too long when developers are testing their code prior to submission, while keeping full coverage of code that is merged.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
---
 .gitlab-ci.yml | 70 ++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 70 insertions(+)
diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index 631c447793..85ab8424e1 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -4,9 +4,30 @@ variables:
 stages:
   - prebuild
+  - native_build
   - cross_build
+# Common templates
+
+# Default native build jobs that are always run
+.native_build_default_job_template: &native_build_default_job_definition
+  stage: native_build
+  script:
+    - mkdir build
+    - cd build
+    - ../autogen.sh $CONFIGURE_OPTS || (cat config.log && exit 1)
+    - $MAKE -j $(getconf _NPROCESSORS_ONLN) distcheck
+
+# Extra native build jobs that are only run post-merge, or
+# when code is pushed to a branch with "ci-extra-" name prefix
+.native_build_extra_job_template: &native_build_extra_job_definition
+  <<: *native_build_default_job_definition
+  only:
+    - master
+    - /^ci-extra-.*$/
As Andrea commented a few patches back, ci-full is probably a better prefix.
Yes, already made this change.
+# Native architecture build + test jobs
+
+x64-debian-9:
+  <<: *native_build_extra_job_definition
+  image: quay.io/libvirt/buildenv-libvirt-debian-9:latest
+
+x64-debian-10:
+  <<: *native_build_default_job_definition
+  image: quay.io/libvirt/buildenv-libvirt-debian-10:latest
+
+x64-debian-sid:
+  <<: *native_build_extra_job_definition
+  image: quay.io/libvirt/buildenv-libvirt-debian-sid:latest
+
+x64-centos-7:
+  <<: *native_build_default_job_definition
+  image: quay.io/libvirt/buildenv-libvirt-centos-7:latest
+
+x64-centos-8:
+  <<: *native_build_extra_job_definition
+  image: quay.io/libvirt/buildenv-libvirt-centos-8:latest
Shouldn't we actually prefer the newer distros over the older ones in terms of what runs on all branches vs what runs only on master + a dedicated ci- prefixed branch? At least that makes much more sense from the upstream POV. Everything else is just a nice-to-have.
Counter-intuitively it is actually more important to have the oldest distros represented, as history has shown that these are the most likely ones to suffer build breakage. Our developers are most likely to be doing their own work on the latest versions of the distros thus detecting problems there. Thus I've intentionally not picked the latest versions of everything, and instead tried to get a mixture of vintages, with a bias to the very oldest and very newest, in the belief that the intersection of oldest & newest will detect issues on the middle vintage distros. CentOS v7 in particular has proved especially important to test as it is usually our oldest distro.
+x64-fedora-30:
+  <<: *native_build_default_job_definition
+  image: quay.io/libvirt/buildenv-libvirt-fedora-30:latest
+
+x64-fedora-31:
+  <<: *native_build_extra_job_definition
+  image: quay.io/libvirt/buildenv-libvirt-fedora-31:latest
Same here...
The combo of Fedora 30 and Fedora Rawhide will give good coverage of F31. The combo of F31 and Rawhide won't be good for F30.
+x64-fedora-rawhide:
+  <<: *native_build_default_job_definition
+  image: quay.io/libvirt/buildenv-libvirt-fedora-rawhide:latest
+
+x64-opensuse-151:
+  <<: *native_build_default_job_definition
+  image: quay.io/libvirt/buildenv-libvirt-opensuse-151:latest
+
+x64-ubuntu-1604:
+  <<: *native_build_default_job_definition
+  image: quay.io/libvirt/buildenv-libvirt-ubuntu-1604:latest
+
+x64-ubuntu-1804:
+  <<: *native_build_extra_job_definition
+  image: quay.io/libvirt/buildenv-libvirt-ubuntu-1804:latest
...and here...
I feel the Ubuntu 1804 vintage already has strong coverage across our other default distros, so we'll benefit more from Ubuntu 1604 coverage, which has less overlap.
With the distro versions swapped across the job definitions:

Reviewed-by: Erik Skultety <eskultet@redhat.com>
Regards,
Daniel

-- 
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|

Whenever there is a change to the translatable strings we need to push a new libvirt.pot to weblate. This only needs to be done when code merges into git master, so the job is restricted to that branch.

Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
---
 .gitlab-ci.yml | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index 85ab8424e1..53600c3a96 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -161,3 +161,28 @@ website:
     expire_in: 30 days
     paths:
       - website
+
+
+# This artifact published by this job is downloaded to push to Weblate
+# for translation usage:
+# https://gitlab.com/libvirt/libvirt/-/jobs/artifacts/master/download?job=potf...
+potfile:
+  stage: prebuild
+  only:
+    - master
+  script:
+    - mkdir build
+    - cd build
+    - ../autogen.sh || (cat config.log && exit 1)
+    - $MAKE -j $(getconf _NPROCESSORS_ONLN) -C src generated-sources
+    - $MAKE -j $(getconf _NPROCESSORS_ONLN) -C po libvirt.pot
+    - cd ..
+    - mv build/po/libvirt.pot libvirt.pot
+  image: quay.io/libvirt/buildenv-libvirt-fedora-31:latest
+  artifacts:
+    expose_as: 'Potfile'
+    name: 'potfile'
+    when: on_success
+    expire_in: 30 days
+    paths:
+      - libvirt.pot
-- 
2.24.1

For any given job there is a high liklihood that ccache will be able to reuse previously built object files. This will result in faster build pipelines in later updates.

Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
---
 .gitlab-ci.yml | 22 ++++++++++++++++++++++
 1 file changed, 22 insertions(+)

diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index 53600c3a96..d38672f260 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -13,6 +13,15 @@ stages:
 # Default native build jobs that are always run
 .native_build_default_job_template: &native_build_default_job_definition
   stage: native_build
+  cache:
+    paths:
+      - ccache/
+    key: "$CI_JOB_NAME"
+  before_script:
+    - mkdir -p ccache
+    - export CC="ccache gcc"
+    - export CCACHE_BASEDIR=${PWD}
+    - export CCACHE_DIR=${PWD}/ccache
   script:
     - mkdir build
     - cd build
@@ -31,6 +40,15 @@ stages:
 # Default cross build jobs that are always run
 .cross_build_default_job_template: &cross_build_default_job_definition
   stage: cross_build
+  cache:
+    paths:
+      - ccache/
+    key: "$CI_JOB_NAME"
+  before_script:
+    - mkdir -p ccache
+    - export CC="ccache ${ABI}-gcc"
+    - export CCACHE_BASEDIR=${PWD}
+    - export CCACHE_DIR=${PWD}/ccache
   script:
     - mkdir build
     - cd build
@@ -63,10 +81,14 @@ x64-debian-sid:
 x64-centos-7:
   <<: *native_build_default_job_definition
   image: quay.io/libvirt/buildenv-libvirt-centos-7:latest
+  # ccache isn't available
+  before_script:

 x64-centos-8:
   <<: *native_build_extra_job_definition
   image: quay.io/libvirt/buildenv-libvirt-centos-8:latest
+  # ccache isn't available
+  before_script:

 x64-fedora-30:
   <<: *native_build_default_job_definition
-- 
2.24.1

On Thu, 2020-03-26 at 12:35 +0000, Daniel P. Berrangé wrote:
For any given job there is a high liklihood that ccache will be able to
*likelihood [...]
.native_build_default_job_template: &native_build_default_job_definition
  stage: native_build
+  cache:
+    paths:
+      - ccache/
+    key: "$CI_JOB_NAME"
+  before_script:
+    - mkdir -p ccache
+    - export CC="ccache gcc"
+    - export CCACHE_BASEDIR=${PWD}
+    - export CCACHE_DIR=${PWD}/ccache
Having to set this up at the job level is kinda gross, and specifically the export CC="ccache gcc" trick is 1) not going to work for FreeBSD and 2) going to break Go builds. Ultimately I think we need to take a cue from what lcitool does when configuring VMs and generate a simple environment file that is baked into images and can be sourced from jobs with a single line. Anyway, that's a cleanup that we can easily perform later, so for the time being this will do. [...]
@@ -63,10 +81,14 @@ x64-debian-sid:
 x64-centos-7:
   <<: *native_build_default_job_definition
   image: quay.io/libvirt/buildenv-libvirt-centos-7:latest
+  # ccache isn't available
+  before_script:
 x64-centos-8:
   <<: *native_build_extra_job_definition
   image: quay.io/libvirt/buildenv-libvirt-centos-8:latest
+  # ccache isn't available
+  before_script:
Updated CentOS images that include ccache have already been generated, so this hunk is no longer necessary and should be dropped before pushing. Reviewed-by: Andrea Bolognani <abologna@redhat.com> -- Andrea Bolognani / Red Hat / Virtualization

On Thu, Mar 26, 2020 at 02:50:48PM +0100, Andrea Bolognani wrote:
On Thu, 2020-03-26 at 12:35 +0000, Daniel P. Berrangé wrote:
For any given job there is a high liklihood that ccache will be able to
*likelihood
[...]
.native_build_default_job_template: &native_build_default_job_definition
  stage: native_build
+  cache:
+    paths:
+      - ccache/
+    key: "$CI_JOB_NAME"
+  before_script:
+    - mkdir -p ccache
+    - export CC="ccache gcc"
+    - export CCACHE_BASEDIR=${PWD}
+    - export CCACHE_DIR=${PWD}/ccache
Having to set this up at the job level is kinda gross, and specifically the
export CC="ccache gcc"
trick is 1) not going to work for FreeBSD and 2) going to break Go builds.
That's easy enough to fix for FreeBSD - we just set CC=clang vs CC=gcc variable for the jobs at the top level, and then this code can change to CC="ccache $CC". We don't have any Go code in libvirt but what's the issue with ccache in that case ?
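For illustration, the top-level variable idea could look something like this in .gitlab-ci.yml (job names and layout here are just an example, not the actual config):

```yaml
# Each job picks the real compiler via a plain variable...
x64-freebsd-12:
  variables:
    CC: clang

x64-fedora-31:
  variables:
    CC: gcc

# ...and the shared template wraps whatever was chosen:
.native_build_default_job_template: &native_build_default_job_definition
  before_script:
    - mkdir -p ccache
    - export CC="ccache $CC"
    - export CCACHE_BASEDIR=${PWD}
    - export CCACHE_DIR=${PWD}/ccache
```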
Ultimately I think we need to take a cue from what lcitool does when configuring VMs and generate a simple environment file that is baked into images and can be sourced from jobs with a single line.
I much prefer to have the job configuration all in the gitlab config file rather than split between the gitlab config and the images, as it lets you understand the full setup.

We could however make the container images set up a link farm for ccache as we did with the VM images. So then the job just needs to set $PATH to point to the link farm bin dir, instead of $CC.

Regards, Daniel -- |: https://berrange.com -o- https://www.flickr.com/photos/dberrange :| |: https://libvirt.org -o- https://fstop138.berrange.com :| |: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|

On Thu, 2020-03-26 at 14:05 +0000, Daniel P. Berrangé wrote:
On Thu, Mar 26, 2020 at 02:50:48PM +0100, Andrea Bolognani wrote:
On Thu, 2020-03-26 at 12:35 +0000, Daniel P. Berrangé wrote:
.native_build_default_job_template: &native_build_default_job_definition
  stage: native_build
+  cache:
+    paths:
+      - ccache/
+    key: "$CI_JOB_NAME"
+  before_script:
+    - mkdir -p ccache
+    - export CC="ccache gcc"
+    - export CCACHE_BASEDIR=${PWD}
+    - export CCACHE_DIR=${PWD}/ccache
Having to set this up at the job level is kinda gross, and specifically the
export CC="ccache gcc"
trick is 1) not going to work for FreeBSD and 2) going to break Go builds.
That's easy enough to fix for FreeBSD - we just set CC=clang vs CC=gcc variable for the jobs at the top level, and then this code can change to CC="ccache $CC".
We don't have any Go code in libvirt but what's the issue with ccache in that case ?
Yeah, we were hitting it with libvirt-go{,-xml}, so it's not really relevant in the immediate term, but it's still something that we're going to have to deal with at some point.

The problem, IIRC, was that cgo will just break when $CC contains more than a single word, so the link farm approach works fine but the CC="ccache $CC" approach doesn't.
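As an aside, the "link farm" (ccache masquerade) setup being discussed can be sketched as follows; the wrapper directory path is an assumption for illustration, not what the images would actually use:

```shell
# Stand-in for the wrapper directory that would be baked into the
# container image (the real path is an assumption, not libvirt's choice)
WRAPPERS="$(mktemp -d)"

# ccache "masquerade" mode: a symlink named after the compiler makes
# plain "gcc" invocations go through ccache transparently
ln -s /usr/bin/ccache "$WRAPPERS/gcc"
ln -s /usr/bin/ccache "$WRAPPERS/cc"

# The job then only adjusts $PATH; $CC stays a single word, which
# avoids the multi-word-$CC problem that breaks cgo
export PATH="$WRAPPERS:$PATH"
export CCACHE_BASEDIR="$PWD"
export CCACHE_DIR="$PWD/ccache"
```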
Ultimately I think we need to take a cue from what lcitool does when configuring VMs and generate a simple environment file that is baked into images and can be sourced from jobs with a single line.
I much prefer to have the job configuration all in the gitlab config file rather than split between the gitlab config and the images, as it lets you understand the full setup.
I agree in theory, but 1) that specific ship has sailed when we started adding stuff like $LANG, $ABI and $CONFIGURE_OPTS in the container image's environment and 2) doing it at the .gitlab-ci.yml level will result in duplicating a lot of the logic that we already have in lcitool.
We could however make the container images setup a link farm for ccache as we did with the VM images. So then the job just needs to set the $PATH to point to the link farm bin dir, instead of $CC.
Yeah, in the long run we definitely want to build the link farm. -- Andrea Bolognani / Red Hat / Virtualization

On Thu, Mar 26, 2020 at 05:38:47PM +0100, Andrea Bolognani wrote:
On Thu, 2020-03-26 at 14:05 +0000, Daniel P. Berrangé wrote:
On Thu, Mar 26, 2020 at 02:50:48PM +0100, Andrea Bolognani wrote:
On Thu, 2020-03-26 at 12:35 +0000, Daniel P. Berrangé wrote:
.native_build_default_job_template: &native_build_default_job_definition
  stage: native_build
+  cache:
+    paths:
+      - ccache/
+    key: "$CI_JOB_NAME"
+  before_script:
+    - mkdir -p ccache
+    - export CC="ccache gcc"
+    - export CCACHE_BASEDIR=${PWD}
+    - export CCACHE_DIR=${PWD}/ccache
Having to set this up at the job level is kinda gross, and specifically the
export CC="ccache gcc"
trick is 1) not going to work for FreeBSD and 2) going to break Go builds.
That's easy enough to fix for FreeBSD - we just set CC=clang vs CC=gcc variable for the jobs at the top level, and then this code can change to CC="ccache $CC".
We don't have any Go code in libvirt but what's the issue with ccache in that case ?
Yeah, we were hitting it with libvirt-go{,-xml} so it's not really relevant in the immediate, but it's still something that we're going to have to deal with at some point.
The problem, IIRC, was that cgo will just break when $CC contains more than a single word, so the link farm approach works fine but the CC="ccache $CC" approach doesn't.
Ah yes, I remember that now.
Ultimately I think we need to take a cue from what lcitool does when configuring VMs and generate a simple environment file that is baked into images and can be sourced from jobs with a single line.
I much prefer to have the job configuration all in the gitlab config file rather than split between the gitlab config and the images, as it lets you understand the full setup.
I agree in theory, but 1) that specific ship has sailed when we started adding stuff like $LANG, $ABI and $CONFIGURE_OPTS in the container image's environment and 2) doing it at the .gitlab-ci.yml level will result in duplicating a lot of the logic that we already have in lcitool.
Setting $LANG makes sense because the container image build decides what locales are installed and so knows what $LANG must be used.

Similarly $ABI makes sense as that again is directly based off which compiler toolchain packages were installed.

In retrospect $CONFIGURE_OPTS was a mistake, because that only makes sense in the context of autotools usage and decisions about how the application will be built. So I'd remove this one too.

WRT duplication of logic, we only have that because we use the libvirt-jenkins-ci repo/tools to store the Jenkins job configuration separately from the application repos. As we phase out & eventually eliminate Jenkins, we will no longer have a need to store any build recipes in the libvirt-jenkins-ci repo/tools - they can focus exclusively on container/image build mgmt, and all the logic for actually building apps can live in the repos of those apps. This will be good as it eliminates more areas where we need to lock-step change the stuff in the libvirt-jenkins-ci repo vs the main code repos.

Regards, Daniel -- |: https://berrange.com -o- https://www.flickr.com/photos/dberrange :| |: https://libvirt.org -o- https://fstop138.berrange.com :| |: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|

On Fri, 2020-03-27 at 10:47 +0000, Daniel P. Berrangé wrote:
On Thu, Mar 26, 2020 at 05:38:47PM +0100, Andrea Bolognani wrote:
On Thu, 2020-03-26 at 14:05 +0000, Daniel P. Berrangé wrote:
On Thu, Mar 26, 2020 at 02:50:48PM +0100, Andrea Bolognani wrote:
Ultimately I think we need to take a cue from what lcitool does when configuring VMs and generate a simple environment file that is baked into images and can be sourced from jobs with a single line.
I much prefer to have the job configuration all in the gitlab config file rather than split between the gitlab config and the images, as it lets you understand the full setup.
I agree in theory, but 1) that specific ship has sailed when we started adding stuff like $LANG, $ABI and $CONFIGURE_OPTS in the container image's environment and 2) doing it at the .gitlab-ci.yml level will result in duplicating a lot of the logic that we already have in lcitool.
Setting $LANG makes sense because the container image build decides what locales are installed and so knows what $LANG must be used.
Similarly $ABI makes sense as that again is directly based off which compiler toolchain packages were installed.
In retrospect $CONFIGURE_OPTS was a mistake, because that only makes sense in the context of autotools usage and decisions about how the application will be built. So I'd remove this one too.
Yeah, I was never fond of $CONFIGURE_OPTS, and in fact I believe I argued against its inclusion at the time - although I have to admit that its presence makes some of the CI scaffolding much more terse.

Anyway, what I do *not* want to see is something along the lines of

  x86-freebsd-12:
    variables:
      MAKE: gmake
      CC: clang

in every .gitlab-ci.yml for every project under the libvirt umbrella: not only would that be ugly and obfuscate the actual build steps for the software, but it's nigh unmaintainable, as it would take dozens of commits spread across as many repositories to roll out even a completely trivial change.

Another question is, once we start doing cascading builds, what to do with stuff like (from the .bashrc template used by lcitool)

  # These search paths need to encode the OS architecture in some way
  # in order to work, so use the appropriate tool to obtain this
  # information and adjust them accordingly
  package_format="{{ package_format }}"
  if test "$package_format" = "deb"; then
      multilib=$(dpkg-architecture -q DEB_TARGET_MULTIARCH)
      export LD_LIBRARY_PATH="$VIRT_PREFIX/lib/$multilib:$LD_LIBRARY_PATH"
      export PKG_CONFIG_PATH="$VIRT_PREFIX/lib/$multilib/pkgconfig:$PKG_CONFIG_PATH"
      export GI_TYPELIB_PATH="$VIRT_PREFIX/lib/$multilib/girepository-1.0:$GI_TYPELIB_PATH"
  elif test "$package_format" = "rpm"; then
      multilib=$(rpm --eval '%{_lib}')
      export LD_LIBRARY_PATH="$VIRT_PREFIX/$multilib:$LD_LIBRARY_PATH"
      export PKG_CONFIG_PATH="$VIRT_PREFIX/$multilib/pkgconfig:$PKG_CONFIG_PATH"
      export GI_TYPELIB_PATH="$VIRT_PREFIX/$multilib/girepository-1.0:$GI_TYPELIB_PATH"
  fi

  # We need to ask Perl for this information, since it's used to
  # construct installation paths
  plarch=$(perl -e 'use Config; print $Config{archname}')
  export PERL5LIB="$VIRT_PREFIX/lib/perl5/$plarch"

  # For Python we need the version number (major and minor) and
  # to know whether "lib64" paths are searched
  pylib=lib
  if $PYTHON -c 'import sys; print("\n".join(sys.path))' | grep -q lib64; then
      pylib=lib64
  fi
  pyver=$($PYTHON -c 'import sys; print(".".join(map(lambda x: str(sys.version_info[x]), [0,1])))')
  export PYTHONPATH="$VIRT_PREFIX/$pylib/python$pyver/site-packages"

Will we just run all builds with --prefix=/usr and install stuff into the system search paths? In that case, we have to ensure that the user that's running the build inside the container has write access to those global paths, and also think about what this means for the FreeBSD runner.
WRT duplication of logic, we only have that because we use libvirt-jenkins-ci repo/tools to store the Jenkins job configuration separately from the application repos. As we phase out & eventually eliminate Jenkins, we will no longer have a need to store any build recipes in the libvirt-jenkins-ci repo/tools - they can focus exclusively on container/image built mgmt, and all the logic for actually building apps can live in the repos of those apps.
I was thinking more in terms of duplicating the logic that decides, for example, what name the ninja build system needs to be invoked as on a specific target, but you make a good point about the build rules: right now we have a set of shared templates that we reuse for all projects with similar build systems, but with the move to GitLab CI we'll end up duplicating a lot of that. It might not matter that much, because the build instructions are going to be simpler, but we might also consider an approach similar to https://salsa.debian.org/salsa-ci-team/pipeline#basic-use
This will be good as it eliminates more areas where we need to lock-step change the stuff in libvirt-jenkins-ci repo vs the main code repos.
In some cases that'll make things easier; in other cases, you're still going to have to change the libvirt-jenkins-ci repository to eg. alter the build environment in some way, then rebuild the images and change the build steps accordingly, except instead of having changes to the build environment and build recipe appear as two subsequent commits in the same repository, now they will be dozens of commits spread across as many repositories. -- Andrea Bolognani / Red Hat / Virtualization

On Fri, Mar 27, 2020 at 03:59:51PM +0100, Andrea Bolognani wrote:
Anyway, what I do *not* want to see is something along the lines of
x86-freebsd-12:
  variables:
    MAKE: gmake
    CC: clang
in every .gitlab-ci.yml for every project under the libvirt umbrella: not only that would be ugly and obfuscate the actual build steps for the software, but it's nigh unmaintainable as it would take dozens of commits spread across as many repositories to roll out even a completely trivial change.
Another question is, once we start doing cascading builds, what to do with stuff like (from the .bashrc template used by lcitool)
I don't think we will do cascading builds in the way we've done in Jenkins, because there was a lot of pointless redundancy in our setup, resulting in us testing the wrong things.

Take the Go binding for example. Go doesn't have the same kind of portability issues that C does, so testing the compile across the many distros is not directly needed. Similarly we only ever tested it against the latest libvirt git master, despite the code being able to compile against many older versions.

So the two dimensions for Go that we actually need are testing against multiple Go versions, and testing against multiple libvirt versions.

Testing against multiple distros is a crude indirect way of testing several Go versions, without us actually understanding which versions we really are testing.

What we did in the Travis config for Go was much more useful in what dimensions it tested:

https://gitlab.com/libvirt/libvirt-go/-/blob/master/.travis.yml

The same applies for the other language bindings too.

The other reason to not try to chain up builds is that it doesn't align with the forking model of contribution. If someone does a fork of the libvirt-go binding, they want to be able to run tests on that in isolation. They shouldn't have to first do a fork of libvirt and run a build, in order to then run builds on the Go binding. So each project's .gitlab-ci.yml needs to be independent of other projects / self-contained in what it builds / tests.

Where we do need chaining is to trigger these builds. i.e. when libvirt changes hit master, we want to trigger pipelines in any dependent projects to validate that they're not seeing a regression. GitLab has a way to configure pipeline triggers to do this.
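GitLab's multi-project pipeline triggers could, for example, be expressed as a bridge job like this (the job name, stage, and downstream project path are assumptions for the sketch, not a proposed config):

```yaml
# Hypothetical bridge job in libvirt.git's .gitlab-ci.yml: when changes
# land on master, start the libvirt-go pipeline to surface regressions.
trigger-libvirt-go:
  stage: trigger
  only:
    - master
  trigger:
    project: libvirt/libvirt-go
    branch: master
```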
It might not matter that much, because the build instructions are going to be simpler, but we might also consider an approach similar to
https://salsa.debian.org/salsa-ci-team/pipeline#basic-use
This will be good as it eliminates more areas where we need to lock-step change the stuff in libvirt-jenkins-ci repo vs the main code repos.
In some cases that'll make things easier; in other cases, you're still going to have to change the libvirt-jenkins-ci repository to eg. alter the build environment in some way, then rebuild the images and change the build steps accordingly, except instead of having changes to the build environment and build recipe appear as two subsequent commits in the same repository, now they will be dozens of commits spread across as many repositories.
Eventually I'd like to get the container image builds into the main repos too. ie instead of libvirt-dockerfiles.git, we should commit the dockerfiles into each project's git repo. The GitLab CI job can generate (and cache) the container images directly, avoiding a need for us to send builds via quay.io separately. Regards, Daniel -- |: https://berrange.com -o- https://www.flickr.com/photos/dberrange :| |: https://libvirt.org -o- https://fstop138.berrange.com :| |: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|

On Fri, 2020-03-27 at 17:20 +0000, Daniel P. Berrangé wrote:
On Fri, Mar 27, 2020 at 03:59:51PM +0100, Andrea Bolognani wrote:
Another question is, once we start doing cascading builds, what to do with stuff like (from the .bashrc template used by lcitool)
I don't think we will do cascading builds in the way we've done in Jenkins, because there was a lot of pointless redundancy in our setup, resulting in us testing the wrong things.
Take the Go binding for example. Go doesn't have the same kind of portability issues that C does, so testing the compile across the many distros is not directly needed. Similarly we only ever tested it against the latest libvirt git master, despite the code being able to compile against many older versions.
So the two dimensions for Go that we actually need are testing against multiple Go versions, and testing against multiple libvirt versions.
Testing against multiple distros is a crude indirect way of testing several Go versions, without us actually understanding which versions we really are testing.
Agreed that we could be smarter and more comprehensive in what we test, especially when it comes to language bindings; at the same time it's useful to test against the latest codebase for the various dependencies, so we should make sure we don't lose that coverage.
What we did in the Travis config for Go was much more useful in what dimensions it tested:
https://gitlab.com/libvirt/libvirt-go/-/blob/master/.travis.yml
The same applies for the other language bindings too.
The other reason to not try to chain up builds is that it doesn't align with the forking model of contribution. If someone does a fork of the libvirt-go binding, they want to be able to run tests on that in isolation. They shouldn't have to first do a fork of libvirt and run a build, in order to then run builds on the Go binding.
Of course that wouldn't be acceptable.

So far I'm aware of two approaches for chaining, one of which is currently in use and the other one which IIUC was prototyped but never actually deployed:

* the CI job for each project includes build instructions for all projects it depends on, eg. the libvirt-dbus job would start by fetching, building and installing libvirt, then moving on to doing the same for libvirt-glib, then finally get to building and testing libvirt-dbus itself. This is the approach libosinfo is currently using;

* the CI job for each project would result in a container image that has the same contents as the one used for building, plus a complete installation of the project itself, eg. the libvirt job would generate an image that has libvirt installed, the libvirt-glib job would use that image and generate one that has both libvirt and libvirt-glib installed, and finally libvirt-dbus would use this last image as build environment.

If I understand correctly, you're suggesting a third approach:

* the CI job for each project uses an image that contains all its dependencies, including the ones that are maintained under the libvirt umbrella, installed from distro packages.

Did I get that right? Or did you have something else in mind?
Where we do need chaining is to trigger these builds. i.e. when libvirt changes hit master, we want to trigger pipelines in any dependent projects to validate that they're not seeing a regression. GitLab has a way to configure pipeline triggers to do this.
I'm not sure how this part would fit into the rest, but let's just ignore it for the moment O:-)
In some cases that'll make things easier; in other cases, you're still going to have to change the libvirt-jenkins-ci repository to eg. alter the build environment in some way, then rebuild the images and change the build steps accordingly, except instead of having changes to the build environment and build recipe appear as two subsequent commits in the same repository, now they will be dozens of commits spread across as many repositories.
Eventually I'd like to get the container image builds into the main repos too. ie instead of libvirt-dockerfiles.git, we should commit the dockerfiles into each project's git repo. The GitLab CI job can generate (and cache) the container images directly, avoiding a need for us to send builds via quay.io separately.
This will again result in the situation where a single update to lcitool might result in a couple dozen commits to a couple dozen repositories, but since it will be entirely mechanical and likely fall under the same "Dockerfile update rule" as pushes to the libvirt-dockerfiles repo currently fall under, I think it should be reasonably manageable.

Will the container images built this way be made available outside of the GitLab CI infrastructure? We still want people to be able to run 'make ci-build@...' locally.

Will the GitLab registry allow us to store a lot of images? We currently have 38 for libvirt alone, and if we're going to build new ones for all the sub-projects then we'll get to the hundreds really quickly...

-- Andrea Bolognani / Red Hat / Virtualization

On Mon, Mar 30, 2020 at 12:11:19PM +0200, Andrea Bolognani wrote:
On Fri, 2020-03-27 at 17:20 +0000, Daniel P. Berrangé wrote:
On Fri, Mar 27, 2020 at 03:59:51PM +0100, Andrea Bolognani wrote:
Another question is, once we start doing cascading builds, what to do with stuff like (from the .bashrc template used by lcitool)
I don't think we will do cascading builds in the way we've done in Jenkins, because there was a lot of pointless redundancy in our setup, resulting in us testing the wrong things.
Take the Go binding for example. Go doesn't have the same kind of portability issues that C does, so testing the compile across the many distros is not directly needed. Similarly we only ever tested it against the latest libvirt git master, despite the code being able to compile against many older versions.
So the two dimensions for Go that we actually need are testing against multiple Go versions, and testing against multiple libvirt versions.
Testing against multiple distros is a crude indirect way of testing several Go versions, without us actually understanding which versions we really are testing.
Agreed that we could be smarter and more comprehensive in what we test, especially when it comes to language bindings; at the same time it's useful to test against the latest codebase for the various dependencies, so we should make sure we don't lose that coverage.
What we did in the Travis config for Go was much more useful in what dimensions it tested:
https://gitlab.com/libvirt/libvirt-go/-/blob/master/.travis.yml
The same applies for the other language bindings too.
The other reason to not try to chain up builds is that it doesn't align with the forking model of contribution. If someone does a fork of the libvirt-go binding, they want to be able to run tests on that in isolation. They shouldn't have to first do a fork of libvirt and run a build, in order to then run builds on the Go binding.
Of course that wouldn't be acceptable.
So far I'm aware of two approaches for chaining, one of which is currently in use and the other one which IIUC was prototyped but never actually deployed:
* the CI job for each project includes build instructions for all projects it depends on, eg. the libvirt-dbus job would start by fetching, building and installing libvirt, then moving on to doing the same for libvirt-glib, then finally get to building and testing libvirt-dbus itself. This is the approach libosinfo is currently using;
* the CI job for each project would result in a container image that has the same contents as the one used for building, plus a complete installation of the project itself, eg. the libvirt job would generate an image that has libvirt installed, the libvirt-glib job would use that image and generate one that has both libvirt and libvirt-glib installed, and finally libvirt-dbus would use this last image as build environment.
If I understand correctly, you're suggesting a third approach:
* the CI job for each project uses an image that contains all its dependencies, including the ones that are maintained under the libvirt umbrella, installed from distro packages.
Did I get that right? Or did you have something else in mind?
I'm suggesting option 1 and/or 3, depending on the support scenario. In the cases where the project needs to test against libvirt git master, it should clone and build libvirt.git, and then build itself against that. In the case where the project needs to test against existing releases in distros, it should have container images that include the pre-built libvirt. The Perl binding only supports building against libvirt Git, so option 1 is sufficient. The Go & Python bindings support building against historic versions, so options 1 & 3 are both needed.
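A rough sketch of what an "option 1" job could look like in a binding's .gitlab-ci.yml (the job name, install prefix, and build commands are illustrative assumptions, not an agreed recipe):

```yaml
# Hypothetical libvirt-go job: build libvirt from git master first,
# then build and test the binding against that fresh install.
build-against-git-master:
  script:
    - git clone --depth 100 https://gitlab.com/libvirt/libvirt.git
    - cd libvirt && mkdir build && cd build
    - ../autogen.sh --prefix=$HOME/usr
    - make -j $(getconf _NPROCESSORS_ONLN) install
    - cd ../..
    - export PKG_CONFIG_PATH=$HOME/usr/lib/pkgconfig
    - go build
    - go test
```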
Where we do need chaining is to trigger these builds. i.e. when libvirt changes hit master, we want to trigger pipelines in any dependent projects to validate that they're not seeing a regression. GitLab has a way to configure pipeline triggers to do this.
I'm not sure how this part would fit into the rest, but let's just ignore it for the moment O:-)
Consider that the builds are self-contained: libvirt-python CI gets triggered when a change is committed to libvirt-python.git. We also need to have CI triggered in libvirt-python when a change is committed to libvirt.git, so we need to use the GitLab triggers for this.
In some cases that'll make things easier; in other cases, you're still going to have to change the libvirt-jenkins-ci repository to eg. alter the build environment in some way, then rebuild the images and change the build steps accordingly, except instead of having changes to the build environment and build recipe appear as two subsequent commits in the same repository, now they will be dozens of commits spread across as many repositories.
Eventually I'd like to get the container image biulds into the main repos too. ie instead of libvirt-dockerfiles.git, we should commit the dockerfiles into each project's git repo. The GitLab CI job can generate (and cache) the container images directly, avoiding a need for us to send builds via quay.io separately.
This will again result in the situation where a single update to lcitool might result in a couple dozen commits to a couple dozen repositories, but since it will be entirely mechanical and likely fall under the same "Dockerfile update rule" as pushes to the libvirt-dockerfiles repo currently fall under, I think it should be reasonably manageable.
Will the container images built this way be made available outside of the GitLab CI infrastructure? We still want people to be able to run 'make ci-build@...' locally.
I believe that container images built in GitLab are made publicly accessible, but I've not validated this myself yet. Agreed on your point that we need to continue supporting local builds like this.
Will the GitLab registry allow us to store a lot of images? We currently have 38 for libvirt alone, and if we're going to build new ones for all the sub-projects then we'll get to the hundreds really quickly...
Again I've not proved anything, but in general the GitLab.com instance does not appear to have applied any limits to projects that are made public under OSS licenses. If we did hit any container limit then we'd have to continue with quay.io for this purpose. Regards, Daniel -- |: https://berrange.com -o- https://www.flickr.com/photos/dberrange :| |: https://libvirt.org -o- https://fstop138.berrange.com :| |: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|

On Thu, Mar 26, 2020 at 12:35:36PM +0000, Daniel P. Berrangé wrote:
For any given job there is a high likelihood that ccache will be able to reuse previously built object files. This will result in faster build pipelines on subsequent runs.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
---

Reviewed-by: Erik Skultety <eskultet@redhat.com>
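For context, a GitLab CI job typically wires ccache in roughly like this. This is a hypothetical sketch, not the configuration from the patch itself; the template name, cache key, and environment variables are assumptions for illustration:

```yaml
# Hypothetical sketch of per-job ccache integration in .gitlab-ci.yml.
.native_build_job:
  cache:
    # Persist the ccache directory between pipeline runs, one cache per job
    key: "$CI_JOB_NAME"
    paths:
      - ccache/
  before_script:
    - mkdir -p ccache
    - export CCACHE_BASEDIR="$PWD"
    - export CCACHE_DIR="$PWD/ccache"
    # Fedora/Debian ship compiler symlinks here that dispatch through ccache
    - export PATH="/usr/lib/ccache:$PATH"
```

With the cache restored, repeated pipelines recompile only the objects whose inputs actually changed.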

This introduces a CI job for validating DCO sign-off in every commit message. The CI jobs are not provided any information on what the baseline commit for the branch was. We can't compare against the forked repo's master branch, as there's no guarantee the user is keeping master up to date in their fork. Thus we add the upstream repo as a git remote and identify the common ancestor.

Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
---
 .gitlab-ci.yml         | 14 ++++++
 scripts/require-dco.py | 96 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 110 insertions(+)
 create mode 100755 scripts/require-dco.py

diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index d38672f260..965db22d62 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -208,3 +208,17 @@ potfile:
     expire_in: 30 days
     paths:
       - libvirt.pot
+
+
+# Check that all commits are signed-off for the DCO. Skip
+# on master branch and -maint branches, since we only need
+# to test developer's personal branches.
+dco:
+  stage: prebuild
+  image: quay.io/libvirt/buildenv-libvirt-fedora-31:latest
+  script:
+    - ./scripts/require-dco.py
+  only:
+    - branches
+  except:
+    - /^v.*-maint$/
diff --git a/scripts/require-dco.py b/scripts/require-dco.py
new file mode 100755
index 0000000000..3b642d6679
--- /dev/null
+++ b/scripts/require-dco.py
@@ -0,0 +1,96 @@
+#!/usr/bin/env python3
+
+# require-dco.py: validate all commits are signed off
+#
+# Copyright (C) 2020 Red Hat, Inc.
+#
+# This library is free software; you can redistribute it and/or
+# modify it under the terms of the GNU Lesser General Public
+# License as published by the Free Software Foundation; either
+# version 2.1 of the License, or (at your option) any later version.
+#
+# This library is distributed in the hope that it will be useful,
+# but WITHOUT ANY WARRANTY; without even the implied warranty of
+# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+# Lesser General Public License for more details.
+#
+# You should have received a copy of the GNU Lesser General Public
+# License along with this library.  If not, see
+# <http://www.gnu.org/licenses/>.
+
+import os
+import os.path
+import sys
+import subprocess
+
+cwd = os.getcwd()
+reponame = os.path.basename(cwd)
+repourl = "https://gitlab.com/libvirt/%s.git" % reponame
+
+subprocess.check_call(["git", "remote", "add", "dcocheck", repourl])
+subprocess.check_call(["git", "fetch", "dcocheck", "master"],
+                      stdout=subprocess.DEVNULL,
+                      stderr=subprocess.DEVNULL)
+
+ancestor = subprocess.check_output(["git", "merge-base", "dcocheck/master", "HEAD"],
+                                   universal_newlines=True)
+
+ancestor = ancestor.strip()
+
+subprocess.check_call(["git", "remote", "rm", "dcocheck"])
+
+errors = False
+
+print("\nChecking for 'Signed-off-by: NAME <EMAIL>' on all commits since %s...\n" % ancestor)
+
+log = subprocess.check_output(["git", "log", "--format=%H %s", ancestor + "..."],
+                              universal_newlines=True)
+
+commits = [[c[0:40], c[41:]] for c in log.strip().split("\n")]
+
+for sha, subject in commits:
+
+    msg = subprocess.check_output(["git", "show", "-s", sha],
+                                  universal_newlines=True)
+    lines = msg.strip().split("\n")
+
+    print("🔍 %s %s" % (sha, subject))
+    sob = False
+    for line in lines:
+        if "Signed-off-by:" in line:
+            sob = True
+            if "localhost" in line:
+                print("  ❌ FAIL: bad email in %s" % line)
+                errors = True
+
+    if not sob:
+        print("  ❌ FAIL missing Signed-off-by tag")
+        errors = True
+
+if errors:
+    print("""
+
+❌ ERROR: One or more commits are missing a valid Signed-off-By tag.
+
+
+This project requires all contributors to assert that their contributions
+are provided in compliance with the terms of the Developer's Certificate
+of Origin 1.1 (DCO):
+
+  https://developercertificate.org/
+
+To indicate acceptance of the DCO every commit must have a tag
+
+  Signed-off-by: REAL NAME <EMAIL>
+
+This can be achieved by passing the "-s" flag to the "git commit" command.
+
+To bulk update all commits on current branch "git rebase" can be used:
+
+  git rebase -i master -x 'git commit --amend --no-edit -s'
+
+""")
+
+    sys.exit(1)
+
+sys.exit(0)
--
2.24.1
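The per-commit validation performed by require-dco.py can be distilled into a standalone function for experimentation. This is a minimal sketch of the same logic, not part of the actual script; the helper name `has_valid_signoff` is hypothetical:

```python
def has_valid_signoff(message: str) -> bool:
    """Accept a commit message only if it carries a Signed-off-by line
    whose email is not a localhost address (which the script treats as
    a misconfigured git identity)."""
    for line in message.strip().split("\n"):
        if "Signed-off-by:" in line:
            return "localhost" not in line
    return False


print(has_valid_signoff("fix bug\n\nSigned-off-by: Jane Dev <jane@example.com>"))  # True
print(has_valid_signoff("fix bug\n\nSigned-off-by: root <root@localhost>"))        # False
print(has_valid_signoff("fix bug\n\nno tag at all"))                               # False
```

Unlike the full script, this sketch stops at the first Signed-off-by line rather than scanning every line of the message.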

On Thu, 2020-03-26 at 12:35 +0000, Daniel P. Berrangé wrote: [...]
+for sha, subject in commits:
+
+    msg = subprocess.check_output(["git", "show", "-s", sha],
+                                  universal_newlines=True)
+    lines = msg.strip().split("\n")
+
+    print("🔍 %s %s" % (sha, subject))
I could personally live without the emoji...
+    sob = False
+    for line in lines:
+        if "Signed-off-by:" in line:
+            sob = True
+            if "localhost" in line:
+                print("  ❌ FAIL: bad email in %s" % line)
+                errors = True
... but if you absolutely must have them, at least don't try to mess with indentation - aligning text and emoji is basically never going to work reliably anyway. Please consider applying the diff at the end of this message if you think dropping the emoji is not an option.

Anyway, the rest looks good, so as long as you at least remove that leading whitespace

Reviewed-by: Andrea Bolognani <abologna@redhat.com>

diff --git a/scripts/require-dco.py b/scripts/require-dco.py
index 3b642d6679..cb057e48b3 100755
--- a/scripts/require-dco.py
+++ b/scripts/require-dco.py
@@ -54,18 +54,20 @@ for sha, subject in commits:
                                   universal_newlines=True)
     lines = msg.strip().split("\n")

-    print("🔍 %s %s" % (sha, subject))
+    print(" %s %s " % (sha, subject), end="")
     sob = False
     for line in lines:
         if "Signed-off-by:" in line:
             sob = True
             if "localhost" in line:
-                print("  ❌ FAIL: bad email in %s" % line)
+                print("❌ (FAIL: bad email in %s)" % line)
                 errors = True

     if not sob:
-        print("  ❌ FAIL missing Signed-off-by tag")
+        print("❌ (FAIL: missing Signed-off-by tag)")
         errors = True
+    else:
+        print("✅")

 if errors:
     print("""

--
Andrea Bolognani / Red Hat / Virtualization

On Thu, Mar 26, 2020 at 06:06:37PM +0100, Andrea Bolognani wrote:
On Thu, 2020-03-26 at 12:35 +0000, Daniel P. Berrangé wrote: [...]
+for sha, subject in commits:
+
+    msg = subprocess.check_output(["git", "show", "-s", sha],
+                                  universal_newlines=True)
+    lines = msg.strip().split("\n")
+
+    print("🔍 %s %s" % (sha, subject))
I could personally live without the emoji...
They are to make the important lines stand out more from the rest of the log messages related to the CI job.
+    sob = False
+    for line in lines:
+        if "Signed-off-by:" in line:
+            sob = True
+            if "localhost" in line:
+                print("  ❌ FAIL: bad email in %s" % line)
+                errors = True
... but if you absolutely must have them, at least don't try to mess with indentation - aligning text and emoji is basically never going to work reliably anyway. Please consider applying the diff at the end of this message if you think dropping the emoji is not an option.
I wasn't actually trying to align them - the fail lines are intentionally indented, though not by enough.
Anyway, the rest looks good, so as long as you at least remove that leading whitespace
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
diff --git a/scripts/require-dco.py b/scripts/require-dco.py
index 3b642d6679..cb057e48b3 100755
--- a/scripts/require-dco.py
+++ b/scripts/require-dco.py
@@ -54,18 +54,20 @@ for sha, subject in commits:
                                   universal_newlines=True)
     lines = msg.strip().split("\n")

-    print("🔍 %s %s" % (sha, subject))
+    print(" %s %s " % (sha, subject), end="")
     sob = False
     for line in lines:
         if "Signed-off-by:" in line:
             sob = True
             if "localhost" in line:
-                print("  ❌ FAIL: bad email in %s" % line)
+                print("❌ (FAIL: bad email in %s)" % line)
                 errors = True

     if not sob:
-        print("  ❌ FAIL missing Signed-off-by tag")
+        print("❌ (FAIL: missing Signed-off-by tag)")
This puts all the messages on one line, which results in long lines when the commit message is already long. This is why I put the fails on separate indented lines below the check message.
         errors = True
+    else:
+        print("✅")

 if errors:
     print("""

--
Andrea Bolognani / Red Hat / Virtualization
Regards,
Daniel

On Fri, 2020-03-27 at 10:53 +0000, Daniel P. Berrangé wrote:
On Thu, Mar 26, 2020 at 06:06:37PM +0100, Andrea Bolognani wrote:
On Thu, 2020-03-26 at 12:35 +0000, Daniel P. Berrangé wrote:
+            if "localhost" in line:
+                print("  ❌ FAIL: bad email in %s" % line)
+                errors = True
... but if you absolutely must have them, at least don't try to mess with indentation - aligning text and emoji is basically never going to work reliably anyway. Please consider applying the diff at the end of this message if you think dropping the emoji is not an option.
I wasn't actually trying to align them - the fail lines are intentionally indented, though not by enough.
Then indent them by 4 spaces or more. Anything less than that will just look bad.

--
Andrea Bolognani / Red Hat / Virtualization

On Fri, Mar 27, 2020 at 12:51:23PM +0100, Andrea Bolognani wrote:
On Fri, 2020-03-27 at 10:53 +0000, Daniel P. Berrangé wrote:
On Thu, Mar 26, 2020 at 06:06:37PM +0100, Andrea Bolognani wrote:
On Thu, 2020-03-26 at 12:35 +0000, Daniel P. Berrangé wrote:
+            if "localhost" in line:
+                print("  ❌ FAIL: bad email in %s" % line)
+                errors = True
... but if you absolutely must have them, at least don't try to mess with indentation - aligning text and emoji is basically never going to work reliably anyway. Please consider applying the diff at the end of this message if you think dropping the emoji is not an option.
I wasn't actually trying to align them - the fail lines are intentionally indented, though not by enough.
Then indent them by 4 spaces or more. Anything less than that will just look bad.
Yep, will do.

Regards,
Daniel

Running the code style syntax-check as part of the build jobs leads to all jobs failing in the same way. Have a prebuild job for validating syntax-check to catch code style problems upfront and thus avoid needing to run all the build jobs.

Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
---
 .gitlab-ci.yml | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/.gitlab-ci.yml b/.gitlab-ci.yml
index 965db22d62..9ef7ad0325 100644
--- a/.gitlab-ci.yml
+++ b/.gitlab-ci.yml
@@ -185,6 +185,16 @@ website:
       - website


+codestyle:
+  stage: prebuild
+  script:
+    - mkdir build
+    - cd build
+    - ../autogen.sh --prefix=$(pwd)/../vroot || (cat config.log && exit 1)
+    - $MAKE -j $(getconf _NPROCESSORS_ONLN) syntax-check
+  image: quay.io/libvirt/buildenv-libvirt-fedora-31:latest
+
+
 # This artifact published by this job is downloaded to push to Weblate
 # for translation usage:
 #   https://gitlab.com/libvirt/libvirt/-/jobs/artifacts/master/download?job=potf...
--
2.24.1

On Thu, 2020-03-26 at 12:35 +0000, Daniel P. Berrangé wrote: [...]
+codestyle:
check-codestyle, maybe?
+  stage: prebuild
+  script:
+    - mkdir build
+    - cd build
+    - ../autogen.sh --prefix=$(pwd)/../vroot || (cat config.log && exit 1)
Setting --prefix is not necessary for this job.

With that dropped,

Reviewed-by: Andrea Bolognani <abologna@redhat.com>

--
Andrea Bolognani / Red Hat / Virtualization

On Thu, Mar 26, 2020 at 03:41:14PM +0100, Andrea Bolognani wrote:
On Thu, 2020-03-26 at 12:35 +0000, Daniel P. Berrangé wrote: [...]
+codestyle:
check-codestyle, maybe?
Shorter names are better for the gitlab pipeline UI display, so rather not add a generic prefix to it.
+  stage: prebuild
+  script:
+    - mkdir build
+    - cd build
+    - ../autogen.sh --prefix=$(pwd)/../vroot || (cat config.log && exit 1)
Setting --prefix is not necessary for this job.
Oops, yeah.
With that dropped,
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Regards,
Daniel

On Thu, Mar 26, 2020 at 03:41:14PM +0100, Andrea Bolognani wrote:
On Thu, 2020-03-26 at 12:35 +0000, Daniel P. Berrangé wrote: [...]
+codestyle:
check-codestyle, maybe?
+  stage: prebuild
+  script:
+    - mkdir build
+    - cd build
+    - ../autogen.sh --prefix=$(pwd)/../vroot || (cat config.log && exit 1)
Setting --prefix is not necessary for this job.
With that dropped,
Reviewed-by: Andrea Bolognani <abologna@redhat.com>
Reviewed-by: Erik Skultety <eskultet@redhat.com>

--
Erik Skultety