On Tue, Feb 01, 2022 at 12:36:11PM +0000, Daniel P. Berrangé wrote:
On Mon, Jan 31, 2022 at 07:01:01PM +0100, Erik Skultety wrote:
> Create an integration child pipeline in this stage which will trigger a
> multi-project CI build of Perl bindings which are required by the TCK
> test suite.
> In general, this stage will install all the necessary build artifacts
> and configure logging on the worker node prior to executing the actual
> test suite. In case of a failure, libvirt and Avocado logs are saved
> and published as artifacts.
>
> Signed-off-by: Erik Skultety <eskultet@redhat.com>
> ---
> .gitlab-ci-integration.yml | 116 +++++++++++++++++++++++++++++++++++++
> .gitlab-ci.yml | 14 +++++
> 2 files changed, 130 insertions(+)
> create mode 100644 .gitlab-ci-integration.yml
>
> diff --git a/.gitlab-ci-integration.yml b/.gitlab-ci-integration.yml
> new file mode 100644
> index 0000000000..cabefc5166
> --- /dev/null
> +++ b/.gitlab-ci-integration.yml
> @@ -0,0 +1,116 @@
> +stages:
> + - bindings
> + - integration
> +
> +.tests:
> + stage: integration
> + before_script:
> + - mkdir "$SCRATCH_DIR"
> + - sudo dnf install -y libvirt-rpms/* libvirt-perl-rpms/*
> + - sudo pip3 install --prefix=/usr avocado-framework
> + - source /etc/os-release # in order to query the vendor-provided variables
> + - if test "$ID" == "centos" && test "$VERSION_ID" -lt 9 ||
> +      test "$ID" == "fedora" && test "$VERSION_ID" -lt 35;
> +   then
> +     DAEMONS="libvirtd virtlogd virtlockd";
> +   else
> +     DAEMONS="virtproxyd virtqemud virtinterfaced virtsecretd virtstoraged virtnwfilterd virtnodedevd virtlogd virtlockd";
> +   fi
> + - for daemon in $DAEMONS;
> +   do
> +     sudo sed -Ei "s/^(#)?(log_outputs=).*/\2'1:file:\/var\/log\/libvirt\/${daemon}.log'/" /etc/libvirt/${daemon}.conf;
> +     sudo sed -Ei "s/^(#)?(log_filters=).*/\2'4:*object* 4:*json* 4:*event* 4:*rpc* 4:daemon.remote 4:util.threadjob 4:*access* 1:*'/" /etc/libvirt/${daemon}.conf;
> +     sudo systemctl --quiet stop ${daemon}.service;
> +     sudo systemctl restart ${daemon}.socket;
> +   done
> + - sudo virsh net-start default &>/dev/null || true;

What context is all this being run in?
There's no docker image listed for the job, and I see the 'tags'
referring to VMs. Is this really installing stuff as root in the VM
runners?
Basically I'm not seeing where any of this work is cleaned up to give a
pristine environment for the next pipeline, or what happens if two
pipelines run concurrently?

- all of ^this runs in the VM runner and yes, the gitlab-runner user has
  passwordless sudo ONLY inside the VM
  => note that the same is not true for the host where the gitlab-runner
     agent runs and dispatches jobs to the VMs; there it has no special
     permissions and can essentially just SSH into the VM; IIRC that was
     the only way to make it all work
  -> I also remember reading in the docs that with the custom executor
     the gitlab-runner user must have passwordless sudo inside the VM,
     although I can't find that piece in the docs now
  => if you're concerned about breaking out of the isolation, well, that
     has nothing to do with CI, it's a libvirt+QEMU/KVM question in
     general; also, even if a malicious process took over the VM itself,
     GitLab has a watchdog, so the VM would get destroyed anyway after
     the job timed out
- the work doesn't need to be cleaned up, because the transient VM gets
  destroyed automatically at the end of the job along with its storage
  overlay, which is re-created for every VM, kinda like OpenStack does
  it (rough sketch below)
- there's a hard limit on the number of jobs a gitlab-runner-controlled
  host can take, and that limit is in our hands, IOW depending on the
  bare metal capabilities we pick an appropriate number of jobs to avoid
  resource overcommit
  => in any case, if you want to see how a machine is created, look at
     https://gitlab.com/eskultety/libvirt-gitlab-executor/-/blob/master/src/pr...
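
To make the VM lifecycle concrete, here is a minimal sketch of what the
provisioner does per job (paths and names below are made up for
illustration; the real logic lives in the repo linked above):

  # 1) fresh qcow2 overlay on top of the pristine base image, one per job
  qemu-img create -f qcow2 -F qcow2 \
      -b /var/lib/images/centos-stream-9-base.qcow2 \
      /var/lib/images/job-overlay.qcow2

  # 2) a transient domain ('virsh create' rather than 'define') leaves no
  #    persistent config behind once it's destroyed
  virsh create job-domain.xml

  # ... the job is dispatched over SSH into the VM ...

  # 3) unconditional teardown; GitLab's job timeout acts as the watchdog
  #    should the job hang
  virsh destroy job-domain
  rm -f /var/lib/images/job-overlay.qcow2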
> +
> + script:
> + - mkdir logs
> + - cd "$SCRATCH_DIR"
> + - git clone --depth 1 https://gitlab.com/libvirt/libvirt-tck.git
> + - cd libvirt-tck
> + - sudo avocado --config avocado.config run --job-results-dir "$SCRATCH_DIR"/avocado
> + after_script:
> + - if test -e "$SCRATCH_DIR"/avocado;
> +   then
> +     sudo mv "$SCRATCH_DIR"/avocado/latest/test-results logs/avocado;
> +   fi
> + - sudo mv /var/log/libvirt logs/libvirt
> + - sudo chown -R $(whoami):$(whoami) logs
> + variables:
> + SCRATCH_DIR: "/tmp/scratch"
> + artifacts:
> + name: logs
> + paths:
> + - logs
> + when: on_failure
> +
> +
> +libvirt-perl-bindings:
> + stage: bindings
> + trigger:
> + project: eskultety/libvirt-perl

I assume pointing to this personal repo is a temp hack for some fix
that's not merged into main libvirt-perl?

Yeah, sorry about that. This is an RFC, so I don't expect it to be
merged as is, and since there are more changes in main libvirt than in
libvirt-perl, I didn't bother fixing the reference just for the sake of
this RFC :). After all, this way you can see that it works in my fork.
...
> .script_variables: &script_variables |
> @@ -128,3 +129,16 @@ coverity:
> - curl https://scan.coverity.com/builds?project=$COVERITY_SCAN_PROJECT_NAME --form token=$COVERITY_SCAN_TOKEN --form email=$GITLAB_USER_EMAIL --form file=@cov-int.tar.gz --form version="$(git describe --tags)" --form description="$(git describe --tags) / $CI_COMMIT_TITLE / $CI_COMMIT_REF_NAME:$CI_PIPELINE_ID"
> rules:
> - if: "$CI_PIPELINE_SOURCE == 'schedule' &&
$COVERITY_SCAN_PROJECT_NAME && $COVERITY_SCAN_TOKEN"
> +
> +integration:
> + stage: test
> + needs:
> + - x86_64-centos-stream-8
> + - x86_64-centos-stream-9
> + - x86_64-fedora-34
> + - x86_64-fedora-35
> + trigger:
> + include: .gitlab-ci-integration.yml
> + strategy: depend
> + variables:
> + PARENT_PIPELINE_ID: $CI_PIPELINE_ID

I've not really thought about the implications, so I'm curious what's
the rationale for using a separate pipeline in this way, as opposed to
having x86_64-fedora-35-integration, x86_64-centos-stream-9-integration,
etc. jobs in the existing pipeline?

Because you still need the perl bindings. My reasoning is that, despite
being possibly less efficient, it is desirable to build the bindings in
the respective repo (libvirt-perl), which already has the pipeline. We
then need to kick that pipeline off only once, not once per integration
platform. If the consensus is going to be that we want to re-build the
bindings in the VM instead, then we don't need to spawn a multi-project
pipeline at all, and e.g. x86_64-centos-stream-9-integration could
indeed live in a standalone stage placed right after the builds, along
the lines of the sketch below.
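
A hypothetical sketch of that standalone variant (the tag and the
bindings build commands are my illustration, not part of the patch):

  x86_64-centos-stream-9-integration:
    stage: integration
    tags:
      - centos-stream-9           # routed to the VM-backed custom executor
    needs:
      - x86_64-centos-stream-9    # RPMs built earlier in this very pipeline
    script:
      - sudo dnf install -y libvirt-rpms/*
      - git clone --depth 1 https://gitlab.com/libvirt/libvirt-perl.git
      - cd libvirt-perl && perl Build.PL && ./Build && sudo ./Build install
      # ...then clone libvirt-tck and run avocado as in the .tests template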
Secondly, it was exciting to try out this GitLab functionality. It also
keeps the stages "self-contained": the VM only does what it's supposed
to do (run the test suite), while the rest is handled by machinery that
is already in place and working.
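
For completeness, the PARENT_PIPELINE_ID variable passed by the trigger
job is what allows jobs inside the child pipeline to download the RPMs
built in the parent pipeline; the actual job definitions are elided from
the quote above, but the shape is roughly this (job names illustrative):

  centos-stream-9-tests:
    extends: .tests
    needs:
      - pipeline: $PARENT_PIPELINE_ID   # handed down by the trigger job
        job: x86_64-centos-stream-9     # parent job whose RPMs we install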
Erik