On Wed, 2020-04-08 at 12:28 +0200, Erik Skultety wrote:
> On Wed, Apr 08, 2020 at 12:05:11PM +0200, Andrea Bolognani wrote:
> > You didn't answer in the other thread, so I'll ask again here: is the
> > idea that we're going to use only the FreeBSD runners to supplement
> > the shared runners for the existing unit tests, and all Linux runners
> > are only going to be used for integration tests later on, hence why
> > we need to use the shell executor rather than the docker executor?
>
> Why not both? We can always extend the capacity with VM builders,
> although it's true that functional tests were what I had in mind
> originally (plus the FreeBSD exception you already mentioned). Either
> way, I don't understand why we should force usage of the docker
> executor for the VMs, which we can also use for builds. The way I'm
> looking at the setup is: container setup vs VM setup, with both being
> consistent in their own respective category. IOW, why should the setup
> of the VM, in terms of the gitlab-runner, be different when running
> functional tests vs builds? So I'd like to stay with the shell
> executor on VMs in all cases and not fragment the setup even further.
Because all the builds that currently exist are defined in terms of
containers, so when you have something like

  x86-fedora-31:
    script:
      ...
    image: quay.io/libvirt/buildenv-libvirt-fedora-31:latest

you cannot just run the same job on a worker that is configured to
use the shell executor.
I guess you could drop the image element and replace it with

  tags:
    - fedora-31

but then you'd either have to duplicate the job definition, or only
keep the new one, which would then not work for forks and merge
requests, so that makes it less interesting.
> Furthermore, with the OpenShift infra we got, I see very little to no
> value in using the VMs to extend our build capacity.
I don't understand what you're trying to say here at all, sorry.
--
Andrea Bolognani / Red Hat / Virtualization