On Wed, 2020-03-25 at 12:19 +0000, Daniel P. Berrangé wrote:
> On Wed, Mar 25, 2020 at 12:27:33PM +0100, Andrea Bolognani wrote:
> > This kind of testing will still be limited by the fact that merge
> > requests will not be able to use these dedicated runners. Compare
> > this to something like KubeVirt's CI, where proper integration tests
> > are executed for each and every merge request...
>
> BTW, when comparing to CI done in GitHub, be aware that you're comparing
> apples & oranges. Traditionally most/all GitHub CI setups were done via
> external services / bots, feeding results back with the GitHub API. This
> approach is possible with the GitLab API too if you want to ignore the
> built-in CI facilities.
>
> IOW, it would be entirely possible to ignore the GitLab runners / CI
> system, and have an integration test harness that just watches for new
> merge requests, runs tests on them, and then adds a comment and/or
> approval mark to the merge request.
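
Just to illustrate how little API surface that approach would need,
here's a rough, untested sketch of such a bot built on the merge
request REST API (the project path, the GITLAB_TOKEN environment
variable and run_integration_tests() are placeholders, not anything
that exists in our infrastructure today):

  import os
  import subprocess

  import requests

  GITLAB_API = "https://gitlab.com/api/v4"
  PROJECT = "libvirt%2Flibvirt"  # URL-encoded project path (placeholder)
  HEADERS = {"PRIVATE-TOKEN": os.environ["GITLAB_TOKEN"]}

  def run_integration_tests(mr):
      # Placeholder: check out the MR branch and run the real harness here
      res = subprocess.run(["echo", "testing MR", str(mr["iid"])])
      return res.returncode == 0

  def main():
      url = f"{GITLAB_API}/projects/{PROJECT}/merge_requests"
      # List currently open merge requests for the project
      mrs = requests.get(url, headers=HEADERS,
                         params={"state": "opened"}).json()
      for mr in mrs:
          ok = run_integration_tests(mr)
          note = "Integration tests passed" if ok else "Integration tests FAILED"
          # Report back by adding a note (comment) on the merge request
          requests.post(f"{url}/{mr['iid']}/notes",
                        headers=HEADERS, data={"body": note})

  if __name__ == "__main__":
      main()

A real bot would of course need to remember which merge requests it has
already tested and actually fetch the branch before running anything,
but the reporting side really is just a couple of API calls.
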
> My preference is to do as much as possible using the GitLab CI setup though,
> since that's the more commonly expected approach. If we hit roadblocks that
> are important, we can take the external service/bot approach where needed.

I don't think it's really an unfair comparison. My point was that the
CI setup we're building simply can't do things that other setups can.

Of course everything is a trade-off to some extent, and as of today
the value proposition of not having to maintain your own runners
can't be ignored; once we start maintaining our own FreeBSD runners,
as well as Linux runners for real functional tests, the balance
shifts again.

Anyway, I'm not saying that we should stall the current efforts,
which already represent an improvement over the status quo, but
merely pointing out that there are some limitations that we should be
aware of and that might lead us to re-evaluate our choices further
down the line.

--
Andrea Bolognani / Red Hat / Virtualization