On Fri, Apr 03, 2020 at 05:08:59PM +0200, Andrea Bolognani wrote:
> On Fri, 2020-04-03 at 09:38 +0200, Erik Skultety wrote:
> > On Tue, Mar 31, 2020 at 05:23:32PM +0200, Andrea Bolognani wrote:
> > > On Thu, 2020-03-26 at 14:33 +0100, Erik Skultety wrote:
> > > > +++ b/guests/playbooks/update/tasks/gitlab.yml
> > > > @@ -0,0 +1,64 @@
> > > > +---
> > > > +- name: Look up Gitlab runner secret
> > > > +  set_fact:
> > > > +    gitlab_runner_secret: '{{ lookup("file", gitlab_runner_token_file) }}'
> > >
> > > The GitLab token should be extracted from the vault, not from a
> > > regular file.
> >
> > This was a deliberate decision and I didn't want to do it the Jenkins way,
> > let me expand on why. Ultimately, we want other community members to
> > contribute their resources to our GitLab so that the test results are
> > centralized, but what if they want to set up the same machine for their
> > internal infrastructure as well, to stay unified with the setup upstream
> > libvirt "mandates"? With the secret in the vault, we always create machines
> > that can only be plugged into *our* GitLab, but why should we be that
> > restrictive? Give the consumers the leeway to actually plug it into their
> > own instance by supplying the token the same way as we supply the vault and
> > root passwords, so I'd rather not tie this to a single use case just like
> > we did with Jenkins.
> Thanks for spelling out your intentions, it really helps.
>
> I agree that reusing the recipe to build runners that connect to a
> local / private GitLab instance is a valid use case that we should
> accommodate.
>
> One point that immediately leads to, and which was unintentionally
> already addressed above, is that in that case you need not only the
> token to be configurable, but also the connection URL.
Doh! How could I have forgotten that.
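
To make that concrete, I'd picture the lookup growing a second fact along
these lines - an untested sketch, where `gitlab_runner_url_file` is a made-up
name, analogous to the existing token file:

```yaml
# Hypothetical: read both the registration token and the target GitLab
# URL from per-user files, so a runner can be pointed at any instance
# rather than being hardcoded to ours.
- name: Look up GitLab runner registration parameters
  set_fact:
    gitlab_runner_secret: '{{ lookup("file", gitlab_runner_token_file) }}'
    gitlab_runner_url: '{{ lookup("file", gitlab_runner_url_file) }}'
```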
> Other than that, storing the token in ~/.config/lcitool is fine I
> guess. The number of files in that directory is getting fairly
> ridiculous, though, so Someone™ should really look into throwing
> away the current, extremely crude configuration system that's been
> inherited from the original shell implementation and replacing it
> with something more sensible like a config.toml.
Let me spend some time on that improvement :).
Erik
> > > > +# To ensure idempotency, we must run the registration only when we first
> > > > +# created the config dir, otherwise we'll register a new runner on every
> > > > +# update
> > > > +- name: Register the gitlab-runner agent
> > > > +  shell: 'gitlab-runner register --non-interactive --config /home/gitlab/.gitlab-runner/config.toml --registration-token {{ gitlab_runner_secret }} --url https://gitlab.com --executor shell --tag-list {{ inventory_hostname }}'
> > > > +  when: rc_gitlab_runner_config_dir.changed
> > >
> > > ... this check could be replaced with
> > >
> > >   creates: '{{ gitlab_runner_config_path }}'
> >
> > Like I said in the comment above, the reason why I created those two tasks
> > to create the config file was to ensure idempotency: the registration runs
> > every single time you execute the playbook and, guess what, it registers a
> > new runner which you can see in the GitLab runner pool - we don't want
> > that. I'm not sure "creates" actually prevents that behaviour.
> That's *exactly* what it does:
>
>   creates    A filename; when it already exists, this step
>   (path)     will *not* be run.
>
> https://docs.ansible.com/ansible/latest/modules/shell_module.html#paramet...

Ok, I will incorporate this.
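
If it helps, this is roughly how I'd picture the reworked task - untested,
and `gitlab_runner_url` is hypothetical (in our case it would simply be
https://gitlab.com):

```yaml
# Hypothetical rework: "creates" makes the shell task a no-op once the
# runner config exists, so re-running the playbook no longer registers
# a duplicate runner in the GitLab pool.
- name: Register the gitlab-runner agent
  shell: 'gitlab-runner register --non-interactive
          --config /home/gitlab/.gitlab-runner/config.toml
          --registration-token {{ gitlab_runner_secret }}
          --url {{ gitlab_runner_url }}
          --executor shell
          --tag-list {{ inventory_hostname }}'
  args:
    creates: /home/gitlab/.gitlab-runner/config.toml
```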
> > > * the 'shell' executor is I think our only option when it comes to
> > >   FreeBSD (and we should make sure we fully understand the security
> > >   implications of using it before exposing any of this on the
> > >   internet), but for Linux we should be able to use the 'docker'
> > >   executor instead, which should provide additional isolation;
> >
> > I get the isolation part, but why exactly is the additional isolation
> > provided by the container deemed so necessary, compared to achieving
> > consistency in terms of maintenance? As I mentioned earlier, these machines
> > run in a secured environment not accessible from the outside. Moreover,
> > these are test machines that are designed to come and go, so if the worst
> > comes, they can be torn down and replaced with new copies - that's the
> > whole point of CI environments - so aren't we trying to be a bit too
> > paranoid?
> Consistency is a worthy goal, but it's not the only one. Security is
> another very important one.
>
> Anyway, it's simpler than that: all of our Linux jobs currently
> expect to run in container images, and we can only have that using
> the Docker executor.
With the functional tests in mind, the Docker executor doesn't really solve
anything. For builds, there are already a bunch of containers taking care of
that; what we need to cover now are the cases where container technology
cannot be utilized, as well as environments for functional tests - and for
that, of course, these patches are only sort of a prequel.
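
That said, if we do end up using the Docker executor for the Linux guests, my
understanding is the registration would only differ in a couple of flags -
purely an illustrative sketch, where the `os_family` guard, the image name
and `gitlab_runner_url` are all made up:

```yaml
# Illustrative only: a Docker-executor registration for Linux hosts
# would mainly swap the --executor flag and supply a default image.
- name: Register the gitlab-runner agent (Linux, Docker executor)
  shell: 'gitlab-runner register --non-interactive
          --config /home/gitlab/.gitlab-runner/config.toml
          --registration-token {{ gitlab_runner_secret }}
          --url {{ gitlab_runner_url }}
          --executor docker
          --docker-image fedora:latest
          --tag-list {{ inventory_hostname }}'
  args:
    creates: /home/gitlab/.gitlab-runner/config.toml
  when: os_family != 'FreeBSD'
```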
--
Erik Skultety