On Tue, Mar 31, 2020 at 05:23:32PM +0200, Andrea Bolognani wrote:
On Thu, 2020-03-26 at 14:33 +0100, Erik Skultety wrote:
> +++ b/guests/group_vars/all/main.yml
> @@ -5,3 +5,6 @@
> ansible_ssh_pass: root
>
>  jenkins_url: https://ci.centos.org/computer/{{ inventory_hostname }}/slave-agent.jnlp
> +
> +# In our case, ansible_system is either Linux or FreeBSD
> +gitlab_runner_url: https://gitlab-runner-downloads.s3.amazonaws.com/latest/binaries/gitlab-runner-{{ ansible_system|lower }}-amd64
Note that these two URLs are semantically quite different:
* jenkins_url is the endpoint to which the Jenkins agent will
connect;
* gitlab_runner_url is the location from which the gitlab-runner
binary can be downloaded.
To keep things consistent, we could have

  jenkins_url: https://ci.centos.org/computer/{{ inventory_hostname }}/slave-agent.jnlp
  jenkins_agent_url: https://ci.centos.org/jnlpJars/slave.jar
  gitlab_url: https://gitlab.com
  gitlab_runner_url: https://gitlab-runner-downloads.s3.amazonaws.com/latest/binaries/gitlab-runner-{{ ansible_system|lower }}-amd64
although I'm starting to question why these URLs should even be part
of the inventory when they could just as easily be hardcoded into the
corresponding playbook...
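For illustration, moving them out of the inventory could look roughly
like the sketch below; the playbook path and the include are guesses
based on the file names quoted in this thread, and the variable names
follow the proposal above rather than what the patch currently uses:

  # guests/playbooks/update/main.yml (sketch only, path is a guess)
  - hosts: all
    vars:
      gitlab_url: https://gitlab.com
      gitlab_runner_url: https://gitlab-runner-downloads.s3.amazonaws.com/latest/binaries/gitlab-runner-{{ ansible_system|lower }}-amd64
    tasks:
      - include_tasks: tasks/gitlab.yml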
> +++ b/guests/playbooks/update/tasks/gitlab.yml
> @@ -0,0 +1,64 @@
> +---
> +- name: Look up Gitlab runner secret
> +  set_fact:
> +    gitlab_runner_secret: '{{ lookup("file", gitlab_runner_token_file) }}'
The GitLab token should be extracted from the vault, not from a
regular file.
This was a deliberate decision and I didn't want to do it the jenkins
way; let me expand on why. Ultimately, we want other community members
to contribute their resources to our gitlab so that the test results
are centralized, but what if they also want to set up the same kind of
machine for their internal infrastructure, staying unified with the
setup upstream libvirt "mandates"? With the secret in the vault, we
would always create machines that can only be plugged into *our*
gitlab - why should we be that restrictive? Give consumers the leeway
to plug the machine into their own instance by supplying the token the
same way we supply the vault and root passwords; I'd rather not tie
this to a single use case the way we did with jenkins.
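To make that concrete, a downstream consumer could in principle point
the playbooks at their own instance with nothing more than a different
token file. The file layout below is purely hypothetical: only the
gitlab_runner_token_file variable comes from this patch, and gitlab_url
follows the naming proposed earlier in this thread:

  # e.g. a site-specific vars file (hypothetical path and values)
  # guests/group_vars/all/site.yml
  gitlab_url: https://gitlab.example.org
  gitlab_runner_token_file: /path/to/their/own/registration-token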
[...]
> +- name: Make sure the gitlab-runner config dir exists
> +  file:
> +    path: '{{ gitlab_runner_config_path | dirname }}'
> +    owner: gitlab
> +    group: gitlab
> +    state: directory
> +  register: rc_gitlab_runner_config_dir
> +
> +- name: Create an empty gitlab-runner config
> +  file:
> +    path: '{{ gitlab_runner_config_path }}'
> +    owner: gitlab
> +    group: gitlab
> +    state: touch
> +  when: rc_gitlab_runner_config_dir.changed
Running 'gitlab-runner register' will create the configuration file,
so touching it in advance is unnecessary and...
> +# To ensure idempotency, we must run the registration only when we first
> +# created the config dir, otherwise we'll register a new runner on every
> +# update
> +- name: Register the gitlab-runner agent
> +  shell: 'gitlab-runner register --non-interactive --config /home/gitlab/.gitlab-runner/config.toml --registration-token {{ gitlab_runner_secret }} --url https://gitlab.com --executor shell --tag-list {{ inventory_hostname }}'
> +  when: rc_gitlab_runner_config_dir.changed
... this check could be replaced with
creates: '{{ gitlab_runner_config_path }}'
Like I said in the comment above, the reason I created those two tasks
for the config file was to ensure idempotency: without that guard, the
registration runs every single time you execute the playbook and,
guess what, each run registers a brand new runner which then shows up
in the gitlab runner pool - we don't want that. I'm not sure "creates"
actually prevents that behaviour.
Additional comments on this command:
* you have defined gitlab_runner_config_path earlier, might as well
use it here;
* the 'shell' executor is, I think, our only option when it comes to
FreeBSD (and we should make sure we fully understand the security
implications of using it before exposing any of this on the
internet), but for Linux we should be able to use the 'docker'
executor instead, which should provide additional isolation;
I get the isolation part, but why exactly is the additional isolation
provided by the container deemed so necessary, compared to keeping the
setup consistent in terms of maintenance? As I mentioned earlier,
these machines run in a secured environment that is not accessible
from the outside. Moreover, they are test machines designed to come
and go, so if the worst comes to the worst they can simply be torn
down and replaced with fresh copies - that's the whole point of a CI
environment - so aren't we being a bit too paranoid?
* I haven't played around with tags much, but from what I think I
have understood we will want to have something more generic here,
such as "freebsd" and "linux", so that we can for example tag
some of the jobs as "freebsd" and be sure they will run on the
expected OS;
we can go that way as well...
* I think we'll want to have the Linux runners configured to accept
untagged jobs, but the FreeBSD ones only accepting tagged ones
(again, if I have understood how job tagging works).
Untagged jobs have a problem - you cannot ensure parallelism, so you
may end up with jobs queued on a single machine. For example, when
ptoscano and I played with different job settings, there were 3 nodes
available and we scheduled 3 jobs, yet 2 of them ended up queued on a
single machine. So tagged jobs are probably what we want, unless we
don't care about parallel execution (but we certainly do care about
distro coverage).
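Putting the two points together, the registration could hand out an
OS-level tag alongside the hostname, so jobs can be pinned either to
"linux"/"freebsd" or to a specific machine. A sketch only, reusing the
variables from the task above; switching the Linux runners to the
'docker' executor would, as far as I can tell, additionally need a
default --docker-image, which I'm leaving out here:

  # Sketch only: tag each runner with its OS and its hostname
  - name: Register the gitlab-runner agent
    shell: >-
      gitlab-runner register --non-interactive
      --config {{ gitlab_runner_config_path }}
      --registration-token {{ gitlab_runner_secret }}
      --url https://gitlab.com
      --executor shell
      --tag-list {{ ansible_system|lower }},{{ inventory_hostname }}
    args:
      creates: '{{ gitlab_runner_config_path }}'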
[...]
> +- block:
> +  - name: Install the gitlab_runner rc service script
> +    template:
> +      src: '{{ playbook_base }}/templates/gitlab-runner.j2'
> +      dest: '/usr/local/etc/rc.d/gitlab_runner'
> +      mode: '0755'
> +
> +  - name: Enable the gitlab-runner rc service
s/gitlab-runner/gitlab_runner/
though I wonder if we can't just use the same name on both Linux and
FreeBSD? On my FreeBSD machine, while the use of underscore in
service names is clearly the default, there's at least one service
(ftp-proxy) which doesn't follow that convention with (I assume) no
ill effect.
Have you managed to get it to work? I spent about two days trying
different combinations for the service name and simply could not make
it work, so I stopped going against the grain and followed the FreeBSD
convention instead. But if you can make it work, I'm happy to switch,
as I really wanted to be 99.9% (maybe a little bit less :) )
consistent.
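For completeness, the enable step could look something like the sketch
below (using the Ansible 'service' module is my assumption here, the
quoted patch only shows the task name), and the comment is my best
guess as to why the hyphenated name refused to cooperate:

  # The rc.conf knob derived from the service name is a shell variable
  # (<name>_enable), which presumably is why a name containing a hyphen
  # is so hard to make work on FreeBSD.
  - name: Enable the gitlab_runner rc service
    service:
      name: gitlab_runner
      enabled: yes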
--
Erik Skultety