...
> +    - for daemon in $DAEMONS;
> +      do
> +        sudo sed -Ei "s/^(#)?(log_outputs=).*/\2'1:file:\/var\/log\/libvirt\/${daemon}.log'/" /etc/libvirt/${daemon}.conf;
> +        sudo sed -Ei "s/^(#)?(log_filters=).*/\2'4:*object* 4:*json* 4:*event* 4:*rpc* 4:daemon.remote 4:util.threadjob 4:*access* 1:*'/" /etc/libvirt/${daemon}.conf;
> +        sudo systemctl --quiet stop ${daemon}.service;
> +        sudo systemctl restart ${daemon}.socket;
> +      done
I suggest changing this to something along the lines of

  - for daemon in $DAEMONS;
    do
      log_outputs="file:/var/log/libvirt/${daemon}.log"
      log_filters="3:remote 4:event 3:util.json 3:util.object 3:util.dbus 3:util.netlink 3:node_device 3:rpc 3:access 1:*"
      sed -Ei -e "s;^#*\\s*log_outputs\\s*=.*$;log_outputs=\"$log_outputs\";g" \
              -e "s;^#*\\s*log_filters\\s*=.*$;log_filters=\"$log_filters\";g" \
              "src/remote/${daemon}.conf.in"
      # ...
    done
Advantages:
* saves one call to sed;
* doesn't need the awkward quoting for slashes in paths;
* produces key="value" which is consistent with the existing
contents of the configuration files (even though the parser will
accept single quotes too);
* handles slight, semantically-irrelevant changes to the contents
of the configuration files such as whitespace being added;
* doesn't unnecessarily use captures;
* avoids excessively long lines.
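For illustration only, here is one way to sanity-check the suggested
expression on a made-up sample line (the input is hypothetical and may not
match the exact contents of the real *.conf.in files):

  # Feed a commented-out sample setting through the suggested sed expression
  # and check that it comes out uncommented and rewritten.
  printf '#log_outputs="3:syslog:libvirtd"\n' |
      sed -E 's;^#*\s*log_outputs\s*=.*$;log_outputs="file:/var/log/libvirt/libvirtd.log";g'
  # expected output: log_outputs="file:/var/log/libvirt/libvirtd.log"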
Yes, ^this looks better.
Note that I have adjusted the value of log_filters to match what is
recommended in
https://libvirt.org/kbase/debuglogs.html#less-verbose-logging-for-qemu-vms
but maybe there's a reason you had picked a different set of filters.
I didn't even know we documented this, so I have always used the filters I
empirically settled on. Given the general feeling about the usefulness of
warnings in libvirt logs, I either use 1 for debug logs or 4 for errors.
If you feel strongly that I should use what we recommend on that page, I'll
go with that, but I'll also add '4:util.threadjob', as thread jobs are
likewise verbose and don't add any value.
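Just to illustrate what that compromise might look like in the suggested
loop (purely a sketch; the exact set is whatever we agree on in the end):

  # Filter set recommended on the kbase page, plus util.threadjob limited to
  # errors as discussed above.
  log_filters="4:util.threadjob 3:remote 4:event 3:util.json 3:util.object 3:util.dbus 3:util.netlink 3:node_device 3:rpc 3:access 1:*"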
...
Also note that I haven't actually tested that the above works
correctly O:-)
It's a good starting point, I'll handle the rest.
...
> +libvirt-perl-bindings:
> +  stage: bindings
> +  trigger:
> +    project: eskultety/libvirt-perl
> +    branch: multi-project-ci
> +    strategy: depend
> +
> +
> +centos-stream-8-tests:
> +  extends: .tests
> +  needs:
> +    - libvirt-perl-bindings
> +    - pipeline: $PARENT_PIPELINE_ID
> +      job: x86_64-centos-stream-8
> +    - project: eskultety/libvirt-perl
> +      job: x86_64-centos-stream-8
> +      ref: multi-project-ci
> +      artifacts: true
IIUC from the documentation and from reading around, using
strategy:depend will cause the local job to reflect the status of the
triggered pipeline. So far so good.
What I am unclear about is: is there any guarantee that using artifacts:true
with a job from an external project's pipeline will expose the artifacts from
the job that was executed as part of the specific pipeline we've just
triggered ourselves, as opposed to some other pipeline that might have
already completed in the past or might complete in the meantime?
Not just by using artifacts:true or strategy:depend. The important bit is
having 'libvirt-perl-bindings' in the job's needs list. Let me explain: if
you don't put the bindings trigger job in the needs list of another job
(centos-stream-8-tests in this case), what will happen is that the trigger
job will be waiting for the external pipeline to finish, but the
centos-stream-8-tests job would execute basically as soon as the container
project builds are finished, because artifacts:true would download the latest
RPM artifacts from an earlier build...
Taking a step back, why exactly are we triggering a rebuild of
libvirt-perl in the first place? Once we change that project's
pipeline so that RPMs are published as artifacts, can't we just grab
the ones from the latest successful pipeline? Maybe you've already
explained why you did things this way and I just missed it!
...which brings us here. Well, I adopted the mantra that all libvirt-friends
projects depend on libvirt, and given that we need the libvirt-perl bindings
to test upstream, I'd like to always have the latest bindings available to
test with the current upstream build. The other reason I did it the way you
commented on is that during development of the proposal I often had to make
changes to both libvirt and libvirt-perl in lockstep, and it was tremendously
frustrating to wait for the pipeline to get to the integration stage only to
realize that the integration job didn't wait for the latest bindings and
instead picked up the previous latest artifacts, which I knew were either
faulty or didn't contain the necessary changes yet.
> +  variables:
> +    DISTRO: centos-stream-8
This variable doesn't seem to be used anywhere, so I assume it's a leftover
from development. Maybe you tried to implement the .tests template so that
using it didn't require as much repetition and unfortunately it didn't work
out?
Oh, but it is. This is how the GitLab provisioner script knows which distro
to provision; it's equivalent to lcitool's target.
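(To illustrate the mechanism rather than the actual script: GitLab exports a
job's variables into the environment, so a provisioner script can pick the
template from DISTRO alone. The snippet below is only a hypothetical sketch,
not the real provisioner.)

  #!/bin/sh
  # Hypothetical sketch: fail early if the job forgot to set DISTRO, then use
  # it as the lcitool-style target name when picking a VM template.
  : "${DISTRO:?DISTRO must be set in the job's variables, e.g. centos-stream-8}"
  echo "provisioning a '${DISTRO}' guest (lcitool target: ${DISTRO})"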
> +  tags:
> +    - centos-stream-vm
IIUC this is used both by the GitLab scheduler to pick suitable nodes
on which to execute the job (our own hardware in this case) and also
by the runner to decide which template to use for the VM.
So shouldn't this be more specific? I would expect something like

  tags:
    - centos-stream-8-vm

or

  tags:
    - centos-stream-8
    - vm

What's the point? We'd have to constantly refresh the tags as platforms come
and go per our support policy, whereas fedora-vm and centos-stream-vm cover
all currently supported versions - always!
Other than that, I'm not sure that tags are passed on to the GitLab job
itself (I may have missed it), but unless the tags are exposed as env
variables, the provisioner script wouldn't know which template to provision.
Also, the tag is supposed to annotate the baremetal host in this case, so in
that context having '-vm' in the tag name makes sense, but it doesn't for the
provisioner script, which relies on/tries to be compatible with lcitool as
much as possible.
> +++ b/.gitlab-ci.yml
> @@ -4,6 +4,7 @@ variables:
>  stages:
>    - containers
>    - builds
> +  - test
>    - sanity_checks
Unlike Dan, I actually think it makes sense to have these as a
separate stage. I'd call it integration_tests and list it last
though. But admittedly this is entirely in bikeshedding territory and
I'm fine with the current solution or the one Dan proposed too :)
I feel like having test/integration_tests is more bulletproof for the future
of our CI :).
Erik