On Tue, Feb 22, 2022 at 03:19:58PM +0100, Erik Skultety wrote:
> Note that I have adjusted the value of log_filters to match what is
> recommended in
>
> https://libvirt.org/kbase/debuglogs.html#less-verbose-logging-for-qemu-vms
>
> but maybe there's a reason you had picked a different set of filters.
I didn't even know we documented this, so I have always used the filters I
empirically settled on. Given the general feeling about the usefulness of
warnings in libvirt logs, I use either 1 for debug logs or 4 for errors.
If you feel strongly that I should use what we recommend on that page, I'll go
with that, but I'll add '4:util.threadjob' as well, since threadjobs are also
verbose and don't add any value.
Maybe change the page so that 4:util.threadjob is included in the
recommended configuration? Other than that, if you feel that your set
of filters will work best for the situation at hand, you don't need to
change them.
> > +libvirt-perl-bindings:
> > + stage: bindings
> > + trigger:
> > + project: eskultety/libvirt-perl
> > + branch: multi-project-ci
> > + strategy: depend
> > +
> > +
> > +centos-stream-8-tests:
> > + extends: .tests
> > + needs:
> > + - libvirt-perl-bindings
> > + - pipeline: $PARENT_PIPELINE_ID
> > + job: x86_64-centos-stream-8
> > + - project: eskultety/libvirt-perl
> > + job: x86_64-centos-stream-8
> > + ref: multi-project-ci
> > + artifacts: true
>
> IIUC from the documentation and from reading around, using
> strategy:depend will cause the local job to reflect the status of the
> triggered pipeline. So far so good.
>
> What I am unclear about is, is there any guarantee that using
> artifact:true with a job from an external project's pipeline will
> expose the artifacts from the job that was executed as part of the
> specific pipeline that we've just triggered ourselves, as opposed to
> some other pipeline that might have already completed in the
> past or might have completed in the meantime?
Not just by using artifacts:true or strategy:depend. The important bit is
having 'libvirt-perl-bindings' in the job's needs list. Let me explain: if you
don't put the bindings trigger job in the needs list of another job
(centos-stream-8-tests in this case), what will happen is that the trigger job
will wait for the external pipeline to finish, but the centos-stream-8-tests
job will execute basically as soon as the container project builds are
finished, because artifacts:true would download the latest RPM artifacts from
an earlier build...
No, I got that part. My question was whether
other-project-pipeline:
  trigger:
    project: other-project
    strategy: depend

our-job:
  needs:
    - other-project-pipeline
    - project: other-project
      job: other-project-job
      artifacts: true
actually guarantees that the instance of other-project-job whose
artifacts are available to our-job is the same one that was started
as part of the pipeline triggered by the other-project-pipeline job.
Looking at the YAML I wouldn't bet on this being the case, but
perhaps I've missed this guarantee being documented somewhere?
> Taking a step back, why exactly are we triggering a rebuild of
> libvirt-perl in the first place? Once we change that project's
> pipeline so that RPMs are published as artifacts, can't we just grab
> the ones from the latest successful pipeline? Maybe you've already
> explained why you did things this way and I just missed it!
...which brings us here. Well, I adopted the mantra that all libvirt-friends
projects depend on libvirt, and given that we need the libvirt-perl bindings to
test upstream, I'd like to always have the latest bindings available to test
with the current upstream build. The other reason I did it the way you
commented on is that during development of the proposal I often had to make
changes to both libvirt and libvirt-perl in lockstep, and it was tremendously
frustrating to wait for the pipeline to get to the integration stage only to
realize that the integration job hadn't waited for the latest bindings and had
instead picked up the previous latest artifacts, which I knew were either
faulty or didn't contain the necessary changes yet.
Of course that would be annoying when you're making changes to both
projects at the same time, but is that a scenario that we can expect
to be common once the integration tests are in place?
To be clear, I'm not necessarily against the way you're doing things
right now, it's just that it feels like using the artifacts from the
latest successful libvirt-perl pipeline would lower complexity, avoid
burning additional resources and reduce wait times.
If the only downside is having a worse experience when making
changes to the pipeline, and we can expect that to be infrequent
enough, perhaps that's a reasonable tradeoff.
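For comparison, the lower-complexity alternative being suggested would drop
the trigger job entirely and rely on needs:project fetching artifacts from the
latest successful pipeline on the given ref. This is only a sketch, reusing
the project/job/ref names from the snippet quoted earlier:

```yaml
# Hypothetical variant without the libvirt-perl-bindings trigger job:
# artifacts come from whatever pipeline last succeeded on the ref.
centos-stream-8-tests:
  extends: .tests
  needs:
    - pipeline: $PARENT_PIPELINE_ID
      job: x86_64-centos-stream-8
    - project: eskultety/libvirt-perl
      job: x86_64-centos-stream-8
      ref: multi-project-ci
      artifacts: true
```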
> > + variables:
> > + DISTRO: centos-stream-8
>
> This variable doesn't seem to be used anywhere, so I assume it's a
> leftover from development. Maybe you tried to implement the .test
> template so that using it didn't require as much repetition and
> unfortunately it didn't work out?
Oh, but it is. This is how the GitLab provisioner script knows which distro to
provision; it's the equivalent of lcitool's target.
I've looked at
https://gitlab.com/eskultety/libvirt-gitlab-executor
and understand how this value is used now, but without the additional
context it's basically impossible to figure out its purpose. Please
make sure you document it somehow.
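One lightweight way of documenting it (just a suggestion) would be a comment
right next to the variable definition in the CI YAML:

```yaml
variables:
  # Consumed by the provisioner script from
  # gitlab.com/eskultety/libvirt-gitlab-executor to pick the VM template
  # to provision; equivalent to an lcitool target name.
  DISTRO: centos-stream-8
```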
> > + tags:
> > + - centos-stream-vm
>
> IIUC this is used both by the GitLab scheduler to pick suitable nodes
> on which to execute the job (our own hardware in this case) and also
> by the runner to decide which template to use for the VM.
>
> So shouldn't this be more specific? I would expect something like
>
> tags:
> - centos-stream-8-vm
What's the point? We'd have to constantly refresh the tags as platforms come
and go per our support policy, whereas fedora-vm and centos-stream-vm cover all
currently supported versions - always!
Other than that, I'm not sure that tags are passed on to the GitLab job itself;
I may have missed it, but unless the tags are exposed as env variables, the
provisioner script wouldn't know which template to provision. Also, the tag is
supposed to annotate the bare metal host in this case, so in that context
having '-vm' in the tag name makes sense, but it doesn't for the provisioner
script, which relies on/tries to be compatible with lcitool as much as
possible.
Okay, my misunderstanding was caused by not figuring out the purpose
of DISTRO. I agree that more specific tags are not necessary.
Should we make them *less* specific instead? As in, is there any
reason for having different tags for Fedora and CentOS jobs as
opposed to using a generic "this needs to run in a VM" tag for both?
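Concretely, the less specific variant would be a single shared tag, with
DISTRO carrying the distro selection. The tag name here is hypothetical:

```yaml
centos-stream-8-tests:
  tags:
    # Hypothetical generic "runs in a VM on our hardware" tag shared by
    # both the Fedora and CentOS jobs; DISTRO selects the template.
    - vm
```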
--
Andrea Bolognani / Red Hat / Virtualization