...
> > > +libvirt-perl-bindings:
> > > +  stage: bindings
> > > +  trigger:
> > > +    project: eskultety/libvirt-perl
> > > +    branch: multi-project-ci
> > > +    strategy: depend
> > > +
> > > +
> > > +centos-stream-8-tests:
> > > +  extends: .tests
> > > +  needs:
> > > +    - libvirt-perl-bindings
> > > +    - pipeline: $PARENT_PIPELINE_ID
> > > +      job: x86_64-centos-stream-8
> > > +    - project: eskultety/libvirt-perl
> > > +      job: x86_64-centos-stream-8
> > > +      ref: multi-project-ci
> > > +      artifacts: true
> >
> > IIUC from the documentation and from reading around, using
> > strategy:depend will cause the local job to reflect the status of the
> > triggered pipeline. So far so good.
> >
> > What I am unclear about is, is there any guarantee that using
> > artifacts:true with a job from an external project's pipeline will
> > expose the artifacts from the job that was executed as part of the
> > specific pipeline that we've just triggered ourselves, as opposed to
> > some other pipeline that might have already completed in the past
> > or might complete in the meantime?
>
> Not just by using artifacts:true or strategy:depend. The important bit is having
> 'libvirt-perl-bindings' in the job's needs list. Let me explain: if you don't
> put the bindings trigger job in the needs list of another job
> (centos-stream-8-tests in this case), then the trigger job will wait for the
> external pipeline to finish, but the centos-stream-8-tests job would execute
> basically as soon as the container project builds are finished, because
> artifacts:true would download the latest RPM artifacts from an earlier build...
No, I got that part. My question was whether
  other-project-pipeline:
    trigger:
      project: other-project
      strategy: depend

  our-job:
    needs:
      - other-project-pipeline
      - project: other-project
        job: other-project-job
        artifacts: true
actually guarantees that the instance of other-project-job whose
artifacts are available to our-job is the same one that was started
as part of the pipeline triggered by the other-project-pipeline job.
Looking at the YAML I wouldn't bet on this being the case, but
perhaps I've missed this guarantee being documented somewhere?
Sorry for the delayed response.
I don't think so. We can basically only rely on the fact that jobs are queued
in the order they arrive, which means that jobs submitted earlier should finish
earlier, but that is of course only a premise, not a guarantee.
On the other hand, I never intended to run the integration CI on every single
push to the master branch; instead, I wanted to make this a scheduled pipeline,
which would effectively alleviate the problem, because with scheduled pipelines
there would very likely not be a concurrent pipeline running in libvirt-perl
that would make us download artifacts from a pipeline we didn't trigger
ourselves.
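Roughly something like this on the test jobs (just a sketch; since the
integration jobs run in a child pipeline, the actual check might need to live
on the trigger job in the parent pipeline instead):

  centos-stream-8-tests:
    extends: .tests
    rules:
      # only run the integration jobs from scheduled pipelines, so that we
      # don't race against per-push libvirt-perl pipelines
      - if: '$CI_PIPELINE_SOURCE == "schedule"'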
> > Taking a step back, why exactly are we triggering a rebuild of
> > libvirt-perl in the first place? Once we change that project's
> > pipeline so that RPMs are published as artifacts, can't we just grab
> > the ones from the latest successful pipeline? Maybe you've already
> > explained why you did things this way and I just missed it!
>
> ...which brings us here. Well, I adopted the mantra that all libvirt-friends
> projects depend on libvirt, and given that we need the libvirt-perl bindings
> to test upstream, I'd like to always have the latest bindings available to
> test with the current upstream build. The other reason why I did it the way
> you commented on is that during development of the proposal I often had to
> make changes to both libvirt and libvirt-perl in lockstep, and it was
> tremendously frustrating to wait for the pipeline to get to the integration
> stage only to realize that the integration job didn't wait for the latest
> bindings and instead picked up the previous latest artifacts, which I knew
> were either faulty or didn't contain the necessary changes yet.
Of course that would be annoying when you're making changes to both
projects at the same time, but is that a scenario that we can expect
to be common once the integration tests are in place?
To be clear, I'm not necessarily against the way you're doing things
right now; it's just that it feels like using the artifacts from the
latest successful libvirt-perl pipeline would lower complexity, avoid
burning additional resources, and reduce wait times.
If the only downside is having a worse experience when making
changes to the pipeline, and we can expect that to be infrequent
enough, perhaps that's a reasonable tradeoff.
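Just to illustrate what I have in mind (a rough sketch, assuming libvirt-perl
publishes the RPMs as artifacts of an x86_64-centos-stream-8 job on its master
branch; without a corresponding trigger job, needs:project should simply pull
the artifacts from the latest successful run of that job on the given ref):

  centos-stream-8-tests:
    extends: .tests
    needs:
      - pipeline: $PARENT_PIPELINE_ID
        job: x86_64-centos-stream-8
      # no trigger job: reuse the RPMs from the latest successful
      # x86_64-centos-stream-8 run on libvirt-perl's master branch
      - project: eskultety/libvirt-perl
        job: x86_64-centos-stream-8
        ref: master
        artifacts: true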
I gave this more thought. What you suggest is viable, but the following is worth
considering if we go with your proposal:

- libvirt-perl jobs build upstream libvirt first in order to build the bindings
    -> generally it takes until right before the release for APIs/constants
       to be added to the respective bindings (Perl/Python)
    -> if we rely on the latest libvirt-perl artifacts without actually
       triggering the pipeline, yes, the artifacts would be stable, but fairly
       old (unless we schedule a recurrent pipeline in the project to refresh
       them), thus not giving us feedback from the integration stage that
       bindings need to be added first, because the API coverage would likely
       fail, thus failing the whole libvirt-perl pipeline and thus invalidating
       the integration test stage in the libvirt project
    => now, I admit this would get pretty annoying because it would force
       contributors (or the maintainer) who add new APIs to add the respective
       bindings as well in a timely manner, but then again, ultimately we'd
       like our contributors to also introduce an integration test along
       with their feature...

- as for resource consumption, given that this would either execute on a merge
  request basis or as a scheduled pipeline, it won't be that much of a resource
  waste, especially if we stop building containers in libvirt-<project> unless
  there was a change to the underlying Dockerfile (that is IMO the biggest
  resource waste; see the sketch below), so we can always cut down on resources
  and still keep using fresh builds from both projects.
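For the container builds, I mean something along these lines (a rough sketch;
the job name, template, and Dockerfile path are only illustrative):

  x86_64-centos-stream-8-container:
    extends: .container_job
    rules:
      # only rebuild the container when its recipe actually changes,
      # otherwise keep reusing the image already in the registry
      - changes:
          - ci/containers/centos-stream-8.Dockerfile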
> > > +  variables:
> > > +    DISTRO: centos-stream-8
> >
> > This variable doesn't seem to be used anywhere, so I assume it's a
> > leftover from development. Maybe you tried to implement the .test
> > template so that using it didn't require as much repetition and
> > unfortunately it didn't work out?
>
> Oh but it is. This is how the gitlab provisioner script knows which distro to
> provision; it's equivalent to lcitool's target.
I've looked at
https://gitlab.com/eskultety/libvirt-gitlab-executor
and understand how this value is used now, but without the additional
context it's basically impossible to figure out its purpose. Please
make sure you document it somehow.
I'll add a comment documenting the variable.
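Something along these lines (just a sketch of the wording):

  centos-stream-8-tests:
    extends: .tests
    variables:
      # tells the gitlab provisioner script which distro to install on the
      # test VM; equivalent to lcitool's target
      DISTRO: centos-stream-8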
> > > +  tags:
> > > +    - centos-stream-vm
> >
> > IIUC this is used both by the GitLab scheduler to pick suitable nodes
> > on which to execute the job (our own hardware in this case) and also
> > by the runner to decide which template to use for the VM.
> >
> > So shouldn't this be more specific? I would expect something like
> >
> >   tags:
> >     - centos-stream-8-vm
>
> What's the point? We'd have to constantly refresh the tags as supported
> platforms come and go, whereas fedora-vm and centos-stream-vm cover all
> currently supported versions - always!
> Other than that, I'm not sure that tags are passed on to the gitlab job itself;
> I may have missed it, but unless the tags are exposed as env variables, the
> provisioner script wouldn't know which template to provision. Also, the tag is
> supposed to annotate the baremetal host in this case, so in that context having
> '-vm' in the tag name makes sense, but it doesn't for the provisioner script,
> which relies on/tries to be compatible with lcitool as much as possible.
Okay, my misunderstanding was caused by not figuring out the purpose
of DISTRO. I agree that more specific tags are not necessary.
Should we make them *less* specific instead? As in, is there any
reason for having different tags for Fedora and CentOS jobs as
opposed to using a generic "this needs to run in a VM" tag for both?
Well, I would not be against it, but I feel this is more of a political issue:
this HW was provided by Red Hat with the intention that it be dedicated to
Red Hat workloads. If another interested 3rd party comes along (and I do hope
they will) and provides HW, we should utilize the resources fairly, in a way
that is respectful of the donor's/owner's intentions. IOW, if party A provides
a single machine to run CI workloads using Debian VMs, we should not schedule
Fedora/CentOS workloads on it, effectively saturating it.
So if the tags are to be adjusted, then I'd be in favour of recording the owner
of the runner in the tag.
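E.g. something like this (tag name purely illustrative):

  centos-stream-8-tests:
    tags:
      - redhat-vm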
Erik