On Wed, Mar 02, 2022 at 06:27:04AM -0800, Andrea Bolognani wrote:
On Wed, Mar 02, 2022 at 01:11:04PM +0100, Erik Skultety wrote:
> > > I gave this more thought. What you suggest is viable, but the following
> > > is worth considering if we go with your proposal:
> > >
> > > - libvirt-perl jobs build upstream libvirt first in order to build the
> > >   bindings
> > >   -> generally it takes until right before the release that
> > >      APIs/constants are added to the respective bindings (Perl/Python)
This is not entirely accurate. While it's true that bindings
generally lag behind the C library, they're usually updated within
days. Changes only getting in at the very end of a development cycle
are the exception, not the norm.
> > >   -> if we rely on the latest libvirt-perl artifacts without actually
> > >      triggering the pipeline, yes, the artifacts would be stable, but
> > >      fairly old (unless we schedule a recurrent pipeline in the project
> > >      to refresh them), thus not giving us feedback from the integration
> > >      stage that bindings need to be added first, because the API
> > >      coverage would likely fail, thus failing the whole libvirt-perl
> > >      pipeline and thus invalidating the integration test stage in the
> > >      libvirt project
> > >      => now, I admit this would get pretty annoying because it would
> > >         force contributors (or the maintainer) who add new APIs to add
> > >         respective bindings as well in a timely manner, but then again
> > >         ultimately we'd like our contributors to also introduce an
> > >         integration test along with their feature...
> >
> > Note right now the perl API coverage tests are configured to only be
> > gating when run on nightly scheduled jobs. I stopped them being gating on
> > contributions because if someone is fixing a bug in the bindings it is
> > silly to force their merge request to also add new API bindings.
I don't think we can expect integration tests to be merged at the
same time as a feature when new APIs are involved. If tests are
written in Python, then the Python bindings need to introduce support
for the new API before the test can exist, and that can't happen
until the feature has been merged.
Again, that is a logical conclusion, which brings us to an unrelated process
question: how do we change the contribution process so that the contribution
of a feature doesn't end with it being merged to the C library? IOW, we'd
ideally want a test introduced with every feature, but since we'd need the
bindings first to actually do that, and we can't have a binding unless the C
counterpart is already merged, how do we keep contributors motivated enough?
(Note that this is food for thought; it's only tangential to this effort.)
If the feature or bug fix doesn't require new APIs to be introduced
this is of course not an issue. Most changes should fall into this
bucket.
So overall I still think using existing artifacts would be the better
approach, at least initially. We can always change things later if we
find that we've outgrown it.
So, given that
https://gitlab.com/libvirt/libvirt-perl/-/merge_requests/55 was already
merged, we should not get into a situation where no artifacts are available:
GitLab documents that even expired artifacts won't be deleted until new
artifacts become available. I therefore think we can depend on the latest
available artifacts without building them. I'll refresh the patch
accordingly and test.
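To make that concrete, here is a rough sketch of how a libvirt job could
consume the latest successful libvirt-perl artifacts via GitLab's
"artifacts for ref" API endpoint instead of triggering a fresh pipeline.
The job name x86_64-fedora-35 and the script paths are assumptions for
illustration, not the actual configuration:

```yaml
# Hypothetical fragment for libvirt's .gitlab-ci.yml: download the artifacts
# archive from the latest successful pipeline on libvirt-perl's master
# branch rather than rebuilding the bindings in this pipeline.
integration-tests:
  stage: integration
  before_script:
    - curl --location --fail --output perl-bindings.zip "https://gitlab.com/api/v4/projects/libvirt%2Flibvirt-perl/jobs/artifacts/master/download?job=x86_64-fedora-35"
    - unzip perl-bindings.zip -d perl-bindings
  script:
    - ./scripts/run-integration-tests.sh
```

Since the endpoint always resolves to the latest successful pipeline on the
given ref, the job keeps working without any recurring refresh schedule, at
the cost of the staleness concern discussed above.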
> > > > Should we make them *less* specific instead? As in, is there any
> > > > reason for having different tags for Fedora and CentOS jobs as
> > > > opposed to using a generic "this needs to run in a VM" tag for both?
> > >
> > > Well, I would not be against, but I feel this is more of a political
> > > issue: this HW was provided by Red Hat with the intention to be
> > > dedicated for Red Hat workloads. If another interested 3rd party comes
> > > (and I do hope they will) and provides HW, we should utilize the
> > > resources fairly, in a way respectful to the donor's/owner's
> > > intentions, IOW if party A provides a single machine to run CI
> > > workloads using Debian VMs, we should not schedule Fedora/CentOS
> > > workloads in there, effectively saturating it.
> > > So if the tags are to be adjusted, then I'd be in favour of recording
> > > the owner of the runner in the tag.
> >
> > If we have hardware available, we should use it to the best of its
> > ability. Nothing is gained by leaving it idle if it has spare capacity
> > to run jobs.
>
> Well, logically there's absolutely no disagreement with you here. Personally,
> I would go about it the same. The problem is that the HW we're talking about
> wasn't an official donation, Red Hat still owns and controls the HW, so the
> company can very much disagree with running other workloads on it long term.
> I'm not saying we shouldn't test the limits, reliability and bandwidth to
> its fullest potential. What I'm trying to say is that the major issue here
> is that
> contributing to open source projects is a collaborative effort of all
> interested parties (duh, should go without saying) and so we cannot expect a
> single party which just happens to have the biggest stake in the project to run
> workloads for everybody else. I mean the situation would have been different if
> the HW were a proper donation, but unfortunately it is not. If we pick and run
> workloads on various distros for the sake of getting coverage (which makes
> total sense btw), it would later be harder to communicate back to the community
> why the number of distros (or their variety) would need to be decreased once
> the HW's capabilities are saturated with demanding workloads, e.g. migration
> testing or device assignment, etc.
>
> Whether I do or do not personally feel comfortable being involved in ^this
> political situation and decision making, as a contributor using Red Hat's email
> domain I do respect the company's intentions with regards to the offered HW.
> I think that the final setup we agree on eventually is up for an internal
> debate and doesn't have a direct impact on this proposal per se.
I think it's not unreasonable that when Red Hat, or any other entity,
provides hardware access to the project there will be some strings
attached. This is already the case for GitLab and Cirrus CI, for
example.
I could easily see the instance of libvirt-gitlab-executor running on
hardware owned and operated by Red Hat returning a failure if a job
submitted to it comes with DISTRO=debian-11.
libvirt-gitlab-executor is supposed to be system-owner agnostic; I'd even
like to make the project part of the libvirt gitlab namespace umbrella so
that anyone can use it to prepare their machine to be plugged in and used
for libvirt upstream integration testing. Therefore, I don't think the
project is the right place for such checks; those should IMO live solely in
the gitlab-ci.yml configuration.
Erik
For the time being, I think it'd be okay to use a tag like
redhat-vm-host or something like that for these jobs. Once we have
had jobs running for a bit, we can figure out whether there is spare
capacity available and try to convince Red Hat to let more jobs run
on the machines.
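For illustration, a minimal sketch of what such tagged jobs could look like
in .gitlab-ci.yml. The job names, the redhat-vm-host tag spelling, the
second owner's tag and the script path are all assumptions, not the actual
configuration:

```yaml
# Hypothetical fragment: the tag routes each job to runners whose owner
# agreed to host that distro, keeping the policy in the CI config rather
# than inside libvirt-gitlab-executor.
centos-stream-8-tests:
  stage: integration
  tags:
    - redhat-vm-host          # runs only on Red Hat-provided VM hosts
  variables:
    DISTRO: centos-stream-8
  script:
    - ./scripts/integration-tests.sh "$DISTRO"

# A Debian job would carry a different owner's tag instead, so it never
# lands on the Red Hat machines:
debian-11-tests:
  stage: integration
  tags:
    - other-party-vm-host     # assumed tag for another donor's runner
  variables:
    DISTRO: debian-11
  script:
    - ./scripts/integration-tests.sh "$DISTRO"
```

Loosening the restriction later would then be a one-line change per job
(swapping or adding tags), without touching the executor.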
--
Andrea Bolognani / Red Hat / Virtualization