On Fri, Aug 05, 2016 at 10:41:25AM +0100, Daniel P. Berrange wrote:
> On Fri, Aug 05, 2016 at 10:55:55AM +0200, Pavel Hrdina wrote:
> > On Fri, Aug 05, 2016 at 09:36:12AM +0100, Daniel P. Berrange wrote:
> > > On Fri, Aug 05, 2016 at 10:29:17AM +0200, Pavel Hrdina wrote:
> > > > On Thu, Aug 04, 2016 at 03:52:09PM +0100, Daniel P. Berrange wrote:
> > > > > We are using the CentOS Jenkins server for running CI tasks.
> > > > > Currently those tasks are maintained by people manually
> > > > > updating the Jenkins web UI. This is a horrible interface
> > > > > that requires 100's of mouse clicks to achieve even the
> > > > > simplest things. It is also incredibly hard to compare
> > > > > the config of different jobs to make sure they are working
> > > > > in a consistent manner.
> > > > >
> > > > > Fortunately there are tools which can help - OpenStack
> > > > > created the jenkins-job-builder tool which uses the Jenkins
> > > > > REST API to create/update jobs from a simple YAML file
> > > > > definition.
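[For readers unfamiliar with the tool: a minimal job definition might look
roughly like the fragment below. The job name, git URL and build commands are
illustrative only, not the actual libvirt CI config; `jenkins-jobs test jobs/ -o out/`
renders such YAML to Jenkins XML offline.]

```yaml
# Hypothetical minimal jenkins-job-builder definition -- names and the
# repository URL are illustrative, not the real libvirt CI configuration.
- job:
    name: libvirt-master-build
    node: libvirt
    scm:
      - git:
          url: git://libvirt.org/libvirt.git
          branches:
            - master
    builders:
      - shell: |
          ./autogen.sh
          make
          make check
```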
> > > > >
> > > > > This series thus creates a set of YAML files which will
> > > > > (almost) replicate our current manually created config.
> > > > >
> > > > > I've used jenkins-job-builder in offline test mode to
> > > > > generate Jenkins XML files and then compared them to what
> > > > > we currently have and they are mostly the same. So there
> > > > > should not be too many surprises lurking, but I do still
> > > > > expect some accidental breakage in places. As such I have
> > > > > not actually uploaded the new auto-generated job configs
> > > > > to ci.centos.org at this time.
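[The offline comparison described above can be approximated with a small
script. This is my own sketch, not the method actually used -- the function
names and the structural-equality rules (ignore attribute order and
surrounding whitespace) are my assumptions:]

```python
import xml.etree.ElementTree as ET

def normalize(elem):
    """Return a canonical nested-tuple form of an XML element,
    ignoring attribute order and leading/trailing whitespace in text."""
    text = (elem.text or "").strip()
    return (elem.tag,
            tuple(sorted(elem.attrib.items())),
            text,
            tuple(normalize(child) for child in elem))

def same_job_config(xml_a, xml_b):
    """Structurally compare two Jenkins job config.xml documents."""
    return normalize(ET.fromstring(xml_a)) == normalize(ET.fromstring(xml_b))

# Whitespace-only differences between hand-made and generated XML are ignored
old = "<project><builders><shell>make</shell></builders></project>"
new = "<project><builders><shell> make </shell></builders></project>"
print(same_job_config(old, new))  # prints: True
```

A real comparison would of course diff whole config.xml files exported from
the server, but the same normalize-then-compare idea applies.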
> > > > >
> > > > > The intention is that these configs will all live in the
> > > > > libvirt GIT server in a new 'libvirt-jenkins-ci' repository
> > > >
> > > > Hi Dan, wow, nice job. I didn't know that there was this tool to
> > > > maintain jobs for Jenkins. The web interface is horrible.
> > > >
> > > > I'm not sure why we need to build all the tools; we don't do it right now,
> > > > and even though we are using those tools, they have nothing in common
> > > > with libvirt projects and friends.
> > >
> > > What do you mean by "build all the tools"? Everything I have provided
> > > jobs for here, we *already* have running jobs for under libvirt CI.
> >
> > I meant autotools, distutils, and both Perl jobs.
>
> I'm still not sure what you mean. Those are all things we're using
> from existing jobs, so I've not added anything new that we don't
> already have.
Having lunch apparently helped me understand the files under the jobs directory.
Let's consider that this conversation didn't happen :).
> > > > If we need to install those tools on all nodes, we should probably
> > > > create another configuration/database of packages that are required
> > > > to run all the jobs from all projects.
> > >
> > > We shouldn't install any pre-built packages for anything that we are
> > > capable of building from git, as git master branches cannot be assumed
> > > to work with older binary packaged versions, especially when getting to
> > > CentOS, which is comparatively old. I already fixed this on the CI build
> > > slaves by removing a bunch of packages, as we had some jobs building
> > > against things from git while other jobs were trying to build against
> > > deps from git but then accidentally picking up parts of the binary
> > > package install, so we had a horrible half-and-half build.
> >
> > Well, when I was creating the Jenkins jobs, my idea was to use Fedora-rawhide
> > for upstream builds where upstream libvirt-python is built against upstream
> > libvirt and so on. For all other nodes the idea was to use system packages
> > and build all projects against system packages.
> >
> > At least for libvirt I got the idea that it should be buildable also on older
> > systems like CentOS 6, and that was my main motivation to apply this also
> > on other projects where it makes sense.
>
> That doesn't really work in general for tracking git master. For example
> this is why virt-manager builds stopped working for weeks on all the non
> rawhide platforms. I think the python binding is pretty much the only
> thing that is explicitly written to always be able to build against older
> libvirt.
Yes, it doesn't work in general, but in the case of libvirt-python it does:
libvirt-python is written that way, and it would be nice to test it, IMHO. In
the case of virt-manager it's because of the flawed design of its unit tests
(they should not depend on the environment or on installed packages).

If a project depends on newer versions, then that project should enforce the
dependency, like libvirt-perl does; otherwise it should be safe to build it
against older versions.

How about we make two categories of projects: those that depend on the latest
versions and those that don't?

Doing that against host RPMs leads to a rather unreliable environment.
One set of projects will require a bunch of RPMs installed on the host
to build against, while the other set will not want those RPMs installed
and have to be careful not to accidentally build against them.
To do this properly, I think we would have to create a libvirt-stable job
that uses a separate workspace from the main libvirt job and builds from
an arbitrarily older stable branch, and installs into a different
VIRT_PREFIX directory. We'd then have to create a second libvirt-python
job that builds against this libvirt-stable job. This way we ensure we
never pollute the build hosts with RPMs that could affect builds of the
stuff that is supposed to be building against git master.
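
[Sketched in jenkins-job-builder terms, that split might look roughly like
the fragment below. The job names, workspace paths, branch and VIRT_PREFIX
values are all illustrative assumptions, not an agreed config:]

```yaml
# Hypothetical sketch of the libvirt-stable / libvirt-python split;
# every name and path here is illustrative only.
- job:
    name: libvirt-stable-build
    workspace: libvirt-stable        # separate from the master job's workspace
    builders:
      - shell: |
          export VIRT_PREFIX="$HOME/build/libvirt-stable/vroot"
          ./autogen.sh --prefix="$VIRT_PREFIX"
          make && make install

- job:
    name: libvirt-python-stable-build
    builders:
      - shell: |
          # Point pkg-config at the stable install tree, not git master's,
          # so the binding is built against the older libvirt.
          export PKG_CONFIG_PATH="$HOME/build/libvirt-stable/vroot/lib/pkgconfig"
          python setup.py build
```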
Regards,
Daniel
--