On Wed, 2019-02-13 at 18:52 +0000, Daniel P. Berrangé wrote:
> On Thu, Feb 07, 2019 at 04:21:34PM +0100, Andrea Bolognani wrote:
> > On Tue, 2019-02-05 at 17:53 +0000, Daniel P. Berrangé wrote:
> > [...]
> > > With the Debian 9 distro, this supports arm64, armel, armhf, mips,
> > > mipsel, mips64el, ppc64el, s390x, which are all the official archs
> > > that Debian maintainers currently build libvirt for. With Debian Sid,
> > > this is extended to include i386.
> > 
> > Is i386 really not supported as a cross compilation target on Debian
> > 9? It seems like it would be such a low hanging fruit...
> Yes, the i386 gcc simply doesn't exist in 9. No idea why.
Pretty weird indeed.
But that kinda leads into the question: why have Debian 9 support
at all? We're only going to have a single set of buildenv-cross-*
images, one per architecture, and those are going to be based on
Debian Sid, no? Just like we use Fedora Rawhide for MinGW builds.
So adding Debian 9 into the mix seems unnecessary.
> > [...]
> > > +++ b/guests/host_vars/libvirt-debian-9/docker.yml
> > > @@ -1,2 +1,59 @@
> > >  ---
> > >  docker_base: debian:9
> > > +cross_build:
> > 
> > I don't think this belongs in docker.yml, since there's nothing
> > specific to Docker here: we could just as well use this information
> > to build a virtual machine in which to perform cross builds.
> > 
> > I would create a new fact file, eg. cross-build.yml, for this.
> Ok. In practice this split doesn't seem to actually do anything
> since we load all files & take the union of their contents.
It's just to keep things organized :)
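
Just to illustrate the point for anyone following along, I imagine
the fact loading boils down to something like this — a rough sketch
with a made-up helper name, not the actual lcitool code:

  import os
  import yaml

  def load_facts(host_vars_dir):
      # Merge every YAML fact file for a host into a single dict:
      # the docker.yml / cross-build.yml split is purely for human
      # readers, since only the union ever gets consumed.
      facts = {}
      for fname in sorted(os.listdir(host_vars_dir)):
          if fname.endswith(".yml"):
              with open(os.path.join(host_vars_dir, fname)) as fp:
                  facts.update(yaml.safe_load(fp))
      return facts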
> > > +  blacklist:
> > > +    # Doesn't properly resolve from foreign arch package
> > > +    # to the native package of augeas-lenses
> > 
> > I cannot really understand what you're trying to say with this
> > comment, can you try to reword it please?
> dpkg fails to resolve the deps. It wants augeas-lenses:s390x
> when it should be satisfied with augeas-lenses:$NATIVE. You
> can't install both augeas-lenses as they conflict on files.
Got it.
> > > +    - libnetcf-dev
> > > +    # Fails to resolve deps from foreign arch
> > > +    - wireshark-dev
> > > +    # dtrace tool doesn't know how to cross-compile
> > > +    - systemtap-sdt-dev
> > 
> > For these and all other blacklisted packages, you should list the
> > name of the mapping (eg. netcf) rather than the concrete Debian
> > package name (eg. libnetcf-dev). Then of course you'll have to act
> > on the blacklist earlier, before mapping is performed.
> I don't find that desirable. This is a Debian-specific config
> file, and it is clearer to list the actual Debian packages we
> have trouble with, as that's what apt-get is complaining about
> when it fails. Using the generic mapping obscures what is
> going on.
That's true every time package names are involved, and everywhere
else we use the abstract names rather than the concrete ones. We
should use them here as well, not only to be consistent but also
so that we only ever have to update a single file when the concrete
package names change.
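
To make the idea concrete, the filtering step I have in mind would
look roughly like this — a hypothetical sketch with made-up names,
not the actual lcitool code:

  # Blacklist abstract mapping names (eg. netcf) and filter before
  # resolving them to distro packages, so a Debian package rename
  # only ever requires touching the mappings file.
  CROSS_BLACKLIST = {"netcf", "wireshark", "dtrace"}

  def resolve_packages(project_names, mappings):
      pkgs = []
      for name in project_names:        # abstract names, eg. "netcf"
          if name in CROSS_BLACKLIST:   # act before mapping...
              continue
          pkg = mappings.get(name)      # ...to eg. "libnetcf-dev"
          if pkg is not None:
              pkgs.append(pkg)
      return pkgs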
> > > +  arches:
> > > +    arm64:
> > > +      gcc-pkg-prefix: crossbuild-essential-arm64
> > 
> > This is not a prefix, and the package is not GCC :)
> > 
> > I would use 'cross_gcc', which incidentally is also the name of the
> > variable you use in the Python script to store this value. (More on
> > this later, though.)
> Actually this entire variable can be eliminated, as we can derive
> it from the target name.
Alright, I had suggested doing that with the arch name and
crossbuild-essential-*, but using the target and gcc-* works just
as nicely :)
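
For the archives, the derivation really is just string concatenation
once the triplet is in the config — a hypothetical fragment, using
the variable names from the script:

  cross_prefix = "aarch64-linux-gnu"   # from the config
  cross_gcc = "gcc-" + cross_prefix    # -> "gcc-aarch64-linux-gnu"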
> > > +      target-prefix: aarch64-linux-gnu
> > 
> > Same here, you later use 'cross_prefix' in the script but also store
> > it as 'TARGET' in the environment... I'd suggest 'cross_target'.
> 
> Using "cross_" is redundant as this is already inside a parent
> "cross-build" block.
Alright.
> > [...]
> > > +    mipsel:
> > > +      gcc-pkg-prefix: gcc-mipsel-linux-gnu
> > 
> > crossbuild-essential-mipsel is available in Debian 9, so you should
> > use that instead.
> Actually, I've eliminated all usage of the crossbuild-essential-*
> packages. These needlessly pull in the C++ compiler too.
Okay.
> > >  docker_base: debian:sid
> > > +cross_build:
> > > +  blacklist:
> > > +    # Doesn't properly resolve from foreign arch package
> > > +    # to the native package of augeas-lenses
> > > +    - libnetcf-dev
> > > +    # Fails to resolve deps from foreign arch
> > > +    - wireshark-dev
> > > +    # dtrace tool doesn't know how to cross-compile
> > > +    - systemtap-sdt-dev
> > > +    # appears to be dropped from the 'sid' tree
> > > +    - sheepdog
> > 
> > So it has! And it's apparently not going to be in the next Ubuntu
> > release, either.
> What's missing that we've not seen this problem already? Is it
> just because we don't regularly rebuild the CI hosts / images?
Yeah, we hadn't triggered a build for Docker images in a while,
and as for the virtual machines running builds on ci.centos.org,
we don't rebuild those unless there's some issue, so they're
happily hanging on to the local copy of sheepdog they installed
back when it was still part of Debian Sid.
> > > +        if cross_arch and pkgname[-4:] == "-dev":
> > > +            if pkgname not in cross_pkgs:
> > > +                cross_pkgs.append(pkgname + ":" + cross_arch)
> > 
> > I can see how sticking with the Debian-style architecture names
> > makes this bit trivial, but I'm still entirely convinced we should
> > use the same architecture names as libvirt does.
> > 
> > Especially once we've integrated this into libvirt's own build
> > system, the difference in names would introduce needless friction,
> > as the target audience (libvirt developers) is certainly more
> > familiar with the architecture names used in that project than
> > Debian's.
> Using the libvirt arch names adds no value. There's no friction as
> there is no context in which we're using the libvirt arch names &
> need a mapping to the cross build names. These image configs are
> distro-specific and using the distro's native arch is directly
> relevant to the build results.
The context is libvirt developers running a cross build through the
build system integration, eg. using your proposed implementation
you'd need to use

  $ make cibuild-cross-arm64

instead of the much more natural (to libvirt developers)

  $ make cibuild-cross-aarch64
Now, having to mentally map 'aarch64' to 'arm64' might not be a lot
of friction, but it's still some friction, and it's completely
unnecessary too because we can perform the architecture mapping very
easily in lcitool and make it completely transparent to all other
tools and users.
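
A sketch of what I mean — the dict below is my illustration of the
idea, not code from this series:

  # libvirt-style arch name -> Debian arch name, applied inside
  # lcitool so every user-facing interface speaks libvirt's dialect
  DEB_ARCH = {
      "aarch64":  "arm64",
      "armv6l":   "armel",
      "armv7l":   "armhf",
      "i686":     "i386",
      "mips":     "mips",
      "mipsel":   "mipsel",
      "mips64el": "mips64el",
      "ppc64le":  "ppc64el",
      "s390x":    "s390x",
  }

  def deb_arch(libvirt_arch):
      # eg. "aarch64" -> "arm64"
      return DEB_ARCH[libvirt_arch]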
> > > +    ENV TARGET "%(cross_prefix)s"
> > > +    ENV CONFIGURE_OPTS "--host=%(cross_prefix)s --target=%(cross_prefix)s"
> > > +    ENV PKG_CONFIG_LIBDIR "/usr/lib/%(cross_prefix)s/pkgconfig"
> > 
> > As you mention in a previous commit message, each ENV statement
> > will create an additional layer, and it's considered best practice
> > to have as few of those as possible.
> The difference is the usage context. The ENV statement is intended
> to provide environment variables to runtime users of the image. The
> previous ENV usage was merely internal to the docker build and was
> polluting the runtime env where it was not required.
Okay, point taken.
> These ENV statements are intended for direct usage in the docker
> images at runtime, so you can launch the image with docker and
> simply type "$TARGET-gcc" or "./configure $CONFIGURE_OPTS" and
> have it do the right thing.
I guess what I don't like is using something that's Docker-specific
to inject this kind of information into the guest OS, because in
my mind you should end up with a mostly identical environment
whether you're building in a virtual machine created using lcitool
or inside a Docker container.
The way we get this kind of information into virtual machines is
through a combination of static, job-agnostic variables set in the
user's .bashrc and job-specific variables passed by whatever
process is running the build.
Looking at the variables above, TARGET belongs to the former group
and CONFIGURE_OPTS to the latter, with PKG_CONFIG_LIBDIR sitting
uncomfortably somewhere in between.
Now, we're not adding a custom .bashrc at all when using Docker,
so I guess using ENV directives instead is fine. But poisoning the
user environment by setting PKG_CONFIG_LIBDIR globally might not be,
and having a magic CONFIGURE_OPTS that build commands are somehow
just supposed to know exists is very much not cool.
What I'd like to see instead is something like

  docker run \
    -e PKG_CONFIG_LIBDIR='/usr/lib/$TARGET/pkgconfig' \
    -e autogen_args='--host=$TARGET --target=$TARGET' \
    ... \
    /bin/sh -xc './autogen.sh $autogen_args'

but I just realized that such a thing wouldn't really work because
the variables won't be expanded inside the container, and so you'd
just get the literal string '$TARGET', which is of course no good :(
Alright, ENV directives it is I guess :(
> > Another reason why I don't like this is that it's different from
> > what we already do for the "cross compilation" target we already
> > support, MinGW: in that case, we have a single image with pristine
> > environment, and we inject the additional variables only when
> > actually running the build. In my opinion, that's a saner way to
> > tweak the build, since the interaction is very explicit and doesn't
> > require any knowledge of other components that might even be
> > hosted, as is the case here, in a separate git repository!
> The mingw images are slightly different in that the Fedora mingw
> RPMs have invented a "mingw32-configure" program that sets all
> these env variables already. If that program didn't exist, then
> it would be desirable to have the same env vars set for the mingw
> images too.
Yeah, that's kind of a shortcut that we took in our travis.yml
compared to what we do on ci.centos.org or local builds through
lcitool, where we call good ol' configure with appropriate arguments
after adding all the necessary variables into the environment.
Personally I don't much like the discrepancy, as I believe we should
perform all builds the same way regardless of the environment to
guarantee consistency, but it is way shorter and more convenient so
I guess we're keeping it :) Especially since the issue mentioned
above makes it hard to do anything else :(
> > Additionally, and I only thought about this now :) did we consider
> > the possibility of providing a single Debian image with all cross
> > compilation packages installed, rather than one image per target
> > architecture? Assuming that's feasible, it would also fit very
> > nicely with how we have a single Fedora image that we use for both
> > MinGW variants...
> Creating a single image would mean a significantly larger download
> when you only care about testing a single architecture.
> Because of the way image layer sharing works, having lots of
> separate images ensures you can have a minimal download size for
> a single image, while not introducing duplication if you need to
> download all images.
Okay, this makes sense.
> We ought to have a separate Fedora image for each MinGW target for
> this same reason, in fact.
First step would be to rip the MinGW stuff out of the regular
Fedora Rawhide configuration; after that we can definitely move
to separate images.
--
Andrea Bolognani / Red Hat / Virtualization