On Wed, Jun 10, 2020 at 07:29:53PM +0200, Andrea Bolognani wrote:
On Wed, 2020-06-10 at 17:11 +0100, Daniel P. Berrangé wrote:
> On Wed, Jun 10, 2020 at 05:34:13PM +0200, Andrea Bolognani wrote:
> > +.container_default_job_template: &container_default_job_definition
> > +  image: docker:stable
> > +  stage: containers
> > +  services:
> > +    - docker:dind
> > +  before_script:
> > +    - export TAG="$CI_REGISTRY_IMAGE/ci-$NAME:$CI_COMMIT_REF_SLUG"
> > +    - export COMMON_TAG="$CI_REGISTRY/libvirt/libvirt/ci-$NAME:master"
>
> This is different to what we've done on all the other repos. I originally
> used this, but noted that it results in an ever growing set of tags being
> published in the container registry, as users will have a new branch name
> for every piece of work. It also means you'll never get a cache hit
> from the user's registry across feature branches, though that is mitigated
> by the fact that we'll consider the global cache too, I guess.
We can have an additional
--cache-from $CI_REGISTRY_IMAGE/ci-$NAME:master
to further reduce the possibility of getting a cache miss.
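As a rough sketch of what that would look like in the job's script (the job name, Dockerfile path, and `ci/containers` layout here are assumptions for illustration, not taken from the actual .gitlab-ci.yml):

```yaml
container-build-job:
  <<: *container_default_job_definition
  script:
    # Pull whichever tag exists so its layers are available locally
    - docker pull "$TAG" || docker pull "$COMMON_TAG" || true
    # Consider both the branch tag and the master tag as cache sources
    - docker build
        --cache-from "$TAG"
        --cache-from "$CI_REGISTRY_IMAGE/ci-$NAME:master"
        --tag "$TAG"
        -f "ci/containers/$NAME.Dockerfile" ci/containers
    - docker push "$TAG"
```

The extra --cache-from costs nothing when the tag doesn't exist yet, since docker simply ignores cache sources it can't resolve.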
Note that you can configure an expiration policy for tags
https://docs.gitlab.com/ee/user/packages/container_registry/#managing-pro...
but apparently it has to happen on a per-project basis instead of
being something that you can set globally for your account.
Is having a lot of tags such a big problem? It seems like it's not
unusual... See
https://hub.docker.com/_/nginx?tab=tags
for an extreme example.
But yeah, maybe I'm overthinking this. If the pipeline produces the
containers it consumes, then whether you label them as "latest"
or "master" or "a0sd90lv_k1" doesn't really make any difference,
because the next pipeline is going to build the containers again
before using them.
I've just never much liked the idea of things growing without bounds,
even if someone else is paying for the storage.
There is still one scenario in which reusing the same name could lead
to unwanted results, however, and that is when two or more pipelines
are running at the same time. Right now this is allowed, but by
using resource groups
https://docs.gitlab.com/ee/ci/yaml/#resource_group
we should be able to prevent that from happening.
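A minimal sketch of what that could look like, assuming a hypothetical
container job name (the only real part is the resource_group key itself):

```yaml
# Jobs sharing the same resource_group value run one at a time across
# all pipelines, so two pipelines can't push the same tag concurrently.
x86-centos-8-container:
  <<: *container_default_job_definition
  resource_group: container-builds
  variables:
    NAME: centos-8
```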
NB, running multiple pipelines in parallel is only a problem if those
pipelines actually contain changes to the Dockerfile recipes. Otherwise
the rebuild of the container images is essentially a no-op.
Regards,
Daniel
--
|: https://berrange.com      -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org         -o-         https://fstop138.berrange.com :|
|: https://entangle-photo.org  -o- https://www.instagram.com/dberrange   :|