GSoC 2022 CI project idea proposal

So, we're offloading as much CI configuration/workflow work to lcitool as possible. We can generate config files, install/update machines (local or remote), dump Dockerfiles, and so on, but we can't build and run those containers locally. Instead, we have a CI helper script in the libvirt repo which essentially just wraps a Makefile that pulls a GitLab CI container image for you and either:
- gives you a shell, or
- runs the build and tests

I'm not sure how many people actually know we have that helper script, let alone use it. I've been playing with the idea that we could integrate what's done in the Makefile into lcitool, utilizing either the podman library [1] or the docker library [2]. Apart from consolidating all CI services-related efforts in lcitool, the other benefit would be that we would gain the ability to run and debug in a project-specific container in other libvirt projects as well, not just main libvirt.

So, I thought this could be a nice project for GSoC. Ideas?

Unless there are arguments against this idea, I'd eventually add it to our GitLab GSoC issue tracker.

[1] https://github.com/containers/podman-py
[2] https://docker-py.readthedocs.io/en/stable/

Erik
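To make the idea a bit more concrete, here is a very rough sketch of the kind of logic lcitool could grow, written against the docker library [2] (the podman library [1] mirrors the same API). The image name, mount path and build command below are illustrative assumptions, not anything that exists today:

  # Rough sketch only: run a project build inside a CI container image,
  # roughly what the ci/Makefile wrapper does today.
  import docker

  def run_build_in_container(image, source_dir, command):
      client = docker.from_env()
      client.images.pull(image)
      # Bind-mount the local checkout into the container, run the build
      # command from there and return its output.
      return client.containers.run(
          image,
          command,
          volumes={source_dir: {"bind": "/src", "mode": "z"}},
          working_dir="/src",
          remove=True,
      )

  # Image name, path and build steps are made up for the example.
  output = run_build_in_container(
      "registry.gitlab.com/libvirt/libvirt/ci-fedora-35:latest",
      "/home/user/libvirt",
      ["sh", "-c", "meson setup build && ninja -C build"],
  )
  print(output.decode())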

On 2/22/22 17:04, Erik Skultety wrote:
So, we're offloading as much CI configuration/workflow work to lcitool as possible. We can generate config files, install/update machines (local or remote), dump Dockerfiles, and so on, but we can't build and run those containers locally. Instead, we have a CI helper script in the libvirt repo which essentially just wraps a Makefile that pulls a GitLab CI container image for you and either:
- gives you a shell, or
- runs the build and tests

I'm not sure how many people actually know we have that helper script, let alone use it. I've been playing with the idea that we could integrate what's done in the Makefile into lcitool, utilizing either the podman library [1] or the docker library [2]. Apart from consolidating all CI services-related efforts in lcitool, the other benefit would be that we would gain the ability to run and debug in a project-specific container in other libvirt projects as well, not just main libvirt.

So, I thought this could be a nice project for GSoC. Ideas?
I say go for it. This would be a nice project that can attract attention, since containers are popular nowadays.

Michal

On Tue, Feb 22, 2022 at 05:04:02PM +0100, Erik Skultety wrote:
So, we're offloading as much CI configuration/workflow work to lcitool as possible. We can generate config files, install/update machines (local or remote), dump Dockerfiles, and so on, but we can't build and run those containers locally. Instead, we have a CI helper script in the libvirt repo which essentially just wraps a Makefile that pulls a GitLab CI container image for you and either:
- gives you a shell, or
- runs the build and tests

I'm not sure how many people actually know we have that helper script, let alone use it. I've been playing with the idea that we could integrate what's done in the Makefile into lcitool, utilizing either the podman library [1] or the docker library [2]. Apart from consolidating all CI services-related efforts in lcitool, the other benefit would be that we would gain the ability to run and debug in a project-specific container in other libvirt projects as well, not just main libvirt.

So, I thought this could be a nice project for GSoC. Ideas?
I've been meaning to replace the make-based logic used for spawning local containers with a Python equivalent and get rid of ci/Makefile entirely. There was some progress last year, but I got sidetracked and never managed to finish the job. So obviously I'm in favor of doing more work in that area, especially if it's someone else doing it O:-)

I'm not entirely convinced that requiring the use of lcitool for this task is necessarily the best idea though. Ideally, it should be possible for developers to reproduce and debug CI failures locally without having to clone a second repository. It's fine to require lcitool for tasks that are performed by maintainers, but casual contributors should find all they need in libvirt.git itself.

Another thing that has been bothering me is that neither 'ci/helper build' nor 'lcitool build' will actually perform the exact same build steps as the corresponding CI job, making them only mildly effective as debugging tools for CI failures. And of course each of these build recipes is maintained separately, so we have three similar but not quite identical scripts floating around.

Here's my (fairly hand-wavy :) idea of how things should work.

Each project's repository should contain its build steps in the form of a ci/build script. The exact calling interface for this script will have to be determined, but based on existing usage at the very least we need to be able to specify the build type (regular, cross, and with dependencies built from git) and target-specific settings (for example whether to skip building RPMs). So I can imagine the invocation looking something like

  $ TYPE=native RPMS=skip ci/build

The build steps that currently live in the .native_build_job, .cross_build_job and .native_git_build_job templates in .gitlab-ci.yml will all be moved to the ci/build script. It shouldn't be necessary to alter them significantly in the process.

With this interface defined, we can change 'lcitool manifest' so that the jobs it generates go from

  x86_64-almalinux-8-clang:
    extends: .native_build_job
    needs:
      - x86_64-almalinux-8-container
    variables:
      CC: clang
      NAME: almalinux-8
      RPM: skip

to

  x86_64-almalinux-8-clang:
    needs:
      - x86_64-almalinux-8-container
    script:
      - TYPE=native NAME=almalinux-8 CC=clang RPM=skip ci/build

Of course it would still be possible to tell lcitool to use a custom template for the job instead, which might be necessary for setting up CI caching and the like. For simple cases though, you'd be able to use the default implementation.

'lcitool build' would also be changed so that it invokes the ci/build script in the project's repository.

The last missing piece would then be to finish converting the ci/helper script that exists in libvirt to Python and make it so that calling that entry point also ultimately results in running the ci/build script.

Of course having multiple copies of the ci/helper script and its logic for listing / running containers around is problematic because they would get out of sync over time... Perhaps we can maintain that script as part of libvirt-ci.git, and generate a local copy to be stored in each project's repository at the same time as other CI-related files such as Dockerfiles are generated? That sounds like it could work, but admittedly this is the fuzziest part of the entire plan :)

-- 
Andrea Bolognani / Red Hat / Virtualization
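To illustrate the kind of ci/build entry point meant above, here is a bare-bones sketch. It is written in Python purely for illustration; the script's language, variable names and exact build steps are all open questions, and the real steps would be lifted from the existing .gitlab-ci.yml templates:

  #!/usr/bin/env python3
  # Sketch of a possible ci/build entry point; variable names and build
  # steps are illustrative, not an agreed interface.
  import os
  import subprocess

  def run(cmd):
      print("+", " ".join(cmd))
      subprocess.run(cmd, check=True)

  build_type = os.environ.get("TYPE", "native")   # native / cross / native-git
  skip_rpms = os.environ.get("RPM", "") == "skip"

  run(["meson", "setup", "build"])
  run(["ninja", "-C", "build"])
  run(["meson", "test", "-C", "build"])

  if build_type == "native" and not skip_rpms:
      # the RPM build step from the current CI template would go here
      pass

A CI job, 'lcitool build' and a developer at their desk would then all end up invoking the exact same thing, e.g. TYPE=native NAME=almalinux-8 CC=clang RPM=skip ci/build as in the generated job above.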

On Thu, Feb 24, 2022 at 06:49:29AM -0800, Andrea Bolognani wrote:
On Tue, Feb 22, 2022 at 05:04:02PM +0100, Erik Skultety wrote:
So, we're offloading as much CI configuration/workflow work to lcitool as possible. We can generate config files, install/update machines (local or remote), dump Dockerfiles, and so on, but we can't build and run those containers locally. Instead, we have a CI helper script in the libvirt repo which essentially just wraps a Makefile that pulls a GitLab CI container image for you and either:
- gives you a shell, or
- runs the build and tests

I'm not sure how many people actually know we have that helper script, let alone use it. I've been playing with the idea that we could integrate what's done in the Makefile into lcitool, utilizing either the podman library [1] or the docker library [2]. Apart from consolidating all CI services-related efforts in lcitool, the other benefit would be that we would gain the ability to run and debug in a project-specific container in other libvirt projects as well, not just main libvirt.

So, I thought this could be a nice project for GSoC. Ideas?
I've been meaning to replace the make-based logic used for spawning local containers with a Python equivalent and get rid of ci/Makefile entirely. There was some progress last year, but I got sidetracked and never managed to finish the job. So obviously I'm in favor of doing more work in that area, especially if it's someone else doing it O:-)
I'm not entirely convinced that requiring the use of lcitool for this task is necessarily the best idea though. Ideally, it should be possible for developers to reproduce and debug CI failures locally without having to clone a second repository. It's fine to require lcitool for tasks that are performed by maintainers, but casual contributors should find all they need in libvirt.git itself.
Unless they also want to test with VMs, in which case lcitool is highly recommended. The thing is that right now ci/helper wraps the Makefile. Let's say you (or someone else) replace it with a Python equivalent; I don't think the number of contributors who suddenly start using it will differ much compared to the situation right now, but maybe I'm wrong :).

My motivation for adding the support to lcitool is simple: right now there's an RFC on adopting some infrastructure in order to run TCK-based functional tests as part of GitLab. That's just the first stage; ultimately we want developers to be able to run the same test suite locally without the need to wait for GitLab, and since the current design of the above uses nested virt, that is an achievable goal. There was also a suggestion to adopt the Avocado framework for a new test framework which would be largely based on TCK [1].

Now, let's get to the point I'm trying to make :) - my expectation/vision is that both the remote and local functional test executions will revolve around lcitool, and so eventually, with the adoption of *some* framework, it is going to become a requirement that libvirt developers perform functional testing before submitting a patch series. With that said, it makes more sense in my mind to focus on adding container runtime support to lcitool in order to move towards the bigger goal and have everything consolidated in a single place.
Another thing that has been bothering me is that neither 'ci/helper build' nor 'lcitool build' will actually perform the exact same build steps as the corresponding CI job, making them only mildly effective as debugging tools for CI failures. And of course each of these build recipes is maintained separately, so we have three similar but not quite identical scripts floating around.
Yes, but on its own, I think that is a problem which can be solved separately.
Here's my (fairly hand-wavy :) idea of how things should work.
Each project's repository should contain its build steps in the form of a ci/build script. The exact calling interface for this script will have to be determined, but based on existing usage at the very least we need to be able to specify the build type (regular, cross and with dependencies built from git) and target-specific settings (for example whether to skip building RPMs). So I can imagine the invocation looking something like
$ TYPE=native RPMS=skip ci/build
The build steps that currently live in the .native_build_job, .cross_build_job and .native_git_build_job templates in .gitlab-ci.yml will all be moved to the ci/build script. It shouldn't be necessary to alter them significantly in the process.
With this interface defined, we can change 'lcitool manifest' so that the jobs it generates go from
x86_64-almalinux-8-clang:
  extends: .native_build_job
  needs:
    - x86_64-almalinux-8-container
  variables:
    CC: clang
    NAME: almalinux-8
    RPM: skip
to
x86_64-almalinux-8-clang:
  needs:
    - x86_64-almalinux-8-container
  script:
    - TYPE=native NAME=almalinux-8 CC=clang RPM=skip ci/build
Of course it would still be possible to tell lcitool to use a custom template for the job instead, which might be necessary for setting up CI caching and the like. For simple cases though, you'd be able to use the default implementation.
'lcitool build' would also be changed so that it invokes the ci/build script in the project's repository.
The last missing piece would then be to finish converting the ci/helper script that exists in libvirt to Python and make it so that calling that entry point also ultimately results in running the ci/build script.
Of course having multiple copies of the ci/helper script and its logic for listing / running containers around is problematic because they would get out of sync over time... Perhaps we can maintain that script as part of libvirt-ci.git, and generate a local copy to be stored in each project's repository at the same time as other CI-related files such as Dockerfiles are generated? That sounds like it could work, but admittedly this is the fuzziest part of the entire plan :)
The thing is that the GSoC project evaluation process has already begun, so I'd like to have something listed, unless you completely disagree with the vision I tried to lay out above, in which case I'll have to retract the idea completely and it'll require further discussion on the direction we want to take (even for the testing framework's sake in the other thread).

[1] https://listman.redhat.com/archives/libvir-list/2021-July/msg00028.html

Erik

On Fri, Feb 25, 2022 at 11:31:39AM +0100, Erik Skultety wrote:
On Thu, Feb 24, 2022 at 06:49:29AM -0800, Andrea Bolognani wrote:
I'm not entirely convinced that requiring the use of lcitool for this task is necessarily the best idea though. Ideally, it should be possible for developers to reproduce and debug CI failures locally without having to clone a second repository. It's fine to require lcitool for tasks that are performed by maintainers, but casual contributors should find all they need in libvirt.git itself.
Unless they also want to test with VMs, in which case lcitool is highly recommended. The thing is that right now ci/helper wraps the Makefile. Let's say you (or someone else) replace it with a Python equivalent; I don't think the number of contributors who suddenly start using it will differ much compared to the situation right now, but maybe I'm wrong :).

My motivation for adding the support to lcitool is simple: right now there's an RFC on adopting some infrastructure in order to run TCK-based functional tests as part of GitLab. That's just the first stage; ultimately we want developers to be able to run the same test suite locally without the need to wait for GitLab, and since the current design of the above uses nested virt, that is an achievable goal. There was also a suggestion to adopt the Avocado framework for a new test framework which would be largely based on TCK [1].

Now, let's get to the point I'm trying to make :) - my expectation/vision is that both the remote and local functional test executions will revolve around lcitool, and so eventually, with the adoption of *some* framework, it is going to become a requirement that libvirt developers perform functional testing before submitting a patch series. With that said, it makes more sense in my mind to focus on adding container runtime support to lcitool in order to move towards the bigger goal and have everything consolidated in a single place.
Alright, that's a pretty solid argument. And even if we ultimately decide that we don't want to require using lcitool, once we have a project-agnostic Python implementation of this logic it would be reasonably straightforward to turn it into a standalone tool similar to the current ci/helper, so we have an exit strategy.
Another thing that has been bothering me is that neither 'ci/helper build' nor 'lcitool build' will actually perform the exact same build steps as the corresponding CI job, making them only mildly effective as debugging tools for CI failures. And of course each of these build recipes is maintained separately, so we have three similar but not quite identical scripts floating around.
Yes, but on its own, I think that is a problem which can be solved separately.
Agreed that the two tasks are not necessarily dependent on each other, but there's significant overlap. I think it makes sense to rationalize how the build steps for a project are defined and maintained as part of adding "build in a local container" support to lcitool.

Do you have any high-level concerns about the ci/build approach I vaguely described? The finer details are of course far from being set in stone, but I think the overall idea is solid and we should aim for it being implemented as we evolve our CI tooling.

-- 
Andrea Bolognani / Red Hat / Virtualization

On Fri, Feb 25, 2022 at 03:02:49AM -0800, Andrea Bolognani wrote:
On Fri, Feb 25, 2022 at 11:31:39AM +0100, Erik Skultety wrote:
On Thu, Feb 24, 2022 at 06:49:29AM -0800, Andrea Bolognani wrote:
I'm not entirely convinced that requiring the use of lcitool for this task is necessarily the best idea though. Ideally, it should be possible for developers to reproduce and debug CI failures locally without having to clone a second repository. It's fine to require lcitool for tasks that are performed by maintainers, but casual contributors should find all they need in libvirt.git itself.
Unless they also want to test with VMs, in which case lcitool is highly recommended. The thing is that right now ci/helper wraps the Makefile. Let's say you (or someone else) replace it with a Python equivalent; I don't think the number of contributors who suddenly start using it will differ much compared to the situation right now, but maybe I'm wrong :).

My motivation for adding the support to lcitool is simple: right now there's an RFC on adopting some infrastructure in order to run TCK-based functional tests as part of GitLab. That's just the first stage; ultimately we want developers to be able to run the same test suite locally without the need to wait for GitLab, and since the current design of the above uses nested virt, that is an achievable goal. There was also a suggestion to adopt the Avocado framework for a new test framework which would be largely based on TCK [1].

Now, let's get to the point I'm trying to make :) - my expectation/vision is that both the remote and local functional test executions will revolve around lcitool, and so eventually, with the adoption of *some* framework, it is going to become a requirement that libvirt developers perform functional testing before submitting a patch series. With that said, it makes more sense in my mind to focus on adding container runtime support to lcitool in order to move towards the bigger goal and have everything consolidated in a single place.
Alright, that's a pretty solid argument. And even if we ultimately decide that we don't want to require using lcitool, once we have a project-agnostic Python implementation of this logic it would be reasonably straightforward to turn it into a standalone tool similar to the current ci/helper, so we have an exit strategy.
Another thing that has been bothering me is that neither 'ci/helper build' nor 'lcitool build' will actually perform the exact same build steps as the corresponding CI job, making them only mildly effective as debugging tools for CI failures. And of course each of these build recipes is maintained separately, so we have three similar but not quite identical scripts floating around.
Yes, but on its own, I think that is a problem which can be solved separately.
Agreed that the two tasks are not necessarily dependent on each other, but there's significant overlap. I think it makes sense to rationalize how the build steps for a project are defined and maintained as part of adding "build in a local container" support to lcitool.
Do you have any high-level concerns about the ci/build approach I vaguely described? The finer details are of course far from being set in stone, but I think the overall idea is solid and we should aim for it being implemented as we evolve our CI tooling.
No, I think that was a solid proposal; I would probably think along those lines as well had I ever come to that idea myself :). Having each repo define its own build script which can be consumed both during local test executions and copied to the Dockerfile for a GitLab job to consume makes complete sense.

Top it off with something like 'lcitool container --script <path_to_build_script>' and I think we have solid ground for future work.

Erik

On Fri, Feb 25, 2022 at 12:30:08PM +0100, Erik Skultety wrote:
On Fri, Feb 25, 2022 at 03:02:49AM -0800, Andrea Bolognani wrote:
Do you have any high-level concerns about the ci/build approach I vaguely described? The finer details are of course far from being set in stone, but I think the overall idea is solid and we should aim for it being implemented as we evolve our CI tooling.
No, I think that was a solid proposal; I would probably think along those lines as well had I ever come to that idea myself :). Having each repo define its own build script which can be consumed both during local test executions and copied to the Dockerfile for a GitLab job to consume makes complete sense.
Just to make sure we're on the same page, what do you mean by "copied to the Dockerfile"? The CI job can call the script directly from the local clone of the repository just like a developer would on their machine - no copying necessary. The Dockerfile describes the environment a build will happen in, not the build steps.
Top it off with something like 'lcitool container --script <path_to_build_script>' and I think we have solid ground for future work.
We're stepping into premature bikeshedding territory here :) but I would expect this to look like 'lcitool build $target --container' or something along those lines. Making it possible to specify an alternate path to the build script could be a neat feature, but we should have a sensible default that makes it possible for people to simply not care most of the time.

-- 
Andrea Bolognani / Red Hat / Virtualization
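For what it's worth, the CLI shape being discussed here is cheap to express; a sketch with argparse, where the command and option names are placeholders rather than a decided lcitool interface:

  # Sketch of the discussed CLI shape; names are placeholders, not a
  # decided lcitool interface.
  import argparse

  parser = argparse.ArgumentParser(prog="lcitool")
  sub = parser.add_subparsers(dest="command", required=True)

  build = sub.add_parser("build", help="run the project's build recipe")
  build.add_argument("target", help="OS target, e.g. fedora-35")
  build.add_argument("--container", action="store_true",
                     help="run the build inside a container")
  build.add_argument("--build-script", default="ci/build",
                     help="path to the build recipe (the default should be "
                          "good enough most of the time)")

  args = parser.parse_args(["build", "fedora-35", "--container"])
  print(args.target, args.container, args.build_script)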

On Fri, Feb 25, 2022 at 05:49:39AM -0800, Andrea Bolognani wrote:
On Fri, Feb 25, 2022 at 12:30:08PM +0100, Erik Skultety wrote:
On Fri, Feb 25, 2022 at 03:02:49AM -0800, Andrea Bolognani wrote:
Do you have any high-level concerns about the ci/build approach I vaguely described? The finer details are of course far from being set in stone, but I think the overall idea is solid and we should aim for it being implemented as we evolve our CI tooling.
No, I think that was a solid proposal; I would probably think along those lines as well had I ever come to that idea myself :). Having each repo define its own build script which can be consumed both during local test executions and copied to the Dockerfile for a GitLab job to consume makes complete sense.
Just to make sure we're on the same page, what do you mean by "copied to the Dockerfile"? The CI job can call the script directly from the local clone of the repository just like a developer would on their machine - no copying necessary. The Dockerfile describes the environment a build will happen in, not the build steps.
My brain glitched; GitLab was clearly not in the same thinking box as the Dockerfile, and I got swayed by other related thoughts, like providing the build script along with specifying a local repo to be cloned inside the container - thinking unnecessarily way too far ahead...

Anyhow, I turned the idea proposal into the following GSoC issue: https://gitlab.com/libvirt/libvirt/-/issues/279

Erik

On Fri, Feb 25, 2022 at 05:49:39AM -0800, Andrea Bolognani wrote:
On Fri, Feb 25, 2022 at 12:30:08PM +0100, Erik Skultety wrote:
On Fri, Feb 25, 2022 at 03:02:49AM -0800, Andrea Bolognani wrote:
Do you have any high-level concerns about the ci/build approach I vaguely described? The finer details are of course far from being set in stone, but I think the overall idea is solid and we should aim for it being implemented as we evolve our CI tooling.
No, I think that was a solid proposal; I would probably think along those lines as well had I ever come to that idea myself :). Having each repo define its own build script which can be consumed both during local test executions and copied to the Dockerfile for a GitLab job to consume makes complete sense.
Just to make sure we're on the same page, what do you mean by "copied to the Dockerfile"? The CI job can call the script directly from the local clone of the repository just like a developer would on their machine - no copying necessary. The Dockerfile describes the environment a build will happen in, not the build steps.
Top it off with something like 'lcitool container --script <path_to_build_script>' and I think we have solid ground for future work.
We're stepping into premature bikeshedding territory here :) but I would expect this to look like 'lcitool build $target --container' or something along those lines.
I've never got on very well with the 'lcitool build' command as its a bit of a black box you don't have much control over. In its current impl, it also means that lcitool has to know about build commands for each project which is unfortunate. If we're going to wire up support for containers, I think we should start by just creating a 'lcitool run $target-or-host' command that brings up an environment with an interactive shell, with the current source present. eg $ lcitool run fedora-35 We could then let it take a command + args to run as extra args eg $ lcitool run fedora-35 meson build $ lcitool run fedora-35 ninja -C build would just work with VM, but for containers you would need the container FS to be persistent across runs. This could be achieved by giving the container an explicit name, so you just re-restart the same container each time instead of using a throwaway container. The basic idea though is that running stuff inside the container/VM is just the same as running stuff in your local checkout. The exact same command name(s0, just prefixed with 'lcitool run fedora-35'. Or in my case I'd add an alias alias f35='lcitool run fedora-35' so i can do $ f35 meson build $ f35 ninja -C bujld etc Regards, Daniel -- |: https://berrange.com -o- https://www.flickr.com/photos/dberrange :| |: https://libvirt.org -o- https://fstop138.berrange.com :| |: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|

On Fri, Feb 25, 2022 at 02:09:13PM +0000, Daniel P. Berrangé wrote:
I've never got on very well with the 'lcitool build' command as it's a bit of a black box you don't have much control over. In its current impl, it also means that lcitool has to know about build commands for each project, which is unfortunate.
My proposal to standardize on running a ci/build script that's maintained as part of the project itself, and that is also used for the regular CI jobs, would address this.
If we're going to wire up support for containers, I think we should start by just creating a 'lcitool run $target-or-host' command that brings up an environment with an interactive shell, with the current source present. eg
$ lcitool run fedora-35
This becomes a lot of fun when you start considering how to have a consistent interface across containers and VMs. Exposing the local git tree to a container is sort of easy (even though there are a number of caveats), but to achieve the same result with a VM you'd need to get something like virtiofs involved and coordinate with the guest OS in non-trivial ways...
We could then let it take a command + args to run as extra args
eg
$ lcitool run fedora-35 meson build
$ lcitool run fedora-35 ninja -C build
These would just work with a VM, but for containers you would need the container FS to be persistent across runs. This could be achieved by giving the container an explicit name, so you just restart the same container each time instead of using a throwaway container.
The basic idea though is that running stuff inside the container/VM is just the same as running stuff in your local checkout. The exact same command name(s), just prefixed with 'lcitool run fedora-35'.
Or in my case I'd add an alias
alias f35='lcitool run fedora-35'
so i can do
$ f35 meson build
$ f35 ninja -C build
If you're trying to reproduce a CI failure locally so that you can debug it, or performing validation before posting patches, you don't want to spell out the build steps this way. What you want is to call

  $ lcitool build fedora-35 --container

and get the same failure you've seen in the CI pipeline, because the same steps have been executed in the same container image.

To debug the failure, you should be able to either run something like

  $ lcitool shell fedora-35 --container

and run ./ci/build yourself, or even better be offered the chance to spawn an interactive shell as soon as 'lcitool build' figures out that a failure has occurred.

Some way to optionally keep the container's state around so that you can return to it at a later time would be desirable, but that's more of an optimization than something that's really necessary for regular usage.

-- 
Andrea Bolognani / Red Hat / Virtualization
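The "spawn a shell as soon as the build fails" flow doesn't need much machinery either. A sketch that shells out to the podman CLI; the image name, mount options and the ci/build path are assumptions:

  # Sketch: run ci/build in the same container image as the CI job and,
  # if it fails, open an interactive shell in an identical environment.
  import subprocess

  def build_and_debug(image, source_dir, env=None):
      common = [
          "podman", "run", "--rm", "-it",
          "-v", f"{source_dir}:/src:z", "-w", "/src",
      ]
      for key, value in (env or {}).items():
          common += ["--env", f"{key}={value}"]

      result = subprocess.run([*common, image, "ci/build"])
      if result.returncode != 0:
          print("build failed, spawning a shell in the same image")
          subprocess.run([*common, image, "/bin/sh"])
      return result.returncode

  build_and_debug(
      "registry.gitlab.com/libvirt/libvirt/ci-fedora-35:latest",
      "/home/user/libvirt",
      env={"TYPE": "native"},
  )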

On Fri, Feb 25, 2022 at 08:36:10AM -0800, Andrea Bolognani wrote:
On Fri, Feb 25, 2022 at 02:09:13PM +0000, Daniel P. Berrangé wrote:
I've never got on very well with the 'lcitool build' command as it's a bit of a black box you don't have much control over. In its current impl, it also means that lcitool has to know about build commands for each project, which is unfortunate.
My proposal to standardize on running a ci/build script that's maintained as part of the project itself, and that is also used for the regular CI jobs, would address this.
If we're going to wire up support for containers, I think we should start by just creating a 'lcitool run $target-or-host' command that brings up an environment with an interactive shell, with the current source present. eg
$ lcitool run fedora-35
This becomes a lot of fun when you start considering how to have a consistent interface across containers and VMs. Exposing the local git tree to a container is sort of easy (even though there are a number of caveats), but to achieve the same result with a VM you'd need to get something like virtiofs involved and coordinate with the guest OS in non-trivial ways...
We could then let it take a command + args to run as extra args
eg
$ lcitool run fedora-35 meson build
$ lcitool run fedora-35 ninja -C build
These would just work with a VM, but for containers you would need the container FS to be persistent across runs. This could be achieved by giving the container an explicit name, so you just restart the same container each time instead of using a throwaway container.
The basic idea though is that running stuff inside the container/VM is just the same as running stuff in your local checkout. The exact same command name(s), just prefixed with 'lcitool run fedora-35'.
Or in my case I'd add an alias
alias f35='lcitool run fedora-35'
so i can do
$ f35 meson build
$ f35 ninja -C build
If you're trying to reproduce a CI failure locally so that you can debug it, or performing validation before posting patches, you don't want to spell out the build steps this way. What you want is to call
$ lcitool build fedora-35 --container
and get the same failure you've seen in the CI pipeline, because the same steps have been executed in the same container image.
I guess that depends on your POV really, as that's not the way I prefer to debug things. I want to have an interactive shell and invoke the individual commands myself, so that I have the ability to customize their invocation to alter logging or debugging settings, or choose different build options to understand their effect.

IOW, what matters to me is getting an environment up and running, with the source tree available, and ready to step through the build commands. A 'ci/build' shell script is ok to just reproduce a failure, but since it doesn't let me tweak things, I'm not likely to use it much for debugging.

Regards,
Daniel

-- 
|: https://berrange.com -o- https://www.flickr.com/photos/dberrange :|
|: https://libvirt.org -o- https://fstop138.berrange.com :|
|: https://entangle-photo.org -o- https://www.instagram.com/dberrange :|

On Fri, Feb 25, 2022 at 05:14:41PM +0000, Daniel P. Berrangé wrote:
On Fri, Feb 25, 2022 at 08:36:10AM -0800, Andrea Bolognani wrote:
If you're trying to reproduce a CI failure locally so that you can debug it, or performing validation before posting patches, you don't want to spell out the build steps this way. What you want is to call
$ lcitool build fedora-35 --container
and get the same failure you've seen in the CI pipeline, because the same steps have been executed in the same container image.
I guess that depends on your POV really, as that's not the way I prefer to debug things. I want to have an interactive shell and invoke the individual commands myself, so that I have the ability to customize their invocation to alter logging or debugging settings, or choose different build options to understand their effect.
IOW, what matters to me is getting an environment up and running, with the source tree available, and ready to step through the build commands. A 'ci/build' shell script is ok to just reproduce a failure, but since it doesn't let me tweak things, I'm not likely to use it much for debugging.
As long as you still have the ability to use 'lcitool shell' to enter the environment, your use case should be covered. I agree that we should have that ability, and in fact the ci/helper script already supports it.

What I'm trying to say is that getting

  $ lcitool run fedora-35 meson build
  $ lcitool run fedora-35 ninja -C build

to work as expected is much harder than enabling

  $ lcitool shell fedora-35
  meson build
  ninja -C build

because the former needs persistence across lcitool runs while the latter doesn't.

-- 
Andrea Bolognani / Red Hat / Virtualization
participants (4)
- Andrea Bolognani
- Daniel P. Berrangé
- Erik Skultety
- Michal Prívozník