Approach for reusing intermediate Docker images in system tests #1700
About the second option:

Interestingly, one of the supported backends is also the GitHub Actions cache. We can use different/multiple caches, but we need to name each of them differently. If we also set the cache mode to
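As a minimal sketch of that second option (the Dockerfile path, scope names, and tags are illustrative assumptions, not the actual setup), the GitHub Actions cache backend of Buildx can be configured per image via `docker/build-push-action`, with a distinct `scope` so the caches do not overwrite each other:

```yaml
# Hypothetical workflow step: build one adapter image using the
# GitHub Actions cache backend, scoped to this image only.
- name: Build openfoam-adapter image
  uses: docker/build-push-action@v5
  with:
    file: dockerfiles/openfoam-adapter/Dockerfile   # assumed path
    push: false
    cache-from: type=gha,scope=openfoam-adapter
    cache-to: type=gha,mode=max,scope=openfoam-adapter
```

`mode=max` exports all intermediate layers to the cache, not only the layers of the final stage, which matters for reusing intermediate images.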
With the current approach of having N Dockerfiles for the N containers used during the system tests, the Dockerfiles become quite spread out and loosely coupled. A very different approach would be to create one large Dockerfile and use multiple stages to build the adapters. This could look like:

```dockerfile
FROM ubuntu:22.04 AS precice_base
ARG PRECICE_REF
RUN update_os
RUN ./install_precice ${PRECICE_REF}

# Build the openfoam-adapter
FROM precice_base AS openfoam-adapter
ARG OPENFOAM_ADAPTER_REF
RUN ./Allwmake ${OPENFOAM_ADAPTER_REF}

# Build the python_bindings image
FROM precice_base AS python_bindings
ARG PYTHON_BINDINGS_REF
RUN ./install_python_bindings ${PYTHON_BINDINGS_REF}

# Resolve the dependency of fenics on the Python bindings by building on top of them
FROM python_bindings AS fenics-adapter
ARG FENICS_ADAPTER_REF
RUN ./install_fenics_adapter ${FENICS_ADAPTER_REF}
```
One could then use the

This approach of course has pros and cons:

Pros:

Cons:
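For illustration, the individual stages of such a combined Dockerfile could be built (and cached) separately via Docker's `--target` flag; the tag and build-argument values below are assumptions:

```shell
# Build only the openfoam-adapter stage of the hypothetical
# multi-stage Dockerfile above; earlier stages are reused from cache.
docker build \
  --target openfoam-adapter \
  --build-arg PRECICE_REF=main \
  --build-arg OPENFOAM_ADAPTER_REF=master \
  -t tutorials/openfoam-adapter:latest .
```

Building a later stage automatically rebuilds (or cache-hits) only the stages it depends on, e.g. `precice_base`.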
I understand that this is the natural implementation of what we have established so far. I would like to see a working prototype.
This would be the main thing to check, and what could actually also speed up everything.
In the
I do not understand this. Could you please elaborate?
I guess this means "what happens if we also want to publish Docker images for each repository". I would say, let's ignore that for now. Thinking about it again, this would not really be something that I would like to support users with.
What I mean by that: assume someone changes the way the openfoam-adapter is being built, such as adding compiler flags. Then one would not only need to touch the openfoam-adapter repository, but also the tutorials repository, to update the respective Dockerfile.
Thinking about it: if done right, one could easily reuse the Dockerfile to publish an image to Docker Hub or any other registry by building the specific stage. This could easily be done by the build-and-push action.
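As a sketch of that idea (the registry tag is a hypothetical example), the `target` input of `docker/build-push-action` selects which stage of the multi-stage Dockerfile gets published:

```yaml
# Hypothetical publishing step: build one named stage and push it.
- name: Publish openfoam-adapter stage
  uses: docker/build-push-action@v5
  with:
    target: openfoam-adapter              # stage name in the shared Dockerfile
    push: true
    tags: precice/openfoam-adapter:latest # assumed tag, for illustration
```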
So, should I go ahead and try to build a small prototype using the openfoam-adapter?
I think we should be able to find a solution to that later, if such a problem becomes realistic. One approach could be, e.g., to have the templates of the Dockerfiles etc. in each repository. But these are also specific to the platform used, so I think it makes most sense to keep them where the tests are. We also do this in the
Yes, please! 🚲
Follow-up: I understand that the idea now is that we use multi-stage Docker builds just to construct the containers that we will be using in Docker Compose. Is that correct? Or do we now want to have a single container with all the components tested in each test? I would guess that a combination would be faster, as it would not need to generate new images that often: use multi-stage builds to take advantage of the caching when switching between states of each image for each component, but then still use Docker Compose to reuse images of components that have not been changed. So, more like this:
rather than:
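A sketch of such a combination (service and stage names are assumed): each Docker Compose service builds its own stage of the shared multi-stage Dockerfile, so unchanged stages come from the build cache while Compose still wires the components together as separate services:

```yaml
# docker-compose.yaml sketch: one shared multi-stage Dockerfile,
# one service per stage; only changed stages are rebuilt.
services:
  openfoam-adapter:
    build:
      context: .
      target: openfoam-adapter   # stage in the shared Dockerfile
  fenics-adapter:
    build:
      context: .
      target: fenics-adapter
```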
I would go with:
I also did some experimenting. The flexibility of not having to rely on a published tag is quite nice.
@valentin-seitz can you provide some comment on why this issue was closed? A reference to a pull request or other issue would also be great.
These are the related PRs:

The Docker Compose template then configures the GitHub Actions cache: https://github.com/precice/tutorials/blob/develop/tools/tests/docker-compose.template.yaml

Cached layers are visible here (when available): https://github.com/precice/tutorials/actions/caches

While the GitHub Actions cache is updated, it is not hit; there seems to be a related bug in the action. If we switched the system tests to our own runner, we could simply rely on the normal Docker cache anyway.
In the preCICE v2 paper, we describe the challenges, stakeholders, and general approach for the preCICE system tests, which test multiple repositories together, built as layers of Docker images on top of each other, and started as different services. This is something we already tried in the previous, Travis-based system tests.
To save significant time, we want to avoid rebuilding images of components that could already be available. In the previous approach, we were using a rather arbitrary naming scheme for images that we were pushing to Docker Hub. Most of these images are not available anymore, but there were quite a few.
In the new system tests, I can think of the following options, which need further investigation:
Alternatively, or next to option 2, we could provide only one base image (e.g., the official preCICE image) and use Terraform (or maybe Ansible) to generate the images we need at each job. We will again need some way to reuse artifacts.
Are there any further alternatives we should consider? Any suggestions on the technical implementation?