
jib gets stuck at 'processing base image' when the base image comes from local docker daemon #4046

EternalWind opened this issue Jun 16, 2023 · 3 comments · May be fixed by #4048

Description of the issue

When reading the base image from the local Docker daemon, if the base image is built from a Dockerfile consisting of a relatively large number of instructions (example here), jib gets stuck at 'processing base image' forever, as shown below.
[screenshot: jib hangs indefinitely at 'Processing base image']

Root cause

When inspecting the base image via the Docker daemon, jib invokes Process.waitFor() immediately without reading from stdout. It only tries to read from stdout after waitFor() returns, i.e. after the process has terminated.

If the output is large enough to fill the pipe buffer, the child process blocks on writing until someone reads from its stdout to drain the buffer. The result is a deadlock in which waitFor() never returns.
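For illustration, here is a minimal, self-contained sketch of the deadlock-prone pattern (not jib's literal code; the image name is a placeholder):

```java
import java.io.IOException;
import java.io.InputStream;

public class DeadlockSketch {
  public static void main(String[] args) throws IOException, InterruptedException {
    Process process =
        new ProcessBuilder("docker", "inspect", "--type", "image", "some-huge-image").start();

    // BUG: waiting before reading. If the child writes more output than
    // the OS pipe buffer can hold, it blocks on write and never exits,
    // so waitFor() blocks forever.
    process.waitFor();

    // Never reached when the output is large.
    try (InputStream stdout = process.getInputStream()) {
      System.out.println(new String(stdout.readAllBytes()));
    }
  }
}
```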

Proposed fix

Update the implementation of CliDockerClient.inspect() as below.
[screenshot: proposed change to CliDockerClient.inspect()]
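The screenshot's exact code is not reproduced here; the gist of the proposal is to drain stdout before calling waitFor(). A minimal self-contained sketch of that pattern, using hypothetical names rather than jib's actual internals:

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;

public class InspectSketch {
  /** Runs docker inspect and returns its stdout without risking a pipe-buffer deadlock. */
  static String inspect(String imageReference) throws IOException, InterruptedException {
    Process process =
        new ProcessBuilder("docker", "inspect", "--type", "image", imageReference).start();

    // Read stdout to completion FIRST, so the child can never block on a
    // full pipe buffer no matter how large the output is.
    String output;
    try (InputStream stdout = process.getInputStream()) {
      output = new String(stdout.readAllBytes(), StandardCharsets.UTF_8);
    }

    // Only after stdout is drained do we wait for the exit code.
    if (process.waitFor() != 0) {
      throw new IOException("'docker inspect' failed for image: " + imageReference);
    }
    return output;
  }

  public static void main(String[] args) throws Exception {
    System.out.println(inspect(args.length > 0 ? args[0] : "alpine:latest"));
  }
}
```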


athtsio commented Nov 14, 2023

Hello all,

I am facing the same issue as described, running a Gradle task on a Jenkins instance with the plugin at version 3.3.1. Briefly, these are the stages:

  1. Create a Docker image using an existing base Docker image located in a remote registry (adding an extra layer via a Dockerfile with some stuff on top of that base image).

  2. Use the jibDockerBuild Gradle task with:
    from { image = "docker://" + [dockerImageName that was just built on stage1] }
    (see the sketch after this list).

  3. Get stuck with exactly the described issue: the build hangs at 25% > processing base image {IMAGE_NAME}.
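For concreteness, a hypothetical build.gradle excerpt for step 2 (the image name is a placeholder for whatever stage 1 produced):

```groovy
jib {
  from {
    // The docker:// prefix tells jib to read the base image from the
    // local Docker daemon instead of pulling from a remote registry.
    image = "docker://my-app-base:latest" // placeholder image name
  }
}
```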

This occurs in roughly 3 out of 5 pipeline triggers, and I need to find a workaround or get the provided fix for this bug ASAP, since it is getting really annoying :/ .

Thanks a lot, and especially to you @EternalWind for providing a solution for this bug :) .

EternalWind (Author) commented

@athtsio Since they are unlikely to merge my fix any time soon, one workaround we have been using is Docker's multi-stage build.

We base the final image on a much simpler base image, e.g. a minimal Alpine image, instead of the desired base image. Then we use the desired base image as a builder stage and copy all of the desired files out of it, since the instructions from a builder stage do not end up in the final image.

Of course, this is based on the assumption that most of the instructions you want from the desired base image are installing packages or copying files.
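A hypothetical Dockerfile sketch of this workaround (image names and paths are placeholders):

```dockerfile
# Use the heavy, instruction-rich image only as a builder stage; its
# instructions do not become layers of the final image.
FROM my-heavy-base:latest AS builder

# Base the final image on something minimal instead.
FROM alpine:3.18

# Copy only the artifacts the application actually needs out of the builder.
COPY --from=builder /opt/app /opt/app
COPY --from=builder /usr/local/lib/mylib /usr/local/lib/mylib
```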

Hopefully the fix will be merged soon.


athtsio commented Nov 14, 2023

@EternalWind thanks for the response/suggestion.
We are using some kind of builder pattern.

The base image is really minimal. It is built and published to a registry, so we do not have to rebuild it each time we need to dockerize the app; we take it and use it as-is in the single FROM of the Dockerfile.

On top of that base image, we build the image that will be used by the jib task, which is just the base image plus 3 new packages installed, plus copying and running a script that imports the AWS RDS certificates. Nothing more!

My point is that the instructions are literally 10 lines.
From my point of view, there is nothing extreme in the Dockerfile that would justify switching to a multi-stage build pattern :/

I do not know whether adding --no-cache to docker build would make any difference... or make things worse 😭
