I'm curious how tags will work when the base image is updated for an
already-released image. The image for a tag should be immutable (IMHO), and I
think people would be surprised if 8.8.0 suddenly changed, even if it was for
a good reason such as fixing a CVE in the base image. But based on what Kevin
said, perhaps there's already precedent for this with the official images?

On Fri, Jan 15, 2021 at 1:51 PM Houston Putman <houstonput...@gmail.com>
wrote:

> Thanks for bringing up this issue Kevin.
>
> Periodically re-building docker images is certainly a feature we could
> support, and probably should, to automatically keep up with security fixes.
> We could even automate it pretty easily in Jenkins.
>
> We could also build support into the gradle commands to download and verify
> the "official" TGZ instead of building a TGZ from source, and build the
> image with that. That way release images are always built with the exact
> same binaries. The Dockerfile wouldn't need to change at all between local
> and release; it still merely expects a TGZ to be passed in the context, and
> gradle can determine whether the TGZ needs to be built from scratch or
> downloaded.
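>
> As an illustration only, here is a rough shell sketch (not the actual
> gradle wiring) of what that download-and-verify step might do before
> handing the TGZ to the Docker build context. The mirror URL, version,
> checksum format, and context path are all assumptions.
>
> #!/usr/bin/env bash
> # Hypothetical sketch of a "download and verify the official TGZ" step
> # that a gradle task could perform before docker build. URLs and paths
> # are illustrative assumptions, not the project's actual build wiring.
> set -euo pipefail
>
> SOLR_VERSION="9.0.0"                                  # assumed version
> MIRROR="https://archive.apache.org/dist/lucene/solr"  # assumed location
> CONTEXT_DIR="solr/docker/build-context"               # assumed context dir
>
> mkdir -p "${CONTEXT_DIR}"
> cd "${CONTEXT_DIR}"
>
> # Fetch the release artifact plus its checksum and detached signature.
> curl -fSL -O "${MIRROR}/${SOLR_VERSION}/solr-${SOLR_VERSION}.tgz"
> curl -fSL -O "${MIRROR}/${SOLR_VERSION}/solr-${SOLR_VERSION}.tgz.sha512"
> curl -fSL -O "${MIRROR}/${SOLR_VERSION}/solr-${SOLR_VERSION}.tgz.asc"
>
> # Verify the checksum (assumes the .sha512 file is in `sha512sum -c`
> # format) and the GPG signature against the project KEYS file.
> sha512sum -c "solr-${SOLR_VERSION}.tgz.sha512"
> curl -fSL -o KEYS "https://downloads.apache.org/lucene/KEYS"
> gpg --import KEYS
> gpg --verify "solr-${SOLR_VERSION}.tgz.asc" "solr-${SOLR_VERSION}.tgz"
>
> # The verified TGZ now sits in the build context, so the Dockerfile can
> # consume it exactly as it would a locally built TGZ.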
>
> This still likely wouldn't be good enough to make the image an "official
> docker image", but it gets us to essentially the same end-state image. The
> only difference is that the downloading and verification happen in gradle
> instead of in the Dockerfile.
>
> - Houston
>
> On Fri, Jan 15, 2021 at 2:05 PM Kevin Risden <kris...@apache.org> wrote:
>
>>> Currently, for the solr-docker-image and a majority of "Docker Official
>>> Images", the officially released Solr binaries are downloaded from mirrors
>>> and validated within the Dockerfiles. This makes it easy to assure users
>>> that the 9.0 Solr docker image contains the 9.0 Solr release. This process
>>> doesn't fit local builds very well, because there is nowhere to download
>>> local builds from, and validation isn't required.
>>>
>>> *The current opinion in the community is to abandon the "Docker Official
>>> Images" style process of downloading and validating official binaries, and
>>> instead have the release manager use the local-build image creation with
>>> the final release source.* This should result in the same docker image in
>>> the end; however, there is no trust built into the docker image itself.
>>> Instead we are likely going to document a way for users to verify the
>>> docker-image contents themselves.
>>>
>>
>> Before we abandon the official process of downloading/validating official
>> binaries, I think there is a good reason to keep the ability to download an
>> "official" Apache Solr release and use it in the "official" Solr
>> convenience Docker image.
>>
>> Docker images are static, point-in-time copies of an OS and all supporting
>> packages (like Java) at build time. Docker images should periodically be
>> rebuilt to pick up the latest security and bug fixes in the base image,
>> just as any OS should run `apt upgrade` or `yum update` periodically to
>> stay up to date.
>>
>> My proposal is to periodically rebuild the "official" Solr convenience
>> Docker image based on the "official" Solr release to ensure we keep the
>> Docker images up to date. The idea is that we have a list of "supported"
>> versions of Solr (ie: 8.5, 8.6, 8.7) and periodically (ie: daily, weekly)
>> the Docker images are rebuilt. Once a new release is made (ie: 8.8) it gets
>> added to this rebuilding matrix. This ensures that the "official" Solr
>> convenience Docker image stays reasonably up to date with regard to base
>> image security updates.
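>>
>> To make that concrete, here is a minimal sketch of what such a periodic
>> rebuild job (cron or Jenkins) could look like. The version list, image
>> name, build-arg, and build context path are all assumptions, not an
>> agreed-upon setup.
>>
>> #!/usr/bin/env bash
>> # Hypothetical periodic job: rebuild every "supported" release so each
>> # tag picks up the latest base image and OS package updates.
>> set -euo pipefail
>>
>> SUPPORTED_VERSIONS=("8.5.2" "8.6.3" "8.7.0")   # illustrative versions
>>
>> for version in "${SUPPORTED_VERSIONS[@]}"; do
>>   # --pull re-resolves the base image; --no-cache re-runs the package
>>   # installation layers instead of reusing stale cached layers.
>>   docker build --pull --no-cache \
>>     --build-arg SOLR_VERSION="${version}" \
>>     -t "apache/solr:${version}" \
>>     solr/docker
>>   docker push "apache/solr:${version}"
>> done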
>>
>> My understanding (from a few years ago) is that the Docker official
>> images are rebuilt when the base image is updated. This was automatic from
>> what I understand. If we move away from the "Docker official image" way of
>> doing things, we should still ensure that we can provide high quality
>> secure Docker images to the community.
>>
>> We should not need a new Apache Solr release (ie: 8.7.1, 8.8) to update
>> the Docker image to pick up the latest base image changes.
>>
>> It is important to be able to build and test locally with a Docker image
>> that matches what an "official" Docker image is. This should still be a
>> goal, but we should be able to rebuild the Solr docker image without
>> rebuilding all of Solr.
>>
>> PS - I chatted a bit with Houston on Slack about this topic and hopefully
>> captured all the context correctly.
>>
>> Kevin Risden
>>
>>
>> On Fri, Jan 15, 2021 at 12:45 PM Houston Putman <houstonput...@gmail.com>
>> wrote:
>>
>>> There are a few decisions that need to be ironed out around the Solr
>>> docker image before 9.0 is released. This is because the community has
>>> decided that Solr should start releasing its own docker images starting
>>> with 9.0.
>>>
>>> Below is the current state of the ongoing discussions for the Solr
>>> Docker image. Please feel free to correct me or fill in any information I
>>> may be missing.
>>>
>>> Where does this image live?
>>>
>>> There are two options for this really.
>>>
>>>    - _/solr - docker run solr:9.0 (Official Docker Image)
>>>    - apache/solr - docker run apache/solr:9.0
>>>
>>> The benefits of the first are 1) the nice usability of being able to
>>> plainly specify "solr" and 2) the "Docker Official Images" badge on
>>> DockerHub. The downside is that there are very strict requirements for
>>> creating Official Docker Images, which would complicate the build and
>>> require us to build release docker images and local docker images in
>>> separate ways.
>>>
>>> The benefit of using the apache namespace is that we can build the
>>> image in any way that we want. We would be able to build release and local
>>> docker images the exact same way. The downside is the loss of the "Docker
>>> Official Images" badge.
>>>
>>> *I think there is some consensus that choosing the "apache/solr"
>>> location is fine, and worth the added flexibility we get in the build
>>> process.*
>>>
>>> Legal Stuff
>>>
>>> There are a few legal questions we need to keep in mind when creating
>>> this process and doing a release for the first time.
>>>
>>>    - Source release - The apache policy is: (from Michael Sokolov)
>>>
>>>>    “Every ASF release MUST contain one or more source packages, which
>>>>    MUST be sufficient for a user to build and test the release provided
>>>>    they have access to the appropriate platform and tools.”
>>>
>>>    For the docker build this is fine as long as the solr/docker gradle
>>>    module is included in the source release, since one can always rebuild
>>>    the same image by running gradlew docker from the source (a rough
>>>    sketch of that follows this list).
>>>
>>>    -
>>>
>>>    Jan Høydahl mentioned that the docker file layers should be limited,
>>>    but I'm not exactly sure what this means or entails. Maybe he can
>>>    expand on this.
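>>>
>>> As a rough sketch of the rebuild-from-source path mentioned in the first
>>> item (the download URL and archive name are assumptions; gradlew docker
>>> is the task referenced above):
>>>
>>> # Hypothetical rebuild of the image from the source release.
>>> curl -fSL -O "https://archive.apache.org/dist/lucene/solr/9.0.0/solr-9.0.0-src.tgz"
>>> tar -xzf solr-9.0.0-src.tgz
>>> cd solr-9.0.0
>>> ./gradlew docker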
>>>
>>> Artifacts within the Image
>>>
>>> As mentioned above in the "Image Location" section, the goal of
>>> including the docker build process inside the Solr project is to make
>>> development easier by providing an easy way to build the official-style
>>> docker image with local source code. In order to achieve "official-style"
>>> images for local builds, we want to make the build process for local images
>>> as close as possible to the process for building official release images.
>>>
>>> Currently, for the solr-docker-image and a majority of "Docker Official
>>> Images", the officially released Solr binaries are downloaded from mirrors
>>> and validated within the Dockerfiles. This makes it easy to assure users
>>> that the 9.0 Solr docker image contains the 9.0 Solr release. This process
>>> doesn't fit local builds very well, because there is nowhere to download
>>> local builds from, and validation isn't required.
>>>
>>> *The current opinion in the community is to abandon the "Docker Official
>>> Images" style process of downloading and validating official binaries, and
>>> instead have the release manager use the local-build image creation with
>>> the final release source.* This should result in the same docker image in
>>> the end; however, there is no trust built into the docker image itself.
>>> Instead we are likely going to document a way for users to verify the
>>> docker-image contents themselves.
>>>
>>> I am not sure what the user-side verification process would look like
>>> for the image, but I definitely think it is something that we should look
>>> into. Maybe Docker will allow us to use these as official images if we can
>>> script out this verification and make it easy for them to do? Just a
>>> thought, I'm not sure if that would actually work.
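>>>
>>> For what it's worth, one hypothetical shape such a verification could
>>> take is sketched below: pull the Solr install tree out of the image and
>>> compare it against the official, mirror-downloaded release. The image
>>> name, version, and install path are assumptions, and some image-specific
>>> differences (permissions, removed files) may need to be tolerated.
>>>
>>> set -euo pipefail
>>>
>>> VERSION="9.0.0"                     # assumed release
>>> IMAGE="apache/solr:${VERSION}"      # assumed image name
>>>
>>> # Download the official binary release (checksum/signature verification
>>> # against the published .sha512/.asc files omitted here for brevity).
>>> curl -fSL -O "https://archive.apache.org/dist/lucene/solr/${VERSION}/solr-${VERSION}.tgz"
>>> tar -xzf "solr-${VERSION}.tgz"
>>>
>>> # Copy the Solr install out of the image without running it
>>> # (-L follows the /opt/solr symlink, assuming that install path).
>>> container_id="$(docker create "${IMAGE}")"
>>> docker cp -L "${container_id}:/opt/solr" image-solr
>>> docker rm "${container_id}"
>>>
>>> # A recursive diff of the two trees is a starting point; differences
>>> # should be inspected rather than treated as automatic failures.
>>> diff -r "solr-${VERSION}" image-solr || true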
>>>
>>
