Periodically rebuilding old images seems challenging in this new setup
compared to docker-solr.  I suppose it could be done with a script that
loops over release branches, checking each one out and then executing the
appropriate gradle task to build & push.  But the requirement to use an
identical Solr binary (same input tgz as was originally produced) is the
real challenge, because we can't use the same Dockerfile.  The input TGZ is
gone.  Maybe we could grab /opt/solr/* from the previous image, which can
be referenced in a FROM?
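
For illustration, here is a rough sketch of that idea done outside the
Dockerfile: pull the binaries back out of the already-published image and
repackage them as the input TGZ the build expects. The image tag and paths
below are assumptions, not settled tooling.

  # recreate the missing input TGZ from the previously published image
  docker create --name solr-old solr:8.8.0
  docker cp -L solr-old:/opt/solr ./solr-8.8.0   # -L in case /opt/solr is a symlink
  docker rm solr-old
  tar -czf solr-8.8.0.tgz solr-8.8.0
  # then run the usual gradle docker task, feeding it this TGZ
  #
  # The in-Dockerfile variant would be a multi-stage build, roughly:
  #   FROM solr:8.8.0 AS previous
  #   COPY --from=previous /opt/solr /opt/solr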

BTW I updated the Dockerfile significantly in Houston's recent PR.  I think
it's a step in this direction.

~ David Smiley
Apache Lucene/Solr Search Developer
http://www.linkedin.com/in/davidwsmiley


On Sat, Jan 16, 2021 at 7:36 AM Jan Høydahl <jan....@cominvent.com> wrote:

> Great summary Houston!
>
> Could also be that the Docker team is willing to provide a “link” from
> official _/solr to apache/solr if we can convince them of solid quality.
> Think they do this for elastic images already.
>
> Since Docker images contain Linux and Java, which we would not be allowed
> to release as part of Solr, I have seen discussions in various ASF lists
> stating that it can be argued that the only binaries we do “release” are
> the layers built by our Dockerfile, i.e. what comes after the (runtime)
> FROM line. So we should be careful about what extra software we add in the
> Dockerfile. I did a check earlier and think we are in good shape.
>
> We currently re-build a bunch of older Solr docker images every time we
> release a new one, but I don’t think there is any automatic refresh of images
> outside a release. Great idea to kick off refresh of images from Jenkins.
>
> We can also publish nightly “master” images but since they are not
> officially voted releases they must be clearly labelled as unofficial and
> not advertised on the web page, only for dev purposes.
>
> Jan Høydahl
>
> 16. jan. 2021 kl. 00:36 skrev Timothy Potter <thelabd...@gmail.com>:
>
>
> I'm curious about how tags will work when updating the base image for a
> released image? The image for a tag should be immutable (IMHO), and I think
> people would be surprised if 8.8.0 suddenly changed even if it was for a
> good reason such as fixing a CVE in the base image. But based on what Kevin
> said, perhaps there's already precedent for this with the official images?
>
> On Fri, Jan 15, 2021 at 1:51 PM Houston Putman <houstonput...@gmail.com>
> wrote:
>
>> Thanks for bringing up this issue Kevin.
>>
>> Periodically re-building docker images is certainly a feature we could
>> support, and probably should, to automatically keep up with security fixes.
>> We could even automate it pretty easily in Jenkins.
>>
>> We could also build support into the gradle commands to download and verify
>> the "official" TGZ instead of building one from source, and build the image
>> with that. That way release images are always built with the exact same
>> binaries. The Dockerfile wouldn't need to change at all between local and
>> release; it still merely expects a TGZ to be passed in the context, and
>> gradle can determine whether it needs to be built from scratch or
>> downloaded.
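>>
>> As a rough sketch (the archive URL, file names, and key handling here are
>> assumptions, not necessarily what the gradle task would do), the download
>> and verify step could boil down to:
>>
>>   VERSION=8.8.0
>>   wget "https://archive.apache.org/dist/lucene/solr/${VERSION}/solr-${VERSION}.tgz"
>>   wget "https://archive.apache.org/dist/lucene/solr/${VERSION}/solr-${VERSION}.tgz.asc"
>>   # verify the signature against the project KEYS file
>>   wget -qO- "https://downloads.apache.org/lucene/KEYS" | gpg --import
>>   gpg --verify "solr-${VERSION}.tgz.asc" "solr-${VERSION}.tgz"
>>   # the verified TGZ is then passed into the docker build context exactly
>>   # like a locally built one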
>>
>> This still likely wouldn't be good enough to make the image an "official
>> docker image" but it gets us to essentially the same end-state image. The
>> only difference is the downloading and verification are happening in gradle
>> instead of the Dockerfile.
>>
>> - Houston
>>
>> On Fri, Jan 15, 2021 at 2:05 PM Kevin Risden <kris...@apache.org> wrote:
>>
>>>> Currently, in the solr-docker-image and a majority of "Docker Official
>>>> Images", the officially released Solr binaries are downloaded from mirrors
>>>> and validated within the Dockerfiles. This makes it easy to assure users
>>>> that the 9.0 Solr docker image contains the 9.0 Solr release. This process
>>>> doesn't fit very well with local builds, because there is nowhere to
>>>> download local builds from, and validation isn't required.
>>>>
>>>> *The current opinion in the community is to abandon the "Docker
>>>> Official Images" style process of downloading and validating official
>>>> binaries, and instead have the release manager use the local-build image
>>>> creation with the final release source.* This should result in the
>>>> same docker image in the end; however, there is no trust built into the
>>>> docker image itself. Instead we are likely going to document a way for
>>>> users to verify the docker-image contents themselves.
>>>>
>>>
>>> Before we abandon the official process of downloading/validating
>>> official binaries, I think there is a good reason to keep the ability to
>>> download an "official" Apache Solr release and use it in the "official"
>>> Solr convenience Docker image.
>>>
>>> Docker images are static, point-in-time copies of an OS and all
>>> supporting packages (like Java) when built. Periodically Docker images
>>> should be rebuilt to pick up the latest security and bug fixes in
>>> the base image. Just like any OS should run `apt upgrade` or `yum update`
>>> periodically to ensure it is up to date.
>>>
>>> My proposal is to periodically rebuild the "official" Solr convenience
>>> Docker image based on the "official" Solr release to ensure we keep the
>>> Docker images up to date. The idea being that we have a list of "supported"
>>> versions of Solr (ie: 8.5, 8.6, 8.7) and periodically (ie: daily, weekly)
>>> the Docker images are rebuilt. Once a new release is made (ie: 8.8) it gets
>>> added to this rebuilding matrix. This ensures that the "official" Solr
>>> convenience Docker image is reasonably up to date with regards to the base
>>> image security updates.
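>>>
>>> As a rough sketch (the version list, tag names, and build context are
>>> placeholders, not a settled policy), such a periodic Jenkins job could be
>>> as simple as:
>>>
>>>   for VERSION in 8.5 8.6 8.7; do
>>>     # --pull grabs the latest base image; --no-cache rebuilds the Solr
>>>     # layers on top of it instead of reusing stale cached layers
>>>     docker build --pull --no-cache -t "apache/solr:${VERSION}" "${VERSION}/"
>>>     docker push "apache/solr:${VERSION}"
>>>   done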
>>>
>>> My understanding (from a few years ago) is that the Docker official
>>> images are rebuilt when the base image is updated. This was automatic from
>>> what I understand. If we move away from the "Docker official image" way of
>>> doing things, we should still ensure that we can provide high quality
>>> secure Docker images to the community.
>>>
>>> We should not need a new Apache Solr release (ie: 8.7.1, 8.8) to update
>>> the Docker image to pick up the latest base image changes.
>>>
>>> It is important to be able to build and test locally with a Docker image
>>> that matches what an "official" Docker image is. This should still be a
>>> goal, but we should be able to rebuild the Solr docker image without
>>> rebuilding all of Solr.
>>>
>>> PS - I chatted a bit with Houston on Slack about this topic and
>>> hopefully captured all the context correctly.
>>>
>>> Kevin Risden
>>>
>>>
>>> On Fri, Jan 15, 2021 at 12:45 PM Houston Putman <houstonput...@gmail.com>
>>> wrote:
>>>
>>>> There are a few decisions that need to be ironed out around the Solr
>>>> docker image before 9.0 is released. This is because the community has
>>>> decided that Solr should start releasing its own docker images starting
>>>> with 9.0.
>>>>
>>>> Below is the current state of the ongoing discussions for the Solr
>>>> Docker image. Please feel free to correct me or fill in any information I
>>>> may be missing.
>>>>
>>>> Where does this image live?
>>>>
>>>> There are two options for this really.
>>>>
>>>>    - _/solr - docker run solr:9.0 (Official Docker Image)
>>>>    - apache/solr - docker run apache/solr:9.0
>>>>
>>>> The benefits of the first are 1) the nice usability of being able to
>>>> plainly specify "solr" and 2) the "Docker Official Images" badge on
>>>> DockerHub. The downside is that there are very strict requirements for
>>>> creating Official Docker Images, which would complicate the build and
>>>> require us to build release docker images and local docker images in
>>>> separate ways.
>>>>
>>>> The benefit of using the apache namespace is that we can build the
>>>> image in any way that we want. We would be able to build release and local
>>>> docker images the exact same way. The downside is the loss of the "Docker
>>>> Official Images" badge.
>>>>
>>>> *I think there is some consensus that choosing the "apache/solr"
>>>> location is fine, and worth the added flexibility we get in the build
>>>> process.*
>>>>
>>>> Legal Stuff
>>>>
>>>> There are a few legal questions we need to keep in mind when creating
>>>> this process and doing a release for the first time.
>>>>
>>>>    - Source release - The apache policy is: (from Michael Sokolov)
>>>>
>>>>>    “Every ASF release MUST contain one or more source packages, which
>>>>>    MUST be sufficient for a user to build and test the release provided they
>>>>>    have access to the appropriate platform and tools.”
>>>>
>>>>    For the docker build this is fine as long as the solr/docker gradle
>>>>    module is included in the source release, as one can always rebuild the
>>>>    same image by running gradlew docker against that source.
>>>>
>>>>    - Jan Høydahl mentioned that the Dockerfile layers should be limited,
>>>>    but I'm not exactly sure what this means or entails. Maybe he can
>>>>    expand on this.
>>>>
>>>> Artifacts within the Image
>>>>
>>>> As mentioned above in the "Where does this image live?" section, the goal of
>>>> including the docker build process inside the Solr project is to make
>>>> development easier by providing an easy way to build the official-style
>>>> docker image with local source code. In order to achieve "official-style"
>>>> images for local builds, we want to make the build process for local images
>>>> as close as possible to the process for building official release images.
>>>>
>>>> Currently, in the solr-docker-image and a majority of "Docker Official
>>>> Images", the officially released Solr binaries are downloaded from mirrors
>>>> and validated within the Dockerfiles. This makes it easy to assure users
>>>> that the 9.0 Solr docker image contains the 9.0 Solr release. This process
>>>> doesn't fit very well with local builds, because there is nowhere to
>>>> download local builds from, and validation isn't required.
>>>>
>>>> *The current opinion in the community is to abandon the "Docker
>>>> Official Images" style process of downloading and validating official
>>>> binaries, and instead have the release manager use the local-build image
>>>> creation with the final release source.* This should result in the
>>>> same docker image in the end; however, there is no trust built into the
>>>> docker image itself. Instead we are likely going to document a way for
>>>> users to verify the docker-image contents themselves.
>>>>
>>>> I am not sure what the user-side verification process would look like
>>>> for the image, but I definitely think it is something that we should look
>>>> into. Maybe Docker will allow us to use these as official images if we can
>>>> script out this verification and make it easy for them to do? Just a
>>>> thought, I'm not sure if that would actually work.
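>>>>
>>>> One possible shape for that verification (just a sketch; image tags,
>>>> paths, and file names are assumptions): extract the Solr install from the
>>>> image and diff it against the GPG-verified official TGZ.
>>>>
>>>>   docker create --name solr-verify apache/solr:9.0
>>>>   docker cp -L solr-verify:/opt/solr ./image-solr
>>>>   docker rm solr-verify
>>>>   tar -xzf solr-9.0.0.tgz
>>>>   # any remaining differences should be small and explainable
>>>>   diff -r solr-9.0.0 image-solr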
>>>>
>>>
