For kolla, we were thinking about a couple of optimizations that should greatly reduce the space used.
1. Only upload to the hub based on stable versions. The updates are much less frequent.

2. Fingerprint the containers: base it on the rpm/deb list, pip list, and git checksums. If the fingerprint is the same, don't reupload a container; nothing really changed but some trivial files or timestamps on files.

Also, remember the apparent size of a container is not the same as the actual size. Due to layering, the actual size is often significantly smaller than what shows up in 'docker images'. For example, http://tarballs.openstack.org/kolla-kubernetes/gate/containers/centos-binary-ceph.tar.bz2 is only 1.2G and contains all the containers needed for a compute kit deployment.

For trunk-based builds, it may still be a good idea to only mirror those to tarballs.o.o or an OpenStack-provided docker repo that infra has been discussing?

Thanks,
Kevin
________________________________________
From: Gabriele Cerami [gcer...@redhat.com]
Sent: Thursday, October 19, 2017 8:03 AM
To: openstack-dev@lists.openstack.org
Subject: [openstack-dev] [TripleO][Kolla] Concerns about containers images in DockerHub

Hi,

our CI scripts are now automatically building, testing and pushing approved openstack/RDO services images to public repositories in dockerhub using the ansible docker_image module. Promotions have had some hiccups, but we're starting to regularly upload new images every 4 hours. When we get to full speed, we'll potentially have:

- 3-4 different sets of images, one per release of openstack (counting an EOL release grace period)
- 90-100 different service images per release
- 4-6 different versions of the same image (keeping older promoted images for a while)

At around 300MB per image, a possible grand total is around 650GB of space used. We don't know if this is acceptable usage of dockerhub space, and for this we already sent a similar email to docker support to ask specifically if the user would be penalized in any way (e.g. enforcing quotas, rate limiting, blocking).
We're still waiting for a reply. In any case it's critical to keep the usage around the estimate, and to achieve this we need a way to automatically delete the older images. The docker_image module does not provide this functionality, and we think the only way is issuing direct calls to the dockerhub API: https://docs.docker.com/registry/spec/api/#deleting-an-image

The docker_image module structure doesn't seem to encourage the addition of such functionality directly in it, so we may be forced to use the uri module. With new images uploaded potentially every 4 hours, this will become a problem to be solved within the next two weeks.

We'd appreciate any input on an existing, in-progress and/or better solution for bulk deletion, and on any issues that may arise with our space usage in dockerhub.

Thanks
__________________________________________________________________________
OpenStack Development Mailing List (not for usage questions)
Unsubscribe: openstack-dev-requ...@lists.openstack.org?subject:unsubscribe
http://lists.openstack.org/cgi-bin/mailman/listinfo/openstack-dev
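A minimal sketch of the fingerprinting idea from Kevin's reply above: hash the sorted rpm/deb, pip, and git-checksum manifests collected from inside a container, so that trivial file or timestamp changes never alter the fingerprint. The function name is illustrative, not an existing Kolla tool; the inputs are assumed to be the line-by-line output of commands like `rpm -qa` and `pip freeze` run inside the image.

```python
import hashlib

def container_fingerprint(rpm_list, pip_list, git_checksums):
    """Stable content fingerprint for a container image (sketch).

    Each argument is a list of strings, e.g. the lines of `rpm -qa`,
    `pip freeze`, and per-repo git commit hashes. Sorting makes the
    result independent of listing order; timestamps and trivial file
    churn never appear in these manifests, so they can't change it.
    """
    h = hashlib.sha256()
    for section in (rpm_list, pip_list, git_checksums):
        for entry in sorted(section):
            h.update(entry.encode("utf-8"))
            h.update(b"\n")
        h.update(b"--\n")  # section separator, so sections can't bleed together
    return h.hexdigest()
```

If the computed fingerprint matches the one recorded for the last pushed tag, the push is skipped.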
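The deletion endpoint Gabriele links to (the Docker Registry v2 API) is a two-step flow: resolve the tag to its content digest, then DELETE by digest, since deleting by tag is not supported. A sketch under assumptions: a requests-style session with authentication already handled, a placeholder registry endpoint, and a registry that actually implements the v2 delete endpoint (Docker Hub itself may require its own Hub API instead).

```python
REGISTRY = "https://registry.example.org"  # placeholder endpoint; auth/token handling omitted
MANIFEST_V2 = "application/vnd.docker.distribution.manifest.v2+json"

def manifest_url(repo, reference):
    # `reference` is a tag when resolving, or a sha256 digest when deleting
    return f"{REGISTRY}/v2/{repo}/manifests/{reference}"

def delete_image(session, repo, tag):
    # Step 1: request the v2 manifest for the tag; the registry returns the
    # content digest in the Docker-Content-Digest response header.
    resp = session.get(manifest_url(repo, tag), headers={"Accept": MANIFEST_V2})
    resp.raise_for_status()
    digest = resp.headers["Docker-Content-Digest"]
    # Step 2: the DELETE endpoint only accepts a digest, never a tag.
    session.delete(manifest_url(repo, digest)).raise_for_status()
    return digest
```

From ansible this maps onto two uri-module tasks (a GET capturing the header, then a DELETE), which is the workaround the email anticipates.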