Oops - the container link got mangled. It was supposed to be
<https://github.com/apache/trafficcontrol/pkgs/container/trafficcontrol/ci/trafficserver-alpine>.
On Tue, Dec 6, 2022 at 9:38 AM Zach Hoffman <zrhoff...@apache.org> wrote:

> in Traffic Control, we build an Alpine image
> <https://github.com/apache/trafficcontrol/pkgs/container/trafficcontrol%2Fci%2Ftrafficserver-alpine>,
> with Traffic Server baked into it, for amd64 and arm64. Rather than
> building them both from the same machine, the amd64 and arm64 images are
> built in separate GitHub Actions jobs, then combined using `docker
> manifest` in a third GitHub Actions job
> <https://github.com/apache/trafficcontrol/blob/master/.github/workflows/container-trafficserver-alpine.yml>.
>
> The arm64 part still takes much longer (over 2 hours, compared to 10
> minutes for amd64), but this approach eliminates the need to build for
> more than one platform at a time, so if the arm64 job were to run on an
> arm64 runner in the future, the other jobs wouldn't need further changes
> to take advantage of that speedup.
>
> -Zach
>
> On Tue, Dec 6, 2022 at 9:02 AM Jarek Potiuk <ja...@potiuk.com> wrote:
>
>> In Airflow we have a somewhat more complex setup (we build 2x5x2
>> different images, and there are different sets of them for different
>> branches). Building images for Airflow takes quite some time (installing
>> many dependencies), so qemu was out of the question (several hours to
>> build a single image). Unfortunately, qemu is ~15 times (!) slower than
>> hardware builds in our experience: good enough for small images, but
>> unacceptable if your image normally takes a long time to build.
>>
>> The built-in actions like the one mentioned by Jacob have the limitation
>> that they depend on qemu - I have not yet found an easy "Action" that
>> could utilise AMD and ARM hardware together. But maybe there are other
>> ways.
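[Editor's note: Zach's build-per-arch-then-stitch flow boils down to something
like the sketch below. The image name is a placeholder (the real workflow
pushes its own ghcr.io tags), and the `run` wrapper only prints each command so
the sketch can be read without a Docker daemon; swap it for direct execution to
use it for real.]

```shell
# Dry-run wrapper: prints each command instead of executing it, so this
# sketch needs no Docker daemon. Replace 'echo "+ $*"' with "$@" to execute.
run() { echo "+ $*"; }

# Placeholder image name -- not the actual Traffic Control tag.
IMAGE="ghcr.io/example/trafficserver-alpine"

# Jobs 1 and 2: one job per platform, each builds and pushes a single-arch tag.
run docker build --platform linux/amd64 -t "$IMAGE:latest-amd64" .
run docker push "$IMAGE:latest-amd64"
run docker build --platform linux/arm64 -t "$IMAGE:latest-arm64" .
run docker push "$IMAGE:latest-arm64"

# Job 3: stitch the two single-arch tags into one multi-arch tag.
run docker manifest create "$IMAGE:latest" \
  "$IMAGE:latest-amd64" "$IMAGE:latest-arm64"
run docker manifest push "$IMAGE:latest"
```

Because each job builds for exactly one platform, moving the arm64 job onto
native arm64 hardware later changes nothing for the other two jobs.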
>>
>> We developed our own scripting and tooling:
>>
>> * we have self-hosted runners in GitHub and we only use those to build
>> images (with Astronomer money and Amazon-sponsored credits on AWS)
>> * when we run a multi-platform build, we start ARM instances (they have
>> a built-in timed auto-kill-switch so that they do not run indefinitely
>> and take resources)
>> * we ssh-forward the ARM instance's docker socket to a local port on our
>> AMD instance
>> * we configure two builders with buildx (a local one for AMD and the
>> forwarded one for ARM)
>> * we run a multi-platform buildx build that uses both builders
>> * we have our own Python build/development environment (breeze) that
>> wraps the docker commands being executed, so the "code" doing it is not
>> always easy to copy elsewhere, but I copied below the actual "meat" of
>> what happens under the hood
>> * we run it as part of our GitHub Actions workflows
>>
>> Examples:
>>
>> * Starting the ARM instance and configuring builders:
>> https://github.com/apache/airflow/actions/runs/3603432288/jobs/6080260757
>>
>> This is the relevant part of establishing the worker for buildx (I omit
>> starting the instance, as it is Amazon-specific):
>>
>> export AUTOSSH_LOGFILE="${WORKING_DIR}/autossh.log"
>> autossh -f "-L12357:/var/run/docker.sock" \
>>     -N -o "IdentitiesOnly=yes" -o "StrictHostKeyChecking=no" \
>>     -i "${WORKING_DIR}/my_key" "${EC2_USER}@${INSTANCE_PRIVATE_DNS_NAME}"
>> bash -c 'echo -n "Waiting port 12357 .."; for _ in `seq 1 40`; do
>>     echo -n .; sleep 0.25; nc -z localhost 12357 && echo " Open." && exit;
>>     done; echo " Timeout!" >&2; exit 1'
>>
>> docker buildx rm --force airflow_cache || true
>> docker buildx create --name airflow_cache
>> docker buildx create --name airflow_cache --append localhost:12357
>>
>> * Releasing an image:
>> https://github.com/apache/airflow/actions/runs/3603432288/jobs/6080260776
>>
>> This is the command that is run under the hood:
>>
>> docker buildx build --builder airflow_cache \
>>     --build-arg PYTHON_BASE_IMAGE=python:3.10-slim-bullseye \
>>     --build-arg AIRFLOW_VERSION=2.5.0 \
>>     --platform linux/amd64,linux/arm64 . \
>>     -t apache/airflow:2.5.0-python3.10 --push
>>
>> J.
>>
>> On Tue, Dec 6, 2022 at 2:10 PM Jacob Wujciak
>> <ja...@voltrondata.com.invalid> wrote:
>>
>> > Hello Robert,
>> >
>> > I would suggest using GitHub Actions, where you can use the official
>> > suite of docker actions to build multi-platform images with little
>> > need for custom scripting [1].
>> > Feel free to ping me in the ASF slack.
>> >
>> > Best
>> > Jacob
>> >
>> > [1]: https://github.com/docker/build-push-action
>> >
>> > On Tue, Dec 6, 2022 at 1:43 PM Robert Munteanu <romb...@apache.org>
>> > wrote:
>> >
>> > > Hi,
>> > >
>> > > We had a user report that our official Docker image does not support
>> > > architectures other than AMD64 [1]. M1 Macs and Raspberry Pis can't
>> > > run the image with our current setup.
>> > >
>> > > On our side, we set up automated builds on Docker Hub.
>> > > Unfortunately, Docker Hub autobuilds don't support `docker buildx`
>> > > or any other form of multi-arch builds. It is on the roadmap [2],
>> > > but there is no guarantee on when (or if) it will become available.
>> > >
>> > > I see two alternatives so far:
>> > >
>> > > 1. Moving to GitHub Actions
>> > > 2. Use hooks to install qemu and 'fake' a multi-arch build on
>> > > Docker Hub
>> > >
>> > > To be honest, neither is too appealing to me; we have a simple
>> > > process that works.
>> > >
>> > > How are other projects handling this? Or does anyone have any ideas
>> > > that they can share?
>> > >
>> > > Thanks,
>> > > Robert
>> > >
>> > > [1]: https://issues.apache.org/jira/browse/SLING-11714
>> > > [2]: https://github.com/docker/roadmap/issues/109
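[Editor's note: Robert's option 2 - and the qemu-backed path that
docker/build-push-action uses under the hood - amounts to something like the
sketch below. The image tag is a placeholder, and the `run` wrapper only
prints the commands, since this is a sketch rather than a drop-in script. As
Jarek notes above, emulated arm64 builds can be an order of magnitude slower
than native ones, so this is only practical for images that build quickly.]

```shell
# Dry-run wrapper: prints each command instead of executing it.
# Replace 'echo "+ $*"' with "$@" to run against a real Docker daemon.
run() { echo "+ $*"; }

# One-time on the build host: register qemu binfmt handlers so the daemon can
# execute arm64 binaries during the build -- this is what "faking" a
# multi-arch build on amd64 hardware means in practice.
run docker run --privileged --rm tonistiigi/binfmt --install arm64

# Create a buildx builder capable of emitting multi-platform manifests.
run docker buildx create --name multiarch --use

# A single build invocation then produces and pushes one multi-arch tag
# (the image name here is a placeholder):
run docker buildx build --platform linux/amd64,linux/arm64 \
  -t example/sling:latest --push .
```

Unlike the separate-jobs-plus-`docker manifest` approach, everything runs on
one machine, at the cost of the arm64 stages executing under emulation.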