Yes, but we have somewhat different objectives than you might have.

We use the Docker golang (alpine) image to build our images, and it works 
wonderfully (and really makes it a lot easier to cope with differences between 
Jenkins build nodes).  However, our apps generally end up running on 
Kubernetes, which means we have a few things to consider for actual use that 
other folks might not.

I should also note that, for the most part, we use shell scripts to build, as 
make doesn't really bring us many advantages here: it can't track dependencies 
across the Docker barrier, and since the predecessor images in our multi-stage 
build aren't files, strictly speaking, it can't track them as dependencies 
either.  So shell scripts it is; a rough sketch of the driver follows.
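
A minimal sketch of what such a driver script might look like (image names, 
Dockerfile paths, and app names here are hypothetical stand-ins, not our 
actual layout):

    #!/bin/sh
    # Build the image chain in order; Docker's layer cache does the
    # incremental work that make would otherwise be tracking.
    set -eu

    docker build -t myproj/build:alpine -f docker/Dockerfile.build .
    docker build -t myproj/base:alpine  -f docker/Dockerfile.base  .

    for app in svc-a svc-b svc-c; do
        docker build -t "myproj/$app:alpine" \
            --build-arg APP="$app" \
            -f docker/Dockerfile.app .
    done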

By way of background, our project is a collection of microservices which we 
generally build all at once, for a variety of reasons (one of which is to 
maximize parallelism and cache reuse).  We use `go build` and/or `go install` 
to achieve that, but `go run` when testing things.

Our build process is as follows:

- We make a "build" image for each supported distribution we build our apps for 
(currently Alpine and CentOS, which have to be built separately because 
Alpine's musl-libc makes the binaries incompatible with everything else).  The 
build image is generally the base Go image (which is the official one for 
Alpine, and an equivalent cobbled-together one for CentOS) along with a basic 
set of packages installed that we know we need for our build (typically git, 
gcc, protobuf stuff and not much more).  That image is stored locally and used 
as a base for the next step.

- We make a "base" image by first copying in the bootstrap scripts (formerly to 
install dep, but now just to do a bit of prep work) and the go.{mod,sum} files. 
 We warm up the build cache by running `go mod download`, then copy in some 
secondary scripts to run `go get -u` to pull in the required Go binaries for 
things like code generators needed for the build.  Then we copy in the rest of 
the code (we defer this to maximize use of the Docker build cache so it doesn't 
fetch the dependencies every time), run the code generators with `go generate`, 
and then run `go install` on all the apps we want to build (rather than `go 
build` so they wind up in an easily-known location), running it on all of them 
at once to build them at the same time.  After that, we copy a few of the 
static assets that get bundled with the apps over to other well-known locations 
in the image because none of our environment variables carry over to the next 
step.

- We then use Docker's newer multi-stage build capability to start from a 
fresh runtime image for each app/distro combo, and copy each app and its 
required static assets into that fresh image.  That way, our Alpine images 
tend to be under 15MB each (CentOS images are somewhat larger, though in 
theory the base layers get shared; in practice that isn't always true once 
the images are distributed).  Because these are microservices, each app 
container's entrypoint simply launches the app itself, so the container 
terminates when the app does (which is preferable for Kubernetes).  All three 
stages are sketched below.
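
Collapsed into a single multi-stage Dockerfile for illustration (in practice 
the stages are separate builds with the intermediate images tagged locally, 
as described above; all names and paths below are hypothetical), the Alpine 
flavor looks roughly like this:

    # --- "build" image: Go toolchain plus the distro packages we need ---
    FROM golang:1.11-alpine AS build
    RUN apk add --no-cache git gcc musl-dev protobuf

    # --- "base" image: warmed module cache, generated code, compiled apps ---
    FROM build AS base
    WORKDIR /src
    COPY go.mod go.sum ./
    RUN go mod download                    # cached until go.{mod,sum} change
    COPY scripts/ scripts/
    RUN ./scripts/install-tools.sh         # `go get -u` the code generators
    COPY . .
    RUN go generate ./... && go install ./cmd/...  # binaries land in /go/bin

    # --- runtime image: just the one app and its static assets ---
    FROM alpine:3.9
    COPY --from=base /go/bin/my-service /usr/local/bin/my-service
    COPY --from=base /src/assets /opt/my-service/assets
    ENTRYPOINT ["/usr/local/bin/my-service"]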


Now, this obviously wouldn't be ideal for rapid iteration, which is where 
things start getting interesting.  In theory, we could work directly from the 
"base" container (the middle piece), since it has all the source and the 
toolchain needed to run the apps, but not everyone on the team relishes 
terminal-based text editors (there's no accounting for taste).  We do, in 
fact, run the unit tests and benchmarks from that container (a one-liner, 
shown after this paragraph), and it works splendidly.  But for an 
edit-compile-run cycle it leaves a little to be desired, so we layer a bit 
more on for local development.
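
Running the tests from the "base" image is then a one-liner along these lines 
(the image name is again a hypothetical stand-in):

    docker run --rm myproj/base:alpine go test -bench=. ./...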

For the iterative cycle, we do two things.  First, we bind-mount our source 
directory over the copied source directory in the container (yes, it's a 
little wasteful, but the image is already there and set up).  We then run an 
autorunner which watches the source files and re-runs `go run 
./<appdir>/<app.go> <args>` (we use `reflex`, since it worked best for our 
purposes, but there are others out there, including `realize`, which seems to 
be generally more popular).  This works wonders for developing microservices 
in the container environment with an editor outside the container, though 
getting the watch regexes to hit all the right files can be interesting; you 
may need to manually poke things after a `go generate`, for example.  It 
works extremely well even across the VM barrier of Docker for Mac, and I 
would assume also Docker for Windows, since they have taken great pains to 
ensure that filesystem notifications work across platforms.
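
Concretely, the inner loop amounts to something like this (the image name, 
mount path, and package path are hypothetical):

    docker run --rm -it \
        -v "$(pwd)":/src \
        -w /src \
        myproj/base:alpine \
        reflex -r '\.go$' -s -- go run ./cmd/my-service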

For Kubernetes-specific work, it's a bit more complex, because we want to 
connect to in-cluster resources that aren't available outside the cluster, 
talk to the API, and get all the environment variables we'd get when deployed 
in-cluster.  You don't want to be negotiating bind-mounted volumes in a k8s 
cluster deployed by a Helm chart; it's just not worth the trouble (and simply 
isn't possible if your cluster is remote).  For this, we use Telepresence, 
which does a pretty great job of proxying a local container into the remote 
or local cluster in place of another deployment; as far as your container and 
the cluster are concerned, you might as well be running right inside.  We use 
the container-proxying method Telepresence offers: the VPN option won't work 
with AnyConnect (which we need) and proxies the entire machine into the 
cluster anyway, and the libc shim method doesn't work with Go because, well, 
Go doesn't go through libc.
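
With the container method, swapping a local container in for a cluster 
deployment looks roughly like this (this is the Telepresence 1.x CLI; the 
deployment name and everything after --docker-run are hypothetical):

    telepresence --swap-deployment my-service \
        --docker-run --rm -it \
        -v "$(pwd)":/src \
        -w /src \
        myproj/base:alpine \
        reflex -r '\.go$' -s -- go run ./cmd/my-service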

Coupled with the autorunner, this has massively improved our development 
experience.  Before we had to move in-cluster (when we could get by with just 
the external interfaces), we could simply `go run` things locally with the 
right environment variables set, but once we moved in-cluster it became a 
laborious cycle of edit code -> build container -> kubectl edit/delete pod -> 
kubectl logs, which took about three minutes per cycle in the worst case.  
Now everything rebuilds as soon as we edit the source files, and the cycle 
time is something like ten seconds at most.

Anyway, I hope that helps. I'm happy to help with more specific queries, but 
that's a containerized setup that's worked quite well for us so far.


- Dave


> On Feb 2, 2019, at 12:02 PM, Keith Brown <keith6...@gmail.com> wrote:
> 
> Thanks.
> 
> At the moment I use a Makefile (different from the above :-) ) .
> 
> I suppose I am asking because I want to integrate vim-go with the docker 
> build ( docker run -v $PWD:/mnt -w /mnt golang bash -c "go build main.go" ) 
> Was wondering if anyone has done that before. 
> 
> 
> On Friday, February 1, 2019 at 7:30:32 PM UTC-5, Space A. wrote:
> You simply need
> docker run <...>
> which will invoke smth like
> go build
> at the end.
> 
> PS: The above Makefile is garbage.
> 
> 
> On Friday, February 1, 2019 at 9:59:53 PM UTC+3, Bojan Delić wrote:
> I have Makefile that supports build in docker image, part of it looks 
> something like this:
> 
> NAME := <name>
> PACKAGE := github.com/username/repo
> 
> 
> .PHONY: build
> build: clean gobuild ## Build application
> 
> .PHONY: gobuild
> gobuild: LDFLAGS=$(shell ./scripts/gen-ldflags.sh $(VERSION))
> gobuild:
>    @echo "Building '$(NAME)'"
>    @env go generate -mod=vendor
>    @CGO_ENABLED=0 go build -mod=vendor --ldflags "$(LDFLAGS)" -tags netgo .
> 
> .PHONY: clean
> clean: ## Remove build artifacts
>    @echo "==> Cleaning build artifacts..."
>    @rm -fv coverage.txt
>    @find . -name '*.test' | xargs rm -fv
>    @rm -fv $(NAME)
> 
> .PHONY: build-in-docker
> build-in-docker:
>    @docker run --rm \
>       -v $(shell pwd):/go/src/$(PACKAGE) \
>       -w /go/src/$(PACKAGE) \
>       -e GO111MODULE=on \
>       golang:latest \
>       make build
> 
> As you can see, there are some external scripts called (like gen-ldflags.sh) 
> and docker build is just invoking "make build" inside docker container. I do 
> not use this for CI (GitLab CI is already setup to use docker images), so 
> that is why I use latest tag (in CI I use explicit version of Go). 
> 
> 
> There are some leftovers from earlier times, like mounting working dir to 
> GOPATH, which is not needed if GO111MODULE is set.
> 
> 
> On Friday, February 1, 2019 at 4:48:01 AM UTC+1, Keith Brown wrote:
> does anyone use docker golang image to compile? if so,how is your setup?
> 
> 
