Re: [go-nuts] Re: using docker for compiling

2019-02-07 Thread akshita babel
And FYI, the folder is on the local system while the API is on a server, so
passing a path will not work; we need to send the object itself to the API.


Re: [go-nuts] Re: using docker for compiling

2019-02-07 Thread akshita babel
How can I take a folder as input in my Go API and determine the relative or
absolute path of each file in the folder?


Re: [go-nuts] Re: using docker for compiling

2019-02-07 Thread David Riley
Yes, but we have somewhat different objectives than you might.

We use the Docker golang (alpine) image to build our images, and it works 
wonderfully (and really makes it a lot easier to cope with differences in 
Jenkins build nodes).  However, our apps generally eventually run on 
Kubernetes, which means we have a few things to consider for actual use that 
other folks might not.

I should also note that for the most part, we use shell scripts to build, as 
make doesn't really bring us many advantages; it can't really track 
dependencies across the Docker barrier, and since they're not strictly speaking 
files, it can't track the predecessor images (we use a multi-stage build 
process) as dependencies, so shell scripts it is.
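
By way of illustration, the driver is morally just a few `docker build`
calls in order (script and tag names here are invented for the sketch,
not our actual scripts):

    #!/bin/sh -e
    # build.sh -- hypothetical top-level driver.  Explicit ordering
    # stands in for make's dependency tracking, since intermediate
    # images aren't files make can see.
    docker build -t myproj/build:alpine -f Dockerfile.build .
    docker build -t myproj/base:alpine  -f Dockerfile.base  .
    docker build -t myproj/app1:alpine  -f Dockerfile.app1  .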

By way of background, our project is a collection of microservices which we 
generally build at the same time for a variety of reasons (one of which is to 
maximize parallelism and cache reuse).  We use `go build` and/or `go install` 
to achieve that, but we use `go run` when testing things.

Our build process is as follows:

- We make a "build" image for each supported distribution we build our apps for 
(currently Alpine and CentOS, which have to be built separately because 
Alpine's musl-libc makes the binaries incompatible with everything else).  The 
build image is generally the base Go image (which is the official one for 
Alpine, and an equivalent cobbled-together one for CentOS) along with a basic 
set of packages installed that we know we need for our build (typically git, 
gcc, protobuf stuff and not much more).  That image is stored locally and used 
as a base for the next step.
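
For the Alpine side, a minimal sketch of what such a build image might
look like (package list is illustrative; assumes the official
golang:alpine base):

    # Dockerfile.build -- hypothetical "build" image for Alpine
    FROM golang:alpine
    # Toolchain bits the build needs; musl-dev so cgo can link.
    RUN apk add --no-cache git gcc musl-dev protobuf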

- We make a "base" image by first copying in the bootstrap scripts (formerly to 
install dep, but now just to do a bit of prep work) and the go.{mod,sum} files. 
 We warm up the build cache by running `go mod download`, then copy in some 
secondary scripts to run `go get -u` to pull in the required Go binaries for 
things like code generators needed for the build.  Then we copy in the rest of 
the code (we defer this to maximize use of the Docker build cache so it doesn't 
fetch the dependencies every time), run the code generators with `go generate`, 
and then run `go install` on all the apps we want to build (rather than `go 
build` so they wind up in an easily-known location), running it on all of them 
at once to build them at the same time.  After that, we copy a few of the 
static assets that get bundled with the apps over to other well-known locations 
in the image because none of our environment variables carry over to the next 
step.
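
In Dockerfile form, that ordering looks roughly like this (paths, the
generator tool, and the app layout are placeholders, not our real tree):

    # Dockerfile.base -- hypothetical "base" image sketch
    FROM myproj/build:alpine
    WORKDIR /src
    # Module metadata first, so the downloaded-dependency layer stays
    # cached until go.mod/go.sum actually change.
    COPY go.mod go.sum ./
    RUN go mod download
    # Build-time Go binaries (code generators and the like).
    RUN go get -u github.com/golang/protobuf/protoc-gen-go
    # The code itself comes last; edits here don't invalidate the
    # layers above.
    COPY . .
    RUN go generate ./... && go install ./cmd/...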

- We then use Docker's newer multi-stage build capability to start from a fresh 
runtime image for each app/distro combo, and copy the apps and their required 
static assets into the fresh image. That way, our Alpine images tend to be less 
than 15MB each (CentOS images are a bit larger; in theory the base layers get 
reused across apps, though that is not always true once images are 
distributed).  Because they are 
microservices, each app container is built with the entrypoint simply launching 
the app itself so the container terminates when the app does (which is 
preferable for Kubernetes).
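
The final stage per app is then tiny; a hedged sketch (app name and
asset paths invented):

    # Final runtime stage for one app/distro combo
    FROM alpine:3.9
    # `go install` drops binaries in /go/bin in the golang images.
    COPY --from=myproj/base:alpine /go/bin/app1 /usr/local/bin/app1
    COPY --from=myproj/base:alpine /srv/assets/app1 /srv/assets/app1
    # Entrypoint is the app itself, so the container exits when it does.
    ENTRYPOINT ["/usr/local/bin/app1"]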


Now, this obviously wouldn't be ideal for rapid deployment, which is where 
things start getting interesting.  In theory, we could work directly from the 
"base" container (the middle piece), since that has all the source and runtime 
needed to run the containers, but not everyone on the team relishes using 
terminal-based text editors (there's no accounting for taste).  We do, in fact, 
run the unit tests and benchmarks from that container and it works splendidly.  
But for an edit-compile-run cycle, it leaves a little bit to be desired, so we 
add a bit more on for local development.

For the iterative cycle, we do two things.  First, we bind-mount our source 
directory over the copied source directory in the container (yeah, it's a 
little wasteful, but the image is already there and set up, so).  We then run 
an autorunner which watches the source files and re-runs `go run 
./<dir>/<app>` (we use `reflex` since it worked best for our 
purposes, but there are others out there including `realize` which seems to be 
generally more popular).  This works wonders for developing microservices in 
the container environment with an editor outside the container, though getting 
the regexes to hit all the right files can be interesting; you may need to 
manually poke things after a `go generate`, for example.  This works extremely 
well even across the VM barrier of Docker for Mac, and I would assume also 
Docker for Windows, since they have taken great pains to ensure that filesystem 
notifications work across platforms.
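
Concretely, the inner loop is roughly this (image tag, paths, and the
regex are placeholders; assumes reflex is installed in the image):

    # Bind-mount the working tree over the baked-in source and autorun.
    docker run --rm -it \
      -v "$(pwd)":/src -w /src \
      myproj/base:alpine \
      reflex -r '\.go$' -s -- go run ./cmd/app1

The -s (service) flag makes reflex restart the long-running `go run`
whenever a matching file changes.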

For specific Kubernetes work, because we want to be able to connect to 
in-cluster resources that aren't available outside the cluster as well as 
connect to the API and get all the environment variables we'd get when deployed 
in-cluster, it's a bit more complex.  You don't want to be