On Fri, Jun 2, 2017 at 10:21 PM, Nick Coghlan <ncogh...@gmail.com> wrote:
> On 3 June 2017 at 13:38, Donald Stufft <don...@stufft.io> wrote:
>> However, what I was able to find was what appears to be the original reason
>> pip started copying the directory to begin with,
>> https://github.com/pypa/pip/issues/178 which was caused by the build system
>> reusing the build directory between two different virtual environments and
>> causing an invalid installation to happen. The ticket is old enough that I
>> can't get at the specifics because it was migrated over from bitbucket.
>> However, the fact that we *used* to do exactly what you want and it caused
>> exactly one of the problems I was worried about suggests to me that pip is
>> absolutely correct in keeping this behavior.
>
> FWIW, I'll also note that in-place builds play merry hell with
> containerised build tools, volume mounts, and SELinux filesystem
> labels.
>
> In-place builds *can* be made to work, and when you invest the time to
> make them work, they give you all sorts of useful benefits
> (incremental builds, etc), but out-of-tree builds inherently avoid a
> lot of potential problems (especially in a world where virtual
> environments are a thing).
>
> As far as "out-of-tree caching" is concerned, all the build systems
> I'm personally familiar with *except* the C/C++ ones use some form of
> out-of-band caching location, even if that's dependency caching rather
> than build artifact caching.
>
> As an example of the utility of that approach, Atlassian recently
> updated the Alpha version of their Pipelines feature to automatically
> manage cache directories and allow them to be shared between otherwise
> independent builds:
> https://confluence.atlassian.com/bitbucket/caching-dependencies-895552876.html

Oh sure, if you have a piece of build *infrastructure*, then all kinds
of things make sense. Set up ccache, distcc, cache dependencies, go
wild. Mozilla's got a cute version of ccache (sccache) that puts the
cache in S3 so it can be shared among ephemeral build VMs.

That's not what I'm talking about. The case I'm talking about is,
like, a baby dev taking their first steps, or someone trying to get a
build of a package working on an unusual system:

    git clone ..../numpy.git
    cd numpy
    # edit some file, maybe a config file saying which fortran
    # compiler this weird machine uses
    # build and run tests

In this case it would be extremely rude to silently dump all our
intermediate build artifacts into ~/.something, but I also don't want
to require every new dev to opt in to some special infrastructure and
learn new commands – I want there to be a gentle onramp from blindly
installing packages as a user to hacking on them. Making 'pip install'
automatically do incremental builds when run repeatedly on the same
working directory accomplishes this better than anything else.
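
Concretely, the workflow I'd like to just work is something like this
(a sketch – pip doesn't do the incremental part today, and the comments
describe the hoped-for behavior, not current behavior):

    pip install .    # first run: full build of the C/Fortran bits
    # edit a file, rerun, find a bug, fix it...
    pip install .    # second run: only rebuilds what changed

No new tools, no new commands to learn – just the same 'pip install'
the user already knows, getting faster on repeat runs.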

It's not clear to me what cases you're concerned about breaking with
"containerised build tools, ...". Are you thinking about, like,
'docker run -v $PWD:/io some-image pip install /io'? Surely for
anything involving containers there should be an explicit wheel built
somewhere in there?
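
(By "explicit wheel" I mean a pattern roughly like the below, where
'some-image' is a stand-in for whatever build image you're using:

    docker run -v $PWD:/io some-image pip wheel /io -w /io/dist
    pip install dist/*.whl

i.e. the container produces an artifact and the install step consumes
it, rather than pip building in-place inside the volume mount.)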

-n

-- 
Nathaniel J. Smith -- https://vorpus.org