On 3 June 2017 at 15:53, Nathaniel Smith <[email protected]> wrote:
> That's not what I'm talking about. The case I'm talking about is,
> like, a baby dev taking their first steps, or someone trying to get a
> build of a package working on an unusual system:
>
>     git clone ..../numpy.git
>     cd numpy
>     # edit some file, maybe a config file saying which fortran compiler
>     # this weird machine uses
>     # build and run tests
It's come up a couple of times before, but this example makes me realise
that we should be *explicitly* using tox as our reference implementation
for the "local developer experience", to avoid falling into the trap of
optimising too much for pip specifically as the reference installer.

I say that because I actually looked at the tox docs yesterday, and
*completely missed* the relevance of one of their config settings to
PEP 517 (the one that lets you skip the sdist creation step when it's
too slow):
https://tox.readthedocs.io/en/latest/example/general.html#avoiding-expensive-sdist

However, I'm not sure that leads to the conclusion that we need to
support in-place builds in PEP 517: tox's approach to skipping the
sdist step in general is to require the user to specify a custom build
command, and the only "in-place" option it supports directly is
editable installs.

So from that perspective, the PEP 517 answer to "How do I do an
in-place build?" would be "Use whatever command your backend provides
for that purpose". This makes sense, as this particular abstraction
layer isn't meant to hide the build backend from the people *working
on* a project - it's only meant to hide it from the people *using* the
project. So as a NumPy or SciPy developer, it's entirely reasonable to
have to know that the command for an in-place build is
"python setup.py ...".

> In this case it would be extremely rude to silently dump all our
> intermediate build artifacts into ~/.something, but I also don't want
> to require every new dev opt-in to some special infrastructure and
> learn new commands – I want there to be gentle onramp from blindly
> installing packages as a user to hacking on them. Making 'pip install'
> automatically do incremental builds when run repeatedly on the same
> working directory accomplishes this better than anything else.
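(For concreteness, the recipe in the tox docs linked above boils down
to a tox.ini fragment along these lines - `skipsdist` and `usedevelop`
are real tox settings, while the deps and commands shown here are
purely illustrative:)

```ini
# tox.ini - skip building an sdist for every test run, and fall back
# to an editable ("in-place") install of the project instead
[tox]
skipsdist = true

[testenv]
usedevelop = true
deps = pytest
commands = pytest
```

Note that this still goes through the editable-install path rather
than any backend-specific in-place build command.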
"Build this static snapshot of a thing someone else published so I can use it" and "Build this thing I'm working on so I can test it" are closely related, but not the same, so I think the latter concerns are likely to be better handled through an effort to replace the setuptools specific "pip install -e" with a more general "pip devinstall". That would then map directly to the existing logic in tox, allowing that to migrate from running "pip install -e" when usedevelop=True to instead running "pip devinstall". > It's not clear to me what cases you're concerned about breaking with > "containerised build tools, ...". Are you thinking about, like, > 'docker run some-volume -v $PWD:/io pip install /io'? Surely for > anything involving containers there should be an explicit wheel built > somewhere in there? With live re-loading support in web servers, it's really handy to volume mount your working directory into the container. Cheers, Nick. -- Nick Coghlan | [email protected] | Brisbane, Australia _______________________________________________ Distutils-SIG maillist - [email protected] https://mail.python.org/mailman/listinfo/distutils-sig
