[Distutils] Re: pip and missing shared system library

2020-08-06 Thread Nathaniel Smith
On Thu, Aug 6, 2020 at 3:06 PM David Mathog  wrote:
>
> On Thu, Aug 6, 2020 at 11:54 AM Nathaniel Smith  wrote:
>
> > If the code that failed to give a good error message is in
> > louvain-igraph, then you should probably talk to them about that :-).
> > There's no way for the core packaging libraries to guess what this
> > kind of arbitrary package-specific code is going to do.
>
> That was the point I was trying to make, albeit not very well I guess.
> Because Requires-External was not supplied, and pip would not have
> done anything with it even if it had been, the package had to roll its
> own.  The documentation for Requires-External says what it requires,
> but it does not indicate that anything else happens besides (I assume)
> the installation halting if the condition is not met.  That is, if
> there is:
>
> Requires-External: libpng
>
> and pip acts on it, that means it found libpng.so; but there does not
> seem to be any requirement that it communicate any further information
> about libpng to setup.py in any standard way.  Which is why the
> setup.py for louvain rolled its own.  For posixy OS's it would be
> sufficient to know that if the "Requires-External" passed that
> "pkg-config --cflags libpng" and the like will work.  But again, that
> pushes the work into setup.py where it will not be standardized nor
> platform agnostic.  So for better portability passing one of these
> tests should also set some standard variables like:
>
>RE_libpng_cflags="-lpng16 -lz"
>RE_libpng_includedir="/usr/include"
>RE_libpng_libdir="/usr/lib64"
>(and so forth).
>
> which are then seen in setup.py.  Yes, these are just the various
> values already in the libpng.pc file, no reason to reinvent that
> wheel.  The result should be simpler setup.py's which are portable
> without requiring all the conditional "if it is this OS then look
> here" that they must currently contain.

Unfortunately, successfully building C libraries is way, way more
complicated than that. There are nearly as many ways to detect and
configure C libraries as there are C libraries; tools like pkg-config
help a bit but they're far from universal. There can be multiple
versions of libpng on the same system, with different ABIs. pip
doesn't even know what compiler the package will want to use (which
also affects which libraries are available). And at the end of the
day, the only thing pip could do with this information is print a
slightly nicer error message than you would get otherwise.

What pip *has* done in the last few years is make it possible to pull
in packages from PyPI when building packages from source, so you can
make your own pkg-config-handling library and put it on PyPI and
encourage everyone to use it instead of reinventing the wheel. Or use
more powerful build systems that have already solved these problems,
e.g. scikit-build lets you use CMake to build python packages.
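
To make that concrete, here's a minimal sketch of the kind of
pkg-config helper I mean -- the function name and API are invented for
illustration, and it only handles the happy path:

    import subprocess

    def pkgconfig(package):
        # Return (cflags, libs) for `package`; raises CalledProcessError
        # if pkg-config doesn't know about it.
        def query(flag):
            out = subprocess.check_output(["pkg-config", flag, package])
            return out.decode().split()
        return query("--cflags"), query("--libs")

    # hypothetical use in a setup.py:
    #   cflags, libs = pkgconfig("libpng")
    #   Extension("mymod", ["mymod.c"],
    #             extra_compile_args=cflags, extra_link_args=libs)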

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
--
Distutils-SIG mailing list -- distutils-sig@python.org
To unsubscribe send an email to distutils-sig-le...@python.org
https://mail.python.org/mailman3/lists/distutils-sig.python.org/
Message archived at 
https://mail.python.org/archives/list/distutils-sig@python.org/message/B5D2GDJZPZ73Q4JKJBVYX7U5HUCY26TA/


[Distutils] Re: pip and missing shared system library

2020-08-06 Thread Nathaniel Smith
On Thu, Aug 6, 2020 at 11:39 AM David Mathog  wrote:
> Looking at the setup.py for louvain here:
>
>   https://github.com/vtraag/louvain-igraph/blob/master/setup.py
>
> around line 491 is the code for pkg-config and the "core" message.
> It looks like it should exit when pkg-config fails, but that is not
> what happened.

If the code that failed to give a good error message is in
louvain-igraph, then you should probably talk to them about that :-).
There's no way for the core packaging libraries to guess what this
kind of arbitrary package-specific code is going to do.

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
--
Distutils-SIG mailing list -- distutils-sig@python.org
To unsubscribe send an email to distutils-sig-le...@python.org
https://mail.python.org/mailman3/lists/distutils-sig.python.org/
Message archived at 
https://mail.python.org/archives/list/distutils-sig@python.org/message/HY563R3K4NNMPYEI55ZKKZXN4BYCPKXV/


[Distutils] Re: Archive this list & redirect conversation elsewhere?

2020-08-04 Thread Nathaniel Smith
On Tue, Aug 4, 2020 at 4:13 PM Oscar Benjamin
 wrote:
> What I haven't quite got my head around is: what exactly is the
> "workflow" with discourse if you are a regular follower/contributor on
> some forum?
>
> Do people who use it a lot begin by going to the forum website?
>
> Do they get the email notifications and interact via those?

I think it varies. I get email notifications, and then usually go to
the website when I want to reply, unless it's just 1-2 lines. IIRC
Donald sticks to the website only. Another mode that discourse is good
at is the "digest" approach where it sends you a weekly summary and
then you can go to the website if you want to follow up on anything.

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
--
Distutils-SIG mailing list -- distutils-sig@python.org
To unsubscribe send an email to distutils-sig-le...@python.org
https://mail.python.org/mailman3/lists/distutils-sig.python.org/
Message archived at 
https://mail.python.org/archives/list/distutils-sig@python.org/message/PEBVX26GDNUNHQDRBBHA6XAYNO77RDHX/


[Distutils] Fwd: Re: psycopg2/psycopg2-binary

2019-04-06 Thread Nathaniel Smith
Can a mod please disable this person's subscription to distutils-sig...?

-- Forwarded message -
From: 
Date: Sat, Apr 6, 2019 at 3:16 PM
Subject: Re: [Distutils] Re: psycopg2/psycopg2-binary
To: 


[image: BitBounce]



Thanks for emailing me! No, I haven’t been hacked :)

I signed up for a spam filtering service called BitBounce. To deliver your
email to my inbox, please click the link below and pay the small Bitcoin
fee. Thanks!

*$0.05* to deliver your email.

We’ve never met — I’ll pay your fee.


I know you — Add me to your whitelist.




*BitBounce*  is powered by the *Credo*
 cryptocurrency

*I’m from a business* —  what are my *delivery options*


BitBounce and Credo are transacted through *CredoEx*

Made by Turing Technology Inc. in San Mateo, California *Sign Up for
BitBounce* 


-- 
Nathaniel J. Smith -- https://vorpus.org 
--
Distutils-SIG mailing list -- distutils-sig@python.org
To unsubscribe send an email to distutils-sig-le...@python.org
https://mail.python.org/mailman3/lists/distutils-sig.python.org/
Message archived at 
https://mail.python.org/archives/list/distutils-sig@python.org/message/CDTHB3A432MCUMUYF46BREGMWAZUOLXC/


[Distutils] Re: psycopg2/psycopg2-binary

2019-04-06 Thread Nathaniel Smith
On Sat, Apr 6, 2019 at 2:14 PM Bert JW Regeer  wrote:
>
> Hey all,
>
> You may have seen some hubbub around psycopg2 no longer shipping binary 
> wheels by default [1][2] (and in fact using psycopg2-binary if you want 
> wheels), and I wanted to bring it up here because it demonstrates a problem 
> area with the current state of packaging in Python:
>
> There is no good way for a new package to specify that it provides what 
> another package would provide, and setuptools currently checks that all 
> distributions are found before running the console scripts (so if a console 
> script has a setup.py that depends on psycopg2 and you install 
> psycopg2-binary it fails) [3]
>
> So currently if you pip install psycopg2-binary and then install a project 
> that uses psycopg2 as a dependency it will install psycopg2 over top of 
> psycopg2-binary.
>
> The author of psycopg2 stopped distributing binaries for psycopg2 because of 
> mismatches between the version of SSL that Python was compiled against and 
> the one used by psycopg2, which caused all kinds of issues.
>
> I don't have a proposal or a fix, but this is going to be an issue not just 
> for psycopg2 but also for other projects that potentially distribute wheels 
> that are built against a different version of OpenSSL.
>
> I see two things that should get some thought:
>
> 1. How to have a package provide for another package (there are keywords but 
> AFAIK they are currently ignored by pip)
> 2. How to handle/deal with shared libraries that are not versioned

The psycopg2 authors originally misdiagnosed the problem, and haven't
updated their docs since the problem was diagnosed further, so a lot
of people are confused about this whole psycopg2-binary thing :-(

There is no problem with shipping openssl in wheels. Lots of projects
do it fine. The reason psycopg2 is having problems is because of an
easily-fixable bug in psycopg2:

- Old versions of OpenSSL need some annoying configuration applied to
make them thread-safe
- libpq (which psycopg2 uses) normally does this configuration in one way
- the Python ssl module also normally does this configuration in a different way
- If libpq and the stdlib ssl module are both linked against the
*same* copy of openssl, then they can end up fighting with each other
- So the psycopg2 code has a special hack to unconditionally disable
libpq's thread-safety code, because the psycopg2 developers assumed
that psycopg2 and the stdlib ssl would *always* share the same copy of
openssl, and the stdlib ssl module would take care of the
thread-safety stuff
- Then they started distributing psycopg2 binaries with their own copy
of openssl in them, and of course they got crashes, because they've
turned off thread-safety, and now that they have their own copy of
openssl, no-one else is fixing it for them

So all they need to do to fix their wheels is either:

- somehow disable this patch in their wheel builds:
https://github.com/psycopg/psycopg2/commit/a59704cf93e8594dfe59cf12d416e82a816953a4

Or else:

- switch to building their wheels against a newer version of openssl,
since newer versions of openssl are now thread-safe by default (thank
goodness)

Either way, it's totally fixable, and only the psycopg2 devs can fix it.

They've known this since July, but said they don't have energy to fix
it: https://github.com/psycopg/psycopg2/issues/543#issuecomment-408352209

I sympathize with that, but I wish they would tell people "hey, this
is a bug in psycopg2, we need help", instead of blaming python
packaging and trying to force all their downstreams to do wacky stuff
with dependencies to work around their bug.

It would indeed be nice if Python packages had better support for
Provides:, but psycopg2 is not really a good motivating use case.

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
--
Distutils-SIG mailing list -- distutils-sig@python.org
To unsubscribe send an email to distutils-sig-le...@python.org
https://mail.python.org/mailman3/lists/distutils-sig.python.org/
Message archived at 
https://mail.python.org/archives/list/distutils-sig@python.org/message/L6MDUN4NXYHERAZMEIL7P7S4M6OHTT2I/


[Distutils] Re: PEP idea regarding non-glibc systems

2019-02-26 Thread Nathaniel Smith
OK, so what's your proposal for what auditwheel/pip/etc. should do to
support musl? Do we need to put a list of which symbols each wheel
uses in the filename, or ...?

On Tue, Feb 26, 2019 at 8:16 AM Alexander Revin  wrote:
>
> I've asked on musl mailing list and it looks like possible:
>
> -- Forwarded message -
> From: Rich Felker 
> Date: Tue, Feb 26, 2019 at 4:11 PM
> Subject: Re: [musl] ABI compatibility between versions
> To: Alexander Revin 
> Cc: 
>
>
> On Tue, Feb 26, 2019 at 12:28:31PM +0100, Alexander Revin wrote:
> > > but for this reason a binary compiled against a new version
> > > of glibc is unlikely to work with an older version (which
> > > is why anybody who wants to distribute a binary that works
> > > across different linux distros, compiles against a very old
> > > version of glibc, which of course means lots of old bugs)
> > > while for musl such breakage is much more rare (happens
> > > when a new symbol is introduced and the binary uses that).
> >
> > So it's generally similar to the glibc approach – link against an old musl,
> > which doesn't expose new symbols?
>
> This works but isn't necessarily needed. As long as your application
> does not use any symbols that were introduced in a newer musl, it will
> run with an older one, subject to any bugs the older one might have.
> If configure is detecting and causing the program's build process to
> link to new symbols in the newer musl, and you don't want to depend on
> that, you can usually override the detections with configure variables
> on the configure command line or in an explicit config.cache file, or
> equivalent for other non-autoconf-based build systems.
>
> -- End of forwarded message -
>
> Alpine guys don't seem to use any specific build flags, though
> find_library function was customized:
> https://git.alpinelinux.org/aports/tree/main/python3
>
>
> On Mon, Feb 25, 2019 at 10:48 PM Nathaniel Smith  wrote:
> >
> > Sniffing out the ELF loader is definitely more complicated than ideal – 
> > e.g. it adds a "find the python binary" step that could go wrong – but, ok, 
> > if that were the only barrier maybe we could manage.
> >
> > The bigger problem is: how do we figure out whether a wheel built against 
> > *that* musl on *that* machine will work with *this* musl on *this* machine? 
> > For glibc, this involves three pieces, each of which is non-trivial:
> >
> > - the glibc maintainers provide some careful, documented guarantees about 
> > when a library built against one glibc version will run with another glibc 
> > version, and they encode this in machine-readable form in their symbol 
> > versions
> >
> > - auditwheel checks the symbol versions to derive a summary of what the 
> > wheel needs, and stores it the wheel metadata
> >
> > - pip checks the local system's glibc version against this metadata
> >
> > A simple "is this musl or not?" check isn't useful on its own. We also need 
> > some musl equivalent for this other machinery. (It doesn't have to work the 
> > same way, but it has to accomplish the same result.)
> >
> > If you want to keep moving this forward you're going to have to talk to the 
> > musl maintainers.
> >
> > -n
> >
> > On Mon, Feb 25, 2019, 09:49 Alexander Revin  wrote:
> >>
> >> I've put combined code here:
> >> https://gist.github.com/lyssdod/f51579ae8d93c8657a5564aefc2ffbca
> >>
> >> Just download it, make executable and run.
> >>
> >> amd64 Alpine:
> >> # ./guess_pyruntime.py
> >> Interpreter extracted: /lib/ld-musl-x86_64.so.1
> >> Running on musl
> >>
> >> amd64 Gentoo glibc:
> >> # ./guess_pyruntime.py
> >> Interpreter extracted: /lib64/ld-linux-x86-64.so.2
> >> Running on glibc version 2.27
> >>
> >> On Thu, Feb 21, 2019 at 3:57 AM Alexander Revin  wrote:
> >> >
> >> > Hi Nathaniel,
> >> >
> >> > Thanks for your answer.
> >> >
> >> > Based on your example of RHEL and Ubuntu, let's take RHEL 6, which
> >> > uses glibc 2.12. If you cross-compile for it (using the same gcc RHEL
> >> > uses), the wheel surely will work on Ubuntu 18.10 :)
> >> > I think it's not an issue, since wheels are built with these
> >> > minimal runtime requirements anyway, unless they're built on a local
> >> > machine – but in this case they will just work™; anyway, having a
> >> > working toolchain is not the scope of Python tool

[Distutils] Re: PEP idea regarding non-glibc systems

2019-02-25 Thread Nathaniel Smith
Sniffing out the ELF loader is definitely more complicated than ideal –
e.g. it adds a "find the python binary" step that could go wrong – but, ok,
if that were the only barrier maybe we could manage.
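
For concreteness, here's roughly what that sniffing could look like -- a
from-scratch sketch that pulls PT_INTERP out of sys.executable, minus
real error handling:

    import struct
    import sys

    def elf_interpreter(path):
        # Return the ELF interpreter (PT_INTERP) of a binary, or None
        # if it's statically linked / not ELF.
        with open(path, "rb") as f:
            ident = f.read(16)
            if ident[:4] != b"\x7fELF":
                return None
            is64 = ident[4] == 2                 # EI_CLASS: 2 = 64-bit
            end = "<" if ident[5] == 1 else ">"  # EI_DATA: endianness
            if is64:
                f.seek(32)
                (phoff,) = struct.unpack(end + "Q", f.read(8))
                f.seek(54)
                phentsize, phnum = struct.unpack(end + "2H", f.read(4))
            else:
                f.seek(28)
                (phoff,) = struct.unpack(end + "I", f.read(4))
                f.seek(42)
                phentsize, phnum = struct.unpack(end + "2H", f.read(4))
            for i in range(phnum):
                base = phoff + i * phentsize
                f.seek(base)
                (p_type,) = struct.unpack(end + "I", f.read(4))
                if p_type != 3:                  # 3 = PT_INTERP
                    continue
                if is64:
                    f.seek(base + 8)
                    (off,) = struct.unpack(end + "Q", f.read(8))
                    f.seek(base + 32)
                    (size,) = struct.unpack(end + "Q", f.read(8))
                else:
                    f.seek(base + 4)
                    off, _va, _pa, size = struct.unpack(end + "4I", f.read(16))
                f.seek(off)
                return f.read(size).rstrip(b"\x00").decode()
        return None

    interp = elf_interpreter(sys.executable)  # e.g. "/lib/ld-musl-x86_64.so.1"
    print("musl" if interp and "musl" in interp else interp)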

The bigger problem is: how do we figure out whether a wheel built against
*that* musl on *that* machine will work with *this* musl on *this* machine?
For glibc, this involves three pieces, each of which is non-trivial:

- the glibc maintainers provide some careful, documented guarantees about
when a library built against one glibc version will run with another glibc
version, and they encode this in machine-readable form in their symbol
versions

- auditwheel checks the symbol versions to derive a summary of what the
wheel needs, and stores it the wheel metadata

- pip checks the local system's glibc version against this metadata

A simple "is this musl or not?" check isn't useful on its own. We also need
some musl equivalent for this other machinery. (It doesn't have to work the
same way, but it has to accomplish the same result.)

If you want to keep moving this forward you're going to have to talk to the
musl maintainers.

-n

On Mon, Feb 25, 2019, 09:49 Alexander Revin  wrote:

> I've put combined code here:
> https://gist.github.com/lyssdod/f51579ae8d93c8657a5564aefc2ffbca
>
> Just download it, make executable and run.
>
> amd64 Alpine:
> # ./guess_pyruntime.py
> Interpreter extracted: /lib/ld-musl-x86_64.so.1
> Running on musl
>
> amd64 Gentoo glibc:
> # ./guess_pyruntime.py
> Interpreter extracted: /lib64/ld-linux-x86-64.so.2
> Running on glibc version 2.27
>
> On Thu, Feb 21, 2019 at 3:57 AM Alexander Revin  wrote:
> >
> > Hi Nathaniel,
> >
> > Thanks for your answer.
> >
> > Based on your example of RHEL and Ubuntu, let's take RHEL 6, which
> > uses glibc 2.12. If you cross-compile for it (using the same gcc RHEL
> > uses), the wheel surely will work on Ubuntu 18.10 :)
> > I think it's not an issue, since wheels are built with these
> > minimal runtime requirements anyway, unless they're built on a local
> > machine – but in this case they will just work™; anyway, having a
> > working toolchain is not the scope of Python tooling. Gentoo has an
> > awesome crossdev tool for creating cross-toolchains, and there's
> > crosstool-ng of course. It looks like there are workarounds for
> > putting code built on newer systems to older ones though, but they
> > seem to be pretty tedious ([1])
> >
> > Speaking of runtime detection, I see it this way for example – since
> > one of the most reliable ways to check for program's dependencies is
> > invoking something like ldd or objdump, it can essentially be done the
> > same way:
> >
> > 1. Pick the minimal required code to extract ELF's ".interp" field [2]
> > (I used code from [3]);
> > 2. Process sys.executable with it;
> >
> > Here's what it returns (grepping by "/lib" because it starts on a new
> > line):
> >
> > Gentoo amd64 glibc
> > # python3 readelf.py $(which python3) | grep "/lib"
> > b'/lib64/ld-linux-x86-64.so.2\x00'
> >
> > Alpine amd64 docker (official python3 alpine image):
> >  # python3 readelf.py $(which python3) | grep "/lib"
> > b'/lib/ld-musl-x86_64.so.1\x00'
> >
> > 3. Essentially that's enough in my opinion, but we can go further and
> > do what ldd does:
> >
> > # /lib/ld-musl-x86_64.so.1 --list $(which python3)
> > /lib/ld-musl-x86_64.so.1 (0x7fbc36567000)
> > libpython3.7m.so.1.0 => /usr/local/lib/libpython3.7m.so.1.0
> (0x7fbc3622a000)
> > libc.musl-x86_64.so.1 => /lib/ld-musl-x86_64.so.1 (0x7fbc36567000)
> >
> > Basically it's just string matching here, and the only question now is
> > whether the name of the dynamic linker is enough or all libs should be
> > iterated until a perfect "musl" or "libc" match. Parsing "Dynamic section" turns
> > out to be pretty useless – it's empty on Alpine (or parsing code is
> > buggy). If ".interp" field is not available, then interpreter is
> > statically linked :)
> >
> > 4. If any glibc-specific functionality is needed at this point, code
> > from PEP 513 is really good. Maybe it's also better to put it first
> > and use ELF parsing if it failed to open glibc in the first place.
> >
> >
> > Thanks,
> > Alex
> >
> >
> >
> > [1]
> https://snorfalorpagus.net/blog/2016/07/17/compiling-python-extensions-for-old-glibc-versions/
> > [2] https://www.linuxjournal.com/article/1060
> > [3]
> https://github.com/detailyang/readelf/blob/master/readelf/readelf.py#L545
> >

[Distutils] Re: PEP-582 concerns

2019-02-20 Thread Nathaniel Smith
On Wed, Feb 20, 2019 at 8:49 AM Steve Dower  wrote:
>
> On 20Feb2019 0831, Nathaniel Smith wrote:
> > Yeah, __pypackages__ has no way to handle scripts, and also no way to
> > access packages when you're running from a directory. Pipenv already
> > handles both of these cases fine today, so I'm not sure how having
> > __pypackages__ several years from now could help you.
>
> Uh, it totally has both. It has no way to handle updating your
> terminal's environment for you, but it can put scripts *somewhere* ;)
>
> It can also handle accessing packages when running from your project
> directory. If you meant subdirectory, sure, that would be a major
> security issue to do that (much as I want to), but if you meant "both
> scripts and -m don't work" then that's just incorrect.

Ugh, yeah, editing fail, I meant "subdirectory".

And yeah, of course you can make both of these work, but I was
specifically replying to Dan's comment about how he doesn't like
pipenv has to jump through hoops and mess with paths manually. Maybe a
better way to put it would be: the interpreter changes proposed in PEP
582 don't help pipenv, because even if pipenv ends up using
__pypackages__ then the way it does it will be by jumping through
hoops and messing with paths manually.

The part that might benefit pipenv is to have a conventional place to
put its environments. But that part doesn't need interpreter changes
or even a PEP.

> That said, I prefer the approach of pipx
> (https://pypi.org/project/pipx/) for scripts anyway. It too has the
> problem of not updating your PATH for you, but at least it keeps tools
> separate from dependencies, as they should be.

I think this is the third time we've had this conversation in a week
:-(. Pipx is great for the cases it targets, and if it's sufficient
for all of your use cases then that's great, but it isn't sufficient
for mine, and that's not because your use cases are right and mine are
wrong.

(For anyone reading this and wondering about context, see:
https://discuss.python.org/t/structured-exchangeable-lock-file-format-requirements-txt-2-0/876/22)

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
--
Distutils-SIG mailing list -- distutils-sig@python.org
To unsubscribe send an email to distutils-sig-le...@python.org
https://mail.python.org/mailman3/lists/distutils-sig.python.org/
Message archived at 
https://mail.python.org/archives/list/distutils-sig@python.org/message/S4NN5VZVQVNCINCJJ72AYPLTVJ2XITL6/


[Distutils] Re: PEP-582 concerns

2019-02-20 Thread Nathaniel Smith
On Wed, Feb 20, 2019, 08:13 Dan Ryan  wrote:

> I don’t have a ton of concern with regard to pipenv. We already just jump
> through hoops to modify paths and such at runtime, this honestly sounds
> like a cleaner approach. Obviously we won’t actually get to clean up the
> code for a long time but you know...
>
> My basic position is that we are just pointing at python libraries and
> code at the end of the day. The only real concern is scripts— where will
> they live, etc.
>

Yeah, __pypackages__ has no way to handle scripts, and also no way to
access packages when you're running from a directory. Pipenv already
handles both of these cases fine today, so I'm not sure how having
__pypackages__ several years from now could help you.


> One final thing this enables as far as I understand is a sort of npm-like
> option for ignoring resolution conflicts and simply performing a sort of
> nested installation of subdependencies inside a top level dependency’s
> __pypackages__ folder. So if you did install two packages with a conflict,
> they wouldn’t necessarily have to find a resolution.
>

I don't think __pypackages__ would change anything here. The blocker for
doing npm-style nested subdependencies in Python isn't that we only have
one folder, it's that we only have one sys.modules, and I don't think there
are any proposals to change that.
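
(A quick illustration of the constraint:)

    import json
    import sys

    # Imports are cached in one process-wide table:
    assert sys.modules["json"] is json
    # So if package A vendored C==1.0 and package B vendored C==2.0,
    # whichever did `import C` first would win, and the other would
    # silently get the wrong copy -- there's no per-package namespace.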

-n
--
Distutils-SIG mailing list -- distutils-sig@python.org
To unsubscribe send an email to distutils-sig-le...@python.org
https://mail.python.org/mailman3/lists/distutils-sig.python.org/
Message archived at 
https://mail.python.org/archives/list/distutils-sig@python.org/message/TFJ6M6Y3BC7SYW6D4XHAYVKFC3OIDR6X/


[Distutils] Re: PEP-582 concerns

2019-02-20 Thread Nathaniel Smith
I'd caution against folks getting too worked up about PEP 582. I know it's
been getting a lot of attention on social media recently, but, it's a draft
that hasn't even been submitted for discussion yet. Most PEPs at this stage
never end up going anywhere. And in general, when people start digging in
on positions for and against something it always leads to worse decisions,
and the earlier this happens the worse it gets.

It has some interesting ideas and also some real limitations. I think it's
a good signal that there are folks interested in helping make the python
dev workflow easier, including changing the interpreter if that turns out
to be the right thing to do. That's really all it means so far.

I wonder if we should stick a header on the PEP draft saying something like
this? There's a lot of scattershot responses happening and I think a lot of
the people reacting are lacking context.

-n

On Wed, Feb 20, 2019, 04:40 Alex Walters  wrote:

> I have 2 main concerns about PEP 582 that might just be me misunderstanding
> the pep.
>
> My first concern is the use of CWD, and prepending ./__pypackages__ for
> scripts.  For example, if you were in a directory with a __pypackages__
> subdirectory, and had installed the module "super.important.module".  My
> understanding is that any scripts you run will have
> "super.important.module"
> available to it before the system site-packages directory.  Say you also
> run
> "/usr/bin/an_apt-ly_named_python_script" that uses "super.important.module"
> (and there is no __pypackages__ subdirectory in /usr/bin).  You would be
> shadowing "super.important.module".
>
> In this case, this adds no more version isolation than "pip install
> --user",
> and adds to the confoundment factor for a new user.  If this is a
> misunderstanding of the pep (which it very well might be!), then ignore
> that
> concern.  If it's not a misunderstanding, I think that should be emphasized
> in the docs, and perhaps the pep.
>
> My second concern is a little more... political.
>
> This pep does not attempt to cover all the use-cases of virtualenvs - which
> is understandable.  However, this also means that we have to teach new
> users
> *both* right away in order to get them up and running, and teach them the
> complexities of both, and when to use one over the other.  Instead of
> making
> it easier for the new user, this pep makes it harder.  This also couldn't
> have come at a worse time with the growing use of pipenv which provides a
> fully third way of thinking about application dependencies (yes, pipenv
> uses
> virtualenvs under the hood, but it is a functionally different theory of
> operation from a user standpoint compared to traditional pip/virtualenv or
> this pep).
>
> Is it really a good idea to do this pep at this time?
>
> In a vacuum, I like this pep.  Aside from the (possible) issue of
> unexpected
> shadowing, it's clean and straight forward.  It's easy to teach.  But it
> doesn't exist in a vacuum, and we have to teach the methods it is intended
> to simplify anyways, and it exists in competition with other solutions.
>
> I am not a professional teacher; I don't run python training courses.  I
> do,
> however, volunteer quite a bit of time on the freenode channel.  I get that
> the audience there is self-selecting to those who want to donate their
> time,
> and those who are having a problem (sometimes, those are the same people).
> This is the kind of thing that generates a lot of confusion and frustration
> to the new users I interact with there.
> --
> Distutils-SIG mailing list -- distutils-sig@python.org
> To unsubscribe send an email to distutils-sig-le...@python.org
> https://mail.python.org/mailman3/lists/distutils-sig.python.org/
> Message archived at
> https://mail.python.org/archives/list/distutils-sig@python.org/message/SFMFKTQVKTONCYNN7UEKLFAQ2VRKXEHK/
>
--
Distutils-SIG mailing list -- distutils-sig@python.org
To unsubscribe send an email to distutils-sig-le...@python.org
https://mail.python.org/mailman3/lists/distutils-sig.python.org/
Message archived at 
https://mail.python.org/archives/list/distutils-sig@python.org/message/5AGJJPMSWI7VIXXOBBYBH7PYINQI7HWM/


[Distutils] Re: PEP idea regarding non-glibc systems

2019-02-19 Thread Nathaniel Smith
On Tue, Feb 19, 2019 at 3:28 PM Alexander Revin  wrote:
>
> Hi all,
>
> I have an idea regarding Python binary wheels on non-glibc platforms,
> and it seems that initially I've posted it to the wrong list ([1])
>
> Long story short, the proposal is to use platform tuples (like
> compiler ones) for wheel names, which will allow much broader platform
> support, for example:
>
> package-1.0-cp36-cp36m-amd64_linux_gnu.whl
> package-1.0-cp36-cp36m-amd64_linux_musl.whl
>
> So eventually only {platform tag} part will be modified. Glibc/musl
> detection is quite trivial and eventually will be based on existing
> one in PEP 513 [2].

The challenge here is: the purpose of a target triple is to tell a
compiler/linker toolchain which kind of code they should generate,
e.g. when cross-compiling. The purpose of a wheel tag is to tell you
whether a given wheel will work on a given system. It turns out these
are different things :-).

For example, Ubuntu 18.10 and RHEL 6 are both 'amd64-linux-gnu',
because they use the same instruction set, the same binary format
(ELF), etc. But if you build a wheel on Ubuntu 18.10, it definitely
will not work on RHEL 6. (The other way around might work, if you do
other things right.)

In practice Windows and macOS are already fine; the place where this
would be useful is Linux wheels for platforms that use non-Intel-based
architectures or non-glibc-libcs. We do have an idea for making it
easier to support newer glibcs and also extending to all
architectures: 
https://mail.python.org/archives/list/distutils-sig@python.org/thread/6AFS4HKX6PVAS76EQNI7JNTGZZRHQ6SQ/

Adding musl is a bit trickier since I'm not sure what the details of
their ABI compatibility are, and they intentionally make it difficult
to figure out whether you're running on musl. But if someone could
convince them to publish more information then we could fix that too.

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
--
Distutils-SIG mailing list -- distutils-sig@python.org
To unsubscribe send an email to distutils-sig-le...@python.org
https://mail.python.org/mailman3/lists/distutils-sig.python.org/
Message archived at 
https://mail.python.org/archives/list/distutils-sig@python.org/message/H2LO7SOUQ3BIPIRA2Z6VC3VI3NFFVADS/


[Distutils] Re: Update PEP 508 to allow version specifiers

2019-01-29 Thread Nathaniel Smith
On Tue, Jan 29, 2019 at 4:11 PM Dan Ryan  wrote:
>
> In the ideal future would we avoid the build step by having PyPI host 
> primarily wheels? In which case anything available on PyPI would hopefully 
> have its metadata exposed on the JSON endpoint and we could sidestep that.
>
> Either way we will ultimately have to download whatever is specified as a 
> direct url dependency because even if it has a version that aligns we will 
> need to figure out what dependencies it demands, etc etc. And while an 
> optimistic candidate is a neat idea it’s not clear to me at least whether 
> it’s a good idea to expect this of users. What happens when they get it 
> wrong? Do you trust the version in the package or do you allow an override?  
> Conflict resolution is possible either way but the desired behavior there 
> would seem to favor the former imo

Everything I was talking about was stuff that would be happening
inside the resolver loop, no users involved. The "optimistic
candidate" would just be an initial tentative solution that the
resolver uses internally to decide which things to try downloading
first.

You know more about how resolvers actually work than I do though, so I
might have gotten it wrong :-).

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
--
Distutils-SIG mailing list -- distutils-sig@python.org
To unsubscribe send an email to distutils-sig-le...@python.org
https://mail.python.org/mailman3/lists/distutils-sig.python.org/
Message archived at 
https://mail.python.org/archives/list/distutils-sig@python.org/message/GYZY6LO4LLYQ3HKPMBL6P4ZWZI4RYB5C/


[Distutils] Re: Update PEP 508 to allow version specifiers

2019-01-29 Thread Nathaniel Smith
On Tue, Jan 29, 2019, 06:59 Donald Stufft  wrote:
> On Jan 29, 2019, at 9:48 AM, Xavier Fernandez  wrote:
>
> I disagree that it *needs* the name: since the link is declared as a
> dependency, the installer will necessarily need to check/download it at
> some point to install it and could discover the package name at that point,
> just like it will discover the version at the same point.
> Providing the name in the direct reference is an optimization that ease
> the work of the installer and allowing to provide a version specifier could
> be an other one.
>
> It needs the name to do that without downloading, which is ideally the
> direction we’re heading towards, that we can do as much work prior to
> downloading files as possible.
>

This confused me too, but after thinking about it I think maybe I get it.

The thing is, there actually is no way for a resolver to know for certain
whether it wants to download a direct URL like this until after it has
downloaded it. Because, you could have a situation where after you download
it, you discover that it has a version, or its own requirements, that are
inconsistent with the other requirements we've gathered, and then the
resolver has to backtrack. And in backtracking it might decide not to use
that exact URL after all.

But, we can imagine a resolver that works in phases: first, it uses all the
information it can get without downloading/building anything to construct
an optimist candidate solution: "assuming everything I haven't downloaded
yet cooperates, this will work". And then we download/build until either
everything works out, or else we discover some conflict and go back to
phase 1 with more information in our database.

If package A depends on B and C, and if package B depends on "C @
some-url"... well, we don't know until we download it whether there
different C requirements are going to match up, it's true. But because the
package name is present in the URL requirement, we at least know that we
don't need to go find C somewhere *else*. If all we had was the URL without
the name, then in this case we might end up downloading C from PyPI,
spending time building it, and only after that download the URL and
discover that it was also package C.
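
In pseudo-Python, the loop I'm imagining is something like this -- every
name here is invented, and a real resolver also has to handle versions,
backtracking, extras, and so on:

    def resolve(root_reqs, index_metadata, fetch):
        # index_metadata: {name: {"deps": [...]}}, known without downloading
        # fetch(name): download/build and return the *true* {"deps": [...]}
        learned = {}
        while True:
            known = dict(index_metadata)
            known.update(learned)
            # Phase 1: optimistic candidate from cheap metadata only.
            candidate, todo = {}, list(root_reqs)
            while todo:
                name = todo.pop()
                if name in candidate:
                    continue
                meta = known.get(name, {"deps": []})  # optimistic assumption
                candidate[name] = meta
                todo.extend(meta["deps"])
            # Phase 2: download to verify; restart if an assumption breaks.
            dirty = False
            for name, assumed in candidate.items():
                if name not in learned:
                    learned[name] = fetch(name)
                    if learned[name] != assumed:
                        dirty = True
                        break
            if not dirty:
                return candidate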

-n
--
Distutils-SIG mailing list -- distutils-sig@python.org
To unsubscribe send an email to distutils-sig-le...@python.org
https://mail.python.org/mailman3/lists/distutils-sig.python.org/
Message archived at 
https://mail.python.org/archives/list/distutils-sig@python.org/message/Z3VOS3IB77FY4GLJG3QVEOIKKAAK6GBN/


[Distutils] Re: Idea: perennial manylinux tag

2018-12-02 Thread Nathaniel Smith
On Sun, Dec 2, 2018 at 6:10 PM Robert T. McGibbon  wrote:
> I suspect that *I* am one of the major reasons that the manylinux1 -> 
> manylinux2010 transition has been unreasonably drawn out, rather than any 
> particular design flaw in the versioning scheme (manylinux_{cardinal number} 
> vs. manylinux_{year} vs. manylinux_{glibc version}).

Hey Robert, good to hear from you! And seriously, I don't think you
need to blame yourself for this... like, it was 13 months between when
CentOS 5 went EOL and when the PEP was accepted, which was a
precondition for everything else. 8 months after that, we still don't
have a pip release that can install manylinux2010 wheels. As it's
turned out, auditwheel wasn't the bottleneck at any point. And this
proposal would remove both the need for future PEPs and for future pip
updates, so it addresses the actual bottlenecks.

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
--
Distutils-SIG mailing list -- distutils-sig@python.org
To unsubscribe send an email to distutils-sig-le...@python.org
https://mail.python.org/mm3/mailman3/lists/distutils-sig.python.org/
Message archived at 
https://mail.python.org/archives/list/distutils-sig@python.org/message/KEH7ZWEAS3AZ6GZGBIYPHVE3CMSIK7EK/


[Distutils] Re: Idea: perennial manylinux tag

2018-12-01 Thread Nathaniel Smith
On Fri, Nov 30, 2018 at 7:13 AM Thomas Kluyver  wrote:
> Do we lose the ability for a system to explicitly declare that it is or isn't 
> compatible with a given manylinux variant (via the _manylinux module)?

Good question.

Straw man: if _manylinux is importable, and
_manylinux.manylinux_compatible is defined, then it must be a
callable, and manylinux_compatible() returns whether the given
tag should be considered supported.

Immediate question: should the possible return values be True/False,
or a ternary True/False/use-the-default-detection-logic?
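
Concretely, the straw man with the ternary option might look something
like this (pure sketch, not a spec -- default_detection is just a
placeholder for the usual glibc-version heuristic):

    # What a distro might ship as _manylinux.py somewhere on sys.path:
    def manylinux_compatible(tag):
        if tag.startswith("manylinux1"):
            return False   # "I know manylinux1 wheels break on this system"
        return None        # None -> fall back to the default detection

    # And roughly what an installer would do with it:
    def default_detection(tag):
        raise NotImplementedError  # the usual glibc-version heuristic

    def tag_is_supported(tag):
        try:
            import _manylinux
        except ImportError:
            return default_detection(tag)
        hook = getattr(_manylinux, "manylinux_compatible", None)
        if callable(hook):
            answer = hook(tag)
            if answer is not None:
                return bool(answer)
        return default_detection(tag)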

> Presumably it would still require a new PEP, and changes to various tools, to 
> allow manylinux wheels based around an alternative libc implementation? Is it 
> worth naming these tags like manylinux_glibc_2_12, to anticipate that 
> possibility? Or is that unnecessary verbosity?

In practice, the "many" in "manylinux" has always been code for
"glibc-based", so "manylinux_glibc" is kind of redundant. I guess we
could call them "linux_glibc_2_12_x86_64", but at this point python
devs seem to understand the manylinux name, so changing names would
probably cause more confusion than clarity.

I'm not sure what to think about the "2" part of the glibc version. I
think the reality is that they will never have a "3"? And if they did
we have no idea why or what it would mean? I guess we could ask them.

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
--
Distutils-SIG mailing list -- distutils-sig@python.org
To unsubscribe send an email to distutils-sig-le...@python.org
https://mail.python.org/mm3/mailman3/lists/distutils-sig.python.org/
Message archived at 
https://mail.python.org/archives/list/distutils-sig@python.org/message/UVOIB7QBK375SILVZTZIVDZRA5UTRGOU/


[Distutils] Re: Idea: perennial manylinux tag

2018-11-30 Thread Nathaniel Smith
On Fri, Nov 30, 2018 at 7:35 AM Paul Moore  wrote:
> Only Linux users can really answer this. But what I will say is that
> on Windows, anything other than the core system libraries must be
> bundled in the wheel (so, for example, Pillow bundles the various
> image handling DLLs). Manylinux (as I understand it) does a certain
> amount of this, but expects dynamic linking for a much wider set of
> libraries. Maybe that reflects the same sort of mindset that results
> in Linux distros "debundling" tools like pip that vendor their
> dependencies. I'm not going to try to judge whether the Linux or the
> Windows approach is "right", but I'd be surprised if manylinux can
> take much inspiration from the Windows approach without confronting
> this difference in philosophy.
>
> Paul
>
> [1] I certainly don't want to spark any sort of flamewar here, but I
> do feel a certain wry amusement that the term "DLL Hell" was invented
> as a criticism of library management practices on Windows, and yet in
> this context, library management on Windows is pretty much a
> non-problem, and it's Linux (that prided itself on avoiding DLL hell
> at the time) that is now struggling with library versioning complexity
> ;-)

The Windows and Linux situations are actually almost identical, except
for the folklore around them. Both have a small but sufficient set of
libraries that you can rely on being there, and that are carefully
designed to maintain ABI backwards compatibility over time, and then
you have to vendor everything else.

Windows actually used to be worse than Linux at this, because its
version of libc wasn't in the set of base libraries, so it had to be
vendored along with every app, and you could have all kinds of "fun"
if Python and its extensions weren't built against the same libc. But
these days they've switched to a Linux-style libc (complete with a
clever implementation of glibc-style symbol versioning), so they
really are pretty much identical.

The hardest thing with distributing binaries on Linux is just
convincing Linux hackers that it's OK to do it the same way
Windows/macOS do, instead of inventing something more complicated.

"DLL hell" refers to how in the bad old days, the standard practice
for apps on Windows was not just to include vendored libraries, but to
*store all those vendored libraries in the global libraries
directory*, which unsurprisingly led to all kinds of chaos as
different apps overwrote each other's vendored libraries.

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
--
Distutils-SIG mailing list -- distutils-sig@python.org
To unsubscribe send an email to distutils-sig-le...@python.org
https://mail.python.org/mm3/mailman3/lists/distutils-sig.python.org/
Message archived at 
https://mail.python.org/archives/list/distutils-sig@python.org/message/HLWTZJLNYR7M2JULTPBIPPK2RTFEUWQT/


[Distutils] Re: Idea: perennial manylinux tag

2018-11-30 Thread Nathaniel Smith
On Fri, Nov 30, 2018 at 10:29 PM Paul Moore  wrote:
>
> On Sat, 1 Dec 2018 at 04:42, Nathaniel Smith  wrote:
> > How does this affect spec-writing? Well, we want to allow for non-pip
> > installers, so the part that pip does has to be specified. But pip's
> > part is really straightforward.
> [...]
> > So the proposal here is to refactor the spec to match how this
> > actually works: the official definition of a manylinux_${glibc
> > version}_${arch} wheel would be "I promise this wheel will work on any
> > Linux system with glibc >=${glibc version} and an ${arch} processor".
> > We'll still need to make changes as old distros go out of support, new
> > architectures get supported, etc., but the difference is, those
> > changes won't require complex cross-ecosystem coordination with new
> > formal specs for each one; instead they'll be routine engineering
> > problems for the docker image+auditwheel maintainers to solve.
>
> So if I follow, what you're saying is that the *spec* (i.e., the PEP)
> will simply say what installers like pip, and indexes like warehouse
> need to do[1] (which is for pip, generate the right list of supported
> tags, and for warehouse, add the relevant tags to the "allowed
> uploads" list). And everything else (all the stuff about libraries
> you're allowed to link dynamically to) becomes just internal design
documentation for the auditwheel project (and any other manylinux
> building support projects that exist)?

Yep.

> [1] Is there not also an element of what the wheel project needs to
> do? It has to generate wheels with the right tags in the first place.
> Actually, PEP 425 also needs an update, at a minimum to refer to the
> manylinux spec(s), which modify the definition of a "platform tag"
> from PEP 425...

We've actually never touched the wheel project in any of the manylinux
work. The workflow is:

- set up the special build environment
- run setuptools/wheel to generate a plain "linux" wheel (this is the
not-very-useful tag that for historical reasons just means "it works
on my machine", and isn't allowed on pypi)
- auditwheel processes the "linux" wheel to check for various possible
issues, vendor any necessary libraries, and if that all worked then it
rewrites the metadata to convert it to a  "manylinux" wheel
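
(In other words, roughly this -- the package name and tag in the wheel
filename are made up, and you'd run it inside the build environment:)

    import subprocess

    # build a plain "linux" wheel with the normal tools
    subprocess.run(["pip", "wheel", ".", "-w", "dist/"], check=True)
    # audit + vendor + retag it as a manylinux wheel
    subprocess.run(
        ["auditwheel", "repair",
         "dist/mypkg-1.0-cp36-cp36m-linux_x86_64.whl",
         "-w", "wheelhouse/"],
        check=True,
    )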

This feels a bit weird somehow, but it's worked really well so far.

Note that in a PEP 517-based world, regular local installs also create
an intermediate wheel, and in that case you don't want the special
auditwheel handling, you really just want the "it works on my machine"
wheel.

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
--
Distutils-SIG mailing list -- distutils-sig@python.org
To unsubscribe send an email to distutils-sig-le...@python.org
https://mail.python.org/mm3/mailman3/lists/distutils-sig.python.org/
Message archived at 
https://mail.python.org/archives/list/distutils-sig@python.org/message/XYRQPMIG5NSWBGSXN5IPJ7MZLCS3GDNP/


[Distutils] Re: Idea: perennial manylinux tag

2018-11-30 Thread Nathaniel Smith
It sounds like I should explain better how things currently work :-).

The original manylinux1 spec is PEP 513. But of course it's just text
-- it's a useful reference, but it doesn't do much by itself. And when
we wrote it we had no idea how this would actually work out.

In practice, there are two pieces to manylinux1's implementation, that
work together to make it successful.

First, there's pip's gatekeeping logic. If you put up a manylinux1
wheel on pypi, then pip will install it on any python that's built
against glibc 2.5 or greater, on x86-64 or x86-32. That means not
ancient distros like CentOS 4 (its glibc is too old), and not exotic
distros like Alpine or Android (they don't use glibc), but it includes
all vaguely-modern mainstream desktop or server distros. So in
practice the definition of a manylinux1 wheel is "I promise this wheel
will work on any system with glibc 2.5 or greater and an Intel
processor".

But most maintainers have no idea how to actually fulfill that
promise, which is where the docker image and auditwheel come in. There
are a lot of ways a wheel can fail to work on a glibc 2.5+ system: it
might depend on a newer glibc, or it might depend on library that the
target system doesn't have installed, or a whole bunch of other super
arcane traps that we've discovered over time (e.g. the Python used for
the build has to be linked using the correct configure options). These
are all encoded into the docker image/auditwheel. (So for example,
auditwheel has some built-in knowledge of which libraries you can
expect to find on every Intel system with glibc 2.5 or greater, that
it uses to make decisions about which libraries need to be vendored.)
Technically you don't *have* to use these tools to build your wheel,
pip doesn't care, but they provide some nice padded guardrails that
make it possible for ordinary maintainers to fulfill the manylinux1
promise in practice.

How does this affect spec-writing? Well, we want to allow for non-pip
installers, so the part that pip does has to be specified. But pip's
part is really straightforward.

All the complicated bit is in the docker image/auditwheel. But, for
these, it turns out the spec doesn't actually matter that much. We can
observe that most wheels do work in practice, and whenever someone
discovers some new edge case that the PEP never thought of, then it's
not a disaster, it just means there's one broken wheel on pypi, and we
figure out how to fix the tools to catch the new edge case, they
upload a new wheel, and life goes on.

So the proposal here is to refactor the spec to match how this
actually works: the official definition of a manylinux_${glibc
version}_${arch} wheel would be "I promise this wheel will work on any
Linux system with glibc >=${glibc version} and an ${arch} processor".
We'll still need to make changes as old distros go out of support, new
architectures get supported, etc., but the difference is, those
changes won't require complex cross-ecosystem coordination with new
formal specs for each one; instead they'll be routine engineering
problems for the docker image+auditwheel maintainers to solve.

-n

On Fri, Nov 30, 2018 at 12:09 AM Nathaniel Smith  wrote:
>
> Hi all,
>
> The manylinux1 -> manylinux2010 transition has turned out to be very 
> difficult. Timeline so far:
>
> March 2017: CentOS 5 went EOL
> April 2018: PEP 517 accepted
> May 2018: support for manylinux2010 lands in warehouse
> November 2018: support lands in auditwheel, and pip master
> December 2018: 21 months after CentOS 5 EOL, we still don't have an 
> official build environment, or support in a pip release
>
> We'll get through this, but it's been super painful and maybe we can change 
> things somehow so it will suck less next time.
>
> We don't have anything like this pain on Windows or macOS. We never have to 
> update pip, warehouse, etc., after those OSes hit EOLs. Why not?
>
> On Windows, we have just two tags: "win32" and "win_amd64". These are defined 
> to mean something like "this wheel will run on any recent-ish Windows 
> system". So the meaning of the tag actually changes over time: it used to be 
> that if a wheel said it ran on win32, then that meant it would work on winxp, 
> but since winxp hit EOL people started uploading "win32" wheels that don't 
> work on winxp, and that's worked fine.
>
> On macOS, the tags look like "macosx_10_9_x86_64". So here we have the OS 
> version embedded in the tag. This means that we do occasionally switch which 
> tags we're using, kind of like how manylinux1 -> manylinux2010 is intended to 
> work. But, unlike for the manylinux tags, defining a new macosx tag is 
> totally trivial: every time a new OS version is released, the tag springs 
> into existence without any human intervention. Warehouse already accepts 
> upl

[Distutils] Re: Idea: perennial manylinux tag

2018-11-30 Thread Nathaniel Smith
Yes
On Fri, Nov 30, 2018 at 1:27 AM Paul Moore  wrote:
>
> On Fri, 30 Nov 2018 at 09:22, Pradyun Gedam  wrote:
> >
> >
> > On Fri, 30 Nov 2018 at 1:42 PM, Nathaniel Smith  wrote:
> >>
> >> April 2018: PEP 517 accepted
> >
> >
> > I think you got the wrong PEP number.
>
> Should be 571, presumably?
> Paul



-- 
Nathaniel J. Smith -- https://vorpus.org
--
Distutils-SIG mailing list -- distutils-sig@python.org
To unsubscribe send an email to distutils-sig-le...@python.org
https://mail.python.org/mm3/mailman3/lists/distutils-sig.python.org/
Message archived at 
https://mail.python.org/archives/list/distutils-sig@python.org/message/MTIJKZROITZ2OZBAQ6PLVS2ZYUJICQDN/


[Distutils] Idea: perennial manylinux tag

2018-11-30 Thread Nathaniel Smith
Hi all,

The manylinux1 -> manylinux2010 transition has turned out to be very
difficult. Timeline so far:

March 2017: CentOS 5 went EOL
April 2018: PEP 517 accepted
May 2018: support for manylinux2010 lands in warehouse
November 2018: support lands in auditwheel, and pip master
December 2018: 21 months after CentOS 5 EOL, we still don't have an
official build environment, or support in a pip release

We'll get through this, but it's been super painful and maybe we can change
things somehow so it will suck less next time.

We don't have anything like this pain on Windows or macOS. We never have to
update pip, warehouse, etc., after those OSes hit EOLs. Why not?

On Windows, we have just two tags: "win32" and "win_amd64". These are
defined to mean something like "this wheel will run on any recent-ish
Windows system". So the meaning of the tag actually changes over time: it
used to be that if a wheel said it ran on win32, then that meant it would
work on winxp, but since winxp hit EOL people started uploading "win32"
wheels that don't work on winxp, and that's worked fine.

On macOS, the tags look like "macosx_10_9_x86_64". So here we have the OS
version embedded in the tag. This means that we do occasionally switch
which tags we're using, kind of like how manylinux1 -> manylinux2010 is
intended to work. But, unlike for the manylinux tags, defining a new macosx
tag is totally trivial: every time a new OS version is released, the tag
springs into existence without any human intervention. Warehouse already
accepts uploads with this tag; pip already knows which systems can install
wheels with this tag, etc.

Can we take any inspiration from this for manylinux?

We could do the Windows thing, and have a plain "manylinux" tag that means
"any recent-ish glibc-based Linux". Today it would be defined to be "any
distro newer than CentOS 6". When CentOS 6 goes out of service, we could
tweak the definition to be "any distro newer than CentOS 7". Most parts of
the toolchain wouldn't need to be updated, though, because the tag wouldn't
change, and by assumption, enforcement wouldn't really be needed, because
the only people who could break would be ones running on unsupported
platforms. Just like happens on Windows.

We could do the macOS thing, and have a "manylinux_${glibc version}" tag
that means "this package works on any Linux using glibc newer than ${glibc
version}". We're already using this as our heuristic to handle the current
manylinux profiles, so e.g. manylinux1 is effectively equivalent to
manylinux_2_5, and manylinux2010 will be equivalent to manylinux_2_12. That
way we'd define the manylinux tags once, get support into pip and warehouse
and auditwheel once, and then in the future the only thing that would have
to change to support new distro releases or new architectures would be to
set up a proper build environment.
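
In code terms the whole compatibility check collapses to something like
this (my illustration -- nothing implements it yet):

    # The existing profiles, restated as perennial tags:
    LEGACY_EQUIVALENTS = {
        "manylinux1": "manylinux_2_5",
        "manylinux2010": "manylinux_2_12",
    }

    def wheel_is_supported(tag_glibc, system_glibc):
        # a manylinux_X_Y wheel is installable iff the system glibc >= (X, Y)
        return system_glibc >= tag_glibc

    assert wheel_is_supported((2, 5), (2, 27))      # manylinux1, modern distro
    assert not wheel_is_supported((2, 12), (2, 5))  # manylinux2010, CentOS 5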

What do y'all think?

-n
--
Distutils-SIG mailing list -- distutils-sig@python.org
To unsubscribe send an email to distutils-sig-le...@python.org
https://mail.python.org/mm3/mailman3/lists/distutils-sig.python.org/
Message archived at 
https://mail.python.org/archives/list/distutils-sig@python.org/message/6AFS4HKX6PVAS76EQNI7JNTGZZRHQ6SQ/


[Distutils] Re: __pypackages__ discussion (was Re: Notes from python core sprint on workflow tooling)

2018-09-30 Thread Nathaniel Smith
On Sun, Sep 30, 2018 at 2:25 PM, Chris Jerdonek
 wrote:
> [Splitting off a new thread for this question even if it might not
> result in a discussion]
>
> On Sun, Sep 30, 2018 at 10:00 AM Dan Ryan  wrote:
>>
>> Anyway, this is all a good discussion to have and I really appreciate you 
>> kicking it off. I've been following the __pypackages__ conversation a bit 
>> since pycon and I honestly don't have much opinion about where we want to 
>> put stuff, but I'm not sure  that the impact of the folder is going to be as 
>> great to the user as  people might imagine
>
> Where is this conversation happening, by the way? I'm surprised I
> didn't know about it until Nathaniel mentioned it when he started his
> thread -- since I'm on a bunch of lists (python-dev, Distutils-SIG,
> etc).

It hasn't been formally presented to any lists yet, but the initial
informal discussion is here: https://github.com/kushaldas/peps/pull/1

(I guess this is OK to share, since it's also linked from here:
https://github.com/python/peps/pull/776)

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
--
Distutils-SIG mailing list -- distutils-sig@python.org
To unsubscribe send an email to distutils-sig-le...@python.org
https://mail.python.org/mm3/mailman3/lists/distutils-sig.python.org/
Message archived at 
https://mail.python.org/mm3/archives/list/distutils-sig@python.org/message/WVFDYBPQ4HA6SQX2TTFXF2PXX7NNPQYE/


[Distutils] Notes from python core sprint on workflow tooling

2018-09-30 Thread Nathaniel Smith
Now that the basic wheels/pip/PyPI infrastructure is mostly
functional, there's been a lot of interest in improving higher-level
project workflow. We have a lot of powerful tools for this –
virtualenv, pyenv, conda, tox, pipenv, poetry, ... – and more in
development, like PEP 582 [1], which adds a support for project-local
packages directories (`__pypackages__/`) directly to the interpreter.

But to me it feels like right now, Python workflow tools are like the
blind men and the elephant [2]. Each group sees one part of the
problem, and so we end up with one set of people building legs,
another a trunk, a third some ears... and there's no overall plan for
how they can fit together.

For example, PEP 582 is trying to solve the problem that virtualenv is
really hard to use for beginners just starting out [3]. This is a
serious problem! But I don't want a solution that *only* works for
beginners starting out, so that once they get a little more
sophisticated they have to throw it out and learn something new from
scratch.

So I think now might be a time for a bit of top-down design. **I want
a picture of the elephant.** If we had that, maybe we could see how
all these different ideas could be put together into a coherent whole.
So at the Python core sprint a few weeks ago, I dragged some
interested parties [4] into a room with a whiteboard [5], and we made
a start at it. And now I'm writing it up to share with you all.

This is very much a draft, intended as a seed for discussion, not a conclusion.

[1] https://www.python.org/dev/peps/pep-0582/
[2] https://en.wikipedia.org/wiki/Blind_men_and_an_elephant
[3] https://www.python.org/dev/peps/pep-0582/#motivation
[4] I won't try to list names, because I know I'll forget someone, and
I don't know if everyone would agree with everything I wrote there.
But thank you all!
[5] https://photos.app.goo.gl/4HfY8P3ESPNi9oLMA, including special
guest appearance by Kushal's elbow


# The idealized lifecycle of a Python project

## 1. Beginner

Everyone starts out as a rank beginner. This may be the first time
they have programmed at all. At this stage, users want to:

- install *one* thing to get started (e.g. python itself)
- write and run simple scripts (standalone .py files)
- run a REPL
- install and use PyPI packages like requests or numpy
- install and use tools like jupyter
- their IDE should also be able to find these packages/tools

Over time, they'll probably end up with multiple scripts, and maybe
want to organize them into subdirectories. The above should all work
from subdirectories.

## 2. Sharing with others

Now we have a neat little script. Or maybe we've made a pretty jupyter
notebook that computes some crucial business analytics. We want to
share it with our friends or coworkers. We still need the features
above; and now we also care about:

- version control
- some way for our friend to reconstruct, on their computer:
  - the same PyPI packages that we were using
  - the same tools that we were using
  - the ways we invoked those tools

This last point is important: as projects grow in complexity, and are
used by a wider audience, they often end up with fairly complex tool
specifications that have to be shared among a team. For example:

- to run tests: in an environment that has pytest, pytest-cov, and
pytest-trio installed, and with our project working directory on
PYTHONPATH, run `pytest -Werror --cov ...`
- to format code: in an environment using python 3.6 or later, that
has black installed, run `black -l 79 *.py my-util-directory/*.py`

This kind of tool specification also puts us in a good position to set
up CI when we reach that point.
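
Purely as a strawman (no tool defines this format today), the two examples
above could be captured as data along these lines:

    TOOLS = {
        "test": {
            "requires": ["pytest", "pytest-cov", "pytest-trio"],
            "env": {"PYTHONPATH": "."},
            "run": "pytest -Werror --cov ...",
        },
        "format": {
            "python": ">=3.6",
            "requires": ["black"],
            "run": "black -l 79 *.py my-util-directory/*.py",
        },
    }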

At this point our project can grow in a few different directions.


## 3a. Deployable webapp

This adds the requirement to "deploy". I think this is mostly covered
by the set-up-an-environment-to-run-a-command functionality already
described? I'm not super familiar with this, but it's pipenv's core
target, and pipenv doesn't have much more than that, so I assume
that's about right...

## 3b. Reusable library

For this we also need to:

- Build sdists and wheels
  - Which means: pyproject.toml, and some way to invoke it (see the
minimal example after this list)
- Install our library into our environments
  - Including dependency locking (best practice is to not pin
dependencies in wheel metadata, but to pin all dependencies in CI; so
there needs to be some way to track those separately, but integrated
enough that it's not a huge ceremony to add or change a dependency)
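
For instance, the minimal pyproject.toml defined by PEP 518 just declares
the build requirements:

    [build-system]
    requires = ["setuptools", "wheel"]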

## 3c. Reusable standalone app

I think this is pretty much like the "Reusable library", except that
it'd be nice to have better tools to build/distribute standalone
applications. But if we had them, we could invoke them the same way as
we invoke other build systems?


# How do existing tools/proposals fit into this picture?

pyenv, virtualenv, and conda all solve parts of the "create an
environment" problem, but consider the other aspects out-of-scope.

tox solves the problem of keeping a 

[Distutils] Re: Opinions on requiring younger glibc in manylinux1 wheel?

2018-09-18 Thread Nathaniel Smith
On Mon, Sep 17, 2018, 08:25 Antoine Pitrou  wrote:

> Hi,
>
> According to recent messages, it seems manylinux2010 won't be ready soon.
> However, the baseline software in manylinux1 is becoming very old. As an
> example, a popular C++ library (Abseil - https://abseil.io/) requires a
> more recent glibc (see
> https://github.com/abseil/abseil-cpp/commit/add89fd0e4bfd7d874bb55b67f4e13bf8beca762#diff-9f9c7fefa83b53e16cd568e31f1bfcb9R81
> ).
>
> What do you think of publishing manylinux1 wheels that would require a
> more recent glibc? This is being discussed currently for the pyarrow
> package.
>

It's naughty, you shouldn't do it, and the energy you put into making
pseudo-manylinux1 wheels could probably be better put into finishing up
the manylinux2010 work – there's not that much to do.

That said, if you do do it, then probably it'll work fine and no one will
notice.

-n

>
--
Distutils-SIG mailing list -- distutils-sig@python.org
To unsubscribe send an email to distutils-sig-le...@python.org
https://mail.python.org/mm3/mailman3/lists/distutils-sig.python.org/
Message archived at 
https://mail.python.org/mm3/archives/list/distutils-sig@python.org/message/VFBFG6QRETUF5NYVYB5N2Q6IMXBTMRXC/


[Distutils] Re: Opinions on requiring younger glibc in manylinux1 wheel?

2018-09-17 Thread Nathaniel Smith
On Mon, Sep 17, 2018, 18:51 Joni Orponen  wrote:

> On Mon, Sep 17, 2018 at 6:07 PM Antoine Pitrou  wrote:
>
>> Paul Moore wrote:
>> > I'm not really familiar with manylinux1, but I'd be concerned if we
>> > started getting bug reports on pip because we installed a library that
>> > claimed to be manylinux1 and was failing because it wasn't. (And yes,
>> > packaging errors like this are a common source of pip bug reports).
>> >
>> > It seems to me that it's defeating the purpose of having standards if
>> > people aren't willing to follow them...
>>
>> I agree with that. OTOH it seems providing binary wheels is generally a
>> strong demand from the community. I would be fine with only providing
>> conda packages myself.
>>
>
> The biggest demand seems to be for developer convenience of quick
> downloads / installs and by people who have not delved very deep into the
> gnarly black arts of cross compilation and forwards / backwards
> compatibility maintenance.
>
> Deployment bandwidth costs and install times are a second-tier use, but
> still a real concern to any parties who should consider sponsoring any
> effort going towards solving anything within the scope, as solving their
> gripes would save them money.
>
> By the way other packages are already doing worse:
>> https://github.com/tensorflow/tensorflow/issues/8802
>>
>
> Domain specific packages with real industry needs will need to deviate
> from any standard put forth as the world of the bleeding edge moves faster
> than the standards can.
>
> What a lot of packages would actually need, is to have per operating
> system per distro per distro version wheels, but that'd get quite insane
> quick and put a lot of effort onto the package maintainers or the
> maintainers of the manylinux-esque build containers.
>

I'm doubtful that there are many packages that "need" this. People don't do
this on Windows or macOS, and those platforms seem to do ok.

Still, we should have some way to describe such packages, so tensorflow can
at least have accurate metadata, and for a variety of other use cases
(Alpine, arm, conda, etc.).


> And even something like that will still spectacularly fall apart on macOS
> by stuff like building against 3rd party libraries from macports vs. fink
> vs. homebrew installed into /usr/local/ vs. homebrew installed into
> $HOME/.homebrew varying between the unsuspecting package maintainer / wheel
> builder and the end users of the wheel.
>

This isn't really an issue. Whatever libraries you need should be vendored
into the wheel with a tool like 'delocate', and then it doesn't matter what
third-party package manager your end users do or don't use.


> Oddly enough this seems to be by far the least problematic on Windows.
>

There's no real difference between Windows/macOS/Linux in terms of binary
compatibility. On Windows people are more used to shipping everything with
their package, that's all. If you do the same thing on macOS and Linux, it
works great.

-n
--
Distutils-SIG mailing list -- distutils-sig@python.org
To unsubscribe send an email to distutils-sig-le...@python.org
https://mail.python.org/mm3/mailman3/lists/distutils-sig.python.org/
Message archived at 
https://mail.python.org/mm3/archives/list/distutils-sig@python.org/message/A47YBCTWPH4ST45JJP4CXD6SSIZNOVNP/


[Distutils] Re: SEC: Spectre variant 2: GCC: -mindirect-branch=thunk -mindirect-branch-register

2018-09-16 Thread Nathaniel Smith
On Wed, Sep 12, 2018, 12:29 Joni Orponen  wrote:

> On Wed, Sep 12, 2018 at 8:48 PM Wes Turner  wrote:
>
>> Should C extensions that compile all add
>> `-mindirect-branch=thunk -mindirect-branch-register` [1] to mitigate the
>> risk of Spectre variant 2 (which does indeed affect user space applications
>> as well as kernels)?
>>
>
> Are those available on GCC <= 4.2.0 as per PEP 513?
>

Pretty sure no manylinux1 compiler is ever going to get these mitigations.

For manylinux2010 on x86-64, we can easily use a much newer compiler: RH
maintains a recent compiler, currently gcc 7.3, or if that doesn't work for
some reason then the conda folks have apparently figured out how to build
the equivalent from gcc upstream releases.

Unfortunately, the manylinux2010 infrastructure is not quite ready... I'm
pretty sure it needs some volunteers to push it to the finish line, though
unfortunately I haven't had enough time to keep track.

-n
--
Distutils-SIG mailing list -- distutils-sig@python.org
To unsubscribe send an email to distutils-sig-le...@python.org
https://mail.python.org/mm3/mailman3/lists/distutils-sig.python.org/
Message archived at 
https://mail.python.org/mm3/archives/list/distutils-sig@python.org/message/5A3VEMZXTQDFGFCHPM7Z2DU24KHYG26Y/


[Distutils] Re: Adopting virtualenv package maintenance

2018-09-07 Thread Nathaniel Smith
On Fri, Sep 7, 2018, 14:41 Tzu-ping Chung  wrote:

>
> Just want to mention that adding activate_this.py to venv has been
> proposed, and rejected.
> https://bugs.python.org/issue21496
>

Looks like the reason for the rejection was just that the submitter didn't
provide a good rationale. If this is still causing problems then it could
be revisited.

-n
--
Distutils-SIG mailing list -- distutils-sig@python.org
To unsubscribe send an email to distutils-sig-le...@python.org
https://mail.python.org/mm3/mailman3/lists/distutils-sig.python.org/
Message archived at 
https://mail.python.org/mm3/archives/list/distutils-sig@python.org/message/4FGBGVWQ2MZCQVVGMB6KKGXXTNK7WG6D/


[Distutils] Re: Adopting virtualenv package maintenance

2018-09-07 Thread Nathaniel Smith
On Fri, Sep 7, 2018, 10:48 Brett Cannon  wrote:

>
>
> On Thu, 6 Sep 2018 at 13:44 Alex Becker  wrote:
>
>> Another +1 to the utility of a maintainer. I am also working on package
>> management and have found that venv is not a full replacement for
>> virtualenv--for example I don't believe the environment can be entered
>> programatically, while virtualenv provides activate_this.py which can be
>> exec'd. I'm sure there are many other limitations, so I don't think python
>> can give up on virtualenv soon.
>>
>
> But are those inherent limitations of venv or simply a lack of a provided
> API or library to have the equivalent abilities? I assume there's a
> difference between keeping virtualenv running versus developing a small
> library on top of venv to backfill some things.
>

I guess venv being in the stdlib means that any limitations it has are
going to keep limiting "python -m venv" for quite a while.

If we want to work around these limits on something other than the Python
release cycle, then it means training users to not run "python -m venv",
and instead run "python -m somethingelse".

So long as that's necessary, the "somethingelse" might as well be
"virtualenv", which is what everyone is already trained to do anyway...

There have been plans at various points to rewrite virtualenv on top of
venv, and as far as I know the limiting factor was time/energy, not that
they hit some intrinsic limitation.

-n
--
Distutils-SIG mailing list -- distutils-sig@python.org
To unsubscribe send an email to distutils-sig-le...@python.org
https://mail.python.org/mm3/mailman3/lists/distutils-sig.python.org/
Message archived at 
https://mail.python.org/mm3/archives/list/distutils-sig@python.org/message/M55DGNOU42R2TGV7CHB36NQKRHFGOXYR/


[Distutils] Re: Environment markers for GPU/CUDA availibility

2018-09-04 Thread Nathaniel Smith
On Tue, Sep 4, 2018, 07:42 Nick Coghlan  wrote:

> On Tue, 4 Sep 2018 at 20:30, Nathaniel Smith  wrote:
> >
> > On Mon, Sep 3, 2018 at 4:51 PM, Nick Coghlan  wrote:
> > > On Mon., 3 Sep. 2018, 5:48 am Ronald Oussoren,  >
> > > wrote:
> > >>
> > >>
> > > What’s the problem with including GPU and non-GPU variants of code in a
> > > binary wheel other than the size of the wheel? I tend to prefer binaries
> > > that work “everywhere”, even if that requires some more work in building
> > > binaries (such as including multiple variants of extensions to have
> > > optimised code for different CPU variants, such as SSE and non-SSE variants
> > > in the past).
> > >
> > >
> > > As far as I'm aware, binary artifact size *is* the problem. It's just that
> > > once you're automatically building and pushing an artifact (or an image
> > > containing that artifact) to thousands or tens of thousands of managed
> > > systems, the wasted bandwidth from pushing redundant implementations of the
> > > same functionality becomes more of a concern than the convenience of being
> > > able to use the same artifact across multiple platforms.
> >
> > None of the links that Dustin gave at the top of the thread are about
> > managed systems though.
>
> When you're only managing a few systems, or only saving a few MB per
> download, "install both and pick at runtime" is an entirely viable
> option.
>

Sure, this is true, and obviously size is a major reason for splitting up
these packages, but this doesn't have anything in particular to do with
managed systems AFAICT.


> However, since tensorflow is the example, neither of those cases is true:
>
> 1. It's a Google project, so they have tens of thousands of instances
> to worry about (as do other cloud providers)
>

They do have those instances, but they handle them via totally different
methods that don't involve PyPI package names or pip's dependency tracking.
(Specifically, a giant internal monorepo where they check in every piece of
code they use, and then they build everything from source through their
internal version of Bazel.)

This is about how they, and other projects, are distributed to the general
public on PyPI, and how to manage that public, shared dependency graph.

-n
--
Distutils-SIG mailing list -- distutils-sig@python.org
To unsubscribe send an email to distutils-sig-le...@python.org
https://mail.python.org/mm3/mailman3/lists/distutils-sig.python.org/
Message archived at 
https://mail.python.org/mm3/archives/list/distutils-sig@python.org/message/MF2YV3TJGS73ZRZL5T2ACBRWJDSGNHVF/


[Distutils] Re: Environment markers for GPU/CUDA availibility

2018-09-04 Thread Nathaniel Smith
On Mon, Sep 3, 2018 at 4:51 PM, Nick Coghlan  wrote:
> On Mon., 3 Sep. 2018, 5:48 am Ronald Oussoren, 
> wrote:
>>
>>
>> What’s the problem with including GPU and non-GPU variants of code in a
>> binary wheel other than the size of the wheel? I tend to prefer binaries
>> that work “everywhere", even if that requires some more work in building
>> binaries (such as including multiple variants of extensions to have
>> optimised code for different CPU variants, such as SSE and non-SSE variants
>> in the past).
>
>
> As far as I'm aware, binary artifact size *is* the problem. It's just that
> once you're  automatically building and pushing an artifact (or an image
> containing that artifact) to thousands or tens of thousands of managed
> systems, the wasted bandwidth from pushing redundant implementations of the
> same functionality becomes more of a concern than the convenience of being
> able to use the same artifact across multiple platforms.

None of the links that Dustin gave at the top of the thread are about
managed systems though. As far as I can tell, they all come down to
one of two issues: given "tensorflow" and "tensorflow-gpu" are both on
PyPI, how can (a) users automatically get the appropriate version
without having to manually select one, and (b) other packages express
a dependency on "tensorflow or tensorflow-gpu"? And maybe (c) how can
we stop tensorflow and tensorflow-gpu from accidentally getting
installed on top of each other.

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
--
Distutils-SIG mailing list -- distutils-sig@python.org
To unsubscribe send an email to distutils-sig-le...@python.org
https://mail.python.org/mm3/mailman3/lists/distutils-sig.python.org/
Message archived at 
https://mail.python.org/mm3/archives/list/distutils-sig@python.org/message/CVW7ZVMHJTY2LCQZ33KO3WJQJM76WNF3/


[Distutils] Re: Environment markers for GPU/CUDA availibility

2018-09-04 Thread Nathaniel Smith
On Tue, Sep 4, 2018 at 3:10 AM, Paul Moore  wrote:
> There's very much an 80-20 question here, we need to avoid letting the
> needs of the 20% of projects with unusual needs, complicate usage for
> the 80%. On the other hand, of course, leaving the specialist cases
> with no viable solution also isn't reasonable, so even if tags aren't
> practical here, finding a solution that allows projects to ship
> specialised binaries some other way would be good. Just as a
> completely un-thought through suggestion, maybe we could have a
> mechanism where a small "generic" wheel can include pointers to
> specialised extra code that gets downloaded at install time?
>
> Package X -> x-1.0-cp37_cp37m_win_amd64.whl (includes generic code)
> Metadata - Implementation links:
> If we have a GPU -> <link to the GPU implementation, downloaded at
> the install>
> If we don't have a GPU -> <link to the non-GPU implementation>
>
> There's obviously a lot of unanswered questions here, but maybe
> something like this would be better than forcing everything into the
> wheel tags?

I think you've reinvented Requires-Dist and PEP 508 markers :-). (The
ones that look like '; python_version < "3.6"'.) Which IIUC was also
Dustin's original suggestion: make it possible to write requirements
like

  tensorflow; not has_gpu
  tensorflow-gpu; has_gpu

But... do we actually know enough to define a "has_gpu" marker? It
isn't literally "this system has a gpu", right, it's something more
like "this system has an NVIDIA-brand GPU of a certain generation or
later with their proprietary libraries installed"? Or something like
that? There are actually lots of packages on PyPI with foo/foo-gpu
pairs, e.g. strawberryfields, paddlepaddle, magenta, cntk, deepspeech,
... Do these -gpu packages all have the same environmental
requirements, or is it different from package to package?
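
For illustration, the kind of ad-hoc probe a package's setup.py can do today
is something like this Linux-only sketch (the function name is hypothetical,
and this is not a proposed marker definition):

    import ctypes

    def has_nvidia_driver():
        # Can we load the proprietary NVIDIA driver library? This says
        # nothing about CUDA versions or GPU generations -- which is
        # exactly why a single "has_gpu" marker is hard to pin down.
        try:
            ctypes.CDLL("libcuda.so.1")
        except OSError:
            return False
        return True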

It would help if we had folks in the conversation who actually work on
these packages :-/. Anyone have contacts on the Tensorflow team? (It'd
also be good to talk to them about platform specifiers... the
tensorflow "manylinux1" wheels are really ubuntu-only, but they
intentionally lie about that b/c there is no ubuntu tag; maybe they're
interested in fixing that...?)

Anyway, I don't see how we could add an environment marker without
having a precise definition, and one that's useful for multiple
packages. Which may or may not be possible here...

One thing that would help would be if tensorflow-gpu could say
"Provides-Dist: tensorflow", so that downstream packages can say
"Requires-Dist: tensorflow" and pip won't freak out if the user has
manually installed tensorflow-gpu instead. E.g. in the proposal at
[1], you could have 'tensorflow' as one wheel and 'tensorflow[gpu]' as
a second wheel that 'Provides-Dist: tensorflow'. Conflicts-Dist would
also be useful, though might require a real resolver first.
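
Spelled out, the metadata for that hypothetical tensorflow-gpu wheel might
look like this (Provides-Dist already exists in the core metadata spec,
though pip doesn't act on it today; the version number here is made up):

    Name: tensorflow-gpu
    Version: 1.10.0
    Provides-Dist: tensorflow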

Another wacky idea, maybe worth thinking about: should we let packages
specify their own auto-detection code that pip should run? E.g. you
could have a PEP 508 requirement like "somepkg;
extension[otherpackage.key] = ..." and that means "install
otherpackage inside the target Python environment, look up
otherpackage.key, and use its value to decide whether to install
somepkg". Maybe that's too messy to be worth it, but if "gpu
detection" isn't a well-defined problem then maybe it's the best
approach? Though basically that's what sdists do right now, and IIUC
how tensorflow-gpu-detect works. Maybe tensorflow-gpu-detect should
become the standard tensorflow library, with an sdist only, and at
install time it could decide whether to pull in 'tensorflow-gpu' or
'tensorflow-nogpu'...

-n

[1] https://mail.python.org/pipermail/distutils-sig/2015-October/027364.html

-- 
Nathaniel J. Smith -- https://vorpus.org
--
Distutils-SIG mailing list -- distutils-sig@python.org
To unsubscribe send an email to distutils-sig-le...@python.org
https://mail.python.org/mm3/mailman3/lists/distutils-sig.python.org/
Message archived at 
https://mail.python.org/mm3/archives/list/distutils-sig@python.org/message/AYFTIEFKJS7OBCIU3IEJWA6UWOV3UE7I/


[Distutils] Re: Three clarification questions about PEP 425 and PyPy3

2018-08-31 Thread Nathaniel Smith
On Thu, Aug 30, 2018 at 6:52 PM, Brett Cannon  wrote:
>
>
> On Thu, 30 Aug 2018 at 11:21 Nathaniel Smith  wrote:
>>
>> If we're going to rethink this,
>
>
> Well, I didn't want to "rethink" so much as "fill in". :)
>
>>
>> then I would really like to move away from assigning special meaning to
>> specific combinations of tags. The thing where if you use cp36m as you
>> Python ABI tag then you're forced to use cp36 as your python dialect tag
>> doesn't make sense. And the thing where if you have a wheel with an
>> extension module tagged as cp36-abi3, then that works fine on 3.7,
>
>
> I wouldn't expect that to work if the stable ABI was expanded in Python 3.7
> and you used the expanded part.

In my example, you have a wheel using the 3.6 ABI, running on 3.7.
That's the case that's supposed to work :-).

>>
>> but if you *remove* the extension module then it *stops* working on 3.7?
>> That's just bizarre...
>
>
> I don't quite follow what you mean by "remove the extension module then it
> stops". What stops by removing the extension module(s)?

If I'm reading your proposal right, it says that when running on
Python 3.7, pip should be willing to install wheels tagged cp36-abi3,
but not wheels tagged cp36-none. But conceptually, if you take the
extension modules out of a cp36-abi3 wheel, then you're left with a
cp36-none wheel. So this is weird. Anywhere we're willing to install a
cp36-abi3 wheel, we should also be willing to install a cp36-none
wheel.

>>
>>
>> So my suggestions:
>>
>> * Make the 3 tag categories totally independent. Compute a separate set
>> for each, and then take the full cross product.
>
>
> I think Paul was hinting at this as part of his "wildcard" idea (and I
> honestly thought of this initially as well as it greatly simplifies things).
> So what would cp36-cp36m-plat expand to?

I don't know what it means to "expand" a wheel tag. Are you punning
the tag as also describing a Python installation ("CPython 3.6, with a
certain soabi tag and platform"), and then asking to find all the
wheel tags that could go in that installation? You can't actually do
this in general – for example, determining whether a target
interpreter is compatible with manylinux wheels requires running some
special sniffing code on that interpreter.

> cp36: cp3N where N is any positive digit(s)? And then toss in py3 and py3N?
> Prefer exact, then generic '3', then older, and finally newer?

Given that we don't have any real use cases for cpXY and pyXY, I'd
rather not expand the options – just preserve what we do now... so for
CPython 3.6, I'd say py3, py3N for N <= 6, cp3N for N <= 6, and I
don't really care about the exact order – some sort of
more-specific-before-less-specific makes sense, and whatever we do now
is probably fine. If someone goes wild and starts distributing py35
and py36 wheels for the same package (via a "36to35" tool, I guess?)
then that's probably what you want? But I don't imagine this will ever
be an important use case. We tried it with 2to3, and everyone decided
they'd rather write in the subset language for a decade instead.

> cp36m: that, abi3, and then 'none'? Do we care if someone has some crazy ABI
> that no one understands like 'b' (and if so should it only be applied to
> 'py' interpreter versions which break your nice cross product simplicity)?
> plat: depends on platform.

Do you mean, what happens if we find ourselves running on an
interpreter we don't recognize (not cpython/pypy/jython/...), and that
interpreter returns something wacky from
sysconfig.get_config_var("SOABI")? I think there's a reasonable
argument that in that case we should only accept 'none' as the tag.
(And then maybe whoever invented this new interpreter gets the job of
adding sysconfig.get_supported_abi_tags() as a standard stdlib feature
:-).)
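
For reference, the sniffing in question is along the lines of:

    import sysconfig
    # e.g. 'cpython-36m-x86_64-linux-gnu' for CPython 3.6 on Linux
    print(sysconfig.get_config_var("SOABI"))

and there's currently no standard way to get from that value to a full
list of supported ABI tags.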

> And the match preference goes to platform, interpreter version, and then
> ABI? I think that would work in terms of ignoring C ABIs that have very
> little chance of linking while being the broadest in terms of accepting a
> wheel that has some semblance of a chance of running.

I don't think the preference order matters that much as long as its
well-defined. The only cases where it would affect things are like...

Suppose in the future we add support for platform tags based on
different ISA levels, like x86_64_avx2 vs x86_64_sse3 vs x86_64. Then
you could find yourself in a situation where examplepkg v1.2.3 has the
following wheels available:

cp36-cp36m-manylinux1_x86_64
cp36-abi3-manylinux1_x86_64_avx2

Our environment is CPython 3.6, running on a manylinux1-compatible OS
and we do have AVX2 support available, so either of these wheels could
work. The first wheel has a "better" ABI (cp36

[Distutils] Re: Three clarification questions about PEP 425 and PyPy3

2018-08-30 Thread Nathaniel Smith
If we're going to rethink this, then I would really like to move away from
assigning special meaning to specific combinations of tags. The thing where
if you use cp36m as your Python ABI tag then you're forced to use cp36 as
your python dialect tag doesn't make sense. And the thing where if you have
a wheel with an extension module tagged as cp36-abi3, then that works fine
on 3.7, but if you *remove* the extension module then it *stops* working on
3.7? That's just bizarre...

So my suggestions:

* Make the 3 tag categories totally independent. Compute a separate set for
each, and then take the full cross product.

* Since the stable ABI actually changes over time, we should define new
tags abi35, abi36, etc. that mean "requires the stable ABI as defined by
this version of cpython or higher", instead of relying on abi3 + a dialect
tag. (Imagine if pypy started implementing the stable ABI – we'd have to
start allowing cpXY tags to match PyPy.)

* Plan to move away from the pyXY and cpXY tags over time; they're
confusing and not useful. Of course this will have to be a gradual process,
but if pip stops requiring them now then in a year or two we could make
setuptools stop generating them.

On Thu, Aug 30, 2018, 09:26 Brett Cannon  wrote:

> So based on all of this, here is my proposal of what the compatible tags
> should become (in priority order from most to least strict). In the list
> below yellow means the value changed compared to the previous tag, blue
> means it's something I'm proposing to add, and red is something I'm
> proposing to remove (using what pip considers compatible tags as the base
> list of tags). I have left out all of the platform variances of macOS for
> brevity as there's no questions regarding those.
>
> For PyPy3 6.0.0 (and any other non-CPython interpreter that reports Python
> 3.5 from sys.version_info, i.e. this represents the default logic for an
> interpreter that has no special handling):
>
>- ('pp360', 'pypy3_60', 'macosx_10_13_x86_64')
>- ('pp360', 'none', 'macosx_10_13_x86_64'),
>- ('py35', 'none', 'macosx_10_13_x86_64')
>- ('py3', 'none', 'macosx_10_13_x86_64'),
>- ('py34', 'none', 'macosx_10_13_x86_64')
>- ('py33', 'none', 'macosx_10_13_x86_64')
>- ('py32', 'none', 'macosx_10_13_x86_64')
>- ('py31', 'none', 'macosx_10_13_x86_64')
>- ('py30', 'none', 'macosx_10_13_x86_64')
>- ('pp360', 'none', 'any'),
>- ('pp3', 'none', 'any'),
>- ('py360', 'none', 'any'),
>- ('py35', 'none', 'any'),
>- ('py3', 'none', 'any')
>- ('py34', 'none', 'any')
>- ('py33', 'none', 'any')
>- ('py32', 'none', 'any')
>- ('py31', 'none', 'any')
>- ('py30', 'none', 'any')
>
>
>
>
> For CPython 3.7.0 (whose logic will be unique to the CPython interpreter
> in the library, but other interpreters could have their own custom logic as
> well when it makes sense; there will be some API to just say "give me what
> makes sense based on this tag" so users don't have to know any of this if
> they don't want to):
>
>- ('cp37', 'cp37m', 'macosx_10_13_x86_64'),
>- ('cp37', 'abi3', 'macosx_10_13_x86_64'),
>- ('cp37', 'none', 'macosx_10_13_x86_64'),
>- ('cp36', 'abi3', 'macosx_10_13_x86_64'),
>- ('cp35', 'abi3', 'macosx_10_13_x86_64'),
>- ('cp34', 'abi3', 'macosx_10_13_x86_64'),
>- ('cp33', 'abi3', 'macosx_10_13_x86_64'),
>- ('cp32', 'abi3', 'macosx_10_13_x86_64'),
>- ('py37', 'none', 'macosx_10_13_x86_64')
>- ('py3', 'none', 'macosx_10_13_x86_64'),
>- ('py36', 'none', 'macosx_10_13_x86_64')
>- ('py35', 'none', 'macosx_10_13_x86_64')
>- ('py34', 'none', 'macosx_10_13_x86_64')
>- ('py33', 'none', 'macosx_10_13_x86_64')
>- ('py32', 'none', 'macosx_10_13_x86_64')
>- ('py31', 'none', 'macosx_10_13_x86_64')
>- ('py30', 'none', 'macosx_10_13_x86_64')
>- ('cp37', 'none', 'any'),
>- ('cp3', 'none', 'any'),
>- ('py37', 'none', 'any'),
>- ('py3', 'none', 'any'),
>- ('py36', 'none', 'any'),
>- ('py35', 'none', 'any'),
>- ('py34', 'none', 'any'),
>- ('py33', 'none', 'any'),
>- ('py32', 'none', 'any'),
>- ('py31', 'none', 'any'),
>- ('py30', 'none', 'any')]
>
>
> On Thu, 30 Aug 2018 at 09:03 Daniel Holth  wrote:
>
>> It's not an intuitive system. We have wheel tags to choose the best
>> alternative wheel or fall back to sdist. So py3-none-any is fine for
>> f-strings if no other candidate wheel (a list of all available wheels for
>> the same version number of a package) has been compiled to not require
>> f-strings. The tag only has to tell you which wheel is most likely to work.
>>
>> No sdist or wheel is ever guaranteed to work, for any number of reasons.
>>

[Distutils] Re: Three clarification questions about PEP 425 and PyPy3

2018-08-30 Thread Nathaniel Smith
On Thu, Aug 30, 2018, 08:23 Nick Coghlan  wrote:

> On Thu, 30 Aug 2018 at 09:58, Brett Cannon  wrote:
> > On Wed, 29 Aug 2018 at 15:54 Nathaniel Smith  wrote:
> >> This is a tricky decision. Any time a new Python comes out, some
> >> existing wheels will continue to work fine, and some will be broken.
> >> One goal is to avoid installing broken wheels. But, there's also
> >> another consideration: if we're too conservative, then with every
> >> release we create a bunch of make-work as projects have to re-roll old
> >> wheels that would have worked fine, and some percentage of projects
> >> won't do this (e.g. b/c they're abandoned), and we lose them forever.
> >> Also, for the py3x tags in particular, if the wheel fails on py3(x+1),
> >> then the sdist probably will too, so it's not like we have any useful
> >> fallback.
> >
> > Right, but isn't that what the py3-none-any tag is meant to represent?
> If someone doesn't use that tag then I would take that as there is some
> version-specific stuff in that wheel.
>
> The problem is that "py3-none-any" doesn't specify a *minimum*
> version, so if a project starts using a new feature like f-strings,
> they *have* to declare "py36-...".
>

That's the theory, but I think these tags are useless in practice.

If you're on py35 and pip sees a wheel with py36 as the tag, then it falls
back to building from the sdist. For ABI-related tags this makes sense,
because given an sdist and an appropriate compiler, you have a good chance
of being able to generate wheels for some arbitrary platform, even one that
the original authors never heard of. But... the python dialect tags are
different. If your wheel uses f-strings, then your sdist probably does too,
so all the tag does is move around the error to happen somewhere else.

To avoid this, you have to put a Requires-Python header in your metadata.
It's the only thing that works for sdists. And it also works for wheels.
And it's strictly more expressive than the wheel tag version: you can write
arbitrary restrictions like ">= 3.5.2, != 3.6.1". (Note that 3.5.2 actually
is a common minimum version for lots of async libraries, because 3.5.2 had
a breaking change in the core async/await protocols.)

So I don't think there's any case where the pyXY tags are actually useful.
You're always better off using Requires-Python.
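
For example, with setuptools that's just (a minimal sketch, using the
constraint from above; the package name is hypothetical):

    from setuptools import setup

    setup(
        name="example",
        version="1.0",
        python_requires=">= 3.5.2, != 3.6.1",
    )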

-n
--
Distutils-SIG mailing list -- distutils-sig@python.org
To unsubscribe send an email to distutils-sig-le...@python.org
https://mail.python.org/mm3/mailman3/lists/distutils-sig.python.org/
Message archived at 
https://mail.python.org/mm3/archives/list/distutils-sig@python.org/message/K72BHTS6S3JX2U7BWYVDK2EZB3ZY3VI4/


[Distutils] Re: Three clarification questions about PEP 425 and PyPy3

2018-08-29 Thread Nathaniel Smith
On Wed, Aug 29, 2018 at 10:25 AM, Brett Cannon  wrote:
>
>
> On Wed, 29 Aug 2018 at 01:56 Nathaniel Smith  wrote:
>>
>> On Tue, Aug 28, 2018 at 11:46 AM, Brett Cannon  wrote:
>> > py36
>> >
>> > py36-none-% but not py36-none-any: 2 (example)
>> >
>> > py3
>> >
>> > py3-none-% but not py3-none-any: 142 (example)
>>
>> Oh right, and these ones are totally sensible: this is the correct tag
>> for a project that ships some vendored shared libraries, and accesses
>> them using cffi's ABI mode, or through ctypes: it cares about the
>> CPU/OS ABI, but doesn't use the Python C ABI.
>
>
> Yep. I was just surprised that py37-none-% wasn't being emitted as
> acceptable since that technically makes sense.

Setuptools never creates such wheels, so I guess it's not well tested.
The main reason they exist at all is that Armin Ronacher jumped
through a bunch of hoops to make it happen in his milksnake [1]
project, and it's not even a year old.

[1] https://github.com/getsentry/milksnake

> I think figuring out what makes sense in terms of compatibility will be the
> toughest bit. E.g. for Python 3.7, pip will check for py37-none-any down to
> py30-none-any as well as py3-none-any. With python_requires in metadata well
> as the py3 interpreter tag, I'm not sure if it still makes sense to
> enumerate all the way down to py30, especially when Python doesn't follow
> strict semver. Maybe for Python 3.7 py37, py3, and py36 makes the most sense
> by assuming code is warning-free in Python 3.6 and so should be relatively
> safe to use in 3.7 with warnings? Otherwise I wouldn't expect e.g. 3.5 code
> to work in 3.7 since there's new keywords that old code might break on.

This is a tricky decision. Any time a new Python comes out, some
existing wheels will continue to work fine, and some will be broken.
One goal is to avoid installing broken wheels. But, there's also
another consideration: if we're too conservative, then with every
release we create a bunch of make-work as projects have to re-roll old
wheels that would have worked fine, and some percentage of projects
won't do this (e.g. b/c they're abandoned), and we lose them forever.
Also, for the py3x tags in particular, if the wheel fails on py3(x+1),
then the sdist probably will too, so it's not like we have any useful
fallback.

So, it's arguably better to be optimistic and assume that all py3x
wheels will work on py3(x+k), even if it's sometimes wrong, because
when we're wrong the failure modes are more acceptable.

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
--
Distutils-SIG mailing list -- distutils-sig@python.org
To unsubscribe send an email to distutils-sig-le...@python.org
https://mail.python.org/mm3/mailman3/lists/distutils-sig.python.org/
Message archived at 
https://mail.python.org/mm3/archives/list/distutils-sig@python.org/message/VD3BPEYZPMKXYTVON223WMH2PNLRXKLC/


[Distutils] Re: Three clarification questions about PEP 425 and PyPy3

2018-08-29 Thread Nathaniel Smith
On Tue, Aug 28, 2018 at 11:46 AM, Brett Cannon  wrote:
> py36
>
> py36-none-% but not py36-none-any: 2 (example)
>
> py3
>
> py3-none-% but not py3-none-any: 142 (example)

Oh right, and these ones are totally sensible: this is the correct tag
for a project that ships some vendored shared libraries, and accesses
them using cffi's ABI mode, or through ctypes: it cares about the
CPU/OS ABI, but doesn't use the Python C ABI.
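
Concretely, such a package does something like this sketch (the library
name and layout are made up):

    import ctypes
    import os

    # Load a shared library that was vendored inside the wheel; we depend
    # on the CPU/OS ABI but never touch the CPython C ABI.
    _libdir = os.path.join(os.path.dirname(__file__), ".libs")
    _lib = ctypes.CDLL(os.path.join(_libdir, "libfoo.so"))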

> In the end I think you can view the interpreter tag as representing a
> namespace for the ABI tag.

The ABI tags are all designed to be unique though, without
namespacing. Also, in theory the semantics are slightly different,
because cp36 means "3.6 or higher", while cp36m means "exactly 3.6,
with --enable-pymalloc but without --enable-debug". A "cp35-cp36m"
wheel is technically possible, though of course not very useful in
practice...

> That's exactly what I'm in the process of doing. :) My goal is to have a
> library that tools will drop their internal copies of pep425tags for so
> there's a standardized PEP 425 implementation. I just wanted to make sure
> that before I write any more code that I knew what needed to be handled for
> backwards-compatibility versus what is a historical accident or was a guess
> at what the future might need when the PEP was written.
>
> Anyway, I will give this a think and try to come up with a reasonable
> algorithm for generating the sequence of supported tags based on a specific
> tag and Python version and then code that up into a library (at least I will
> definitely have something to work on at the dev sprints :) .

Cool, see you there :-)

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
--
Distutils-SIG mailing list -- distutils-sig@python.org
To unsubscribe send an email to distutils-sig-le...@python.org
https://mail.python.org/mm3/mailman3/lists/distutils-sig.python.org/
Message archived at 
https://mail.python.org/mm3/archives/list/distutils-sig@python.org/message/JDUMMO57S3MHWIXZYD4WQGKM2VAK4FLZ/


[Distutils] Re: Three clarification questions about PEP 425 and PyPy3

2018-08-29 Thread Nathaniel Smith
On Tue, Aug 28, 2018 at 11:46 AM, Brett Cannon  wrote:
> cp36
>
> %cp36-none-any.whl: 7 (example)
> %cp36-none-%.whl: 70 (example)
> cp36-none-%.whl but not cp36-none-any.whl: 65 (example that Nathaniel knows
> very well ;)

Yeah, that's an old hack that never got removed, and causes problems:
https://github.com/numpy/numpy/issues/11508

Actually I wouldn't be surprised if most of those 65 are from projects
using 'multibuild' that inherited that hack.

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
--
Distutils-SIG mailing list -- distutils-sig@python.org
To unsubscribe send an email to distutils-sig-le...@python.org
https://mail.python.org/mm3/mailman3/lists/distutils-sig.python.org/
Message archived at 
https://mail.python.org/mm3/archives/list/distutils-sig@python.org/message/NBC3I2GJLWGHHCTEV6ZSKQZIZJAL4JZX/


[Distutils] Re: Three clarification questions about PEP 425 and PyPy3

2018-08-27 Thread Nathaniel Smith
I think the answer to all of these questions is "well, no-one's ever
really looked that closely".

There's a theory behind the tags; they're supposed to be a reasonably
expressive language for talking about Python dialect compatibility,
Python C ABI compatibility, and platform ABI compatibility,
respectively. But in practice so far only a small fixed set of tag
combinations actually gets used, so there's plenty of room for weird
stuff to accumulate in the corners where no-one looks.

I've never been able to figure out a use case for the interpreter tags
in the first field ("cp36", "pp3", etc). IIUC, the theory is that
they're supposed to mean "the Python code in this package is not
actually portable Python, but uses an interpreter-specific dialect of
the language". (This is very different from the second field, where
tags like "cp36m" tell you about the required C ABI -- that one is
obviously super useful.) I guess if you had a package that like,
absolutely depended on GC being refcount-based, you could use a cp3
tag to indicate that, or if you had a pure-Python, python 2 package,
that required 'dict' to be ordered, maybe that's 'pp2-none-any'? But
this never seems to actually happen in practice. It seems like an idea
that sounded plausible early on, and then never got fleshed out or
revisited.

The distutils folks have never sat down to seriously think about
non-CPython implementations, where the language version and the
implementation version are separate things.

The pypy folks have never sat down to seriously think about API/ABI
stability. Generally at the Python dialect level they try to match a
given version of (C)Python, and at the ABI level every new release is
a new ABI.

My guess is you shouldn't spend too much effort on trying to slavishly
reproduce pip's logic, and that if you wanted to go clean up pip's
logic (and maybe extract it into a reusable library?) then the devs
would be perfectly happy that someone was doing it...

-n

On Mon, Aug 27, 2018 at 6:28 PM, Brett Cannon  wrote:
> And to help in getting a reply, here is the trimmed-down results for CPython
> 3.7 to compare against:
>
> [('cp37', 'cp37m', 'macosx_10_13_x86_64'),
> …
>  ('cp37', 'abi3', 'macosx_10_13_x86_64'),
> …
>   ('cp37', 'none', 'macosx_10_13_x86_64'),
> …
>  ('cp36', 'abi3', 'macosx_10_13_x86_64'),
> …
>   ('cp35', 'abi3', 'macosx_10_13_x86_64'),
> …
>   ('cp34', 'abi3', 'macosx_10_13_x86_64'),
> …
>  ('cp33', 'abi3', 'macosx_10_13_x86_64'),
> …
>  ('cp32', 'abi3', 'macosx_10_13_x86_64'),
> …
>  ('py3', 'none', 'macosx_10_13_x86_64'),
> …
>  ('cp37', 'none', 'any'),
>  ('cp3', 'none', 'any'),
>  ('py37', 'none', 'any'),
>  ('py3', 'none', 'any'),
>  ('py36', 'none', 'any'),
>  ('py35', 'none', 'any'),
>  ('py34', 'none', 'any'),
>  ('py33', 'none', 'any'),
>  ('py32', 'none', 'any'),
>  ('py31', 'none', 'any'),
>  ('py30', 'none', 'any')]
>
> So, it re-iterate the questions:
>
> What is ('pp3', 'none', 'any') supposed to represent for PyPy3? Since the
> version of the interpreter is PyPy3 6.0 the lack of major version number
> seems like a bug more than a purposeful interpreter version (and there's
> only a single project -- cliquet -- that has a wheel that's compatible with
> that tag triple and it's not even for their latest release).
> Why does CPython have (*, 'none', 'any') from the version of the interpreter
> down to Python 3.0 plus generically Python 3 while PyPy3 only gets generic
> Python 3?
> Why isn't (*, 'none', platform) listed from Python 3.7 to 3.0 for either
> CPython or PyPy3? I understand not iterating through all versions when an
> ABI is involved (without knowing exactly which versions are compatible like
> abi3), but this triple seems safe to iterate through as a fallback just as
> much as (*, 'none', 'any'). Maybe because it's too ambiguous to know how
> important such a fallback would be between e.g. ('py36', 'none',
> 'macosx_10_13_x86_64') and ('py37', 'none', 'any'), and so why bother when
> the older version triples are there just for a safety net to have at least
> some chance of a match?
> I still think ('py360', 'none', 'any') is a bug. ;)
>
>
> P.S.: The ('py3', 'none', 'macosx_10_13_x86_64') triple being between e.g.
> ('pp360', 'none', 'macosx_10_13_x86_64') and ('pp360', 'none', 'any') is
> really messing with my head and making the code to generate supported
> triples a bit less elegant. ;)
>
> On Sat, 25 Aug 2018 at 15:03 Brett Cannon  wrote:
>>
>> I noticed that for PyPy3, the tag triples considered compatible were
>> (roughly; trimmed out the long list of macOS versions):
>>
>> [('pp360', 'pypy3_60', 'macosx_10_13_x86_64'),
>>  ('pp360', 'none', 'macosx_10_13_x86_64'),
>>   ('py3', 'none', 'macosx_10_13_x86_64'),
>>  ('pp360', 'none', 'any'),
>>  ('pp3', 'none', 'any'),
>>  ('py360', 'none', 'any'),
>>  ('py3', 'none', 'any')]
>>
>> Now the 

[Distutils] Re: Packaging Advice for EFF's Certbot

2018-07-26 Thread Nathaniel Smith
On Thu, Jul 26, 2018, 05:48 Ben Finney via Distutils-SIG
<distutils-sig@python.org> wrote:

> Brad Warren  writes:
>
> > Our main use case has been the long tail of individuals or small teams
> > of sysadmins who maintain servers and need or want help and automation
> > around maintaining a secure TLS configuration.
>
> For what it's worth, I certainly concur that most people in the group
> you describe will thank you to not have a bundle of custom dependencies
> from outside the OS repository, and instead make Certbot work with
> (and/or be advocates to introduce) the dependencies as OS-provided
> library packages.
>

I wonder if they have any user survey or anything that would speak to this.

FWIW my impression is that the kinds of sysadmins who know enough to have
opinions about packaging hygiene also know enough to set up one of
certbot's many simpler, less magical competitors. Certbot's target audience
has always been folks who didn't really understand any of this stuff and
wanted to just hit a button and have things somehow work out by whatever
means necessary. Which is great, those people deserve secure
communications. But the point is that you can't make everything your number
one priority, and when they have to choose, certbot has chosen to
prioritize "make it work" over "fit nicely into a traditional distro", and
I think that's the right decision for what they are.

> This is definitely our preferred approach to building native packages
> > right now. To be honest, no one on my team has any experience building
> > .debs and .rpms and we’re happy to learn what we need to if we go with
> > this approach, but the more reliable automation around the process we
> > can use the better.
>
> For Debian, instead of becoming packaging experts yourselves, instead we
> ask that you make Certbot easier for *us* to package and maintain in
> Debian. See https://wiki.debian.org/UpstreamGuide; other OS
> projects likely have similar documentation.
>

That's a nice idea in principle and all, but see also
https://wiki.debian.org/LetsEncrypt which cheerily notes that the version
of certbot shipped in stretch is so old that it doesn't actually work, and
as a workaround it recommends running a command that will take down the
user's website without even warning that that's what it does.

I love Debian. I've been a loyal user for twenty years. But Debian is
really bad at coping with software that needs to react quickly to changing
external conditions, or that depends on a tight feedback loop between users
and developers. (Indeed, Debian's whole value-add is to insert themselves
as a buffer between users and developers, which is great in some situations
but terrible in others.)

Maybe if certbot worked hard enough they could arrange to get special
treatment like Firefox or clamav or something, but in their position I
would see this as a total non-starter. Even if they did somehow manage to
navigate Debian's politics, they'd still have to go and repeat the process
for Redhat, SuSE, Ubuntu, Gentoo, ...

-n

>
--
Distutils-SIG mailing list -- distutils-sig@python.org
To unsubscribe send an email to distutils-sig-le...@python.org
https://mail.python.org/mm3/mailman3/lists/distutils-sig.python.org/
Message archived at 
https://mail.python.org/mm3/archives/list/distutils-sig@python.org/message/EDCELE3LBN7LMUXF6FX4EYERX4I33ELL/


[Distutils] Re: Packaging Advice for EFF's Certbot

2018-07-24 Thread Nathaniel Smith
On Tue, Jul 24, 2018 at 5:48 PM, Brad Warren  wrote:
> Do you know if our approach to using setuptools entry_points to find plugins
> will work [with conda]? This is described at
> https://setuptools.readthedocs.io/en/latest/setuptools.html#dynamic-discovery-of-services-and-plugins.

Yes, installing Python inside a conda environment gives you a regular
Python environment, and things like entry_points work fine.
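
For example (a sketch with illustrative names): a plugin registers itself
in its setup.py, and the host application discovers whatever happens to be
installed, conda environment or not:

    # In the plugin's setup.py:
    from setuptools import setup

    setup(
        name="certbot-example",
        version="1.0",
        py_modules=["certbot_example"],
        entry_points={
            "certbot.plugins": ["example = certbot_example:Authenticator"],
        },
    )

    # In the host application, at runtime:
    import pkg_resources

    for entry_point in pkg_resources.iter_entry_points("certbot.plugins"):
        plugin_class = entry_point.load()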

> On Jul 23, 2018, at 11:10 PM, James Bennett  wrote:
>
>> On Mon, Jul 23, 2018 at 8:17 PM, Alex Walters 
>> wrote:
>>>
>>> As a user of certbot, docker, conda, nix, and guix are non-starters.  I'm
>>> not depending on those tools for my production server (and while docker may
>>> be a dependency for some people, that is hardly universal).  Adding
>>> heavyweight technical dependencies are problematic if your goal is to get
>>> everyone using your software.  You're better off with cx_freeze or
>>> pyinstaller binaries downloaded from a website or a PPA-like-system to add
>>> to system package managers, which are not perfect solutions either.
>>
>> I would emphasize this point.
>
> Not wanting to install a lot of extra software to use Certbot is certainly
> fair and we’d obviously prefer our packaging solution to be as lightweight
> as possible. Thanks for bringing this up as a consideration.

I feel like these responses may be lumping things together more than
is helpful... using docker is quite heavyweight in the sense that it
makes major demands on the host system: you need a docker daemon
running, it has to be root, maybe it needs some special kernel modules
to handle overlay filesystems, and so forth. That's fine if it's there
already, but if it's not then certbot shouldn't be trying to bootstrap
a docker installation from scratch.

Conda OTOH is nothing like that -- it's just tools and conventions for
working with regular files on a regular filesystem. One of its core
design goals was that non-technical grad students should be able to
drop it into a random directory on some HPC cluster where they don't
have root and that's running some weird old distro like Scientific
Linux 6, run one command, and have everything just work. Which sounds
pretty similar to your use case.

I don't know where nix and guix fall on this spectrum.

> On Jul 24, 2018, at 4:36 AM, Nick Coghlan  wrote:
>
> However, there *are* folks that have been working on allowing
> applications to be defined primarily as Python projects, and then have
> the creation of wrapper native installers be a pushbutton exercise,
> rather than requiring careful human handholding.

But it sounds like they also want to be able to install/remove/upgrade
*parts* of the Python project, for their plugin support. And maybe
upgrade the Python interpreter as well. Do any of these tools allow
that? That's the thing that really made me think about conda.

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
--
Distutils-SIG mailing list -- distutils-sig@python.org
To unsubscribe send an email to distutils-sig-le...@python.org
https://mail.python.org/mm3/mailman3/lists/distutils-sig.python.org/
Message archived at 
https://mail.python.org/mm3/archives/list/distutils-sig@python.org/message/S5YXVIMB3JTPNZKR3LNIGJGQ4ZL2P4VK/


[Distutils] Re: Packaging Advice for EFF's Certbot

2018-07-23 Thread Nathaniel Smith
On Mon, Jul 23, 2018 at 4:31 PM, Brad Warren  wrote:
> Hi!
>
> I work at the Electronic Frontier Foundation on Certbot which is the most
> popular end user application for obtaining and installing SSL/TLS
> certificates from Let’s Encrypt. Over the past few years, distributing
> Certbot has been one of our development team's biggest challenges and we’re
> currently rethinking how we do so.
>
> It was suggested to me that I post to this list to see if anyone was
> interested in offering advice for how we should approach this. Of course,
> Certbot is written entirely in Python.
>
> If you’re interested, I wrote up a bit of background and what we’re
> currently thinking at
> https://docs.google.com/document/d/1y2tc65yWnGuYsun9wsXu7ZRLCVT9eih9eu0pj7Ado60/edit?usp=sharing.
> Feel free to reach out to me on or off list or on IRC at bmw on Freenode.

Reading the problem description at the top of your document, my first
thought was that this seemed like exactly what conda is designed for:
a "real" package manager designed to be portable across platforms and
work in isolation from the system package manager.

You should also look at Nix and Guix, which are the other systems I
see people mention in this space.

I'm not an expert in conda at all -- if you want to go down this path
you should probably have a chat with Anaconda and also conda-forge
(which is a very impressive community-run packaging and build effort).
I have some idea about some of the questions you raised though :-):

> How will separately distributed plugins work?

Conda has a system they call "channels" to let third-parties
distribute extra conda packages, and existing systems for
using/hosting/maintaining them. (Sort of similar to Ubuntu PPAs, if
you know those.)

> How should the user invoke Certbot (and maybe conda) if we don’t want to put 
> another Python in the user’s PATH to avoid breaking other Python code on 
> their system?

A little shell script to set the PATH and then exec the right binary
should work. Or just setting up the #! line in your main script
properly.

> What should we do for systems not using 32-bit or 64-bit x86?

I know the conda folks have some stuff for ARM, though I don't know the details.

> If we didn’t want to trust any binaries built by someone else or proprietary 
> code, how much work would that be?

This is where you want to talk to conda-forge – one of the original
motivations was to make a community alternative to Anaconda Inc's
official packages (which were not originally open-source, and do still
contain proprietary code). Nowadays everyone's on better terms, but
having once rebuilt the whole distro from the ground up means they can
probably share some experience with you here.

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
--
Distutils-SIG mailing list -- distutils-sig@python.org
To unsubscribe send an email to distutils-sig-le...@python.org
https://mail.python.org/mm3/mailman3/lists/distutils-sig.python.org/
Message archived at 
https://mail.python.org/mm3/archives/list/distutils-sig@python.org/message/SFKA346UB3UQHZWNKONC63CT5VSKUTHB/


[Distutils] Re: [Distutils]PEP 518 - pyproject.toml with no build-system.requires

2018-07-20 Thread Nathaniel Smith
On Fri, Jul 20, 2018 at 5:01 PM, Brett Cannon  wrote:
> I have updated PEP 518:
> https://github.com/python/peps/commit/af73627e587c25b9ac6f28a0fda01953252df391#diff-f068c801ccb40fad40c0436ff1e25e3f

LGTM. Thanks Brett!

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
--
Distutils-SIG mailing list -- distutils-sig@python.org
To unsubscribe send an email to distutils-sig-le...@python.org
https://mail.python.org/mm3/mailman3/lists/distutils-sig.python.org/
Message archived at 
https://mail.python.org/mm3/archives/list/distutils-sig@python.org/message/547OQJ33Z6J4KKHHTKGP3WEQ55NEUJ7M/


[Distutils]Re: The Wheel specification and compatibility tags on Windows

2018-07-17 Thread Nathaniel Smith
The promise behind the limited ABI is exactly that if your extension works
on 3.x, it will also work on 3.y, for y >= x.

One thing to watch out for: normally extension modules on Linux and MacOS
don't try to link to libpython. Instead they trust that someone else will
make sure they only get loaded into a compatible python interpreter, and
that all the symbols they need from python will be injected by the host
process.

On Windows, the way the dynamic loader works, you can't do this: every
extension module has to explicitly say "I'm getting PyNumber_Add from the
dll named: somethingsomething.dll"

But traditionally, version X.Y of python includes a pythonXY.dll, so
there's no consistent name across releases. So even if your library uses
only the limited ABI and all of your imports could just as well come from
python36.dll or python37.dll... you need some way to express that, and only
Windows has this problem.

I feel bad sending this without doing my own research, but I don't have
access to a Windows box right now. Does anyone know if this problem has
been solved? Is it still true that Windows python dlls always include the
python version in their name?

-n

On Tue, Jul 17, 2018, 09:38 Paul Moore  wrote:

> On 17 July 2018 at 16:59, Cosimo Lupo  wrote:
> > I would like to revive this 5 year old thread and see if we can stir
> things
> > up a bit.
> >
> > Basically the problem is that, in the current state of the PEPs and build
> > tools, it is still not possible to build and distribute a Windows wheel
> that
> > includes an extension module compiled with Py_LIMITED_API.
> > Setuptools can successfully build such extension module on Windows (with
> > `.pyd` file extension and no extra specifiers in the module filename),
> and
> > these seems to work at least on CPython 3.5 and above. However the
> > `--py-limited-api cpXX` option of bdist_wheel command fails on Windows
> > because it attempts to use the `abi3` tag but the latter is not in the
> list
> > of compatible tags for that platform.
> > One can work around this by creating a wheel with `none` as the abi tag,
> and
> > `cp35.cp36.cp37` as the python implementation tag but this feels a bit
> > hackish.
> >
> > Here are some unresolved questions from the old distutils-sig thread,
> > quoting Paul Moore:
> >
> >> 2. How should tools determine which ABIs a given Python supports?
> >> (This is the get_supported function in wheel and distlib). The "Use"
> >> section of the PEP (http://www.python.org/dev/peps/pep-0425/#id1)
> >> gives a Linux-based example, but nothing normative and nothing that is
> >> understandable to a Windows user.
> >
> > And from Vinay Sajip
> >
> >> For Windows, we (eventually) need some low-level sysconfig-supported way
> >> to get the ABI information in an analogous way to how it happens on
> >> POSIX: and because that's not currently there, distlib doesn't provide
> >> any ABI information on Windows other than "none".
> >
> > Other related links:
> > https://github.com/pypa/pip/issues/4445
> > https://mail.python.org/pipermail/distutils-sig/2018-January/031856.html
> >
> > So.. what needs to be done here to allow distributing/installing Windows
> > wheels with Py_LIMITED_API support?
>
> IMO, the question I posed back then remains key. Vinay's response is
> fair, but I don't think that waiting for core Python to provide
> something via sysconfig is practical (it's not happened yet, so why
> would we expect things to change?). So I think the next step is
> probably for someone to propose an algorithm that can be used.
> Actually, what I'd like to see is a full end to end proposal of the
> process someone would use to build and install a limited-ABI
> extension. That would probably tease out a number of issues.
>
> I imagine the steps would be something like this:
>
> 1. Create an extension. Presumably you'd need to #define
> Py_LIMITED_API in the source.
> 2. Build a wheel from it - how would you do that? I gather it's
> possible to do this with plain setuptools - would it be necessary to
> do this with setuptools/bdist_wheel, or should there be a way to
> request a limited ABI build via pip? If we do want to be able to
> request this from a build tools like pip, do we need something in PEP
> 517? Are we only looking at the prebuilt wheel case, or do we need to
> support building from source?
> 3. What tags would the built wheel have?
> 4. Install that wheel - `pip install xxx`. Pip needs to be able to
> enumerate the full list of valid tags here (cp37-abi3, cp3-abi3, ...)
> There are also questions like - if there's a limited ABI wheel and a
> full ABI (version specific) wheel, which takes precedence?
>
> I don't honestly know how well the limited ABI actually achieves its
> goals - is "cp3-abi3-win_x86_64" a realistic tag to apply? Can limited
> ABI wheels really be used on any version of Python 3? That's a
> question for python-dev, rather than distutils-sig, but if we take the
> position that this is what's 
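
(For concreteness, steps 1 and 2 in the quoted list might look something
like this today -- a minimal sketch, assuming the `--py-limited-api`
bdist_wheel option mentioned earlier in the thread and CPython's
documented Py_LIMITED_API macro:

# setup.py -- sketch only, not a tested recipe
# the C source would start with:
#     #define Py_LIMITED_API 0x03050000   /* before #include <Python.h> */
from setuptools import setup, Extension

setup(
    name="demo",
    version="0.1",
    ext_modules=[Extension("demo._demo", sources=["demo/_demo.c"])],
)

and then be built with `python setup.py bdist_wheel --py-limited-api cp35`,
which is exactly where the tagging questions above kick in.)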

[Distutils] Re: PEP 518 - pyproject.toml with no build-system.requires

2018-07-16 Thread Nathaniel Smith
On Mon, Jul 16, 2018 at 11:27 AM, Donald Stufft  wrote:
>
> On Jul 16, 2018, at 5:22 AM, Paul Moore  wrote:
>
>> 1. If [build-system] is present but requires is missing, raise an error.
>> 2. If [build-system] is missing, they can take one of the following
>> two approaches:
>>   a) Act as if pyproject.toml is missing altogether
>>   b) Act as if [build-system] is present, with a requires value of
>> ["setuptools", "wheel"]
>>
>> Whether tools act differently in cases 2a and 2b is tool-dependent
>> (for pip, we would isolate in case 2b but not in case 2a) which is why
>> the choice is left to individual tools. That makes the
>> "Thomas/Nathaniel" debate into a tool implementation choice, and both
>> of the options are allowable from the perspective of the PEP.
>
> This sounds fine to me, and I prefer a 2b approach.

I also prefer option 2 (and specifically 2b but like you say, 2a vs 2b
isn't something the PEP cares about), just because it's the simplest
possible approach: we always act the same when build-system.requires
is missing, regardless of why it's missing. And it's the same logic as
we use to handle a missing build-system.build-backend.

It doesn't matter that much though. It seems extremely unlikely that
anyone's going to create an empty [build-system] section just because
they can...

-n

-- 
Nathaniel J. Smith -- https://vorpus.org


[Distutils] Re: pypi/twine complains about license

2018-07-11 Thread Nathaniel Smith
PyPI is not the license police. You can specify any license you like in the
dedicated, free-form "license" field.

That's the "license" field. But, PyPI does require that values in the
"classifiers" field have to be taken from a known set. Among other things,
this prevents typos, and prevents people making up different names for the
same thing, which would defeat the purpose of classifier-based searching.
This isn't a new thing; old PyPI did the same thing.

The list of legal classifiers is stored inside the PyPI database. New ones
are added from time to time on request.

I don't know why you're having this experience of a classifier you think
used to be supported no longer being supported. You say the license field
is the same as on previous uploads. But the license field isn't the issue
here. Is the classifiers field also the same?

I believe there is no longer any manual upload mechanism – or rather, twine
is the manual upload mechanism :-).

I'm not sure what's going on with uploading the same file repeatedly
without error – that seems weird. But I know in general that PyPI is very
strict about making sure that once a file is uploaded, it never changes. So
I don't think there's any risk of that. Possibly PyPI is noticing that the
file you're trying to upload is identical to the one that's already there
and counting that as a "successful upload"?
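
For illustration, the intended division of labour is something like this
(a sketch with made-up metadata; the classifier must be an exact string
from PyPI's known list, while the license field is free text):

from setuptools import setup

setup(
    name="example",
    version="1.0",
    license="ReportLab BSD derived",  # free-form text: anything goes
    classifiers=[
        # must match a known classifier exactly:
        "License :: OSI Approved :: BSD License",
    ],
)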

On Wed, Jul 11, 2018, 09:15 Robin Becker  wrote:

> After release of Python-3.7 I wanted to upload to pypi a newly built
> version of a C-extension which already has been migrated to
> the new site.
>
>
> $ twine --version
> twine version 1.11.0 (pkginfo: 1.4.2, requests: 2.18.1, setuptools: 36.2.0,
> requests-toolbelt: 0.8.0, tqdm: 4.14.0)
> $ twine upload *.whl
> Uploading distributions to https://upload.pypi.org/legacy/
> Uploading pyRXP-2.1.1-cp37-cp37m-manylinux1_i686.whl
> 100%|| 104K/104K [00:00<00:00, 141KB/s]
> HTTPError: 400 Client Error: Invalid value for classifiers. Error:
> 'License :: OSI Approved :: ReportLab BSD derived' is not a
> valid choice for this field for url: https://upload.pypi.org/legacy/
>
> 1) I think it is completely wrong for twine/pypi to fail to upload because
> of the license field. The license is derived from BSD
> and the same string is present in the previously uploaded versions of this
> package. What are valid licenses? Presumably pypi is
> now a gatekeeper for the license police.
>
> 2) I looked in vain on the new pypi.org site for a manual upload
> mechanism. Is this now frowned on?
>
> 3) I was able to upload the same package several times without error; does
> this mean I am overwriting the file?
> --
> Robin Becker


[Distutils] Re: PEP 518 - pyproject.toml with no build-system.requires

2018-07-07 Thread Nathaniel Smith
On Sat, Jul 7, 2018 at 3:41 AM, Paul Moore  wrote:
> On 30 June 2018 at 06:33, Nick Coghlan  wrote:
>> On 28 June 2018 at 11:37, Nathaniel Smith  wrote:
>>> So my inclination is to plan on ending up with build-system.requires
>>> defaulting to ["setuptools", "wheel"], and build-system.backend
>>> defaulting to "setuptools". Hopefully we'll eventually get to a place
>>> where ~no-one uses these defaults, but carrying around the code to
>>> handle the defaults isn't really a burden.
>>
>> While I was going to post to say I liked this approach, after a bit of
>> reflection, I realised I prefer Thomas Kluyver's suggestion: instead
>> of taking "pyproject.toml" as indicating a build-isolation compatible
>> sdist file, instead make "pyproject.toml with a build-system table"
>> the marker for that case.
>
> As far as I can see, the only difference this makes is that it means
> pip retains the legacy (non-isolated) behaviour in a few more places
> (specifically places where it's quite likley the project hasn't
> thought about build isolation). So it's basically a slightly more
> forgiving version of Nathaniel's proposal.
>
> The part of Nathaniel's approach that I think would be most confusing
> is a project that currently uses setup_requires which adds a
> pyproject.toml for (say) towncrier. The build would become isolated,
> but setup_requires (which is implemented by setuptools, not pip) would
> ignore the isolated environment and install in the wrong place (maybe?
> I honestly don't know). I'm quite happy to call this deprecated
> behaviour and point out that the project should switch to explicitly
> using PEP 518, but given that this whole discussion is because people
> haven't done that, I suspect Nathaniel's proposal doesn't actually
> solve the root issue here...

Re: the interaction of build isolation and setup_requires: it looks
like this is totally fine, actually. Based on some experiments +
checking the docs, it appears that setup_requires has always done some
magic where it doesn't actually try to install the requested packages
into the current environment, but instead drops them inside the build
directory and then uses some import system tricks to "overlay" them
onto the current process's python path. So build isolation +
setup_requires actually work very well together.
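
For comparison, the legacy mechanism looks like this (a sketch with a
hypothetical project; towncrier stands in for any build-time-only
dependency):

# setup.py -- setuptools resolves setup_requires itself at build time
# and overlays the packages onto sys.path from the build directory,
# rather than installing them into the environment
from setuptools import setup

setup(
    name="example",
    version="0.1",
    setup_requires=["towncrier"],
)

The PEP 518 equivalent is to list the same requirement in
build-system.requires, and pip installs it into the isolated build
environment instead.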

I think in the long run, we want to enable build isolation everywhere.
Packages that break when installed with build isolation are already
broken when running 'pip install' in a fresh virtualenv. There
probably are a few of these out there still that say things like
"before installing this package, please install these other packages,
as a separate call to pip", but it's been a long time now since I've
seen one of those. And since they're already asking users to follow
some finicky manual install procedure, requiring --no-build-isolation
isn't a big deal.

So, I don't care that much about what we use to trigger build
isolation mode, because it's only a temporary thing anyway. The value
of keying off something involving pyproject.toml is that it
automatically gives us a kind of soft rollout: people adopting
pyproject.toml are probably more willing to put up with issues with
new packaging features, so we can hopefully shake out any problems
before it becomes the standard.

This suggests that our decision should be based on: if we want to be
relatively more aggressive about rolling out build isolation, then we
should key on the existence of pyproject.toml. If we want to be
relatively more conservative, then we should key on the existence of
build-system.requires.
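
In pseudo-code, the two policies come down to something like this (a
sketch, not pip's actual logic; load_toml is a hypothetical stand-in for
whatever TOML parser the installer uses):

import os

def should_isolate(project_dir, key_on_file_existence):
    # Sketch: two possible triggers for enabling build isolation.
    path = os.path.join(project_dir, "pyproject.toml")
    if not os.path.exists(path):
        return False  # legacy project under either policy
    if key_on_file_existence:
        return True  # aggressive: any pyproject.toml opts you in
    # conservative: only an explicit build-system.requires opts you in
    config = load_toml(path)  # hypothetical helper
    return "requires" in config.get("build-system", {})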

>> If you don't have a build-system table at all, then you'll continue to
>> get the legacy sdist handling, allowing the addition of other tool
>> config without impacting the way your sdist gets built.
>>
>> If you do add a build-system table, then you have to populate the
>> "requires" field properly, even if you're using setuptools as your
>> build backend.
>>
>> That way, the "build-system.backend defaults to setuptools" behaviour
>> is only there to support pyproject.toml files that have already opted
>> in to build isolation by writing:
>>
>> [build-system]
>> requires = ["setuptools", "wheel"]
>>
>
> That sounds fair to me. In terms of PEP wording:
>
> 1. build-system.requires becomes *optional* in pyproject.toml
> 2. Tools should process projects without pyproject.toml in the same
> way as they always have (backward compatibility). For pip, that means
> no build isolation, and the old-style processing path.
> 3. Tools should treat projects with pyproject.toml, but with *no*
> build-system.requir

[Distutils] Re: Handing over default BDFL-Delegate responsibilities for packaging interoperability PEPs to Paul Moore

2018-07-06 Thread Nathaniel Smith
Nick, thanks so much for your service in an often thankless job. It is
appreciated! And Paul, thanks for taking this on!

On Fri, Jul 6, 2018, 19:08 Nick Coghlan  wrote:

> Hi folks,
>
> Since 2013, I've been the default BDFL-Delegate for packaging
> interoperability PEPs. In that time, the Python packaging ecosystem
> has moved forward in a lot of different areas, with pip being shipped
> by default with CPython, the wheel binary packaging format reaching
> ever-increasing heights of popularity, the cross-distro manylinux ABI
> compatibility specification being developed, the new pyproject.toml
> based sdist format being defined, the PSF's Packaging Working Group
> being formed, the Python Packaging User Guide being developed, and
> various aspects of the packaging metadata being enhanced to improve
> the general user experience of the Python packaging ecosystem.
>
> The role of the BDFL-Delegate in that process is partly about making
> arbitrary decisions when arbitrary decisions need to be made ("The
> bikeshed shall be green!"), but also about helping to guide
> discussions in productive directions, as well as determining when more
> complex PEP level proposals have reached a sufficient level of
> consensus that it makes sense to provisionally accept them and move on
> to publishing reference implementations.
>
> While it's been a fascinating ~5 years, I've decided that it's time
> for me to hand over those responsibilities to another PyPA
> contributor. With Guido's approval, I've asked Paul Moore if he'd be
> willing to take on the role, and Paul has graciously accepted the
> additional responsibility.
>
> Paul's a long term pip contributor, and also a CPython core developer,
> with a lot of practical experience in getting Python (and Python
> packaging) to work well in Windows environments. He's also a familiar,
> calm, and constructive presence in design discussions within
> distutils-sig, pip and other PyPA projects, which is an important
> characteristic when taking on BDFL-Delegate responsibilities.
>
> I'd like to personally thank Paul for being willing to take on this
> task, and I look forward to many more productive design discussions!
>
> Cheers,
> Nick.
>
> P.S. I'm not stepping down from Python packaging related activities
> entirely, as I'll still be involved in Python Packaging User Guide and
> pipenv maintenance, and will continue as a member of the PSF's
> Packaging Working Group. However, the final sign-off for packaging
> interoperability PEPs will now rest with Paul or someone else that
> he appoints, rather than with me :)
>
> --
> Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia


[Distutils] Re: PEP 518 - pyproject.toml with no build-system.requires

2018-06-28 Thread Nathaniel Smith
On Thu, Jun 28, 2018 at 11:19 AM, Paul Moore  wrote:
> On 28 June 2018 at 18:45, Bernat Gabor  wrote:
>> In the pep it's stated tools can use the tool section
>> https://www.python.org/dev/peps/pep-0518/#id28 and at no point says build
>> tools only. So I don't think it's at all strange that towncrier uses it. It
>> follows the words of the PEP quite rigorously.
>
> The whole PEP is about *build* tools, and in that context, that's what
> was meant. Maybe the PEP should have been more explicit. Maybe we
> should have thought about this at the time. Maybe lots of things, but
> the reality is that we *didn't* intend it to be used for non-build
> tools, and nevertheless it is being used in that way. We should deal
> with things as they are now, and not spend ages debating whether
> things "should" be the way they are.

Oh, I totally imagined that it would be used for non-build tools. I
was thinking of stuff like pytest when I wrote that 'tool namespace'
section, and of the way all kinds of non-build tools had already
colonized setup.cfg (cf. Donald's "paving the cow paths").

It's the One Obvious place for project-specific configuration. That's
awesome! Python packaging needs more One Obvious things.
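
For instance, configuration like this was already showing up in the wild
(a sketch; the exact keys are up to each tool, but towncrier and black
both follow this pattern):

# pyproject.toml
[tool.towncrier]
package = "myproject"   # hypothetical project name

[tool.black]
line-length = 88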

-n

-- 
Nathaniel J. Smith -- https://vorpus.org


[Distutils] Re: PEP 518 - pyproject.toml with no build-system.requires

2018-06-27 Thread Nathaniel Smith
On Wed, Jun 27, 2018 at 2:25 PM, Paul Moore  wrote:
> On 27 June 2018 at 22:09, Pradyun Gedam  wrote:
>>
>> On Wed, Jun 27, 2018 at 9:15 PM Paul Moore  wrote:
>>>
>>> On 27 June 2018 at 15:59, Pradyun Gedam  wrote:
>
>>> >
>>> > Assuming we are going to disallow missing build-requires,
>>> > I think a better way for this would be to allow a user to override
>>> > build-requires on a per-package basis. It'd be a more verbose
>>> > and also clearer about which packages are needing some sort
>>> > of work-around to install, pushing packages to just directly
>>> > specify build-requires in future releases.
> [...]
>>> 1. Project has pyproject.toml with build-system.requires specified. No
>>> problem, full PEP 518 behaviour (and pip uses build isolation).
>>> 2. Project has pyproject.toml but no build-system.requires, Illegal,
>>> confirmed by this discussion, so terminate with an error (pip currenly
>>> warns, but will move to an error after a deprecation period).
>>> 3. Project has no pyproject.toml. Old-style project, PEP 518 says that
>>> the default is [setuptools, wheel]. Pip will actually use the legacy
>>> (no build isolation) code path, which is a backward compatibility
>>> choice. I'm not actually sure PEP 518 needs to even comment on this
>>> case, as such projects clearly don't conform to that PEP and tools
>>> like pip will of necessity have to handle them as "legacy".
>>>
>>> The only case where I can see your "per-package overrides" fitting in
>>> would be (3), which is outside the scope of PEP 518 and so really a
>>> pip-only issue.
>>
>> I am suggesting it for (2).
>
> But Brett clearly stated that he views PEP 518 as stating that the
> build-system.requires key in pyproject.toml is *not* optional. And I
> think that's the correct reading of the PEP.

FWIW, I'm a co-author on the PEP, and I've apparently been shipping
software without this key for most of a year:
https://github.com/python-trio/trio/blob/master/pyproject.toml
Clearly I should have read my PEP more closely...

> OK. But I'm going to take the view that having explicitly requested
> clarification on distutils-sig, if we want to do anything other than
> reject a pyproject.toml with a missing build-system.requires as
> invalid, we need to first of all get the PEP changed. And for that
> we'll have to have a good use case to justify the change. Personally,
> I don't see the value.

If pip starts erroring out when pyproject.toml is missing
build-system.requires, then 'pip install twisted' will stop working
and everyone will start screaming on twitter:
https://github.com/twisted/twisted/blob/trunk/pyproject.toml

(Trio won't actually be affected b/c it has a universal wheel, so pip
always uses that instead of the sdist. But twisted only has wheels for
python 2 on Windows; all other configurations build from source, and
thus will break.)

We probably need to update the PEPs some here anyway, if only for clarity.

Also, right now PEP 517 says "If the pyproject.toml file is absent, or
the build-backend key is missing, the source tree is not using this
specification, and tools should fall back to running setup.py." But
thinking about it again, we probably don't want to do this, because it
adds Yet Another build configuration that pip has to handle:

1. no pyproject.toml -> legacy non-isolated build via legacy setup.py support
2. pyproject.toml without build-backend specified -> new isolated
build via legacy setup.py support
3. pyproject.toml with build-backend specified -> new isolated build
via new build-backend support
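
Spelled out as a dispatch, that's something like (a sketch, not pip's
actual code):

def choose_build_path(pyproject):
    # pyproject: the parsed pyproject.toml dict, or None if absent
    if pyproject is None:
        return "legacy non-isolated build"          # configuration 1
    if "build-backend" not in pyproject.get("build-system", {}):
        return "isolated build via setup.py"        # configuration 2
    return "isolated build via named backend"       # configuration 3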

Once pip and setuptools both have PEP 517 build backend support (which
should hopefully happen soon?), then option (2) will become weird and
unnecessary. It would be nice to get rid of it. So I think we'll be
having a similar discussion in a few months about how to handle
pyproject.toml without build-backend keys. At that point we'll have a
few options:

- We could say that build-system.build-backend is mandatory, and error
out if it's missing. But that will break the world (probably even more
so than making build-system.requires mandatory now will break the
world, since by the time this happens there will be more
pyproject.toml files in the wild).

- We could shrug and say that updating PEPs is a lot of work so let's
just go with what we wrote way back when. But that forces pip to
forever carry around code to implement both option (2) and (3), which
do essentially the same thing except with extra code and probably
there will be weird bugs that show up in one configuration or the
other, etc.

- We could say that when build-system.build-backend is missing, it
defaults to "setuptools". But it's *really weird* to have a default
value for build-system.build-backend while not having one for
build-system.requires, because, effectively, the interpretation of
build-system.build-backend depends on the value of
build-system.requires. The build backend is an object that gets looked
up inside those required packages. If we're going 

[Distutils] Re: PEP 518 - pyproject.toml with no build-system.requires

2018-06-27 Thread Nathaniel Smith
On Wed, Jun 27, 2018 at 8:00 AM, Pradyun Gedam  wrote:
>
> On Sun, Jun 24, 2018 at 10:50 AM Nathaniel Smith  wrote:
>>
>> To go a bit against the grain here, I think at this point I'd suggest
>> that if "build-system.requires" is missing, it should be silently
>> treated as if it had been set to ["setuptools", "wheel"]. Reasoning:
>>
>> - Implementing this should require only a trivial amount of code, now
>> and in the long run. In particular, I'm *not* suggesting that if the
>> "build-system.requires" key is missing then we should act like
>> pyproject.toml is missing altogether -- that's a much more complex
>> legacy code path that we'd like to eventually remove. I'm suggesting
>> we literally do something like:
>>
>> try:
>>     requires = config["build-system"]["requires"]
>> except KeyError:
>>     requires = ["setuptools", "wheel"]
>>
>> and then treat them exactly the same from then on.
>
>
> Defaulting to this behavior means that the way a source distribution is
> built changes (build isolation is enabled by pip) because configuration
> for a tool was added. This is surprising for users since one of the things
> this means is they need to have wheels available (so that pip can
> find/use them) for `setuptools` and `wheel` to install packages. We've
> had multiple users report this on pip's tracker.

I'm not sure I understand the specific surprise you're talking about
-- are you saying it's common for people to somehow find themselves in
environments where setuptools and wheel are not installable?

> Having users specify their build-requirements explicitly is a stronger
> opt-in that can be used to explain the behavior in this case as "that
> project uses PEP 518 and build isolation" vs "that project has
> configuration for towncrier". :)
>
> The only other option is falling back to legacy behavior in this case,
> which obviously isn't what we want here.

This does make some sense. I'm pondering it :-). To make sure I
understand, you're thinking of a scenario like:

[user] Your software is broken! Installing it used to work fine,
but now it's giving me an error saying 'can't install setuptools'!
[dev] Uh, that's weird, I'm not sure why that happens. Let me look into it.
[...time passes...]
[dev] Okay, I figured it out: it's because we added a pyproject.toml
file, and that switches on this 'build isolation' thing, which
shouldn't make a difference, unless... does 'pip install setuptools'
work for you?
[user] Oh yeah I didn't think it was worth mentioning, but we have
a broken firewall where we have to manually download each package and
install it one at a time. Do you think my not being able to install
setuptools might have something to do with getting an error saying
'can't install setuptools'?
[dev] It's possible, yeah.
[user] So how are you going to fix it?
[dev] Well, we're not going to stop using towncrier because your
firewall is broken, and even if we did then you'd have the same
problem with like, every other Python package you ever want to
install, so you should probably fix stuff on your end. Good luck.

And in particular, you're talking about trying to optimize that
'[...time passes...]' part at the beginning: if the dev added a
pyproject.toml just to use towncrier and never even realized that
[build-system] sections are a thing that exists, then it might take
them a while to figure out that the error message and the
pyproject.toml file could be related. OTOH, if they were forced to
type '[build-system] requires = ["setuptools", "wheel"]', then it
increases the chances that a few months later when they get this
request from a user, they'll think "hey, wasn't there something about
setuptools that I touched recently...?" and that will speed up their
figuring it out. Have I understood all that correctly?

That all sounds like something that could happen, but it still feels a
bit... tenuous, or something? If I were the dev here then there's an
excellent chance that I'd have totally forgotten about that
'[build-system]' stuff before the bug report comes in, and even if I
haven't, the connection to their problem isn't obvious, and in any
case knowing the issue doesn't really change the conclusion -- if
someone can't install setuptools then that's the thing they need to
fix. If this is a common scenario, then isn't the real solution for
pip to print a better error message in the first place, so that the
dev doesn't need to debug this from scratch?

-n

-- 
Nathaniel J. Smith -- https://vorpus.org


[Distutils] Re: Possible change to PEP 517: look up the backend as $BACKEND.__build_backend__?

2018-06-24 Thread Nathaniel Smith
On Sun, Jun 24, 2018 at 12:47 AM, Thomas Kluyver  wrote:
> On Sun, Jun 24, 2018, at 7:19 AM, Nathaniel Smith wrote:
>> What do you think? (Thomas, I'd love your thoughts in particular :-).)
>
> I agree that it looks nicer, but I'm not sure that it's worth the added 
> complexity: is 'flit' equivalent to 'flit.__build_api__' (i.e. from flit 
> import __build_api__), or to 'flit:__build_api__' (import flit and get an 
> attribute called __build_api__)?

I'd say the latter (flit:__build_api__). It doesn't make much
difference in practice, because if you do want to make it a submodule,
then flit/__init__.py can just do 'from . import __build_api__' (or
'from . import buildapi as __build_api__'). And this only affects
build system designers, who already have to jump through a bunch of
small but slightly fiddly hoops -- it's sort of inherent in
implementing this kind of API.

> For Flit, I treat the buildsystem table as boilerplate, and 'flit init' 
> inserts it automatically. So the extra word in 'flit.buildapi' is a very 
> minor inconvenience.

Yes, but pyproject.toml is something that every python dev will be
looking at on a regular basis, and "Beautiful is better than ugly".

(What does 'flit init' do if someone already has a pyproject.toml, by the way?)

-n

-- 
Nathaniel J. Smith -- https://vorpus.org


[Distutils] Possible change to PEP 517: look up the backend as $BACKEND.__build_backend__?

2018-06-24 Thread Nathaniel Smith
Hi all,

I had a thought for something that might be a simple way to improve
dev experience with custom build backends.

A PEP 517 build backend is a Python object that has some special
methods on it. And the way a project picks which object to use, is via
pyproject.toml:

[build-system]
build-backend = "module1.module2:object"

Currently, this means that the build backend is the Python object:

module1.module2.object

Here's my idea: what if we change it, so that the above config is
interpreted as meaning that the build backend is the Python object:

module1.module2.object.__build_backend__

(I.e., we tack a "__build_backend__" on the end before looking it up.)
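
In code, the proposed lookup would be something like (a sketch, assuming
the "module:object" spec syntax that PEP 517 already defines):

import importlib

def resolve_backend(spec):
    # Resolve "module1.module2:object", then tack on __build_backend__.
    module_path, _, object_path = spec.partition(":")
    backend = importlib.import_module(module_path)
    for attr in object_path.split("."):
        if attr:
            backend = getattr(backend, attr)
    return getattr(backend, "__build_backend__")  # the proposed extra step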

Why does this matter? Well, with the current system, if you want to
use flit [1] as your build backend, you have to write:

build-backend = "flit.buildapi"

And if you want to use intreehooks [2], you have to write:

build-backend = "intreehooks:loader"

These names are slightly awkward, because these projects don't want to
just jam all the PEP 517 methods directly onto the top-level module
object, so they each have to invent some ad hoc sub-object to put the
methods on. And then that's exposed to all their users as a bit of
random cruft you have to copy-paste.

The idea of __build_backend__ is that these projects could rename the
'buildapi' and 'loader' objects to be '__build_backend__' instead, and
then users could write:

build-backend = "flit"
build-backend = "intreehooks"
build-backend = "setuptools"

and it just feels nicer.

Right now PEP 517 is still marked provisional, and pip hasn't shipped
support yet, so I think changing this is still pretty easy. (It would
mean a small amount of work for projects like flit that have already
implemented backends.)

What do you think? (Thomas, I'd love your thoughts in particular :-).)

-n

[1] https://github.com/takluyver/flit/
[2] https://github.com/takluyver/intreehooks

-- 
Nathaniel J. Smith -- https://vorpus.org


[Distutils] Re: PEP 518 - pyproject.toml with no build-system.requires

2018-06-23 Thread Nathaniel Smith
To go a bit against the grain here, I think at this point I'd suggest
that if "build-system.requires" is missing, it should be silently
treated as if it had been set to ["setuptools", "wheel"]. Reasoning:

- Implementing this should require only a trivial amount of code, now
and in the long run. In particular, I'm *not* suggesting that if the
"build-system.requires" key is missing then we should act like
pyproject.toml is missing altogether -- that's a much more complex
legacy code path that we'd like to eventually remove. I'm suggesting
we literally do something like:

try:
    requires = config["build-system"]["requires"]
except KeyError:
    requires = ["setuptools", "wheel"]

and then treat them exactly the same from then on.

- Not doing this breaks a number of real projects. Sometimes this is
justifiable because we have to break things to make progress, but it
always creates busywork and pisses people off, so we should only do it
when we have a good reason. In this case providing a default value is
pretty trivial, and will prevent a lot of frustrated queries about why
it's mandatory.

- Providing a default doesn't really compromise the final vision for
the feature: we envision that eventually, pretty much every project
will be specifying this explicitly, and won't *want* to leave it
blank. There isn't any other meaning we want to assign to this being
left blank.

- We're soon going to have to jump through all these hoops *anyway*
for the PEP 517 "build-system.build-backend" key. If it's missing,
then we're going to want to default it to "setuptools" (once
setuptools exports a PEP 517 build backend), which means we're going
to be hardcoding some defaults and knowledge of setuptools into the
pyproject.toml defaults. So we might as well do this for both keys in
the same way.

-n

On Fri, Jun 22, 2018 at 9:32 AM, Pradyun Gedam  wrote:
> Hey everyone!
>
> In PEP 518, it is not clearly specified how a project that has a
> pyproject.toml
> file but has no build-system.requires should be treated (i.e. build-system
> table).
>
> In pip 10, such a pyproject.toml file was allowed and built with setuptools
> and wheel, which has resulted in a lot of projects making releases that
> assumed
> that such a pyproject.toml file is valid and they use setuptools and wheel.
> I understand that at least pytest, towncrier and Twisted might have done so.
> This happened since these projects have included configuration for some tool
> in
> pyproject.toml (some of which use only pyproject.toml for configuration --
> black, towncrier).
>
> There's a little bit of subtlety here, in pip 10's implementation: adding a
> pyproject.toml file enables a new code path that does the build in isolation
> (in preparation for PEP 517; it's a good idea on it's own too) with only the
> build-system.requires packages available. When the build-system.requires key
> is missing, pip falls back to assuming it should be ["setuptools", "wheel"].
> The in-development version of pip currently prints warnings when the key is
> not specified -- along the lines of "build-system.requires is missing" +
> "A future version of pip will reject pyproject.toml files that do not comply
> with PEP 518." and falls back to legacy behavior.
>
> Basically, pip 10 has a distinction between a missing pyproject.toml and
> build-system.requires = ["setuptools", "wheel"] and the PEP doesn't.
> However,
> the PEP's precise wording here would help inform the debate about how pip
> should behave in this edge case.
>
> I can think of at least 2 options for behavior when build-system.requires is
> missing:
>
> 1. Consider a missing build-system.requires equivalent to either a missing
>pyproject.toml or build-system.requires = ["setuptools", "wheel"].
>
> 2. Making the build-system table mandatory in pyproject.toml.
>
> I personally think (2) would be fine -- "Explicit is better than implicit."
>
> It'll be easy to detect and error out in this case, in a way that it's
> possible
> to provide meaningful information to the user about what to do here.
> However,
> this does mean that some existing releases of projects become
> not-installable,
> which is concerning; I do think the benefits outweigh the costs though.
>
> Thoughts on this?
>
> Cheers,
> Pradyun
>
>



-- 
Nathaniel J. Smith -- https://vorpus.org


[Distutils] [ANN] Python 3.7 now available in the manylinux docker images

2018-06-15 Thread Nathaniel Smith
Hi all,

Thanks to Emanuele Gaifas [1], the manylinux docker images now include
Python 3.7 (currently the rc1 release), so if you want you can now
start building and uploading wheels for 3.7 ahead of its expected
release on June 27 [2].

-n

[1] https://github.com/pypa/manylinux/pull/196
[2] https://www.python.org/dev/peps/pep-0537/

-- 
Nathaniel J. Smith -- https://vorpus.org


[Distutils] Re: Python 3.7 binary wheels

2018-05-11 Thread Nathaniel Smith
On Fri, May 11, 2018 at 12:10 PM, Lele Gaifax  wrote:
> Thank you Nathaniel,
>
> AFAICT current manylinux1 image still does not carry Python 3.7: is there an
> ETA for that to happen?

The ETA is whenever someone submits a working PR :-).

-n

-- 
Nathaniel J. Smith -- https://vorpus.org


[Distutils] Re: Python 3.7 binary wheels

2018-05-08 Thread Nathaniel Smith
On Tue, May 8, 2018 at 7:35 PM, Steve Dower <steve.do...@python.org> wrote:
> On 08May2018 2134, Nathaniel Smith wrote:
>> for 3.6 there was a last minute problem
>> with the Windows ABI that only got discovered during the rc period. But
>> if you're willing to keep an ear out for that sort of thing, go wild.
>
> I thought this was 3.5 (or maybe I've blanked out the more recent one)?

Doh, no, this is just what I get for trying to send email while
waiting for bags at the airport.

-n

-- 
Nathaniel J. Smith -- https://vorpus.org


[Distutils] Re: Python 3.7 binary wheels

2018-05-08 Thread Nathaniel Smith
On Tue, May 8, 2018, 20:59 Lele Gaifax  wrote:

> Hi all,
>
> Python 3.7 is steadily approaching its final release and I start wondering
>
> a) when will be the right time to start uploading 3.7 binary wheels on
> PyPI?
>

The ABI was frozen at 3.7b3 (we're currently at b4), so in theory you can
start uploading wheels any time. Of course, there's still some risk until
it's actually released – for 3.6 there was a last minute problem with the
Windows ABI that only got discovered during the rc period. But if you're
willing to keep an ear out for that sort of thing, go wild.


> b) if/when manylinux2010 happens, does that mean that I should build both
>manylinux1 and manylinux2010 variants?
>

Totally up to you. The only real differences are that RHEL/CentOS 5 users
can use manylinux1 wheels but won't be able to use manylinux2010 wheels,
and that manylinux2010 will require a newer pip. You can upload one or both
depending on what you think works best for your users.

-n


Re: [Distutils] How to eliminate one part of a package?

2018-04-26 Thread Nathaniel Smith
If you're lazy, you could distribute the server package to everyone
and just make sure that if someone tries to import it on python 2 then
they get a useful error.
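
Something like this in the server subpackage's __init__.py would do it (a
sketch, using the 'top.server' layout from this thread):

# top/server/__init__.py
import sys

if sys.version_info[0] < 3:
    raise ImportError(
        "top.server requires Python 3; only the client subpackage "
        "supports Python 2"
    )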

On Thu, Apr 26, 2018 at 9:17 AM, Skip Montanaro
 wrote:
> Yeah, splitting client and server packages is on my to-do list. Was
> just hoping to keep Python2 users from shooting themselves in the foot
> with a server subpackage which wouldn't work.
>
> S
>
> On Thu, Apr 26, 2018 at 10:59 AM, Chris Barker  wrote:
>> frankly, I'd give up on find_packages -- it's not that magic, it's just a
>> convenience function so you don't need to hand-specify them.
>>
>> But in this case, you're doing something weird, so I'd just be explicit.
>>
>> Though what I'd probably really do is make the client and server completely
>> separate packages. After all, you say your users only want the client side
>> anyway.
>>
>> and if the server depends on the client (which I"d hope it doesn't!) then
>> you can simply make it a dependency.
>>
>> -CHB
>>
>>
>>
>>
>> On Wed, Apr 25, 2018 at 1:28 PM, Skip Montanaro 
>> wrote:
>>>
>>> >
>>> > If by "top/server tree" you mean that there are more subpackages under
>>> > top.server (not just a server.py file as your diagram shows), then you 
>>> > need
>>> > to filter out all of those subpackages as well, e.g.:
>>> >
>>> > packages = setuptools.find_packages()
>>> > if sys.version_info.major < 3:
>>> > packages = [
>>> > pkg for pkg in packages
>>> > if pkg != "top.server" and not
>>> > pkg.startswith("top.server.")
>>> > ]
>>>
>>> Thanks, yes, there is another subpackage within top/server, but I
>>> eliminated it as well. I was simplifying for the email. The raw
>>> find_packages() output looks like this:
>>>
>>> ['tests', 'top', 'tests.python', 'top.client', 'top.server',
>>> 'top.server.db']
>>>
>>> I was excising the last two elements from the returned list, so the
>>> argument of the packages keyword looked like this:
>>>
>>> ['tests', 'top', 'tests.python', 'top.client']
>>>
>>> Does the presence of 'top' in the list imply everything under it will
>>> be copied (I do want 'top', as that's the top level package, not just
>>> a directory in my repo.)
>>>
>>> I'll keep messing with it.
>>>
>>> Skip
>>
>>
>>
>>
>> --
>>
>> Christopher Barker, Ph.D.
>> Oceanographer
>>
>> Emergency Response Division
>> NOAA/NOS/OR&R            (206) 526-6959   voice
>> 7600 Sand Point Way NE   (206) 526-6329   fax
>> Seattle, WA  98115   (206) 526-6317   main reception
>>
>> chris.bar...@noaa.gov



-- 
Nathaniel J. Smith -- https://vorpus.org


Re: [Distutils] providing a way for pip to communicate extra info to users

2018-04-12 Thread Nathaniel Smith
From the TUF perspective it seems like it would be straightforward to make
the MOTD a "package", whose "contents" is the MOTD text, and that we
"upgrade" it to get the latest text before displaying anything.

-n

On Thu, Apr 12, 2018, 05:10 Nick Coghlan  wrote:

> On 12 April 2018 at 07:01, Paul Moore  wrote:
> > HTTPS access to the index server is fundamental to pip - if an
> > attacker can subvert that, they don't need to mess with a message,
> > they can just replace packages. So I don't see that displaying a
> > message that's available from that same index server is an additional
> > vulnerability, surely? But I'm not a security expert - I'd defer to
> > someone like Donald to comment on the security aspects of any proposal
> > here.
>
> Right now it doesn't create any additional vulnerabilities, since
> we're relying primarily on HTTPS for PyPI -> installer security.
>
> However, that changes once PEP 458 gets implemented, as that will
> switch the primary package level security mechanism over to TUF, which
> includes a range of mechanisms designed to detect tampering with the
> link to PyPI (including freeze attacks that keep you from checking for
> new packages, or attempting to lie about which versions are
> available).
>
> So the scenario we want to avoid is one where an attacker can present
> a notice that says "Please ignore that scary security warning your
> installer is giving you, we're having an issue with the metadata
> generation process on the server. To resolve the problem, please force
> upgrade pip".
>
> That's a solvable problem (e.g. only check for the MOTD *after*
> successfully retrieving a valid metadata file), but it's still
> something to take into account.
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia


Re: [Distutils] providing a way for pip to communicate extra info to users

2018-04-11 Thread Nathaniel Smith
On Mon, Apr 9, 2018, 16:47 Chris Jerdonek  wrote:

>
> One of Donald's comments in response to the idea (and that occurred to
> me too and that I agree with) is that providing a way to communicate
> messages to users introduces another possible avenue for attack.


I agree that this is worth thinking about, but having thought about it I'm
having trouble coming up with a threat model where it creates additional
exposure?

If someone takes over package distribution, that's obviously a far more
serious problem. A messaging mechanism could amplify such an attack by
encouraging people to install the compromised packages – but pip's existing
check for new pip versions can also do that. Or if we have a mechanism for
securing package updates, like TUF, then presumably we can use it to
protect the MOTD as well?

-n


Re: [Distutils] Removing wheel signing features from the wheel library

2018-03-22 Thread Nathaniel Smith
Even if no maintenance were required, it's still a feature that promises to
provide security but doesn't. This kind of feature has negative value.

I'd also suggest adding a small note to the PEP documenting that the
signing feature didn't work out, and maybe linking to Donald's package
signing blog post. I know updating PEPs isn't the most common thing, but
it's the main documentation of the wheel format and it'll save confusion
later.

On Mar 22, 2018 10:57 AM, "Wes Turner"  wrote:

> What maintenance is required?
>
> Here's a link to the previous discussion of this issue:
>
> "Remove or deprecate wheel-signing features"
> https://github.com/pypa/wheel/issues/196
>
> What has changed? There is still no method for specifying a keyring;
> whereas with GPG, all keys in the ring are trusted.
>
> On Thursday, March 22, 2018, Nick Coghlan  wrote:
>
>> On 22 March 2018 at 22:35,  wrote:
>>
>>> I am not changing the format of RECORD, I'm simply removing the
>>> cryptographic signing and verifying functionality, just the way you
>>> described. Hash checking will stay. As we agreed earlier, those
>>> features could be deprecated or removed from the PEP entirely.
>>>
>>
>> Cool, that's what I thought you meant, but I figured I should double
>> check since our discussion was a while ago now :)
>>
>> Cheers,
>> Nick.
>>
>> --
>> Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
>>
>


Re: [Distutils] new stuff overview, beta next week, user tests, & other Warehouse updates

2018-03-14 Thread Nathaniel Smith
On Tue, Mar 13, 2018 at 11:39 PM, Sumana Harihareswara  
wrote:
> I've started preparing a
> draft overview of what's new in PyPI/packaging/distribution to publicize
> along with the beta; it says "not to be publicized" but I'll let you in on
> the secret early. Maybe something in it is new to you as well!

- Missing parentheses at the end of the GPG/PGP line

- I'd put the signup link for the new announce list right at the top,
like "this post has lots of important stuff, and if you don't want to
miss future important stuff, sign up here."

-n

-- 
Nathaniel J. Smith -- https://vorpus.org


Re: [Distutils] /etc files

2018-03-01 Thread Nathaniel Smith
Note that this will make it impossible to distribute wheels of your package
or to install it into a virtualenv, because those don't have an /etc. So
it's mostly only suitable for projects that you use internally under a
known and restricted set of deployment options, not for anything
distributed on pypi.
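
For reference, the pattern in question is absolute paths in data_files (a
sketch with hypothetical file names; this is exactly what breaks under
wheels and virtualenvs):

# setup.py
from setuptools import setup

setup(
    name="example",
    version="0.1",
    data_files=[("/etc/example", ["conf/example.conf"])],
)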


On Mar 1, 2018 5:01 AM, "Victor Porton"  wrote:

How to deal with the files to be placed into /etc or a similar dir?

In the previous email I forgot to say I use setuptools not distutils.


Re: [Distutils] Invalid Packages

2018-02-17 Thread Nathaniel Smith
On Fri, Feb 16, 2018 at 2:39 PM, Matt Gieger  wrote:
> I would like to see a clause added to the "Invalid Package" section of PEP 541
> that allows some mechanism for other pypi users to mark a package as spam.
> Every day I see more spam packages added to pypi and currently the only way
> to get them removed is to create an issue in github.

The purpose of PEP 541 is to define which packages can/can't be
removed/reassigned. Actually finding those packages is a separate
question; that could just be a feature request on a warehouse.

What do you mean by a "spam package"? I guess it might be covered
under this section:
  https://www.python.org/dev/peps/pep-0541/#invalid-projects

-n

-- 
Nathaniel J. Smith -- https://vorpus.org


Re: [Distutils] Installed Extras Metadata

2018-02-11 Thread Nathaniel Smith
On Fri, Jan 26, 2018 at 8:37 PM, Nick Coghlan <ncogh...@gmail.com> wrote:
> On 27 January 2018 at 13:46, Nathaniel Smith <n...@pobox.com> wrote:
>>
>> The advantages are:
>>
>> - it's a simpler way to record information the information you want
>> here, without adding more special cases to dist-info: most code
>> doesn't even have to know what 'extras' are, just what packages are
>>
>> - it opens the door to lots of more advanced features, like
>> 'foo[test]' being a package that actually contains foo's tests, or
>> build variants like 'numpy[mkl]' being numpy built against the MKL
>> library, or maybe making it possible to track which version of numpy's
>> ABI different packages use. (The latter two cases need some kind of
>> provides: support, which is impossible right now because we don't want
>> to allow random-other-package to say 'provides-dist: cryptography';
>> but, it would be okay if 'numpy[mkl]' said 'provides-dist: numpy',
>> because we know 'numpy[mkl]' and 'numpy' are maintained by the same
>> people.)
>>
>> I know there's a lot of precedent for this kind of clever use of
>> metadata-only packages in Debian (e.g. search for "metapackages"), and
>> I guess the RPM world probably has similar tricks.
>
>
> While I agree with this idea in principle, I'll note that RPM makes it
> relatively straightforward to have a single SRPM emit multiple RPMs, so
> defining a metapackage is just a few extra lines in a spec file. (I'm not
> sure how Debian's metapackages work, but I believe they're similarly simple
> on the publisher's side).
>
> We don't currently have a comparable mechanism to readily allow a single
> source project to expand to multiple package index entries that all share a
> common sdist, but include different subsets in their respective wheel files
> (defining one would definitely be possible, it's just a tricky migration
> problem to work out).

Yeah, the migration is indeed the tricky part. Here's one possible approach.

First, figure out what exactly an "extra" should become in the new
system. I think it's: if package $PACKAGE version $VERSION defines an
extra $EXTRA, then that corresponds to a wheel named
"$PACKAGE[$EXTRA]" (the brackets become part of the package name),
version $VERSION, and it has Requires-Dist: $PACKAGE = $VERSION, as
well as whatever requirements were originally part of the extra.
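
So, sketching the generated metadata for a concrete case (illustrative
names and version numbers only, loosely based on requests' 'security'
extra):

Name: requests[security]
Version: 2.19.1
Requires-Dist: requests (==2.19.1)
Requires-Dist: pyOpenSSL (>=0.14)
Requires-Dist: cryptography (>=1.3.4)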

Now, if we didn't have to worry about migrations, we'd extend
setuptools/bdist_wheel so that when they see the current syntax for
defining an extra, they generate extra wheels following the formula
above. (So 'setup.py bdist_wheel' generates N+1 wheels for a package
with N extras.) And we'd teach PyPI that packages named like
"$PACKAGE[$EXTRA]" should be collected together with packages named
"$PACKAGE" (e.g. the same access control apply to both, and probably
you want to display them together in the UI when their versions
match). And we'd teach pip that square brackets are legal in package
names. And that'd be about it.

Of course, we do have to worry about migration, and in the first
instance, what we care about is making pip's database of installed
packages properly record these new wheels. So my proposal is:

- Requirements like 'requests[security,socks]' need to be expanded to
'requests[security], requests[socks]'. More specifically, when pip
processes a requirement like '$PACKAGE[$EXTRA1,$EXTRA2,...] $OP1
$VERSION1, $OP2 $VERSION2, ...', it expands it to multiple packages
and then applies the constraints to each of them: ['$PACKAGE[$EXTRA1]
$OP1 $VERSION1, $OP2 $VERSION2 ...', '$PACKAGE[$EXTRA2] $OP1
$VERSION1, $OP2 $VERSION2 ...', ...].

- When pip needs to find a wheel like 'requests[security]', then it
first checks to see if this exact wheel (with the brackets) is
available on PyPI (or whatever package sources it has available). If
so, it uses that. If not, then it falls back to looking for a
'requests' wheel, and if it finds one, and that wheel has 'extra'
metadata, then it *uses that metadata to generate a wheel on the
spot*, and then carries on as if it had found it on PyPI.

  - Special case: when hash-checking mode is enabled and pip ends up
doing this fallback, then pip always checks the hash against the wheel
it found on PyPI – so 'requests[security] --hash=...' checks the hash
of requests.whl, not the auto-generated requests[security].whl.
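
A sketch of the expansion rule described above (not pip's actual code):

import re

def expand_extras(requirement):
    # 'requests[security,socks] >=2.0' ->
    # ['requests[security] >=2.0', 'requests[socks] >=2.0']
    match = re.match(r"^([\w.-]+)\[([^\]]+)\](.*)$", requirement)
    if match is None:
        return [requirement]  # no extras, nothing to expand
    name, extras, constraints = match.groups()
    return ["%s[%s]%s" % (name, extra.strip(), constraints)
            for extra in extras.split(",")]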

(There is some question to discuss here about how sdists should be
handled: in many cases, possibly all of them, it doesn't really make
sense to have separate sdists for different square-bracket packages.
'requests[security]' will probably always be generated from the
requests source tree, and for build variants like 'numpy[mkl]' you
definitely want to build that from numpy.tar.gz, with some special
flag

Re: [Distutils] draft PEP: manylinux2

2018-02-05 Thread Nathaniel Smith
On Mon, Feb 5, 2018 at 1:17 PM, Jonathan Helmus  wrote:
> Moving to GCC 5 and above will introduce the new libstdc++ ABI.  [1]  The
> manylinux2 standard needs to define which ABI compiled libraries should be
> compiled against, as older versions of libstdc++ will not support the new ABI.
> From what I recall the devtoolset packages for CentOS can only target the
> older, _GLIBCXX_USE_CXX11_ABI=0, ABI.

We're stuck on the devtoolset packages, but it doesn't really matter
for manylinux purposes. None of the libraries you're allowed to assume
exist expose a C++ ABI, and everything else you have to ship yourself.

-n

-- 
Nathaniel J. Smith -- https://vorpus.org


Re: [Distutils] draft PEP: manylinux2

2018-02-03 Thread Nathaniel Smith
On Wed, Jan 31, 2018 at 4:01 PM, Mark Williams  wrote:
> Hi everyone!
>
> The manylinux1 platform tag has been tremendously useful, but unfortunately 
> it's showing its age:
>
> https://mail.python.org/pipermail/distutils-sig/2017-April/030360.html
> https://mail.python.org/pipermail/wheel-builders/2016-December/000239.html
>
> Nathaniel identified a list of things to do for its successor, manylinux2:
>
> https://mail.python.org/pipermail/distutils-sig/2017-April/030361.html
>
> Please find below a draft PEP for manylinux2 that attempts to address these 
> issues.  I've also opened a PR against python/peps:
>
> https://github.com/python/peps/pull/565
>
> Docker images for x86_64 (and soon i686) are available to test drive:
>
> https://hub.docker.com/r/markrwilliams/manylinux2/tags/

Huzzah! This is an amazing bit of work, and I'm glad you got that
weird email problem sorted out :-).

I have a few minor comments below, but overall this all looks fine and
sensible to me. Also, I think we should try to move quickly on this if
we can, because the manylinux1 images are currently in the process of
collapsing into unmaintainability. (For example: the openssl that
CentOS 5 ships with is now so old that you can no longer use it to
connect to the openssl web site to download a newer version.)

> 4. If a wheel is built for any version of CPython 2 or CPython
>    versions 3.0 up to and including 3.2, it *must* include a CPython
>    ABI tag indicating its Unicode ABI.  A ``manylinux2`` wheel built
>    against Python 2, then, must include either the ``cp27mu`` tag
>    indicating it was built against an interpreter with the UCS-4 ABI
>    or the ``cp27m`` tag indicating an interpreter with the UCS-2
>    ABI. *[Citation for UCS ABI tags?]*

For the citation: maybe PEP 3149? Or just https://github.com/pypa/pip/pull/3075

> Compilation of Compliant Wheels
> ===============================
>
> Like ``manylinux1``, the ``auditwheel`` tool adds ``manylinux2``
> platform tags to ``linux`` wheels built by ``pip wheel`` or
> ``bdist_wheel`` a ``manylinux2`` Docker container.

Missing word: "*in* a"

> Docker Images
> -------------
>
> ``manylinux2`` Docker images based on CentOS 6.9 x86_64 and i686 are
> provided for building binary ``linux`` wheels that can reliably be
> converted to ``manylinux2`` wheels.  [8]_ These images come with a
> full compiler suite installed (``gcc``, ``g++``, and ``gfortran``
> 4.8.2) as well as the latest releases of Python and  ``pip``.

We can and should use newer compiler versions than that, and probably
upgrade them again over the course of the image's lifespan, so let's
just drop the version numbers from the PEP entirely. (Maybe s/6.9/6/
as well for the same reason.)

> Compatibility with kernels that lack ``vsyscall``
> ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This section is maybe not *strictly* necessary in the PEP but I think
we might as well keep it; maybe someone will find it useful.

> Backwards compatibility with ``manylinux1`` wheels
> ==================================================
>
> As explained in PEP 513, the specified symbol versions for
> ``manylinux1`` whitelisted libraries constitute an *upper bound*.  The
> same is true for the symbol versions defined for ``manylinux2`` in
> this PEP.  As a result, ``manylinux1`` wheels are considered
> ``manylinux2`` wheels.  A ``pip`` that recognizes the ``manylinux2``
> platform tag will thus install ``manylinux1`` wheels for
> ``manylinux2`` platforms -- even when explicitly set -- when no
> ``manylinux2`` wheels are available. [20]_

I'm a little confused about what this section is trying to say
(especially the words "even when explicitly set"). Should we maybe
just say something like:

In general, systems that can use manylinux2 wheels can also use
manylinux1 wheels; pip and similar installers should prefer manylinux2
wheels where available.

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] PEP 508, environment markers & the possibility of Python 3.10

2018-01-27 Thread Nathaniel Smith
+1 to all of that.
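
For the record, a quick sketch of the failure mode (using literal
strings, since there's no 3.10 interpreter to test against yet):

    >>> version = "3.10.0"         # what platform.python_version() would return
    >>> version[:3]                # the current PEP 508 definition
    '3.1'
    >>> vtuple = ("3", "10", "0")  # platform.python_version_tuple()
    >>> ".".join(vtuple[:2])       # the proposed fix
    '3.10'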

On Sat, Jan 27, 2018 at 9:44 PM, Nick Coghlan  wrote:
> Hi folks,
>
> In https://github.com/python/peps/issues/560, a user pointed out that
> the current definition of python_version in PEP 508 assumes
> single-digit major and minor version numbers:
>
>platform.python_version()[:3]
>
> There's a reasonable chance we'll see 3.10 rather than 4.0 in a few
> years time, at which point that definition would break.
>
> The suggested fix is to amend that definition to be:
>
> ".".join(platform.python_version_tuple()[:2])
>
> This seems like a good suggestion to me, so my inclination is to
> handle this in a similar way to
> https://www.python.org/dev/peps/pep-0440/#summary-of-changes-to-pep-440:
> fix it in place, and add a section at the end of the PEP listing the
> post-publication changes.
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan   |   ncogh...@gmail.com   |   Brisbane, Australia
> ___
> Distutils-SIG maillist  -  Distutils-SIG@python.org
> https://mail.python.org/mailman/listinfo/distutils-sig



-- 
Nathaniel J. Smith -- https://vorpus.org
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Installed Extras Metadata

2018-01-26 Thread Nathaniel Smith
On Fri, Jan 26, 2018 at 7:11 AM, Pradyun Gedam  wrote:
> Hello! I hope everyone's had a great start to 2018! :)
>
> A few months back, while working on pip, I had noticed an oddity about
> extras.
>
> Installing a package with extras would not store information about the fact
> that the extras were requested. This means, later, it is not possible to
> know which extra-based optional dependencies of a package have to be
> considered when verifying that the packages are compatible with each other.
> This information is relevant for resolution/validation since without it, it
> is not possible to know which extra-requirements to care about.
>
> As an example, installing ``requests[security]`` and then uninstalling
> ``PyOpenSSL`` leaves you in a state where you don't really satisfy what was
> asked for but there's no way to detect that either.

Another important use case is upgrades: if requests[security] v1 just
depends on pyopenssl, and then requests[security] v2 adds a dependency
on certifi, and I do

pip install requests[security] == 1
pip upgrade

then upgrade should give me requests[security] == 2, and thus install
certifi. But this doesn't work if you don't have any record that
'requests[security]' is even installed :-).

> Thus, obviously, I'm interested in making pip able to store this
> information. As I understand, this needs to be specified in a PEP
> and/or on PyPUG's specification page.
>
> To that end, here's a seeding proposal for the discussion: a new
> `extras-requested.txt` file in the .dist-info directory, storing the extra
> names in a one-per-line format.

I'm going to put in another plug here for my "reified extras" idea:
https://mail.python.org/pipermail/distutils-sig/2015-October/027364.html

Essentially, the idea is to promote extras to full packages --
normally ones that contain no files, just metadata like dependencies,
though that's not a necessary requirement, it's just how we'd
interpret existing extras specifications.

Then installing 'requests[security]' would install the
'requests[security]' package, which depends on both 'requests' and
'pyopenssl', and we have a 'requests[security]-$VERSION.dist-info'
directory recording that we installed it.
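
Concretely, the auto-generated dist-info might record something like
this (a sketch only -- the exact pinning scheme is invented here):

    Name: requests[security]
    Version: $VERSION
    Requires-Dist: requests (== $VERSION)
    Requires-Dist: pyopenssl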

The advantages are:

- it's a simpler way to record the information you want
here, without adding more special cases to dist-info: most code
doesn't even have to know what 'extras' are, just what packages are

- it opens the door to lots of more advanced features, like
'foo[test]' being a package that actually contains foo's tests, or
build variants like 'numpy[mkl]' being numpy built against the MKL
library, or maybe making it possible to track which version of numpy's
ABI different packages use. (The latter two cases need some kind of
provides: support, which is impossible right now because we don't want
to allow random-other-package to say 'provides-dist: cryptography';
but, it would be okay if 'numpy[mkl]' said 'provides-dist: numpy',
because we know 'numpy[mkl]' and 'numpy' are maintained by the same
people.)

I know there's a lot of precedent for this kind of clever use of
metadata-only packages in Debian (e.g. search for "metapackages"), and
I guess the RPM world probably has similar tricks.

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] PEP 566 - metadata 1.3 changes

2018-01-16 Thread Nathaniel Smith
On Tue, Jan 16, 2018 at 8:55 PM, Nick Coghlan  wrote:
> The reason for *not* making PEP 566 a major version bump is in case
> anyone actually implemented this draft requirement from PEP 426:
> "Automated tools consuming metadata SHOULD warn if metadata_version is
> greater than the highest version they support, and MUST fail if
> metadata_version has a greater major version than the highest version
> they support (as described in PEP 440, the major version is the value
> before the first dot)."

From a quick glance at 'git annotate', it appears that every wheel
built between 2013 and now has used metadata_version=2.0. So I think
we can be pretty sure that no-one is implementing this recommendation!
Or if they are, then they've coded their tools to assume that they
*do* understand metadata_version=2.0, which is even worse.

That's the advantage of bumping to 2.0 now -- it keeps our ordering
linear, so that we have the option of implementing a rule like the one
from PEP 426 in the future without breaking existing packages.
Otherwise we're in this weird position where we have teach our tools
that just because they understand 1.3 and 2.0 doesn't mean they
understand 1.4.

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] PEP 566 - metadata 1.3 changes

2018-01-16 Thread Nathaniel Smith
On Jan 12, 2018 8:00 AM, "Alex Grönholm"  wrote:

On the same note, wheel currently writes "2.0" as its metadata version.
Shouldn't this be changed into 1.3 (along with ditching metadata.json)?


Should wheel change to emit 1.3, or should the PEP change to become 2.0? I
know there were great hopes for "metadata 2.0", but given that there are
bazillions of packages out there with a metadata version of 2.0, we're
never going to be able to meaningfully use that version number for anything
else, and it's confusing if, when reading package metadata, the ordering is
1.2 < 2.0 == 1.3 < 1.4. So maybe we should declare that this update is 2.0
or 2.1, the next one will be 2.1 or 2.2, etc., and if anyone asks why the
major version bump, well, it's hardly the strangest thing we've done for
compatibility :-). (And in the unlikely event that PEP 426 lurches back to
life, we can make it 3.0.)

-n
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Or in version spec...

2017-12-14 Thread Nathaniel Smith
PEP 508 already allows parenthesized expressions with 'and' and 'or' for
environment markers, so that's probably the most relevant prior art. I
doubt '|' will be added at this point.

On Dec 14, 2017 09:08, "Chris Barker - NOAA Federal" 
wrote:

>
> Sorry to lose the thread — lousy iPhone mail app...
>
> Conda supports or in its meta.yaml format:
>
> https://conda.io/docs/user-guide/tasks/build-packages/package-spec.html#build-version-spec
>
> Maybe look to that for prior art?
>
> And it would be mildly less confusing to have consistency between the
> systems.
>
> -Chris
>
>
> Sent from my iPhone
>
>
>
> ___
> Distutils-SIG maillist  -  Distutils-SIG@python.org
> https://mail.python.org/mailman/listinfo/distutils-sig
>
>
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] PEP 345, 566, 508 version specifiers and OR clauses

2017-12-13 Thread Nathaniel Smith
On Wed, Dec 13, 2017 at 6:11 PM, Ivan Pozdeev via Distutils-SIG wrote:
>
>
> On 14.12.2017 3:17, Barry Warsaw wrote:
>>
>> I'm about to release a new version of importlib_resources, so I want to
>> get my flit.ini's require-python clause right.  We support Python 2.7,
>> and 3.4 and beyond.  This makes me sad:
>>
>> requires-python = '>=2.7,!=3.0,!=3.1,!=3.2,!=3.3'
>>
>> Of course, I'd like to write this like:
>>
>> requires-python = '(>=2.7 and <3) or >= 3.4'
>>
>> I understand that OR clauses aren't supported under any syntax
>> currently, but as PEPs 566 and 508 are still open/active, wouldn't it be
>> reasonable to support something like this explicitly?
>>
>> It seems like wanting to support 2.7 and some versions of Python 3 (but
>> not all) is a fairly common need.
>
> What you're actually asking for is for the >= operator to be limited to a
> specified major version.

We actually have the ~= operator that's basically that -- not sure if
it's allowed in requires-python. But that's not sufficient. You also
need an "or" primitive if you want to express "~= 2.7 or ~= 3.4".
Right now all you could write is "~= 2.7 and ~= 3.4", which is the
null set :-).
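
You can see the null set with the 'packaging' library (a quick sketch;
version numbers arbitrary):

    >>> from packaging.specifiers import SpecifierSet
    >>> spec = SpecifierSet("~= 2.7, ~= 3.4")  # comma means "and"
    >>> "2.7.14" in spec  # rejected by ~= 3.4
    False
    >>> "3.4.8" in spec   # rejected by ~= 2.7
    False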

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] RFC: PEP 566 - Metadata for Python Software Packages 1.3

2017-12-05 Thread Nathaniel Smith
On Dec 5, 2017 08:42, "Dustin Ingram"  wrote:


Provides-Extra (optional, multiple use)
:::

A string containing the name of an optional feature. Must be a valid Python
identifier. May be used to make a dependency conditional on whether the
optional feature has been requested.

The introduction of this field allows package installation tools (such as
``pip``) to determine which extras are provided by a given package, and
package publication tools (such as ``twine``) to check for issues with
environment markers which use extras.


I haven't followed this so sorry if this is an annoying comment, but having
read this description I still don't really understand what Provides-Extra
is doing. Don't packages already include extra information? What problem
motivated this?

-n
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Entry points: specifying and caching

2017-10-27 Thread Nathaniel Smith
On Fri, Oct 27, 2017 at 5:34 AM, Nick Coghlan <ncogh...@gmail.com> wrote:
> On 27 October 2017 at 18:10, Nathaniel Smith <n...@pobox.com> wrote:
>>
>> On Thu, Oct 26, 2017 at 9:02 PM, Nick Coghlan <ncogh...@gmail.com> wrote:
>> > Option 2: temporary (or persistent) per-user-session cache
>> >
>> > * Pro: only the first query per path entry per user session incurs a
>> > linear
>> > DB read
>> > * Pro: given persistent cache dirs (e.g. XDG_CACHE_HOME, ~/.cache) even
>> > that
>> > overhead can be avoided
>> > * Pro: sys.path directory mtimes are sufficient for cache invalidation
>> > (subject to filesystem timestamp granularity)
>>
>> Timestamp granularity is a solvable problem. You just have to be
>> careful not to write out the cache unless the directory mtime is
>> sufficiently far in the past, like 10 seconds old, say. (This is an
>> old trick that VCSes use to make commands like 'git status'
>> fast-and-reliable.)
>
>
> Yeah, we just recently fixed a bug related to that in pyc file caching (If
> you managed to modify and reload a source file multiple times in the same
> second we could end up missing the later edits. The fix was to check the
> source timestamp didn't match the current timestamp before actually updating
> the cached copy on the filesystem)
>
>>
>> This does mean you can get in a weird state where if the directory
>> mtime somehow gets set to the future, then start time starts sucking
>> because caching goes away.
>
>
> For pyc files, we're able to avoid that by looking for cache *inconsistency*
> without making any assumptions about which direction time moves - as long as
> the source timestamp recorded in the pyc file doesn't match the source
> file's mtime, we'll refresh the cache.
>
> This is necessary to cope with things like version controlled directories,
> where directory mtimes can easily go backwards because you switched branches
> or reverted to an earlier version.

Yeah, this is a good idea, but it doesn't address the reason why some
systems refuse to update their caches when they see mtimes in the
future. The motivation there is that if the mtime is in the future,
then it's possible that at some point in the future, the mtime will
match the current time, and then if the directory is modified at that
moment, the cache will become silently invalid.

It's not clear how important this really is; you have to get somewhat
unlucky, and if you're seeing timestamps from the future then
timekeeping has obviously broken down somehow and nothing based on
mtimes can be reliable without reliable timekeeping. (For example,
even if the mtime seems to be in the past, the clock could get set
backwards and now the same mtime is in the future after all.) But
that's the reasoning I've seen.

> The os module has atomic write support on Windows in 3.x now:
> https://docs.python.org/3/library/os.html#os.replace
>
> So the only problematic case is 2.7 on WIndows, and for that Christian
> Heimes backported pyosreplace here: https://pypi.org/project/pyosreplace/
>
> (The "may be non-atomic" case is the same situation where it will fail
> outright on POSIX systems: when you're attempting to do the rename across
> filesystems. If you stay within the same directory, which you want to do
> anyway for permissions inheritance and automatic file labeling, it's
> atomic).

I've never been able to tell whether this is trustworthy or not; MS
documents the rename-across-filesystems case as an *example* of a case
where it's non-atomic, and doesn't document any atomicity guarantees
either way. Is it really atomic on FAT filesystems? On network
filesystems? (Do all versions of CIFS even give a way to express file
replacement as a single operation?) But there's folklore saying it's
OK...

I guess in this case atomicity wouldn't be that crucial anyway though.

>> > Option 3: persistent per-path-entry cache
>> >
>> > * Pro: assuming cache freshness means zero runtime queries incur a
>> > linear DB
>> > read (cache creation becomes an install time cost)
>> > * Con: if you don't assume cache freshness, you need option 1 or 2
>> > anyway,
>> > and the install time cache just speeds up that first linear read
>> > * Con: filesystem access control requires either explicit cache refresh
>> > or
>> > implicit metadata caching support in installers
>> > * Con: sys.path directory mtimes are no longer sufficient for cache
>> > invalidation (due to potential for directory relocation)
>>
>> Not sure what problem you're thinking of here? In this model we
>> wouldn't be using mtimes for cache invalidation anyway, because it'd
>> be the responsibility of those modifying the directory to update the
>> cache.

Re: [Distutils] Disabling non HTTPS access to APIs on PyPI

2017-10-27 Thread Nathaniel Smith
On Oct 27, 2017 11:49, "Alex Domoradov"  wrote:

RUN pip install --upgrade pip

Try upgrading setuptools here too.
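
i.e., something like:

    RUN pip install --upgrade pip setuptools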

-n
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Entry points: specifying and caching

2017-10-27 Thread Nathaniel Smith
On Thu, Oct 26, 2017 at 9:02 PM, Nick Coghlan  wrote:
> Option 2: temporary (or persistent) per-user-session cache
>
> * Pro: only the first query per path entry per user session incurs a linear
> DB read
> * Pro: given persistent cache dirs (e.g. XDG_CACHE_HOME, ~/.cache) even that
> overhead can be avoided
> * Pro: sys.path directory mtimes are sufficient for cache invalidation
> (subject to filesystem timestamp granularity)

Timestamp granularity is a solvable problem. You just have to be
careful not to write out the cache unless the directory mtime is
sufficiently far in the past, like 10 seconds old, say. (This is an
old trick that VCSes use to make commands like 'git status'
fast-and-reliable.)

This does mean you can get in a weird state where if the directory
mtime somehow gets set to the future, then start time starts sucking
because caching goes away.

Note also that you'll want to explicitly write the observed directory
mtime to the cache file, rather than comparing it to the cache file's
mtime, to avoid the race condition where the directory gets modified
just after we scan it but before we write out the cache.
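
A rough sketch of how those pieces fit together (hypothetical cache
file format, not anything pip or setuptools actually implements):

    import json, os, time

    FRESHNESS_WINDOW = 10  # seconds; don't persist caches for "hot" dirs

    def scan_with_cache(dirpath, cache_path, scan):
        dir_mtime = os.stat(dirpath).st_mtime
        try:
            with open(cache_path) as f:
                cache = json.load(f)
            if cache["dir_mtime"] == dir_mtime:
                return cache["entries"]  # cache hit
        except (OSError, ValueError, KeyError):
            pass  # missing or corrupt cache; fall through to a real scan
        entries = scan(dirpath)  # the expensive linear read
        # Store the mtime we observed *before* scanning, and only persist
        # the cache if the directory hasn't been touched recently --
        # otherwise a same-timestamp modification could leave it stale.
        if time.time() - dir_mtime > FRESHNESS_WINDOW:
            tmp = cache_path + ".tmp"
            with open(tmp, "w") as f:
                json.dump({"dir_mtime": dir_mtime, "entries": entries}, f)
            os.replace(tmp, cache_path)  # atomic rename, same directory
        return entries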

> * Pro: zero elevated privileges needed (cache would be stored in a per-user
> directory tree)
> * Con: interprocess locking likely needed to avoid the "thundering herd"
> cache update problem [1]

Interprocess filesystem locking is going to be far more painful than
any problem it might solve. Seriously. At least on Unix, the right
approach is to go ahead and regenerate the cache, and then atomically
write it to the given place, and if someone else overwrites it a few
milliseconds later then oh well.

I guess on Windows locking might be OK, given that it has no atomic
writes and less gratuitously broken filesystem locking. But you'd
still want to make sure you never block when acquiring the lock; if
the lock is already taken because someone else is in the middle of
updating the cache, then you need to fall back on doing a linear scan.
This is explicitly *not* avoiding the thundering herd problem, because
it's more important to avoid the "one process got stuck and now
everyone else freezes on startup waiting for it" problem.

> * Con: if a non-persistent storage location is used, zero benefit over an
> in-memory cache for throwaway environments (e.g. container startup)

You also have to be careful about whether you have a writeable storage
location at all, and if so whether you have the right permissions. (It
might be bad if 'sudo somescript.py' leaves me with root-owned cache
files in /home/njs/.cache/.)

Filesystems are just a barrel of fun.

> * Con: cost of the cache freshness check will still scale linearly with the
> number of sys.path entries
>
> Option 3: persistent per-path-entry cache
>
> * Pro: assuming cache freshness means zero runtime queries incur a linear DB
> read (cache creation becomes an install time cost)
> * Con: if you don't assume cache freshness, you need option 1 or 2 anyway,
> and the install time cache just speeds up that first linear read
> * Con: filesystem access control requires either explicit cache refresh or
> implicit metadata caching support in installers
> * Con: sys.path directory mtimes are no longer sufficient for cache
> invalidation (due to potential for directory relocation)

Not sure what problem you're thinking of here? In this model we
wouldn't be using mtimes for cache invalidation anyway, because it'd
be the responsibility of those modifying the directory to update the
cache. And if you rename a whole directory, that doesn't affect its
mtime anyway?

> * Con: interprocess locking arguably still needed to avoid the "thundering
> herd" cache update problem (just between installers rather than runtime
> processes)

If two installers are trying to rearrange the same directory at the
same time then they can conflict in lots of ways. For the most part
people get away with it because doing multiple 'pip install' runs in
parallel is generally considered a Bad Idea and unlikely to happen by
accident; and if it is a problem then we should add locking anyway
(like dpkg and rpm already do), regardless of the cache update part.

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Entry points: specifying and caching

2017-10-26 Thread Nathaniel Smith
On Fri, Oct 20, 2017 at 11:59 PM, Nick Coghlan  wrote:
> Yeah, here's the gist of what I had in mind regarding the malware problem
> (i.e. aiming to ensure we don't get all of setup.py's problems back again):
>
> - a package's own install hooks do *not* get called for that package

Doesn't that break the entry point caching use case that started this
whole discussion? When you first install the caching package, then it
has to immediately build the cache for the first time.

I don't really have the time or interest to dig into this (I know
there are legitimate use cases for entry points but I'm very wary of
any feature where package A starts doing something different because
package B was installed). But, I just wanted to throw out that I see
at least two reasons we might want to "bake in" the caching as part of
our PEPified metadata:

- if we do want to add "install hooks", then we need some way for a
package to declare it has an install hook and for pip-or-whoever to
find it. The natural way would be to use an entry point, which means
entry points are in some sense "more fundamental" than install hooks.

- in general, the only thing that can update an entry-point cache is
the package that's doing the install, at the time it runs. In
particular, consider an environment with some packages installed in
/usr, some in /usr/local, some in ~/.local/. Really you want one cache
in each location, and then to have dpkg/rpm responsible for updating
the /usr cache (this is something they're familiar with, it's
isomorphic to stuff like /etc/ld.so.cache), 'sudo pip' responsible for
updating the /usr/local cache, and 'pip --user' responsible for
updating the ~/.local/ cache. If we go the install hook route instead,
then when I do 'pip install --user entry_point_cacher' then there's no
way that it'll ever have the permissions to write to /usr, and maybe
not to /usr/local either depending on how you want to handle the
interaction between 'sudo pip' and ~/.local/ install hooks, so it
just... won't actually work as a caching tool. Similarly, it's
probably easier to convince conda to regenerate a single standard
entry point cache after installing a conda package, than it would be
to convince them to run generic wheel install hooks when not even
installing wheels.
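
To make the one-cache-per-location idea concrete, lookup might work
something like this (hypothetical cache file name and format):

    import os
    import sys

    CACHE_NAME = "entry-points-cache.json"

    def iter_caches():
        # One cache per installation location (/usr, /usr/local,
        # ~/.local, ...), each maintained by whichever tool installs
        # into that location.
        for entry in sys.path:
            cache = os.path.join(entry, CACHE_NAME)
            if os.path.exists(cache):
                yield cache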

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Entry points: specifying and caching

2017-10-20 Thread Nathaniel Smith
On Oct 19, 2017 11:10, "Donald Stufft"  wrote:


EXCEPT, for the fact that with the desire to cache things, it would be
beneficial to “hook” into the lifecycle of a package install. However I
know that there are other plugin systems out there that would like to also
be able to do that (Twisted Plugins come to mind) and that I think outside
of plugin systems, such a mechanism is likely to be useful in general for
other cases.

So heres a different idea that is a bit more ambitious but that I think is
a better overall idea. Let entrypoints be a setuptools thing, and lets
define some key lifecycle hooks during the installation of a package and
some mechanism in the metadata to let other tools subscribe to those hooks.
Then a caching layer could be written for setuptools entrypoints to make
that faster without requiring standardization, but also a whole new, better
plugin system could too, Twisted plugins could benefit, etc. [1]


In this hypothetical system, how do installers like pip find the list of
hooks to call? By looking up an entrypoint? (Sorry if this was discussed
downthread; I didn't see it but I admit I only skimmed.)

-n
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] string types for paths in PEP 517

2017-09-05 Thread Nathaniel Smith
On Tue, Sep 5, 2017 at 1:00 AM, Thomas Kluyver  wrote:
> I considered this. It's *potentially* a problem, but I think we should
> not try to deal with it for now:
>
> - Normally, temp files will go in /tmp - so it should be fine to
> construct paths of entirely ascii characters.

Does pip in fact use /tmp for temporary directories? (It's not always
the right choice, because /tmp has limited space on some systems, e.g.
b/c it's on a ramdisk. If we still had build_directory= then this
could be an issue, since build directories can be arbitrarily large;
maybe it's not a big deal now that we only need the tmpdir to handle a
single sdist/wheel/dist-info.)

> - Frontends that want the wheel to end up elsewhere can ask for it in a
> tmp directory first and then move it, so there's a workaround if it
> becomes an issue.
> - We already have workarounds for the commonest case of UTF-8 paths + C
> locale: ignore the locale and treat paths as UTF-8.

Only in 3.7, I think? Or do you mean, backends should be doing this
manually on Python 2?

(To be clear, I think the current text is potentially fine, I just
want to make sure I/we understand the full consequences instead of
discovering them a year from now when we're stuck with them :-).)

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] PEP 517 again

2017-09-04 Thread Nathaniel Smith
On Mon, Sep 4, 2017 at 5:51 PM, xoviat <xov...@gmail.com> wrote:
>> The only reason I can think of that setuptools would need a dist_info
>> command would be to implement the PEP 517 prepare_wheel_metadata hook.
>
> Yes. That is absolutely correct.
>
>> But this hook is optional and in fact provides no value right now, so
>> it can't be a blocker for anything.
>
> The simplest way to start on this issue is to replace egg_info in pip with
> prepare_metadata_for_build_wheel. It is absolutely a historical artifact but
> I need to work on one issue at a time. The next issue will be replacing
> egg_info in pip with prepare_metadata_for_build_wheel.

This still doesn't make sense to me.

To support PEP 517, pip HAS to support backends that don't provide
prepare_metadata_for_build_wheel. You will have to handle this before
PEP 517 support can ship. So what's your plan for after you replace
egg_info with prepare_metadata_for_build_wheel? Turning around and
deleting the code you just wrote?

-n

> 2017-09-04 19:34 GMT-05:00 Nathaniel Smith <n...@pobox.com>:
>>
>> On Mon, Sep 4, 2017 at 5:09 PM, xoviat <xov...@gmail.com> wrote:
>> > Nathaniel:
>> >
>> > Pip requires egg_info to discover dependencies of source distributions
>> > so
>> > that it can build wheels all at once after downloading the requirements.
>> > I
>> > need to move pip off of egg_info as soon as possible and dist_info is
>> > required to do that.
>>
>> "Requires" is a strong word -- AFAIK this is just a historical
>> artifact. I don't really know what you're talking about in the second
>> sentence.
>>
>> The only reason I can think of that setuptools would need a dist_info
>> command would be to implement the PEP 517 prepare_wheel_metadata hook.
>> But this hook is optional and in fact provides no value right now, so
>> it can't be a blocker for anything. (In fact I can't see any reason
>> why pip would ever call it before the resolver lands.) So either (a)
>> there's some other reason you want a dist_info command, (b) there's
>> some reason I'm missing why prepare_wheel_metadata matters, or (c) one
>> of us is misunderstanding something :-).
>>
>> -n
>>
>> > 2017-09-03 21:00 GMT-05:00 Nathaniel Smith <n...@pobox.com>:
>> >>
>> >> On Sun, Sep 3, 2017 at 11:14 AM, xoviat <xov...@gmail.com> wrote:
>> >> > Just an update for everyone here:
>> >> >
>> >> > 1. We're currently waiting on the implementation of the 'dist_info"
>> >> > command
>> >> > in the wheel project.
>> >> > 2. Once that is done we can switch pip over to reading dist-info
>> >> > rather
>> >> > than
>> >> > egg_info.
>> >> > 3. Then we can move the backend over to setuptools. Because Jacob has
>> >> > a
>> >> > much
>> >> > more efficient release system than pip, I anticipate having a release
>> >> > of
>> >> > setuptools first and then we can switch pip over to requiring a newer
>> >> > setuptools via PEP 518.
>> >>
>> >> I don't think pip actually has any use for the PEP 517
>> >> prepare_wheel_metadata hook right now though? Historically 'setup.py
>> >> egg-info' was needed to kluge around unwanted behavior in 'setup.py
>> >> install', but with a PEP 517 backend that's irrelevant because
>> >> 'setup.py install' is never used. And in the future when pip has a
>> >> real resolver, then prepare_wheel_metadata should allow some
>> >> optimizations. But right now, prepare_wheel_metadata is completely
>> >> useless AFAIK.
>> >>
>> >> So why is 'setup.py dist_info' a blocker for things?
>> >>
>> >> -n
>> >>
>> >> --
>> >> Nathaniel J. Smith -- https://vorpus.org
>> >
>> >
>>
>>
>>
>> --
>> Nathaniel J. Smith -- https://vorpus.org
>
>



-- 
Nathaniel J. Smith -- https://vorpus.org
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] PEP 517 again

2017-09-04 Thread Nathaniel Smith
On Mon, Sep 4, 2017 at 5:09 PM, xoviat <xov...@gmail.com> wrote:
> Nathaniel:
>
> Pip requires egg_info to discover dependencies of source distributions so
> that it can build wheels all at once after downloading the requirements. I
> need to move pip off of egg_info as soon as possible and dist_info is
> required to do that.

"Requires" is a strong word -- AFAIK this is just a historical
artifact. I don't really know what you're talking about in the second
sentence.

The only reason I can think of that setuptools would need a dist_info
command would be to implement the PEP 517 prepare_wheel_metadata hook.
But this hook is optional and in fact provides no value right now, so
it can't be a blocker for anything. (In fact I can't see any reason
why pip would ever call it before the resolver lands.) So either (a)
there's some other reason you want a dist_info command, (b) there's
some reason I'm missing why prepare_wheel_metadata matters, or (c) one
of us is misunderstanding something :-).

-n

> 2017-09-03 21:00 GMT-05:00 Nathaniel Smith <n...@pobox.com>:
>>
>> On Sun, Sep 3, 2017 at 11:14 AM, xoviat <xov...@gmail.com> wrote:
>> > Just an update for everyone here:
>> >
>> > 1. We're currently waiting on the implementation of the 'dist_info"
>> > command
>> > in the wheel project.
>> > 2. Once that is done we can switch pip over to reading dist-info rather
>> > than
>> > egg_info.
>> > 3. Then we can move the backend over to setuptools. Because Jacob has a
>> > much
>> > more efficient release system than pip, I anticipate having a release of
>> > setuptools first and then we can switch pip over to requiring a newer
>> > setuptools via PEP 518.
>>
>> I don't think pip actually has any use for the PEP 517
>> prepare_wheel_metadata hook right now though? Historically 'setup.py
>> egg-info' was needed to kluge around unwanted behavior in 'setup.py
>> install', but with a PEP 517 backend that's irrelevant because
>> 'setup.py install' is never used. And in the future when pip has a
>> real resolver, then prepare_wheel_metadata should allow some
>> optimizations. But right now, prepare_wheel_metadata is completely
>> useless AFAIK.
>>
>> So why is 'setup.py dist_info' a blocker for things?
>>
>> -n
>>
>> --
>> Nathaniel J. Smith -- https://vorpus.org
>
>



-- 
Nathaniel J. Smith -- https://vorpus.org
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] PEP 517 again

2017-09-03 Thread Nathaniel Smith
On Sun, Sep 3, 2017 at 11:14 AM, xoviat  wrote:
> Just an update for everyone here:
>
> 1. We're currently waiting on the implementation of the 'dist_info" command
> in the wheel project.
> 2. Once that is done we can switch pip over to reading dist-info rather than
> egg_info.
> 3. Then we can move the backend over to setuptools. Because Jacob has a much
> more efficient release system than pip, I anticipate having a release of
> setuptools first and then we can switch pip over to requiring a newer
> setuptools via PEP 518.

I don't think pip actually has any use for the PEP 517
prepare_wheel_metadata hook right now though? Historically 'setup.py
egg-info' was needed to kluge around unwanted behavior in 'setup.py
install', but with a PEP 517 backend that's irrelevant because
'setup.py install' is never used. And in the future when pip has a
real resolver, then prepare_wheel_metadata should allow some
optimizations. But right now, prepare_wheel_metadata is completely
useless AFAIK.

So why is 'setup.py dist_info' a blocker for things?

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] PEP 517 again

2017-09-01 Thread Nathaniel Smith
On Fri, Sep 1, 2017 at 11:30 AM, Chris Barker  wrote:
> I'm still confused -- if setuptools ( invoked  by pip) is producing
> incorrectly named wheels -- surely that's a bug-fix/workaround that should
> go into setuptools?
>
> If the build is being run by pip, then doesn't setuptools have all the info
> about the system that pip has?

Some setup.py files are written by project authors who want to use
them to generate wheels for uploading to PyPI, so they're carefully
written to generate good wheels, the wheels go through QA, etc. They
rely on setuptools's current (and fairly sensible) defaults for how to
tag wheels, and if those go wrong, then they take the responsibility
for fixing things.

Other setup.py files were written by project authors who never
considered any possibility outside of 'setup.py install', and haven't
changed since. For them, setuptools's defaults are not so great, but
the authors don't care, because they never guaranteed that it would
work.

Setuptools can't tell which kind of setup.py is calling it. But pip
can make a pretty good guess: if it found a wheel on pypi, then
someone had to have uploaded it, and it's that person's job to make
sure the tags are correct. OTOH if it's taking some random setup.py
file it found in an sdist, then it could be the second type, so better
to play it safe and use a more restrictive wheel tag.

It's a bit quirky and annoying, but that's life. And in the grand
scheme of things this isn't a big deal. The only program that has to
care about this is pip, and pip can always change to a different
heuristic if the situation changes.

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] PEP 517 again

2017-08-31 Thread Nathaniel Smith
On Thu, Aug 31, 2017 at 8:41 AM, Chris Barker - NOAA Federal wrote:
> The package manager should manage the package, not built it, or change it.
>
> Surely the build system should know how to correctly name the wheel it builds.

It's probably worth mentioning the specific problem that motivated pip
to start doing this.

It used to be standard, and is still quite common, for setup.py
scripts to contain stuff like:

install_requires = [...]
if sys.version_info < (3, 4):
    install_requires += [...]
if platform.python_implementation() == "PyPy":
    install_requires += [...]

setup(..., install_requires=install_requires)

This kind of logic in setup.py worked fine in the old days when all
you did was 'setup.py install', but then wheels came along and
retroactively turned lots of setup.py scripts from working into
broken. The problem is that with this kind of setup.py, setuptools has
*no idea* that the install_requires you gave it would have been
different if you had run setup.py with a different version of Python,
so when it has to assign Python tags to a built wheel it guesses wrong
and uses ones that are too general.

The right way to do this is to use PEP 508 environment markers:
https://www.python.org/dev/peps/pep-0508/#environment-markers
or the non-standard extras hack:
https://wheel.readthedocs.io/en/latest/#defining-conditional-dependencies
Both of these let you export the whole requirements-choosing logic
into the wheel metadata, so that it can be evaluated at install time
instead of build time.
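
For example, the conditional logic above could be written declaratively
as something like this (a sketch -- the PyPy shim package name is
invented, and this marker syntax needs a reasonably recent setuptools):

    from setuptools import setup

    setup(
        # ...
        install_requires=[
            "enum34; python_version < '3.4'",
            "pypy-compat-shim; platform_python_implementation == 'PyPy'",
        ],
    )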

But it will take a while for existing setup.py files to transition to
using those, and in the meantime pip can't assume that a random wheel
generated by 'setup.py bdist_wheel' has accurate Python tags.

Hopefully new legacy-free backends will get this right from the start.
For example flit makes it impossible to get this wrong.

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] PEP 517 again

2017-08-30 Thread Nathaniel Smith
On Wed, Aug 30, 2017 at 9:56 PM, Nick Coghlan  wrote:
> On 31 August 2017 at 14:22, xoviat  wrote:
>> Again, let me repeat that: wheels generated using setuptools are valid for
>> CPython only if built on CPython. This is not the current setuptools
>> behavior but will be for all setuptools build backend calls (I assume legacy
>> will remain the same).
>
> While I do think your proposal would work (on the assumption that
> folks aren't use "pip wheel" to generate their wheel files for
> upload),

I use 'pip wheel' to generate wheel files for upload... (I like to
generate an sdist and then build a wheel from that, and 'pip wheel
sdist.tar.gz' is more convenient than manually unpacking and running
bdist_wheel.)

> an alternative approach with a lower risk of unintended side
> effects would be for *pip* to either rename the autobuilt file before
> adding it to the cache, or else to adjust its caching strategy a bit
> to internally separate a shared wheel download cache from
> per-interpreter-compatibility-tag caches for locally built wheel
> files.

+1

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] PEP 517 again

2017-08-28 Thread Nathaniel Smith
On Mon, Aug 28, 2017 at 1:27 PM, Thomas Kluyver  wrote:
> On Mon, Aug 28, 2017, at 09:13 PM, Daniel Holth wrote:
>
> > Then end the debate by letting the PEP authors decide the return type, and
> > write a paragraph explaining why the other options were rejected. It is not
> > going to make a big difference.
>
>
> Will that work now? Are we all so tired of this endless war that people will
> sign a peace treaty written by the people whose names are on the PEP
> (Nathaniel & me)?
>
> If so, let the trumpets sound, and the heralds declare that "return
> NotImplemented" is the way to do it. (I hope I've remembered Nathaniel's
> preference right ;-)

Fine with me, though if it turns out Donald and Nick prefer the
version where the backend has to export an exception class then I'm
fine with that too. (I'm basing this on -- Donald has said he likes
it, and Nick hasn't commented yet but AFAICT it does address his
concerns with NotImplemented, so it seems like a plausible outcome.)

I hope Nick had a good weekend at the beach or whatever, because this
is going to be a heck of an email backlog to come back to...

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] PEP 517 again

2017-08-27 Thread Nathaniel Smith
On Sun, Aug 27, 2017 at 4:27 PM, Greg Ewing <greg.ew...@canterbury.ac.nz> wrote:
> Nathaniel Smith wrote:
>>
>> - creating an sdist failed for unexpected reasons, that need a human
>> to sort out (due to a broken system, or bugs – hey, they happen – or
>> ...)
>
>
> I think that should still be reported via an exception. Returning
> None should only be for the specific case that the backend doesn't
> support the requested operation.

Well, you can't exactly say "if your code is buggy, then you should
signal that by doing this well defined thing" :-). One of my
objections to None is that it's very easy to return accidentally,
i.e., buggy code *will* sometimes return None no matter what the spec
says.

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] PEP 517 again

2017-08-26 Thread Nathaniel Smith
On Sat, Aug 26, 2017 at 6:30 PM, C Anthony Risinger
<c...@anthonyrisinger.com> wrote:
> On Aug 26, 2017 5:13 PM, "Nathaniel Smith" <n...@pobox.com> wrote:
>
> On Sat, Aug 26, 2017 at 1:47 PM, C Anthony Risinger
> <c...@anthonyrisinger.com> wrote:
>
> Sure sure, I understand all that, and why we think we need some special
> error signal from `build_sdist`, as currently written.
>
> What I'm suggesting, is maybe calling `build_sdist` without knowing if it
> can succeed is already a mistake.
>
> Consider instead, if we make the following small changes:
>
> 1. `get_requires_for_build_*` is passed the sdist and wheel directories,
> just like `build_*`, giving them the chance to actually look at the tree before
> deciding what other reqs might be necessary.

That's not a change, that's how it works :-).

> 2. `get_requires_for_build_*` returns None to signal `build_*` is
> unsupported (superceded by static reqs defined in TOML) and [...] to signal
> support (can be empty).
>
> 3. `get_requires_for_build_*` assumed to return None if missing (so optional
> and implies no support).

This is what I originally proposed, except you use None where I use
NotImplemented, which has the disadvantages I noted earlier. Also,
people didn't like the missing get_requires_for_build_* being treated
as no-support, which makes sense, since we expect that
get_requires_for_build_* won't be used very often. But one can switch
the default here without affecting much else. The reason we want to
let build_sdist report failure is just for convenience of backends who
don't have any other reason to implement get_requires_for_build_sdist.

> 4. sdist reqs = `get_requires_for_build_sdist` (dynamic) + ??? (static)
>
> 5. wheel reqs = `get_requires_for_build_wheel` (dynamic) +
> `build-system.requires` (static)

build-system.requires contains the requirements that are always
installed before we even try importing the backend, so they're
available to all backend hooks equally.

> 6. If no reqs are found for sdist (no declared reqs in TOML and
> `get_requires_for_build_sdist` is missing or returns None), then
> `build_sdist` is unsupported.
>
> 7. If no reqs are found for wheel (no declared reqs in TOML and
> `get_requires_for_build_wheel` is missing or returns None), then
> `build_wheel` is unsupported. This one is a spec violation because at least
> one req is expected here (the backend itself).

The TOML requires aren't really useful as a signal about whether sdist
specifically is supported. Plus I think we probably want to leave
no-requires-in-TOML as a valid option for saying "I don't need
anything installed" (maybe because the backend is shipped inside the
source tree) rather than overloading it to have extra meanings.

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] PEP 517 again

2017-08-26 Thread Nathaniel Smith
On Sat, Aug 26, 2017 at 1:47 PM, C Anthony Risinger
<c...@anthonyrisinger.com> wrote:
> On Aug 26, 2017 2:17 PM, "Nathaniel Smith" <n...@pobox.com> wrote:
>>
>> [removed Guido from CC]
>>
>> On Aug 26, 2017 02:29, "Paul Moore" <p.f.mo...@gmail.com> wrote:
>>
>> On 26 August 2017 at 03:17, Guido van Rossum <gu...@python.org> wrote:
>> > In pretty much any other context, if you have an operation that returns
>> > a regular value or an error value, the error value should be None.
>> > (Exceptions
>> > include e.g. returning a non-negative int or -1 for errors, or True for
>> > success and False for errors.)
>>
>> So, given that build_sdist returns the path of the newly built sdist,
>> the correct way to signal "I didn't manage to build a sdist" would be
>> to return None.
>>
>> Now that it's put this way, it seems glaringly obvious to me that this
>> is the correct thing to do.
>>
>>
>> Eh... I would really prefer something that's (a) more explicit about what
>> specifically went wrong, and (b) harder to return by accident. It's not at
>> all obvious that if the list of requirements is 'None' that means 'this
>> build supports making sdists in general but cannot make them from this
>> source tree but might still be able to make a wheel'. And if you forget to
>> put in a return statement, then python returns None for you, which seems
>> like it could lead to some super confusing error modes.
>
>
> Why does the frontend need to know why an sdist was not created?

This whole discussion is about handling a specific case: suppose you
have a frontend like pip that when given a source directory and asked
to build a wheel, wants to implement that as:
  - build sdist
  - unpack sdist
  - build wheel from unpacked sdist

And suppose you have a backend like flit, that can build sdists from
some source directories (e.g. VCS checkouts) but not others (e.g.
unpacked sdists). We need some way for pip and flit to negotiate that
even though pip *normally* would implement its build-a-wheel operation
by first building an sdist, in this case it's ok to silently fall back
to some other strategy (like building the wheel directly in the source
tree, or manually copying the source tree somewhere else and then
building a wheel in it).

But, we don't want this fallback behavior to hide real bugs. So if the
backend says "look, I just can't do sdists here, and that's an
expected thing, it's not something where the user needs to take any
particular action like filing a bug report or fixing their system or
anything like that, so if you have an alternative way to accomplish
what you're trying to do then you should just silently discard this
error and try that", ...cool. But if it doesn't explicitly say that,
then we don't want to silently discard the error and do something
else.
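
To make the shape of that negotiation concrete, here's a sketch of the
frontend side, using NotImplemented as a stand-in for whichever
"expected failure" signal we end up agreeing on:

    import os, shutil, tarfile, tempfile

    def build_wheel_via_sdist(backend, source_dir, wheel_dir):
        tmp = tempfile.mkdtemp()
        cwd = os.getcwd()
        os.chdir(source_dir)  # hooks run with the source tree as cwd
        try:
            sdist_name = backend.build_sdist(tmp)
        finally:
            os.chdir(cwd)
        if sdist_name is NotImplemented:
            # Expected "no sdist from this tree": silently fall back to
            # copying the tree aside and building the wheel directly.
            build_dir = os.path.join(tmp, "srccopy")
            shutil.copytree(source_dir, build_dir)
        else:
            # Assumes the conventional foo-1.0.tar.gz -> foo-1.0/ layout.
            with tarfile.open(os.path.join(tmp, sdist_name)) as tf:
                tf.extractall(tmp)
            build_dir = os.path.join(tmp, sdist_name[:-len(".tar.gz")])
        os.chdir(build_dir)
        try:
            return backend.build_wheel(wheel_dir)
        finally:
            os.chdir(cwd)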

It's taken a *lot* of back and forth to reach consensus that all we
need here is some special error signal from the *_sdist operations.
Let's focus on resolving that :-)

> Frontend is asking the backend, given the current state of the world, to
> either produce an sdist, or not. Sans ahead-of-time knowledge (see below), I
> would expect build_sdist to make some sanity checks about the world, then
> make a binary choice about whether sdist creation is a valid goal. If not
> possible, return None or NotImplemented or False or dict-of-reasons or
> whatever. Only if creation was *attempted*, and in the exceptional event it
> then failed, would I expect an Exception. We don't have structured
> exceptions sadly so they can't really carry much useful information from a
> protocol perspective above and beyond a simple None or the like anyway.
>
> I'd personally like to see some parity between build_sdist and build_wheel
> in this regard. Maybe the disconnect here is we have a way to specify hard
> reqs for building a wheel, statically or dynamically, and build_wheel is
> expected to never fail, but no way to specify hard reqs needed for
> build_sdist, necessitating this optional signaling path?

Not sure what you mean about hard reqs. The reason for the lack of
parity is that we don't currently have any use cases where build_wheel
is expected to fail, but this is expected in some sense (not sure what
that would even mean), and there's some fallback that the frontend may
want to invoke instead.

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] PEP 517 again

2017-08-26 Thread Nathaniel Smith
On Sat, Aug 26, 2017 at 2:06 PM, xoviat  wrote:
> I also think that Guido pretty much ruled out NotImplemented.

As I've said, I don't think it matters a huge deal whether we use
NotImplemented or not. But please don't treat Guido as some kind of
pronouncement generating machine where you hurl out-of-context
questions at him and then use his response as a club to beat down
discussion. It's rude to Guido, it's rude to Nick and Donald (to whom
Guido has explicitly delegated his BDFL authority in packaging-related
matters), and it's rude to everyone trying to discuss proposals on
their merits.

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] PEP 517 again

2017-08-26 Thread Nathaniel Smith
On Sat, Aug 26, 2017 at 12:54 PM, Paul Moore <p.f.mo...@gmail.com> wrote:
> On 26 August 2017 at 20:17, Nathaniel Smith <n...@pobox.com> wrote:
>> Eh... I would really prefer something that's (a) more explicit about what
>> specifically went wrong, and (b) harder to return by accident. It's not at
>> all obvious that if the list of requirements is 'None' that means 'this
>> build supports making sdists in general but cannot make them from this
>> source tree but might still be able to make a wheel'. And if you forget to
>> put in a return statement, then python returns None for you, which seems
>> like it could lead to some super confusing error modes.
>
> Well, we've had an extensive discussion about how frontends need to
> trust backends to get things right. I don't really see it as
> reasonable to now argue that backends might "forget" to return the
> right value - they might just as well "forget" to properly isolate
> builds...

It's not about division of responsibilities, it's about handling
errors gracefully when they happen. There are three bins:

- creating an sdist succeeded
- creating an sdist failed for expected reasons, and a clever frontend
might be able to handle the problem automatically if it understands
what the problem is (sdist creation isn't supported in this case) and
understands its goals (just trying to build a wheel really, so the
sdist isn't crucial)
- creating an sdist failed for unexpected reasons, that need a human
to sort out (due to a broken system, or bugs – hey, they happen – or
...)

The whole discussion has been about how we can most reliably
distinguish between the second and third categories, and give good
error messages for the third category. The argument for NotImplemented
is that it avoids cases where some internal call raises
NotImplementedError and it "leaks out" accidentally, causing a
unexpected error to be incorrectly treated as expected error -- we
don't want pip to be hiding real bugs in backend code. The argument
for NotImplementedError is that it produces better error messages on
buggy frontends. 'return None' is kind of the worst of both worlds, in
that it's an easy thing to return accidentally, and it gives confusing
error messages if the frontend fails to handle it properly. (Even more
confusing, actually, because 'NoneType object has no attribute ...' is
even harder to track down than 'NotImplementedType object has no
attribute ...'.)

> As regards an explicit description of what went wrong, why can't we
> just use the same reporting methods that we will for any other build
> issue (backends simply report the problem on stdout/stderr)? I don't
> see why the backend has to package up its error information and send
> it to the frontend to report, when we already have a perfectly
> effective way for backends to report errors and/or warnings to the
> user. If you're worried that the frontend might suppress the
> information (maybe because it's planning on falling back to a direct
> wheel build) then isn't that just the converse - backends need to
> trust frontends to do the right thing?

What I mean is more, if you're some random user and you see this in a
build backend, what do you guess it means?

  def get_requires_for_build_sdist(config_settings=None):
      return None

Now how about these?

  def get_requires_for_build_sdist(config_settings=None):
      return NotImplemented

  def get_requires_for_build_sdist(config_settings=None):
      raise NotImplementedError

  def get_requires_for_build_sdist(config_settings=None):
      raise SdistBuildNotSupported

I mean, obviously return None will work. Basically anything that's
different from "return a list or string" will work :-). That's what
makes this a bikeshed topic, and I still think we're mostly just
spinning our wheels here until Nick and Donald have a chance to hash
something out that they both can agree on. But I really don't see any
advantages to 'return None' compared to the other options that have
been discussed.

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] PEP 517 again

2017-08-26 Thread Nathaniel Smith
[removed Guido from CC]

On Aug 26, 2017 02:29, "Paul Moore"  wrote:

On 26 August 2017 at 03:17, Guido van Rossum  wrote:
> In pretty much any other context, if you have an operation that returns an
> regular value or an error value, the error value should be None.
> (Exceptions
> include e.g. returning a non-negative int or -1 for errors, or True for
> success and False for errors.)

So, given that build_sdist returns the path of the newly built sdist,
the correct way to signal "I didn't manage to build a sdist" would be
to return None.

Now that it's put this way, it seems glaringly obvious to me that this
is the correct thing to do.


Eh... I would really prefer something that's (a) more explicit about what
specifically went wrong, and (b) harder to return by accident. It's not at
all obvious that if the list of requirements is 'None' that means 'this
build backend supports making sdists in general but cannot make them from this
source tree but might still be able to make a wheel'. And if you forget to
put in a return statement, then Python returns None for you, which seems
like it could lead to some super confusing error modes.
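
As a contrived illustration of that failure mode (backend code, with
a made-up helper function):

    def get_requires_for_build_sdist(config_settings=None):
        requires = compute_sdist_requires(config_settings)  # made-up
        # Oops -- the author forgot 'return requires'. Python returns
        # None, and a "None means no sdist" frontend would quietly
        # treat a bug as an expected failure.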

-n
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Conditionless setup.py

2017-08-25 Thread Nathaniel Smith
On Fri, Aug 25, 2017 at 5:46 PM, xoviat  wrote:
> I personally do not understand the aversion to YAML. I mean yes, the
> specification is more complicated, but it's also more popular and the YAML
> files will not be complex enough for a C library to help that much. And
> since it's more popular, people might even prefer specifying package
> metadata in a pyproject.yaml. pip could even cache a wheel of the pyyaml
> package between builds that could be imported at build time with a
> zipimporter rather than vendoring the package. And as a plus it's not named
> after an alleged sexist.
>
> Honestly this is not an issue that interests me very much but this rant is
> because I was surprised that toml was chosen when I first found out about
> it.

If you want to know why it was chosen then there's lots of discussion
in the list archives. I don't think this is a great place to
relitigate it.

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Conditionless setup.py

2017-08-25 Thread Nathaniel Smith
On Fri, Aug 25, 2017 at 1:00 PM, Jeremy Stanley  wrote:
> (The
> community around it is sensitive to gender diversity issues and
> wants to avoid acquiring more of a "brogrammer" image, so some of us
> worry that any conspicuous TOML files checked into revision control
> repositories could be seen as a tacit endorsement of the author's
> alleged behavior at GH a few years ago.)

I was one of the folks championing TOML during the original
discussions, and this is an issue that also worried me a lot. In case
it's a useful data point: I actually contacted several of the main
rust/cargo developers, since they were the major users of TOML and are
also well known to be sensitive to these issues, to ask if they've had
any issues with this, and they said that they haven't heard any
complaints.

Obviously there's a difference between "no-one complained" and "no-one
was bothered", and I suspect the community's existing reputation may
affect how this is interpreted as well, but... maybe useful as a data
point.

Between this and the way the TOML spec appears to have been abandoned
at v0.4 (with the admonition "you should assume that it is unstable
and act accordingly") I've wondered if we should fork it, rename it
"the obvious minimal language", and release our own 1.0.

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] PEP 517 again

2017-08-25 Thread Nathaniel Smith
On Fri, Aug 25, 2017 at 2:26 PM, xoviat  wrote:
>> I'm more or less persuaded by Nathaniel's argument that the source
>> directory shouldn't be on sys.path
>
> I do too. There should be an option in pyproject.toml to disable this
> behavior though so that numpy can build itself.

My original proposal was to leave the srcdir off of sys.path, and then
have a key in pyproject.toml like:

[build-system]
backend-python-path = ["."]

Frontends would then do something like:

    import os, sys
    from os.path import abspath

    # 'srcdir' and 'config' (the parsed pyproject.toml) are assumed
    # to already be in scope.
    os.chdir(srcdir)
    # This line is new to handle the above proposal:
    sys.path[:0] = [abspath(p) for p in
                    config["build-system"].get("backend-python-path", [])]
    backend = resolve_entrypoint(config["build-system"]["build-backend"])
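
For completeness, resolve_entrypoint might look roughly like this --
just a sketch of the idea, not anything pip actually ships:

    import importlib

    def resolve_entrypoint(spec):
        # "pkg.module:obj" imports pkg.module and then walks attributes
        # down to obj; a bare "pkg.module" means the module itself is
        # the backend object.
        module_name, _, attr_path = spec.partition(":")
        backend = importlib.import_module(module_name)
        attrs = attr_path.split(".") if attr_path else []
        for attr in attrs:
            backend = getattr(backend, attr)
        return backend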

I don't have a strong opinion on whether we put this into PEP 517
(it's pretty light weight and doesn't interact with any other features
AFAICT), or make it a followup PEP, or start out by deferring this
option to a dedicated build backend, like:

[build-system]
requires = ["override_backend_path"]
build-backend = "override_backend_path"

[tool.override_backend_path]
python-path = ["."]
real-backend = "my_awesome:backend"

These are all pretty similar.

I think the big question for debate is: should sys.path be
configurable, or not configurable? IIUC, the main argument for putting
the source directory on the path was that extra configuration options
are annoying, so we don't want one of those, but we do want to support
in-tree backends (even though we expect that most projects won't use
this), so we had better put the srcdir on sys.path.

My feeling is that in practice, though, the "no configuration,
srcdir always on sys.path" approach is not going to hold up. So first,
obviously, the hack above works just fine to make it configurable even
if we don't put it in any PEP, so in fact it's configurable no
matter what. Plus, sooner or later, someone's going to say "hey
distutils-sig, I have a build backend package that I want to be able
to bootstrap itself, AND I want to put my package inside src/ for
reasons [1], wouldn't it be nice if I could put src/ on sys.path
without jumping through hoops?". Or someone will release a new version
of their build backend that adds a new dependency, or one of their
transitive dependencies will release a new version that adds a new
dependency, and it collides with some already-released package that
uses that build-backend, and the project suffering the collision gets
annoyed at being told they need to rearrange their source tree
(retroactively, in already released versions!). And they'll come here
and say "hey distutils-sig, can solve this problem once and for all?".
And we'll be like... uh, fixing this would take what, <5 lines of new
code in pip? Kinda hard to argue with that.

So... it'll be configurable, one way or another.

And if it's configurable... then the question is what the default
should be: srcdir on sys.path ("opt-out"), or srcdir not on sys.path
("opt-in"). And it seems to me that in this case, all
the standard criteria say it should be opt-in.

If it's opt-in, then everyone using build backends distributed on PyPI
-- which we expect to be the vast majority of projects -- never has to
think about it, and it does the right thing, with no risk of collisions
or anything. In fact the only people who have to think about it are
the ones implementing in-tree backends, and if you're already like,
writing a whole backend and configuring your pyproject.toml to invoke
it, then asking you to add one more line of configuration is really
not a big deal.

OTOH if it's opt-out, then it becomes Yet Another Bad Packaging
Default, where conscientious package authors will fret about the risk
of collisions and write blog posts about how every project needs to
make sure to opt-out of this as a Best Practice, and I am so, so tired
of those.

-n

[1] https://blog.ionelmc.ro/2014/05/25/python-packaging/#the-structure

-- 
Nathaniel J. Smith -- https://vorpus.org
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] PEP 517 again

2017-08-25 Thread Nathaniel Smith
On Fri, Aug 25, 2017 at 2:17 PM, xoviat <xov...@gmail.com> wrote:
> Nathaniel:
>
> What do you think of the proposal regarding DistutilsUnsupportedOperation?

So yeah, we could potentially say:

1) Every backend must have an attribute SdistBuildNotSupportedError,
which is a type object subclassing Exception

2) When invoking the sdist build hooks, the frontend does something like:

    try:
        requires = backend.get_requires_for_build_sdist()
    except backend.SdistBuildNotSupportedError:
        switch_to_fallback_path()

It's not ridiculous. If we did this then the name should be more
specific than "DistutilsUnsupportedOperation", plus distutils has
nothing to do with this. Also I don't think we'd ever want to move the
exception into the stdlib, because the advantages are minor compared
to the transition costs. And it does have the advantage that it
resolves Nick's concern (IIUC) and also solves the accidental bubbling
problem. I'm not sure if it would make Donald happy or not.

It also provides a general template for how to handle custom errors in
future hooks.

The main annoyance would be that every backend has to contain some
boilerplate like

    class SdistBuildNotSupportedError(Exception):
        pass

even though most of them won't ever raise this error, because
otherwise the frontend will get an AttributeError in its except:
statement. This is an awkward wart.

To mitigate it, I guess we could make it optional for backends to
export the exception, and frontends would instead write:

    try:
        requires = backend.get_requires_for_build_sdist()
    except getattr(backend, "SdistBuildNotSupportedError", ()):
        invoke_fallbacks()

That's also kind of awkward, but... could be worse?
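
In case the empty-tuple default looks too magic: an except clause
whose expression evaluates to an empty tuple matches no exception
types at all, so for backends that don't export the attribute the
handler is simply inert. A quick self-contained demonstration:

    def flaky():
        raise RuntimeError("some unrelated failure")

    try:
        flaky()
    except ():
        # () matches no exception types, so this is never reached.
        print("unreachable")
    except RuntimeError as exc:
        print("propagated past the empty-tuple clause:", exc)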

-n


Re: [Distutils] PEP 517 again

2017-08-25 Thread Nathaniel Smith
On Fri, Aug 25, 2017 at 9:49 AM, Thomas Kluyver  wrote:
> Can I gently ask everyone involved to consider whether the
> notimplemented/error discussion is verging into bikeshedding
> (http://bikeshed.org/)?
>
> The technical arguments I have seen so far are:
> - The exception can include a message
> - The return value can't 'bubble up' from the internals of a hook like an
> exception

I believe Nick also feels that an important advantage of
NotImplementedError is: if a frontend doesn't bother to properly
implement the spec and special case NotImplemented, then you'll end up
eventually getting some obscure error like "'NotImplementedType'
object has no attribute ..." when it tries to treat it as a normal
return value. With NotImplementedError, if the frontend doesn't treat
it specially, the default is to treat it like other exceptions and
show a proper traceback and error message. So lazy frontends give
better UX for NotImplementedError than NotImplemented.

Personally, I don't find the argument about lazy frontends terribly
compelling because if we're assuming that we're hitting some buggy
corner case with code not following the spec, then we should also
consider the case of accidentally bubbled NotImplementedErrors.
Between these two cases, an accidentally bubbled NotImplementedError
causes even more confusing outcomes (the build frontend may silently
start invoking other things! vs. a suboptimal error message), and it's
harder to guard against (both designs require properly written
frontends to contain a few lines of code special casing
NotImplemented/NotImplementedError; only NotImplementedError also
requires all careful *backends* to contain just-in-case try/except
guards).
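
To spell out the backend-side guard that the NotImplementedError
design asks for -- a sketch, with _do_build_sdist standing in for the
backend's real internals:

    def build_sdist(sdist_directory, config_settings=None):
        try:
            return _do_build_sdist(sdist_directory)  # made-up helper
        except NotImplementedError as exc:
            # Don't let an unrelated internal NotImplementedError
            # escape and get misread as "sdist builds not supported".
            raise RuntimeError("backend bug, not an unsupported build") from exc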

Another minor point that's made me less happy with NotImplemented is:
originally I thought we could make it a general fact about all the
hooks that returning "NotImplemented" should be treated the same as if
the hook were undefined. (Which is pretty much how it works for
__binop__ methods.) On further consideration though I don't think this
is a good idea. (Rationale: it's not really what we want for
get_requires_for_build_sdist, & if we define future hooks that
normally have no return value then there's a danger of buggy frontends
missing it entirely, & it wouldn't have worked for Nick's suggestion
that build_wheel(build_directory=foo) triggering a NotImplemented
should fall back to build_wheel(build_directory=None), which is gone
from the spec now but suggests that this could cause problems in the
future.) So the bottom line of all this is that if we do go with
NotImplemented, I now think it should only be a defined return value
for get_requires_for_build_sdist and build_sdist, and should have
special "sorry I can't do that Dave" semantics that are different from
e.g. a missing get_requires_for_build_sdist hook. All of which will
work fine, it's just less... aesthetically pleasing.
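
Concretely, the frontend-side rule would come out something like this
sketch (assuming the absent-hook default is an empty requirements
list; handle_sdist_unsupported is a made-up name):

    hook = getattr(backend, "get_requires_for_build_sdist", None)
    if hook is None:
        # Hook absent entirely: the spec-level default applies.
        requires = []
    else:
        requires = hook()
        if requires is NotImplemented:
            # "Sorry, I can't do that, Dave": sdists are supported in
            # general, just not from this particular source tree.
            handle_sdist_unsupported()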

Personally, I still have a weak preference for NotImplemented over
NotImplementedError, but I don't care enough to have a big fight about
it.

It sounds like Nick and Donald are the only two folks who really have
strong opinions here: can the two of you work something out? Should we
flip a coin?

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] PEP 517 again

2017-08-25 Thread Nathaniel Smith
On Fri, Aug 25, 2017 at 1:51 PM, Thomas Kluyver  wrote:
> On Fri, Aug 25, 2017, at 09:50 PM, xoviat wrote:
>
> > Genius!
>
>
> 1% inspiration, 99% frustration :-P

This joke is so clever that I fear we may be forced to implement the
solution after all, just to punish Thomas.

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] Fwd: Re: PEP 517 again

2017-08-24 Thread Nathaniel Smith
On Thu, Aug 24, 2017 at 9:17 PM, xoviat  wrote:
> > I'm *not* OK with banning in-tree builds in the spec, since that's
> > both unnecessary and unenforceable
>
> Well then either we can trust the backend or we cannot. If we can, then this
> is both necessary and enforceable. If not, then we're back to pip copying
> files. You can't make an argument that it's okay to trust build_sdist but
> not build_wheel.

I think at this point everyone has made their peace with the pip
developers' decision that they want to keep copying files -- at least
for now -- and that's just how it's going to be. This email has a more
detailed discussion of the options, their "threat model", and the
tradeoffs:

https://mail.python.org/pipermail/distutils-sig/2017-July/031020.html

I can see an argument for adding language saying that build_sdist
SHOULD avoid modifying the source tree if possible, and MAY write
scratch files to the sdist_directory.
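
For illustration, a backend honoring that language might arrange its
build like this -- a sketch only, with a made-up project name and the
actual tarball contents elided:

    import os, tarfile, tempfile

    def build_sdist(sdist_directory, config_settings=None):
        # Scratch files go under sdist_directory, not the source tree.
        scratch = tempfile.mkdtemp(dir=sdist_directory)
        base = "mypkg-1.0"
        path = os.path.join(sdist_directory, base + ".tar.gz")
        with tarfile.open(path, "w:gz") as tf:
            pass  # ... add files from the (unmodified) source tree ...
        return os.path.basename(path)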

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig


Re: [Distutils] PEP 517 again

2017-08-24 Thread Nathaniel Smith
On Thu, Aug 24, 2017 at 6:11 AM, Thomas Kluyver  wrote:
> Nathaniel seems to be busy with other things at the moment, so I hope he
> won't mind me passing on this list of things he'd like to resolve with
> the draft PEP. I'll quote his comments and put my responses inline.

More like taking a break for mental health reasons, really, but I've
been meaning to get back to it -- thanks for the nudge and I don't
mind your posting it at all.

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
___
Distutils-SIG maillist  -  Distutils-SIG@python.org
https://mail.python.org/mailman/listinfo/distutils-sig

