> On 17 Nov 2016, at 11:35, Paul Moore <[email protected]> wrote:
>
> On 17 November 2016 at 10:58, Cory Benfield <[email protected]> wrote:
>> Paul, you mentioned that discovery on PyPI is a problem: I don’t contest
>> that at all. But I don’t think the solution to that problem is to jam
>> modules into the standard library, and I think even less of that idea when
>> there is no formal process available for python-dev to consider the
>> implementations available for the standard library.
>
> Yeah, in the process of the discussion a certain amount of context was
> lost. I also don't think that the solution is to "jam" modules into
> the standard library.
Fair enough: “jam” was probably a more emotive term than I needed to use there;
I’ll happily concede that.
> I *do* think that part of the solution should be to have good
> solutions to common programming problems in the standard library. What
> is a "common problem" changes over time, as well as by problem domain,
> and we need to take that into account. My feeling is that
> (client-level, web service consumer focused) OAuth is tending towards
> being one of those "common problems" (as the authentication side of
> the whole REST/JSON/etc web API toolset) and warrants consideration
> for inclusion in the stdlib.
So this argument seems reasonable to me, but my problem with it is that it
seems to be fuzzy. It leads me to all kinds of follow-on questions, such as:
- What counts as a common problem? Is there an objective measure, or do we
decide by gut feel?
- How do we scope the problem? Is the problem we’re solving in this specific
case OAuth? OpenID Connect? HTTP authentication in general?
- How complex does that problem have to be before we decide that the solution
doesn’t belong in the standard library? Alternatively, does being complex make
it *more* important that we have a standard library solution?
- Should the standard library have a greenfield implementation or adopt the
third-party one?
- If it adopts the third-party one, what happens to the third-party
maintainers? Are they expected to keep maintaining it?
- If they object, does CPython do a hostile fork and take over maintenance
itself, or pursue another implementation?
- How do we balance the desire to increase the scope of the stdlib with the
increased maintenance burden that brings?
- Do we ever *remove* modules that are solutions to problems that are no longer
common?
- Is it acceptable to solve only part of the problem? (For context, OAuth2 is a
complex specification that leaves a lot of detail out: requests-oauthlib
contains a lot of “compliance fixes” for specific OAuth2 servers that deviate
from the specification in unexpected ways. Is it acceptable to write exactly to
the spec and to leave anyone who needs custom code to their own devices?)
- What about asyncio integration? Is that mandatory for new protocol code?
Optional? How important? Can it be asyncio-only?
- As a follow-on, what about integration with other stdlib modules? Does
the new OAuth module have to work with all stdlib HTTP clients? Only one? Or is
it its own client that you use directly?
- What happens if/when the protocol is revised?
- What happens if/when the maintainers move on from the project?
- Do we also maintain an out-of-tree backport for users on older Pythons? If
not, is it acceptable for those users to have older versions of the library
unless they upgrade their whole Python distribution?
This isn’t me disagreeing with you, just me pointing out that the fuzziness
around this makes me nervous. It has been my experience that a large number of
protocol implementations in the standard library are already struggling to meet
their maintenance goals, and I’d be pretty reluctant to add to that burden.
> But I don't agree with the principle that we should stop adding
> solutions to common problems to the stdlib "because PyPI is only a pip
> install away". There will always be users who can't, won't or simply
> don't use PyPI and judge Python on what you can do with a base
> install. And that's a valid judgement to make. One of the reasons I
> prefer Python over (say) Perl, is that if I go onto a Linux server
> that's isolated from the internet, both are available but on Python I
> can do things like compose a MIME email and send it via SMTP, work
> with dates and times, read and write CSV files, parse XML data from an
> external program, etc. On Perl I can't because the Perl standard
> library doesn't have those things available, so I end up having to
> write my own - which means I take time away from getting my *actual*
> job done.
I can understand that. I definitely think you and I have disagreements on the
best way to solve this problem (and that we’re unlikely to resolve them in this
thread!), but I certainly acknowledge that this use case is real and important.
I think my biggest disagreement on this use case is simply about scope: at what
point does a use case become too niche to be supported by the standard library?
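(As an aside, your example is easy to make concrete. Everything below is
standard library only, so it works on an isolated base install; the addresses
and data are placeholders.)

```python
import csv
import io
from email.message import EmailMessage

# Compose a MIME email with nothing but the standard library.
msg = EmailMessage()
msg["From"] = "[email protected]"      # placeholder address
msg["To"] = "[email protected]"         # placeholder address
msg["Subject"] = "Nightly report"
msg.set_content("Report attached.")

# Build a CSV attachment in memory, again stdlib only.
buf = io.StringIO()
writer = csv.writer(buf)
writer.writerows([["host", "status"], ["web01", "ok"]])
msg.add_attachment(buf.getvalue(), subtype="csv",
                   filename="report.csv")

# Sending is one more stdlib call (needs a reachable SMTP server):
# import smtplib
# with smtplib.SMTP("localhost") as s:
#     s.send_message(msg)

print(msg["Subject"])
```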
> Put it another way - being in the stdlib isn't a solution to the
> discoverability problem, but it is a solution to the access problem
> (which is a real problem for some people, despite pip and PyPI).
Fair enough.
Cory
_______________________________________________
Python-ideas mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-ideas
Code of Conduct: http://python.org/psf/codeofconduct/