Re: [Python-Dev] Python 2/3 porting HOWTO has been updated

2014-12-06 Thread Brett Cannon
Thanks for the feedback. I'll update the doc probably on Friday.

On Sat Dec 06 2014 at 12:41:54 AM Nick Coghlan  wrote:

> On 6 December 2014 at 14:40, Nick Coghlan  wrote:
> > On 6 December 2014 at 10:44, Benjamin Peterson 
> wrote:
> >> On Fri, Dec 5, 2014, at 18:16, Donald Stufft wrote:
> >>> Do we need to update it? Can it just redirect to the 3 version?
> >>
> >> Technically, yes, of course. However, that would unexpectedly take you out
> >> of the Python 2 docs "context". Also, that doesn't solve the problem for
> >> the downloadable versions of the docs.
> >
> > As Benjamin says, we'll likely want to update the Python 2 version
> > eventually for the benefit of the downloadable version of the docs,
> > but Brett's also right it makes sense to wait for feedback on the
> > Python 3 version and then backport the most up to date text wholesale.
> >
> > In terms of the text itself, this is a great update Brett - thanks!
> >
> > A couple of specific notes:
> >
> > * http://python-future.org/compatible_idioms.html is my favourite
> > short list of "What are the specific Python 2 only habits that I need
> > to unlearn in order to start writing 2/3 compatible code?". It could
> > be worth mentioning in addition to the What's New documents and the
> > full Python 3 Porting book.
> >
> > * it's potentially worth explicitly noting the "bytes(index_value)"
> > and "str(bytes_value)" traps when discussing the bytes/text changes.
> > Those do rather different things in Python 2 & 3, but won't emit an
> > error or warning in either version.
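A quick illustration of those two traps, run under both interpreters:

    n = 5
    bytes(n)      # Python 2: '5' (bytes is str, so this is str(5))
                  # Python 3: b'\x00\x00\x00\x00\x00' (zero-filled buffer of length 5)

    data = b"payload"
    str(data)     # Python 2: 'payload' (b"" literals are plain str)
                  # Python 3: "b'payload'" (the repr, b-prefix and quotes included)

    # Neither call raises an error or a warning in either version.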
>
> Given that 3.4 and 2.7.9 will be the first exposure some users will
> have had to pip, would it perhaps be worth explicitly mentioning the
> "pip install " commands for the various tools? At least pylint's
> PyPI page only gives the manual download instructions, including which
> dependencies you will need to install.
>
> Cheers,
> Nick.
>
> --
> Nick Coghlan   |   [email protected]   |   Brisbane, Australia
>


Re: [Python-Dev] My thinking about the development process

2014-12-06 Thread Brett Cannon
On Fri Dec 05 2014 at 3:24:38 PM Donald Stufft  wrote:

>
> On Dec 5, 2014, at 3:04 PM, Brett Cannon  wrote:
> 
>
>
> This looks like a pretty good write up, seems to pretty fairly evaluate
> the various sides and the various concerns.
>

Thanks! It seems like I have gotten the point across that I don't care what
the solution is as long as it's a good one and that we have to look at the
whole process and not just a corner of it if we want big gains.


Re: [Python-Dev] My thinking about the development process

2014-12-06 Thread Donald Stufft

> On Dec 6, 2014, at 8:45 AM, Brett Cannon  wrote:
> 
> 
> 
> On Fri Dec 05 2014 at 3:24:38 PM Donald Stufft wrote:
> 
>> On Dec 5, 2014, at 3:04 PM, Brett Cannon wrote:
>> 
> 
> This looks like a pretty good write up, seems to pretty fairly evaluate the 
> various sides and the various concerns.
> 
> Thanks! It seems like I have gotten the point across that I don't care what 
> the solution is as long as it's a good one and that we have to look at the 
> whole process and not just a corner of it if we want big gains.


One potential solution is Phabricator (http://phabricator.org), which is a 
Gerrit-like tool except it also works with Mercurial. It is a fully open source 
platform, though it works on a “patch” basis rather than a pull request basis. 
They are also coming out with hosting for it (http://phacility.com/), but that 
is “coming soon” and I’m not sure what the cost will be or whether they’d be 
willing to donate to an OSS project. It makes it easier to upload a patch using 
a command-line tool called arc (as Gerrit does). Phabricator itself is OSS, and 
the coming-soon page for Phacility says that it’s easy to migrate from a hosted 
to a self-hosted solution.

Phabricator supports hosting the repository itself but as I understand it, it 
also supports hosting the repository elsewhere. So it could mean that we host 
the repository on a platform that supports Pull Requests (as you might expect, 
I’m a fan of Github here) and also deploy Phabricator on top of it. I haven’t 
actually tried that so I’d want to play around with it to make sure this works 
how I believe it does, but it may be a good way to enable both pull requests 
(and the web editors that tend to come with those workflows) for easier changes 
and a different tool for more invasive changes.

Terry spoke about CLAs, which is an interesting thing too, because Phabricator 
itself has some workflow around this, I believe; at least one of the examples in 
their tour is setting up some sort of notification about requiring a CLA. It 
even has a built-in feature for signing legal documents (although I’m not sure 
if that’s acceptable to the PSF; we’d need to ask VanL, I suspect). Another neat 
feature, although I’m not sure we’re actually set up to take advantage of it, is 
that if you run test coverage numbers you can report them directly inline with 
the review/diff to see which lines of the patch are being exercised by a test 
and which are not.

I’m not sure if it’s actually workable for us, but it probably should be 
explored a little bit to see if it is and if it might be a good solution. They 
also have a copy of it running on which they develop Phabricator itself 
(https://secure.phabricator.com/), though they also accept pull requests on 
GitHub.

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA



Re: [Python-Dev] My thinking about the development process

2014-12-06 Thread Brett Cannon
On Fri Dec 05 2014 at 8:31:27 PM R. David Murray 
wrote:

> On Fri, 05 Dec 2014 15:17:35 -0700, Eric Snow 
> wrote:
> > On Fri, Dec 5, 2014 at 1:04 PM, Brett Cannon  wrote:
> > > We don't exactly have a ton of people
> > > constantly going "I'm so bored because everything for Python's
> development
> > > infrastructure gets sorted so quickly!" A perfect example is that R.
> David
> > > Murray came up with a nice update for our workflow after PyCon but
> then ran
> > > out of time after mostly defining it and nothing ever became of it
> (maybe we
> > > can rectify that at PyCon?). Eric Snow has pointed out how he has
> written
> > > similar code for pulling PRs from I think GitHub to another code review
> > > tool, but that doesn't magically make it work in our infrastructure or
> get
> > > someone to write it and help maintain it (no offense, Eric).
> >
> > None taken.  I was thinking the same thing when I wrote that. :)
> >
> > >
> > > IOW our infrastructure can do anything, but it can't run on hopes and
> > > dreams. Commitments from many people to making this happen by a certain
> > > deadline will be needed so as to not allow it to drag on forever.
> People
> > > would also have to commit to continued maintenance to make this viable
> > > long-term.
>
> The biggest blocker to my actually working the proposal I made was that
> people wanted to see it in action first, which means I needed to spin up
> a test instance of the tracker and do the work there.  That barrier to
> getting started was enough to keep me from getting started...even though
> the barrier isn't *that* high (I've done it before, and it is easier now
> than it was when I first did it), it is still a *lot* higher than
> checking out CPython and working on a patch.
>
> That's probably the biggest issue with *anyone* contributing to tracker
> maintenance, and if we could solve that, I think we could get more
> people interested in helping maintain it.  We need the equivalent of
> dev-in-a-box for setting up for testing proposed changes to
> bugs.python.org, but including some standard way to get it deployed so
> others can look at a live system running the change in order to review
> the patch.
>

Maybe it's just me and all the Docker/Rocket hoopla that's occurred over
the past week, but this just screams "container" to me which would make
getting a test instance set up dead simple.


>
> Maybe our infrastructure folks will have a thought or two about this?
> I'm willing to put some work into this if we can figure out what
> direction to head in.  It could well be tied in to moving
> bugs.python.org in with the rest of our infrastructure, something I know
> Donald has been noodling with off and on; and I'm willing to help with
> that as well.
>
> It sounds like being able to propose and test changes to our Roundup
> instance (and test other services talking to Roundup, before deploying
> them for real) is going to be critical to improving our workflow no
> matter what other decisions are made, so we need to make it easier to
> do.
>
> In other words, it seems like the key to improving the productivity of
> our CPython patch workflow is to improve the productivity of the patch
> workflow for our key workflow resource, bugs.python.org.
>

Quite possible, and since no one is suggesting we drop bugs.python.org, it's
a worthy goal to have regardless of which PEP gets accepted.


Re: [Python-Dev] My thinking about the development process

2014-12-06 Thread Brett Cannon
On Sat Dec 06 2014 at 2:53:43 AM Terry Reedy  wrote:

> On 12/5/2014 3:04 PM, Brett Cannon wrote:
>
> > 1. Contributor clones a repository from hg.python.org
> > 2. Contributor makes desired changes
> > 3. Contributor generates a patch
> > 4. Contributor creates account on bugs.python.org and signs the
> > [contributor
> > agreement](https://www.python.org/psf/contrib/contrib-form/)
>
> I would like to have the process of requesting and enforcing the signing
> of CAs automated.
>

So would I.


>
> > 4. Contributor creates an issue on bugs.python.org (if one does not already
> > exist) and uploads a patch
>
> I would like to have patches rejected, or at least held up, until a CA
> is registered.  For this to work, a signed CA should be immediately
> registered on the tracker, at least as 'pending'.  It now can take a
> week or more to go through human processing.
>

This is one of the reasons I didn't want to create an issue magically from
PRs initially. I think it's totally doable with some coding.

-Brett


>
>
> > 5. Core developer evaluates patch, possibly leaving comments through our
> > [custom version of Rietveld](http://bugs.python.org/review/)
> > 6. Contributor revises patch based on feedback and uploads new patch
> > 7. Core developer downloads patch and applies it to a clean clone
> > 8. Core developer runs the tests
> > 9. Core developer does one last `hg pull -u` and then commits the
> > changes to various branches
>
> --
> Terry Jan Reedy
>


Re: [Python-Dev] My thinking about the development process

2014-12-06 Thread Brett Cannon
On Fri Dec 05 2014 at 5:17:35 PM Eric Snow 
wrote:

> Very nice, Brett.
>

Thanks!


>
> On Fri, Dec 5, 2014 at 1:04 PM, Brett Cannon  wrote:
> > And we can't forget the people who help keep all of this running as well.
> > There are those that manage the SSH keys, the issue tracker, the review
> > tool, hg.python.org, and the email system that lets us know when stuff
> > happens on any of these other systems. The impact on them needs to also
> be
> > considered.
>
> It sounds like Guido would rather as much of this was done by a
> provider rather than relying on volunteers.  That makes sense though
> there are concerns about control of certain assets.  However, that
> applies only to some, like hg.python.org.
>

Sure, but that's also the reason Guido stuck me with the job of being the
Great Decider on this. =) I have a gut feeling of how much support would
need to be committed in order to consider things covered well enough (I
can't give a number because it will vary depending on who steps forward;
someone who I know and trust to stick around is worth more than someone who
kindly steps forward and has never volunteered, but that's just because I
don't know the stranger and not because I don't want people who are unknown
on python-dev to step forward innately).


>
> >
> > ## Contributors
> > I see two scenarios for contributors to optimize for. There's the simple
> > spelling mistake patches and then there's the code change patches. The
> > former is the kind of thing that you can do in a browser without much
> effort
> > and should be a no-brainer commit/reject decision for a core developer.
> This
> > is what the GitHub/Bitbucket camps have been promoting their solution for
> > solving while leaving the cpython repo alone. Unfortunately the bulk of
> our
> > documentation is in the Doc/ directory of cpython. While it's nice to
> think
> > about moving the devguide, peps, and even breaking out the tutorial to
> repos
> > hosting on Bitbucket/GitHub, everything else is in Doc/ (language
> reference,
> > howtos, stdlib, C API, etc.). So unless we want to completely break all
> of
> > Doc/ out of the cpython repo and have core developers willing to edit two
> > separate repos when making changes that impact code **and** docs, moving
> > only a subset of docs feels like a band-aid solution that ignores the
> big,
> > white elephant in the room: the cpython repo, where a bulk of patches are
> > targeting.
>
> With your ideal scenario this would be a moot point, right?  There
> would be no need to split out doc-related repos.
>

Exactly, which is why I stressed we can't simply ignore the cpython repo.
If someone is bored they could run an analysis on the various repos,
calculate the number of contributions from outsiders -- maybe check the logs
for the word "Thank" since we typically say "Thanks to ..." -- and see how
many external contributions we got in all the repos, with a detailed
breakdown for Doc/.
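A rough sketch of that kind of analysis (assuming a local Mercurial clone and
`hg` on the PATH; counting commit messages that contain "Thanks to" is only a
crude proxy for external contributions):

    import subprocess

    SEP = "\n=== COMMIT BOUNDARY ===\n"  # unlikely to appear in a commit message

    def count_thanks(repo_path):
        # Ask Mercurial for every changeset description, separated by SEP.
        out = subprocess.check_output(
            ["hg", "log", "--template", "{desc}" + SEP], cwd=repo_path)
        messages = [m for m in out.decode("utf-8", "replace").split(SEP)
                    if m.strip()]
        thanked = sum(1 for msg in messages if "Thanks to" in msg)
        return thanked, len(messages)

    if __name__ == "__main__":
        hits, total = count_thanks(".")  # run from inside a clone, e.g. cpython
        print("%d of %d commit messages mention 'Thanks to'" % (hits, total))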


>
> >
> > For the code change patches, contributors need an easy way to get a hold
> of
> > the code and get their changes to the core developers. After that it's
> > things like letting contributors knowing that their patch doesn't apply
> > cleanly, doesn't pass tests, etc.
>
> This is probably more work than it seems at first.
>

Maybe, maybe not. Depends on what external services someone wants to rely
on. E.g., could a webhook with some CI company be used so that it's more
"grab the patch from here and run the tests" vs. us having to manage the
whole CI infrastructure? Just because a home-grown solution requires
developers and maintenance doesn't mean the maintenance couldn't mostly be
maintaining the code that interfaces with an external service provider
instead of providing the service ourselves from scratch. And don't forget
companies will quite possibly donate services if you ask, or the PSF could
pay for some things.
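A minimal sketch of what that kind of webhook bridge could look like; the CI
endpoint, the payload fields, and the port are all made up for illustration:

    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.request import Request, urlopen

    CI_TRIGGER_URL = "https://ci.example.invalid/api/run"  # hypothetical endpoint

    class PatchHook(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            event = json.loads(self.rfile.read(length).decode("utf-8"))
            # Forward "grab the patch from here and run the tests" to the CI.
            body = json.dumps({"patch_url": event["patch_url"],
                               "branch": event.get("branch", "default")})
            urlopen(Request(CI_TRIGGER_URL, data=body.encode("utf-8"),
                            headers={"Content-Type": "application/json"}))
            self.send_response(202)
            self.end_headers()

    if __name__ == "__main__":
        HTTPServer(("", 8000), PatchHook).serve_forever()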


>
> > As of right now getting the patch into the
> > issue tracker is a bit manual but nothing crazy. The real issue in this
> > scenario is core developer response time.
> >
> > ## Core developers
> > There is a finite amount of time that core developers get to contribute
> to
> > Python and it fluctuates greatly. This means that if a process can be
> found
> > which allows core developers to spend less time doing mechanical work and
> > more time doing things that can't be automated -- namely code reviews --
> > then the throughput of patches being accepted/rejected will increase.
> This
> > also impacts any increased patch submission rate that comes from
> improving
> > the situation for contributors because if the throughput doesn't change
> then
> > there will simply be more patches sitting in the issue tracker and that
> > doesn't benefit anyone.
>
> This is the key concern I have with only addressing the contributor
> side of things.  I'm all for increasing contributions, but not if they
> are just going to rot on the tracker and we end up with disillusioned
> contributors

Re: [Python-Dev] My thinking about the development process

2014-12-06 Thread Donald Stufft

> On Dec 6, 2014, at 9:11 AM, Brett Cannon  wrote:
> 
> 
> 
> On Fri Dec 05 2014 at 8:31:27 PM R. David Murray wrote:
> On Fri, 05 Dec 2014 15:17:35 -0700, Eric Snow wrote:
> > On Fri, Dec 5, 2014 at 1:04 PM, Brett Cannon wrote:
> > > We don't exactly have a ton of people
> > > constantly going "I'm so bored because everything for Python's development
> > > infrastructure gets sorted so quickly!" A perfect example is that R. David
> > > Murray came up with a nice update for our workflow after PyCon but then 
> > > ran
> > > out of time after mostly defining it and nothing ever became of it (maybe 
> > > we
> > > can rectify that at PyCon?). Eric Snow has pointed out how he has written
> > > similar code for pulling PRs from I think GitHub to another code review
> > > tool, but that doesn't magically make it work in our infrastructure or get
> > > someone to write it and help maintain it (no offense, Eric).
> >
> > None taken.  I was thinking the same thing when I wrote that. :)
> >
> > >
> > > IOW our infrastructure can do anything, but it can't run on hopes and
> > > dreams. Commitments from many people to making this happen by a certain
> > > deadline will be needed so as to not allow it to drag on forever. People
> > > would also have to commit to continued maintenance to make this viable
> > > long-term.
> 
> The biggest blocker to my actually working the proposal I made was that
> people wanted to see it in action first, which means I needed to spin up
> a test instance of the tracker and do the work there.  That barrier to
> getting started was enough to keep me from getting started...even though
> the barrier isn't *that* high (I've done it before, and it is easier now
> than it was when I first did it), it is still a *lot* higher than
> checking out CPython and working on a patch.
> 
> That's probably the biggest issue with *anyone* contributing to tracker
> maintenance, and if we could solve that, I think we could get more
> people interested in helping maintain it.  We need the equivalent of
> dev-in-a-box for setting up for testing proposed changes to
> bugs.python.org , but including some standard way to 
> get it deployed so
> others can look at a live system running the change in order to review
> the patch.
> 
> Maybe it's just me and all the Docker/Rocket hoopla that's occurred over the 
> past week, but this just screams "container" to me which would make getting a 
> test instance set up dead simple.

Heh, one of my thoughts on deploying the bug tracker into production was via a 
container, especially since we have multiple instances of it. I got sidetracked 
on getting the rest of the infrastructure readier for a web application and 
some improvements there, as well as getting a big PostgreSQL database cluster 
set up (2x 15GB RAM servers running in Primary/Replica mode). The downside of 
course is that afaik Docker is a lot harder to use on Windows, and to some 
degree OS X, than on Linux. However, if the tracker could be deployed as a 
Docker image that would make the infrastructure side a ton easier. I also have 
control over the python/ organization on Docker Hub for whatever uses we have 
for it.

Unrelated to the tracker:

Something that any PEP should consider is security, particularly that of 
running the tests. Currently we have a buildbot fleet that checks out the code 
and executes the test suite (aka code). A problem that any pre-merge test 
runner needs to solve is that unlike a post-merge runner, which will only run 
code that has been committed by a committer, a pre-merge runner will run code 
that _anybody_ has submitted. This means that it’s not merely enough to simply 
trigger a build in our buildbot fleet prior to the merge happening as that 
would allow anyone to execute arbitrary code there. As far as I’m aware there 
are two solutions to this problem in common use, either use throw away 
environments/machines/containers that isolate the running code and then get 
destroyed after each test run, or don’t run the pre-merge tests immediately 
unless it’s from a “trusted” person and for “untrusted” or “unknown” people 
require a “trusted” person to give the OK for each test run. 
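A minimal sketch of the second option (gating runs from unknown submitters on a
trusted person's OK); the names and the in-memory queue are purely illustrative:

    # Hypothetical set of people whose submissions may run immediately.
    TRUSTED = {"core.dev.one", "core.dev.two"}

    pending_approval = []  # runs waiting for a trusted person's OK

    def submit_test_run(submitter, patch_id, run_tests):
        """Run the pre-merge tests at once for trusted submitters; otherwise
        queue the run until a trusted person approves it."""
        if submitter in TRUSTED:
            run_tests(patch_id)
        else:
            pending_approval.append((submitter, patch_id))

    def approve(approver, patch_id, run_tests):
        if approver not in TRUSTED:
            raise PermissionError("only trusted people may approve test runs")
        for i, (submitter, pid) in enumerate(pending_approval):
            if pid == patch_id:
                del pending_approval[i]
                run_tests(pid)
                return
        raise LookupError("no pending run for patch %s" % patch_id)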

The throwaway machine solution is obviously a much nicer experience for the 
“untrusted” or “unknown” users since they don’t require any intervention to get 
their tests run, which means that they can see if their tests pass, fix things, 
and then see if that fixes it much quicker. The obvious downside here is that 
it’s more effort to do, and there’s the availability of throwaway environments 
for all the systems we support. Linux, most (all?) of the BSDs, and Windows are 
pretty easy here since there are cloud offerings for them that can be used to 
spin up a temporary environment, run tests, and then delete it. OS X is a 
problem because afaik you can only virtualize OS X on Apple hardware and I’m 
not aware of any cloud provider that offers metered access to OS X hosts.

Re: [Python-Dev] My thinking about the development process

2014-12-06 Thread Nick Coghlan
On 7 December 2014 at 01:07, Donald Stufft  wrote:
> A likely solution is to use a pre-merge test runner for the systems that we
> can isolate which will give a decent indication if the tests are going to
> pass across the entire supported matrix or not and then continue to use the
> current post-merge test runner to handle testing the esoteric systems that
> we can’t work into the pre-merge testing.

Yep, that's exactly the approach I had in mind for this problem.

Cheers,
Nick.

-- 
Nick Coghlan   |   [email protected]   |   Brisbane, Australia


Re: [Python-Dev] My thinking about the development process

2014-12-06 Thread Nick Coghlan
On 7 December 2014 at 00:11, Brett Cannon  wrote:
> On Fri Dec 05 2014 at 8:31:27 PM R. David Murray 
> wrote:
>>
>> That's probably the biggest issue with *anyone* contributing to tracker
>> maintenance, and if we could solve that, I think we could get more
>> people interested in helping maintain it.  We need the equivalent of
>> dev-in-a-box for setting up for testing proposed changes to
>> bugs.python.org, but including some standard way to get it deployed so
>> others can look at a live system running the change in order to review
>> the patch.
>
>
> Maybe it's just me and all the Docker/Rocket hoopla that's occurred over the
> past week, but this just screams "container" to me which would make getting
> a test instance set up dead simple.

It's not just you (and Graham Dumpleton has even been working on
reference images for Apache/mod_wsgi hosting of Python web services:
http://blog.dscpl.com.au/2014/12/hosting-python-wsgi-applications-using.html)

You still end up with Vagrant as a required element for Windows and
Mac OS X, but that's pretty much a given for a lot of web service
development these days.

Cheers,
Nick.

-- 
Nick Coghlan   |   [email protected]   |   Brisbane, Australia


Re: [Python-Dev] My thinking about the development process

2014-12-06 Thread Brett Cannon
On Sat Dec 06 2014 at 10:07:50 AM Donald Stufft  wrote:

>
> On Dec 6, 2014, at 9:11 AM, Brett Cannon  wrote:
>
>
>
> On Fri Dec 05 2014 at 8:31:27 PM R. David Murray 
> wrote:
>
>> On Fri, 05 Dec 2014 15:17:35 -0700, Eric Snow <
>> [email protected]> wrote:
>> > On Fri, Dec 5, 2014 at 1:04 PM, Brett Cannon  wrote:
>> > > We don't exactly have a ton of people
>> > > constantly going "I'm so bored because everything for Python's
>> development
>> > > infrastructure gets sorted so quickly!" A perfect example is that R.
>> David
>> > > Murray came up with a nice update for our workflow after PyCon but
>> then ran
>> > > out of time after mostly defining it and nothing ever became of it
>> (maybe we
>> > > can rectify that at PyCon?). Eric Snow has pointed out how he has
>> written
>> > > similar code for pulling PRs from I think GitHub to another code
>> review
>> > > tool, but that doesn't magically make it work in our infrastructure
>> or get
>> > > someone to write it and help maintain it (no offense, Eric).
>> >
>> > None taken.  I was thinking the same thing when I wrote that. :)
>> >
>> > >
>> > > IOW our infrastructure can do anything, but it can't run on hopes and
>> > > dreams. Commitments from many people to making this happen by a
>> certain
>> > > deadline will be needed so as to not allow it to drag on forever.
>> People
>> > > would also have to commit to continued maintenance to make this viable
>> > > long-term.
>>
>> The biggest blocker to my actually working the proposal I made was that
>> people wanted to see it in action first, which means I needed to spin up
>> a test instance of the tracker and do the work there.  That barrier to
>> getting started was enough to keep me from getting started...even though
>> the barrier isn't *that* high (I've done it before, and it is easier now
>> than it was when I first did it), it is still a *lot* higher than
>> checking out CPython and working on a patch.
>>
>> That's probably the biggest issue with *anyone* contributing to tracker
>> maintenance, and if we could solve that, I think we could get more
>> people interested in helping maintain it.  We need the equivalent of
>> dev-in-a-box for setting up for testing proposed changes to
>> bugs.python.org, but including some standard way to get it deployed so
>> others can look at a live system running the change in order to review
>> the patch.
>>
>
> Maybe it's just me and all the Docker/Rocket hoopla that's occurred over
> the past week, but this just screams "container" to me which would make
> getting a test instance set up dead simple.
>
>
> Heh, one of my thoughts on deploying the bug tracker into production was
> via a container, especially since we have multiple instances of it. I got
> side tracked on getting the rest of the infrastructure readier for a web
> application and some improvements there as well as getting a big postgresql
> database cluster set up (2x 15GB RAM servers running in Primary/Replica
> mode). The downside of course to this is that afaik Docker is a lot harder
> to use on Windows and to some degree OS X than linux. However if the
> tracker could be deployed as a docker image that would make the
> infrastructure side a ton easier. I also have control over the python/
> organization on Docker Hub too for whatever uses we have for it.
>

I think it's something worth thinking about, but like you I don't know if
the containers work on OS X or Windows (I don't work with containers
personally).


>
> Unrelated to the tracker:
>
> Something that any PEP should consider is security, particularly that of
> running the tests. Currently we have a buildbot fleet that checks out the
> code and executes the test suite (aka code). A problem that any pre-merge
> test runner needs to solve is that unlike a post-merge runner, which will
> only run code that has been committed by a committer, a pre-merge runner
> will run code that _anybody_ has submitted. This means that it’s not merely
> enough to simply trigger a build in our buildbot fleet prior to the merge
> happening as that would allow anyone to execute arbitrary code there. As
> far as I’m aware there are two solutions to this problem in common use,
> either use throw away environments/machines/containers that isolate the
> running code and then get destroyed after each test run, or don’t run the
> pre-merge tests immediately unless it’s from a “trusted” person and for
> “untrusted” or “unknown” people require a “trusted” person to give the OK
> for each test run.
>
> The throw away machine solution is obviously much nicer experience for the
> “untrusted” or “unknown” users since they don’t require any intervention to
> get their tests run which means that they can see if their tests pass, fix
> things, and then see if that fixes it much quicker. The obvious downside
> here is that it’s more effort to do that and the availability of throw away
> environments for all the systems we support. Linux, most (all?) of the
> BSDs,

Re: [Python-Dev] My thinking about the development process

2014-12-06 Thread Brett Cannon
On Sat Dec 06 2014 at 10:30:54 AM Nick Coghlan  wrote:

> On 7 December 2014 at 00:11, Brett Cannon  wrote:
> > On Fri Dec 05 2014 at 8:31:27 PM R. David Murray 
> > wrote:
> >>
> >> That's probably the biggest issue with *anyone* contributing to tracker
> >> maintenance, and if we could solve that, I think we could get more
> >> people interested in helping maintain it.  We need the equivalent of
> >> dev-in-a-box for setting up for testing proposed changes to
> >> bugs.python.org, but including some standard way to get it deployed so
> >> others can look at a live system running the change in order to review
> >> the patch.
> >
> >
> > Maybe it's just me and all the Docker/Rocket hoopla that's occurred over
> the
> > past week, but this just screams "container" to me which would make
> getting
> > a test instance set up dead simple.
>
> It's not just you (and Graham Dumpleton has even been working on
> reference images for Apache/mod_wsgi hosting of Python web services:
> http://blog.dscpl.com.au/2014/12/hosting-python-wsgi-
> applications-using.html)
>
> You still end up with Vagrant as a required element for Windows and
> Mac OS X, but that's pretty much a given for a lot of web service
> development these days.
>

If we need a testbed then we could try it out with a devinabox and see how
it works with new contributors at PyCon. Would be nice to just have Clang,
all the extras for the stdlib, etc. already pulled together for people to
work from.


Re: [Python-Dev] My thinking about the development process

2014-12-06 Thread Donald Stufft

> On Dec 6, 2014, at 10:26 AM, Nick Coghlan  wrote:
> 
> On 7 December 2014 at 01:07, Donald Stufft  wrote:
>> A likely solution is to use a pre-merge test runner for the systems that we
>> can isolate which will give a decent indication if the tests are going to
>> pass across the entire supported matrix or not and then continue to use the
>> current post-merge test runner to handle testing the esoteric systems that
>> we can’t work into the pre-merge testing.
> 
> Yep, that's exactly the approach I had in mind for this problem.
> 

I’m coming around to the idea for pip too, though I’ve been trying to figure 
out a way to do pre-merge testing using isolated environments even for the 
esoteric platforms. One thing that I’d personally greatly appreciate is if this 
whole process made it possible for selected external projects to re-use the 
infrastructure for the harder-to-get platforms. Pip and setuptools in 
particular would make good candidates for this, I think.

---
Donald Stufft
PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA



Re: [Python-Dev] Tracker test instances (was: My thinking about the development process)

2014-12-06 Thread R. David Murray
On Sat, 06 Dec 2014 15:21:46 +, Brett Cannon  wrote:
> On Sat Dec 06 2014 at 10:07:50 AM Donald Stufft  wrote:
> > On Dec 6, 2014, at 9:11 AM, Brett Cannon  wrote:
> >
> >> On Fri Dec 05 2014 at 8:31:27 PM R. David Murray 
> >> wrote:
> >>> That's probably the biggest issue with *anyone* contributing to tracker
> >>> maintenance, and if we could solve that, I think we could get more
> >>> people interested in helping maintain it.  We need the equivalent of
> >>> dev-in-a-box for setting up for testing proposed changes to
> >>> bugs.python.org, but including some standard way to get it deployed so
> >>> others can look at a live system running the change in order to review
> >>> the patch.
> >>
> >> Maybe it's just me and all the Docker/Rocket hoopla that's occurred over
> >> the past week, but this just screams "container" to me which would make
> >> getting a test instance set up dead simple.
> >
> > Heh, one of my thoughts on deploying the bug tracker into production was
> > via a container, especially since we have multiple instances of it. I got
> > side tracked on getting the rest of the infrastructure readier for a web
> > application and some improvements there as well as getting a big postgresql
> > database cluster set up (2x 15GB RAM servers running in Primary/Replica
> > mode). The downside of course to this is that afaik Docker is a lot harder
> > to use on Windows and to some degree OS X than linux. However if the
> > tracker could be deployed as a docker image that would make the
> > infrastructure side a ton easier. I also have control over the python/
> > organization on Docker Hub too for whatever uses we have for it.
> >
> 
> I think it's something worth thinking about, but like you I don't know if
> the containers work on OS X or Windows (I don't work with containers
> personally).

(Had to fix the quoting there, somebody's email program got it wrong.)

For the tracker, being unable to run a test instance on Windows would
likely not be a severe limitation.  Given how few Windows people we get
making contributions to CPython, I'd really rather encourage them to
work there, rather than on the tracker.  OS/X is a bit more problematic,
but it sounds like it is also a bit more doable.

On the other hand, what's the overhead on setting up to use Docker?  If
that task is non-trivial, we're back to having a higher barrier to
entry than running a dev-in-a-box script...

Note also in thinking about setting up a test tracker instance we have
an additional concern: it requires postgres, and needs either a copy of
the full data set (which includes account data/passwords which would
need to be creatively sanitized) or a fairly large test data set.  I'd
prefer a sanitized copy of the real data.
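A back-of-the-envelope sketch of that sanitizing step, meant to be run against
a restored copy of the dump rather than the live database; the table/column
names and the password scheme are guesses at the Roundup schema and would need
checking:

    import psycopg2  # third-party PostgreSQL driver

    def sanitize(dsn):
        conn = psycopg2.connect(dsn)
        with conn, conn.cursor() as cur:
            # Replace every email address with a per-user dummy and every
            # password with a known throwaway value so no real credentials
            # survive in the test instance.
            cur.execute("""
                UPDATE _user
                   SET _address  = 'user' || id::text || '@example.invalid',
                       _password = '{plaintext}testonly'
            """)
        conn.close()

    if __name__ == "__main__":
        sanitize("dbname=roundup_test user=roundup")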

--David


Re: [Python-Dev] My thinking about the development process

2014-12-06 Thread Terry Reedy

On 12/6/2014 10:26 AM, Nick Coghlan wrote:

> On 7 December 2014 at 01:07, Donald Stufft wrote:
>
>> A likely solution is to use a pre-merge test runner for the systems that we
>> can isolate which will give a decent indication if the tests are going to
>> pass across the entire supported matrix or not and then continue to use the
>> current post-merge test runner to handle testing the esoteric systems that
>> we can’t work into the pre-merge testing.
>
> Yep, that's exactly the approach I had in mind for this problem.


Most patches are tested on just one (major) system before being 
committed.  The buildbots confirm that there is no oddball failure 
elsewhere, and there is usually is not.  Testing user submissions on one 
system should usually be enough.


Committers should generally have an idea when wider testing is needed, 
and indeed it would be nice to be able to get wider testing on occasion 
*before* making a commit, without begging on the tracker.


What would be *REALLY* helpful for Idle development (and tkinter, 
turtle, and turtle demo testing) would be if there were a 
test.support.screenshot function that would take a screenshot and email it 
to the tracker or developer.  There would also need to be at least one 
(stable) *nix test machine that actually runs tkinter code, and the 
ability to test on OSX with its different graphics options.  Properly 
testing Idle tkinter code that affects what users see is a real bottleneck.
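A very rough sketch of what such a helper might look like; Pillow's ImageGrab 
is an external dependency that (at least historically) only captures the 
screen on Windows and OS X, which is exactly the platform-coverage problem 
above, and the addresses and SMTP host are placeholders:

    import smtplib
    from email.mime.image import MIMEImage
    from email.mime.multipart import MIMEMultipart
    from PIL import ImageGrab  # external dependency (Pillow), not in the stdlib

    def screenshot(test_name, recipient="[email protected]",
                   smtp_host="localhost"):
        # Capture the whole screen and save it next to the test run.
        image = ImageGrab.grab()
        path = "%s.png" % test_name
        image.save(path)

        # Mail the capture to whoever needs to eyeball the result.
        msg = MIMEMultipart()
        msg["Subject"] = "screenshot from %s" % test_name
        msg["From"] = "[email protected]"
        msg["To"] = recipient
        with open(path, "rb") as f:
            msg.attach(MIMEImage(f.read(), name=path))
        server = smtplib.SMTP(smtp_host)
        try:
            server.sendmail(msg["From"], [recipient], msg.as_string())
        finally:
            server.quit()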


--
Terry Jan Reedy




Re: [Python-Dev] Tracker test instances (was: My thinking about the development process)

2014-12-06 Thread Nick Coghlan
On 7 December 2014 at 02:11, R. David Murray  wrote:
> For the tracker, being unable to run a test instance on Windows would
> likely not be a severe limitation.  Given how few Windows people we get
> making contributions to CPython, I'd really rather encourage them to
> work there, rather than on the tracker.  OS/X is a bit more problematic,
> but it sounds like it is also a bit more doable.
>
> On the other hand, what's the overhead on setting up to use Docker?  If
> that task is non-trivial, we're back to having a higher barrier to
> entry than running a dev-in-a-box script...
>
> Note also in thinking about setting up a test tracker instance we have
> an additional concern: it requires postgres, and needs either a copy of
> the full data set (which includes account data/passwords which would
> need to be creatively sanitized) or a fairly large test data set.  I'd
> prefer a sanitized copy of the real data.

If you're OK with git as an entry requirement, then something like the
OpenShift free tier may be a better place for test instances, rather
than local hosting - with an appropriate quickstart, creating your own
tracker instance can be a single click operation on a normal
hyperlink. That also has the advantage of making it easy to share
changes to demonstrate UI updates. (OpenShift doesn't support running
containers directly yet, but that capability is being worked on in the
upstream OpenShift Origin open source project)

Cheers,
Nick.

-- 
Nick Coghlan   |   [email protected]   |   Brisbane, Australia


Re: [Python-Dev] My thinking about the development process

2014-12-06 Thread Wes Turner
On Sat, Dec 6, 2014 at 8:01 AM, Donald Stufft  wrote:

>
>
> One potential solution is Phabricator (http://phabricator.org) which is a
> gerrit like tool except it also works with Mercurial. It is a fully open
> source platform though it works on a “patch” bases rather than a pull
> request basis.
>

I've been pleasantly unsurprised with the ReviewBoard CLI tools (RBtools):

* https://www.reviewboard.org/docs/rbtools/dev/
* https://www.reviewboard.org/docs/codebase/dev/contributing-patches/
* https://www.reviewboard.org/docs/manual/2.0/users/

ReviewBoard supports Markdown, {Git, Mercurial, Subversion, ... },
full-text search

* https://wiki.jenkins-ci.org/display/JENKINS/Reviewboard+Plugin
* [ https://wiki.jenkins-ci.org/display/JENKINS/Selenium+Plugin ]
* https://github.com/saltstack/salt-testing/blob/develop/salttesting/jenkins.py
  * GetPullRequestAction
  * https://wiki.jenkins-ci.org/display/JENKINS/saltstack-plugin (spin up
an instance)
  * https://github.com/saltstack-formulas/jenkins-formula
  * https://github.com/saltstack/salt-jenkins




> Terry spoke about CLAs, which is an interesting thing too, because
> phabricator itself has some workflow around this I believe, at least one of
> the examples in their tour is setting up some sort of notification about
> requiring a CLA. It even has a built in thing for signing legal documents
> (although I’m not sure if that’s acceptable to the PSF, we’d need to ask
> VanL I suspect). Another neat feature, although I’m not sure we’re actually
> setup to take advantage of it, is that if you run test coverage numbers you
> can report that directly inline with the review / diff to see what lines of
> the patch are being exercised by a test or not.
>

AFAIU, these are not (yet) features of ReviewBoard (which is written in
Python).


>
> I’m not sure if it’s actually workable for us but it probably should be
> explored a little bit to see if it is and if it might be a good solution.
> They also have a copy of it running which they develop phabricator itself
> on (https://secure.phabricator.com/) though they also accept pull
> requests on github.
>

What a good looking service.


Re: [Python-Dev] My thinking about the development process

2014-12-06 Thread Wes Turner
On Sat, Dec 6, 2014 at 9:07 AM, Donald Stufft  wrote:

>
> Heh, one of my thoughts on deploying the bug tracker into production was
> via a container, especially since we have multiple instances of it. I got
> side tracked on getting the rest of the infrastructure readier for a web
> application and some improvements there as well as getting a big postgresql
> database cluster set up (2x 15GB RAM servers running in Primary/Replica
> mode). The downside of course to this is that afaik Docker is a lot harder
> to use on Windows and to some degree OS X than linux. However if the
> tracker could be deployed as a docker image that would make the
> infrastructure side a ton easier. I also have control over the python/
> organization on Docker Hub too for whatever uses we have for it.
>

Are you referring to https://registry.hub.docker.com/repos/python/ ?

IPython / Jupyter have some useful Docker images:

* https://registry.hub.docker.com/repos/ipython/
* https://registry.hub.docker.com/repos/jupyter/

CI integration with roundup seems to be the major gap here:

* https://wiki.jenkins-ci.org/display/JENKINS/Docker+Plugin
* https://wiki.jenkins-ci.org/display/JENKINS/saltstack-plugin
* https://github.com/saltstack-formulas/docker-formula



>
> Unrelated to the tracker:
>
> Something that any PEP should consider is security, particularly that of
> running the tests. Currently we have a buildbot fleet that checks out the
> code and executes the test suite (aka code). A problem that any pre-merge
> test runner needs to solve is that unlike a post-merge runner, which will
> only run code that has been committed by a committer, a pre-merge runner
> will run code that _anybody_ has submitted. This means that it’s not merely
> enough to simply trigger a build in our buildbot fleet prior to the merge
> happening as that would allow anyone to execute arbitrary code there. As
> far as I’m aware there are two solutions to this problem in common use,
> either use throw away environments/machines/containers that isolate the
> running code and then get destroyed after each test run, or don’t run the
> pre-merge tests immediately unless it’s from a “trusted” person and for
> “untrusted” or “unknown” people require a “trusted” person to give the OK
> for each test run.
>
> The throw away machine solution is obviously much nicer experience for the
> “untrusted” or “unknown” users since they don’t require any intervention to
> get their tests run which means that they can see if their tests pass, fix
> things, and then see if that fixes it much quicker. The obvious downside
> here is that it’s more effort to do that and the availability of throw away
> environments for all the systems we support. Linux, most (all?) of the
> BSDs, and Windows are pretty easy here since there are cloud offerings for
> them that can be used to spin up a temporary environment, run tests, and
> then delete it. OS X is a problem because afaik you can only virtualize OS
> X on Apple hardware and I’m not aware of any cloud provider that offers
> metered access to OS X hosts. The more esoteric systems like AIX and what
> not are likely an even bigger problem in this regard since I’m unsure of
> the ability to get virtualized instances of these at all. It may be
> possible to build our own images of these on a cloud provider assuming that
> their licenses allow that.
>
> The other solution would work easier with our current buildbot fleet since
> you’d just tell it to run some tests but you’d wait until a “trusted”
> person gave the OK before you did that.
>
> A likely solution is to use a pre-merge test runner for the systems that
> we can isolate which will give a decent indication if the tests are going
> to pass across the entire supported matrix or not and then continue to use
> the current post-merge test runner to handle testing the esoteric systems
> that we can’t work into the pre-merge testing.
>
> ---
> Donald Stufft
> PGP: 7C6B 7C5D 5E2B 6356 A926 F04F 6E3C BCE9 3372 DCFA
>
>
>
>


Re: [Python-Dev] My thinking about the development process

2014-12-06 Thread Wes Turner
On Sat, Dec 6, 2014 at 7:27 PM, Wes Turner  wrote:

>
>
> On Sat, Dec 6, 2014 at 9:07 AM, Donald Stufft  wrote:
>
>>
>> Heh, one of my thoughts on deploying the bug tracker into production was
>> via a container, especially since we have multiple instances of it. I got
>> side tracked on getting the rest of the infrastructure readier for a web
>> application and some improvements there as well as getting a big postgresql
>> database cluster set up (2x 15GB RAM servers running in Primary/Replica
>> mode). The downside of course to this is that afaik Docker is a lot harder
>> to use on Windows and to some degree OS X than linux. However if the
>> tracker could be deployed as a docker image that would make the
>> infrastructure side a ton easier. I also have control over the python/
>> organization on Docker Hub too for whatever uses we have for it.
>>
>
> Are you referring to https://registry.hub.docker.com/repos/python/ ?
>
> IPython / Jupyter have some useful Docker images:
>
> * https://registry.hub.docker.com/repos/ipython/
> * https://registry.hub.docker.com/repos/jupyter/
>
> CI integration with roundup seems to be the major gap here:
>
> * https://wiki.jenkins-ci.org/display/JENKINS/Docker+Plugin
> * https://wiki.jenkins-ci.org/display/JENKINS/saltstack-plugin
> * https://github.com/saltstack-formulas/docker-formula
>

ShiningPanda supports virtualenv and tox, but I don't know how well suited it
would be for fail-fast CPython testing across a grid/graph:

* https://wiki.jenkins-ci.org/display/JENKINS/ShiningPanda+Plugin
* https://wiki.jenkins-ci.org/display/JENKINS/Matrix+Project+Plugin

The branch merging workflows of
https://datasift.github.io/gitflow/IntroducingGitFlow.html (hotfix/name,
feature/name, release/name) are surely portable across VCS systems.