Re: [Python-Dev] To reduce Python "application" startup time

2017-09-06 Thread Neil Schemenauer
INADA Naoki  wrote:
> Current `python -v` is not very useful for optimizing imports.
> So I use this patch to profile import time.
> https://gist.github.com/methane/e688bb31a23bcc437defcea4b815b1eb

I have implemented DTrace probes that do almost the same thing.
Your patch is better in that it does not require an OS with DTrace
or SystemTap.  The DTrace probes are better in that they can be a
part of the standard Python build.

https://github.com/nascheme/cpython/tree/dtrace-module-import

DTrace script:

https://gist.github.com/nascheme/c1cece36a3369926ee93cecc3d024179

Pretty printer for script output (very minimal):

https://gist.github.com/nascheme/0bff5c49bb6b518f5ce23a9aea27f14b
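For readers without DTrace/SystemTap and without the patch above, the same
kind of measurement can be roughly approximated in pure Python by wrapping
builtins.__import__.  This is only a sketch of the idea (the helper names
are mine, not from any of the gists above), and the reported time for a
module includes the time spent importing its children:

```
import builtins
import time

_original_import = builtins.__import__
_depth = 0

def _timed_import(name, *args, **kwargs):
    # Note: a parent's time includes the time of its nested imports.
    global _depth
    start = time.perf_counter()
    _depth += 1
    try:
        return _original_import(name, *args, **kwargs)
    finally:
        _depth -= 1
        elapsed_ms = (time.perf_counter() - start) * 1000
        print("%s%s: %.3f ms" % ("  " * _depth, name, elapsed_ms))

builtins.__import__ = _timed_import
```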




Re: [Python-Dev] To reduce Python "application" startup time

2017-09-06 Thread INADA Naoki
> I’m not sure however whether burying imports inside functions (as a kind of 
> poor man’s lazy import) is ultimately going to be satisfying.  First, it’s 
> not natural, it generally violates coding standards (e.g. PEP 8), and can 
> make linters complain.

Of course. I tried to move an import only when (1) it's used by only one
or two of the many functions in the module,
(2) it's relatively heavy, and (3) it's rarely imported from other modules.
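(Concretely, the pattern in question is just the following; the function
and the module being deferred are made up purely for illustration:)

```
def export_csv(rows, path):
    # Deferred ("poor man's lazy") import: the cost of importing csv is
    # only paid if this function is actually called.
    import csv
    with open(path, "w", newline="") as f:
        csv.writer(f).writerows(rows)
```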


>   Second, I think you’ll end up chasing imports all over the stdlib and third 
> party modules in any sufficiently complicated application.

Agreed.  I won't spend much time optimizing the stdlib by moving imports
from the top of a module into inner functions.

I think my import-profiler patch can be polished and committed to Python to help
library maintainers measure import time easily.  (Maybe `python -X import-profile`.)


> Third, I’m not sure that the gains you’ll get won’t just be overwhelmed by 
> lots of other things going on, such as pkg_resources entry point processing, 
> pth file processing, site.py effects, command line processing libraries such 
> as click, and implicitly added distribution exception hooks (e.g. Ubuntu’s 
> apport).

Yes.  I noticed some of them while profiling imports.
For example, an old-style namespace package imports the types module for
types.ModuleType.  The types module imports functools, and functools imports
collections.  So some efforts in CPython (avoiding importing collections and
functools from site) are not worth much when at least one old-style
namespace package is installed.


>
> Many of these can’t be blamed on Python itself, but all can contribute 
> significantly to Python’s apparent start up time.  It’s definitely worth 
> investigating the details of Python import, and a few of us at the core 
> sprint have looked at those numbers and thrown around ideas for improvement, 
> but we’ll need to look at the effects up and down the stack to improve the 
> start up performance for the average Python application.
>

Yes. I totally agree with you.  That's why I use import-profile.patch
on some 3rd-party libraries.

Currently, I have these ideas for optimizing application startup time:

* Faster, or lazily compiled, regular expressions.  (pkg_resources
imports pyparsing, which has a lot of regexes; see the sketch after this list.)
* More usable lazy imports.  (This could be solved by "PEP 549: Instance
Properties (aka: module properties)".)
* Optimize enum creation.
* Faster namedtuple.  (There is a pull request already.)
* Faster ABC.
* Breaking up large import trees in the stdlib.  (PEP 549 may help here too.)
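As an illustration of the first idea, here is a hypothetical sketch of a
lazily compiled regular expression.  `lazy_re_compile` is a made-up name for
illustration, not an existing stdlib or pkg_resources API:

```
import re

class lazy_re_compile:
    """Compile the pattern only on first use, not at import time."""

    def __init__(self, pattern, flags=0):
        self.pattern = pattern
        self.flags = flags
        self._compiled = None

    def __getattr__(self, name):
        # Called for match, search, sub, ...: compile lazily, then delegate.
        if self._compiled is None:
            self._compiled = re.compile(self.pattern, self.flags)
        return getattr(self._compiled, name)

VERSION_RE = lazy_re_compile(r"(\d+)\.(\d+)\.(\d+)")

# Nothing is compiled until the first call:
print(VERSION_RE.match("3.6.2").groups())  # ('3', '6', '2')
```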

Regards,

INADA Naoki  




Re: [Python-Dev] PEP 550 v4

2017-09-06 Thread Elvis Pranskevichus
On Wednesday, September 6, 2017 8:06:36 PM EDT Greg Ewing wrote:
> Nathaniel Smith wrote:
> > The implementation strategy changed radically between v1
> > and v2 because of considerations around generator (not coroutine)
> > semantics. I'm not sure what more it can do to dispel these
> > feelings> 
> > :-).
> 
> I can't say the changes have dispelled any feelings on my part.
> 
> The implementation suggested in the PEP seems very complicated
> and messy. There are garbage collection issues, which it
> proposes using weak references to mitigate. There is also
> apparently some issue with long chains building up and
> having to be periodically collapsed. None of this inspires
> confidence that we have the basic design right.
> 
> My approach wouldn't have any of those problems. The
> implementation would be a lot simpler.

I might have missed something, but your claim doesn't make any sense 
to me.  All you've proposed is to replace the implicit and guaranteed 
push_lc()/pop_lc() around each generator with explicit LC stack 
management.

You *still* need to retain and switch the current stack on every 
generator send() and throw().  Everything else written out in PEP 550 
stays relevant as well.

As for the "long chains building up", your approach is actually much 
worse.  The absence of a guaranteed context fence around generators 
would mean that contextvar context managers will *have* to push LCs 
whether really needed or not.  Consider the following (naive) way of 
computing the N-th Fibonacci number:

def fib(n):
    with decimal.localcontext():
        if n == 0:
            return 0
        elif n == 1:
            return 1
        else:
            return fib(n - 1) + fib(n - 2)

Your proposal can cause the LC stack to grow incessantly even in 
simple cases, and will affect code that doesn't even use generators.

A great deal of effort was put into PEP 550, and the matter discussed 
is far from trivial.  What you see as "complicated and messy" is 
actually the result of us carefully considering the solutions to real-
world problems, and then the implications of those solutions 
(including the worst-case scenarios.)

   Elvis




[Python-Dev] New C API not leaking implementation details: a usable stable ABI

2017-09-06 Thread Victor Stinner
Hi,

I am currently at a CPython sprint 2017 at Facebook. We are discussing
my idea of writing a new C API for CPython hiding implementation
details and replacing macros with function calls.

I wrote a short blog post to explain the issue with the current API, the
link between the API and the ABI, and to give examples of optimizations
which become possible with a "usable" stable ABI:

https://haypo.github.io/new-python-c-api.html


I am sorry, I'm too busy to write a proper PEP. But here is the link
to my old PEP draft written in June. I haven't updated it yet, but
multiple people are asking me for the PEP draft, so here it is!

https://github.com/haypo/misc/blob/master/python/pep_c_api.rst


See also the thread on python-ideas last June, where I first proposed my draft:

[Python-ideas] PEP: Hide implementation details in the C API
https://mail.python.org/pipermail/python-ideas/2017-July/046399.html

This is not a request for comments :-) I will write a proper PEP for that.

Victor


[Python-Dev] Python 3.3.7rc1 now available prior to Python 3.3 end-of-life

2017-09-06 Thread Ned Deily
On behalf of the Python development community and the Python 3.3 release teams, 
I would like to announce the availability of Python 3.3.7rc1, the release 
candidate of Python 3.3.7.  It is a security-fix source-only release.  Python 
3.3.0 was released 5 years ago on 2012-09-29 and has been in security-fix-only 
mode since 2014-03-08.  Per project policy, all support for the 3.3 series of 
releases ends on 2017-09-29, five years after the initial release.  Therefore, 
Python 3.3.7 is expected to be the final release of any kind for the 3.3 series.

After 2017-09-29, **we will no longer accept bug reports nor provide fixes of 
any kind for Python 3.3.x**; of course, third-party distributors of Python 
3.3.x may choose to offer their own extended support.  Because 3.3.x has long 
been in security-fix mode, 3.3.7 may no longer build correctly on all current 
operating system releases and some tests may fail. If you are still using 
Python 3.3.x, we **strongly** encourage you to upgrade to a more recent, fully 
supported version of Python 3; see https://www.python.org/downloads/.  If you 
are still using your own build of Python 3.3.x, please report any critical 
issues with 3.3.7rc1 to the Python bug tracker prior to 2017-09-18, the 
expected release date for Python 3.3.7 final.  Even better, use the time to 
upgrade to Python 3.6.x!

Thank you to everyone who has contributed to the success of Python 3.3.x over 
the past years!

You can find Python 3.3.7rc1 here:
https://www.python.org/downloads/release/python-337rc1/ 

--
  Ned Deily
  n...@python.org



Re: [Python-Dev] PEP 553: Built-in debug()

2017-09-06 Thread Barry Warsaw
On Sep 6, 2017, at 16:55, Fernando Perez  wrote:
> 
> If I may suggest a small API tweak, I think it would be useful if 
> breakpoint() accepted an optional header argument. In IPython, the equivalent 
> for non-postmortem debugging is IPython.embed, which can be given a header. 
> This is useful to provide the user with some information about perhaps where 
> the breakpoint is coming from, relevant data they might want to look at, etc:
> 
> ```
> from IPython import embed
> 
> def f(x=10):
>   y = x+2
>   embed(header="in f")
>   return y
> 
> x = 20
> print(f(x))
> embed(header="Top level")
> ```
> 
> I understand in most cases these are meant to be deleted right after usage 
> and the author is likely to have a text editor open next to the terminal 
> where they're debugging.  But still, I've found myself putting multiple such 
> calls in a code to look at what's going on in different parts of the 
> execution stack, and it can be handy to have a bit of information to get your 
> bearings.
> 
> Just a thought...

Thanks Fernando, this is exactly the kind of feedback from other debuggers that 
I’m looking for.  It certainly sounds like a handy feature; I’ve found myself 
wanting something like that from pdb from time to time.

The PEP has an open issue regarding breakpoint() taking *args and **kws, which 
would just be passed through the call stack.  It sounds like you’d be in favor 
of that enhancement.
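For the record, a hook written against the sys.breakpointhook() interface the
PEP proposes could support your header idea roughly like this.  This is only
a sketch: `my_hook` is an invented name, and the keyword-passing behavior is
exactly the open issue mentioned above, not something the PEP guarantees:

```
import pdb
import sys

def my_hook(*args, header=None, **kws):
    # Whatever breakpoint() is called with would arrive here unchanged.
    if header is not None:
        print(header)
    pdb.Pdb().set_trace(sys._getframe(1))

sys.breakpointhook = my_hook

# A call site would then look something like:
#     breakpoint(header="in f")
```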

Cheers,
-Barry





Re: [Python-Dev] PEP 550 v4

2017-09-06 Thread Yury Selivanov
On Wed, Sep 6, 2017 at 5:06 PM, Greg Ewing  wrote:
> Nathaniel Smith wrote:
>>
>> The implementation strategy changed radically between v1
>> and v2 because of considerations around generator (not coroutine)
>> semantics. I'm not sure what more it can do to dispel these feelings
>> :-).
>
>
> I can't say the changes have dispelled any feelings on my part.
>
> The implementation suggested in the PEP seems very complicated
> and messy. There are garbage collection issues, which it
> proposes using weak references to mitigate.

"messy" and "complicated" doesn't sound like a valuable feedback :(

There are no "garbage collection issues", sorry.  The issue that we
use weak references for is the same issue why threading.local() uses
them:

def foo():
 var = ContextVar()
 var.set(1)

for _ in range(10**6): foo()

If 'var' is strongly referenced, we would have a bunch of them.

> There is also
> apparently some issue with long chains building up and
> having to be periodically collapsed. None of this inspires
> confidence that we have the basic design right.
>
> My approach wouldn't have any of those problems. The
> implementation would be a lot simpler.

Cool.

Yury


Re: [Python-Dev] PEP 550 v4

2017-09-06 Thread Yury Selivanov
On Wed, Sep 6, 2017 at 5:00 PM, Greg Ewing  wrote:
> Nathaniel Smith wrote:
>>
>> Literally the first motivating example at the beginning of the PEP
>> ('def fractions ...') involves only generators, not coroutines, and
>> only works correctly if generators get special handling. (In fact, I'd
>> be curious to see how Greg's {push,pop}_local_storage could handle
>> this case.)
>
>
> I've given a decimal-based example, but it was a bit
> scattered. Here's a summary and application to the
> fractions example.
>
> I'm going to assume that the decimal module has been
> modified to keep the current context in a context var,
> and that getcontext() and setcontext() access that
> context var.
>
> The decimal.localcontext context manager is also
> redefined as:
>
>     class localcontext():
>
>         def __enter__(self):
>             push_local_context()
>             ctx = getcontext().copy()
>             setcontext(ctx)
>             return ctx
>
>         def __exit__(self, *exc):
>             pop_local_context()

1. So essentially this means that we will have one "local context" per
context manager storing one value.

2. If somebody makes a mistake and calls "push_local_context" without
a corresponding "pop_local_context" -- you will get unbounded
growth of LCs (happens in Koos' proposal too, btw).

3. Users will need to know way more to correctly use the mechanism.

So far, neither you nor Koos has given us a realistic example which
illustrates why we should suffer the implications of (1), (2), and
(3).

Yury


Re: [Python-Dev] PEP 550 v4

2017-09-06 Thread Yury Selivanov
On Wed, Sep 6, 2017 at 4:27 PM, Greg Ewing  wrote:
> Ivan Levkivskyi wrote:
>>
>> Normal generators fall out from this "scheme", and it looks like their
>> behavior is determined by the fact that coroutines are implemented as
>> generators.
>
>
> This is what I disagree with. Generators don't implement
> coroutines, they implement *parts* of coroutines.
>
> We want "task local storage" that behaves analogously
> to thread local storage. But PEP 550 as it stands doesn't
> give us that; it gives something more like "function
> local storage" for certain kinds of function.

The PEP gives you a Task Local Storage, where Task is:

1. your single-threaded code
2. a generator
3. an async task

If you correctly use context managers, PEP 550 works intuitively and
similar to how one would think that threading.local() should work.

The only example you (and Koos) can come up with is this:

def generator():
    set_decimal_context()
    yield

next(generator())
# decimal context is not set

# or
yield from generator()
# decimal context is still not set

I consider that the above is a feature.

Yury


Re: [Python-Dev] PEP 550 v4

2017-09-06 Thread Greg Ewing

Nathaniel Smith wrote:

> The implementation strategy changed radically between v1
> and v2 because of considerations around generator (not coroutine)
> semantics. I'm not sure what more it can do to dispel these feelings
> :-).


I can't say the changes have dispelled any feelings on my part.

The implementation suggested in the PEP seems very complicated
and messy. There are garbage collection issues, which it
proposes using weak references to mitigate. There is also
apparently some issue with long chains building up and
having to be periodically collapsed. None of this inspires
confidence that we have the basic design right.

My approach wouldn't have any of those problems. The
implementation would be a lot simpler.

--
Greg


Re: [Python-Dev] PEP 550 v4

2017-09-06 Thread Greg Ewing

Nathaniel Smith wrote:

> Literally the first motivating example at the beginning of the PEP
> ('def fractions ...') involves only generators, not coroutines, and
> only works correctly if generators get special handling. (In fact, I'd
> be curious to see how Greg's {push,pop}_local_storage could handle
> this case.)


I've given a decimal-based example, but it was a bit
scattered. Here's a summary and application to the
fractions example.

I'm going to assume that the decimal module has been
modified to keep the current context in a context var,
and that getcontext() and setcontext() access that
context var.

The decimal.localcontext context manager is also
redefined as:

    class localcontext():

        def __enter__(self):
            push_local_context()
            ctx = getcontext().copy()
            setcontext(ctx)
            return ctx

        def __exit__(self, *exc):
            pop_local_context()

Now we can write the fractions generator as:

    def fractions(precision, x, y):
        with decimal.localcontext() as ctx:
            ctx.prec = precision
            yield Decimal(x) / Decimal(y)
            yield Decimal(x) / Decimal(y ** 2)

You may notice that this is exactly the same as
what you would write today for the same task...

--
Greg


Re: [Python-Dev] PEP 553: Built-in debug()

2017-09-06 Thread Fernando Perez
If I may suggest a small API tweak, I think it would be useful if 
breakpoint() accepted an optional header argument. In IPython, the 
equivalent for non-postmortem debugging is IPython.embed, which can be 
given a header. This is useful to provide the user with some 
information about perhaps where the breakpoint is coming from, relevant 
data they might want to look at, etc:


```
from IPython import embed

def f(x=10):
   y = x+2
   embed(header="in f")
   return y

x = 20
print(f(x))
embed(header="Top level")
```

I understand in most cases these are meant to be deleted right after 
usage and the author is likely to have a text editor open next to the 
terminal where they're debugging.  But still, I've found myself putting 
multiple such calls in a piece of code to look at what's going on in different 
parts of the execution stack, and it can be handy to have a bit of 
information to get your bearings.


Just a thought...

Best

f




Re: [Python-Dev] PEP 550 v4

2017-09-06 Thread Yury Selivanov
On Wed, Sep 6, 2017 at 1:39 PM, Koos Zevenhoven  wrote:
[..]
> Now this was of course a completely fictional example, and hopefully I
> didn't introduce any bugs or syntax errors other than the ones I described.
> I haven't seen code like this anywhere, but somehow we caught the problems
> anyway.

Thank you for the example, Koos.  FWIW I agree it is a "completely
fictional example".

There are two ways we could easily adapt PEP 550 to follow your semantics:

1. Set gen.__logical_context__ to None when it is being 'yield frommmed'

2. Merge gen.__logical_context__ with the outer LC when the generator
is iterated to the end.

But I still really dislike the examples you and Greg show to us.  They
are not typical or real-world examples, they are showcases of ways to
abuse contexts.

I still think that giving Python programmers one strong rule: "context
mutation is always isolated in generators" makes it easier to reason
about the EC and write maintainable code.

Yury


Re: [Python-Dev] To reduce Python "application" startup time

2017-09-06 Thread Barry Warsaw
On Sep 6, 2017, at 00:42, INADA Naoki  wrote:

> Additionally, faster startup time (and a smaller memory footprint) is good
> even for Web applications.
> For example, CGI is still a comfortable tool sometimes.
> Another example is GAE/Python.
> 
> Anyway, I think researching the import trees of popular libraries is a good
> starting point for optimizing startup time.
> For example, modules like ast and tokenize are imported more often than I thought.

Improving start up time may indeed help long running processes but start up 
costs will generally be amortized across the lifetime of the process, so it 
isn’t as noticeable.  However, startup time *is* a real issue for command line 
tools.

I’m not sure however whether burying imports inside functions (as a kind of 
poor man’s lazy import) is ultimately going to be satisfying.  First, it’s not 
natural, it generally violates coding standards (e.g. PEP 8), and can make 
linters complain.  Second, I think you’ll end up chasing imports all over the 
stdlib and third party modules in any sufficiently complicated application.  
Third, I’m not sure that the gains you’ll get won’t just be overwhelmed by lots 
of other things going on, such as pkg_resources entry point processing, pth 
file processing, site.py effects, command line processing libraries such as 
click, and implicitly added distribution exception hooks (e.g. Ubuntu’s apport).

Many of these can’t be blamed on Python itself, but all can contribute 
significantly to Python’s apparent start up time.  It’s definitely worth 
investigating the details of Python import, and a few of us at the core sprint 
have looked at those numbers and thrown around ideas for improvement, but we’ll 
need to look at the effects up and down the stack to improve the start up 
performance for the average Python application.

Cheers,
-Barry





Re: [Python-Dev] PEP 550 v4

2017-09-06 Thread Greg Ewing

Ivan Levkivskyi wrote:
> Normal generators fall out from this "scheme", and it looks like their
> behavior is determined by the fact that coroutines are implemented as
> generators.


This is what I disagree with. Generators don't implement
coroutines, they implement *parts* of coroutines.

We want "task local storage" that behaves analogously
to thread local storage. But PEP 550 as it stands doesn't
give us that; it gives something more like "function
local storage" for certain kinds of function.

--
Greg


Re: [Python-Dev] PEP 548: More Flexible Loop Control

2017-09-06 Thread R. David Murray
On Wed, 06 Sep 2017 09:43:53 -0700, Guido van Rossum  wrote:
> I'm actually not in favor of this. It's another way to do the same thing.
> Sorry to rain on your dream!

So it goes :)  I learned things by going through the process, so it
wasn't wasted time for me even if (or because) I made several mistakes.
Sorry for wasting anyone else's time :(

--David


Re: [Python-Dev] Compiling without multithreading support -- still useful?

2017-09-06 Thread Victor Stinner
2017-09-06 22:19 GMT+02:00 Berker Peksağ :
> Do we still have buildbots for testing the --without-threads option?

We had such a buildbot once, but it's gone. I just removed its unused
class from the buildbot configuration:
https://github.com/python/buildmaster-config/commit/091f52aa05a8977966796ba3ef4b8257bef1c0e9

Victor


Re: [Python-Dev] PEP 550 v4

2017-09-06 Thread Guido van Rossum
On Wed, Sep 6, 2017 at 1:39 PM, Koos Zevenhoven  wrote:

> On Wed, Sep 6, 2017 at 8:16 PM, Guido van Rossum  wrote:
>
>> On Wed, Sep 6, 2017 at 8:07 AM, Koos Zevenhoven 
>> wrote:
>>
>>> I think yield from should have the same semantics as iterating over the
>>> generator with next/send, and PEP 555 has no issues with this.
>>>
>>
>> I think the onus is on you and Greg to show a realistic example that
>> shows why this is necessary.
>>
>>
> ​Well, regarding this part, it's just that things like
>
>     for obj in gen:
>         yield obj
>
> often get modernized into
>
>     yield from gen
>

I know that that's the pattern, but everybody just shows the same foo/bar
example.


> And realistic examples of that include pretty much any normal use of yield
> from.
>

There aren't actually any "normal" uses of yield from. The vast majority of
uses of yield from are in coroutines written using yield from.


>
>
> So far all the argumentation about this has been of the form "if you have
>> code that currently does this (example using foo) and you refactor it in
>> using yield from (example using bar), and if you were relying on context
>> propagation back out of calls, then it should still propagate out."
>>
>>
> ​So here's a realistic example, with the semantics of PEP 550 applied to a
> decimal.setcontext() kind of thing, but it could be anything using
> var.set(value):
>
> def process_data_buffers(buffers):
>     setcontext(default_context)
>     for buf in buffers:
>         for data in buf:
>             if data.tag == "NEW_PRECISION":
>                 setcontext(context_based_on(data))
>             else:
>                 yield compute(data)
>
>
> Code smells? Yes, but maybe you often see much worse things, so let's say
> it's fine.
>
> ​But then, if you refactor it into a subgenerator like this:
>
> def process_data_buffer(buffer):
>     for data in buf:
>         if data.tag == "NEW_PRECISION":
>             setcontext(context_based_on(data))
>         else:
>             yield compute(data)
>
> def process_data_buffers(buffers):
>     setcontext(default_context)
>     for buf in buffers:
>         yield from buf
>
>
> Now, if setcontext uses PEP 550 semantics, the refactoring broke the code,
> because a generator introduces a scope barrier by adding a LogicalContext on
> the stack, and setcontext is only local to the process_data_buffer
> subroutine. But the programmer is puzzled, because with regular functions
> it had worked just fine in a similar situation before they learned about
> generators:
>
>
> def process_data_buffer(buffer, output):
>     for data in buf:
>         if data.tag == "precision change":
>             setcontext(context_based_on(data))
>         else:
>             output.append(compute(data))
>
> def process_data_buffers(buffers):
>     output = []
>     setcontext(default_context)
>     for buf in buffers:
>         process_data_buffer(buf, output)
>
> ​In fact, this code had another problem, namely that the context state is
> leaked out of process_data_buffers, because PEP 550 leaks context state
> out of functions, but not out of generators. But we can easily imagine that
> the unit tests for process_data_buffers *do* pass.
>
> But let's look at a user of the functionality:
>
> def get_total():
>     return sum(process_data_buffers(get_buffers()))
>
> setcontext(somecontext)
> value = get_total() * compute_factor()
>
>
> Now the code is broken, because setcontext(somecontext) has no effect,
> because get_total() leaks out another context. Not to mention that our data
> buffer source now has control over the behavior of compute_factor(). But if
> one is lucky, the last line was written as
>
> value = compute_factor() * get_total()
>
>
> And hooray, the code works!
>
> (Except for perhaps the code that is run after this.)
>
>
> Now this was of course a completely fictional example, and hopefully I
> didn't introduce any bugs or syntax errors other than the ones I described.
> I haven't seen code like this anywhere, but somehow we caught the problems
> anyway.
>

Yeah, so my claim is that this is simply a non-problem, and you've pretty much just
proved that by failing to come up with pointers to actual code that would
suffer from this. Clearly you're not aware of any such code.

-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] PEP 553: Built-in debug()

2017-09-06 Thread Barry Warsaw
On Sep 6, 2017, at 14:59, Terry Reedy  wrote:
> 
> Currently, the debugger is started in response to a menu selection in the IDLE 
> process while the python process is idle.  One reason for the 'idle' 
> requirement is that when code is exec-uting, the loop that reads 
> commands, executes them, and sends responses is blocked on the exec call. The 
> IDLE process sets up its debugger window, its ends of the rpc channels, and 
> commands to python process to set up Idb and the other ends of the channels.  
> The challenge would be to initiate setup from the server process and deal 
> with the blocked loop.

Would the environment variable idea in the latest version of the PEP help you 
here?

Cheers,
-Barry





Re: [Python-Dev] PEP 553: Built-in debug()

2017-09-06 Thread Barry Warsaw
On Sep 6, 2017, at 10:14, Fabio Zadrozny  wrote:
> 
> I think it's a nice idea.

Great!

> Related to the name, on the windows c++ there's "DebugBreak":  
> https://msdn.microsoft.com/en-us/library/windows/desktop/ms679297(v=vs.85).aspx,
>  which I think is a better name (so, it'd be debug_break for Python -- I 
> think it's better than plain breakpoint(), and wouldn't clash as debug()).

It’s important to understand that a new built-in is technically never going to 
clash with existing code, regardless of what it’s called.  Given Python’s name 
resolution rules, if your code already uses the same name, your binding will 
simply shadow the built-in.  That’s one big reason why the PEP proposed a 
built-in rather than, say, a keyword.

That said, while I like the more succinct `debug()` name, Guido prefers 
`breakpoint()` and that works fine for me.

> I think I could change the hook on a custom sitecustomize (there's already 
> one in place in PyDev) so that the debug_break() would actually read some env 
> var to do that work (and provide some utility for users to pre-setup it when 
> not launching from inside the IDE).

I had meant to add an open issue about the idea of adding an environment 
variable, such as $PYTHONBREAKPOINTHOOK which could be set to the callable to 
bind to sys.breakpointhook().  I’ve now added that to PEP 553, and I think that 
handles the use case you outline above.
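For what it's worth, a sitecustomize.py could honor such a variable with
something like the sketch below.  The variable name follows the email above,
and the "module:function" convention is an assumption made purely for
illustration, not part of the PEP:

```
# sitecustomize.py (sketch)
import importlib
import os
import sys

def _install_breakpoint_hook():
    target = os.environ.get("PYTHONBREAKPOINTHOOK")  # e.g. "pydevd:settrace"
    if not target:
        return
    modname, _, funcname = target.partition(":")
    module = importlib.import_module(modname)
    sys.breakpointhook = getattr(module, funcname)

_install_breakpoint_hook()
```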

> Still, there may be other settings that the user needs to pass to settrace() 
> when doing a remote debug session -- i.e.: things such as the host, port to 
> connect, etc -- see: 
> https://github.com/fabioz/PyDev.Debugger/blob/master/pydevd.py#L1121, so, 
> maybe the debug_break() method should accept keyword arguments to pass along 
> to support other backends?

Possibly, and there’s an open issue about that, but I’m skeptical about that 
for your use case.  Will a user setting a breakpoint know what to pass there?

Cheers,
-Barry






Re: [Python-Dev] PEP 549: Instance Properties (aka: module properties)

2017-09-06 Thread Ronald Oussoren

> On 6 Sep 2017, at 00:03, Larry Hastings  wrote:
> 
> 
> 
> I've written a PEP proposing a language change:
> https://www.python.org/dev/peps/pep-0549/ 
> 
> The TL;DR summary: add support for property objects to modules.  I've already 
> posted a prototype.
> 
> 
> How's that sound?

To be honest this sounds like a fairly crude hack. Updating the __class__ of a 
module object feels dirty, but at least you get normal behavior w.r.t. 
properties.

Why is there no mechanism to add new descriptors that can work in this context? 

BTW. The interaction with import is interesting… Module properties only work as 
naive users expect when accessing them as attributes of the module object, in 
particular importing the name using “from module import prop” would only call 
the property getter once and that may not be the intended behavior.
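To make that caveat concrete, here is a minimal self-contained sketch of the
existing __class__-swapping trick (module and attribute names are made up):

```
# mod.py
import sys
import types

class _Mod(types.ModuleType):
    @property
    def prop(self):
        print("getter runs")
        return 42

sys.modules[__name__].__class__ = _Mod

# elsewhere:
#     import mod
#     mod.prop              # getter runs on every attribute access
#     from mod import prop  # getter runs once, at import time; 'prop' is
#                           # then a plain int that never updates again
```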

Ronald



Re: [Python-Dev] PEP 553: Built-in debug()

2017-09-06 Thread Terry Reedy

On 9/6/2017 10:42 AM, Barry Warsaw wrote:


> I don’t think that’s a good idea.  pdb is a thing, and that thing is the 
> standard library debugger.  I don’t think ‘pdb’ should be the term we use to 
> describe a generic Python debugger interface.  That to me is one of the 
> advantages of PEP 553; it separates the act of invoking the debugging from the 
> actual debugger so invoked.


I tried inserting import pdb; pdb.set_trace() into a simple file and 
running it from an IDLE editor.  It works fine, using IDLE's shell as 
the console.  (The only glitch is that (Pdb) q raises bdb.BdbQuit, which 
causes Shell to restart.)  So I expect the proposed breakpoint() to work 
similarly.


IDLE has a gui-based debugger based on its Idb subclass of bdb.Bdb.  I 
took a first look into whether IDLE could set it as the breakpoint() 
handler and concluded maybe, with some work, if some problems are solved.


Currently, the debugger is started in response to a menu selection in the 
IDLE process while the python process is idle.  One reason for the 
'idle' requirement is that when code is exec-uting, the loop that 
reads commands, executes them, and sends responses is blocked on the 
exec call. The IDLE process sets up its debugger window, its ends of the 
rpc channels, and commands to python process to set up Idb and the other 
ends of the channels.  The challenge would be to initiate setup from the 
server process and deal with the blocked loop.


--
Terry Jan Reedy




Re: [Python-Dev] Cherry picker bot deployed in CPython repo

2017-09-06 Thread Mariatta Wijaya
Update:

Older merged PRs not yet backported?
==

A core developer can re-apply the `needs backport ..` label to trigger the
backport. Meaning, remove the existing label, then add it back. If there
was no label and now you want it to be backported, adding the label will
also trigger the backport.

Don't want PR to be backported by a bot?


Close the backport PR made by Miss Islington and make your own backport PR.


Thanks!



Mariatta Wijaya

On Tue, Sep 5, 2017 at 6:10 PM, Mariatta Wijaya 
wrote:

> Hi,
>
> The cherry picker bot has just been deployed to CPython repo, codenamed
> miss-islington.
>
> miss-islington made the very first backport PR for CPython and became a
> first time GitHub contributor: https://github.com/python/cpython/pull/3369
>
>
> GitHub repo: https://github.com/python/miss-islington
>
> What is this?
> ==
>
> As part of our workflow, quite often changes made on the master branch
> need to be backported to the earlier versions. (for example: from master to
> 3.6 and 2.7)
>
> Previously the backport has to be done manually by either a core developer
> or the original PR author.
>
> With the bot, the backport PR is created automatically after the PR has
> been merged. A core developer will need to review the backport PR.
>
> The issue was tracked in https://github.com/python/core-workflow/issues/8
>
> How it works
> ==
>
> 1. If a PR needs to be backported to one of the maintenance branches, a
> core developer should apply the "needs backport to X.Y" label. Do this
> **before** you merge the PR.
>
> 2. Merge the PR
>
> 3. miss-islington will leave a comment on the PR, saying it is working on
> backporting the PR.
>
> 4. If there's no merge conflict, the PR should be created momentarily.
>
> 5. Review the backport PR created by miss-islington and merge it when
> you're ready.
>
> Merge Conflicts / Problems?
> ==
>
> In case of merge conflicts, or if a backport PR was not created within 2
> minutes, it likely failed and you should do the backport manually.
>
> Manual backport can be done using cherry_picker:
> https://pypi.org/project/cherry-picker/
>
> Older merged PRs not yet backported?
> ==
>
> At the moment, those need to be backported manually.
>
> Don't want PR to be backported by a bot?
> 
>
> My recommendation is to apply the "needs backport to X.Y" **after** the PR
> has been merged. The label is still useful to remind ourselves that this PR
> still needs backporting.
>
> Who is Miss Islington?
> =
>
> I found out from Wikipedia that Miss Islington is the name of the witch in
> Monty Python and The Holy Grail.
>
> miss-islington has not signed the CLA!
> =
>
> A core dev can ignore the warning and merge the PR anyway.
>
> Thanks!
>
>
> Mariatta Wijaya
>


Re: [Python-Dev] PEP 549: Instance Properties (aka: module properties)

2017-09-06 Thread Koos Zevenhoven
On Wed, Sep 6, 2017 at 1:52 AM, Nathaniel Smith  wrote:
​[...]​


> import sys, types
>
> class _MyModuleType(types.ModuleType):
>     @property
>     def ...
>
>     @property
>     def ...
>
> sys.modules[__name__].__class__ = _MyModuleType
>
> It's definitely true though that they're not the most obvious lines of
> code :-)
>
>
​It would kind of be in line with the present behavior if you could simply
write something like this in the module:

class __class__(types.ModuleType):
    @property
    def hello(self):
        return "hello"

    def __dir__(self):
        return ["hello"]


assuming it would be equivalent to setting __class__ afterwards.

​--Koos​




-- 
+ Koos Zevenhoven + http://twitter.com/k7hoven +


Re: [Python-Dev] PEP 550 v4

2017-09-06 Thread Koos Zevenhoven
On Wed, Sep 6, 2017 at 8:22 PM, Yury Selivanov 
wrote:
​[...]
​


> PEP 550 treats coroutines and generators as objects that support out
> of order execution.


​Out of order? More like interleaved.​
​​

> PEP 555 still doesn't clearly explain how exactly it is different from
> PEP 550.  Because 555 was posted *after* 550, I think that it's PEP
> 555 that should have that comparison.
>

555 was *posted* as a PEP after 550, yes. And yes, there could be a
comparison, especially now that PEP 550 semantics seem to have converged,
so PEP 555 does not have to adapt the comparison to PEP 550 changes.

-- Koos​


-- 
+ Koos Zevenhoven + http://twitter.com/k7hoven +


Re: [Python-Dev] Consolidate stateful runtime globals

2017-09-06 Thread Glenn Linderman

On 9/6/2017 1:18 PM, Gregory P. Smith wrote:
> I'm not concerned about moving things into a state structure rather
> than wildly scattered globals declared all over the place.  It is good
> code hygiene. It ultimately moves us closer (much more work to be
> done) to being able to actually have multiple independent interpreters
> within the same process (including potentially even of different
> Python versions).
>
> For commonly typed things that get annoying,
>
> #define _Py_grail   _PyRuntime.ceval.holy.grail
>
> within the .c source file that does a lot of grail flinging seems fine
> to me.
>
> -gps


You just need a PEP 550 (or 555) to use instead of C globals.

But why would you ever want multiple Python versions in one process? 
Sounds like a debug headache in the making. Name collisions would abound 
for libraries and functions even if globals were cured!


Re: [Python-Dev] PEP 550 v4

2017-09-06 Thread Koos Zevenhoven
On Wed, Sep 6, 2017 at 8:16 PM, Guido van Rossum  wrote:

> On Wed, Sep 6, 2017 at 8:07 AM, Koos Zevenhoven  wrote:
>
>> I think yield from should have the same semantics as iterating over the
>> generator with next/send, and PEP 555 has no issues with this.
>>
>
> I think the onus is on you and Greg to show a realistic example that shows
> why this is necessary.
>
>
​Well, regarding this part, it's just that things like

    for obj in gen:
        yield obj

often get modernized into

    yield from gen

And realistic examples of that include pretty much any normal use of yield
from.


So far all the argumentation about this has been of the form "if you have
> code that currently does this (example using foo) and you refactor it in
> using yield from (example using bar), and if you were relying on context
> propagation back out of calls, then it should still propagate out."
>
>
​So here's a realistic example, with the semantics of PEP 550 applied to a
decimal.setcontext() kind of thing, but it could be anything using
var.set(value):

def process_data_buffers(buffers):
    setcontext(default_context)
    for buf in buffers:
        for data in buf:
            if data.tag == "NEW_PRECISION":
                setcontext(context_based_on(data))
            else:
                yield compute(data)


Code smells? Yes, but maybe you often see much worse things, so let's say
it's fine.

​But then, if you refactor it into a subgenerator like this:

def process_data_buffer(buffer):
    for data in buf:
        if data.tag == "NEW_PRECISION":
            setcontext(context_based_on(data))
        else:
            yield compute(data)

def process_data_buffers(buffers):
    setcontext(default_context)
    for buf in buffers:
        yield from buf


Now, if setcontext uses PEP 550 semantics, the refactoring broke the code,
because a generator introduces a scope barrier by adding a LogicalContext on
the stack, and setcontext is only local to the process_data_buffer
subroutine. But the programmer is puzzled, because with regular functions
it had worked just fine in a similar situation before they learned about
generators:


def process_data_buffer(buffer, output):
    for data in buf:
        if data.tag == "precision change":
            setcontext(context_based_on(data))
        else:
            output.append(compute(data))

def process_data_buffers(buffers):
    output = []
    setcontext(default_context)
    for buf in buffers:
        process_data_buffer(buf, output)

​In fact, this code had another problem, namely that the context state is
leaked out of process_data_buffers, because PEP 550 leaks context state
out of functions, but not out of generators. But we can easily imagine that
the unit tests for process_data_buffers *do* pass.

But let's look at a user of the functionality:

def get_total():
    return sum(process_data_buffers(get_buffers()))

setcontext(somecontext)
value = get_total() * compute_factor()


Now the code is broken, because setcontext(somecontext) has no effect,
because get_total() leaks out another context. Not to mention that our data
buffer source now has control over the behavior of compute_factor(). But if
one is lucky, the last line was written as

value = compute_factor() * get_total()


And hooray, the code works!

(Except for perhaps the code that is run after this.)


Now this was of course a completely fictional example, and hopefully I
didn't introduce any bugs or syntax errors other than the ones I described.
I haven't seen code like this anywhere, but somehow we caught the problems
anyway.


-- Koos



-- 
+ Koos Zevenhoven + http://twitter.com/k7hoven +


Re: [Python-Dev] Consolidate stateful runtime globals

2017-09-06 Thread Antoine Pitrou
On Wed, 06 Sep 2017 20:18:52 +
"Gregory P. Smith"  wrote:
> 
> For commonly typed things that get annoying,
> 
> #define _Py_grail   _PyRuntime.ceval.holy.grail
> 
> within the .c source file that does a lot of grail flinging seems fine to
> me.

That sounds fine to me too.  Thank you!

Regards

Antoine.


Re: [Python-Dev] PEP 553: Built-in debug()

2017-09-06 Thread Barry Warsaw
On Sep 6, 2017, at 10:19, Guido van Rossum  wrote:
> 
> 99% of the time I use a debugger I use pdb.set_trace(). The pm() stuff is 
> typically useful for debugging small, simple programs only -- complex 
> programs likely hide the exception somewhere (after logging it) so there's 
> nothing for pdb.pm() to look at. I think Barry is wisely focusing on just the 
> ability to quickly and programmatically insert a breakpoint.

Thanks Guido, that’s my thinking exactly.  pdb isn’t going away of course, so 
those less common use cases are still always available.

Cheers,
-Barry





Re: [Python-Dev] Compiling without multithreading support -- still useful?

2017-09-06 Thread Berker Peksağ
On Tue, Sep 5, 2017 at 7:42 PM, Victor Stinner  wrote:
> I'm strongly in favor of dropping this option from Python 3.7. It
> would remove a lot of code!

+1

Do we still have buildbots for testing the --without-threads option?

--Berker


Re: [Python-Dev] Consolidate stateful runtime globals

2017-09-06 Thread Gregory P. Smith
On Wed, Sep 6, 2017 at 10:26 AM Benjamin Peterson 
wrote:

>
> On Wed, Sep 6, 2017, at 10:08, Antoine Pitrou wrote:
> > On Wed, 06 Sep 2017 09:42:29 -0700
> > Benjamin Peterson  wrote:
> > > On Wed, Sep 6, 2017, at 03:14, Antoine Pitrou wrote:
> > > >
> > > > Hello,
> > > >
> > > > I'm a bit concerned about
> > > >
> https://github.com/python/cpython/commit/76d5abc8684bac4f2fc7cccfe2cd940923357351
> > > >
> > > > My main gripe is that makes writing C code more tedious.  Simple C
> > > > global variables such as "_once_registry" are now spelled
> > > > "_PyRuntime.warnings.once_registry".  The most egregious example
> seems
> > > > to be "_PyRuntime.ceval.gil.locked" (used to be simply "gil_locked").
> > > >
> > > > Granted, C is more verbose than Python, but it doesn't have to become
> > > > that verbose.  I don't know about you, but when code becomes annoying
> > > > to type, I tend to try and take shortcuts.
> > >
> > > How often are you actually typing the names of runtime globals, though?
> >
> > Not very often, but if I want to experiment with some low-level
> > implementation details, it is nice to avoid the hassle.
>
> It seems like this could be remediated with some inline functions or
> macros, which would also help safely encapsulate state.
>
> >
> > There's also a readability argument: with very long names, expressions
> > can become less easy to parse.
> >
> > > If you are using a globals, perhaps the typing time will allow you to
> > > fully consider the gravity of the situation.
> >
> > Right, I needed to be reminded of how perilous the use of C globals is.
> > Perhaps I should contact the PSRT the next time I contemplate using a C
> > global.
>
> It's not just you but future readers.
>

I'm not concerned about moving things into a state structure rather than
wildly scattered globals declared all over the place.  It is good code
hygiene. It ultimately moves us closer (much more work to be done) to being
able to actually have multiple independent interpreters within the same
process (including potentially even of different Python versions).

For commonly typed things that get annoying,

#define _Py_grail   _PyRuntime.ceval.holy.grail

within the .c source file that does a lot of grail flinging seems fine to
me.

-gps


Re: [Python-Dev] Compiling without multithreading support -- still useful?

2017-09-06 Thread Gregory P. Smith
My take on platforms without thread support is that they should provide
their own fake/green/virtual threading APIs.  I don't know how practical
that thought actually is for things like web assembly but I'm with Antoine
here.  The maintenance burden for --without-threads builds is a pain I'd
love to avoid.

-G

On Wed, Sep 6, 2017 at 11:49 AM Ethan Smith  wrote:

> Certainly, I understand it can be burdensome. I suppose I can use 3.6
> branch for the initial port, so it shouldn't be an issue.
>
> On Wed, Sep 6, 2017 at 11:13 AM, Antoine Pitrou 
> wrote:
>
>> On Wed, 6 Sep 2017 10:50:11 -0700
>> Ethan Smith  wrote:
>> > I think this is useful as it can make porting easier. I am using it in
>> my
>> > attempts to cross compile CPython to WebAssembly (since WebAssembly in
>> its
>> > MVP does not support threading).
>>
>> The problem is that the burden of maintenance falls on us (core CPython
>> developers), while none of us, and probably 99.99% of our userbase, has
>> any use for the "functionality".
>>
>> Perhaps there's a simpler, cruder way to "support" threads-less
>> platforms.  For example a Python/thread_nothreads.h where
>> PyThread_start_new_thread() would always fail (and with trivial
>> implementations of locks and TLS keys).  But I'm not sure how much it
>> would help those porting attempts.
>>
>
>> Regards
>>
>> Antoine.


Re: [Python-Dev] Compiling without multithreading support -- still useful?

2017-09-06 Thread Ethan Smith
Certainly, I understand it can be burdensome. I suppose I can use 3.6
branch for the initial port, so it shouldn't be an issue.

On Wed, Sep 6, 2017 at 11:13 AM, Antoine Pitrou  wrote:

> On Wed, 6 Sep 2017 10:50:11 -0700
> Ethan Smith  wrote:
> > I think this is useful as it can make porting easier. I am using it in my
> > attempts to cross compile CPython to WebAssembly (since WebAssembly in
> its
> > MVP does not support threading).
>
> The problem is that the burden of maintenance falls on us (core CPython
> developers), while none of us, and probably 99.99% of our userbase, has
> any use for the "functionality".
>
> Perhaps there's a simpler, cruder way to "support" threads-less
> platforms.  For example a Python/thread_nothreads.h where
> PyThread_start_new_thread() would always fail (and with trivial
> implementations of locks and TLS keys).  But I'm not sure how much it
> would help those porting attempts.
>
> Regards
>
> Antoine.


Re: [Python-Dev] Compiling without multithreading support -- still useful?

2017-09-06 Thread Antoine Pitrou
On Wed, 6 Sep 2017 10:50:11 -0700
Ethan Smith  wrote:
> I think this is useful as it can make porting easier. I am using it in my
> attempts to cross compile CPython to WebAssembly (since WebAssembly in its
> MVP does not support threading).

The problem is that the burden of maintenance falls on us (core CPython
developers), while none of us, and probably 99.99% of our userbase, has
any use for the "functionality".

Perhaps there's a simpler, cruder way to "support" threads-less
platforms.  For example a Python/thread_nothreads.h where
PyThread_start_new_thread() would always fail (and with trivial
implementations of locks and TLS keys).  But I'm not sure how much it
would help those porting attempts.

Regards

Antoine.


Re: [Python-Dev] Compiling without multithreading support -- still useful?

2017-09-06 Thread Ethan Smith
I think this is useful as it can make porting easier. I am using it in my
attempts to cross compile CPython to WebAssembly (since WebAssembly in its
MVP does not support threading).

On Wed, Sep 6, 2017 at 10:15 AM, Antoine Pitrou  wrote:

>
> I made an experimental PR to remove support for threads-less builds:
> https://github.com/python/cpython/pull/3385
>
> The net effect is to remove almost 2000 lines of code (including many
> #ifdef sections in C code).
>
> Regards
>
> Antoine.
>
>
> On Tue, 5 Sep 2017 18:36:51 +0200
> Antoine Pitrou  wrote:
> > Hello,
> >
> > It's 2017 and we are still allowing people to compile CPython without
> > threads support.  It adds some complication in several places
> > (including delicate parts of our internal C code) without a clear
> > benefit.  Do people still need this?
> >
> > Regards
> >
> > Antoine.
> >
> >
>
>
>


Re: [Python-Dev] PEP 548: More Flexible Loop Control

2017-09-06 Thread Guido van Rossum
I'm actually not in favor of this. It's another way to do the same thing.
Sorry to rain on your dream!

On Wed, Sep 6, 2017 at 9:34 AM, R. David Murray 
wrote:

> On Wed, 06 Sep 2017 15:05:51 +1000, Chris Angelico 
> wrote:
> > On Wed, Sep 6, 2017 at 10:11 AM, R. David Murray 
> wrote:
> > > I've written a PEP proposing a small enhancement to the Python loop
> > > control statements.  Short version: here's what feels to me like a
> > > Pythonic way to spell "repeat until":
> > >
> > > while:
> > >     <setup code>
> > >     break if <condition>
> > >
> > > The PEP goes into some detail on why this feels like a readability
> > > improvement in the more general case, with examples taken from
> > > the standard library:
> > >
> > >  https://www.python.org/dev/peps/pep-0548/
> >
> > Is "break if" legal in loops that have their own conditions as well,
> > or only in a bare "while:" loop? For instance, is this valid?
> >
> > while not found_the_thing_we_want:
> >     data = sock.read()
> >     break if not data
> >     process(data)
>
> Yes.
>
> > Or this, which uses the condition purely as a descriptor:
> >
> > while "moar socket data":
> > data = sock.read()
> > break if not data
> > process(data)
>
> Yes.
>
> > Also - shouldn't this be being discussed first on python-ideas?
>
> Yep, you are absolutely right.  Someone has told me I also missed
> a related discussion on python-ideas in my searching for prior
> discussions.  (I haven't looked for it yet...)
>
> I'll blame jet lag :)
>
> --David



-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] PEP 550 v4

2017-09-06 Thread Yury Selivanov
On Wed, Sep 6, 2017 at 8:07 AM, Koos Zevenhoven  wrote:
> On Wed, Sep 6, 2017 at 10:07 AM, Greg Ewing 
> wrote:
>>
>> Yury Selivanov wrote:
>>>
>>> Greg, have you seen this new section:
>>>
>>> https://www.python.org/dev/peps/pep-0550/#should-yield-from-leak-context-changes
>>
>>
>> That section seems to be addressing the idea of a generator
>> behaving differently depending on whether you use yield-from
>> on it.
>
>
> Regarding this, I think yield from should have the same semantics as
> iterating over the generator with next/send, and PEP 555 has no issues with
> this.
>
>>
>>
>> I never suggested that, and I'm still not suggesting it.
>>
>>> The bottomline is that it's easier to
>>> reason about context when it's guaranteed that context changes are
>>> always isolated in generators no matter what.
>>
>>
>> I don't see a lot of value in trying to automagically
>> isolate changes to global state *only* in generators.
>>
>> Under PEP 550, if you want to e.g. change the decimal
>> context temporarily in a non-generator function, you're
>> still going to have to protect those changes using a
>> with-statement or something equivalent. I don't see
>> why the same thing shouldn't apply to generators.
>>
>> It seems to me that it will be *more* confusing to give
>> generators this magical ability to avoid with-statements.
>>
>
> Exactly. To state it clearly: PEP 555 does not have this issue.

It would be great if you or Greg could show a couple of real-world
examples showing the "issue" (with the current PEP 550
APIs/semantics).

PEP 550 treats coroutines and generators as objects that support out
of order execution.  OS threads are similar to them in some ways.  I
find it questionable to try to enforce the context management rules we
have for regular functions on generators/coroutines.  I don't really
understand the "refactoring" argument you and Greg are talking about
all the time.

PEP 555 still doesn't clearly explain how exactly it is different from
PEP 550.  Because 555 was posted *after* 550, I think that it's PEP
555 that should have that comparison.

Yury


Re: [Python-Dev] PEP 553: Built-in debug()

2017-09-06 Thread Guido van Rossum
99% of the time I use a debugger I use pdb.set_trace(). The pm() stuff is
typically useful for debugging small, simple programs only -- complex
programs likely hide the exception somewhere (after logging it) so there's
nothing for pdb.pm() to look at. I think Barry is wisely focusing on just
the ability to quickly and programmatically insert a breakpoint.

On Wed, Sep 6, 2017 at 10:00 AM, Nathaniel Smith  wrote:

> On Wed, Sep 6, 2017 at 7:39 AM, Barry Warsaw  wrote:
> >> On Tue, Sep 5, 2017 at 7:58 PM, Nathaniel Smith  wrote:
> >> This would also avoid confusion with IPython's very
> >> useful debug magic:
> >> https://ipython.readthedocs.io/en/stable/interactive/
> magics.html#magic-debug
> >> and which might also be worth stealing for the builtin REPL.
> >> (Personally I use it way more often than set_trace().)
> >
> > Interesting.  I’m not an IPython user.  Do you think its %debug magic
> would benefit from PEP 553?
>
> Not in particular. But if you're working on making debugger entry more
> discoverable/human-friendly, then providing a friendlier alias for the
> pdb.pm() semantics might be useful too?
>
> Actually, if you look at the pdb docs, the 3 ways of entering the
> debugger that merit demonstrations at the top of the manual page are:
>
>   pdb.run("...code...")   # "I want to debug this code"
>   pdb.set_trace()          # "break here"
>   pdb.pm()                 # "wtf just happened?"
>
> The set_trace() name is particularly opaque, but if we're talking
> about adding a friendly debugger abstraction layer then I'd at least
> think about whether to make it cover all three of these.
>
> -n
>
> --
> Nathaniel J. Smith -- https://vorpus.org



-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] Consolidate stateful runtime globals

2017-09-06 Thread Benjamin Peterson


On Wed, Sep 6, 2017, at 10:08, Antoine Pitrou wrote:
> On Wed, 06 Sep 2017 09:42:29 -0700
> Benjamin Peterson  wrote:
> > On Wed, Sep 6, 2017, at 03:14, Antoine Pitrou wrote:
> > > 
> > > Hello,
> > > 
> > > I'm a bit concerned about
> > > https://github.com/python/cpython/commit/76d5abc8684bac4f2fc7cccfe2cd940923357351
> > > 
> > > My main gripe is that it makes writing C code more tedious.  Simple C
> > > global variables such as "_once_registry" are now spelled
> > > "_PyRuntime.warnings.once_registry".  The most egregious example seems
> > > to be "_PyRuntime.ceval.gil.locked" (used to be simply "gil_locked").
> > > 
> > > Granted, C is more verbose than Python, but it doesn't have to become
> > > that verbose.  I don't know about you, but when code becomes annoying
> > > to type, I tend to try and take shortcuts.  
> > 
> > How often are you actually typing the names of runtime globals, though?
> 
> Not very often, but if I want to experiment with some low-level
> implementation details, it is nice to avoid the hassle.

It seems like this could be remediated with some inline functions or
macros, which would also help safely encapsulate state.

> 
> There's also a readability argument: with very long names, expressions
> can become less easy to parse.
> 
> > If you are using a globals, perhaps the typing time will allow you to
> > fully consider the gravity of the situation.
> 
> Right, I needed to be reminded of how perilous the use of C globals is.
> Perhaps I should contact the PSRT the next time I contemplate using a C
> global.

It's not just you but future readers.


Re: [Python-Dev] PEP 550 v4

2017-09-06 Thread Guido van Rossum
On Wed, Sep 6, 2017 at 8:07 AM, Koos Zevenhoven  wrote:

> I think yield from should have the same semantics as iterating over the
> generator with next/send, and PEP 555 has no issues with this.
>

I think the onus is on you and Greg to show a realistic example that shows
why this is necessary.

So far all the argumentation about this has been of the form "if you have
code that currently does this (example using foo) and you refactor it in
using yield from (example using bar), and if you were relying on context
propagation back out of calls, then it should still propagate out."

This feels like a very abstract argument. I have a feeling that context
state propagating out of a call is used relatively rarely -- it  must work
for cases where you refactor something that changes context inline into a
utility function (e.g. decimal.setcontext()), but I just can't think of a
realistic example where coroutines (either of the yield-from variety or of
the async/def form) would be used for such a utility function. A utility
function that sets context state but also makes a network call just sounds
like asking for trouble!
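
To make the refactoring scenario concrete, here is a minimal sketch (the
function names are invented for illustration); the question is whether the
first two variants should keep behaving the same way once such a helper is
called from a generator or coroutine:

    import decimal

    def parse_inline(text):
        # context changed inline; the caller sees prec=6 afterwards
        decimal.getcontext().prec = 6
        return decimal.Decimal(text) / 3

    def set_low_precision():
        # the same change moved into a utility function; with the
        # "propagates back out" expectation the caller still sees it
        decimal.getcontext().prec = 6

    def parse_refactored(text):
        set_low_precision()
        return decimal.Decimal(text) / 3

    def parse_protected(text):
        # keeping the change local with a with-statement, the discipline
        # Greg argues should apply inside generators as well
        with decimal.localcontext() as ctx:
            ctx.prec = 6
            return decimal.Decimal(text) / 3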

-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] PEP 549: Instance Properties (aka: module properties)

2017-09-06 Thread Cody Piersall
On Wed, Sep 6, 2017 at 10:26 AM, Guido van Rossum  wrote:
>
> So we've seen a real use case for __class__ assignment: deprecating things on 
> access. That use case could also be solved if modules natively supported 
> defining __getattr__ (with the same "only used if attribute not found 
> otherwise" semantics as it has on classes), but it couldn't be solved using 
> @property (or at least it would be quite hacky).
>
> Is there a real use case for @property? Otherwise, if we're going to mess 
> with module's getattro, it makes more sense to add __getattr__, which would 
> have made Nathaniel's use case somewhat simpler. (Except for the __dir__ 
> thing -- what else might we need?)
> --
> --Guido van Rossum (python.org/~guido)


I think a more natural way for the __dir__ problem would be to update
module_dir() in moduleobject.c to check if __all__ is defined and then
just return that list if it is defined.  I think that would be a
friendlier default for __dir__ anyway.
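
A rough Python rendering of that proposed default (illustration only; the
actual change would live in moduleobject.c):

    def module_dir(module):
        # honour __all__ when the module defines it, otherwise fall back
        # to the module namespace as today
        if hasattr(module, "__all__"):
            return sorted(module.__all__)
        return sorted(vars(module))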

Cody


Re: [Python-Dev] Compiling without multithreading support -- still useful?

2017-09-06 Thread Antoine Pitrou

I made an experimental PR to remove support for threads-less builds:
https://github.com/python/cpython/pull/3385

The net effect is to remove almost 2000 lines of code (including many
#ifdef sections in C code).

Regards

Antoine.


On Tue, 5 Sep 2017 18:36:51 +0200
Antoine Pitrou  wrote:
> Hello,
> 
> It's 2017 and we are still allowing people to compile CPython without
> threads support.  It adds some complication in several places
> (including delicate parts of our internal C code) without a clear
> benefit.  Do people still need this?
> 
> Regards
> 
> Antoine.
> 
> 





Re: [Python-Dev] PEP 553: Built-in debug()

2017-09-06 Thread Fabio Zadrozny
Hi Barry,

I think it's a nice idea.

Related to the name, on the windows c++ there's "DebugBreak":
https://msdn.microsoft.com/en-us/library/windows/desktop/ms679297(v=vs.85).aspx,
which I think is a better name (so, it'd be debug_break for Python -- I
think it's better than plain breakpoint(), and wouldn't clash the way debug() would).

For the PyDev.Debugger (https://github.com/fabioz/PyDev.Debugger), which is
the one used by PyDev & PyCharm, I think it would also work.

For instance, for adding the debugger in PyDev, there's a template
completion that'll add the debugger to the PYTHONPATH and start the remote
debugger (same as pdb.set_trace()):

i.e.: the 'pydevd' template expands to something as:

import sys;sys.path.append(r'path/to/ide/shipped_debugger/pysrc')
import pydevd;pydevd.settrace()

I think I could change the hook on a custom sitecustomize (there's already
one in place in PyDev) so that the debug_break() would actually read some
env var to do that work (and provide some utility for users to pre-setup it
when not launching from inside the IDE).

Still, there may be other settings that the user needs to pass to
settrace() when doing a remote debug session -- i.e.: things such as the
host, port to connect, etc -- see:
https://github.com/fabioz/PyDev.Debugger/blob/master/pydevd.py#L1121, so,
maybe the debug_break() method should accept keyword arguments to pass
along to support other backends?
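
As a rough sketch of that idea (the environment variable names are made up,
and it assumes the sys-level hook being discussed ends up accepting keyword
arguments to forward to the backend):

    # sitecustomize.py
    import os
    import sys

    def _pydevd_hook(**kwargs):
        host = os.environ.get("PYDEVD_HOST", "localhost")
        port = int(os.environ.get("PYDEVD_PORT", "5678"))
        import pydevd   # imported lazily so normal runs pay nothing
        pydevd.settrace(host=host, port=port, **kwargs)

    if os.environ.get("PYDEVD_ATTACH"):
        sys.breakpointhook = _pydevd_hook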

Cheers,

Fabio

On Wed, Sep 6, 2017 at 1:44 PM, Barry Warsaw  wrote:

> On Sep 6, 2017, at 07:46, Guido van Rossum  wrote:
> >
> > IIRC they indeed insinuate debug() into the builtins. My suggestion is
> also breakpoint().
>
> breakpoint() it is then!  I’ll rename the sys hooks too, but keep the
> naming scheme of the existing sys hooks.
>
> Cheers,
> -Barry
>
>


Re: [Python-Dev] Consolidate stateful runtime globals

2017-09-06 Thread Antoine Pitrou
On Wed, 06 Sep 2017 09:42:29 -0700
Benjamin Peterson  wrote:
> On Wed, Sep 6, 2017, at 03:14, Antoine Pitrou wrote:
> > 
> > Hello,
> > 
> > I'm a bit concerned about
> > https://github.com/python/cpython/commit/76d5abc8684bac4f2fc7cccfe2cd940923357351
> > 
> > My main gripe is that it makes writing C code more tedious.  Simple C
> > global variables such as "_once_registry" are now spelled
> > "_PyRuntime.warnings.once_registry".  The most egregious example seems
> > to be "_PyRuntime.ceval.gil.locked" (used to be simply "gil_locked").
> > 
> > Granted, C is more verbose than Python, but it doesn't have to become
> > that verbose.  I don't know about you, but when code becomes annoying
> > to type, I tend to try and take shortcuts.  
> 
> How often are you actually typing the names of runtime globals, though?

Not very often, but if I want to experiment with some low-level
implementation details, it is nice to avoid the hassle.

There's also a readability argument: with very long names, expressions
can become less easy to parse.

> If you are using a globals, perhaps the typing time will allow you to
> fully consider the gravity of the situation.

Right, I needed to be reminded of how perilous the use of C globals is.
Perhaps I should contact the PSRT the next time I contemplate using a C
global.

Regards

Antoine.


Re: [Python-Dev] PEP 550 v4

2017-09-06 Thread Koos Zevenhoven
On Wed, Sep 6, 2017 at 10:07 AM, Greg Ewing 
wrote:

> Yury Selivanov wrote:
>
>> Greg, have you seen this new section:
>> https://www.python.org/dev/peps/pep-0550/#should-yield-from-
>> leak-context-changes
>>
>
> That section seems to be addressing the idea of a generator
> behaving differently depending on whether you use yield-from
> on it.
>

Regarding this, I think yield from should have the same semantics as
iterating over the generator with next/send, and PEP 555 has no issues with
this.


>
> I never suggested that, and I'm still not suggesting it.
>
> The bottomline is that it's easier to
>> reason about context when it's guaranteed that context changes are
>> always isolated in generators no matter what.
>>
>
> I don't see a lot of value in trying to automagically
> isolate changes to global state *only* in generators.
>
> Under PEP 550, if you want to e.g. change the decimal
> context temporarily in a non-generator function, you're
> still going to have to protect those changes using a
> with-statement or something equivalent. I don't see
> why the same thing shouldn't apply to generators.
>
>
> It seems to me that it will be *more* confusing to give
> generators this magical ability to avoid with-statements.
>
>
>
Exactly. To state it clearly: PEP 555 does not have this issue.


––Koos



-- 
+ Koos Zevenhoven + http://twitter.com/k7hoven +


Re: [Python-Dev] PEP 553: Built-in debug()

2017-09-06 Thread Nathaniel Smith
On Wed, Sep 6, 2017 at 7:39 AM, Barry Warsaw  wrote:
>> On Tue, Sep 5, 2017 at 7:58 PM, Nathaniel Smith  wrote:
>> This would also avoid confusion with IPython's very
>> useful debug magic:
>> 
>> https://ipython.readthedocs.io/en/stable/interactive/magics.html#magic-debug
>> and which might also be worth stealing for the builtin REPL.
>> (Personally I use it way more often than set_trace().)
>
> Interesting.  I’m not an IPython user.  Do you think its %debug magic would 
> benefit from PEP 553?

Not in particular. But if you're working on making debugger entry more
discoverable/human-friendly, then providing a friendlier alias for the
pdb.pm() semantics might be useful too?

Actually, if you look at the pdb docs, the 3 ways of entering the
debugger that merit demonstrations at the top of the manual page are:

  pdb.run("...code...")   # "I want to debug this code"
  pdb.set_trace()          # "break here"
  pdb.pm()                 # "wtf just happened?"

The set_trace() name is particularly opaque, but if we're talking
about adding a friendly debugger abstraction layer then I'd at least
think about whether to make it cover all three of these.
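
For anyone who hasn't used them, the three entry points look roughly like
this in practice (a small sketch, not from the pdb docs themselves):

    import pdb, sys

    def shaky(x):
        return 1 / x

    pdb.run("shaky(2)")     # "I want to debug this code": starts the
                            # debugger before running the statement

    def compute(x):
        pdb.set_trace()     # "break here": stops right at this point
        return shaky(x)

    try:
        shaky(0)
    except ZeroDivisionError:
        pdb.post_mortem(sys.exc_info()[2])  # script-level equivalent of
                                            # pm(), which inspects
                                            # sys.last_traceback after an
                                            # uncaught error at the prompt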

-n

-- 
Nathaniel J. Smith -- https://vorpus.org


Re: [Python-Dev] PEP 549: Instance Properties (aka: module properties)

2017-09-06 Thread Guido van Rossum
On Wed, Sep 6, 2017 at 9:15 AM, Ivan Levkivskyi 
wrote:

>
> On 6 September 2017 at 17:26, Guido van Rossum  wrote:
>
>>
>> Is there a real use case for @property? Otherwise, if we're going to mess
>> with module's getattro, it makes more sense to add __getattr__, which would
>> have made Nathaniel's use case somewhat simpler. (Except for the __dir__
>> thing -- what else might we need?)
>>
>>
> One additional (IMO quite strong) argument in favor of module level
> __getattr__ is that this is already used by PEP 484 for stub files and is
> supported by mypy, see https://github.com/python/mypy/pull/3647
>

So we're looking for a competing PEP here. Shouldn't be long, just
summarize the discussion about use cases and generality here.

-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] Consolidate stateful runtime globals

2017-09-06 Thread Benjamin Peterson


On Wed, Sep 6, 2017, at 03:14, Antoine Pitrou wrote:
> 
> Hello,
> 
> I'm a bit concerned about
> https://github.com/python/cpython/commit/76d5abc8684bac4f2fc7cccfe2cd940923357351
> 
> My main gripe is that it makes writing C code more tedious.  Simple C
> global variables such as "_once_registry" are now spelled
> "_PyRuntime.warnings.once_registry".  The most egregious example seems
> to be "_PyRuntime.ceval.gil.locked" (used to be simply "gil_locked").
> 
> Granted, C is more verbose than Python, but it doesn't have to become
> that verbose.  I don't know about you, but when code becomes annoying
> to type, I tend to try and take shortcuts.

How often are you actually typing the names of runtime globals, though?

If you are using a globals, perhaps the typing time will allow you to
fully consider the gravity of the situation.


Re: [Python-Dev] PEP 553: Built-in debug()

2017-09-06 Thread Barry Warsaw
On Sep 6, 2017, at 07:46, Guido van Rossum  wrote:
> 
> IIRC they indeed insinuate debug() into the builtins. My suggestion is also 
> breakpoint().

breakpoint() it is then!  I’ll rename the sys hooks too, but keep the naming 
scheme of the existing sys hooks.

Cheers,
-Barry





Re: [Python-Dev] PEP 548: More Flexible Loop Control

2017-09-06 Thread R. David Murray
On Wed, 06 Sep 2017 15:05:51 +1000, Chris Angelico  wrote:
> On Wed, Sep 6, 2017 at 10:11 AM, R. David Murray  
> wrote:
> > I've written a PEP proposing a small enhancement to the Python loop
> > control statements.  Short version: here's what feels to me like a
> > Pythonic way to spell "repeat until":
> >
> > while:
> >     <setup code>
> >     break if <condition>
> >
> > The PEP goes into some detail on why this feels like a readability
> > improvement in the more general case, with examples taken from
> > the standard library:
> >
> >  https://www.python.org/dev/peps/pep-0548/
> 
> Is "break if" legal in loops that have their own conditions as well,
> or only in a bare "while:" loop? For instance, is this valid?
> 
> while not found_the_thing_we_want:
>     data = sock.read()
>     break if not data
>     process(data)

Yes.

> Or this, which uses the condition purely as a descriptor:
> 
> while "moar socket data":
> data = sock.read()
> break if not data
> process(data)

Yes.

> Also - shouldn't this be being discussed first on python-ideas?

Yep, you are absolutely right.  Someone has told me I also missed
a related discussion on python-ideas in my searching for prior
discussions.  (I haven't looked for it yet...)

I'll blame jet lag :)

--David


Re: [Python-Dev] PEP 549: Instance Properties (aka: module properties)

2017-09-06 Thread Ivan Levkivskyi
On 6 September 2017 at 17:26, Guido van Rossum  wrote:

>
> Is there a real use case for @property? Otherwise, if we're going to mess
> with module's getattro, it makes more sense to add __getattr__, which would
> have made Nathaniel's use case somewhat simpler. (Except for the __dir__
> thing -- what else might we need?)
>
>
One additional (IMO quite strong) argument in favor of module level
__getattr__ is that this is already used by PEP 484 for stub files and is
supported by mypy, see https://github.com/python/mypy/pull/3647
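
For reference, the stub-file form looks roughly like this; attributes that
are not declared explicitly fall back to the __getattr__ return type:

    # foo.pyi
    from typing import Any

    CONSTANT: int

    def __getattr__(name: str) -> Any: ...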

--
Ivan


Re: [Python-Dev] PEP 549: Instance Properties (aka: module properties)

2017-09-06 Thread Guido van Rossum
So we've seen a real use case for __class__ assignment: deprecating things
on access. That use case could also be solved if modules natively supported
defining __getattr__ (with the same "only used if attribute not found
otherwise" semantics as it has on classes), but it couldn't be solved using
@property (or at least it would be quite hacky).

Is there a real use case for @property? Otherwise, if we're going to mess
with module's getattro, it makes more sense to add __getattr__, which would
have made Nathaniel's use case somewhat simpler. (Except for the __dir__
thing -- what else might we need?)
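
A sketch of what that could look like for the deprecation case (hypothetical
module; nothing in 3.6 supports this yet, it is the proposal itself):

    # mymodule.py
    import warnings

    _MOVED = {"old_name": "new_name"}

    def new_name():
        return 42

    def __getattr__(name):
        # only consulted when normal lookup fails, mirroring classes
        if name in _MOVED:
            warnings.warn(name + " is deprecated, use " + _MOVED[name],
                          DeprecationWarning, stacklevel=2)
            return globals()[_MOVED[name]]
        raise AttributeError("module %r has no attribute %r"
                             % (__name__, name))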

On Tue, Sep 5, 2017 at 3:52 PM, Nathaniel Smith  wrote:

> On Tue, Sep 5, 2017 at 3:03 PM, Larry Hastings  wrote:
> >
> > I've written a PEP proposing a language change:
> >
> > https://www.python.org/dev/peps/pep-0549/
> >
> > The TL;DR summary: add support for property objects to modules.  I've
> > already posted a prototype.
>
> Interesting idea! It's definitely less arcane than the __class__
> assignment support that was added in 3.5. I guess the question is
> whether to add another language feature here, or to provide better
> documentation/helpers for the existing feature.
>
> If anyone's curious what the __class__ trick looks like in practice,
> here's some simple deprecation machinery:
>   https://github.com/njsmith/trio/blob/ee8d909e34a2b28d55b5c6137707e8
> 861eee3234/trio/_deprecate.py#L102-L138
>
> And here's what it looks like in use:
>   https://github.com/njsmith/trio/blob/ee8d909e34a2b28d55b5c6137707e8
> 861eee3234/trio/__init__.py#L91-L115
>
> Advantages of PEP 549:
> - easier to explain and use
>
> Advantages of the __class__ trick:
> - faster (no need to add an extra step to the normal attribute lookup
> fast path); only those who need the feature pay for it
>
> - exposes the full power of Python's class model. Notice that the
> above code overrides __getattr__ but not __dir__, so the attributes
> are accessible via direct lookup but not listed in dir(mod). This is
> on purpose, for two reasons: (a) tab completion shouldn't be
> suggesting deprecated attributes, (b) when I did expose them in
> __dir__, I had trouble with test runners that iterated through
> dir(mod) looking for tests, and ended up spewing tons of spurious
> deprecation warnings. (This is especially bad when you have a policy
> of running your tests with DeprecationWarnings converted to errors.) I
> don't think there's any way to do this with PEP 549.
>
> - already supported in CPython 3.5+ and PyPy3, and with a bit of care
> can be faked on all older CPython releases (including, crucially,
> CPython 2). PEP 549 OTOH AFAICT will only be usable for packages that
> have 3.7 as their minimum supported version.
>
> I don't imagine that I would actually use PEP 549 any time in the
> foreseeable future, due to the inability to override __dir__ and the
> minimum version requirement. If you only need to support CPython 3.5+
> and PyPy3 5.9+, then you can effectively get PEP 549's functionality
> at the cost of 3 lines of code and a block indent:
>
> import sys, types
> class _MyModuleType(types.ModuleType):
>     @property
>     def ...
>
>     @property
>     def ...
> sys.modules[__name__].__class__ = _MyModuleType
>
> It's definitely true though that they're not the most obvious lines of
> code :-)
>
> -n
>
> --
> Nathaniel J. Smith -- https://vorpus.org



-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] To reduce Python "application" startup time

2017-09-06 Thread Chris Barker - NOAA Federal
> Anyway, I think researching import tree of popular library is good
> startline
> about optimizing startup time.
>

I agree -- in this case, you've identified that asyncio is expensive --
good to know.

In the jinja2 case, does it always need asyncio?

PEP 8 aside, I think it often makes sense for expensive optional imports
to be done only if needed. Perhaps a patch to jinja2 is in order.
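
The pattern under discussion is simply deferring the heavy import into the
one code path that needs it, e.g. (the function is made up, the modules are
real):

    def tokenize_source(text):
        # tokenize (and whatever it drags in) is only paid for by callers
        # that actually need it, not at module import time
        import io
        import tokenize
        return list(tokenize.generate_tokens(io.StringIO(text).readline))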

CHB


For example, modules like ast and tokenize are imported more often than I
> thought.
>
> Jinja2 is one of libraries I often use. I'm checking other libraries
> like requests.


> Thanks,
>
> INADA Naoki  >


Re: [Python-Dev] PEP 553: Built-in debug()

2017-09-06 Thread Brian Curtin
On Wed, Sep 6, 2017 at 10:46 AM, Guido van Rossum  wrote:

> IIRC they indeed insinuate debug() into the builtins. My suggestion is
> also breakpoint().
>

I'm also a bigger fan of the `breakpoint` name. `debug` as a name is
already pretty widely used, plus breakpoint is more specific in naming
what's actually going to happen.

sys.breakpoint_hook() and sys.__breakpoint_hook__ seem reasonable
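
A rough sketch of the wiring being proposed, with the default hook falling
back to pdb (exact names are still subject to the bikeshedding above):

    import pdb
    import sys

    def _default_breakpointhook(*args, **kwargs):
        # the real implementation would arrange to stop in the caller's
        # frame rather than inside the hook itself
        pdb.set_trace()

    sys.__breakpointhook__ = _default_breakpointhook
    sys.breakpointhook = _default_breakpointhook

    def breakpoint(*args, **kwargs):    # intended to become a builtin
        return sys.breakpointhook(*args, **kwargs)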


Re: [Python-Dev] PEP 550 v4

2017-09-06 Thread Yury Selivanov
On Wed, Sep 6, 2017 at 12:07 AM, Greg Ewing  wrote:
> Yury Selivanov wrote:
[..]
> I don't see a lot of value in trying to automagically
> isolate changes to global state *only* in generators.
>
> Under PEP 550, if you want to e.g. change the decimal
> context temporarily in a non-generator function, you're
> still going to have to protect those changes using a
> with-statement or something equivalent. I don't see
> why the same thing shouldn't apply to generators.
>
> It seems to me that it will be *more* confusing to give
> generators this magical ability to avoid with-statements.

Greg, just to make sure that we are talking about the same thing,
could you please show an example (using the current PEP 550
API/semantics) of something that in your opinion should work
differently for generators?
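
For reference, the smallest example of the behavior in question looks
something like this (assuming decimal's context were stored in the proposed
execution context):

    import decimal

    def scaled():
        decimal.getcontext().prec = 6   # context change inside a generator
        yield decimal.Decimal(1) / 7

    g = scaled()
    print(next(g))                  # computed with prec=6
    print(decimal.Decimal(1) / 7)   # today the caller also sees prec=6;
                                    # with the proposed isolation the
                                    # generator's change would not leak out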

Yury


Re: [Python-Dev] PEP 553: Built-in debug()

2017-09-06 Thread Guido van Rossum
IIRC they indeed insinuate debug() into the builtins. My suggestion is also
breakpoint().

On Wed, Sep 6, 2017 at 7:39 AM, Barry Warsaw  wrote:

> On Sep 5, 2017, at 20:15, Guido van Rossum  wrote:
> >
> > Yeah, I like the idea, but I don't like the debug() name -- IIRC there's
> a helper named debug() in some codebase I know of that prints its arguments
> under certain circumstances.
> >
> > On Tue, Sep 5, 2017 at 7:58 PM, Nathaniel Smith  wrote:
> >
> > Maybe breakpoint() would be a better description of what set_trace()
> > actually does?
>
> Originally I was thinking of a keyword like ‘break here’, but once (after
> discussion with a few folks at the sprint) I settled on a built-in
> function, I was looking for something concise that most directly reflected
> the intent.  Plus I knew I wanted to mirror the sys.*hooks, so again I
> looked for something short.  debug() was the best I could come up with!
>
> breakpoint() could work, although would the hooks then be
> sys.breakpointhook() and sys.__breakpointhook__?  Too bad we can’t just use
> break() :).
>
> Guido, is that helper you’re thinking of implemented as a built-in?  If
> you have a suggestion, it would short-circuit the inevitable bikeshedding.
>
> > This would also avoid confusion with IPython's very
> > useful debug magic:
> > https://ipython.readthedocs.io/en/stable/interactive/
> magics.html#magic-debug
> > and which might also be worth stealing for the builtin REPL.
> > (Personally I use it way more often than set_trace().)
>
> Interesting.  I’m not an IPython user.  Do you think its %debug magic
> would benefit from PEP 553?
>
> (Aside: improving/expanding the stdlib debugger is something else I’d like
> to work on, but this is completely independent of PEP 553.)
>
> Cheers,
> -Barry
>
>
>


-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] PEP 550 v4

2017-09-06 Thread Yury Selivanov
On Wed, Sep 6, 2017 at 5:58 AM, Ivan Levkivskyi  wrote:
> On 6 September 2017 at 11:13, Nathaniel Smith  wrote:
>>
>> On Wed, Sep 6, 2017 at 1:49 AM, Ivan Levkivskyi 
>> wrote:
>> > Normal generators fall out from this "scheme", and it looks like their
>> > behavior is determined by the fact that coroutines are implemented as
> > generators. What I think might help is to add a few more motivational
>> > examples
>> > to the design section of the PEP.
>>
>> Literally the first motivating example at the beginning of the PEP
>> ('def fractions ...') involves only generators, not coroutines.
>
>
> And this is probably what confuses people. As I understand, the
> tasks/coroutines are among the primary motivations for the PEP,
> but they appear somewhere later. There are four potential ways to see the
> PEP:
>
> 1) Generators are broken*, and therefore coroutines are broken, we want to
> fix the latter therefore we fix the former.
> 2) Coroutines are broken, we want to fix them and let's also fix generators
> while we are at it.
> 3) Generators are broken, we want to fix them and let's also fix coroutines
> while we are at it.
> 4) Generators and coroutines are broken in similar ways, let us fix them as
> consistently as we can.

Ivan, generators and coroutines are fundamentally different objects
(even though they share the implementation). The only common thing is
that they both allow for out of order execution of code in the same OS
thread.  The PEP explains the semantical difference of EC in the
High-level Specification in detail, literally on the 2nd page of the
PEP.  I don't see any benefit in reshuffling the rationale section.

Yury


Re: [Python-Dev] [RFC] Removing pure Python implementation of OrderedDict

2017-09-06 Thread Guido van Rossum
On Wed, Sep 6, 2017 at 3:49 AM, INADA Naoki  wrote:

> OK, I stop worring about thread safety and other implementation
> detail behavior on edge cases.
>

That sounds like overreacting. At the risk of stating the obvious:

I want the data structure itself to maintain its integrity even under edge
cases of multi-threading. However that's different from promising that all
(or certain) operations will be atomic in all cases. (Plus, for dicts and
sets and other data structures that compare items, you can't have atomicity
if those comparisons call back into Python -- so it's extra important that
even when that happens the data structure's *integrity* is still
maintained.)

IMO, in edge cases, it's okay to not do an operation, do it twice, get the
wrong answer, or raise an exception, as long as the data structure's
internal constraints are still satisfied (e.g. no dangling pointers, no
inconsistent indexes, that sort of stuff.)

-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] PEP 553: Built-in debug()

2017-09-06 Thread Barry Warsaw
On Sep 5, 2017, at 19:31, Giampaolo Rodola'  wrote:
> 
> True. Personally I have a shortcut in my IDE (Sublime) so when I type "pdb" 
> -> TAB it auto completes it.
> 
> Somehow I think debug() would make this a bit harder as it's more likely a 
> "debug()" line will pass unnoticed.
> For this reason I would give a -1 to this proposal.

I think if your linter or editor can take note of the pdb idiom, it can also do 
so for the debug() built-in.

> Personally I would find it helpful if there was a hook to choose the default 
> debugger to use on "pdb.set_trace()" via .pdbrc or PYTHONDEBUGGER environment 
> variable or something.
> I tried (unsuccessfully) to run ipdb on "pdb.set_trace()", I gave up and 
> ended up emulating auto completion and commands history with this:
> https://github.com/giampaolo/sysconf/blob/master/home/.pdbrc.py

I don’t think that’s a good idea.  pdb is a thing, and that thing is the 
standard library debugger.  I don’t think ‘pdb’ should be the term we use to 
describe a generic Python debugger interface.  That to me is one of the 
advantages of PEP 553; it separates the act of invoking the debugging from the 
actual debugger so invoked.

Cheers,
-Barry





Re: [Python-Dev] To reduce Python "application" startup time

2017-09-06 Thread Wes Turner
On Wednesday, September 6, 2017, INADA Naoki  wrote:

> > How significant is application startup time to something that uses
> > Jinja2? Are there short-lived programs that use it? Python startup
> > time matters enormously to command-line tools like Mercurial, but far
> > less to something that's designed to start up and then keep running
> > (eg a web app, which is where Jinja is most used).
>
> Since Jinja2 is a very popular template engine, it is used by CLI tools
> like ansible.


SaltStack uses Jinja2. It really is a good idea to regularly restart the
minion processes.

Celery can also cycle through worker processes, IIRC.


>
> Additionally, faster startup time (and a smaller memory footprint) is good
> even for Web applications.
> For example, CGI is still a comfortable tool sometimes.
> Another example is GAE/Python.


Short-lived processes are sometimes preferable from a security standpoint.
Python is currently less viable for CGI use than other scripting languages
due to startup time.

Resource leaks (e.g. memory, file handles, database references; valgrind)
do not last w/ short-lived CGI processes. If there's ASLR, that's also
harder.

Scale up operations with e.g. IaaS platforms like Kubernetes and PaaS
platforms like AppScale all incur Python startup time on a regular basis.


>
> Anyway, I think researching import tree of popular library is good
> startline
> about optimizing startup time.
> For example, modules like ast and tokenize are imported more often than I
> thought.
>
> Jinja2 is one of libraries I often use. I'm checking other libraries
> like requests.


> Thanks,
>
> INADA Naoki  >


Re: [Python-Dev] PEP 553: Built-in debug()

2017-09-06 Thread Barry Warsaw
On Sep 5, 2017, at 20:15, Guido van Rossum  wrote:
> 
> Yeah, I like the idea, but I don't like the debug() name -- IIRC there's a 
> helper named debug() in some codebase I know of that prints its arguments 
> under certain circumstances.
> 
> On Tue, Sep 5, 2017 at 7:58 PM, Nathaniel Smith  wrote:
> 
> Maybe breakpoint() would be a better description of what set_trace()
> actually does?

Originally I was thinking of a keyword like ‘break here’, but once (after 
discussion with a few folks at the sprint) I settled on a built-in function, I 
was looking for something concise that most directly reflected the intent.  
Plus I knew I wanted to mirror the sys.*hooks, so again I looked for something 
short.  debug() was the best I could come up with!

breakpoint() could work, although would the hooks then be sys.breakpointhook() 
and sys.__breakpointhook__?  Too bad we can’t just use break() :).

Guido, is that helper you’re thinking of implemented as a built-in?  If you 
have a suggestion, it would short-circuit the inevitable bikeshedding.

> This would also avoid confusion with IPython's very
> useful debug magic:
> 
> https://ipython.readthedocs.io/en/stable/interactive/magics.html#magic-debug
> and which might also be worth stealing for the builtin REPL.
> (Personally I use it way more often than set_trace().)

Interesting.  I’m not an IPython user.  Do you think its %debug magic would 
benefit from PEP 553?

(Aside: improving/expanding the stdlib debugger is something else I’d like to 
work on, but this is completely independent of PEP 553.)

Cheers,
-Barry






Re: [Python-Dev] PEP 550 v4

2017-09-06 Thread Ivan Levkivskyi
On 6 September 2017 at 11:13, Nathaniel Smith  wrote:

> On Wed, Sep 6, 2017 at 1:49 AM, Ivan Levkivskyi 
> wrote:
> > Normal generators fall out from this "scheme", and it looks like their
> > behavior is determined by the fact that coroutines are implemented as
> > generators. What I think might help is to add a few more motivational
> examples
> > to the design section of the PEP.
>
> Literally the first motivating example at the beginning of the PEP
> ('def fractions ...') involves only generators, not coroutines.
>

And this is probably what confuses people. As I understand, the
tasks/coroutines are among the primary motivations for the PEP,
but they appear somewhere later. There are four potential ways to see the
PEP:

1) Generators are broken*, and therefore coroutines are broken, we want to
fix the latter therefore we fix the former.
2) Coroutines are broken, we want to fix them and let's also fix generators
while we are at it.
3) Generators are broken, we want to fix them and let's also fix coroutines
while we are at it.
4) Generators and coroutines are broken in similar ways, let us fix them as
consistently as we can.

As I understand the PEP is based on option (4), please correct me if I am
wrong.
Therefore maybe this should be stated more directly,
and maybe then we should show _in addition_ a task example in the rationale,
show how it is broken,
and explain that they are broken in slightly different ways (since the expected
semantics is a bit different).

--
Ivan

* here and below, by broken I mean "broken" (sometimes behaving in a
non-intuitive way, and lacking some functionality we would like them to have)


Re: [Python-Dev] [RFC] Removing pure Python implementation of OrderedDict

2017-09-06 Thread alex goretoy
https://www.youtube.com/watch?v=pNe1wWeaHOU=PLYI8318YYdkCsZ7dsYV01n6TZhXA6Wf9i=1


On Wed, Sep 6, 2017 at 5:49 PM, INADA Naoki  wrote:
> OK, I stop worring about thread safety and other implementation
> detail behavior on edge cases.
>
> Thanks,
>
> INADA Naoki  
>
>
> On Wed, Sep 6, 2017 at 7:40 PM, Paul Moore  wrote:
>> On 6 September 2017 at 11:09, Antoine Pitrou  wrote:
>>> On Wed, 6 Sep 2017 11:26:52 +0900
>>> INADA Naoki  wrote:

 Like that, should we say "atomic & threadsafe __setitem__ for simple
 keys is an implementation detail of CPython and PyPy.  We recommend
 using a mutex when using OrderedDict from multiple threads."?
>>>
>>> I think you may be overstating the importance of making OrderedDict
>>> thread-safe.  It's quite rare to be able to rely on the thread safety
>>> of a single structure, since most often your state is more complex than
>>> that and you have to use a lock anyway.
>>>
>>> The status quo is that only experts rely on the thread-safety of list
>>> and dict, and they should be ready to reconsider if some day the
>>> guarantees change.
>>
>> Agreed. I wasn't even aware that list and dict were guaranteed
>> threadsafe (in the language reference). And even if they are, there's
>> going to be a lot of provisos that mean in practice you need to know
>> what you're doing to rely on that. Simple example:
>>
>> mydict[the_value()] += 1
>>
>> isn't thread safe, no matter how thread safe dictionaries are.
>>
>> I don't have a strong opinion on making OrderedDict "guaranteed thread
>> safe" according to the language definition. But I'm pretty certain
>> that whether we do or not will make very little practical difference
>> to the vast majority of Python users.
>>
>> Paul


Re: [Python-Dev] PEP 548: More Flexible Loop Control

2017-09-06 Thread alex goretoy
https://www.youtube.com/watch?v=pNe1wWeaHOU=PLYI8318YYdkCsZ7dsYV01n6TZhXA6Wf9i=1
Thank you,
-Alex Goretoy


On Wed, Sep 6, 2017 at 7:05 PM, Ben Hoyt  wrote:
> I think Serhiy's response is excellent and agree with it. My gut reaction is
> "this looks like Perl" (and not in a good way), but more specifically it
> makes the control flow almost invisible. So I'm definitely -1 on this.
>
> The current while True ... break idiom is not pretty, but it's also very
> clear and obvious, and the control flow is immediately visible.
>
> One thing I do like about the proposal is the bare "while:", and I think
> that syntax is obvious and might be worth keeping (separately to the rest of
> the proposal). A bare "while:" (meaning "while True:") seems somehow less
> insulting to the intelligence, is still clear, and has precedent in Go's
> bare "for { ... }".
>
> -Ben
>
> On Wed, Sep 6, 2017 at 2:42 AM, Serhiy Storchaka 
> wrote:
>>
>> 06.09.17 03:11, R. David Murray wrote:
>>>
>>> I've written a PEP proposing a small enhancement to the Python loop
>>> control statements.  Short version: here's what feels to me like a
>>> Pythonic way to spell "repeat until":
>>>
>>>  while:
>>>      <setup code>
>>>      break if <condition>
>>>
>>> The PEP goes into some detail on why this feels like a readability
>>> improvement in the more general case, with examples taken from
>>> the standard library:
>>>
>>>   https://www.python.org/dev/peps/pep-0548/
>>>
>>> Unlike Larry, I don't have a prototype, and in fact if this idea
>>> meets with approval I'll be looking for a volunteer to do the actual
>>> implementation.
>>>
>>> --David
>>>
>>> PS: this came to me in a dream on Sunday night, and the more I explored
>>> the idea the better I liked it.  I have no idea what I was dreaming about
>>> that resulted in this being the thing left in my mind when I woke up :)
>>
>>
>> This looks more like the Perl way than the Python way.
>>
>> "There should be one-- and preferably only one --obvious way to do it."
>>
>> This proposal saves just one line of code. But it makes the "break" and
>> "continue" statements less visually distinguishable, as is seen in your
>> example from uuid.py.
>>
>> If we allow "break if" and "continue if", why not allow "return if"? Or an
>> arbitrary statement before "if"? This adds PHP-like inconsistency to the
>> language.
>>
>> The current idiom is easier to modify. If you add a second condition,
>> it may be that you need to execute different code before "break".
>>
>> while True:
>>     <setup code>
>>     if not <condition1>:
>>         <cleanup code 1>
>>         break
>>     <loop body>
>>     if not <condition2>:
>>         <cleanup code 2>
>>         break
>>
>> It is easy to modify the code with the current syntax, but the code with
>> the proposed syntax would have to be totally rewritten.
>>
>> Your example from sre_parse.py demonstrates this. Please note that the
>> pre-exit code is slightly different. In the first case self.error() is
>> called with one argument, and in the second case it is called with two
>> arguments. Your rewritten code is not equivalent to the existing one.
>>
>> Another concern is that the current code is highly optimized for common
>> cases. Your rewritten code checks the condition "c is None" twice in the
>> common case.
>>
>> I'm -1 on this proposal.
>>
>>


Re: [Python-Dev] PEP 548: More Flexible Loop Control

2017-09-06 Thread Ben Hoyt
I think Serhiy's response is excellent and agree with it. My gut reaction
is "this looks like Perl" (and not in a good way), but more specifically it
makes the control flow almost invisible. So I'm definitely -1 on this.

The current while True ... break idiom is not pretty, but it's also very
clear and obvious, and the control flow is immediately visible.

One thing I do like about the proposal is the bare "while:", and I think
that syntax is obvious and might be worth keeping (separately to the rest
of the proposal). A bare "while:" (meaning "while True:") seems somehow
less insulting to the intelligence, is still clear, and has precedent in
Go's bare "for { ... }".

-Ben

On Wed, Sep 6, 2017 at 2:42 AM, Serhiy Storchaka 
wrote:

> 06.09.17 03:11, R. David Murray wrote:
>
>> I've written a PEP proposing a small enhancement to the Python loop
>> control statements.  Short version: here's what feels to me like a
>> Pythonic way to spell "repeat until":
>>
>>  while:
>>      <setup code>
>>      break if <condition>
>>
>> The PEP goes into some detail on why this feels like a readability
>> improvement in the more general case, with examples taken from
>> the standard library:
>>
>>   https://www.python.org/dev/peps/pep-0548/
>>
>> Unlike Larry, I don't have a prototype, and in fact if this idea
>> meets with approval I'll be looking for a volunteer to do the actual
>> implementation.
>>
>> --David
>>
>> PS: this came to me in a dream on Sunday night, and the more I explored
>> the idea the better I liked it.  I have no idea what I was dreaming about
>> that resulted in this being the thing left in my mind when I woke up :)
>>
>
> This looks more like the Perl way than the Python way.
>
> "There should be one-- and preferably only one --obvious way to do it."
>
> This proposal saves just one line of code. But it makes the "break" and
> "continue" statements less visually distinguishable, as is seen in your
> example from uuid.py.
>
> If we allow "break if" and "continue if", why not allow "return if"? Or an
> arbitrary statement before "if"? This adds PHP-like inconsistency to the
> language.
>
> The current idiom is easier to modify. If you add a second condition,
> it may be that you need to execute different code before "break".
>
> while True:
>     <setup code>
>     if not <condition1>:
>         <cleanup code 1>
>         break
>     <loop body>
>     if not <condition2>:
>         <cleanup code 2>
>         break
>
> It is easy to modify the code with the current syntax, but the code with
> the proposed syntax would have to be totally rewritten.
>
> Your example from sre_parse.py demonstrates this. Please note that the
> pre-exit code is slightly different. In the first case self.error() is
> called with one argument, and in the second case it is called with two
> arguments. Your rewritten code is not equivalent to the existing one.
>
> Another concern is that the current code is highly optimized for common
> cases. Your rewritten code checks the condition "c is None" twice in the
> common case.
>
> I'm -1 on this proposal.
>
>


Re: [Python-Dev] [RFC] Removing pure Python implementation of OrderedDict

2017-09-06 Thread INADA Naoki
OK, I stop worring about thread safety and other implementation
detail behavior on edge cases.

Thanks,

INADA Naoki  


On Wed, Sep 6, 2017 at 7:40 PM, Paul Moore  wrote:
> On 6 September 2017 at 11:09, Antoine Pitrou  wrote:
>> On Wed, 6 Sep 2017 11:26:52 +0900
>> INADA Naoki  wrote:
>>>
>>> Like that, should we say "atomic & threadsafe __setitem__ for simple
>>> keys is an implementation detail of CPython and PyPy.  We recommend
>>> using a mutex when using OrderedDict from multiple threads."?
>>
>> I think you may be overstating the importance of making OrderedDict
>> thread-safe.  It's quite rare to be able to rely on the thread safety
>> of a single structure, since most often your state is more complex than
>> that and you have to use a lock anyway.
>>
>> The status quo is that only experts rely on the thread-safety of list
>> and dict, and they should be ready to reconsider if some day the
>> guarantees change.
>
> Agreed. I wasn't even aware that list and dict were guaranteed
> threadsafe (in the language reference). And even if they are, there's
> going to be a lot of provisos that mean in practice you need to know
> what you're doing to rely on that. Simple example:
>
> mydict[the_value()] += 1
>
> isn't thread safe, no matter how thread safe dictionaries are.
>
> I don't have a strong opinion on making OrderedDict "guaranteed thread
> safe" according to the language definition. But I'm pretty certain
> that whether we do or not will make very little practical difference
> to the vast majority of Python users.
>
> Paul


Re: [Python-Dev] [RFC] Removing pure Python implementation of OrderedDict

2017-09-06 Thread Paul Moore
On 6 September 2017 at 11:09, Antoine Pitrou  wrote:
> On Wed, 6 Sep 2017 11:26:52 +0900
> INADA Naoki  wrote:
>>
>> Like that, should we say "atomic & threadsafe __setitem__ for a simple
>> key is an implementation detail of CPython and PyPy.  We recommend
>> using a mutex when using OrderedDict from multiple threads."?
>
> I think you may be overstating the importance of making OrderedDict
> thread-safe.  It's quite rare to be able to rely on the thread safety
> of a single structure, since most often your state is more complex than
> that and you have to use a lock anyway.
>
> The status quo is that only experts rely on the thread-safety of list
> and dict, and they should be ready to reconsider if some day the
> guarantees change.

Agreed. I wasn't even aware that list and dict were guaranteed
threadsafe (in the language reference). And even if they are, there are
going to be a lot of provisos that mean in practice you need to know
what you're doing to rely on that. Simple example:

mydict[the_value()] += 1

isn't thread safe, no matter how thread safe dictionaries are.
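
To make that point concrete, here is a rough sketch (names invented for
illustration) of why the augmented assignment needs its own lock even if
every individual dict operation is atomic:

    import threading
    from collections import OrderedDict

    counts = OrderedDict(key=0)
    lock = threading.Lock()

    def the_value():
        return "key"

    def bump(n):
        for _ in range(n):
            # Even if __setitem__ is atomic, this line is three steps
            # (read, add, write), so two threads can interleave between
            # them and lose updates unless they hold the lock.
            with lock:
                counts[the_value()] += 1

    threads = [threading.Thread(target=bump, args=(100000,)) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counts["key"])  # 400000 with the lock; can be less without it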

I don't have a strong opinion on making OrderedDict "guaranteed thread
safe" according to the language definition. But I'm pretty certain
that whether we do or not will make very little practical difference
to the vast majority of Python users.

Paul
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] [RFC] Removing pure Python implementation of OrderedDict

2017-09-06 Thread Antoine Pitrou
On Wed, 6 Sep 2017 11:26:52 +0900
INADA Naoki  wrote:
> 
> Like that, should we say "atomic & threadsafe __setitem__ for a simple
> key is an implementation detail of CPython and PyPy.  We recommend
> using a mutex when using OrderedDict from multiple threads."?

I think you may be overstating the importance of making OrderedDict
thread-safe.  It's quite rare to be able to rely on the thread safety
of a single structure, since most often your state is more complex than
that and you have to use a lock anyway.

The status quo is that only experts rely on the thread-safety of list
and dict, and they should be ready to reconsider if some day the
guarantees change.

Regards

Antoine.
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


[Python-Dev] Consolidate stateful runtime globals

2017-09-06 Thread Antoine Pitrou

Hello,

I'm a bit concerned about
https://github.com/python/cpython/commit/76d5abc8684bac4f2fc7cccfe2cd940923357351

My main gripe is that it makes writing C code more tedious.  Simple C
global variables such as "_once_registry" are now spelled
"_PyRuntime.warnings.once_registry".  The most egregious example seems
to be "_PyRuntime.ceval.gil.locked" (used to be simply "gil_locked").

Granted, C is more verbose than Python, but it doesn't have to become
that verbose.  I don't know about you, but when code becomes annoying
to type, I tend to try and take shortcuts.

Regards

Antoine.


___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 550 v4

2017-09-06 Thread Koos Zevenhoven
On Wed, Sep 6, 2017 at 12:13 PM, Nathaniel Smith  wrote:

> On Wed, Sep 6, 2017 at 1:49 AM, Ivan Levkivskyi 
> wrote:
> > Another comment from a bystander's point of view: it looks like the
> > discussions of API design and implementation are a bit entangled here.
> > This is much better in the current version of the PEP, but still there
> > is a _feeling_ that some design decisions are influenced by the
> > implementation strategy.
> >
> > As I currently see it, the "philosophy" at large is like this:
> > there are different levels of coupling between concurrently executing
> > code:
> > * processes: practically not coupled, designed to be long running
> > * threads: more tightly coupled, designed to be less long-lived, context
> > is managed by threading.local, which is not inherited on "forking"
> > * tasks: tightly coupled, designed to be short-lived, context will be
> > managed by PEP 550, context is inherited on "forking"
> >
> > This seems right to me.
> >
> > Normal generators fall out from this "scheme", and it looks like their
> > behavior is determined by the fact that coroutines are implemented as
> > generators. What I think might help is to add a few more motivational
> > examples to the design section of the PEP.
>
> Literally the first motivating example at the beginning of the PEP
> ('def fractions ...') involves only generators, not coroutines, and
> only works correctly if generators get special handling. (In fact, I'd
> be curious to see how Greg's {push,pop}_local_storage could handle
> this case.) The implementation strategy changed radically between v1
> and v2 because of considerations around generator (not coroutine)
> semantics. I'm not sure what more it can do to dispel these feelings
> :-).
>
>
Just to mention that this is now closely related to the discussion on my
proposal on python-ideas. BTW, that proposal is now submitted as PEP 555 on
the peps repo.

––Koos


-- 
+ Koos Zevenhoven + http://twitter.com/k7hoven +
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 550 v4

2017-09-06 Thread Nathaniel Smith
On Wed, Sep 6, 2017 at 1:49 AM, Ivan Levkivskyi  wrote:
> Another comment from a bystander's point of view: it looks like the discussions
> of API design and implementation are a bit entangled here.
> This is much better in the current version of the PEP, but still there is a
> _feeling_ that some design decisions are influenced by the implementation
> strategy.
>
> As I currently see it, the "philosophy" at large is like this:
> there are different levels of coupling between concurrently executing code:
> * processes: practically not coupled, designed to be long running
> * threads: more tightly coupled, designed to be less long-lived, context is
> managed by threading.local, which is not inherited on "forking"
> * tasks: tightly coupled, designed to be short-lived, context will be
> managed by PEP 550, context is inherited on "forking"
>
> This seems right to me.
>
> Normal generators fall out from this "scheme", and it looks like their
> behavior is determined by the fact that coroutines are implemented as
> generators. What I think might help is to add a few more motivational examples
> to the design section of the PEP.

Literally the first motivating example at the beginning of the PEP
('def fractions ...') involves only generators, not coroutines, and
only works correctly if generators get special handling. (In fact, I'd
be curious to see how Greg's {push,pop}_local_storage could handle
this case.) The implementation strategy changed radically between v1
and v2 because of considerations around generator (not coroutine)
semantics. I'm not sure what more it can do to dispel these feelings
:-).
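
For readers without the PEP open, here is a rough sketch in the spirit of
that 'fractions' example (the PEP's actual code differs in detail):
interleaving two generators that naively change the thread-wide decimal
context shows why generators need their own isolation.

    import decimal

    def fractions(precision, x, y):
        # Naively change the thread-wide decimal context inside a
        # generator; PEP 550 would give each generator its own view.
        decimal.getcontext().prec = precision
        yield decimal.Decimal(x) / decimal.Decimal(y)
        yield decimal.Decimal(x) / decimal.Decimal(y ** 2)

    g1 = fractions(2, 1, 3)
    g2 = fractions(6, 2, 3)
    print(next(g1))  # 0.33      -- prec=2, as intended
    print(next(g2))  # 0.666667  -- prec=6, as intended
    print(next(g1))  # 0.111111  -- prec=6 leaked in from g2; 0.11 was intended
    print(next(g2))  # 0.222222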

-n

-- 
Nathaniel J. Smith -- https://vorpus.org
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 550 v4

2017-09-06 Thread Ivan Levkivskyi
Another comment from a bystander's point of view: it looks like the discussions
of API design and implementation are a bit entangled here.
This is much better in the current version of the PEP, but still there is a
_feeling_ that some design decisions are influenced by the implementation
strategy.

As I currently see it, the "philosophy" at large is like this:
there are different levels of coupling between concurrently executing code:
* processes: practically not coupled, designed to be long running
* threads: more tightly coupled, designed to be less long-lived, context is
managed by threading.local, which is not inherited on "forking"
* tasks: tightly coupled, designed to be short-lived, context will be
managed by PEP 550, context is inherited on "forking"

This seems right to me.
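
A tiny made-up illustration of the threading.local point above: a value set
in one thread is simply not visible in a newly started thread.

    import threading

    local = threading.local()
    local.value = "set in the main thread"

    def worker():
        # A new thread starts with a fresh threading.local namespace,
        # so the attribute set in the main thread is not inherited.
        print(getattr(local, "value", "<not set>"))  # prints "<not set>"

    t = threading.Thread(target=worker)
    t.start()
    t.join()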

Normal generators fall out from this "scheme", and it looks like their
behavior is determined by the fact that coroutines are implemented as
generators. What I think might help is to add a few more motivational examples
to the design section of the PEP.

--
Ivan
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 548: More Flexible Loop Control

2017-09-06 Thread Ivan Levkivskyi
On 6 September 2017 at 08:42, Serhiy Storchaka  wrote:

> 06.09.17 03:11, R. David Murray wrote:
>
>> I've written a PEP proposing a small enhancement to the Python loop
>> control statements.  Short version: here's what feels to me like a
>> Pythonic way to spell "repeat until":
>>
>>  while:
>>      <setup code>
>>      break if <condition>
>>
>>
> I'm -1 on this proposal.
>
>
>
I also think this is not worth it. It will save a few lines of code but
introduces some ambiguity and makes the syntax more complex.

--
Ivan
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] To reduce Python "application" startup time

2017-09-06 Thread INADA Naoki
> How significant is application startup time to something that uses
> Jinja2? Are there short-lived programs that use it? Python startup
> time matters enormously to command-line tools like Mercurial, but far
> less to something that's designed to start up and then keep running
> (eg a web app, which is where Jinja is most used).

Since Jinja2 is a very popular template engine, it is used by CLI tools
like Ansible.

Additionally, faster startup time (and a smaller memory footprint) is good
even for Web applications.
For example, CGI is still a comfortable tool sometimes.
Another example is GAE/Python.

Anyway, I think researching the import trees of popular libraries is a good
starting point for optimizing startup time.
For example, modules like ast and tokenize are imported more often than I thought.
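
As a rough sketch of that kind of survey (module names here are just
examples, and a dedicated import profiler would be more precise):

    import sys
    import time

    def survey_import(name):
        # Measure wall-clock import time and list the transitive modules
        # the import pulls in.  Rough, but enough for a first look.
        before = set(sys.modules)
        t0 = time.perf_counter()
        __import__(name)
        elapsed = time.perf_counter() - t0
        return elapsed, sorted(set(sys.modules) - before)

    elapsed, added = survey_import("json")   # try "jinja2" if it is installed
    print("%.1f ms, %d new modules" % (elapsed * 1000, len(added)))
    print([m for m in added if m in ("ast", "tokenize", "enum", "functools")])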

Jinja2 is one of the libraries I often use. I'm checking other libraries
like requests.

Thanks,

INADA Naoki  
___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 550 v4

2017-09-06 Thread Greg Ewing

Yury Selivanov wrote:

Greg, have you seen this new section:
https://www.python.org/dev/peps/pep-0550/#should-yield-from-leak-context-changes


That section seems to be addressing the idea of a generator
behaving differently depending on whether you use yield-from
on it.

I never suggested that, and I'm still not suggesting it.


The bottom line is that it's easier to
reason about context when it's guaranteed that context changes are
always isolated in generators no matter what.


I don't see a lot of value in trying to automagically
isolate changes to global state *only* in generators.

Under PEP 550, if you want to e.g. change the decimal
context temporarily in a non-generator function, you're
still going to have to protect those changes using a
with-statement or something equivalent. I don't see
why the same thing shouldn't apply to generators.
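
For concreteness, this is the kind of with-statement protection being
described, using the real decimal.localcontext() API (the helper name is
made up):

    import decimal

    def high_precision_ratio(x, y):
        # Temporarily raise the precision; the previous context is
        # restored when the with-block exits.
        with decimal.localcontext() as ctx:
            ctx.prec = 50
            return decimal.Decimal(x) / decimal.Decimal(y)

    print(high_precision_ratio(1, 7))
    print(decimal.getcontext().prec)  # back to the default (28)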

It seems to me that it will be *more* confusing to give
generators this magical ability to avoid with-statements.


This will have some performance implications and make the API way more
complex.


I can't see how it would have any significant effect on
performance. The implementation would be very similar to
what's currently described in the PEP. You'll have to
elaborate on how you think it would be less efficient.

As for complexity, push_local_context() and pop_local_context()
would be considered low-level primitives that you
wouldn't often use directly. Most of the time they
would be hidden inside context managers.

You could even have a context manager just for applying
them:

    with new_local_context():
        # go nuts with context vars here
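
A minimal sketch of how such a context manager could wrap the proposed
primitives; push_local_context/pop_local_context are hypothetical names
from this thread, not an existing API:

    from contextlib import contextmanager

    def push_local_context():
        ...  # hypothetical primitive from this discussion (not a real API)

    def pop_local_context():
        ...  # hypothetical primitive from this discussion (not a real API)

    @contextmanager
    def new_local_context():
        # Ensure the local context is popped even if the body raises.
        push_local_context()
        try:
            yield
        finally:
            pop_local_context()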


But I'm not convinced yet that real-life code needs the
semantics you want.


And I'm not convinced that it needs as much magic as you want.


If you write code that uses 'with' statements consistently, you will
never even know that context changes are isolated in generators.


But if you write code that uses context managers consistently,
and those context managers know about and handle local
contexts properly, generators don't *need* to isolate their
context automatically.

--
Greg

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] PEP 548: More Flexible Loop Control

2017-09-06 Thread Serhiy Storchaka

06.09.17 03:11, R. David Murray wrote:

I've written a PEP proposing a small enhancement to the Python loop
control statements.  Short version: here's what feels to me like a
Pythonic way to spell "repeat until":

    while:
        <setup code>
        break if <condition>

The PEP goes into some detail on why this feels like a readability
improvement in the more general case, with examples taken from
the standard library:

  https://www.python.org/dev/peps/pep-0548/

Unlike Larry, I don't have a prototype, and in fact if this idea
meets with approval I'll be looking for a volunteer to do the actual
implementation.

--David

PS: this came to me in a dream on Sunday night, and the more I explored
the idea the better I liked it.  I have no idea what I was dreaming about
that resulted in this being the thing left in my mind when I woke up :)


This looks more like the Perl way than the Python way.

"There should be one-- and preferably only one --obvious way to do it."

This proposal saves just one line of code. But it makes the "break"
and "continue" statements less visually distinguishable, as is seen in
your example from uuid.py.


If allow "break if" and "continue if", why not allow "return if"? Or 
arbitrary statement before "if"? This adds PHP-like inconsistency in the 
language.


The current idiom is easier to modify. If you add a second condition,
you may need to execute different code before the "break".


while True:
    <code 1>
    if not <condition 1>:
        break
    <code 2>
    if not <condition 2>:
        break
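
For instance, a concrete, made-up version of this shape, where each exit
condition carries its own pre-break code without restructuring the loop:

    import io

    def read_word(stream):
        chars = []
        while True:
            c = stream.read(1)
            if not c:              # end of input
                chars.append("<eof>")
                break
            if c.isspace():        # stop at the first whitespace
                break
            chars.append(c)
        return "".join(chars)

    print(read_word(io.StringIO("abc def")))  # "abc"
    print(read_word(io.StringIO("xyz")))      # "xyz<eof>"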

It is easy to modify the code with the current syntax, but the code with
the proposed syntax would have to be totally rewritten.


Your example from sre_parse.py demonstrates this. Please note that the
pre-exit code is slightly different. In the first case self.error() is
called with one argument, and in the second case it is called with two
arguments. Your rewritten code is not equivalent to the existing one.


Another concern is that the current code is highly optimized for common
cases. Your rewritten code checks the condition "c is None" twice in the
common case.


I'm -1 on this proposal.

___
Python-Dev mailing list
Python-Dev@python.org
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com