Re: [Python-Dev] Update on PEP 523 and adding a co_extra field to code objects

2016-08-30 Thread Maciej Fijalkowski
On Tue, Aug 30, 2016 at 3:00 AM, Brett Cannon  wrote:
>
>
> On Mon, Aug 29, 2016, 17:06 Terry Reedy  wrote:
>>
>> On 8/29/2016 5:38 PM, Brett Cannon wrote:
>>
>> > who objected to the new field did either for memory ("it adds another
>> > pointer to the struct that won't be typically used"), or for conceptual
>> > reasons ("the code object is immutable and you're proposing a mutable
>> > field"). The latter is addressed by not exposing the field in Python and
>>
>> Am I correct in thinking that you will also not add the new field as an
>> argument to PyCode_New?
>
>
> Correct.
>
>>
>>  > clearly stating that code should never expect the field to be filled.
>>
>> I interpret this as "The only code that should access the field should
>> be code that put something there."
>
>
> Yep, seems like a reasonable rule to follow.
>
> -brett

How do we make sure that multiple tools don't stomp on each other?
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] File system path encoding on Windows

2016-08-30 Thread Victor Stinner
Le 30 août 2016 03:10, "Nick Coghlan"  a écrit :
> However, this view is also why I don't agree with being aggressive in
> making this behaviour the default on Windows - I think we should make
> it readily available as a provisional feature through a single
> cross-platform command line switch and environment setting (e.g. "-X
> utf8" and "PYTHONASSUMEUTF8") so folks that need it can readily opt in
> to it,

I'm sorry, but I should have started from this point.

Modifying the default and adding an option are completely different
things. I like the idea of adding a -X utf8 option on Windows. If it's an
opt-in option, the developer takes responsibility for any possible
backward-incompatible change and plays the Unicode dance when handling
input/output data with other applications.
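A sketch of how such an opt-in could be consulted at startup. Note that the `utf8` key and the `PYTHONASSUMEUTF8` variable are the proposal's hypothetical spellings, not an existing CPython feature; only `sys._xoptions`, the mechanism that surfaces `-X` options, is real:

```python
import os
import sys

def utf8_mode_requested():
    # "-X utf8" would land in sys._xoptions as {'utf8': True};
    # the key name is hypothetical, taken from the proposal above.
    if "utf8" in sys._xoptions:
        return True
    # PYTHONASSUMEUTF8 is likewise the proposed environment spelling.
    return os.environ.get("PYTHONASSUMEUTF8", "0") != "0"

print(utf8_mode_requested())
```

Run as `python -X utf8 script.py`, the first branch would fire; without either switch the function stays False and nothing changes for existing code.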

My long email tries to explain why modifying the default in 3.6 is a
strong NO for me.

> but we can defer making it the default until 3.7 after folks
> have had a full release cycle's worth of experience with it in the
> wild.

If Steve changes his project to add an option but doesn't change the
default, I will help to make it happen before 3.6 and help to implement
the UNIX part. It would be even stronger if the option were "portable",
even if the exact semantics differ between UNIX and Windows.

If the default doesn't change, I don't think that a PEP is required.

Later, when we have enough feedback, we will be able to decide to drop
the option (if it turned out to be a very bad idea because of very bad
feedback), or even make it the default on a platform (Windows).

Victor


Re: [Python-Dev] Supported versions of OpenSSL

2016-08-30 Thread Terry Reedy

On 8/29/2016 10:59 PM, Nick Coghlan wrote:


By contrast (and assuming I understand the situation correctly), the
Windows build is already set up around the assumption that you'll need
to build OpenSSL yourself.


If one installs a minimal svn client and passes -e to Zack's wonderful
build.bat, current versions of the external dependencies are automatically
downloaded and compiled, with the results placed where needed.  So I did
nothing more to have OpenSSL updated to 1.0.2h last June.  That is too
easy to really count as 'building it myself'.


--
Terry Jan Reedy



Re: [Python-Dev] Update on PEP 523 and adding a co_extra field to code objects

2016-08-30 Thread Terry Reedy

On 8/30/2016 4:20 AM, Maciej Fijalkowski wrote:

> On Tue, Aug 30, 2016 at 3:00 AM, Brett Cannon  wrote:
>>
>> On Mon, Aug 29, 2016, 17:06 Terry Reedy  wrote:
>>>
>>> On 8/29/2016 5:38 PM, Brett Cannon wrote:
>>>
>>>> who objected to the new field did either for memory ("it adds another
>>>> pointer to the struct that won't be typically used"), or for conceptual
>>>> reasons ("the code object is immutable and you're proposing a mutable
>>>> field"). The latter is addressed by not exposing the field in Python and
>>>
>>> Am I correct in thinking that you will also not add the new field as an
>>> argument to PyCode_New?
>>
>> Correct.
>>
>>>> clearly stating that code should never expect the field to be filled.
>>>
>>> I interpret this as "The only code that should access the field should
>>> be code that put something there."
>>
>> Yep, seems like a reasonable rule to follow.
>>
>> -brett
>
> How do we make sure that multiple tools don't stomp on each other?

AFAIK, we can't.  The multiple-tool people will have to work that out,
or document incompatibilities between tools.


--
Terry Jan Reedy



Re: [Python-Dev] Update on PEP 523 and adding a co_extra field to code objects

2016-08-30 Thread Christian Heimes
On 2016-08-30 01:14, Brett Cannon wrote:
> So the struct in question can be found at
> https://github.com/python/cpython/blob/2d264235f6e066611b412f7c2e1603866e0f7f1b/Include/code.h#L10
>  .
> The official docs say the fields can be changed at any time, so
> re-arranging them shouldn't break any ABI compatibility promises:
> https://docs.python.org/3/c-api/code.html#c.PyCodeObject . Would
> grouping all the fields of the same type together, sorting them by
> individual field size (i.e. PyObject*, void*, int, unsigned char*), and
> then adding the co_extra field at the end of the grouping of PyObject *
> fields do what you're suggesting?

You don't have to re-sort them all; just move co_firstlineno after
co_flags, so all the int fields are together. Pointers are typically
aligned to a multiple of 64 bits on a 64-bit machine. In its current shape
PyCodeObject is padded with two unused 32-bit areas: 5 * int32 + 32
bits of padding, 9 pointers (64 bits each), 1 * int32 + another 32
bits of padding, then 3 pointers. When you move co_firstlineno, you fill
in the gap.
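The padding effect is easy to demonstrate from Python with ctypes (the field names below are just illustrative stand-ins for the PyCodeObject members, not the real struct):

```python
import ctypes

class Scattered(ctypes.Structure):
    # an int on each side of a pointer: two 4-byte padding holes appear
    # on a typical 64-bit ABI
    _fields_ = [("co_flags", ctypes.c_int),
                ("co_consts", ctypes.c_void_p),
                ("co_firstlineno", ctypes.c_int)]

class Grouped(ctypes.Structure):
    # the two ints packed together share one 8-byte slot before the pointer
    _fields_ = [("co_flags", ctypes.c_int),
                ("co_firstlineno", ctypes.c_int),
                ("co_consts", ctypes.c_void_p)]

print(ctypes.sizeof(Scattered), ctypes.sizeof(Grouped))  # e.g. 24 vs 16 on 64-bit
```

Grouping the ints recovers a whole pointer's worth of space, which is exactly what makes room for co_extra "for free".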

Christian




Re: [Python-Dev] What do we do about bad slicing and possible crashes (issue 27867)

2016-08-30 Thread Serhiy Storchaka

On 28.08.16 01:25, Terry Reedy wrote:

> 0. Do nothing.

The problem is not in pathological __index__. The problem is in
executing Python code and releasing the GIL. In multithreaded production
code, one thread can read a slice while another thread modifies the
collection. In a very, very rare case this causes a crash (or worse, a
corruption of data). We shouldn't leave it as is.
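That window can also be reproduced deterministically in a single thread, by letting a pathological `__index__` play the role of the other thread and mutate the list mid-slice. On interpreters with the fix the indices are clamped against the new length, so the slice below comes back empty instead of reading freed memory:

```python
class Evil:
    # __index__ runs arbitrary Python code in the middle of the
    # slicing operation, standing in for a concurrent writer thread
    def __init__(self, target):
        self.target = target

    def __index__(self):
        del self.target[:]   # shrink the list being sliced
        return 1

data = list(range(10))
result = data[Evil(data):5]  # on patched interpreters: []
print(result)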



> 1. Detect length change and raise.

This would be the simpler solution, but I'm afraid it could break
third-party code that "just works" now. For example, slicing a list
"just works" if the step is 1. It can return something other than what
the author expected if the list grows, but it never crashes, and
existing code can depend on the current behavior. This solution is not
applicable in maintenance versions.



> 2. Retrieve length after any possible changes and proceed as normal.

This behavior looks the most natural to me, but it needs more work.


> B. Add PySlice_GetIndicesEx2 (or ExEx?), which would receive *collection
> instead of length, so the length could be retrieved after the __index__
> calls.  Change calls. Deprecate PySlice_GetIndicesEx.

This is not always possible. The limit for a slice is not always the
length of the collection (see multidimensional arrays). And how would we
determine the length? Call __len__? It can be overridden in Python,
which causes releasing the GIL, switching to another thread, and
modifying the collection. The original problem returns.



> And what versions should be patched?

Since this is a heisenbug that can cause a crash, I think we should
apply some solution to all versions. But in the development version we
are free to introduce a small incompatibility.

I prefer 2A in maintained versions (maybe including 3.3 and 3.4) and 2A
or 1A in 3.6.





Re: [Python-Dev] What do we do about bad slicing and possible crashes (issue 27867)

2016-08-30 Thread Dima Tisnek
On 30 August 2016 at 14:13, Serhiy Storchaka  wrote:
>> 1. Detect length change and raise.
>
>
> This would be the simpler solution, but I'm afraid it could break
> third-party code that "just works" now. For example, slicing a list
> "just works" if the step is 1. It can return something other than what
> the author expected if the list grows, but it never crashes, and
> existing code can depend on the current behavior. This solution is not
> applicable in maintenance versions.

Serhiy,

If a dictionary is iterated in thread1 while thread2 changes the
dictionary, thread1 currently raises RuntimeError.

Would cloning the current dict behaviour to slices with overridden
__index__ make sense?


I'd argue that third-party code depending on slicing not raising an
exception is the same as third-party code depending on dict iteration
not raising an exception; if the same container may be concurrently
used in another thread, then the third-party code is actually buggy.
It's OK to break such code.
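The dict behaviour referred to here is easy to demonstrate: growing a dict while iterating over it raises RuntimeError on the next iteration step.

```python
d = {i: i for i in range(8)}
error = None
try:
    for key in d:
        d[len(d)] = None   # grow the dict while iterating over it
except RuntimeError as exc:
    error = exc

print(error)   # dictionary changed size during iteration
```

The proposal is essentially to give concurrent slicing the same "fail loudly" treatment rather than silently returning inconsistent data.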


Just my 2c.


Re: [Python-Dev] What do we do about bad slicing and possible crashes (issue 27867)

2016-08-30 Thread Maciej Fijalkowski
On Tue, Aug 30, 2016 at 2:31 PM, Dima Tisnek  wrote:
> On 30 August 2016 at 14:13, Serhiy Storchaka  wrote:
>>> 1. Detect length change and raise.
>>
>>
>> This would be the simpler solution, but I'm afraid it could break
>> third-party code that "just works" now. For example, slicing a list
>> "just works" if the step is 1. It can return something other than what
>> the author expected if the list grows, but it never crashes, and
>> existing code can depend on the current behavior. This solution is not
>> applicable in maintenance versions.
>
> Serhiy,
>
> If a dictionary is iterated in thread1 while thread2 changes the
> dictionary, thread1 currently raises RuntimeError.
>
> Would cloning the current dict behaviour to slices with overridden
> __index__ make sense?
>
>
> I'd argue that third-party code depending on slicing not raising an
> exception is the same as third-party code depending on dict iteration
> not raising an exception; if the same container may be concurrently
> used in another thread, then the third-party code is actually buggy.
> It's OK to break such code.
>
>
> Just my 2c.

I'm with Dima here.

It's more complicated: if third-party code relies on one thread slicing
while another thread modifies, that imposes implicit atomicity
requirements. Those specific requirements are very hard to maintain
across Python versions and Python implementations. Replicating the
exact CPython behavior (for each CPython version, too!) is a major
nightmare for such specific scenarios.

I propose the following:

* we raise an error if detected

-or-

* we define the exact behavior of modifying the collection in one
thread while another is slicing it (what do you get? what are the
guarantees? does it also apply if the list is resized?)


Re: [Python-Dev] What do we do about bad slicing and possible crashes (issue 27867)

2016-08-30 Thread Stefan Krah
On Tue, Aug 30, 2016 at 03:11:25PM +0200, Maciej Fijalkowski wrote:
> It's more complicated: if third-party code relies on one thread slicing
> while another thread modifies, that imposes implicit atomicity
> requirements. Those specific requirements are very hard to maintain
> across Python versions and Python implementations. Replicating the
> exact CPython behavior (for each CPython version, too!) is a major
> nightmare for such specific scenarios.
> 
> I propose the following:
> 
> * we raise an error if detected

+1


Stefan Krah



Re: [Python-Dev] File system path encoding on Windows

2016-08-30 Thread Steve Dower
As I've said before, on Windows this is a compatibility hack to make code 
written for other platforms work. Making the user opt in is not fair, and does 
not help improve the libraries that need it, because few people will change 
their library to work with a non-default option.

The "developer" I'm concerned about doesn't need to turn this on - bytes work 
just about fine on POSIX (if you don't inspect the contents). It's the random 
user on Windows who installed their library that has the problem. They don't 
know the fix, and may not know how to apply it (e.g. if it's their Jupyter 
notebook that won't find one of their files - no obvious command line options 
here).

Any system that requires communication between two different versions of Python 
must have install instructions (if it's public) or someone who maintains it. It 
won't magically break without an upgrade, and it should not get an upgrade 
without testing. The environment variable is available for this kind of 
scenario, though I'd hope the testing occurs during beta and it gets fixed by 
the time we release.

Changing the locale encoding is something I'm quite happy to back out of. It's 
already easy enough for the developer to specify the encoding when opening a 
file, or to wrap open() and change their own default. But developers cannot 
change the encoding used by the os module, which is why we need to do it.

Cheers,
Steve

Top-posted from my Windows Phone



Re: [Python-Dev] What do we do about bad slicing and possible crashes (issue 27867)

2016-08-30 Thread Serhiy Storchaka

On 30.08.16 15:31, Dima Tisnek wrote:

> On 30 August 2016 at 14:13, Serhiy Storchaka  wrote:
>>> 1. Detect length change and raise.
>>
>> This would be the simpler solution, but I'm afraid it could break
>> third-party code that "just works" now. For example, slicing a list
>> "just works" if the step is 1. It can return something other than what
>> the author expected if the list grows, but it never crashes, and
>> existing code can depend on the current behavior. This solution is not
>> applicable in maintenance versions.
>
> Serhiy,
>
> If a dictionary is iterated in thread1 while thread2 changes the
> dictionary, thread1 currently raises RuntimeError.
>
> Would cloning the current dict behaviour to slices with overridden
> __index__ make sense?

No, these are different things. The problem with dict iteration is
unavoidable, but slicing can be defined consistently (as described by
Terry in option 2). Changing a dict can change the order and invalidates
iterators (except in the trivial cases of just-created or finished
iterators). But slicing can be atomic (and it is atomic de facto in many
cases); we just should call all __index__-es before looking at the
sequence.
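That ordering can be sketched in pure Python: resolve every `__index__` first, and only then read the sequence's length, so index computation never sees a stale length.

```python
import operator

def safe_slice(seq, s):
    # Option 2 in sketch form: run all __index__ calls *before*
    # len(seq) is read, so user code in __index__ cannot invalidate
    # an already-read length.
    start = None if s.start is None else operator.index(s.start)
    stop = None if s.stop is None else operator.index(s.stop)
    step = None if s.step is None else operator.index(s.step)
    # slice.indices() clamps against the length read just now
    start, stop, step = slice(start, stop, step).indices(len(seq))
    return [seq[i] for i in range(start, stop, step)]

print(safe_slice(list(range(10)), slice(1, 5)))   # [1, 2, 3, 4]
```

This mirrors what a collection-aware replacement for PySlice_GetIndicesEx would have to do at the C level.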



> I'd argue that third-party code depending on slicing not raising an
> exception is the same as third-party code depending on dict iteration
> not raising an exception; if the same container may be concurrently
> used in another thread, then the third-party code is actually buggy.
> It's OK to break such code.

We shouldn't break third-party code in maintenance releases. De facto,
slicing is atomic now in many cases, and it is nowhere documented that
it is not atomic. The problem is only with the use of the broken-by-design
PySlice_GetIndicesEx() in CPython. If CPython were implemented without
PySlice_GetIndicesEx() (with more cumbersome code), this issue likely
would never have been raised.





Re: [Python-Dev] File system path encoding on Windows

2016-08-30 Thread Victor Stinner
2016-08-30 16:31 GMT+02:00 Steve Dower :
> It's the
> random user on Windows who installed their library that has the problem.
> They don't know the fix, and may not know how to apply it (e.g. if it's
> their Jupyter notebook that won't find one of their files - no obvious
> command line options here).

There is already a DeprecationWarning. Sadly, it's hidden by default:
you need a debug build of Python or, more simply, to pass the -Wd
command line option.

Maybe we should make this warning (the DeprecationWarning on bytes
paths) visible by default, or add a new warning suggesting enabling
-X utf8 the first time a Python function gets a byte string (like a
filename)?
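Making the hidden warning visible can also be done in-process; the filter call below is the programmatic equivalent of running with `-Wd` (the warning text is a stand-in for what the Windows bytes-path APIs emit):

```python
import warnings

# Equivalent of "python -Wd": turn the default-hidden
# DeprecationWarning back into a visible one.
warnings.simplefilter("default", DeprecationWarning)

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    # stand-in for the warning os.listdir(b'.') and friends raise on Windows
    warnings.warn("The Windows bytes API has been deprecated",
                  DeprecationWarning)

print(caught[0].category.__name__)   # DeprecationWarning
```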


> Any system that requires communication between two different versions of
> Python must have install instructions (if it's public) or someone who
> maintains it. It won't magically break without an upgrade, and it should not
> get an upgrade without testing. The environment variable is available for
> this kind of scenario, though I'd hope the testing occurs during beta and it
> gets fixed by the time we release.

I disagree that breaking backward compatibility is worth it. Most
users don't care about Unicode since their application already "just
works" for their use case.

Having to set an env var to "repair" their app in order to be able to
upgrade Python is not really convenient.

Victor


[Python-Dev] Lib/http/client.py: could it return an OSError with the current response?

2016-08-30 Thread Ivo Bellin Salarin
Hi everybody,

Sorry for bothering you, this is my first post to the python-dev ML.

While using requests to tunnel a request via a proxy requiring user
authentication, I have seen that httplib
(https://hg.python.org/cpython/file/3.5/Lib/http/client.py#l831) raises the
message returned by the proxy, along with its status code (407), without
including the proxy response. That response could be very interesting to the
consumer, since it can contain some useful headers (like the supported
authentication schemes).

Would it be possible to change the http/client.py behavior in order to
raise an exception including the whole response?
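A hypothetical sketch of what such an exception could look like; note that http.client raises a plain OSError today, and both the `TunnelError` name and its `response` attribute are invented here for illustration:

```python
# Hypothetical only: names invented to show the shape of the proposal.
class TunnelError(OSError):
    def __init__(self, message, response):
        super().__init__(message)
        self.response = response   # e.g. the proxy's headers

# what a caller could then do with the 407 response
headers = {"Proxy-Authenticate": 'Negotiate, Basic realm="corp-proxy"'}
try:
    raise TunnelError("Tunnel connection failed: 407 Proxy Authentication "
                      "Required", headers)
except TunnelError as exc:
    # pick an authentication scheme the proxy actually supports
    print(exc.response["Proxy-Authenticate"])
```

Because the class still subclasses OSError, existing `except OSError:` handlers keep working, which addresses the compatibility concern raised below.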

If you don't see any problem with my proposal, how can I propose a pull
request? :-)

Thanks in advance,
Ivo


Re: [Python-Dev] Lib/http/client.py: could it return an OSError with the current response?

2016-08-30 Thread Guido van Rossum
If you can do it without breaking existing code that doesn't expect
the extra information, please go ahead! For things like this it is
typically best to go straight to the bug tracker (bugs.python.org)
rather than asking the list first -- if the issue turns out to be
controversial or mysterious it's always possible to go to the list
later at the advice of the person doing triage in the tracker.

On Tue, Aug 30, 2016 at 6:41 AM, Ivo Bellin Salarin
 wrote:
> Hi everybody,
>
> Sorry for bothering you, this is my first post to the python-dev ML.
>
> While using requests to tunnel a request via a proxy requiring user
> authentication, I have seen that httplib
> (https://hg.python.org/cpython/file/3.5/Lib/http/client.py#l831) raises the
> message returned by the proxy, along with its status code (407) without
> including the proxy response. That response could be very interesting to the
> consumer, since it can contain some useful headers (like the supported
> authentication schemes).
>
> Would it be possible to change the http/client.py behavior in order to raise
> an exception including the whole response?
>
> If you don't see any problem with my proposal, how can I propose a pull
> request? :-)
>
> Thanks in advance,
> Ivo
>



-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] Supported versions of OpenSSL

2016-08-30 Thread Antoine Pitrou
On Sun, 28 Aug 2016 22:40:11 +0200
Christian Heimes  wrote:
> 
> Here is the deal for 2.7 to 3.5:
> 
> 1) All versions older than 0.9.8 are completely out-of-scope and no
> longer supported.
> 
> 2) 0.9.8 is semi-supported. Python will still compile and work with 0.9.8.
> However we do NOT promise that it is secure to run 0.9.8. We also require a
> recent patch level; 0.9.8zc from October 2014 is reasonable
> because it comes with SCSV fallback (CVE-2014-3566).
> 
> 3) 1.0.0 is irrelevant. Users are either stuck on 0.9.8 or are able to
> upgrade to 1.0.1+. Let's not support it.
> 
> 4) 1.0.1 is discouraged but still supported until its EOL.
> 
> 5) 1.0.2 is the recommended version.
> 
> 6) 1.1 support will be added by #26470 soon.
> 
> 7) LibreSSL 2.3 is supported but with a slightly limited feature set.

Can you expand briefly on how "limited" the feature set is?  Does it only
disable some arcane features, so that e.g. asyncio + TLS support works
fine?
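For readers checking which of the tiers above applies to their own build, the OpenSSL (or LibreSSL) the interpreter was actually linked against can be queried from Python:

```python
import ssl

# human-readable version of the linked library
print(ssl.OPENSSL_VERSION)         # e.g. "OpenSSL 1.0.2h  3 May 2016"

# OPENSSL_VERSION_INFO is a 5-tuple; the policy above recommends 1.0.2+
if ssl.OPENSSL_VERSION_INFO[:3] < (1, 0, 2):
    print("running against a discouraged OpenSSL release")
```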

Other than that, it all sounds good to me.

Regards

Antoine.




Re: [Python-Dev] Update on PEP 523 and adding a co_extra field to code objects

2016-08-30 Thread Antoine Pitrou
On Mon, 29 Aug 2016 21:38:19 +
Brett Cannon  wrote:
> For quick background for those that don't remember, part of PEP 523
> proposed adding a co_extra field to code objects along with making the
> frame evaluation function pluggable (
> https://www.python.org/dev/peps/pep-0523/#expanding-pycodeobject). The idea
> was that things like JITs and debuggers could use the field as a scratch
> space of sorts to store data related to the code object. People who
> objected to the new field did either for memory ("it adds another pointer
> to the struct that won't be typically used"), or for conceptual reasons
> ("the code object is immutable and you're proposing a mutable field"). The
> latter is addressed by not exposing the field in Python and clearly stating
> that code should never expect the field to be filled.

Just a question: Maciej mentioned the field may be useful for vmprof.
That's already two potential users (vmprof and Pyjion) for a single
field.  OTOH, the PEP says:

"""It is not recommended that multiple users attempt to use the co_extra
simultaneously. While a dictionary could theoretically be set to the
field and various users could use a key specific to the project, there
is still the issue of key collisions as well as performance degradation
from using a dictionary lookup on every frame evaluation. Users are
expected to do a type check to make sure that the field has not been
previously set by someone else."""

Does it mean that, in the event vmprof comes in and changes the field,
Pyjion will have to recompile the function the next time it comes to
execute it?

And, conversely, if Pyjion compiles a function while vmprof is enabled,
will vmprof lose timing information (or whatever else, because I'm not
sure what vmprof plans to store there) for that code object?

Regards

Antoine.




Re: [Python-Dev] Update on PEP 523 and adding a co_extra field to code objects

2016-08-30 Thread Brett Cannon
On Tue, 30 Aug 2016 at 09:08 Antoine Pitrou  wrote:

> On Mon, 29 Aug 2016 21:38:19 +
> Brett Cannon  wrote:
> > For quick background for those that don't remember, part of PEP 523
> > proposed adding a co_extra field to code objects along with making the
> > frame evaluation function pluggable (
> > https://www.python.org/dev/peps/pep-0523/#expanding-pycodeobject). The idea
> > was that things like JITs and debuggers could use the field as a scratch
> > space of sorts to store data related to the code object. People who
> > objected to the new field did either for memory ("it adds another pointer
> > to the struct that won't be typically used"), or for conceptual reasons
> > ("the code object is immutable and you're proposing a mutable field"). The
> > latter is addressed by not exposing the field in Python and clearly stating
> > that code should never expect the field to be filled.
>
> Just a question: Maciej mentioned the field may be useful for vmprof.
> That's already two potential users (vmprof and Pyjion) for a single
> field.


PTVS has also said they would find it useful for debugging.


>   OTOH, the PEP says:
>
> """It is not recommended that multiple users attempt to use the co_extra
> simultaneously. While a dictionary could theoretically be set to the
> field and various users could use a key specific to the project, there
> is still the issue of key collisions as well as performance degradation
> from using a dictionary lookup on every frame evaluation. Users are
> expected to do a type check to make sure that the field has not been
> previously set by someone else."""
>
> Does it mean that, in the event vmprof comes in and changes the field,
> Pyjion will have to recompile the function the next time it comes to
> execute it?
>

As of right now Pyjion simply doesn't JIT the function.


>
> And, conversely, if Pyjion compiles a function while vmprof is enabled,
> vmprof will lose timing information (or whatever else, because I'm not
> sure what vmprof plans to store there) for that code object?
>

Depends on what vmprof chooses to do. Since the data is designed to be
disposable, it could decide it should always take precedence and overwrite
the data if someone beat it to using the field. Basically, I don't think we
want co_extra1, co_extra2, etc., but we don't want to require a dict either,
as that kills performance. Using a list where users could push on objects
might work, but I have no clue what that would do to perf, as you would
still have to needlessly search the list when only one piece of code uses
the field.

Basically, I don't see a good way to make a general solution for people who
want to use the field simultaneously, so tools that use the field will need
to be clear on how they choose to handle the situation, such as "we use it
if it isn't set" or "we always use it no matter what". This isn't a perfect
solution in all cases, and I think that's just going to have to be the way
it is.


Re: [Python-Dev] Update on PEP 523 and adding a co_extra field to code objects

2016-08-30 Thread Brett Cannon
On Tue, 30 Aug 2016 at 01:20 Maciej Fijalkowski  wrote:

> On Tue, Aug 30, 2016 at 3:00 AM, Brett Cannon  wrote:
> >
> > On Mon, Aug 29, 2016, 17:06 Terry Reedy  wrote:
> >>
> >> On 8/29/2016 5:38 PM, Brett Cannon wrote:
> >>
> >> > who objected to the new field did either for memory ("it adds another
> >> > pointer to the struct that won't be typically used"), or for conceptual
> >> > reasons ("the code object is immutable and you're proposing a mutable
> >> > field"). The latter is addressed by not exposing the field in Python and
> >>
> >> Am I correct in thinking that you will also not add the new field as an
> >> argument to PyCode_New?
> >
> > Correct.
> >
> >> > clearly stating that code should never expect the field to be filled.
> >>
> >> I interpret this as "The only code that should access the field should
> >> be code that put something there."
> >
> > Yep, seems like a reasonable rule to follow.
> >
> > -brett
>
> How do we make sure that multiple tools don't stomp on each other?
>

It will be up to the tool. For Pyjion we just don't use the field if
someone else is using it, so if vmprof chooses to take precedence then it
can. Otherwise we can work out some common practice, like: if the field is
set and it isn't an object you put there, create a list, push on what was
already there, push on what you want to add, and set the field to the
list. That lets us do a type check for the common case of only one object
being set, and in the odd case of the list, things don't fail, as you can
search the list for your object while acknowledging that performance will
suffer (or we use a dict; I don't really care, as long as we don't require
a storage data structure for the field in the single-user case).

My point is that we can figure this out among Pyjion, PTVS, and vmprof if
we are the first users and update the PEP accordingly as guidance.
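The convention described above can be sketched in plain Python (this is illustrative, not the C API: co_extra is not exposed to Python, and the `PyjionData`/`VmprofData` payload types are hypothetical):

```python
class PyjionData:   # hypothetical per-tool payload types
    pass

class VmprofData:
    pass

def attach(extra, payload):
    """Return the new value for the (conceptual) co_extra slot."""
    if extra is None:
        return payload              # fast path: single user owns the slot
    if isinstance(extra, list):
        extra.append(payload)       # slot already shared: push ours on
        return extra
    return [extra, payload]         # second user arrives: promote to a list

def find(extra, cls):
    """The type check, covering both single-user and shared cases."""
    if isinstance(extra, cls):
        return extra                # common case: our object sits there alone
    if isinstance(extra, list):
        for item in extra:          # shared case: linear scan, slower
            if isinstance(item, cls):
                return item
    return None                     # someone else owns the slot

slot = attach(None, PyjionData())
slot = attach(slot, VmprofData())
print(type(find(slot, PyjionData)).__name__)   # PyjionData
```

The single-user fast path needs only one isinstance check per frame evaluation; the list only appears (and only costs a scan) once a second tool actually shows up.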


Re: [Python-Dev] Update on PEP 523 and adding a co_extra field to code objects

2016-08-30 Thread Antoine Pitrou
On Tue, 30 Aug 2016 17:14:31 +
Brett Cannon  wrote:
> 
> Depends on what vmprof chooses to do. Since the data is designed to be
> disposable it could decide it should always take precedence and overwrite
> the data if someone beat it to using the field. Basically I don't think we
> want co_extra1, co_extra2, etc. But we don't want to require a dict either
> as that kills performance. Using a list where users could push on objects
> might work, but I have no clue what that would do to perf as you would have
> to still needlessly search the list when only one piece of code uses the
> field.

Perhaps a list would work indeed.  Realistically, if there are at most
2-3 users of the field at any given time (and most probably only one or
zero), a simple type check (by pointer equality) on each list item may
be sufficient.

Speaking about Numba, we don't have any planned use for the field, so I
can't really give any further suggestion.

Regards

Antoine.


Re: [Python-Dev] Update on PEP 523 and adding a co_extra field to code objects

2016-08-30 Thread Brett Cannon
On Tue, 30 Aug 2016 at 10:32 Antoine Pitrou  wrote:

> On Tue, 30 Aug 2016 17:14:31 +
> Brett Cannon  wrote:
> >
> > Depends on what vmprof chooses to do. Since the data is designed to be
> > disposable it could decide it should always take precedence and overwrite
> > the data if someone beat it to using the field. Basically I don't think
> we
> > want co_extra1, co_extra2, etc. But we don't want to require a dict
> either
> > as that kills performance. Using a list where users could push on objects
> > might work, but I have no clue what that would do to perf as you would
> have
> > to still needlessly search the list when only one piece of code uses the
> > field.
>
> Perhaps a list would work indeed.  Realistically, if there are at most
> 2-3 users of the field at any given time (and most probably only one or
> zero), a simple type check (by pointer equality) on each list item may
> be sufficient.
>

Let's see what Maciej says, but we could standardize on switching the field
to a list when a conflict of usage is detected so the common case in the
frame eval function is checking for your own type, and if that fails then
doing a PyList_CheckExact() and look for your object, otherwise make a list
and move over to that for everyone to use. A little bit more code, but it's
simple code and takes care of conflicts only when it calls for it.
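
That conflict-resolution protocol can be sketched in pure Python. This is
only an illustration of the idea: `PyjionCache`, `OtherTool`, and
`get_tool_extra` are hypothetical names, not any real API, and a dict
stands in for the per-code-object field.

```python
class PyjionCache:
    """Stand-in for one tool's per-code-object data."""

def get_tool_extra(code_extras, code, tool_type):
    """Fetch (or create) tool_type's entry in the simulated co_extra slot."""
    extra = code_extras.get(code)
    if extra is None:
        # Field unused: claim it outright (the fast, common case).
        obj = tool_type()
        code_extras[code] = obj
        return obj
    if isinstance(extra, tool_type):
        # Common case: we already own the field.
        return extra
    if isinstance(extra, list):
        # Field already shared: search for our object by type.
        for item in extra:
            if isinstance(item, tool_type):
                return item
        obj = tool_type()
        extra.append(obj)
        return obj
    # Another tool owns the field: demote it to a shared list.
    obj = tool_type()
    code_extras[code] = [extra, obj]
    return obj
```

Only when two tools actually collide does the list (and its linear search)
come into existence, which is the property being aimed for here.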


Re: [Python-Dev] What do we do about bad slicing and possible crashes (issue 27867)

2016-08-30 Thread Nick Coghlan
On 28 August 2016 at 08:25, Terry Reedy  wrote:
> Slicing can be made to malfunction and even crash with an 'evil' __index__
> method. https://bugs.python.org/issue27867
>
> The crux of the problem is this: PySlice_GetIndicesEx
> receives a slice object and a sequence length.  Calling __index__ on the
> start, stop, and step components can mutate the sequence and invalidate the
> length.  Adjusting the int values of start and stop according to an invalid
> length (in particular, one that is too long) will result in invalid results
> or a crash.
>
> Possible actions -- very briefly.  For more see end of
> https://bugs.python.org/issue27867#msg273801
> 0. Do nothing.
> 1. Detect length change and raise.
> 2. Retrieve length after any possible changes and proceed as normal.
>
> Possible implementation strategies for 1. and 2.
> A. Change all functions that call PySlice_GetIndicesEx.
> B. Add PySlice_GetIndicesEx2 (or ExEx?), which would receive *collection
> instead of length, so the length could be retrieved after the __index__
> calls.  Change calls. Deprecate PySlice_GetIndicesEx.

Given Serhiy's clarification that this is primarily a thread safety
problem, I'm more supportive of the "PySlice_GetIndicesForObject"
approach (since that can call all the __index__ methods first, leaving
the final __len__ call as the only problematic case).

However, given the observation that __len__ can also release the GIL,
I'm not clear on how 2A is supposed to work - a poorly timed thread
switch means there's always going to be a risk of len(obj) returning
outdated information if a container implemented in Python is being
mutated concurrently from different threads, so what can be done
differently in the calling functions that couldn't be done in a new
API that accepted the container reference?
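
For concreteness, the hazard Terry describes is reproducible from pure
Python with a hypothetical `Evil` helper; on an interpreter carrying the
fix, the slice merely comes out empty instead of crashing:

```python
class Evil:
    """An index whose __index__ mutates the sequence being sliced."""

    def __init__(self, seq):
        self.seq = seq

    def __index__(self):
        self.seq.clear()  # invalidates the length the slice code computed
        return 5

data = list(range(10))
# On unpatched interpreters this could read freed memory; with the fix the
# bounds are re-adjusted against the new (zero) length, giving an empty slice.
result = data[0:Evil(data)]
```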

Cheers,
Nick.

-- 
Nick Coghlan   |   [email protected]   |   Brisbane, Australia


Re: [Python-Dev] Update on PEP 523 and adding a co_extra field to code objects

2016-08-30 Thread Antoine Pitrou
On Tue, 30 Aug 2016 17:35:35 +
Brett Cannon  wrote:
> >
> > Perhaps a list would work indeed.  Realistically, if there are at most
> > 2-3 users of the field at any given time (and most probably only one or
> > zero), a simple type check (by pointer equality) on each list item may
> > be sufficient.
> >  
> 
> Let's see what Maciej says, but we could standardize on switching the field
> to a list when a conflict of usage is detected so the common case in the
> frame eval function is checking for your own type, and if that fails then
> doing a PyList_CheckExact() and look for your object, otherwise make a list
> and move over to that for everyone to use. A little bit more code, but it's
> simple code and takes care of conflicts only when it calls for it.

That's a bit obscure and confusing, though (I *think* the weakref module
uses a similar kludge in some place).  If you want to iterate on it you
have to write some bizarre macro to share the loop body between the two
different code-paths (list / non-list), or some equally tedious
function-pointer-based code.

Why not make it always a list?  List objects are reasonably cheap in
memory and access time... (unlike dicts)

Regards

Antoine.


[Python-Dev] Changes to PEP 498 (f-strings)

2016-08-30 Thread Eric V. Smith
After a long discussion on python-ideas (starting at
https://mail.python.org/pipermail/python-ideas/2016-August/041727.html)
I'm proposing the following change to PEP 498: backslashes inside
brackets will be disallowed. The point of this is to disallow convoluted
code like:

>>> d = {'a': 4}
>>> f'{d[\'a\']}'
'4'

In addition, I'll disallow escapes to be used for brackets, as in:

>>> f'\x7bd["a"]}'
'4'

(where chr(0x7b) ==  "{").
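
Under the revised rules, the idiomatic workaround is simply to pick quote
characters that need no escaping; a small sketch, assuming an interpreter
with PEP 498 f-strings (3.6+):

```python
d = {'a': 4}

# f'{d[\'a\']}' becomes a syntax error; alternate the quote types instead:
value = f"{d['a']}"
```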

Because we're so close to 3.6 beta 1, my plan is to:

1. Modify the PEP to reflect these restrictions.
2. Modify the code to prevent _any_ backslashes inside f-strings.

This is a more restrictive change than the PEP will describe, but it's
much easier to implement. After beta 1, and hopefully before beta 2, I
will implement the restrictions as I've outlined above (and as they will
be documented in the PEP). The net effects are:

a. Some code that works in the alphas won't work in beta 1. I'll
document this.
b. All code that's valid in beta 1 will work in beta 2, and some
f-strings that are syntax errors in beta 1 will work in beta 2.

I've discussed this issue with Ned and Guido, who are okay with these
changes.

The python-ideas thread I referenced above has some discussion about
further changes to f-strings. Those proposals are outside the scope of
3.6, but the changes I'm putting forth here will allow for those
additional changes, should we decide to make them. That's a discussion
for 3.7, however.

I'm sending this email out just to notify people of this upcoming
change. I hope this won't generate much discussion. If you feel the need
to discuss this issue further, please use the python-ideas thread (where
some people are already ignoring it!).

Eric.


Re: [Python-Dev] File system path encoding on Windows

2016-08-30 Thread Steve Dower

On 30Aug2016 0806, Victor Stinner wrote:

2016-08-30 16:31 GMT+02:00 Steve Dower :

It's the
random user on Windows who installed their library that has the problem.
They don't know the fix, and may not know how to apply it (e.g. if it's
their Jupyter notebook that won't find one of their files - no obvious
command line options here).


There is already a DeprecationWarning. Sadly, it's hidden by default:
you need a debug build of Python or more simply to pass -Wd command
line option.


It also only appears on Windows, so developers who do the right thing on 
POSIX never find out about it. Your average user isn't going to see it - 
they'll just see the OSError when their file is not found due to the 
lossy encoding.
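
Applications (or curious users) can surface the warning themselves without
a debug build or -Wd. A minimal sketch using the stdlib warnings machinery,
with an illustrative message rather than the real one:

```python
import warnings

# Undo the default "ignore" filter for DeprecationWarning.
warnings.simplefilter('default', DeprecationWarning)

# Show that such a warning is now observable (recorded here for inspection).
with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter('always')
    warnings.warn('bytes paths are deprecated on this platform',
                  DeprecationWarning, stacklevel=2)
```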



Maybe we should make this warning (DeprecationWarning on bytes paths)
visible by default, or add a new warning suggesting to enable -X utf8
the first time a Python function gets a byte string (like a filename)?


The more important thing in my opinion is to make it visible on all 
platforms, regardless of whether bytes paths are suitable or not. But 
this will probably be seen as hostile by the majority of open-source 
Python developers, which is why I'd rather just quietly fix the 
incompatibility.



Any system that requires communication between two different versions of
Python must have install instructions (if it's public) or someone who
maintains it. It won't magically break without an upgrade, and it should not
get an upgrade without testing. The environment variable is available for
this kind of scenario, though I'd hope the testing occurs during beta and it
gets fixed by the time we release.


I disagree that breaking backward compatibility is worth it. Most
users don't care about Unicode since their application already "just
works well" for their use case.


Again, the problem is libraries (code written by someone else that you 
want to reuse), not applications (code written by you to solve your 
business problem in your environment). Code that assumes the default 
encodings are sufficient is already broken in the general case, and 
libraries nearly always need to cover the general case while 
applications do not. The stdlib needs to cover the general case, which 
is why I keep using open(os.listdir(b'.')[-1]) as an example of 
something that should never fail because of encoding issues.


In theory, we should encourage library developers to support Windows 
properly by using str for paths, probably by disabling bytes paths 
everywhere. Alternatively, we make it so that bytes paths work fine 
everywhere and stop telling people that their code is wrong for a 
platform they're already not hugely concerned about.
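
One portable middle ground the stdlib already offers is to normalize at the
boundary with os.fsencode/os.fsdecode, so the listdir-then-open pattern
above cannot fail on encoding; a sketch, not a prescription:

```python
import os

# Round-trip through the filesystem encoding; surrogateescape preserves
# undecodable bytes, so the conversion is lossless on POSIX too.
name = os.fsdecode(os.fsencode('spam.txt'))

# Take the bytes API's results and hand str to the rest of the program.
entries = [os.fsdecode(n) for n in os.listdir(os.fsencode('.'))]
```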



Having to set an env var to "repair" their app to be able to upgrade
Python is not really convenient.


Upgrading Python in an already running system isn't going to be really 
convenient anyway. Going from x.y.z to x.y.z+1 should be convenient, but 
from x.y to x.y+1 deserves testing and possibly code or environment 
changes. I don't understand why changing Python at the same time we 
change the version number is suddenly controversial.


Cheers,
Steve



Re: [Python-Dev] File system path encoding on Windows

2016-08-30 Thread Guido van Rossum
Is this thread something I need to follow closely?

-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] Update on PEP 523 and adding a co_extra field to code objects

2016-08-30 Thread Brett Cannon
On Tue, 30 Aug 2016 at 10:49 Antoine Pitrou  wrote:

> On Tue, 30 Aug 2016 17:35:35 +
> Brett Cannon  wrote:
> > >
> > > Perhaps a list would work indeed.  Realistically, if there are at most
> > > 2-3 users of the field at any given time (and most probably only one or
> > > zero), a simple type check (by pointer equality) on each list item may
> > > be sufficient.
> > >
> >
> > Let's see what Maciej says, but we could standardize on switching the
> field
> > to a list when a conflict of usage is detected so the common case in the
> > frame eval function is checking for your own type, and if that fails then
> > doing a PyList_CheckExact() and look for your object, otherwise make a
> list
> > and move over to that for everyone to use. A little bit more code, but
> it's
> > simple code and takes care of conflicts only when it calls for it.
>
> That's a bit obscure and confusing, though (I *think* the weakref module
> uses a similar kludge in some place).  If you want to iterate on it you
> have to write some bizarre macro to share the loop body between the two
> different code-paths (list / non-list), or some equally tedious
> function-pointer-based code.


I don't quite follow where the complexity you're suggesting comes from. The
frame evaluation function in Pyjion would just do:

  if (co_extra == NULL) {
    // No one is using the field.
    co_extra = pyjion_cache = PyPyjion_New();
  }
  else if (!is_pyjion_object(co_extra)) {
    // Someone other than us is using the field.
    if (PyList_CheckExact(co_extra)) {
      // Field is already a list.
      ... look for object ...
      if (ob_found != NULL) {
        // We're in the list.
        pyjion_cache = ob_found;
      }
      else {
        // Not in the list, so add ourselves.
        pyjion_cache = PyPyjion_New();
        PyList_Append(co_extra, pyjion_cache);
      }
    }
    else {
      // Someone else is in the field, and it's not a list (yet).
      other_ob = co_extra;
      co_extra = PyList_New(0);
      PyList_Append(co_extra, other_ob);
      pyjion_cache = PyPyjion_New();
      PyList_Append(co_extra, pyjion_cache);
    }
  }
  else {
    // We're in the field.
    pyjion_cache = co_extra;
  }


>
> Why not make it always a list?  List objects are reasonably cheap in
> memory and access time... (unlike dicts)
>

Because I would prefer to avoid any form of unnecessary performance
overhead for the common case.


Re: [Python-Dev] File system path encoding on Windows

2016-08-30 Thread Nick Coghlan
On 31 August 2016 at 01:06, Victor Stinner  wrote:
> 2016-08-30 16:31 GMT+02:00 Steve Dower :
>> Any system that requires communication between two different versions of
>> Python must have install instructions (if it's public) or someone who
>> maintains it. It won't magically break without an upgrade, and it should not
>> get an upgrade without testing. The environment variable is available for
>> this kind of scenario, though I'd hope the testing occurs during beta and it
>> gets fixed by the time we release.
>
> I disagree that breaking backward compatibility is worth it. Most
> users don't care about Unicode since their application already "just
> works well" for their use case.
>
> Having to set an env var to "repair" their app to be able to upgrade
> Python is not really convenient.

This seems to be the crux of the disagreement: our perceptions of the
relative risks to native Windows Python applications that currently
work properly on Python 3.5 vs the potential compatibility benefits to
primarily *nix applications that currently *don't* work on Windows
under Python 3.5.

If I'm understanding Steve's position correctly, his view is that
native Python applications that are working well on Windows under
Python 3.5 *must already be using strings to interact with the OS*.
This means that they will be unaffected by the proposed changes, as
the proposed changes only impact attempts to pass bytes to the OS, not
attempts to pass strings.

In uncontrolled environments, using bytes to interact with the OS on
Windows just *plain doesn't work properly* under the current model, so
the proposed change is a matter of changing precisely how those
applications fail, rather than moving them from a previously working
state to a newly broken state.

For the proposed default behaviour change to cause problems then,
there must be large bodies of software that exist in sufficiently
controlled environments that they can get bytes-on-WIndows to work in
the first place by carefully managing the code page settings, but then
*also* permit uncontrolled upgrades to Python 3.6 without first
learning that they need to add a new environment variable setting to
preserve the Python 3.5 (and earlier) bytes handling behaviour.
Steve's assertion is that this intersection of "managed code page
settings" and "unmanaged Python upgrades" results in the null set.

A new opt-in config option eliminates any risk of breaking anything,
but means Steve will have to wait until 3.7 to try out the idea of
having more *nix centric software "just work" on Windows. In the grand
scheme of things, I still think it's worth taking that additional
time, especially if things are designed so that embedding applications
can easily flip the default behaviour.

Yes, there will be environments on Windows where the command line
option won't help, just as there are environments on Linux where it
won't help. I think that's OK, as we can use the 3.6 cycle to thrash
out the details of the new behaviour in the environments where it
*does* help (e.g. developers running their test suites on Windows
systems), get people used to the idea that the behaviour of binary
paths on Windows is going to change, and then actually make the switch
in Python 3.7.

Cheers,
Nick.

-- 
Nick Coghlan   |   [email protected]   |   Brisbane, Australia


Re: [Python-Dev] File system path encoding on Windows

2016-08-30 Thread Steve Dower

On 30Aug2016 1108, Guido van Rossum wrote:

Is this thread something I need to follow closely?


I have PEPs coming, and I'll distil the technical parts of the 
discussion into those.


We may need you to impose an opinion on whether 3.6 is an appropriate 
time for the change or whether it should wait for 3.7. I think the 
technical implications are fairly clear; it's just the risk of 
surprising/upsetting users that is not.


Cheers,
Steve


Re: [Python-Dev] Update on PEP 523 and adding a co_extra field to code objects

2016-08-30 Thread Antoine Pitrou
On Tue, 30 Aug 2016 18:12:01 +
Brett Cannon  wrote:
> > Why not make it always a list?  List objects are reasonably cheap in
> > memory and access time... (unlike dicts)
> 
> Because I would prefer to avoid any form of unnecessary performance
> overhead for the common case.

But the performance overhead of iterating over a 1-element list
is small enough (it's just an array access after a pointer dereference)
that it may not be larger than the overhead of the multiple tests and
conditional branches your example shows.

Regards

Antoine.


Re: [Python-Dev] Update on PEP 523 and adding a co_extra field to code objects

2016-08-30 Thread Serhiy Storchaka

On 30.08.16 21:20, Antoine Pitrou wrote:

On Tue, 30 Aug 2016 18:12:01 +
Brett Cannon  wrote:

Why not make it always a list?  List objects are reasonably cheap in
memory and access time... (unlike dicts)


Because I would prefer to avoid any form of unnecessary performance
overhead for the common case.


But the performance overhead of iterating over a 1-element list
is small enough (it's just an array access after a pointer dereference)
that it may not be larger than the overhead of the multiple tests and
conditional branches your example shows.


Iterating over a tuple is even faster. It needs one pointer dereference 
less.


And for memory efficiency we can use just a raw array of pointers.




Re: [Python-Dev] What do we do about bad slicing and possible crashes (issue 27867)

2016-08-30 Thread Serhiy Storchaka

On 30.08.16 20:42, Nick Coghlan wrote:

On 28 August 2016 at 08:25, Terry Reedy  wrote:

Slicing can be made to malfunction and even crash with an 'evil' __index__
method. https://bugs.python.org/issue27867

The crux of the problem is this: PySlice_GetIndicesEx
receives a slice object and a sequence length.  Calling __index__ on the
start, stop, and step components can mutate the sequence and invalidate the
length.  Adjusting the int values of start and stop according to an invalid
length (in particular, one that is too long) will result in invalid results
or a crash.

Possible actions -- very briefly.  For more see end of
https://bugs.python.org/issue27867#msg273801
0. Do nothing.
1. Detect length change and raise.
2. Retrieve length after any possible changes and proceed as normal.

Possible implementation strategies for 1. and 2.
A. Change all functions that call PySlice_GetIndicesEx.
B. Add PySlice_GetIndicesEx2 (or ExEx?), which would receive *collection
instead of length, so the length could be retrieved after the __index__
calls.  Change calls. Deprecate PySlice_GetIndicesEx.


Given Serhiy's clarification that this is primarily a thread safety
problem, I'm more supportive of the "PySlice_GetIndicesForObject"
approach (since that can call all the __index__ methods first, leaving
the final __len__ call as the only problematic case).


This doesn't work with multidimensional slicing (like 
_testbuffer.ndarray or NumPy arrays).



However, given the observation that __len__ can also release the GIL,
I'm not clear on how 2A is supposed to work - a poorly timed thread
switch means there's always going to be a risk of len(obj) returning
outdated information if a container implemented in Python is being
mutated concurrently from different threads, so what can be done
differently in the calling functions that couldn't be done in a new
API that accepted the container reference?


Current code doesn't use __len__. It uses something like 
PyUnicode_GET_LENGTH().


The solution turned out to be easier than I feared. See my patch to issue27867.




Re: [Python-Dev] Supported versions of OpenSSL

2016-08-30 Thread M.-A. Lemburg
On 29.08.2016 22:16, Christian Heimes wrote:
> On 2016-08-29 21:31, M.-A. Lemburg wrote:
>> On 29.08.2016 18:33, Cory Benfield wrote:
>>>
 On 29 Aug 2016, at 04:09, M.-A. Lemburg  wrote:

 On 28.08.2016 22:40, Christian Heimes wrote:
> ...
> I like to reduce the maintenance burden and list of supported OpenSSL
> versions ASAP. OpenSSL has deprecated 0.9.8 and 1.0.0 last year. 1.0.1
> will reach EOL by the end of this year,
> https://www.openssl.org/policies/releasestrat.html . However OpenSSL
> 0.9.8 is still required for some platforms (OSX).
> ...
> For upcoming 3.6 I would like to limit support to 1.0.2+ and require
> 1.0.2 features for 3.7.
> ...

 Hmm, that last part would mean that Python 3.7 will no longer compile
 on e.g. Ubuntu 14.04 LTS which uses OpenSSL 1.0.1 as default version.
 Since 14.04 LTS is supported until 2019, I think it would be better
 to only start requiring 1.0.2 in Python 3.8.
>>>
>>> Can someone explain to me why this is a use-case we care about?
>>
>> Ubuntu 14.04 is a widely deployed system and newer Python versions
>> should run on such widely deployed systems without having to
>> replace important vendor maintained system libraries such as
>> OpenSSL.
> 
> "Widely deployed" is true for a lot of old operating systems including
> Windows XP.
> 
>> Python 3.7 starts shipping around June 2018 (assuming the 18 month
>> release cycle). Ubuntu 14.04 EOL is April 2019, so in order to
>> be able to use Python 3.7 on such a system, you'd have to upgrade
>> to a more recent LTS version 10 months before the EOL date (with
>> all the associated issues) or lose vendor maintenance support and
>> run with your own copy of OpenSSL.
> 
> Why would you deploy an unsupported Python version on an LTS release? Why
> should compatibility be our concern?
> 
>> Sure, but Ubuntu will continue to support OpenSSL 1.0.1
>> until 2019, backporting important security fixes as necessary and
>> that's what's important.
> 
> I see an easy solution here: either pay or make Canonical backport all
> required features to OpenSSL 1.0.1. 
> 
>> It's unfortunate that Python has to rely on a 3rd party library
>> for security, but we should at least make sure that our users
>> can rely on OS vendor support to keep the lib up to date with
>> security fixes.
> 
> No, it is a good thing that we can rely on 3rd party libraries for
> security. Crypto and security is not our domain. It is incredible hard
> to develop and maintain crypto code. Also my proposal enforces OS
> vendors to supply up to date OpenSSL versions.

That was not my point. It's unfortunate that Python depends on
a library which is inevitably going to need frequent updates,
and which may then mean that Python won't compile on
systems which don't ship with more recent OpenSSL libs, even
if your application doesn't need ssl at all.

>> On 29.08.2016 10:24, Christian Heimes wrote:
>>> By the way I knew that something like this would come up from you.
>>> Thank you that you satisfied my expectation. :p
>>
>> Sure, I want Python to be used on as many systems as possible,
>> both in terms of architecture and OS. The more the better.
>> If we don't have to drop support early, why should we ?
> 
> MAL, I don't like your attitude. It feels like you want me and other
> contributors to waste time on this topic. That is not how this
> discussion is going to end. If *you* want to keep support for outdated
> OpenSSL versions, than it is *your* responsibility and *your* time. You
> cannot and will not put this burden on me.

Please reread what I suggested: to postpone the switch to require
OpenSSL 1.0.2 by one Python release version. And in my reply I then
put this into more context, saying that your schedule will likely
work out.

Postponing this should not introduce more work for anyone; if you'd
like to add support for 1.0.2 features early, this can easily be
done by making such support optional, depending on which OpenSSL
lib Python is compiled against. This takes a few #ifdefs, nothing
more.
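
The runtime analogue of those #ifdefs is feature detection against whatever
OpenSSL the interpreter was actually linked with; a hedged sketch, using
ALPN only as an example of a 1.0.2-era feature:

```python
import ssl

# Gate newer-OpenSSL functionality on the linked version rather than
# assuming a build-time minimum.
if ssl.OPENSSL_VERSION_INFO >= (1, 0, 2):
    alpn_available = ssl.HAS_ALPN  # ALPN requires OpenSSL 1.0.2+
else:
    alpn_available = False
```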

> Python is running out of developers with OpenSSL expertise. It's Alex,
> Antoine, Benjamin, Victor and me. Antoine and me haven't been active for
> a while. Victor and Benjamin are mostly working on other topics. As far
> as I can judge Alex, he rather works on PyCA than CPython stdlib.
> 
> I'm both interested and willing to improve Python's ssl stack, and I'm
> going to do this in my own free time. Yes, I'm working for Red Hat's
> security engineering, but I'm not getting paid to work on Python (except
> for a couple of hours now and then when a bug is relevant for my daily
> work). I will only contribute improvements and fixes on my own terms,
> that means I'm not going to waste my time with outdated versions. In my
> opinion it is more than reasonable to ditch 1.0.1 and earlier.

I want you to consider the consequences of doing this carefully.

Crypto is important to have, but at the same time it's not essential
for everything you do in Python.

Re: [Python-Dev] Update on PEP 523 and adding a co_extra field to code objects

2016-08-30 Thread Brett Cannon
On Tue, 30 Aug 2016 at 11:56 Serhiy Storchaka  wrote:

> On 30.08.16 21:20, Antoine Pitrou wrote:
> > On Tue, 30 Aug 2016 18:12:01 +
> > Brett Cannon  wrote:
> >>> Why not make it always a list?  List objects are reasonably cheap in
> >>> memory and access time... (unlike dicts)
> >>
> >> Because I would prefer to avoid any form of unnecessary performance
> >> overhead for the common case.
> >
> > But the performance overhead of iterating over a 1-element list
> > is small enough (it's just an array access after a pointer dereference)
> > that it may not be larger than the overhead of the multiple tests and
> > conditional branches your example shows.
>
> Iterating over a tuple is even faster. It needs one pointer dereference
> less.
>

I'll talk it over with Dino and see what he thinks.


>
> And for memory efficiency we can use just a raw array of pointers.
>

I would rather not do that as that leads to having to track the end of the
array, special memory cleanup, etc.


Re: [Python-Dev] Update on PEP 523 and adding a co_extra field to code objects

2016-08-30 Thread Chris Angelico
On Wed, Aug 31, 2016 at 4:55 AM, Serhiy Storchaka  wrote:
> On 30.08.16 21:20, Antoine Pitrou wrote:
>>
>> On Tue, 30 Aug 2016 18:12:01 +
>> Brett Cannon  wrote:

 Why not make it always a list?  List objects are reasonably cheap in
 memory and access time... (unlike dicts)
>>>
>>>
>>> Because I would prefer to avoid any form of unnecessary performance
>>> overhead for the common case.
>>
>>
>> But the performance overhead of iterating over a 1-element list
>> is small enough (it's just an array access after a pointer dereference)
>> that it may not be larger than the overhead of the multiple tests and
>> conditional branches your example shows.
>
>
> Iterating over a tuple is even faster. It needs one pointer dereference
> less.
>
> And for memory efficiency we can use just a raw array of pointers.

Didn't all this kind of thing come up when function annotations were
discussed? Insane schemes like dictionaries with UUID keys and so on.
The decision then was YAGNI. The decision now, IMO, should be the
same. Keep things simple.

ChrisA


[Python-Dev] PEP 526 ready for review: Syntax for Variable and Attribute Annotations

2016-08-30 Thread Guido van Rossum
I'm happy to present PEP 526 for your collective review:
https://www.python.org/dev/peps/pep-0526/ (HTML)
https://github.com/python/peps/blob/master/pep-0526.txt (source)

There's also an implementation ready:
https://github.com/ilevkivskyi/cpython/tree/pep-526

I don't want to post the full text here but I encourage feedback on
the high-order ideas, including but not limited to

- Whether (given PEP 484's relative success) it's worth adding syntax
for variable/attribute annotations.

- Whether the keyword-free syntax idea proposed here is best:
  NAME: TYPE
  TARGET: TYPE = VALUE

Note that there's an extensive list of rejected ideas in the PEP;
please be so kind to read it before posting here:
https://www.python.org/dev/peps/pep-0526/#rejected-proposals-and-things-left-out-for-now


-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] File system path encoding on Windows

2016-08-30 Thread Victor Stinner
Le 30 août 2016 8:05 PM, "Nick Coghlan"  a écrit :
> This seems to be the crux of the disagreement: our perceptions of the
> relative risks to native Windows Python applications that currently
> work properly on Python 3.5 vs the potential compatibility benefits to
> primarily *nix applications that currently *don't* work on Windows
> under Python 3.5.

As I already wrote once, my problem is also that I simply have no idea how
much Python 3 code uses bytes filename. For example, does it concern more
than 25% of py3 modules on PyPi, or less than 5%?

Having an idea of the ratio would help to move the discussion forward.

Victor


Re: [Python-Dev] Supported versions of OpenSSL

2016-08-30 Thread Cory Benfield

> On 30 Aug 2016, at 16:07, M.-A. Lemburg  wrote:
> 
> That was not my point. It's unfortunate that Python depends on
> a library which is inevitably going to need updates frequently,
> and which then may have the implication that Python won't compile on
> systems which don't ship with more recent OpenSSL libs - even
> if your application doesn't even need ssl at all.
> 
> Crypto is important to have, but at the same time it's not
> essentially for everything you do in Python, e.g. you can
> easily run data analysis scripts or applications without ever
> touching the ssl module.
> 
> Yet, a move to require OpenSSL 1.0.2 for Python 3.7 will make
> it impossible to run such apps on systems that still use OpenSSL
> 1.0.1, e.g. Ubuntu 14.04 or CentOS 7.

If your application doesn’t need SSL, then you can compile without OpenSSL. I 
just downloaded and compiled the current tip of the CPython repository on a 
system with no OpenSSL, and the world didn’t explode, it just printed this:

Python build finished successfully!
The necessary bits to build these optional modules were not found:
_bz2  _curses   _curses_panel  
_dbm  _gdbm _lzma  
_sqlite3  _ssl  _tkinter   
readline  zlib 
To find the necessary bits, look in setup.py in detect_modules() for the 
module's name.

So this user you have considered, who needs Python but not the ssl module, is 
still well served. The ssl module is not mandatory in CPython, and no-one is 
proposing that it should be.
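Relatedly, code that does need ssl can probe the linked OpenSSL at runtime instead of hard-requiring a version — a small sketch:

```python
import ssl

# The stdlib exposes the OpenSSL it was compiled against...
print(ssl.OPENSSL_VERSION)        # e.g. 'OpenSSL 1.0.2h  3 May 2016'
print(ssl.OPENSSL_VERSION_INFO)   # e.g. (1, 0, 2, 8, 15)

# ...and feature flags, so newer features can be used conditionally.
print(ssl.HAS_SNI, getattr(ssl, "HAS_ALPN", False))
```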

But the real question is this: who *is* this hypothetical user? This user 
apparently needs the latest CPython, but is entirely unwilling to update 
literally anything else, including moving to a more recent release of their 
operating system. They are equipped to compile Python from source, but are 
apparently unwilling or unable to install a more recent OpenSSL from source. 
I’m not entirely certain that python-dev should be supporting that user: that 
user should be contacting their LTS supplier.

Cory


Re: [Python-Dev] PEP 526 ready for review: Syntax for Variable and Attribute Annotations

2016-08-30 Thread Sven R. Kunze

Thanks Guido, also to the rest of the PEP team (4 people) :)


On 30.08.2016 23:20, Guido van Rossum wrote:

I'm happy to present PEP 526 for your collective review:
https://www.python.org/dev/peps/pep-0526/ (HTML)
https://github.com/python/peps/blob/master/pep-0526.txt (source)

There's also an implementation ready:
https://github.com/ilevkivskyi/cpython/tree/pep-526

I don't want to post the full text here but I encourage feedback on
the high-order ideas, including but not limited to

- Whether (given PEP 484's relative success) it's worth adding syntax
for variable/attribute annotations.


I'd say no, especially because of the negative feedback from quite a few 
thread participants.



- Whether the keyword-free syntax idea proposed here is best:
   NAME: TYPE
   TARGET: TYPE = VALUE


If it will come, it's the best because of its similarity with parameter 
annotations and IIRC there are languages that already do it like this.



Note that there's an extensive list of rejected ideas in the PEP;
please be so kind to read it before posting here:
https://www.python.org/dev/peps/pep-0526/#rejected-proposals-and-things-left-out-for-now


I find everything else well covered in the PEP especially corner-cases 
like variables without initialization, scopes etc.



Sven


Re: [Python-Dev] PEP 526 ready for review: Syntax for Variable and Attribute Annotations

2016-08-30 Thread Brett Cannon
On Tue, 30 Aug 2016 at 14:21 Guido van Rossum  wrote:

> I'm happy to present PEP 526 for your collective review:
> https://www.python.org/dev/peps/pep-0526/ (HTML)
> https://github.com/python/peps/blob/master/pep-0526.txt (source)
>
> There's also an implementation ready:
> https://github.com/ilevkivskyi/cpython/tree/pep-526
>
> I don't want to post the full text here but I encourage feedback on
> the high-order ideas, including but not limited to
>
> - Whether (given PEP 484's relative success) it's worth adding syntax
> for variable/attribute annotations.
>

I think so, otherwise type hints are in this weird "half in, half out"
situation in terms of support that only non-OO code can fully utilize.
Either we're going to have type hints for those that want them and properly
support them for full use, or we shouldn't have type hints at all, and this
syntax fills in a nice gap that was a bit awkward to work around before.


>
> - Whether the keyword-free syntax idea proposed here is best:
>   NAME: TYPE
>   TARGET: TYPE = VALUE
>

I personally like it. I've been learning Rust lately and it matches up with
their syntax (sans `let`) and I have been happy with it (same goes for
TypeScript's use of the same syntax that Rust uses).

-Brett


>
> Note that there's an extensive list of rejected ideas in the PEP;
> please be so kind to read it before posting here:
>
> https://www.python.org/dev/peps/pep-0526/#rejected-proposals-and-things-left-out-for-now
>
>
> --
> --Guido van Rossum (python.org/~guido)


Re: [Python-Dev] File system path encoding on Windows

2016-08-30 Thread Victor Stinner
2016-08-30 23:51 GMT+02:00 Victor Stinner :
> As I already wrote once, my problem is also that I simply have no idea how
> much Python 3 code uses bytes filename. For example, does it concern more
> than 25% of py3 modules on PyPi, or less than 5%?

I made a very quick test on Windows using a modified Python raising an
exception on bytes path.

First of all, setuptools fails. It's a kind of blocker issue :-) I
quickly fixed it (only one line needs to be modified).

I tried to run Twisted unit tests (python -m twisted.trial twisted) of
Twisted 16.4. I got a lot of exceptions on bytes path from the
twisted/python/filepath.py module, but also from
twisted/trial/util.py. It looks like these modules are doing their
best to convert all paths to... bytes. I had to modify more than 5
methods just to be able to start running unit tests.

Quick result: setuptools and Twisted rely on bytes path. Dropping
bytes path support on Windows breaks these modules.

It also means that these modules don't support the full Unicode range
on Windows on Python 3.5.

Victor


Re: [Python-Dev] File system path encoding on Windows

2016-08-30 Thread Steve Dower

On 30Aug2016 1611, Victor Stinner wrote:

2016-08-30 23:51 GMT+02:00 Victor Stinner :

As I already wrote once, my problem is also that I simply have no idea how
much Python 3 code uses bytes filename. For example, does it concern more
than 25% of py3 modules on PyPi, or less than 5%?


I made a very quick test on Windows using a modified Python raising an
exception on bytes path.

First of all, setuptools fails. It's a kind of blocker issue :-) I
quickly fixed it (only one line needs to be modified).

I tried to run Twisted unit tests (python -m twisted.trial twisted) of
Twisted 16.4. I got a lot of exceptions on bytes path from the
twisted/python/filepath.py module, but also from
twisted/trial/util.py. It looks like these modules are doing their
best to convert all paths to... bytes. I had to modify more than 5
methods just to be able to start running unit tests.

Quick result: setuptools and Twisted rely on bytes path. Dropping
bytes path support on Windows breaks these modules.

It also means that these modules don't support the full Unicode range
on Windows on Python 3.5.


Thanks. That's a good idea (certainly better than mine, which was to go 
reading code...)


I haven't looked into setuptools, but Twisted appears to be correctly 
using sys.getfilesystemencoding() when they coerce to bytes, which means 
the proposed change will simply allow the full Unicode range when paths 
are encoded.
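That convention — transcoding through the declared filesystem encoding — is what os.fsencode() and os.fsdecode() implement. A quick sketch; note that under a UTF-8 filesystem encoding the round-trip is lossless, while under the legacy "mbcs" encoding on Windows the str-to-bytes direction could lose non-ANSI characters, which is the crux of this thread:

```python
import os
import sys

# os.fsencode()/os.fsdecode() apply sys.getfilesystemencoding() with its
# associated error handler -- the same dance Twisted does by hand.
path = "caf\u00e9.txt"
raw = os.fsencode(path)

print(sys.getfilesystemencoding())
# Round-trips under UTF-8; under legacy "mbcs" this is exactly where
# non-ANSI characters were lost.
assert os.fsdecode(raw) == path
```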


However, if there are places where bytes are not transcoded when they 
should be *then* there will be new issues. I wonder if we can quickly 
test whether that happens (e.g. use the file system encoding to "taint" 
the path somehow - special prefix? - so we can raise if bytes that 
haven't been correctly encoded at some point are passed in).


Some of my other searching revealed occasional correct use of 
sys.getfilesystemencoding(), a decent number of uses as a fallback when 
other encodings are not available, and it's very hard to search for code 
that uses the os module with bytes not checked to be the right encoding. 
This is why I argue that the beta period is the best opportunity to 
check, and why we're better to flip the switch now and flip it back if 
it all goes horribly wrong - the alternative is a *very* labour 
intensive exercise that I doubt we can muster.





Re: [Python-Dev] Supported versions of OpenSSL

2016-08-30 Thread Gregory P. Smith
On Tue, Aug 30, 2016 at 1:08 PM M.-A. Lemburg  wrote:

> On 29.08.2016 22:16, Christian Heimes wrote:
> > On 2016-08-29 21:31, M.-A. Lemburg wrote:
> >> On 29.08.2016 18:33, Cory Benfield wrote:
> >>>
>  On 29 Aug 2016, at 04:09, M.-A. Lemburg  wrote:
> 
>  On 28.08.2016 22:40, Christian Heimes wrote:
> > ...
> > I like to reduce the maintenance burden and list of supported OpenSSL
> > versions ASAP. OpenSSL has deprecated 0.9.8 and 1.0.0 last year.
> 1.0.1
> > will reach EOL by the end of this year,
> > https://www.openssl.org/policies/releasestrat.html . However OpenSSL
> > 0.9.8 is still required for some platforms (OSX).
> > ...
> > For upcoming 3.6 I would like to limit support to 1.0.2+ and require
> > 1.0.2 features for 3.7.
> > ...
> 
>  Hmm, that last part would mean that Python 3.7 will no longer compile
>  on e.g. Ubuntu 14.04 LTS which uses OpenSSL 1.0.1 as default version.
>  Since 14.04 LTS is supported until 2019, I think it would be better
>  to only start requiring 1.0.2 in Python 3.8.
> >>>
> >>> Can someone explain to me why this is a use-case we care about?
> >>
> >> Ubuntu 14.04 is a widely deployed system and newer Python version
> >> should run on such widely deployed systems without having to
> >> replace important vendor maintained system libraries such as
> >> OpenSSL.
> >
> > "Widely deployed" is true for a lot of old operating systems including
> > Windows XP.
> >
> >> Python 3.7 starts shipping around June 2018 (assuming the 18 month
> >> release cycle). Ubuntu 14.04 EOL is April 2019, so in order to
> >> be able to use Python 3.7 on such a system, you'd have to upgrade
> >> to a more recent LTS version 10 months before the EOL date (with
> >> all the associated issues) or lose vendor maintenance support and
> >> run with your own copy of OpenSSL.
> >
> > Why would you deploy an unsupported Python version on a LTS release? Why
> > should compatibility be our concern?
> >
> >> Sure, but Ubuntu will continue to support OpenSSL 1.0.1
> >> until 2019, backporting important security fixes as necessary and
> >> that's what's important.
> >
> > I see an easy solution here: either pay or make Canonical backport all
> > required features to OpenSSL 1.0.1. 
> >
> >> It's unfortunate that Python has to rely on a 3rd party library
> >> for security, but we should at least make sure that our users
> >> can rely on OS vendor support to keep the lib up to date with
> >> security fixes.
> >
> > No, it is a good thing that we can rely on 3rd party libraries for
> > security. Crypto and security is not our domain. It is incredible hard
> > to develop and maintain crypto code. Also my proposal enforces OS
> > vendors to supply up to date OpenSSL versions.
>
> That was not my point. It's unfortunate that Python depends on
> a library which is inevitably going to need updates frequently,
> and which then may have the implication that Python won't compile on
> systems which don't ship with more recent OpenSSL libs - even
> if your application doesn't even need ssl at all.
>
> >> On 29.08.2016 10:24, Christian Heimes wrote:
> >>> By the way I knew that something like this would come up from you.
> >>> Thank you that you satisfied my expectation. :p
> >>
> >> Sure, I want Python to be used on as many systems as possible,
> >> both in terms of architecture and OS. The more the better.
> >> If we don't have to drop support early, why should we ?
> >
> > MAL, I don't like your attitude. It feels like you want me and other
> > contributors to waste time on this topic. That is not how this
> > discussion is going to end. If *you* want to keep support for outdated
> > OpenSSL versions, than it is *your* responsibility and *your* time. You
> > cannot and will not put this burden on me.
>
> Please reread what I suggested: to postpone the switch to require
> OpenSSL 1.0.2 by one Python release version. And in my reply I then
> put this into more context, saying that your schedule will likely
> work out.
>
> Postponing this should not introduce more work for anyone; if you'd
> like to add support for 1.0.2 feature early this can also easily be
> done by making such support optional depending on which OpenSSL
> lib Python is compiled against. This takes a few #ifdefs, nothing
> more.
>
> > Python is running out of developers with OpenSSL expertise. It's Alex,
> > Antoine, Benjamin, Victor and me. Antoine and me haven't been active for
> > a while. Victor and Benjamin are mostly working on other topics. As far
> > as I can judge Alex, he rather works on PyCA than CPython stdlib.
> >
> > I'm both interested and willing to improve Python's ssl stack, and I'm
> > going to do this in my own free time. Yes, I'm working for Red Hat's
> > security engineering, but I'm not getting paid to work on Python (except
> > for a couple of hours now and then when a bug is relevant for my daily
> > work). I will only contribute improvements and fixes on my own

Re: [Python-Dev] File system path encoding on Windows

2016-08-30 Thread Victor Stinner
I made another quick&dirty test on Django 1.10 (I ran Django test
suite on my modified Python raising exception on bytes path): I didn't
notice any exception related to bytes path.

Django seems to only use Unicode for paths.

I can try to run more tests if you know some other major Python
applications (modules?) working on Windows/Python 3.

Note: About Twisted, I forgot to mention that I'm not really surprised
that Twisted uses bytes. Twisted was created something like 10 years
ago, when bytes was the de facto choice. Using Unicode in Python 2 was
painful when you imagine a module as large as Twisted. Twisted has to
support Python 2 and Python 3, so it's not surprising that it still
uses bytes in some places, instead of Unicode. Moreover, as with many
Python applications/modules, Linux is a first-class citizen, whereas
Windows is supported more on a "best effort" basis.

Victor


Re: [Python-Dev] File system path encoding on Windows

2016-08-30 Thread Steve Dower

On 30Aug2016 1702, Victor Stinner wrote:

I made another quick&dirty test on Django 1.10 (I ran Django test
suite on my modified Python raising exception on bytes path): I didn't
notice any exception related to bytes path.

Django seems to only use Unicode for paths.

I can try to run more tests if you know some other major Python
applications (modules?) working on Windows/Python 3.


The major ones aren't really the concern. I'd be interested to see where 
numpy and pandas are at, but I suspect they've already encountered and 
fixed many of these issues due to the size of the user base. (Though 
skim-reading numpy I see lots of code that would be affected - for 
better or worse - if the default encoding for open() changed...)


I'm more concerned about the long-tail of more focused libraries. Feel 
free to grab a random selection of Django extensions and try them out, 
but I don't really think it's worth the effort. I'm certainly not 
demanding you do it.



Note: About Twisted, I forgot to mention that I'm not really surprised
that Twisted uses bytes. Twisted was created something like 10 years
ago, when bytes was the de facto choice. Using Unicode in Python 2 was
painful when you imagine a module as large as Twisted. Twisted has to
support Python 2 and Python 3, so it's not surprising that it still
uses bytes in some places, instead of Unicode.


Yeah, I don't think they're doing anything wrong and wouldn't want to 
call them out on it. Especially since they already correctly handle it 
by asking Python what encoding should be used for the bytes.



Moreover, as with many
Python applications/modules, Linux is a first-class citizen, whereas
Windows is supported more on a "best effort" basis.


That last point is exactly why I think this is important. Any arguments 
against making Windows behave more like Linux (i.e. bytes paths are 
reliable) need to be clear as to why this doesn't matter or is less 
important than other concerns.


Cheers,
Steve



Re: [Python-Dev] PEP 526 ready for review: Syntax for Variable and Attribute Annotations

2016-08-30 Thread Steven D'Aprano
On Tue, Aug 30, 2016 at 02:20:26PM -0700, Guido van Rossum wrote:
> I'm happy to present PEP 526 for your collective review:

Are you hoping to get this in before 3.6 beta? Because I'm not sure I 
can give this much attention before then, but I really want to.


-- 
Steve


Re: [Python-Dev] Lib/http/client.py: could it return an OSError with the current response?

2016-08-30 Thread Martin Panter
On 30 August 2016 at 13:41, Ivo Bellin Salarin
 wrote:
> While using requests to tunnel a request via a proxy requiring user
> authentication, I have seen that httplib
> (https://hg.python.org/cpython/file/3.5/Lib/http/client.py#l831) raises the
> message returned by the proxy, along with its status code (407) without
> including the proxy response. This one could be very interesting to the
> consumer, since it could contain some useful headers (like the supported
> authentication schemes).

Here are some existing bug threads which may be relevant:

https://bugs.python.org/issue7291 (urllib.request support for handling
tunnel authentication)
https://bugs.python.org/issue24964 (get tunnel response header fields
in http.client)

> Would it be possible to change the http/client.py behavior in order to raise
> an exception including the whole response?

That would be one way, and might be good enough for getting a
Proxy-Authenticate value. Although there might be other ways that
also:

* Allow reading the body (e.g. HTML page) of the error response. IMO
an exception instance is the wrong place for this;
urllib.error.HTTPError is a bad example.
* Allow the tunnel response fields to be obtained even when the
request was successful

> If you don't see any problem with my proposal, how can I propose a pull
> request? :-)

Perhaps you can use one of the patches at one of the above bug reports
as a starting point. What you want seems to be a prerequisite for
Issue 7291 (urllib.request), so maybe we can discuss it there. Or open
a new bug to focus on the http.client-only aspect.
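For reference, the tunnelling setup under discussion can be sketched like this (no request is actually sent below; the host names and the Proxy-Authorization value are placeholders):

```python
import http.client

# Connect to the proxy, then tunnel to the real destination.
conn = http.client.HTTPSConnection("proxy.example.com", 8080)
conn.set_tunnel("www.example.com", 443,
                headers={"Proxy-Authorization": "Basic <credentials>"})

# conn.request("GET", "/") would issue CONNECT through the proxy; if the
# proxy answers 407, http.client raises OSError carrying only the status
# line -- the response's Proxy-Authenticate field is discarded, which is
# what issues 7291 and 24964 are about.
print(conn.host, conn.port)  # still the proxy address until connected
```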


Re: [Python-Dev] PEP 526 ready for review: Syntax for Variable and Attribute Annotations

2016-08-30 Thread Guido van Rossum
On Tue, Aug 30, 2016 at 6:00 PM, Steven D'Aprano  wrote:
> On Tue, Aug 30, 2016 at 02:20:26PM -0700, Guido van Rossum wrote:
>> I'm happy to present PEP 526 for your collective review:
>
> Are you hoping to get this in before 3.6 beta? Because I'm not sure I
> can give this much attention before then, but I really want to.

Yes I am hoping for that. Unlike PEP 484, this PEP is forward-looking
(more like PEP 492, async/await), and the sooner we can get it in the
sooner people who want to use it won't have to worry about supporting
older Python versions. (And am I ever looking forward to the day when
Python 3.5 counts as "older". :-)

While some of the details are better, this is substantially the same
proposal that we discussed at length in python-ideas, starting at
https://mail.python.org/pipermail/python-ideas/2016-August/041294.html
(and you participated vigorously in that thread, so very little in the
PEP should be news to you).

-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] PEP 526 ready for review: Syntax for Variable and Attribute Annotations

2016-08-30 Thread Jack Diederich
+0. We should try and be consistent even if this is a thing I don't
want. And trust me, I don't!

That said, as long as pro-mypy people are willing to make everyone else pay
a mypy reading tax for code, let's try to reduce the cognitive burden.

* Duplicate type annotations should be a syntax error.
  Duplicate annotations aren't possible in functions, so that wasn't an
issue in 484. 526 makes some things syntax errors and some things runtime
errors (for good reason -- function bodies aren't evaluated right away).
Double-annotating a variable is something we can detect at compile time,
and double annotating is nonsensical, so we should error on it

*  Disallowing annotations on global and nonlocal
  Agreed, allowing it would be confusing because it would either be a
re-definition or a hard to read annotation-at-a-distance.

* Where __annotations__ live
  It is strange to allow modules.__annotations__ and
MyClass.__annotations__ but not myfunc.__annotations__ (or more in line
with existing function implementations a myfunc.__code__.co_annotations).
If we know enough from the syntax parse to have func.__code__.co_varnames
be known then we should try to do that with annotations.  Let's raise a
SyntaxError for function body annotations that conflict with same-named
variables that are annotated in the function signature as well.

I did C++ for years before I did Python and wrote C++ in many languages
(including Python). So ideally I'm -1000 on all this stuff for cultural
reasons -- if you let a C++ person add types they will for false comfort.
But again, I'm +0 on this specific proposal because we have already gone
down the garden path.

-Jack


On Tue, Aug 30, 2016 at 9:00 PM, Steven D'Aprano 
wrote:

> On Tue, Aug 30, 2016 at 02:20:26PM -0700, Guido van Rossum wrote:
> > I'm happy to present PEP 526 for your collective review:
>
> Are you hoping to get this in before 3.6 beta? Because I'm not sure I
> can give this much attention before then, but I really want to.
>
>
> --
> Steve


Re: [Python-Dev] PEP 526 ready for review: Syntax for Variable and Attribute Annotations

2016-08-30 Thread Guido van Rossum
On Tue, Aug 30, 2016 at 7:44 PM, Jack Diederich  wrote:
> +0. We should try and be consistent even if this is a thing I don't want.
> And trust me, I don't!

No problem. You won't have to!

> That said, as long as pro-mypy people are willing to make everyone else pay
> a mypy reading tax for code let's try and reduce the cognitive burden.
>
> * Duplicate type annotations should be a syntax error.
>   Duplicate annotations aren't possible in functions so that wasn't an issue
> in 484. 526 makes some things syntax errors and some things runtime errors
> (for good reason -- function bodies aren't evaluated right away).
> Double-annotating a variable is something we can figure out at compile time
> and doing the double annotating is non-sensical so we should error on it
> because we can.

Actually I'm not so sure that double-annotating is always nonsensical.
In the mypy tracker we're seeing some requests for type *inference*
that allows a variable to be given another type later, e.g.

x = 'abc'
test_func(x)
x = 42
another_test_func(x)

Maybe there's a use for explicit annotations too. I would rather not
get in the way of letting type checkers decide such semantics.

> *  Disallowing annotations on global and nonlocal
>   Agreed, allowing it would be confusing because it would either be a
> re-definition or a hard to read annotation-at-a-distance.
>
> * Where __annotations__ live
>   It is strange to allow modules.__annotations__ and MyClass.__annotations__
> but not myfunc.__annotations__ (or more in line with existing function
> implementations a myfunc.__code__.co_annotations). If we know enough from
> the syntax parse to have func.__code__.co_varnames be known then we should
> try to do that with annotations.  Let's raise a SyntaxError for function
> body annotations that conflict with same-named variables that are annotated
> in the function signature as well.

But myfunc.__annotations__ already exists -- PEP 3107 puts the
signature annotations there. The problem with co_annotations is that
annotations are evaluated (they can be quite complex expressions, e.g.
Optional[Tuple[int, int, some_mod.SomeClass]]), while co_varnames is
just a list of strings. And code objects must be immutable. The issue
with rejecting duplicate annotations so sternly is the same as for the
previous bullet.
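For comparison, PEP 3107's existing behaviour: signature annotations are evaluated once at def time and stored on the function object, while the code object carries only plain string data like co_varnames:

```python
from typing import Optional, Tuple

def locate(grid, start: Tuple[int, int]) -> Optional[Tuple[int, int]]:
    return start if start in grid else None

# The annotation expressions were already evaluated, at 'def' time:
print(locate.__annotations__['start'])
print(locate.__code__.co_varnames)  # just strings: ('grid', 'start')
```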

> I did C++ for years before I did Python and wrote C++ in many languages
> (including Python). So ideally I'm -1000 on all this stuff for cultural
> reasons -- if you let a C++ person add types they will for false comfort.
> But again, I'm +0 on this specific proposal because we have already gone
> down the garden path.

As long as you run mypy the comfort shouldn't be false. (But your
starting with C++ before Python explains a lot. :-)

> -Jack

-- 
--Guido van Rossum (python.org/~guido)


Re: [Python-Dev] PEP 526 ready for review: Syntax for Variable and Attribute Annotations

2016-08-30 Thread Jack Diederich
On Tue, Aug 30, 2016 at 11:03 PM, Guido van Rossum  wrote:

> On Tue, Aug 30, 2016 at 7:44 PM, Jack Diederich 
> wrote:
> > +0. We should try and be consistent even if this is a thing I don't want.
> > And trust me, I don't!
>
> No problem. You won't have to!
>
>
Yes! I don't have to want it, it is here!


> > That said, as long as pro-mypy people are willing to make everyone else
> pay
> > a mypy reading tax for code let's try and reduce the cognitive burden.
> >
> > * Duplicate type annotations should be a syntax error.
> >   Duplicate annotations aren't possible in functions so that wasn't an
> issue
> > in 484. 526 makes some things syntax errors and some things runtime
> errors
> > (for good reason -- function bodies aren't evaluated right away).
> > Double-annotating a variable is something we can figure out at compile
> time
> > and doing the double annotating is non-sensical so we should error on it
> > because we can.
>
> Actually I'm not so sure that double-annotating is always nonsensical.
> In the mypy tracker we're seeing some requests for type *inference*
> that allows a variable to be given another type later, e.g.
>
> x = 'abc'
> test_func(x)
> x = 42
> another_test_func(x)
>
> Maybe there's a use for explicit annotations too. I would rather not
> get in the way of letting type checkers decide such semantics.
>
>
Other languages (including RPython) don't allow rebinding types (or
sometimes even re-assignment to the same type). We are going for clarity
[and bondage, and discipline]. If we are doing types, let's do types like
other people do. I think *disallowing* redefinition of the type is general
to enforcing types. +1 on being consistent with other langs. If plain
repetition of the same type is allowed, I'm OK with it -- "i: int = 0"
doesn't summon horrors when said three times into a mirror. But we can't
always know what "int" evaluates to, so I'd just disallow it.


> > *  Disallowing annotations on global and nonlocal
> >   Agreed, allowing it would be confusing because it would either be a
> > re-definition or a hard to read annotation-at-a-distance.
> >
> > * Where __annotations__ live
> >   It is strange to allow modules.__annotations__ and
> MyClass.__annotations__
> > but not myfunc.__annotations__ (or more in line with existing function
> > implementations a myfunc.__code__.co_annotations). If we know enough
> from
> > the syntax parse to have func.__code__.co_varnames be known then we
> should
> > try to do that with annotations.  Let's raise a SyntaxError for function
> > body annotations that conflict with same-named variables that are
> annotated
> > in the function signature as well.
>
> But myfunc.__annotations__ already exists -- PEP 3107 puts the
> signature annotations there. The problem with co_annotations is that
> annotations are evaluated (they can be quite complex expressions, e.g.
> Optional[Tuple[int, int, some_mod.SomeClass]]), while co_varnames is
> just a list of strings. And code objects must be immutable. The issue
> with rejecting duplicate annotations so sternly is the same as for the
> previous bullet.
>
>
If we disallow re-assignment of types as a syntax error, then the conflict
with myfunc.__annotations__ goes away for vars that share a name with the
function arguments. The fact that a variable's type can't be known until
the function body executes a particular line is... I'm not sure how to deal
with that. For modules and classes you can assert that the body at the top
indent level has been executed. For functions you can only assert that it
has been parsed. So myfunc.__annotations__ could say that the type has a
definition but only later know what that definition is.
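As a data point on the asymmetry: under PEP 3107, only signature annotations
reach func.__annotations__, and the PEP 526 draft leaves local annotations
unevaluated and unstored. A quick sketch, assuming the draft lands as written:

```python
def f(a: int) -> int:
    b: int = a + 1  # local annotation: not evaluated, never stored
    return b

print(f.__annotations__)  # {'a': <class 'int'>, 'return': <class 'int'>} -- no 'b'
```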

> I did C++ for years before I did Python and wrote C++ in many languages
> > (including Python). So ideally I'm -1000 on all this stuff for cultural
> > reasons -- if you let a C++ person add types they will for false comfort.
> > But again, I'm +0 on this specific proposal because we have already gone
> > down the garden path.
>
> As long as you run mypy the comfort shouldn't be false. (But your
> starting with C++ before Python explains a lot. :-)
>

We've talked about this and we have different relationships with tools. I'm
a monk who thinks using a debugger is an admission of failure; you think
linters are a fine method of dissuading others of sin.
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] What do we do about bad slicing and possible crashes (issue 27867)

2016-08-30 Thread Nick Coghlan
On 31 August 2016 at 05:06, Serhiy Storchaka  wrote:
> On 30.08.16 20:42, Nick Coghlan wrote:
>> Given Serhiy's clarification that this is primarily a thread safety
>> problem, I'm more supportive of the "PySlice_GetIndicesForObject"
>> approach (since that can call all the __index__ methods first, leaving
>> the final __len__ call as the only problematic case).
>
> This doesn't work with multidimensional slicing (like _testbuffer.ndarray or
> NumPy arrays).

Thanks, that makes sense.

>> However, given the observation that __len__ can also release the GIL,
>> I'm not clear on how 2A is supposed to work - a poorly timed thread
>> switch means there's always going to be a risk of len(obj) returning
>> outdated information if a container implemented in Python is being
>> mutated concurrently from different threads, so what can be done
>> differently in the calling functions that couldn't be done in a new
>> API that accepted the container reference?
>
> Current code doesn't use __len__. It uses something like
> PyUnicode_GET_LENGTH().

Oh, I see - it's the usual rule that C code can be made implicitly
atomic if it avoids calling hooks potentially written in Python, but
pure Python containers need explicit locks to allow concurrent access
from multiple threads.
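A minimal sketch of the explicit locking a pure-Python container needs under
concurrent mutation (a hypothetical class, not from the patch under
discussion):

```python
import threading

class LockedList:
    """Pure-Python containers get no implicit atomicity: __len__ and
    mutation can interleave across threads, so compound operations
    need an explicit lock."""

    def __init__(self):
        self._items = []
        self._lock = threading.Lock()

    def append(self, item):
        with self._lock:
            self._items.append(item)

    def slice(self, start, stop):
        # length check and copy happen under one lock acquisition, so
        # a concurrent append cannot invalidate the computed indices
        with self._lock:
            return self._items[start:stop]
```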

> The solution turned out to be easier than I feared. See my patch on issue27867.

+1 from me. Would it make sense to make these public, as new additions to
the stable ABI, for 3.6+?

Cheers,
Nick.

-- 
Nick Coghlan   |   [email protected]   |   Brisbane, Australia


Re: [Python-Dev] Update on PEP 523 and adding a co_extra field to code objects

2016-08-30 Thread Nick Coghlan
On 31 August 2016 at 07:11, Chris Angelico  wrote:
> Didn't all this kind of thing come up when function annotations were
> discussed? Insane schemes like dictionaries with UUID keys and so on.
> The decision then was YAGNI. The decision now, IMO, should be the
> same. Keep things simple.

Different use case - for annotations, the *reader* of the code is one
of the intended audiences, so as the author of the code, you decide
what you want to tell them, and that then constrains the tools you can
use (or vice-versa - you pick the kinds of tools you want to use, and
that constrains what you can tell your readers).

This case is different - there are no human readers involved, only
automated tools, so adding a mandatory redirection through a sequence
is just a small performance hit rather than a readability problem.
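The redirection can be as cheap as a linear scan keyed by the owning tool --
a hypothetical Python transliteration of the idea (the actual proposal would
key C payloads off a PyTypeObject):

```python
def get_extra(extras, owner):
    # each tool stores an (owner, payload) pair; lookup is a linear
    # scan, which is cheap for the expected one or two active tools
    for entry_owner, payload in extras:
        if entry_owner is owner:
            return payload
    return None

def set_extra(extras, owner, payload):
    for i, (entry_owner, _) in enumerate(extras):
        if entry_owner is owner:
            extras[i] = (owner, payload)  # replace only our own entry
            return
    extras.append((owner, payload))       # first time: append

PYJION = object()   # stand-ins for each tool's identifying type
VMPROF = object()

extras = []
set_extra(extras, PYJION, {"jitted": True})
set_extra(extras, VMPROF, {"samples": []})
print(get_extra(extras, PYJION))  # {'jitted': True}
```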

Cheers,
Nick.

-- 
Nick Coghlan   |   [email protected]   |   Brisbane, Australia


Re: [Python-Dev] Update on PEP 523 and adding a co_extra field to code objects

2016-08-30 Thread Nick Coghlan
On 31 August 2016 at 04:55, Serhiy Storchaka  wrote:
> On 30.08.16 21:20, Antoine Pitrou wrote:
>> But the performance overhead of iterating over a 1-element list
>> is small enough (it's just an array access after a pointer dereference)
>> that it may not be larger than the overhead of the multiple tests and
>> conditional branches your example shows.
>
> Iterating over a tuple is even faster. It needs one pointer dereference
> less.

That comes at the cost of making metadata additions a bit more
complicated though - you'd have to replace the existing tuple with a
new one that adds your own metadata, rather than just appending to a
list.

I do think there are enough subtleties here (going from no metadata ->
some metadata, and some metadata -> more metadata) that it makes sense
to provide a standard API for it (excluded from the stable ABI),
rather than expecting plugin developers to roll their own.

Strawman:

PyObject * PyCode_GetExtra(PyCodeObject *code, PyTypeObject *extra_type);
int PyCode_SetExtra(PyCodeObject *code, PyObject *extra);
int PyCode_DelExtra(PyCodeObject *code, PyTypeObject *extra_type);

Then Brett's example code would become:

pyjion_cache = PyCode_GetExtra(code_obj, &PyPyjion_Type);
if (pyjion_cache == NULL) {
    pyjion_cache = PyPyjion_New();
    if (PyCode_SetExtra(code_obj, pyjion_cache) < 0) {
        /* Something went wrong, report that somehow */
    }
}
/* pyjion_cache is valid here */

Making those APIs fast (for an assumed small number of simultaneously
active interpreter plugins) and thread-safe is then an internal
CPython implementation detail, rather than being something plugin
writers need to concern themselves with.

Cheers,
Nick.

-- 
Nick Coghlan   |   [email protected]   |   Brisbane, Australia


Re: [Python-Dev] Supported versions of OpenSSL

2016-08-30 Thread Nick Coghlan
On 31 August 2016 at 09:55, Gregory P. Smith  wrote:
> On Tue, Aug 30, 2016 at 1:08 PM M.-A. Lemburg  wrote:
>> Yet, a move to require OpenSSL 1.0.2 for Python 3.7 will make
>> it impossible to run such apps on systems that still use OpenSSL
>> 1.0.1, e.g. Ubuntu 14.04 or CentOS 7.
>
> Not important. That isn't something we need to worry about. Compiling a new
> libssl is easy.  People using systems that are 4+ years old by the time 3.7
> comes out who expect new software to compile and just work are expecting too
> much.
>
> I find that users of such systems either use only what their distro itself
> supplies (ie: ancient versions at that point) or are fully comfortable
> building any dependencies their own software needs. If they are comfortable
> building a CPython runtime in the first place, they should be comfortable
> building required libraries. Nothing new there.

There's a 3rd variant, which is to raise support tickets with their
LTS vendors to request compatibility backports. I strongly encourage
that behaviour by end user organisations when wearing both my upstream
volunteer contributor hat, since it means they're not bothering
community volunteers with their institutional support requests, and my
downstream redistributor employee hat, since the more Python related
customer support requests Red Hat receives, the easier it gets for
folks internally (including me) to put together business cases arguing
for increased direct investment in the upstream Python ecosystem :)

Cheers,
Nick.

-- 
Nick Coghlan   |   [email protected]   |   Brisbane, Australia


Re: [Python-Dev] File system path encoding on Windows

2016-08-30 Thread Nick Coghlan
On 31 August 2016 at 10:27, Steve Dower  wrote:
> On 30Aug2016 1702, Victor Stinner wrote:
>> I can try to run more tests if you know some other major Python
>> applications (modules?) working on Windows/Python 3.
>
> The major ones aren't really the concern. I'd be interested to see where
> numpy and pandas are at, but I suspect they've already encountered and fixed
> many of these issues due to the size of the user base. (Though skim-reading
> numpy I see lots of code that would be affected - for better or worse - if
> the default encoding for open() changed...)

For a case of "Don't break software already trying to do things
right", the https://github.com/beetbox/beets example that Daniel
linked earlier would be a good one to test.

Cheers,
Nick.

-- 
Nick Coghlan   |   [email protected]   |   Brisbane, Australia


Re: [Python-Dev] PEP 526 ready for review: Syntax for Variable and Attribute Annotations

2016-08-30 Thread Nick Coghlan
On 31 August 2016 at 13:37, Jack Diederich  wrote:
> On Tue, Aug 30, 2016 at 11:03 PM, Guido van Rossum  wrote:
>> But myfunc.__annotations__ already exists -- PEP 3107 puts the
>> signature annotations there. The problem with co_annotations is that
>> annotations are evaluated (they can be quite complex expressions, e.g.
>> Optional[Tuple[int, int, some_mod.SomeClass]]), while co_varnames is
>> just a list of strings. And code objects must be immutable. The issue
>> with rejecting duplicate annotations so sternly is the same as for the
>> previous bullet.
>>
>
> If we disallow re-assignment of types as a syntax error then the conflict
> with myfunc.__annotations__ goes away for vars that share a name with the
> function arguments. The fact that variables with types can't be known until
> the function body executes a particular line is .. I'm not sure how to deal
> with that. For modules and classes you can assert that the body at the top
> indent level has been executed. For functions you can only assert that it
> has been parsed. So myfunc.__annotations__ could say that the type has a
> definition but only later know what the definition is.

What if we included local variable annotations in func.__annotations__
as cells, like the entries in func.__closure__?

We could also use that as a micro-optimisation technique: once the
type annotation cell is populated, CPython would just use it, rather
than re-evaluating the local variable type annotation expression every
time the function is called.
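Closure cells already give this kind of write-once-read-many indirection; a
quick illustration of the existing mechanism (plain CPython behaviour, not
the proposed annotation storage):

```python
def make_counter():
    count = 0            # shared with the inner function, so kept in a cell
    def bump():
        nonlocal count
        count += 1
        return count
    return bump

bump = make_counter()
cell = bump.__closure__[0]
print(cell.cell_contents)  # 0 -- reads go through the cell...
bump()
print(cell.cell_contents)  # 1 -- ...and always see the current binding
```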

Cheers,
Nick.

-- 
Nick Coghlan   |   [email protected]   |   Brisbane, Australia


Re: [Python-Dev] PEP 526 ready for review: Syntax for Variable and Attribute Annotations

2016-08-30 Thread Guido van Rossum
On Tuesday, August 30, 2016, Nick Coghlan  wrote:

> On 31 August 2016 at 13:37, Jack Diederich wrote:
> > On Tue, Aug 30, 2016 at 11:03 PM, Guido van Rossum wrote:
> >> But myfunc.__annotations__ already exists -- PEP 3107 puts the
> >> signature annotations there. The problem with co_annotations is that
> >> annotations are evaluated (they can be quite complex expressions, e.g.
> >> Optional[Tuple[int, int, some_mod.SomeClass]]), while co_varnames is
> >> just a list of strings. And code objects must be immutable. The issue
> >> with rejecting duplicate annotations so sternly is the same as for the
> >> previous bullet.
> >>
> >
> > If we disallow re-assignment of types as a syntax error then the conflict
> > with myfunc.__annotations__ goes away for vars that share a name with the
> > function arguments. The fact that variables with types can't be known
> until
> > the function body executes a particular line is .. I'm not sure how to
> deal
> > with that. For modules and classes you can assert that the body at the
> top
> > indent level has been executed. For functions you can only assert that it
> > has been parsed. So myfunc.__annotations__ could say that the type has a
> > definition but only later know what the definition is.
>
> What if we included local variable annotations in func.__annotations__
> as cells, like the entries in func.__closure__?
>
> We could also use that as a micro-optimisation technique: once the
> type annotation cell is populated, CPython would just use it, rather
> than re-evaluating the local variable type annotation expression every
> time the function is called.
>

But what runtime use do annotations on locals have? They are not part of
any inspectable interface. I don't want to spend any effort on them at
runtime. (The only runtime effect should be that they are treated as locals.)

--Guido


-- 
--Guido (mobile)


Re: [Python-Dev] Update on PEP 523 and adding a co_extra field to code objects

2016-08-30 Thread Victor Stinner
PEP 445, the C API for malloc, allows plugging in multiple wrappers, and each
wrapper has its own "void* context" data. When you register a new wrapper,
you store the current context and function so you can chain to them later.

See the hooks example:
https://www.python.org/dev/peps/pep-0445/#use-case-3-setup-hooks-on-memory-block-allocators

Since PEP 523 also adds a function, would it be possible to design a
similar mechanism to "chain wrappers"?

I know that PEP 523 has a different design, so maybe it's not possible.

For example, the context can be passed to PyFrameEvalFunction. In this
case, each project would have to register its own eval function, including
vmprof. I don't know if it makes sense for vmprof to modify the behaviour
at runtime (add a C frame per Python eval frame).
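For illustration, the PEP 445 chaining pattern transliterated to Python --
install stores the previous hook and the wrapper delegates to it
(hypothetical names; the real API is C):

```python
class EvalHooks:
    # Each installed wrapper captures the previously installed hook,
    # mirroring how PEP 445 wrappers save the prior (function, context)
    # pair and chain to it.
    def __init__(self, default):
        self._eval = default

    def install(self, wrapper):
        prev = self._eval  # saved per wrapper, like the "void* context"
        self._eval = lambda frame: wrapper(frame, prev)

    def run(self, frame):
        return self._eval(frame)

hooks = EvalHooks(default=lambda frame: "eval(" + frame + ")")
hooks.install(lambda frame, prev: "traced:" + prev(frame))  # e.g. a profiler
hooks.install(lambda frame, prev: "jitted:" + prev(frame))  # e.g. a JIT
print(hooks.run("f"))  # jitted:traced:eval(f)
```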

Victor
___
Python-Dev mailing list
[email protected]
https://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: 
https://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com


Re: [Python-Dev] Supported versions of OpenSSL

2016-08-30 Thread Paul Moore
On 31 August 2016 at 00:55, Gregory P. Smith  wrote:
> I find that users of such systems either use only what their distro itself
> supplies (ie: ancient versions at that point) or are fully comfortable
> building any dependencies their own software needs. If they are comfortable
> building a CPython runtime in the first place, they should be comfortable
> building required libraries. Nothing new there

In our environment (corporate systems locked to older OS releases,
with Python *not* a strategic solution but used for ad-hoc automation)
it's quite common to find only an ancient version of Python available,
but want to build a new version without any ability to influence
corporate IT to allow new versions of the necessary libraries.

But I strongly agree, this is *my* problem, and Python policy should
not be based on the idea that what I want to do "should" be supported.

So +1 on the proposed change here.

Paul