[RELEASE] Python 2.7.18 release candidate 1

2020-04-06 Thread Benjamin Peterson
Greetings,
2.7.18 release candidate 1, a testing release for the final release of the 
Python 2.7 series, is now available for download. The CPython core developers 
stopped applying routine bugfixes to the 2.7 branch on January 1; 2.7.18 will 
include fixes that were applied before that cutoff.

Downloads are at:

   https://www.python.org/downloads/release/python-2718rc1/

The full changelog is at

   https://raw.githubusercontent.com/python/cpython/v2.7.18rc1/Misc/NEWS.d/2.7.18rc1.rst

Test it out, and let us know if there are any critical problems at

https://bugs.python.org/

(This is the last chance!)

All the best,
Benjamin
--
Python-announce-list mailing list -- python-announce-list@python.org
To unsubscribe send an email to python-announce-list-le...@python.org
https://mail.python.org/mailman3/lists/python-announce-list.python.org/

Support the Python Software Foundation:
http://www.python.org/psf/donations/


[issue40166] UNICODE HOWTO: Change the total number of code points in the introduction section

2020-04-06 Thread miss-islington


Change by miss-islington :


--
nosy: +miss-islington
nosy_count: 3.0 -> 4.0
pull_requests: +18767
pull_request: https://github.com/python/cpython/pull/19406

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue40166] UNICODE HOWTO: Change the total number of code points in the introduction section

2020-04-06 Thread Benjamin Peterson


Benjamin Peterson  added the comment:


New changeset 8ea10a94463f1ea217bcaef86f2ebd9d43240b4e by amaajemyfren in 
branch 'master':
closes bpo-40166: Change Unicode Howto so that it does not have a specific 
number of assigned code points. (GH-19328)
https://github.com/python/cpython/commit/8ea10a94463f1ea217bcaef86f2ebd9d43240b4e


--
nosy: +benjamin.peterson
resolution:  -> fixed
stage: patch review -> resolved
status: open -> closed




[issue39943] Meta: Clean up various issues in C internals

2020-04-06 Thread Andy Lester


Change by Andy Lester :


--
pull_requests: +18766
pull_request: https://github.com/python/cpython/pull/19405




[issue40214] test_ctypes.test_load_dll_with_flags Windows failure

2020-04-06 Thread Kyle Stanley


New submission from Kyle Stanley :

In several recent PRs, test_ctypes.test_load_dll_with_flags is failing for the 
Azure Pipelines "Windows PR tests win32" and "Windows PR tests win64" with the 
following error message:

```
==
ERROR: test_load_dll_with_flags (ctypes.test.test_loading.LoaderTest) 
[WinDLL('_sqlite3.dll', winmode=0)]
--
Traceback (most recent call last):
  File "d:\a\1\s\lib\ctypes\test\test_loading.py", line 140, in should_pass
subprocess.check_output(
  File "d:\a\1\s\lib\subprocess.py", line 419, in check_output
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
  File "d:\a\1\s\lib\subprocess.py", line 533, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command 
'['d:\\a\\1\\s\\PCbuild\\win32\\python.exe', '-c', "from ctypes import *; 
import nt;WinDLL('_sqlite3.dll', winmode=0)"]' returned non-zero exit status 1.

==
ERROR: test_load_dll_with_flags (ctypes.test.test_loading.LoaderTest) 
[WinDLL('_sqlite3.dll', winmode=0)]
--
Traceback (most recent call last):
  File "d:\a\1\s\lib\ctypes\test\test_loading.py", line 140, in should_pass
subprocess.check_output(
  File "d:\a\1\s\lib\subprocess.py", line 419, in check_output
return run(*popenargs, stdout=PIPE, timeout=timeout, check=True,
  File "d:\a\1\s\lib\subprocess.py", line 533, in run
raise CalledProcessError(retcode, process.args,
subprocess.CalledProcessError: Command 
'['d:\\a\\1\\s\\PCbuild\\win32\\python.exe', '-c', "from ctypes import *; 
import nt;WinDLL('_sqlite3.dll', winmode=0)"]' returned non-zero exit status 1.
```

Since this only started occurring recently in several unrelated PRs, I suspect 
it is most likely an intermittent regression introduced in master. Here are 
the PRs in which I have seen the exact same error so far:

https://github.com/python/cpython/pull/18239
https://github.com/python/cpython/pull/19403
https://github.com/python/cpython/pull/19402
https://github.com/python/cpython/pull/19399

I was unable to reproduce it locally on my secondary boot of Windows 10.

--
components: Library (Lib)
keywords: 3.9regression
messages: 365887
nosy: aeros
priority: normal
severity: normal
status: open
title: test_ctypes.test_load_dll_with_flags Windows failure
versions: Python 3.9




Re: print small DataFrame to STDOUT and read it back into dataframe

2020-04-06 Thread Luca

On 4/6/2020 8:51 PM, Reto wrote:

out = df.to_csv(None)
new = pd.read_csv(io.StringIO(out), index_col=0)


Thank you, brother. It works
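
For anyone following along, here is a self-contained version of that round trip (the column names are made up for illustration):

```python
import io

import pandas as pd

# Print a frame as CSV text, then read the text back into an equal frame.
df = pd.DataFrame({"a": [1, 2], "b": [3.5, 4.5]})
text = df.to_csv(None)                 # to_csv(None) returns the CSV as a string
restored = pd.read_csv(io.StringIO(text), index_col=0)
assert restored.equals(df)             # nothing lost in the round trip
```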

--
https://mail.python.org/mailman/listinfo/python-list


[issue26903] ProcessPoolExecutor(max_workers=64) crashes on Windows

2020-04-06 Thread Mike Hommey


Mike Hommey  added the comment:

This is still a problem in Python 3.7 (and, I would guess, 3.8).

Even without specifying max_workers, it fails with a ValueError from 
_winapi.WaitForMultipleObjects, with the message "need at most 63 handles, got 
a sequence of length 63".

That happens with max_workers=None and max_workers=61, but not with 
max_workers=60.

I wonder if there's an off-by-one in this test: 
https://github.com/python/cpython/blob/7668a8bc93c2bd573716d1bea0f52ea520502b28/Modules/_winapi.c#L1708
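
Until that is settled, a defensive cap on worker counts sidesteps the limit. This sketch uses 60, the value reported to work above; the helper name is mine, not an official API:

```python
import concurrent.futures
import os

def safe_max_workers():
    # WaitForMultipleObjects can wait on at most ~63 handles on Windows,
    # so cap the pool size there; elsewhere use the CPU count as usual.
    n = os.cpu_count() or 1
    return min(n, 60) if os.name == "nt" else n

if __name__ == "__main__":
    with concurrent.futures.ProcessPoolExecutor(max_workers=safe_max_workers()) as ex:
        print(list(ex.map(abs, [-1, -2, -3])))  # [1, 2, 3]
```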

--
nosy: +Mike Hommey




[issue40213] contextlib.aclosing()

2020-04-06 Thread Nathaniel Smith


Change by Nathaniel Smith :


--
nosy: +ncoghlan, yselivanov




[issue40213] contextlib.aclosing()

2020-04-06 Thread John Belmonte


New submission from John Belmonte :

Please add aclosing() to contextlib, the async equivalent of closing().

It's needed to ensure a deterministic call of aclose() on the resource object 
at block exit.

It's been available in the async_generator module for some time.  However that 
module is intended to provide async generators to Python 3.5, so it's odd for 
apps using modern Python versions to depend on it only for aclosing().

https://github.com/python-trio/async_generator/blob/22eddc191c2ae3fc152ca13cf2d6fa55ac3f1568/async_generator/_util.py#L6
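
For reference, a minimal sketch of what such a helper could look like, mirroring contextlib.closing(); this is an illustration, not the eventual stdlib code:

```python
import asyncio

class aclosing:
    """Async analogue of contextlib.closing(): awaits thing.aclose() on exit."""
    def __init__(self, thing):
        self.thing = thing
    async def __aenter__(self):
        return self.thing
    async def __aexit__(self, *exc_info):
        await self.thing.aclose()

closed = False

async def ticker():
    global closed
    try:
        for i in range(10):
            yield i
    finally:
        closed = True  # runs deterministically when the block exits

async def main():
    seen = []
    async with aclosing(ticker()) as gen:
        async for i in gen:
            seen.append(i)
            if i == 1:
                break  # early exit: aclose() is still called
    return seen

assert asyncio.run(main()) == [0, 1] and closed
```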

--
components: Library (Lib)
messages: 365885
nosy: John Belmonte, njs
priority: normal
severity: normal
status: open
title: contextlib.aclosing()
type: enhancement
versions: Python 3.9




[issue40212] Re-enable posix_fallocate and posix_fadvise on AIX

2020-04-06 Thread Batuhan Taskaya


Change by Batuhan Taskaya :


--
nosy: +Michael.Felt




[issue40212] Re-enable posix_fallocate and posix_fadvise on AIX

2020-04-06 Thread Batuhan Taskaya


Change by Batuhan Taskaya :


--
keywords: +patch
pull_requests: +18765
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/19403




[issue40212] Re-enable posix_fallocate and posix_fadvise on AIX

2020-04-06 Thread Batuhan Taskaya


New submission from Batuhan Taskaya :

posix_fallocate and posix_fadvise were problematic on AIX because of a bug 
present as of 2014, which appears to have been resolved in 2016. I think we can 
safely re-enable those functions. I've tested this patch on AIX 7.2 and it 
works; the patch will also require pre-testing on the buildbots.

http://www-01.ibm.com/support/docview.wss?uid=isg1IV56170
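
For anyone verifying the re-enabled calls, a quick smoke test on a platform that exposes them (guarded with hasattr, since e.g. macOS lacks both):

```python
import os
import tempfile

with tempfile.TemporaryFile() as f:
    if hasattr(os, "posix_fallocate"):
        os.posix_fallocate(f.fileno(), 0, 4096)   # reserve 4 KiB on disk
        assert os.fstat(f.fileno()).st_size == 4096
    if hasattr(os, "posix_fadvise"):
        # declare sequential access; this is only a hint to the kernel
        os.posix_fadvise(f.fileno(), 0, 0, os.POSIX_FADV_SEQUENTIAL)
```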

--
components: Library (Lib)
messages: 365884
nosy: BTaskaya
priority: normal
severity: normal
status: open
title: Re-enable posix_fallocate and posix_fadvise on AIX
versions: Python 3.9




[issue40060] socket.TCP_NOTSENT_LOWAT is missing in official macOS builds

2020-04-06 Thread Dima Tisnek


Change by Dima Tisnek :


--
keywords: +patch
pull_requests: +18764
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/19402




Re: print small DataFrame to STDOUT and read it back into dataframe

2020-04-06 Thread Reto
On Mon, Apr 06, 2020 at 06:29:01PM -0400, Luca wrote:
> so, given a dataframe, how do I make it print itself out as CSV?

read the docs of to_csv...

> And given CSV data in my clipboard, how do I paste it into a Jupiter cell
> (possibly along with a line or two of code) that will create a dataframe out
> of it?

```python
import pandas as pd
import io
df = pd.DataFrame(data=range(10))
out = df.to_csv(None)
new = pd.read_csv(io.StringIO(out), index_col=0)
```

That'll do the trick... any other serialization format works similarly.
you can copy out to wherever, it's just csv data.

Cheers,
Reto


[issue40060] socket.TCP_NOTSENT_LOWAT is missing in official macOS builds

2020-04-06 Thread Dima Tisnek


Dima Tisnek  added the comment:

Thank you for the explanation, Ronald.

`socket.TCP_NOTSENT_LOWAT` is just a constant though, to be passed to 
`setsockopt`. 

What do you think of `ifndef ... define ...` work-around, akin to a few other 
constants in socket module?
https://github.com/python/cpython/blob/799d7d61a91eb0ad3256ef9a45a90029cef93b7c/Modules/socketmodule.h#L162-L168
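
Until the constant ships in the macOS builds, a userspace fallback is also possible. 0x201 is the value I would expect from Darwin's <netinet/tcp.h>; treat it as an assumption, and the helper name as illustrative:

```python
import socket

# Fall back to the Darwin header value if this build lacks the constant.
TCP_NOTSENT_LOWAT = getattr(socket, "TCP_NOTSENT_LOWAT", 0x201)

def limit_unsent(sock, nbytes):
    """Ask the kernel to wake writers only once fewer than nbytes remain unsent."""
    sock.setsockopt(socket.IPPROTO_TCP, TCP_NOTSENT_LOWAT, nbytes)
```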

--




[issue34951] cookielib/cookiejar cookies' Expires date parse fails with long month names

2020-04-06 Thread Liubomyr Popil


Liubomyr Popil  added the comment:

Hello,
I found this issue to be the one most closely related to a problem I 
discovered: long day names are not parsed.
According to https://tools.ietf.org/html/rfc2616#section-3.3.1:

  Sun, 06 Nov 1994 08:49:37 GMT  ; RFC 822, updated by RFC 1123
  Sunday, 06-Nov-94 08:49:37 GMT ; RFC 850, obsoleted by RFC 1036
  Sun Nov  6 08:49:37 1994   ; ANSI C's asctime() format

   HTTP/1.1 clients and servers that parse the date value MUST accept
   all three formats (for compatibility with HTTP/1.0), though they MUST
   only generate the RFC 1123 format for representing HTTP-date values
   in header fields.

The month format is parsed correctly, but the day part should accept both forms.
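
The three forms can be exercised directly with strptime; the patterns below are mine for illustration, not the ones http.cookiejar uses internally:

```python
import time

samples = [
    ("Sun, 06 Nov 1994 08:49:37 GMT", "%a, %d %b %Y %H:%M:%S GMT"),   # RFC 1123
    ("Sunday, 06-Nov-94 08:49:37 GMT", "%A, %d-%b-%y %H:%M:%S GMT"),  # RFC 850
    ("Sun Nov  6 08:49:37 1994", "%a %b %d %H:%M:%S %Y"),             # asctime()
]
for text, fmt in samples:
    parsed = time.strptime(text, fmt)
    assert (parsed.tm_year, parsed.tm_mon, parsed.tm_mday) == (1994, 11, 6)
```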

Thanks,
 - Liubomyr

--
keywords: +patch
message_count: 6.0 -> 7.0
nosy: +lpopil
nosy_count: 3.0 -> 4.0
pull_requests: +18763
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/19393




Re: Multiprocessing queue sharing and python3.8

2020-04-06 Thread Israel Brewster
> On Apr 6, 2020, at 12:19 PM, David Raymond  wrote:
> 
> Attempting reply as much for my own understanding.
> 
> Are you on Mac? I think this is the pertinent bit for you:
> Changed in version 3.8: On macOS, the spawn start method is now the default. 
> The fork start method should be considered unsafe as it can lead to crashes 
> of the subprocess. See bpo-33725.

Ahhh, yep, that would do it! Using spawn rather than fork completely explains 
all the issues I was suddenly seeing. It didn’t even occur to me that the OS I 
was running on might make a difference. And yes, forcing it back to using fork 
does 
indeed “fix” the issue. Of course, as is noted there, the fork start method 
should be considered unsafe, so I guess I get to re-architect everything I do 
using multiprocessing that relies on data-sharing between processes. The Queue 
example was just a minimum working example that illustrated the behavioral 
differences I was seeing :-) Thanks for the pointer!
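
For the record, the start method can also be selected explicitly rather than relying on the platform default; names below are illustrative:

```python
import multiprocessing as mp

def put_answer(q):
    q.put(42)

if __name__ == "__main__":
    # "spawn" is the macOS default since 3.8; "fork" remains available on
    # POSIX, with the safety caveats discussed above.
    ctx = mp.get_context("spawn")
    q = ctx.Queue()
    p = ctx.Process(target=put_answer, args=(q,))
    p.start()
    assert q.get() == 42   # the queue was passed explicitly, so this works
    p.join()
```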

---
Israel Brewster
Software Engineer
Alaska Volcano Observatory 
Geophysical Institute - UAF 
2156 Koyukuk Drive 
Fairbanks AK 99775-7320
Work: 907-474-5172
cell:  907-328-9145

> 
> When you start a new process (with the spawn method) it runs the module just 
> like it's being imported. So your global " mp_comm_queue2=mp.Queue()" creates 
> a new Queue in each process. Your initialization of mp_comm_queue is also 
> done inside the main() function, which doesn't get run in each process. So 
> each process in the Pool is going to have mp_comm_queue as None, and have its 
> own version of mp_comm_queue2. The ID being the same or different is the 
> result of one or more processes in the Pool being used repeatedly for the 
> multiple steps in imap, probably because the function that the Pool is 
> executing finishes so quickly.
> 
> Add a little extra info to the print calls (and/or set up logging to stdout 
> with the process name/id included) and you can see some of this. Here's the 
> hacked together changes I did for that.
> 
> import multiprocessing as mp
> import os
> 
> mp_comm_queue = None #Will be initalized in the main function
> mp_comm_queue2 = mp.Queue() #Test pre-initalized as well
> 
> def some_complex_function(x):
>print("proc id", os.getpid())
>print("mp_comm_queue", mp_comm_queue)
>print("queue2 id", id(mp_comm_queue2))
>mp_comm_queue2.put(x)
>print("queue size", mp_comm_queue2.qsize())
>print("x", x)
>return x * 2
> 
> def main():
>global mp_comm_queue
>#initalize the Queue
>mp_comm_queue = mp.Queue()
> 
>#Set up a pool to process a bunch of stuff in parallel
>pool = mp.Pool()
>values = range(20)
>data = pool.imap(some_complex_function, values)
> 
>for val in data:
>print(f"**{val}**")
>print("final queue2 size", mp_comm_queue2.qsize())
> 
> if __name__ == "__main__":
>main()
> 
> 
> 
> When making your own Process object and stating it then the Queue should be 
> passed into the function as an argument, yes. The error text seems to be part 
> of the Pool implementation, which I'm not as familiar with enough to know the 
> best way to handle it. (Probably something using the "initializer" and 
> "initargs" arguments for Pool)(maybe)
> 
> 
> 
> -Original Message-
> From: Python-list  > On Behalf 
> Of Israel Brewster
> Sent: Monday, April 6, 2020 1:24 PM
> To: Python mailto:python-list@python.org>>
> Subject: Multiprocessing queue sharing and python3.8
> 
> Under python 3.7 (and all previous versions I have used), the following code 
> works properly, and produces the expected output:
> 
> import multiprocessing as mp
> 
> mp_comm_queue = None #Will be initalized in the main function
> mp_comm_queue2=mp.Queue() #Test pre-initalized as well
> 
> def some_complex_function(x):
>print(id(mp_comm_queue2))
>assert(mp_comm_queue is not None)
>print(x)
>return x*2
> 
> def main():
>global mp_comm_queue
>#initalize the Queue
>mp_comm_queue=mp.Queue()
> 
>#Set up a pool to process a bunch of stuff in parallel
>pool=mp.Pool()
>values=range(20)
>data=pool.imap(some_complex_function,values)
> 
>for val in data:
>print(f"**{val}**")
> 
> if __name__=="__main__":
>main()
> 
> - mp_comm_queue2 has the same ID for all iterations of some_complex_function, 
> and the assert passes (mp_comm_queue is not None). However, under python 3.8, 
> it fails - mp_comm_queue2 is a *different* object for each iteration, and the 
> assert fails. 
> 
> So what am I doing wrong with the above example block? Assuming that it broke 
> in 3.8 because I wasn’t sharing the Queue properly, what is the proper way to 
> share a Queue object among multiple processes for the purposes of 
> inter-process communication?
> 
> The documentation 
> (https://docs.python.org/3.8/library/multiprocessing.html#exchanging-objects-between-processes
>  
> 

[issue39562] Asynchronous comprehensions don't work in asyncio REPL

2020-04-06 Thread Batuhan Taskaya


Batuhan Taskaya  added the comment:

#define IS_COMPILER_FLAG_ENABLED(c, flag) printf("%s: %d\n", #flag, c->c_flags->cf_flags & flag)

> If CO_FUTURE_DIVISION conflicts with PyCF_ALLOW_TOP_LEVEL_AWAIT, does not 
> CO_ITERABLE_COROUTINE conflict with PyCF_SOURCE_IS_UTF8 and 
> CO_ASYNC_GENERATOR with PyCF_DONT_IMPLY_DEDENT?

Yes, they do.

Compiling without anything
PyCF_SOURCE_IS_UTF8: 256
CO_ITERABLE_COROUTINE: 256
PyCF_DONT_IMPLY_DEDENT: 0
CO_ASYNC_GENERATOR: 0
Compiling with CO_ASYNC_GENERATOR
PyCF_SOURCE_IS_UTF8: 256
CO_ITERABLE_COROUTINE: 256
PyCF_DONT_IMPLY_DEDENT: 512
CO_ASYNC_GENERATOR: 512

This result is a side effect of merging the future flags with the compiler 
flags. Even if we access them via cf_flags (or the other way around, via 
ff_features), it doesn't change anything, because we merge both sets of flags 
before we start the process. 

Two ways of escaping this are changing the flags so they don't conflict with 
each other, or not merging them. Not merging is off the table because it would 
break the user-level compile() function (which takes both kinds of flags in a 
single parameter, flags). The most reasonable solution I could think of is 
making these flags not conflict with each other.
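
The clash is visible from pure Python via the values dis and ast expose; the CO_*/PyCF_* numbers below are copied from the CPython headers, so double-check them against the source:

```python
import ast
import dis

CO_ITERABLE_COROUTINE = 0x100    # Include/code.h
CO_ASYNC_GENERATOR = 0x200
PyCF_SOURCE_IS_UTF8 = 0x0100     # Include/compile.h -- same bit values!
PyCF_DONT_IMPLY_DEDENT = 0x0200

assert CO_ITERABLE_COROUTINE == PyCF_SOURCE_IS_UTF8
assert CO_ASYNC_GENERATOR == PyCF_DONT_IMPLY_DEDENT
# dis confirms which code-object flags own those bits:
assert dis.COMPILER_FLAG_NAMES[0x100] == "ITERABLE_COROUTINE"
assert dis.COMPILER_FLAG_NAMES[0x200] == "ASYNC_GENERATOR"
# ast.PyCF_ONLY_AST, for comparison, sits on its own bit:
assert ast.PyCF_ONLY_AST == 0x400
```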

--




Re: print small DataFrame to STDOUT and read it back into dataframe

2020-04-06 Thread Luca

On 4/6/2020 3:03 PM, Christian Gollwitzer wrote:

CSV is the most sensible option here. It is widely supported by 
spreadsheets etc. and easily copy/pasteable.


Thank you Christian.

so, given a dataframe, how do I make it print itself out as CSV?

And given CSV data in my clipboard, how do I paste it into a Jupiter 
cell (possibly along with a line or two of code) that will create a 
dataframe out of it?





[issue40211] Clarify preadv and pwritev is supported AIX 7.1 and newer.

2020-04-06 Thread Batuhan Taskaya


Change by Batuhan Taskaya :


--
keywords: +patch
pull_requests: +18762
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/19401




[issue40211] Clarify preadv and pwritev is supported AIX 7.1 and newer.

2020-04-06 Thread Batuhan Taskaya


New submission from Batuhan Taskaya :

preadv and pwritev are supported on AIX 7.1 and newer; it would be good to 
clarify this in the documentation.

--
assignee: docs@python
components: Documentation
messages: 365880
nosy: BTaskaya, docs@python, pablogsal
priority: normal
severity: normal
status: open
title: Clarify preadv and pwritev is supported AIX 7.1 and newer.
versions: Python 3.7, Python 3.8, Python 3.9




[issue36753] Python modules not linking to libpython causes issues for RTLD_LOCAL system-wide

2020-04-06 Thread Joshua Merchant


Change by Joshua Merchant :


--
nosy: +Joshua Merchant




Re: Multiprocessing queue sharing and python3.8

2020-04-06 Thread Israel Brewster
> On Apr 6, 2020, at 12:27 PM, David Raymond  wrote:
> 
> Looks like this will get what you need.
> 
> 
> def some_complex_function(x):
>global q
>#stuff using q
> 
> def pool_init(q2):
>global q
>q = q2
> 
> def main():
>#initalize the Queue
>mp_comm_queue = mp.Queue()
> 
>#Set up a pool to process a bunch of stuff in parallel
>pool = mp.Pool(initializer = pool_init, initargs = (mp_comm_queue,))
>...
> 
> 

Gotcha, thanks. I’ll look more into that initializer argument and see how I can 
leverage it to do multiprocessing using spawn rather than fork in the future. 
Looks straight-forward enough. Thanks again!

---
Israel Brewster
Software Engineer
Alaska Volcano Observatory 
Geophysical Institute - UAF 
2156 Koyukuk Drive 
Fairbanks AK 99775-7320
Work: 907-474-5172
cell:  907-328-9145

> 
> -Original Message-
> From: David Raymond 
> Sent: Monday, April 6, 2020 4:19 PM
> To: python-list@python.org
> Subject: RE: Multiprocessing queue sharing and python3.8
> 
> Attempting reply as much for my own understanding.
> 
> Are you on Mac? I think this is the pertinent bit for you:
> Changed in version 3.8: On macOS, the spawn start method is now the default. 
> The fork start method should be considered unsafe as it can lead to crashes 
> of the subprocess. See bpo-33725.
> 
> When you start a new process (with the spawn method) it runs the module just 
> like it's being imported. So your global " mp_comm_queue2=mp.Queue()" creates 
> a new Queue in each process. Your initialization of mp_comm_queue is also 
> done inside the main() function, which doesn't get run in each process. So 
> each process in the Pool is going to have mp_comm_queue as None, and have its 
> own version of mp_comm_queue2. The ID being the same or different is the 
> result of one or more processes in the Pool being used repeatedly for the 
> multiple steps in imap, probably because the function that the Pool is 
> executing finishes so quickly.
> 
> Add a little extra info to the print calls (and/or set up logging to stdout 
> with the process name/id included) and you can see some of this. Here's the 
> hacked together changes I did for that.
> 
> import multiprocessing as mp
> import os
> 
> mp_comm_queue = None #Will be initalized in the main function
> mp_comm_queue2 = mp.Queue() #Test pre-initalized as well
> 
> def some_complex_function(x):
>print("proc id", os.getpid())
>print("mp_comm_queue", mp_comm_queue)
>print("queue2 id", id(mp_comm_queue2))
>mp_comm_queue2.put(x)
>print("queue size", mp_comm_queue2.qsize())
>print("x", x)
>return x * 2
> 
> def main():
>global mp_comm_queue
>#initalize the Queue
>mp_comm_queue = mp.Queue()
> 
>#Set up a pool to process a bunch of stuff in parallel
>pool = mp.Pool()
>values = range(20)
>data = pool.imap(some_complex_function, values)
> 
>for val in data:
>print(f"**{val}**")
>print("final queue2 size", mp_comm_queue2.qsize())
> 
> if __name__ == "__main__":
>main()
> 
> 
> 
> When making your own Process object and stating it then the Queue should be 
> passed into the function as an argument, yes. The error text seems to be part 
> of the Pool implementation, which I'm not as familiar with enough to know the 
> best way to handle it. (Probably something using the "initializer" and 
> "initargs" arguments for Pool)(maybe)
> 
> 
> 
> -Original Message-
> From: Python-list  
> On Behalf Of Israel Brewster
> Sent: Monday, April 6, 2020 1:24 PM
> To: Python 
> Subject: Multiprocessing queue sharing and python3.8
> 
> Under python 3.7 (and all previous versions I have used), the following code 
> works properly, and produces the expected output:
> 
> import multiprocessing as mp
> 
> mp_comm_queue = None #Will be initalized in the main function
> mp_comm_queue2=mp.Queue() #Test pre-initalized as well
> 
> def some_complex_function(x):
>print(id(mp_comm_queue2))
>assert(mp_comm_queue is not None)
>print(x)
>return x*2
> 
> def main():
>global mp_comm_queue
>#initalize the Queue
>mp_comm_queue=mp.Queue()
> 
>#Set up a pool to process a bunch of stuff in parallel
>pool=mp.Pool()
>values=range(20)
>data=pool.imap(some_complex_function,values)
> 
>for val in data:
>print(f"**{val}**")
> 
> if __name__=="__main__":
>main()
> 
> - mp_comm_queue2 has the same ID for all iterations of some_complex_function, 
> and the assert passes (mp_comm_queue is not None). However, under python 3.8, 
> it fails - mp_comm_queue2 is a *different* object for each iteration, and the 
> assert fails. 
> 
> So what am I doing wrong with the above example block? Assuming that it broke 
> in 3.8 because I wasn’t sharing the Queue properly, what is the proper way to 
> share a Queue object among multiple processes for the purposes of 
> inter-process communication?
> 
> The documentation 
> 

[issue40082] Assertion failure in trip_signal

2020-04-06 Thread STINNER Victor


Change by STINNER Victor :


--
nosy: +vstinner




[issue40082] Assertion failure in trip_signal

2020-04-06 Thread neonene


neonene  added the comment:

On Windows, PyGILState_GetThisThreadState() returns NULL when a ^C interrupt 
occurs. That value comes from the TlsGetValue() Win32 API, and I don't think 
the OS's behavior is wrong. 
In trip_signal(), the crash can be avoided by skipping PyEval_SignalReceived() 
if tstate is invalid, but I'm not sure the skip itself is ok.

--
nosy: +neonene




RE: Multiprocessing queue sharing and python3.8

2020-04-06 Thread David Raymond
Looks like this will get what you need.


def some_complex_function(x):
global q
#stuff using q

def pool_init(q2):
global q
q = q2

def main():
#initalize the Queue
mp_comm_queue = mp.Queue()

#Set up a pool to process a bunch of stuff in parallel
pool = mp.Pool(initializer = pool_init, initargs = (mp_comm_queue,))
...
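
A complete, runnable version of that pattern (helper names are mine):

```python
import multiprocessing as mp

_queue = None  # set per worker by the initializer

def pool_init(q):
    global _queue
    _queue = q

def work(x):
    _queue.put(x)   # every worker received the same queue via initargs
    return x * 2

if __name__ == "__main__":
    mp_comm_queue = mp.Queue()
    with mp.Pool(2, initializer=pool_init, initargs=(mp_comm_queue,)) as pool:
        results = pool.map(work, range(5))
        # drain the queue while the workers are still alive
        items = sorted(mp_comm_queue.get() for _ in range(5))
    assert results == [0, 2, 4, 6, 8]      # map preserves input order
    assert items == [0, 1, 2, 3, 4]
```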



-Original Message-
From: David Raymond 
Sent: Monday, April 6, 2020 4:19 PM
To: python-list@python.org
Subject: RE: Multiprocessing queue sharing and python3.8

Attempting reply as much for my own understanding.

Are you on Mac? I think this is the pertinent bit for you:
Changed in version 3.8: On macOS, the spawn start method is now the default. 
The fork start method should be considered unsafe as it can lead to crashes of 
the subprocess. See bpo-33725.

When you start a new process (with the spawn method) it runs the module just 
like it's being imported. So your global " mp_comm_queue2=mp.Queue()" creates a 
new Queue in each process. Your initialization of mp_comm_queue is also done 
inside the main() function, which doesn't get run in each process. So each 
process in the Pool is going to have mp_comm_queue as None, and have its own 
version of mp_comm_queue2. The ID being the same or different is the result of 
one or more processes in the Pool being used repeatedly for the multiple steps 
in imap, probably because the function that the Pool is executing finishes so 
quickly.

Add a little extra info to the print calls (and/or set up logging to stdout 
with the process name/id included) and you can see some of this. Here's the 
hacked together changes I did for that.

import multiprocessing as mp
import os

mp_comm_queue = None #Will be initalized in the main function
mp_comm_queue2 = mp.Queue() #Test pre-initalized as well

def some_complex_function(x):
print("proc id", os.getpid())
print("mp_comm_queue", mp_comm_queue)
print("queue2 id", id(mp_comm_queue2))
mp_comm_queue2.put(x)
print("queue size", mp_comm_queue2.qsize())
print("x", x)
return x * 2

def main():
global mp_comm_queue
#initalize the Queue
mp_comm_queue = mp.Queue()

#Set up a pool to process a bunch of stuff in parallel
pool = mp.Pool()
values = range(20)
data = pool.imap(some_complex_function, values)

for val in data:
print(f"**{val}**")
print("final queue2 size", mp_comm_queue2.qsize())

if __name__ == "__main__":
main()



When making your own Process object and stating it then the Queue should be 
passed into the function as an argument, yes. The error text seems to be part 
of the Pool implementation, which I'm not as familiar with enough to know the 
best way to handle it. (Probably something using the "initializer" and 
"initargs" arguments for Pool)(maybe)



-Original Message-
From: Python-list  On 
Behalf Of Israel Brewster
Sent: Monday, April 6, 2020 1:24 PM
To: Python 
Subject: Multiprocessing queue sharing and python3.8

Under python 3.7 (and all previous versions I have used), the following code 
works properly, and produces the expected output:

import multiprocessing as mp

mp_comm_queue = None #Will be initalized in the main function
mp_comm_queue2=mp.Queue() #Test pre-initalized as well

def some_complex_function(x):
print(id(mp_comm_queue2))
assert(mp_comm_queue is not None)
print(x)
return x*2

def main():
global mp_comm_queue
#initalize the Queue
mp_comm_queue=mp.Queue()

#Set up a pool to process a bunch of stuff in parallel
pool=mp.Pool()
values=range(20)
data=pool.imap(some_complex_function,values)

for val in data:
print(f"**{val}**")

if __name__=="__main__":
main()

- mp_comm_queue2 has the same ID for all iterations of some_complex_function, 
and the assert passes (mp_comm_queue is not None). However, under python 3.8, 
it fails - mp_comm_queue2 is a *different* object for each iteration, and the 
assert fails. 

So what am I doing wrong with the above example block? Assuming that it broke 
in 3.8 because I wasn’t sharing the Queue properly, what is the proper way to 
share a Queue object among multiple processes for the purposes of inter-process 
communication?

The documentation 
(https://docs.python.org/3.8/library/multiprocessing.html#exchanging-objects-between-processes
 
)
 appears to indicate that I should pass the queue as an argument to the 
function to be executed in parallel, however that fails as well (on ALL 
versions of python I have tried) with the error:

Traceback (most recent call last):
  File "test_multi.py", line 32, in 
main()
  File "test_multi.py", line 28, in main
for val in data:
  File 
"/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/pool.py",
 line 748, in next
raise value
  File 

RE: Multiprocessing queue sharing and python3.8

2020-04-06 Thread David Raymond
Attempting reply as much for my own understanding.

Are you on Mac? I think this is the pertinent bit for you:
Changed in version 3.8: On macOS, the spawn start method is now the default. 
The fork start method should be considered unsafe as it can lead to crashes of 
the subprocess. See bpo-33725.

When you start a new process (with the spawn method) it runs the module just 
like it's being imported. So your global " mp_comm_queue2=mp.Queue()" creates a 
new Queue in each process. Your initialization of mp_comm_queue is also done 
inside the main() function, which doesn't get run in each process. So each 
process in the Pool is going to have mp_comm_queue as None, and have its own 
version of mp_comm_queue2. The ID being the same or different is the result of 
one or more processes in the Pool being used repeatedly for the multiple steps 
in imap, probably because the function that the Pool is executing finishes so 
quickly.

Add a little extra info to the print calls (and/or set up logging to stdout 
with the process name/id included) and you can see some of this. Here's the 
hacked together changes I did for that.

import multiprocessing as mp
import os

mp_comm_queue = None #Will be initalized in the main function
mp_comm_queue2 = mp.Queue() #Test pre-initalized as well

def some_complex_function(x):
print("proc id", os.getpid())
print("mp_comm_queue", mp_comm_queue)
print("queue2 id", id(mp_comm_queue2))
mp_comm_queue2.put(x)
print("queue size", mp_comm_queue2.qsize())
print("x", x)
return x * 2

def main():
global mp_comm_queue
#initalize the Queue
mp_comm_queue = mp.Queue()

#Set up a pool to process a bunch of stuff in parallel
pool = mp.Pool()
values = range(20)
data = pool.imap(some_complex_function, values)

for val in data:
print(f"**{val}**")
print("final queue2 size", mp_comm_queue2.qsize())

if __name__ == "__main__":
main()



When making your own Process object and starting it, then yes, the Queue should 
be passed into the function as an argument. The error text seems to be part 
of the Pool implementation, which I'm not familiar enough with to know the 
best way to handle it. (Probably something using the "initializer" and 
"initargs" arguments for Pool, maybe.)
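That initializer/initargs idea might look like the following sketch (hypothetical helper names; the queue is handed to each worker once at process start, which is allowed even though a Queue cannot be pickled into a task):

```python
import multiprocessing as mp

def _init_worker(q):
    # Runs once in every worker process: stash the queue in a module global.
    global mp_comm_queue
    mp_comm_queue = q

def some_complex_function(x):
    mp_comm_queue.put(x)  # works under both "fork" and "spawn" start methods
    return x * 2

def run(n=5):
    queue = mp.Queue()
    pool = mp.Pool(2, initializer=_init_worker, initargs=(queue,))
    results = list(pool.imap(some_complex_function, range(n)))
    pool.close()
    pool.join()  # wait for workers so every put() has been flushed
    received = sorted(queue.get() for _ in range(n))
    return results, received

if __name__ == "__main__":
    print(run())
```

The key point is that the queue travels through process creation (where pickling a Queue is permitted) rather than through the task pipe (where it is not).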



-Original Message-
From: Python-list  On 
Behalf Of Israel Brewster
Sent: Monday, April 6, 2020 1:24 PM
To: Python 
Subject: Multiprocessing queue sharing and python3.8

Under python 3.7 (and all previous versions I have used), the following code 
works properly, and produces the expected output:

import multiprocessing as mp

mp_comm_queue = None #Will be initalized in the main function
mp_comm_queue2=mp.Queue() #Test pre-initalized as well

def some_complex_function(x):
print(id(mp_comm_queue2))
assert(mp_comm_queue is not None)
print(x)
return x*2

def main():
global mp_comm_queue
#initalize the Queue
mp_comm_queue=mp.Queue()

#Set up a pool to process a bunch of stuff in parallel
pool=mp.Pool()
values=range(20)
data=pool.imap(some_complex_function,values)

for val in data:
print(f"**{val}**")

if __name__=="__main__":
main()

- mp_comm_queue2 has the same ID for all iterations of some_complex_function, 
and the assert passes (mp_comm_queue is not None). However, under python 3.8, 
it fails - mp_comm_queue2 is a *different* object for each iteration, and the 
assert fails. 

So what am I doing wrong with the above example block? Assuming that it broke 
in 3.8 because I wasn’t sharing the Queue properly, what is the proper way to 
share a Queue object among multiple processes for the purposes of inter-process 
communication?

The documentation 
(https://docs.python.org/3.8/library/multiprocessing.html#exchanging-objects-between-processes
 
)
 appears to indicate that I should pass the queue as an argument to the 
function to be executed in parallel, however that fails as well (on ALL 
versions of python I have tried) with the error:

Traceback (most recent call last):
  File "test_multi.py", line 32, in <module>
main()
  File "test_multi.py", line 28, in main
for val in data:
  File 
"/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/pool.py",
 line 748, in next
raise value
  File 
"/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/pool.py",
 line 431, in _handle_tasks
put(task)
  File 
"/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/connection.py",
 line 206, in send
self._send_bytes(_ForkingPickler.dumps(obj))
  File 
"/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/reduction.py",
 line 51, in dumps
cls(buf, protocol).dump(obj)
  File 

[issue40210] ttk.Combobox focus-out event inheritage

2020-04-06 Thread Nikolai Ehrhardt


New submission from Nikolai Ehrhardt :

Hi Guys,

I'm spawning entry fields in a treeview to make values editable at runtime. My 
code piece:

The edit method is bound to left click:

def edit(self, event):
region = self.identify_region(event.x, event.y)
if region == 'cell':
# the user clicked on a cell

def enter(event):
self.set(item, column, entry.get())
entry.destroy()

column = self.identify_column(event.x)  # identify column
item = self.identify_row(event.y)  # identify item
self.parent_split = self.parent(item).split("__", 1)
print(column, item, self.parent_split)
if not tree_const.isEntryField(self.parent_split, column) or 
len(self.get_children(item)):
return

x, y, width, height = self.bbox(item, column) 
value = self.set(item, column)
entry = None
if tree_const.tc_index_column_map[column] == tree_const.col_op:
entry = ttk.Combobox(self, state="readonly", 
values=tree_const.combo_ops)
entry.bind('<<ComboboxSelected>>', enter) 
entry.set(value)
else:
entry = ttk.Entry(self) 
entry.insert(0, value)  # put former value in entry
entry.bind('<Return>', enter)  # validate with Enter 

entry.bind('<FocusOut>', enter)
# display the Entry   
# create edition entry
entry.place(x=x, y=y, width=width, height=height,
anchor='nw')  # display entry on top of cell
entry.focus_set()

And now the problem: the entries are not properly destroyed when focusing out, 
so I assume that the button created for the combobox is not properly connected 
to the focus-out event. On the other hand, if the button simply inherited the 
binding, the combobox would disappear while a value was still being selected.

So some logic is missing for destroying the widget properly on focus out.

--
files: treeview_spawn_entries.png
messages: 365878
nosy: Nikolai Ehrhardt
priority: normal
severity: normal
status: open
title: ttk.Combobox focus-out event inheritage
type: behavior
versions: Python 3.5
Added file: https://bugs.python.org/file49041/treeview_spawn_entries.png

___
Python tracker 

___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue40209] read_pyfile function refactor in Lib/test/test_unparse.py

2020-04-06 Thread Serhiy Storchaka


Serhiy Storchaka  added the comment:

You can just use open() in binary mode.

--
nosy: +serhiy.storchaka




[issue40204] Docs build error with Sphinx 3.0 due to invalid C declaration

2020-04-06 Thread Serhiy Storchaka


Serhiy Storchaka  added the comment:

Maybe copy the code for deprecated and removed features to Doc/tools/extensions?

--
nosy: +serhiy.storchaka




Re: print small DataFrame to STDOUT and read it back into dataframe

2020-04-06 Thread Christian Gollwitzer

Am 06.04.20 um 17:17 schrieb Luca:

On 4/6/2020 4:08 AM, Reto wrote:

Does this help?


Thank you, but not really. What I am trying to achieve is to have a way 
to copy and paste small yet complete dataframes (which may be the result 
of previous calculations) between a document (TXT, Word, GoogleDoc) and 
Jupyter/IPython.


Did I make sense?



CSV is the most sensible option here. It is widely supported by 
spreadsheets etc. and easily copy/pasteable.


Christian
--
https://mail.python.org/mailman/listinfo/python-list
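The copy/paste workflow Christian describes can be sketched with just the standard library; pandas' `DataFrame.to_csv()` and `read_csv(io.StringIO(...))` follow the same pattern (the row data here is made up for illustration):

```python
import csv
import io

# A small "dataframe" as rows of dicts.
rows = [{"name": "a", "value": "1"}, {"name": "b", "value": "2"}]

# Serialize to CSV text that can be pasted into any document.
buf = io.StringIO()
writer = csv.DictWriter(buf, fieldnames=["name", "value"])
writer.writeheader()
writer.writerows(rows)
text = buf.getvalue()

# Read the pasted text back into the same structure.
back = list(csv.DictReader(io.StringIO(text)))
print(back == rows)  # True
```

Note that CSV round-trips everything as strings; numeric columns need converting back (pandas does this inference for you).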


Introduction to PyLiveUpdate: A runtime python code manipulation framework

2020-04-06 Thread 0xcc
Hi everyone,

I would like to introduce PyLiveUpdate 
(https://github.com/devopspp/pyliveupdate),  
a tool that helps you modify your running Python code without stopping and 
restarting it. This is helpful when you want to add some code (like a print 
for debugging) or modify a function definition (like fixing a bug) in a 
long-running program (like a web server). I'm now seeking feedback on this. 
Any comments are welcome and appreciated!

Best,
CC


[issue40209] read_pyfile function refactor in Lib/test/test_unparse.py

2020-04-06 Thread Hakan


Change by Hakan :


--
keywords: +patch
pull_requests: +18761
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/19399




[issue40209] read_pyfile function refactor in Lib/test/test_unparse.py

2020-04-06 Thread Hakan


New submission from Hakan :

The read_pyfile function can be written more effectively with the open function 
in the tokenize module.
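A sketch of what that refactor presumably looks like (hypothetical; `tokenize.open()` opens a Python source file read-only, honoring its PEP 263 coding cookie, which the existing helper detects by hand):

```python
import tokenize

def read_pyfile(filename):
    """Read a Python source file, honoring its encoding declaration."""
    with tokenize.open(filename) as f:  # detects the coding cookie for us
        return f.read()
```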

--
components: Tests
messages: 365875
nosy: hakancelik
priority: normal
severity: normal
status: open
title: read_pyfile function refactor in Lib/test/test_unparse.py
versions: Python 3.9




[issue35212] Expressions with format specifiers in f-strings give wrong code position in AST

2020-04-06 Thread yang


Change by yang :


--
keywords: +patch
nosy: +fhsxfhsx
nosy_count: 2.0 -> 3.0
pull_requests: +18760
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/19398




[issue40204] Docs build error with Sphinx 3.0 due to invalid C declaration

2020-04-06 Thread Roundup Robot


Change by Roundup Robot :


--
keywords: +patch
nosy: +python-dev
nosy_count: 8.0 -> 9.0
pull_requests: +18759
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/19397




Multiprocessing queue sharing and python3.8

2020-04-06 Thread Israel Brewster
Under python 3.7 (and all previous versions I have used), the following code 
works properly, and produces the expected output:

import multiprocessing as mp

mp_comm_queue = None #Will be initalized in the main function
mp_comm_queue2=mp.Queue() #Test pre-initalized as well

def some_complex_function(x):
print(id(mp_comm_queue2))
assert(mp_comm_queue is not None)
print(x)
return x*2

def main():
global mp_comm_queue
#initalize the Queue
mp_comm_queue=mp.Queue()

#Set up a pool to process a bunch of stuff in parallel
pool=mp.Pool()
values=range(20)
data=pool.imap(some_complex_function,values)

for val in data:
print(f"**{val}**")

if __name__=="__main__":
main()

- mp_comm_queue2 has the same ID for all iterations of some_complex_function, 
and the assert passes (mp_comm_queue is not None). However, under python 3.8, 
it fails - mp_comm_queue2 is a *different* object for each iteration, and the 
assert fails. 

So what am I doing wrong with the above example block? Assuming that it broke 
in 3.8 because I wasn’t sharing the Queue properly, what is the proper way to 
share a Queue object among multiple processes for the purposes of inter-process 
communication?

The documentation 
(https://docs.python.org/3.8/library/multiprocessing.html#exchanging-objects-between-processes
 
)
 appears to indicate that I should pass the queue as an argument to the 
function to be executed in parallel, however that fails as well (on ALL 
versions of python I have tried) with the error:

Traceback (most recent call last):
  File "test_multi.py", line 32, in <module>
main()
  File "test_multi.py", line 28, in main
for val in data:
  File 
"/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/pool.py",
 line 748, in next
raise value
  File 
"/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/pool.py",
 line 431, in _handle_tasks
put(task)
  File 
"/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/connection.py",
 line 206, in send
self._send_bytes(_ForkingPickler.dumps(obj))
  File 
"/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/reduction.py",
 line 51, in dumps
cls(buf, protocol).dump(obj)
  File 
"/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/queues.py",
 line 58, in __getstate__
context.assert_spawning(self)
  File 
"/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/multiprocessing/context.py",
 line 356, in assert_spawning
' through inheritance' % type(obj).__name__
RuntimeError: Queue objects should only be shared between processes through 
inheritance

after I add the following to the code to try passing the queue rather than 
having it global:

#Try by passing queue
values=[(x,mp_comm_queue) for x in range(20)]
data=pool.imap(some_complex_function,values)
for val in data:
print(f"**{val}**")   

So if I can’t pass it as an argument, and having it global is incorrect (at 
least starting with 3.8), what is the proper method of getting multiprocessing 
queues to child processes?
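One way that does let you pass the queue as an argument (a sketch, not the only option): a queue proxy obtained from a `multiprocessing.Manager` can be pickled, so it survives the trip through the Pool's task queue:

```python
import multiprocessing as mp

def work(args):
    x, q = args      # q is a picklable manager proxy, not a raw Queue
    q.put(x)
    return x * 2

def run(n=4):
    with mp.Manager() as manager:
        q = manager.Queue()
        pool = mp.Pool(2)
        results = list(pool.imap(work, [(i, q) for i in range(n)]))
        pool.close()
        pool.join()
        received = sorted(q.get() for _ in range(n))
    return results, received

if __name__ == "__main__":
    print(run())
```

The trade-off is that every put/get goes through the manager's server process, which is slower than a raw mp.Queue inherited at process start.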

---
Israel Brewster
Software Engineer
Alaska Volcano Observatory 
Geophysical Institute - UAF 
2156 Koyukuk Drive 
Fairbanks AK 99775-7320
Work: 907-474-5172
cell:  907-328-9145



[issue40208] Remove deprecated SymbolTable.has_exec

2020-04-06 Thread Batuhan Taskaya


Change by Batuhan Taskaya :


--
keywords: +patch
pull_requests: +18758
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/19396




[issue40208] Remove deprecated SymbolTable.has_exec

2020-04-06 Thread Batuhan Taskaya


New submission from Batuhan Taskaya :

SymbolTable's has_exec method was deprecated 14 years ago (2006) in 
2def557aba1aaa42b638f9bf95624b7e6929191c; it can be safely removed since there 
are no users of it.

--
components: Library (Lib)
messages: 365874
nosy: BTaskaya
priority: normal
severity: normal
status: open
title: Remove deprecated SymbolTable.has_exec
versions: Python 3.9




[issue40170] [C API] Make PyTypeObject structure an opaque structure in the public C API

2020-04-06 Thread STINNER Victor


STINNER Victor  added the comment:

> Just checked - seems to be SPECIFIC to xlc-v16 as neither xlv-v11 nor xlc-v13 
> have any issues building.

That sounds like an AIX-specific issue. Please open a separate issue.

--




[issue40204] Docs build error with Sphinx 3.0 due to invalid C declaration

2020-04-06 Thread STINNER Victor


STINNER Victor  added the comment:

It sounds dangerous to not pin the Sphinx version in our CI :-/ Another issue 
caused by CI configuration being stored in the same place as the code:
https://mail.python.org/archives/list/python-committ...@python.org/thread/WEU5CQKIA4LIHWHT53YA7HHNUY5H2FUT/

--




[issue40196] symtable.Symbol.is_local() can be True for global symbols

2020-04-06 Thread Pablo Galindo Salgado


Change by Pablo Galindo Salgado :


--
resolution:  -> fixed
stage: patch review -> resolved
status: open -> closed




[issue40196] symtable.Symbol.is_local() can be True for global symbols

2020-04-06 Thread miss-islington


miss-islington  added the comment:


New changeset 717f1668b3455b498424577e194719f9beae13a1 by Miss Islington (bot) 
in branch '3.7':
bpo-40196: Fix a bug in the symtable when reporting inspecting global variables 
(GH-19391)
https://github.com/python/cpython/commit/717f1668b3455b498424577e194719f9beae13a1


--




[issue40196] symtable.Symbol.is_local() can be True for global symbols

2020-04-06 Thread Pablo Galindo Salgado


Pablo Galindo Salgado  added the comment:


New changeset 8bd84e7f79a6cc7670a89a92edba3015aa781758 by Miss Islington (bot) 
in branch '3.8':
bpo-40196: Fix a bug in the symtable when reporting inspecting global variables 
(GH-19391) (GH-19394)
https://github.com/python/cpython/commit/8bd84e7f79a6cc7670a89a92edba3015aa781758


--




[issue40206] Multiplying 4.6*100 will result in 459.99999999999994

2020-04-06 Thread Rémi Lapeyre

Rémi Lapeyre  added the comment:

@Steven Yes that's true, I only meant that in the context of the issue where 
only the multiplication is used. FWIW Fraction also would have issues with e.g. 
trigonometric functions right?


@ahmad, that's because you did Decimal(4.6), which first parses 4.6 as a float 
and then calls Decimal() with the result. You need to use Decimal('4.6') to 
avoid the parser reading 4.6 as a float. Have a look at the tutorial Eric Smith 
linked, the documentation of decimal, and the response from Steven D'Aprano for 
more information.

--




[issue40196] symtable.Symbol.is_local() can be True for global symbols

2020-04-06 Thread miss-islington


Change by miss-islington :


--
pull_requests: +18757
pull_request: https://github.com/python/cpython/pull/19395




[issue40196] symtable.Symbol.is_local() can be True for global symbols

2020-04-06 Thread miss-islington


Change by miss-islington :


--
nosy: +miss-islington
nosy_count: 2.0 -> 3.0
pull_requests: +18756
pull_request: https://github.com/python/cpython/pull/19394




[issue40196] symtable.Symbol.is_local() can be True for global symbols

2020-04-06 Thread Pablo Galindo Salgado


Pablo Galindo Salgado  added the comment:


New changeset 799d7d61a91eb0ad3256ef9a45a90029cef93b7c by Pablo Galindo in 
branch 'master':
bpo-40196: Fix a bug in the symtable when reporting inspecting global variables 
(GH-19391)
https://github.com/python/cpython/commit/799d7d61a91eb0ad3256ef9a45a90029cef93b7c


--




[issue40206] Multiplying 4.6*100 will result in 459.99999999999994

2020-04-06 Thread ahmad dana


ahmad dana  added the comment:

Regarding the comment about decimal-point precision and solving the issue with 
the decimal library: the following attachment shows that the decimal library 
exhibited exactly the same behaviour.

--
Added file: https://bugs.python.org/file49040/Screen Shot 2020-04-06 at 6.56.48 
PM.png




[issue40207] Expose NCURSES_EXT_FUNCS under curses

2020-04-06 Thread Batuhan Taskaya


Change by Batuhan Taskaya :


--
keywords: +patch
pull_requests: +18755
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/19392




[issue40207] Expose NCURSES_EXT_FUNCS under curses

2020-04-06 Thread Batuhan Taskaya


New submission from Batuhan Taskaya :

NCURSES_EXT_FUNCS defines the extension version number which is needed to 
determine if certain functions exist or not.

--
components: Library (Lib)
messages: 365866
nosy: BTaskaya
priority: normal
severity: normal
status: open
title: Expose NCURSES_EXT_FUNCS under curses
versions: Python 3.9




[issue40206] Multiplying 4.6*100 will result in 459.99999999999994

2020-04-06 Thread Steven D'Aprano

Steven D'Aprano  added the comment:

Rémi, it is not true that the Decimal module won't lose precision. It will. 
Decimal is not exact either, it is still a floating point format similar to 
float.

py> Decimal(1)/3*3
Decimal('0.9999999999999999999999999999')

The two major advantages of Decimal are: it matches the number you type more 
closely, and you can choose how much precision to use. (At the cost of memory 
and speed.) But there are also disadvantages: rounding errors with Decimal tend 
to be larger on average than for binary floats.

If you want exact calculations that will never lose precision, you have to use 
Fractions, but that is slower and less convenient.
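A short illustration of the difference Steven describes, using only the standard library (the Decimal result assumes the default 28-digit context):

```python
from decimal import Decimal
from fractions import Fraction

# Decimal is still floating point: dividing by 3 must round somewhere.
print(Decimal(1) / 3 * 3)        # 0.9999999999999999999999999999

# Fraction stays exact, at the cost of speed and convenience.
print(Fraction(1, 3) * 3)        # 1
print(Fraction('4.6') * 100)     # 460
```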

--
nosy: +steven.daprano




[issue40206] Multiplying 4.6*100 will result in 459.99999999999994

2020-04-06 Thread Eric V. Smith


Eric V. Smith  added the comment:

See https://docs.python.org/3.8/tutorial/floatingpoint.html

--
nosy: +eric.smith
resolution:  -> not a bug
stage:  -> resolved
status: open -> closed




[issue40170] [C API] Make PyTypeObject structure an opaque structure in the public C API

2020-04-06 Thread STINNER Victor


STINNER Victor  added the comment:

> It should use PyType_GetSlot()

Oh. It seems like currently, PyType_GetSlot() can only be used on 
heap-allocated types :-( The function starts with:

if (!PyType_HasFeature(type, Py_TPFLAGS_HEAPTYPE) || slot < 0) {
PyErr_BadInternalCall();
return NULL;
}

--




[issue40170] [C API] Make PyTypeObject structure an opaque structure in the public C API

2020-04-06 Thread STINNER Victor


STINNER Victor  added the comment:

Py_TRASHCAN_BEGIN() accesses PyTypeObject.tp_dealloc directly:

#define Py_TRASHCAN_BEGIN(op, dealloc) \
Py_TRASHCAN_BEGIN_CONDITION(op, \
Py_TYPE(op)->tp_dealloc == (destructor)(dealloc))

It should use PyType_GetSlot() or a new getter function (to read 
PyTypeObject.tp_dealloc) should be added.

--




[issue40206] Multiplying 4.6*100 will result in 459.99999999999994

2020-04-06 Thread Rémi Lapeyre

Rémi Lapeyre  added the comment:

Hi ahmad, calculation with floating-point numbers in Python uses the IEEE 754 
(https://fr.wikipedia.org/wiki/IEEE_754) standard and will result in such 
quirks.

If you want to not lose precision you can use the decimal module:

>>> from decimal import Decimal
>>> Decimal('4.6')*100
Decimal('460.0')

Since this is not a bug, if you have other questions when working with floats, 
try asking on python-list or a forum.
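For completeness, the usual float-side options (rounding for display, math.isclose for comparison) next to the Decimal one:

```python
import math
from decimal import Decimal

print(4.6 * 100)                     # 459.99999999999994 (binary float artifact)
print(round(4.6 * 100, 2))           # 460.0 (round for display)
print(math.isclose(4.6 * 100, 460))  # True (compare with a tolerance)
print(Decimal('4.6') * 100)          # 460.0 (decimal arithmetic)
```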

--
nosy: +remi.lapeyre




Re: print small DataFrame to STDOUT and read it back into dataframe

2020-04-06 Thread Luca

On 4/6/2020 4:08 AM, Reto wrote:

Does this help?


Thank you, but not really. What I am trying to achieve is to have a way 
to copy and paste small yet complete dataframes (which may be the result 
of previous calculations) between a document (TXT, Word, GoogleDoc) and 
Jupyter/IPython.


Did I make sense?

Thanks


[issue40206] Multiplying 4.6*100 will result in 459.99999999999994

2020-04-06 Thread ahmad dana


New submission from ahmad dana :

While we were using Python 3.7 to do some number multiplication, we faced an 
issue with multiplying 4.6*100, which led to strange output: the result was 
459.99999999999994, while it should be something like 460.0

--
messages: 365860
nosy: ahmad dana
priority: normal
severity: normal
status: open
title: Multiplying 4.6*100 will result in 459.99999999999994
versions: Python 3.7




[RELEASE] Python 2.7.18 release candidate 1

2020-04-06 Thread Benjamin Peterson
Greetings,
2.7.18 release candidate 1, a testing release for the last release of the 
Python 2.7 series, is now available for download. The CPython core developers 
stopped applying routine bugfixes to the 2.7 branch on January 1. 2.7.18 will 
include fixes that were made between the release of 2.7.17 and the end of 
2019. A final—very final!—release is expected in 2 weeks.

Downloads are at:

   https://www.python.org/downloads/release/python-2718rc1/

The full changelog is at

   
https://raw.githubusercontent.com/python/cpython/v2.7.18rc1/Misc/NEWS.d/2.7.18rc1.rst

Test it out, and let us know if there are any critical problems at

https://bugs.python.org/

(This is the last chance!)

All the best,
Benjamin
--
Python-announce-list mailing list -- python-announce-list@python.org
To unsubscribe send an email to python-announce-list-le...@python.org
https://mail.python.org/mailman3/lists/python-announce-list.python.org/

Support the Python Software Foundation:
http://www.python.org/psf/donations/


[RELEASE] Python 2.7.18 release candidate 1

2020-04-06 Thread Benjamin Peterson
Greetings,
2.7.18 release candidate 1, a testing release for the last release of the 
Python 2.7 series, is now available for download. The CPython core developers 
stopped applying routine bugfixes to the 2.7 branch on January 1. 2.7.18 will 
include fixes that were made between the release of 2.7.17 and the end of 
2019. A final—very final!—release is expected in 2 weeks.

Downloads are at:

   https://www.python.org/downloads/release/python-2718rc1/

The full changelog is at

   
https://raw.githubusercontent.com/python/cpython/v2.7.18rc1/Misc/NEWS.d/2.7.18rc1.rst

Test it out, and let us know if there are any critical problems at

https://bugs.python.org/

(This is the last chance!)

All the best,
Benjamin


[issue40205] Profile 'builtins' parameter documentation missing

2020-04-06 Thread Bar Harel


New submission from Bar Harel :

Profile and cProfile's documentation does not say anything about the builtins 
parameter.
Also, it exists only on cProfile, which means Profile is not a drop-in 
replacement.
Lastly, enable() method, that exists on cProfile, also accepts params, and are 
undocumented.

--
assignee: docs@python
components: Documentation, Library (Lib)
messages: 365859
nosy: bar.harel, docs@python
priority: normal
severity: normal
status: open
title: Profile 'builtins' parameter documentation missing
versions: Python 3.7, Python 3.8, Python 3.9




[issue40204] Docs build error with Sphinx 3.0 due to invalid C declaration

2020-04-06 Thread Karthikeyan Singaravelan


New submission from Karthikeyan Singaravelan :

The following error is raised in the Docs build for a 3.8 backport, since 
Sphinx is run with warnings treated as errors. Sphinx released 3.0 on April 6. 
The last successful build on master uses Sphinx 2.2.0 [0]. My guess is that the 
new Sphinx version is breaking the build on Python 3.8, where the version is 
not pinned to 2.2.0, so the latest release is pulled in. The Sphinx changelog 
has the note below:

https://www.sphinx-doc.org/en/master/changes.html#release-3-0-0-released-apr-06-2020
 

The C domain has been rewritten, with additional directives and roles. The 
existing ones are now more strict, resulting in new warnings.

Python 3.8 and Python 3.7 doesn't have Sphinx pinned to 2.2.0 while master does.

Python 3.8 Docs makefile : 
https://github.com/python/cpython/blob/f7b0259d0d243a71d79a3fda9ec7aad4306513eb/Doc/Makefile#L146

Failed build :

https://github.com/python/cpython/pull/19388/checks?check_run_id=563053793#step:7:46

Error :

Warning, treated as error:
/home/runner/work/cpython/cpython/Doc/c-api/buffer.rst:92:Error in declarator 
or parameters
Invalid C declaration: Expected identifier in nested name. [error at 5]
  void \*buf
  -^
Makefile:49: recipe for target 'build' failed
make[1]: *** [build] Error 2


[0] https://github.com/python/cpython/runs/564123625#step:6:24

--
assignee: docs@python
components: Documentation
messages: 365858
nosy: docs@python, eric.araujo, ezio.melotti, mdk, rhettinger, vstinner, 
willingc, xtreak
priority: normal
severity: normal
status: open
title: Docs build error with Sphinx 3.0 due to invalid C declaration
type: behavior
versions: Python 3.8




[issue40203] Warn about invalid PYTHONUSERBASE

2020-04-06 Thread Volker Weißmann

Volker Weißmann  added the comment:

Forget the thing I said about "invalid//path", but my argument still stands for 
non-existing paths or paths to something other than a directory.

--




[issue40188] Azure Pipelines jobs failing randomly with: Unable to connect to azure.archive.ubuntu.com

2020-04-06 Thread Steve Dower


Steve Dower  added the comment:

I wonder why the "install wamerican" didn't go into the script? It should at 
least get the same options as in the script to make sure it doesn't break the 
install.

Maybe we should make our own mirror of Ubuntu so that we don't have to depend 
on a massive company with billions of users to get it right for us? :o)

--




[issue40188] Azure Pipelines jobs failing randomly with: Unable to connect to azure.archive.ubuntu.com

2020-04-06 Thread Steve Dower


Change by Steve Dower :


--
pull_requests:  -18743




[issue40196] symtable.Symbol.is_local() can be True for global symbols

2020-04-06 Thread Pablo Galindo Salgado


Pablo Galindo Salgado  added the comment:

> In symtable.Function.get_locals() symbols with scopes in (LOCAL, CELL) are 
> selected.

Thanks for pointing that out. I will simplify PR 19391.

--




[issue40196] symtable.Symbol.is_local() can be True for global symbols

2020-04-06 Thread Pablo Galindo Salgado


Change by Pablo Galindo Salgado :


--
Removed message: https://bugs.python.org/msg365854




[issue40196] symtable.Symbol.is_local() can be True for global symbols

2020-04-06 Thread Pablo Galindo Salgado


Pablo Galindo Salgado  added the comment:

I prefer to explicitly check for absence of the global scope as in PR 19391

--




Roundup issue tracker 2.0.0beta0 release

2020-04-06 Thread John P. Rouillard
Hello All:

I'm proud to release version 2.0.0beta0 of the Roundup issue tracker
which has been possible due to the help of several contributors. This
release contains some major changes, so make sure to read
docs/upgrading.txt to bring
your tracker up to date. The changes, as usual, include some new
features and many bug fixes.

You can download it with:

   pip download roundup==2.0.0beta0

then unpack and test/install the tarball.

Among the notable improvements from the 1.6.1 release are:

   Roundup is multilingual and will run under either Python 3 or
   Python 2. If you want to use Python 3, you *must read* the Python 3
   Support section in the upgrading doc. Depending on the database
   backend you may have to export/import the tracker. Also you will
   need to make sure your tracker's Python code is Python 3
   compliant. Thanks to Joseph Myers with help from Christof Meerwald.

   Roundup has a rest API to go along with the existing xmlrpc
   API. See doc/rest.txt for details on configuring, authorizing
   access (per role) and making a request. Thanks to Ralf
   Schlatterbeck who integrated and updated Chau Nguyen's GSOC code.
   
   PGP encryption is now done using the gpg module and not the
   obsolete pyme library. Thanks to Christof Meerwald.

   Use of mod_python is deprecated. Apache mod_wsgi is now the
   preferred deployment mechanism; its documentation has been updated,
   along with that for gunicorn and uwsgi.

   jinja templates updated to bootstrap 4.4.1. Templates use
   autoescape and translation library. Support for messages
   written in markdown added. SimpleMDE used as markdown editor to
   provide preview features. Thanks to Christof Meerwald.
   
The file CHANGES.txt has a detailed list of feature additions and bug
fixes for each release. The most recent changes from there are at the
end of this announcement.  Also see the information in
doc/upgrading.txt.

How You Can Help
================

We are looking for one or two front end developers to kick the tires
on the rest interface. The rest interface is available by running
demo.py as described below. If you are interested in helping please
contact "rouilj+rit at ieee.org".

The Zope deployment mode has not had any testing under Python 3. We
are looking for community involvement to help get this deployment
mode validated. It may also have issues under Python 2. If you are
interested in helping with this please see:
https://issues.roundup-tracker.org/issue2141835

Email input using POP and IMAP modes needs testing under Python 3
and Python 2.

We have new documentation for deploying with apache and mod_wsgi. It
needs testing and enhancement.

There are other documentation issues at:


https://issues.roundup-tracker.org/issue?@columns=title,id,activity,status=7=-1,1,2&@template=index&@action=search

If you find bugs, please report them to issues AT roundup-tracker.org
or create an account at https://issues.roundup-tracker.org and open a
new ticket. If you have patches to fix the issues they can be attached
to the email or uploaded to the tracker.

Upgrading
=========

If you're upgrading from an older version of Roundup you *must* follow
all the "Software Upgrade" guidelines given in the doc/upgrading.txt
documentation.

Roundup requires Python 2 newer than version 2.7.2 or Python 3 newer
than or equal to version 3.4 for correct operation.

The wsgi, server and cgi web deployment modes are the ones with the
most testing.

To give Roundup a try, just download (see below), unpack and run::

python demo.py

Release info and download page:
 https://pypi.org/project/roundup
Source and documentation is available at the website:
 http://roundup-tracker.org/
Mailing lists - the place to ask questions:
 https://sourceforge.net/p/roundup/mailman/


About Roundup
=============

Roundup is a simple-to-use and install issue-tracking system with
command-line, web and e-mail interfaces. It is based on the winning design
from Ka-Ping Yee in the Software Carpentry "Track" design competition.

Note: Ping is not responsible for this project. The contact for this
project is rich...@users.sourceforge.net.

Roundup manages a number of issues (with flexible properties such as
"description", "priority", and so on) and provides the ability to:

(a) submit new issues,
(b) find and edit existing issues, and
(c) discuss issues with other participants.

The system facilitates communication among the participants by managing
discussions and notifying interested parties when issues are edited. One of
the major design goals for Roundup is that it be simple to get going. Roundup
is therefore usable "out of the box" with any Python 2.7.2+ (or 3.4+)
installation. It doesn't even need to be "installed" to be operational,
though an install script is provided.

It comes with five issue tracker templates

* a classic bug/feature tracker
* a more extensive devel tracker for bug/features etc.
* a responsive version 

[issue40196] symtable.Symbol.is_local() can be True for global symbols

2020-04-06 Thread Wolfgang Stöcher

Wolfgang Stöcher  added the comment:

In symtable.Function.get_locals() symbols with scopes in (LOCAL, CELL) are 
selected. Also

>>> code = """\
... def foo():
...     x = 42
...     def bar():
...         return x
... """
>>> import symtable
>>> top = symtable.symtable(code, "?", "exec")
>>> top.get_children()[0].lookup('x')._Symbol__scope == symtable.CELL
True

So I guess this would be the correct fix then:

def is_local(self):
    return self.__scope in (LOCAL, CELL)
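The closure case above can be checked directly through the public API (a sketch; assumes Python 3.9+, where the fix from this issue landed, so a cell variable counts as local):

```python
import symtable

code = (
    "def foo():\n"
    "    x = 42\n"
    "    def bar():\n"
    "        return x\n"
)
top = symtable.symtable(code, "<example>", "exec")
foo = top.get_children()[0]   # the Function table for foo
x = foo.lookup("x")

# x lives in a closure cell of foo: local to foo, free only in bar,
# and in no case a global.
print(x.is_local(), x.is_free(), x.is_global())   # True False False on 3.9+
```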

--




[issue40170] [C API] Make PyTypeObject structure an opaque structure in the public C API

2020-04-06 Thread Michael Felt


Michael Felt  added the comment:

Just checked - seems to be SPECIFIC to xlc-v16 as neither xlc-v11 nor xlc-v13 
have any issues building.

--




[issue40203] Warn about invalid PYTHONUSERBASE

2020-04-06 Thread Volker Weißmann

New submission from Volker Weißmann :

https://docs.python.org/2/using/cmdline.html says that PYTHONUSERBASE defines 
the user base directory. If I understand this correctly, this implies that 
PYTHONUSERBASE should be a path to a directory. I therefore think that Python 
should print a warning if PYTHONUSERBASE is:
1. Not a valid path (e.g. "invalid//path")
2. A path to something else than a directory
3. A non-existing path (maybe)


I think that

export PYTHONUSERBASE="invalid//path"
python

should generate a warning, because there is no good reason to do so.
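The silent behavior being criticized is easy to observe: as of current CPython, the value is reflected verbatim by `site.getuserbase()` with no validation or warning (a sketch spawning a child interpreter):

```python
import os
import subprocess
import sys

# Run a child interpreter with a nonsensical PYTHONUSERBASE and see
# that it is accepted silently.
env = dict(os.environ, PYTHONUSERBASE="invalid//path")
out = subprocess.run(
    [sys.executable, "-c", "import site; print(site.getuserbase())"],
    env=env, capture_output=True, text=True,
)
print(out.stdout.strip())   # invalid//path -- no warning is emitted
```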

--
messages: 365851
nosy: Volker Weißmann
priority: normal
severity: normal
status: open
title: Warn about invalid PYTHONUSERBASE
type: enhancement




[issue40170] [C API] Make PyTypeObject structure an opaque structure in the public C API

2020-04-06 Thread Michael Felt


Michael Felt  added the comment:

Just manually verified that PR19377, when compiled against xlc - crashes during 
make:

rm -f libpython3.9d.a
ar rcs libpython3.9d.a Modules/getbuildinfo.o  Parser/acceler.o  
Parser/grammar1.o  Parser/listnode.o  Parser/node.o  Parser/parser.o  
Parser/token.o   Parser/myreadline.o Parser/parsetok.o Parser/tokenizer.o  
Objects/abstract.o  Objects/accu.o  Objects/boolobject.o  
Objects/bytes_methods.o  Objects/bytearrayobject.o  Objects/bytesobject.o  
Objects/call.o  Objects/capsule.o  Objects/cellobject.o  Objects/classobject.o  
Objects/codeobject.o  Objects/complexobject.o  Objects/descrobject.o  
Objects/enumobject.o  Objects/exceptions.o  Objects/genobject.o  
Objects/fileobject.o  Objects/floatobject.o  Objects/frameobject.o  
Objects/funcobject.o  Objects/interpreteridobject.o  Objects/iterobject.o  
Objects/listobject.o  Objects/longobject.o  Objects/dictobject.o  
Objects/odictobject.o  Objects/memoryobject.o  Objects/methodobject.o  
Objects/moduleobject.o  Objects/namespaceobject.o  Objects/object.o  
Objects/obmalloc.o  Objects/picklebufobject.o  Objects/rangeobject.o  
Objects/setobject.o 
  Objects/sliceobject.o  Objects/structseq.o  Objects/tupleobject.o  
Objects/typeobject.o  Objects/unicodeobject.o  Objects/unicodectype.o  
Objects/weakrefobject.o  Python/_warnings.o  Python/Python-ast.o  Python/asdl.o 
 Python/ast.o  Python/ast_opt.o  Python/ast_unparse.o  Python/bltinmodule.o  
Python/ceval.o  Python/codecs.o  Python/compile.o  Python/context.o  
Python/dynamic_annotations.o  Python/errors.o  Python/frozenmain.o  
Python/future.o  Python/getargs.o  Python/getcompiler.o  Python/getcopyright.o  
Python/getplatform.o  Python/getversion.o  Python/graminit.o  Python/hamt.o  
Python/import.o  Python/importdl.o  Python/initconfig.o  Python/marshal.o  
Python/modsupport.o  Python/mysnprintf.o  Python/mystrtoul.o  
Python/pathconfig.o  Python/peephole.o  Python/preconfig.o  Python/pyarena.o  
Python/pyctype.o  Python/pyfpe.o  Python/pyhash.o  Python/pylifecycle.o  
Python/pymath.o  Python/pystate.o  Python/pythonrun.o  Python/pytime.o  
Python/bootstrap_hash.o  Python/structmember.o 
  Python/symtable.o  Python/sysmodule.o  Python/thread
.o  Python/traceback.o  Python/getopt.o  Python/pystrcmp.o  Python/pystrtod.o  
Python/pystrhex.o  Python/dtoa.o  Python/formatter_unicode.o  
Python/fileutils.o  Python/dynload_shlib.oModules/config.o  
Modules/getpath.o  Modules/main.o  Modules/gcmodule.o  Modules/posixmodule.o  
Modules/errnomodule.o  Modules/pwdmodule.o  Modules/_sre.o  
Modules/_codecsmodule.o  Modules/_weakref.o  Modules/_functoolsmodule.o  
Modules/_operator.o  Modules/_collectionsmodule.o  Modules/_abc.o  
Modules/itertoolsmodule.o  Modules/atexitmodule.o  Modules/signalmodule.o  
Modules/_stat.o  Modules/timemodule.o  Modules/_threadmodule.o  
Modules/_localemodule.o  Modules/_iomodule.o Modules/iobase.o Modules/fileio.o 
Modules/bytesio.o Modules/bufferedio.o Modules/textio.o Modules/stringio.o  
Modules/faulthandler.o  Modules/_tracemalloc.o Modules/hashtable.o  
Modules/symtablemodule.o  Modules/xxsubtype.o  Python/frozen.o
./Modules/makexp_aix Modules/python.exp . libpython3.9d.a;  xlc_r 
-Wl,-bE:Modules/python.exp -lld -o python Programs/python.o libpython3.9d.a 
-lintl -ldl  -lm   -lm 
 ./python -E -S -m sysconfig --generate-posix-vars ; if test $? -ne 0 ; 
then  echo "generate-posix-vars failed" ;  rm -f ./pybuilddir.txt ;  exit 1 ;  
fi
Objects/genobject.c:127: _PyObject_GC_TRACK: Assertion "!(((PyGC_Head 
*)(op)-1)->_gc_next != 0)" failed: object already tracked by the garbage 
collector
Enable tracemalloc to get the memory block allocation traceback
object address  : 30084150
object refcount : 0
object type : 20013aa8
object type name: generator
object repr : 
Fatal Python error: _PyObject_AssertFailed: _PyObject_AssertFailed
Python runtime state: core initialized
Current thread 0x0001 (most recent call first):
  File "", line 1593 in _setup
  File "", line 1634 in _install
  File "", line 1189 in _install_external_importers
/bin/sh: 24117648 IOT/Abort trap(coredump)
make: 1254-004 The error code from the last command is 134.
Stop.

FYI: about two hours ago I verified that xlc and 
08050e959e6c40839cd2c9e5f6a4fd1513e3d605 : bpo-40147: Fix a compiler warning on 
Windows in Python/compile.c (GH-19389)

all was green.

--
nosy: +Michael.Felt




[issue40060] socket.TCP_NOTSENT_LOWAT is missing in official macOS builds

2020-04-06 Thread Ronald Oussoren


Ronald Oussoren  added the comment:

AFAIK the macOS builds are still built on the oldest macOS release supported by 
the installer (that is, a macOS 10.9 system). This means the build won't use 
macOS APIs introduced in macOS 10.10 or later.



It would be nice to build the installer using the latest compiler and SDK (more 
APIs available, better compiler, ...), but that requires work to explicitly 
avoid using APIs that aren't available on the system the binary is running on.
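At the Python level, the practical consequence is that socket constants may or may not be exposed depending on the SDK the interpreter was built against, so portable code probes at runtime rather than assuming the attribute exists (a sketch):

```python
import socket

# The constant only exists if the interpreter was compiled against
# headers/SDK that define it, so treat it as optional.
lowat = getattr(socket, "TCP_NOTSENT_LOWAT", None)
if lowat is not None:
    print("TCP_NOTSENT_LOWAT =", lowat)
else:
    print("TCP_NOTSENT_LOWAT not exposed by this build")
```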

--




[issue40170] [C API] Make PyTypeObject structure an opaque structure in the public C API

2020-04-06 Thread STINNER Victor


STINNER Victor  added the comment:


New changeset 38aefc585f60a77d66f4fbe5a37594a488b53474 by Victor Stinner in 
branch 'master':
bpo-40170: PyObject_GET_WEAKREFS_LISTPTR() becomes a function (GH-19377)
https://github.com/python/cpython/commit/38aefc585f60a77d66f4fbe5a37594a488b53474


--




[issue40060] socket.TCP_NOTSENT_LOWAT is missing in official macOS builds

2020-04-06 Thread Dima Tisnek

Dima Tisnek  added the comment:

+macos team, because I can't for the life of me figure out how official builds 
are made ☹️

In short: my local build has socket.TCP_NOTSENT_LOWAT but the official build 
does not.

--
nosy: +ned.deily, ronaldoussoren




[issue40173] test.support.import_fresh_module fails to correctly block submodules when fresh is specified

2020-04-06 Thread hai shi


hai shi  added the comment:

> I *think* the problem is that in the step where _save_and_remove_module is 
> called on fresh_name (see here: 
> https://github.com/python/cpython/blob/76db37b1d37a9daadd9e5b320f2d5a53cd1352ec/Lib/test/support/__init__.py#L328-L329)

Looks like deleting a module name after `__import__(name)` is not good enough 
in 
https://github.com/python/cpython/blob/master/Lib/test/support/__init__.py#L244 
(some redundant submodules should be removed, no?)

paul's example can be passed in PR19390.

--




[issue40173] test.support.import_fresh_module fails to correctly block submodules when fresh is specified

2020-04-06 Thread hai shi


Change by hai shi :


--
keywords: +patch
pull_requests: +18754
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/19390




[issue40196] symtable.Symbol.is_local() can be True for global symbols

2020-04-06 Thread Pablo Galindo Salgado


Change by Pablo Galindo Salgado :


--
Removed message: https://bugs.python.org/msg365843




[issue40196] symtable.Symbol.is_local() can be True for global symbols

2020-04-06 Thread Pablo Galindo Salgado


Pablo Galindo Salgado  added the comment:

That fix is not correct. For instance consider:

>>> code2 = """\
... def foo():
...     x = 42
...     def bar():
...         return -1
... """
>>> top = symtable.symtable(code2, "?", "exec")
>>> top.get_children()[0].lookup('x')._Symbol__scope == symtable.LOCAL
True

but if we return x from bar:

>>> code = """\
... def foo():
...     x = 42
...     def bar():
...         return x
... """
>>> import symtable
>>> top = symtable.symtable(code, "?", "exec")
>>> top.get_children()[0].lookup('x')._Symbol__scope == symtable.LOCAL
False

--




[issue40196] symtable.Symbol.is_local() can be True for global symbols

2020-04-06 Thread Pablo Galindo Salgado


Change by Pablo Galindo Salgado :


--
keywords: +patch
pull_requests: +18753
stage:  -> patch review
pull_request: https://github.com/python/cpython/pull/19391




[issue40202] Misleading grammatically of ValueError Message?

2020-04-06 Thread Steven D'Aprano


Steven D'Aprano  added the comment:

I think the error messages could be improved.

In the first example: `f,x, a, b = [1,2,3]`

you are unpacking 3 values, but you need to unpack 4. The error message is not 
very helpful: 5 values is "more than 3", but 5 would be too many; you need not 
"more than 3" but exactly 4.

In the second example `a, b = [1,2,3]` it would be nice if it would tell you 
how many values you need to unpack.

Ideally, the message would say something like:

Need to unpack 4 values but got 3  # first example
Need to unpack 2 values but got 3  # second example

However, I don't know if this is possible with the current parser; it might not 
be possible until the parser is changed (maybe in 3.9?)
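For reference, the messages current CPython 3.x actually produces for the two examples (the "not enough" case already reports both counts):

```python
# Too few values on the right-hand side:
try:
    f, x, a, b = [1, 2, 3]
except ValueError as e:
    print(e)    # not enough values to unpack (expected 4, got 3)

# Too many values on the right-hand side:
try:
    a, b = [1, 2, 3]
except ValueError as e:
    print(e)    # too many values to unpack (expected 2)
```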

--
nosy: +steven.daprano




[issue40196] symtable.Symbol.is_local() can be True for global symbols

2020-04-06 Thread Pablo Galindo Salgado


Pablo Galindo Salgado  added the comment:

That fix is not correct. For instance consider:

>>> code2 = """\
... def foo():
...     x = 42
...     def bar():
...         return -1
... """
>>> top = symtable.symtable(code2, "?", "exec")
>>> top.get_children()[0].lookup('x')._Symbol__scope == symtable.LOCAL
True

but if we return x from bar:

>>> code = """\
... def foo():
...     x = 42
...     def bar():
...         return x
... """
>>> import symtable
>>> top = symtable.symtable(code, "?", "exec")
>>> top.get_children()[0].lookup('x')._Symbol__scope == symtable.LOCAL
False

--




Re: Exceptions versus Windows ERRORLEVEL

2020-04-06 Thread Stephen Tucker
Thanks, Eryk - this is very helpful.

Stephen.

On Mon, Apr 6, 2020 at 6:43 AM Eryk Sun  wrote:

> On 4/3/20, Stephen Tucker  wrote:
> >
> > Does an exception raised by a Python 3.x program on a Windows machine set
> > ERRORLEVEL?
>
> ERRORLEVEL is an internal state of the CMD shell. It has nothing to do
> with Python. If Python exits due to an unhandled exception, the
> process exit code will be 1. If CMD waits on the process, it will set
> the ERRORLEVEL based on the exit code. But CMD doesn't always wait. By
> default its START command doesn't wait. Also, at the interactive
> command prompt it doesn't wait for non-console applications such as
> "pythonw.exe"; it only waits for console applications such as
> "python.exe".
>
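Eryk's description can be checked from Python itself by spawning a child interpreter and inspecting the exit code that CMD would turn into ERRORLEVEL when it waits (a sketch; works on any platform, not just Windows):

```python
import subprocess
import sys

# An unhandled exception makes the interpreter exit with status 1;
# a clean run exits with 0. CMD copies this code into ERRORLEVEL
# when it waits on the process.
crash = subprocess.run([sys.executable, "-c", "raise RuntimeError('boom')"],
                       capture_output=True)
clean = subprocess.run([sys.executable, "-c", "pass"],
                       capture_output=True)
print(crash.returncode, clean.returncode)   # 1 0
```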
-- 
https://mail.python.org/mailman/listinfo/python-list


Re: Decorator with parameters

2020-04-06 Thread Chris Angelico
On Mon, Apr 6, 2020 at 6:36 PM ast  wrote:
>
> Hello
>
> I wrote a decorator to add a cache to functions.
> I realized that cache dictionnary could be defined
> as an object attribute or as a local variable in
> method __call__.
> Both seems to work properly.
> Can you see any differences between the two variants ?
>

There is a small difference, but it probably won't bother you. If you
instantiate your Memoize object once and then call it twice, one form
will share, the other form will have distinct caches.

memo = Memoize1(16) # or Memoize2(16)

@memo
def fib1(n): ...

@memo
def fib2(n): ...

The difference is WHEN the cache dictionary is created. That's all. :)

ChrisA


Decorator with parameters

2020-04-06 Thread ast

Hello

I wrote a decorator to add a cache to functions.
I realized that cache dictionnary could be defined
as an object attribute or as a local variable in
method __call__.
Both seems to work properly.
Can you see any differences between the two variants ?

from collections import OrderedDict

class Memoize1:
    def __init__(self, size=None):
        self.size = size
        self.cache = OrderedDict()  ### cache defined as an attribute
    def __call__(self, f):
        def f2(*arg):
            if arg not in self.cache:
                self.cache[arg] = f(*arg)
                if self.size is not None and len(self.cache) > self.size:
                    self.cache.popitem(last=False)
            return self.cache[arg]
        return f2

# variant

class Memoize2:
    def __init__(self, size=None):
        self.size = size
    def __call__(self, f):
        cache = OrderedDict()  ### cache defined as a local variable
        def f2(*arg):
            if arg not in cache:
                cache[arg] = f(*arg)
                if self.size is not None and len(cache) > self.size:
                    cache.popitem(last=False)
            return cache[arg]
        return f2

@Memoize1(16)
def fibo1(n):
    if n < 2: return n
    return fibo1(n-2) + fibo1(n-1)


@Memoize2(16)
def fibo2(n):
    if n < 2: return n
    return fibo2(n-2) + fibo2(n-1)
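ChrisA's point about sharing can be made concrete: reusing a single Memoize1 instance for two functions makes them share one dict, so unrelated calls can even collide on the same key. A self-contained sketch (hypothetical double/triple functions; classes mirror the ones posted, with the `collections` import spelled out):

```python
from collections import OrderedDict

class Memoize1:
    # Cache stored on the decorator instance: shared by every function
    # decorated through the same instance.
    def __init__(self, size=None):
        self.size = size
        self.cache = OrderedDict()
    def __call__(self, f):
        def f2(*arg):
            if arg not in self.cache:
                self.cache[arg] = f(*arg)
                if self.size is not None and len(self.cache) > self.size:
                    self.cache.popitem(last=False)
            return self.cache[arg]
        return f2

class Memoize2:
    # Cache created inside __call__: every decorated function gets its
    # own private dict, even when the instance is reused.
    def __init__(self, size=None):
        self.size = size
    def __call__(self, f):
        cache = OrderedDict()
        def f2(*arg):
            if arg not in cache:
                cache[arg] = f(*arg)
                if self.size is not None and len(cache) > self.size:
                    cache.popitem(last=False)
            return cache[arg]
        return f2

memo = Memoize1(16)      # ONE instance reused for two functions

@memo
def double(n):
    return 2 * n

@memo
def triple(n):
    return 3 * n

double(5)             # stores (5,) -> 10 in the shared cache
print(triple(5))      # 10, not 15: triple hits double's cached entry
```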


[issue40196] symtable.Symbol.is_local() can be True for global symbols

2020-04-06 Thread Pablo Galindo Salgado


Change by Pablo Galindo Salgado :


--
assignee:  -> pablogsal
nosy: +pablogsal




Re: print small DataFrame to STDOUT and read it back into dataframe

2020-04-06 Thread Reto
On Sat, Apr 04, 2020 at 07:00:23PM -0400, Luca wrote:
> dframe.to_string
> 
> gives:
> 
>  0  a0  b0  c0  d0
> 1  a1  b1  c1  d1
> 2  a2  b2  c2  d2
> 3  a3  b3  c3  d3>

That's not the output of to_string.
to_string is a method, not an attribute, which is apparent from the
`<bound method ...>` text in your output.
You need to call it with parentheses, like `dframe.to_string()`.

> Can I evaluate this string to obtain a new dataframe like the one that
> generated it?

As for re-importing, serialize the frame to something sensible first.
There are several options available, csv, json, html... Take your pick.

You can find all those in the dframe.to_$something namespace
(again, those are methods, make sure to call them).

Import it again with pandas.read_$something, choosing the same serialization 
format
you picked for the output in the first place.

Does this help?

Cheers,
Reto


[issue40202] Misleading grammatically of ValueError Message?

2020-04-06 Thread Jacob RR


New submission from Jacob RR :

hi,

so I *think* that the ValueError message is grammatically incorrect?
In Python 2.7:
>>> x = [1,2,3]
>>> f,x, a, b = [1,2,3]
Traceback (most recent call last):
  File "", line 1, in 
ValueError: need more than 3 values to unpack

Should it have said: received 3 values to unpack? The problem is that the list 
size is 3, and the error says that I need more than 3 values to unpack, which is 
logically wrong **IMHO** (don't kill me if I'm mistaken).

Now if I try to do something else, for example:

>>> x = [1,2,3]
>>> a, b = [1,2,3]

Traceback (most recent call last):
  File "", line 1, in 
ValueError: too many values to unpack

It says **too many**, but I assign fewer names than the size of the list. Am I 
the one who is wrong here?

Now, I code in Python 3. I'm not a professional like you; I'm a novice trying 
to learn. I'll get to the point: the same code in Python 3.7.6 (Anaconda; pip 
disappoints me :< )

>>> a = [1,2,3]
>>> x,y = a
Traceback (most recent call last):
  File "", line 1, in 
ValueError: too many values to unpack (expected 2)

Should it say something else, because it received fewer values? And should 
"expected" say 3 and not 2, correct?


thanks for reading.




PS: Sorry, I'm not a native speaker; I might be wrong, and I'm very sorry if I 
wasted your time.

--
assignee: docs@python
components: Documentation
messages: 365842
nosy: Jacob RR, docs@python
priority: normal
severity: normal
status: open
title: Misleading grammatically of ValueError Message?
type: enhancement
versions: Python 3.7




[issue40147] Move checking for duplicated keywords to the compiler

2020-04-06 Thread Pablo Galindo Salgado


Pablo Galindo Salgado  added the comment:


New changeset 08050e959e6c40839cd2c9e5f6a4fd1513e3d605 by Zackery Spytz in 
branch 'master':
bpo-40147: Fix a compiler warning on Windows in Python/compile.c (GH-19389)
https://github.com/python/cpython/commit/08050e959e6c40839cd2c9e5f6a4fd1513e3d605


--




Re: Is there a difference between python

2020-04-06 Thread Eryk Sun
On 4/5/20, Malcolm Greene  wrote:
> Is there a difference between the following 2 ways to launch a console-less
> script under Windows?
>
> python 

[issue34972] json dump silently converts int keys to string

2020-04-06 Thread Stuart Bishop


Stuart Bishop  added the comment:

(sorry, my example is normal Python behavior. {1:1, 1.0:2} == {1:2} , {1.0:1} 
== {1:1} )

--
nosy: +stub




[issue40197] Add nanoseconds to timing table in What's new python 3.8

2020-04-06 Thread Morten Hels


Morten Hels  added the comment:

It turns out I was wrong about microseconds. The output in 
https://bugs.python.org/issue35884 does show microseconds, but the output is 
before this commit 
https://github.com/python/cpython/commit/9da3583e78603a81b1839e17a420079f734a75b0
 that fixes a typo (that's my best guess anyway).

Thank you for clearing this up, rhettinger.

--




[issue34972] json dump silently converts int keys to string

2020-04-06 Thread Stub


Stub  added the comment:

Similarly, keys can be lost entirely:

>>> json.dumps({1:2, 1.0:3})
'{"1": 3}'
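Both effects are easy to reproduce: int and float keys that compare equal already collapse inside the Python dict before json ever sees them, and serialization then coerces the survivor to a string one-way:

```python
import json

d = {1: 2, 1.0: 3}
print(d)                  # {1: 3} -- the dict itself already merged the keys
print(json.dumps(d))      # {"1": 3}

# The coercion is one-way: a round trip yields a str key, not the int.
print(json.loads(json.dumps({1: 2})))   # {'1': 2}
```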

--
nosy: +Stub2
