[issue39602] importlib: lazy loading can result in reimporting a submodule

2020-02-10 Thread Pox TheGreat


New submission from Pox TheGreat :

Using the LazyLoader class one can wrap the sys.meta_path finders so that 
every import becomes lazy. This technique has been used by Mercurial and 
by Facebook.

My problem is that if I have a package (pa) which imports a submodule (a) in 
its __init__.py and accesses its attributes (or uses a fromlist), then that 
submodule is imported (executed) twice, without any warning.

I traced the problem back to importlib/_bootstrap.py, in _find_and_load_unlocked. 
There is a check there for whether the submodule has already been imported by the 
parent package, but the submodule gets imported again just after that check: the 
parent is a _LazyModule, so accessing its __path__ attribute executes the parent's 
__init__, which imports the submodule a second time.

# Crazy side-effects!
if name in sys.modules:
    return sys.modules[name]
parent_module = sys.modules[parent]
try:
    path = parent_module.__path__


Maybe we could check whether name is in sys.modules again, after the __path__ attribute access?
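For context, the lazy loading described above can be sketched as follows. This is a per-module variant following the LazyLoader example in the importlib documentation (the helper name lazy_import is mine; the Mercurial/Facebook approach instead wraps the finders on sys.meta_path so that every import goes through this machinery):

```python
import importlib.util
import sys


def lazy_import(name):
    """Import a module lazily: its code only runs on first attribute access.

    Helper name is mine; the pattern follows the importlib docs.
    """
    spec = importlib.util.find_spec(name)
    loader = importlib.util.LazyLoader(spec.loader)
    spec.loader = loader
    module = importlib.util.module_from_spec(spec)
    sys.modules[name] = module
    loader.exec_module(module)
    return module


mod = lazy_import('json')        # nothing executed yet
print(mod.dumps({'a': 1}))       # first attribute access triggers the real import
```

The double-import arises when a lazy parent package is forced to execute (e.g. by the __path__ access above) while the import system is itself in the middle of importing one of its submodules.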

--
components: Library (Lib)
files: LazyImport.zip
messages: 361705
nosy: Pox TheGreat
priority: normal
severity: normal
status: open
title: importlib: lazy loading can result in reimporting a submodule
type: behavior
versions: Python 3.7, Python 3.8
Added file: https://bugs.python.org/file48889/LazyImport.zip

___
Python tracker 
<https://bugs.python.org/issue39602>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue31804] multiprocessing calls flush on sys.stdout at exit even if it is None (pythonw)

2018-01-12 Thread Pox TheGreat

Pox TheGreat <poxthegr...@gmail.com> added the comment:

I have already uploaded a patch file, but it is not in the required format. Also, 
I realize that most of the confusion arose because I forgot to provide the OS 
version. Perhaps it would be good to have a separate field for that.

I will upload a patch as described in the developer guide.

--
type: crash -> behavior

___
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue31804>
___



[issue31804] multiprocessing calls flush on sys.stdout at exit even if it is None (pythonw)

2018-01-10 Thread Pox TheGreat

Pox TheGreat <poxthegr...@gmail.com> added the comment:

Retested it with a freshly installed 3.6.4 version. Used the following code to 
test:

import sys
import multiprocessing


def foo():
    return 'bar'


if __name__ == '__main__':
    proc = multiprocessing.Process(target=foo)
    proc.start()
    proc.join()
    with open('process_exit_code.txt', 'w') as f:
        f.write(sys.version)
        f.write('\nprocess exit code: ')
        f.write(str(proc.exitcode))

It is very important to run the script with pythonw, not just with python. This 
is the content of the resulting process_exit_code.txt file on my machine:
3.6.4 (v3.6.4:d48eceb, Dec 19 2017, 06:04:45) [MSC v.1900 32 bit (Intel)]
process exit code: 1

As can be seen, the problem was not fixed. The process exit code should be 0. 
By default, a new multiprocessing process uses the same interpreter as the 
creating process, so it runs under pythonw too.
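For illustration, the reason the child exits with code 1 can be shown without multiprocessing at all: under pythonw sys.stdout is None, so the flush in BaseProcess._bootstrap's finally clause raises. A minimal sketch (the local variable simulates the pythonw state rather than actually running under pythonw):

```python
# Simulate the pythonw state: there, sys.stdout is None.
stdout = None

try:
    stdout.flush()  # effectively what _bootstrap's finally clause does
    failed = False
except AttributeError as exc:
    failed = True   # unhandled in _bootstrap -> child exit code 1
    print(exc)      # 'NoneType' object has no attribute 'flush'
```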

--

___
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue31804>
___



[issue31804] multiprocessing calls flush on sys.stdout at exit even if it is None (pythonw)

2018-01-09 Thread Pox TheGreat

Pox TheGreat <poxthegr...@gmail.com> added the comment:

Unfortunately this is NOT a duplicate of https://bugs.python.org/issue28326. 
That issue is about a closed output stream: there, sys.stdout and sys.stderr 
are file-like objects which have been closed.

This issue is about sys.stdout and sys.stderr being None, because pythonw was 
used, not python.

--
resolution: duplicate -> 
status: closed -> open

___
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue31804>
___



[issue31804] multiprocessing calls flush on sys.stdout at exit even if it is None (pythonw)

2017-10-17 Thread Pox TheGreat

New submission from Pox TheGreat <poxthegr...@gmail.com>:

If you start Python via pythonw, then sys.stdout and sys.stderr are set to None. 
If you also use multiprocessing, then when the child process finishes, 
BaseProcess._bootstrap calls sys.stdout.flush() and sys.stderr.flush() in its 
finally clause. This causes the process exit code to be nonzero (it is 1).

--
components: Library (Lib)
files: process.py.patch
keywords: patch
messages: 304512
nosy: Pox TheGreat
priority: normal
severity: normal
status: open
title: multiprocessing calls flush on sys.stdout at exit even if it is None 
(pythonw)
type: crash
versions: Python 3.5
Added file: https://bugs.python.org/file47224/process.py.patch

___
Python tracker <rep...@bugs.python.org>
<https://bugs.python.org/issue31804>
___