[issue42738] subprocess: don't close all file descriptors by default (close_fds=False)

2021-10-26 Thread Richard Xia


Richard Xia  added the comment:

I'd like to provide another, non-performance-related use case for changing the 
default value of Popen's close_fds parameter back to False.

In some scenarios, a (non-Python) parent process may want its descendant 
processes to inherit a particular file descriptor and each descendant process 
to pass that file descriptor on to its own children. In such a scenario, a 
Python program may be just an intermediate script that calls out to multiple 
subprocesses, and closing inheritable file descriptors by default interferes 
with the parent process's ability to pass that file descriptor on to its 
descendants.

As a concrete example, we have a (non-Python) build system and task runner that 
orchestrates many tasks to run in parallel. Some of those tasks end up invoking 
Python scripts that use subprocess.run() to run other programs. Our task runner 
intentionally passes an inheritable file descriptor that is unique to each task 
as a keep-alive token; if the child processes continue to pass inheritable file 
descriptors to their children, then we can determine whether all of the 
processes spawned from a task have terminated by checking whether the last open 
handle to that file descriptor has been closed. This is particularly important 
when a process exits before its children, sometimes uncleanly due to being 
force-killed by the system or by a user.
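To sketch the detection side of this scheme (hypothetical code only, since our 
task runner is not written in Python; "some-task" stands in for the real 
command):

import os
import subprocess

# Task-runner side: create a pipe and hand the write end to the task as the
# keep-alive token. Every descendant that inherits it holds it open.
read_fd, write_fd = os.pipe()
proc = subprocess.Popen(["some-task"], pass_fds=(write_fd,))

# Close our own copy so that EOF on the read end means the last descendant
# still holding the token has exited (cleanly or not).
os.close(write_fd)
os.read(read_fd, 1)  # blocks until every copy of write_fd has been closed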

In our use case, Python's default value of close_fds=True interferes with our 
tracking scheme, since it prevents Python's subprocesses from inheriting that 
file descriptor, even though that file descriptor has intentionally been made 
inheritable.

While we are able to work around the issue by explicitly setting 
close_fds=False in as much of our Python code as possible, it's difficult to 
enforce this globally since we have many small Python scripts. We also have no 
control over any third-party libraries that might call Popen.
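For reference, the per-script workaround looks roughly like this (a sketch; 
"some-build-step" is a hypothetical command, and fd 3 is a hypothetical 
descriptor number that the task runner would communicate to the script):

import subprocess

# close_fds=False lets every inheritable descriptor we inherited flow through
# to the grandchild.
subprocess.run(["some-build-step"], close_fds=False)

# Alternatively, pass_fds forwards just the keep-alive descriptor while
# leaving close_fds at its default of True.
subprocess.run(["some-build-step"], pass_fds=(3,))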

Regarding security, PEP 446 already makes any files opened from within a Python 
program non-inheritable by default, which I agree is a good default. One could 
argue that it is not Python's job to enforce a security policy on file 
descriptors that a Python process has inherited from its parent, since Python 
cannot distinguish between descriptors that were inherited accidentally and 
those that were inherited intentionally.
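As a small illustration of that default (a sketch):

import os

# A descriptor created inside Python is non-inheritable by default (PEP 446)...
fd = os.open("/tmp/example", os.O_CREAT | os.O_WRONLY)
print(os.get_inheritable(fd))   # False

# ...and has to be opted in explicitly before a child is meant to receive it.
os.set_inheritable(fd, True)
print(os.get_inheritable(fd))   # True
os.close(fd)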

--
nosy: +richardxia

___
Python tracker 
<https://bugs.python.org/issue42738>



[issue29573] NamedTemporaryFile with delete=True should not fail if file already deleted

2017-03-27 Thread Richard Xia

Richard Xia added the comment:

Thanks for the discussion. I ended up doing something similar to the code 
snippet Christian posted, except I also had a second try/except 
FileNotFoundError within the original finally block to catch the case that 
David pointed out.
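
Roughly, the shape was something like this (a sketch only, not the exact code, 
since the original snippets aren't quoted here; "some-tool" stands in for the 
real command):

import os
import subprocess
import tempfile

tmp = tempfile.NamedTemporaryFile(delete=False)
try:
    tmp.close()
    subprocess.run(["some-tool", "-o", tmp.name], check=True)
finally:
    try:
        os.unlink(tmp.name)
    except FileNotFoundError:
        # The tool may have already removed its own output file on error.
        pass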

In retrospect, I probably should have used TemporaryDirectory since I am using 
Python 3.5 and because the file I was creating with NamedTemporaryFile was only 
being used as an output file, not an input file, to the subprocess command.
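
Something like this would have avoided the problem entirely, since cleanup 
happens at the directory level (again a sketch with a hypothetical command):

import os
import subprocess
import tempfile

with tempfile.TemporaryDirectory() as tmpdir:
    out_path = os.path.join(tmpdir, "output.bin")
    subprocess.run(["some-tool", "-o", out_path], check=True)
    # Even if the tool deletes or never creates output.bin, removing the
    # directory on exit does not fail.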

--

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue29573>



[issue29573] NamedTemporaryFile with delete=True should not fail if file already deleted

2017-02-15 Thread Richard Xia

New submission from Richard Xia:

Here is a very short program to demonstrate what I'm seeing:

>>> import tempfile
>>> import os
>>> with tempfile.NamedTemporaryFile(delete=True) as fp:
...     print(fp.name)
...     os.system('rm {}'.format(fp.name))
/tmp/tmpomw0udc6
0
Traceback (most recent call last):
  File "<stdin>", line 3, in <module>
  File "/usr/local/lib/python3.6/tempfile.py", line 502, in __exit__
    self.close()
  File "/usr/local/lib/python3.6/tempfile.py", line 509, in close
    self._closer.close()
  File "/usr/local/lib/python3.6/tempfile.py", line 446, in close
    unlink(self.name)
FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmpomw0udc6'


In my specific use case, I am shelling out to another program (binutils' 
objcopy) and passing the path of the NamedTemporaryFile as the output file. 
objcopy apparently deletes the output file if it encounters an error, which 
causes NamedTemporaryFile's __exit__ method to fail when it tries to delete the 
file.

While I can work around this by using delete=False and manually doing the 
cleanup on my own, it's less elegant than being able to rely on the normal 
context manager exit.

--
components: IO
messages: 287888
nosy: richardxia
priority: normal
severity: normal
status: open
title: NamedTemporaryFile with delete=True should not fail if file already deleted
type: behavior
versions: Python 3.5, Python 3.6

___
Python tracker <rep...@bugs.python.org>
<http://bugs.python.org/issue29573>