Alexey Izbyshev added the comment:
In short: both this bug report and [1] are invalid.
The reason why doing syscall(SYS_vfork) is illegal is explained by Florian
Weimer in [2]:
>The syscall function in glibc does not protect the on-stack return address
>against overwriting, so it ca
Alexey Izbyshev added the comment:
> As for glibc specifics, I'm mostly thinking of the calls we do in the child.
> According to the "Standard Description (POSIX.1)" calls to anything other
> than `_exit()` or `exec*()` are not allowed. But the longer "Linux
Alexey Izbyshev added the comment:
The preceding comment is wrong; see the discussion in #47245 and
https://bugzilla.kernel.org/show_bug.cgi?id=215813#c14 for an explanation of why
that bug report is irrelevant for CPython.
Alexey Izbyshev added the comment:
> 3. We have to fix error-path in order not to change heap state (contents and
> allocations), possibly do not touch locks. During vfork() child execution -
> the only parent THREAD (not the process) is blocked. For example, it's not
> all
New submission from Alexey Izbyshev :
After #40422, _Py_closerange() assumes that close_range() closes all file
descriptors even if it returns an error (other than ENOSYS):
```
if (close_range(first, last, 0) == 0 || errno != ENOSYS) {
    /* Any errors encountered while closing file
```
Change by Alexey Izbyshev :
--
keywords: +patch
pull_requests: +30443
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/32418
Alexey Izbyshev added the comment:
> It's been years now and that hasn't happened, even with more recent flag
> additions. I think it's safe to say it won't, and such a fallback upon error
> won't put us back into a bogus pre-close_range situation where we
Change by Alexey Izbyshev :
--
keywords: +patch
stage: -> patch review
Change by Alexey Izbyshev :
--
pull_requests: +16379
pull_request: https://github.com/python/cpython/pull/5812
Alexey Izbyshev added the comment:
As far as I understand, commit [1] explicitly prevents CPython from running on
Windows 7, and it's included in 3.9. So it seems to be too late to complain,
even though, according to Wikipedia, more than 15% of all Windows PCs are
still running Wind
Alexey Izbyshev added the comment:
> If Win8-only calls are not used, then presumably it should still build and
> run on Windows 7, presumably with the flag flipped back to Win7. And if there
> are Win8-only calls used and the flag is set to Win7+, I assume that the MSVC
> c
Alexey Izbyshev added the comment:
> If we had a dedicated maintainer who was supporting Win7 and making releases
> for it, then we (i.e. they) could support it. But then, there's nothing to
> stop someone doing that already, and even to stop them charging money for it
> if
New submission from Alexey Izbyshev :
In PC/getpathp.c CPython uses buffers with length MAXPATHLEN+1, which is 257 on
Windows[1]. On Windows 7, where PathCch* functions are not available, CPython
<= 3.8 falls back to PathCombineW()/PathCanonicalizeW()[2]. Those functions
assume that
Alexey Izbyshev added the comment:
> As a concrete example, we have a (non-Python) build system and task runner
> that orchestrates many tasks to run in parallel. Some of those tasks end up
> invoking Python scripts that use subprocess.run() to run other programs. Our
>
New submission from Alexey Izbyshev :
> python тест.pyc
python: Can't reopen .pyc file
The issue is caused by _Py_fopen() being used as though it can deal with paths
encoded in FS-default encoding (UTF-8 by default on Windows), but in fact it's
just a simple wrapper around fopen
New submission from Alexey Izbyshev :
Before addition of audit hooks in 3.8, _Py_fopen() and _Py_wfopen() were simple
wrappers around corresponding C runtime functions. They didn't require GIL,
reported errors via errno and could be safely called during early interpreter
initializ
Alexey Izbyshev added the comment:
Thanks, Eryk, for catching the dup, I missed it somehow.
@ZackerySpytz: do you plan to proceed with your PR? If not, I can pick it up --
this issue broke the software I develop after upgrading to 3.8.
I filed issue 42569 to hopefully clarify the status of
Alexey Izbyshev added the comment:
Thanks for the patch, Victor, it looks good.
Just so it doesn't get lost: the problem with the contract of
PyErr_ProgramText() which I mentioned in my dup 42568 is still there.
Alexey Izbyshev added the comment:
> It seems like PyErr_ProgramText() is no longer used in Python.
Isn't it a part of the public API? I can't find it in the docs, but it seems to
be declared in the public header.
Alexey Izbyshev added the comment:
> To implement PEP 446: create non-inheritable file descriptors.
Yes, I understand that was the original role. But currently there is no easy
way to deal with errors from the helpers because of the exception-vs.-errno
conundrum. Maybe they should be split
Alexey Izbyshev added the comment:
> So it should be, "if they fail and you're in a context where exceptions are
> allowed, raise an exception" (which will chain back to the one raised from an
> audit hook".
What exception should be raised if _Py_fopen() fails (
Change by Alexey Izbyshev :
--
components: +Library (Lib)
nosy: +davin, pitrou
versions: -Python 3.6
Alexey Izbyshev added the comment:
Thanks for the fix and backports!
--
resolution: fixed ->
stage: resolved -> patch review
status: closed -> open
versions: +Python 3.7
Change by Alexey Izbyshev :
--
resolution: -> fixed
stage: patch review -> resolved
status: open -> closed
versions: -Python 3.7
Alexey Izbyshev added the comment:
Yes, even though MSVCRT knows the type of the file descriptor (it calls
GetFileType() when the descriptor is created), it doesn't check it in its
lseek() implementation and simply calls SetFilePointer(), which spuriously
succeeds for pipes. MSDN says the follow
Alexey Izbyshev added the comment:
Great approach :)
New submission from Alexey Izbyshev :
On POSIX-conforming systems, the O_APPEND flag for open() must ensure that no
intervening file modification occurs between changing the file offset and the
write operation[1]. In effect, two processes that independently opened the same
file with O_APPEND
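Below is a minimal sketch (my own illustration, not from this report) of the guarantee described above, assuming a POSIX system; the file name is arbitrary:
```
import os

# Two independently opened O_APPEND descriptors: POSIX requires each write()
# to atomically move the offset to the end of the file before writing, so
# neither write can overwrite the other's data.
path = "append-demo.txt"
fd1 = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_APPEND, 0o644)
fd2 = os.open(path, os.O_WRONLY | os.O_APPEND)
os.write(fd1, b"from fd1\n")
os.write(fd2, b"from fd2\n")   # appended after fd1's data, not over it
os.close(fd1)
os.close(fd2)
```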
Change by Alexey Izbyshev :
--
keywords: +patch
pull_requests: +22575
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/23712
Alexey Izbyshev added the comment:
This bug would have been caught at compile time if `_Py_Gid_Converter()` used
`gid_t *` instead of `void *`. I couldn't find any call sites where `void *`
would be needed, so probably `_Py_Gid_Converter()` should be fixed too (in a
separate PR/issue?)
Alexey Izbyshev added the comment:
> I've been struggling to understand today why a simple file redirection
> couldn't work properly today (encoding issues)
The core issue is that "working properly" is not defined in general when we're
talking about piping
Alexey Izbyshev added the comment:
> Using close_fds=False, subprocess can use posix_spawn() which is safer and
> faster than fork+exec. For example, on Linux, the glibc implements it as a
> function using vfork which is faster than fork if the parent allocated a lot
> of memo
Change by Alexey Izbyshev :
--
nosy: +izbyshev
Change by Alexey Izbyshev :
--
nosy: +izbyshev
Change by Alexey Izbyshev :
--
components: +Library (Lib)
nosy: +vstinner
versions: -Python 3.6, Python 3.7
Alexey Izbyshev added the comment:
I've encountered this issue too. My use case was a 32-bit Python on a 64-bit
CentOS system, and my understanding of the issue was that 64-bit libgcc_s is
somehow counted as a "provider" of libgcc_s for 32-bit libc by the package
manager, so
Change by Alexey Izbyshev :
--
keywords: +patch
pull_requests: +23063
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/24241
Alexey Izbyshev added the comment:
I've made a PR to remove most calls to pthread_exit().
@xxm: could you test it in your environment?
Alexey Izbyshev added the comment:
Could anybody provide their thoughts on this RFE? Thanks.
Alexey Izbyshev added the comment:
> It's possible to query the granted access of a kernel handle via
> NtQueryObject: ObjectBasicInformation
Ah, thanks for the info. But it wouldn't help for option (1) that I had in mind
because open() and os.open() currently set only msvcr
Alexey Izbyshev added the comment:
Thank you for testing. I've added a NEWS entry to the PR, so it's ready for
review by the core devs.
Note that PyThread_exit_thread() can still be called by daemon threads if they
try to take the GIL after Py_Finalize(), and also via C
Change by Alexey Izbyshev :
--
nosy: +izbyshev
Alexey Izbyshev added the comment:
> I don't know what you mean by default access rights.
I meant the access rights of the handle created by _wopen(). In my PR I
basically assume that _wopen() uses GENERIC_READ/GENERIC_WRITE access rights,
but _wopen() doesn't have a contractu
Alexey Izbyshev added the comment:
> FYI, here are the access rights applicable to files
Thanks, I checked that mapping in headers when I was writing
_Py_wopen_noraise() as well. But I've found a catch via ProcessHacker:
CreateFile() with GENERIC_WRITE (or FILE_GENERIC_WRITE) addi
Alexey Izbyshev added the comment:
> I think truncation via TRUNCATE_EXISTING (O_TRUNC, with O_WRONLY or O_RDWR)
> or overwriting with CREATE_ALWAYS (O_CREAT | O_TRUNC) is at least tolerable
> because the caller doesn't care about the existing data.
Yes, I had a thought th
Alexey Izbyshev added the comment:
I would suggest starting to dig from the following piece of code in
`maybe_pyc_file()` (Python/pythonrun.c):
```
int ispyc = 0;
if (ftell(fp) == 0) {
    if (fread(buf, 1, 2, fp) == 2 &&
        ((unsigned int)buf[1]<
```
Alexey Izbyshev added the comment:
> Ideally, the error would say:
> FileNotFoundError: ./demo: /usr/bin/hugo: bad interpreter: No such file or
> directory
The kernel simply returns ENOENT on an attempt to execve() a file with a
non-existing hash-bang interpreter. The same occ
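As a hypothetical illustration of that ambiguity (assuming the interpreter path /usr/bin/hugo does not exist):
```
import os
import subprocess

# The script itself exists and is executable, but its hash-bang interpreter
# does not; execve() fails with ENOENT, which Python reports against the
# script path rather than the interpreter.
with open("demo", "w") as f:
    f.write("#!/usr/bin/hugo\n")
os.chmod("demo", 0o755)
try:
    subprocess.run(["./demo"])
except FileNotFoundError as exc:
    print(exc)   # [Errno 2] No such file or directory: './demo'
```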
Alexey Izbyshev added the comment:
> FileNotFoundError: [Errno 2] No such file or directory: Either './demo' or
> the interpreter of './demo' not found.
This doesn't sound good to me because a very probable reason and a very
improbable one are combined together w
Alexey Izbyshev added the comment:
> IMO the fix is simple: only create OSError from the errno, never pass a
> filename.
This will remove a normally helpful piece of the error message in exchange for
being marginally less confusing in the rare case of a non-existing interpreter
(the use
Alexey Izbyshev added the comment:
How do you propose to approach documentation of such behavior? The underlying
cause is the ambiguity of the ENOENT error code returned by the kernel from
execve(), so it applies to all places where Python can call execve(), including
os.posix_spawn(), os.execve
Alexey Izbyshev added the comment:
I generally agree, but getting a good, short error message seems to be the hard
part here. I previously complained[1] about the following proposal by @hroncok:
FileNotFoundError: [Errno 2] No such file or directory: Either './demo' or the
inte
Change by Alexey Izbyshev :
--
nosy: +gregory.p.smith
Alexey Izbyshev added the comment:
I'd really like to get this merged eventually because the vfork()-based solution is
fundamentally more generic than posix_spawn(). Apart from having no issue with
close_fds=True, it will also continue to allow subprocess to add any process
context t
Alexey Izbyshev added the comment:
Well, much later than promised, but I'm picking it up. Since in the meantime
support for setting uid/gid/groups was merged, and I'm aware of potential
issues with calling the corresponding C library functions in a vfork()-child, I
asked a questi
Alexey Izbyshev added the comment:
I've updated my PR.
* After a discussion with Alexander Monakov (a GCC developer), moved vfork()
into a small function to isolate it from both subprocess_fork_exec() and
child_exec(). This appears to be the best strategy to avoid -Wclobbered
Change by Alexey Izbyshev :
--
resolution: -> not a bug
Change by Alexey Izbyshev :
--
pull_requests: +21862
pull_request: https://github.com/python/cpython/pull/22944
Alexey Izbyshev added the comment:
> Thank you for taking this on! I'm calling it fixed for now as the buildbots
> are looking happy with it. If issues with it arise we can address them.
Thank you for reviewing and merging!
Using POSIX_CALL for pthread_sigmask() is incorre
Alexey Izbyshev added the comment:
> regarding excluding the setsid() case: I was being conservative as I couldn't
> find a reference of what was and wasn't allowed after vfork.
Yes, there is no list of functions allowed after vfork(), except for the
conservative POSIX.1
Alexey Izbyshev added the comment:
@ronaldoussoren
> I'd prefer to not use vfork on macOS. For one I don't particularly trust that
> vfork would work reliably when using higher level APIs, but more importantly
> posix_spawn on macOS has some options that are hard to ac
New submission from Alexey Izbyshev :
The following test demonstrates the leak:
```
import subprocess
cwd = 'x' * 10**6
for __ in range(100):
    try:
        subprocess.call(['/xxx'], cwd=cwd, user=2**64)
    except OverflowError:
        pass
from resource impo
Change by Alexey Izbyshev :
--
keywords: +patch
pull_requests: +21882
stage: -> patch review
pull_request: https://github.com/python/cpython/pull/22966
Change by Alexey Izbyshev :
--
pull_requests: +21885
pull_request: https://github.com/python/cpython/pull/22970
Alexey Izbyshev added the comment:
I've submitted both PRs.
Regarding PR 22970:
* I made it a draft since we'd probably want to fix the leak first, but then it
will have to be rebased.
* It fixes a bug with _enable_gc(): if it failed after fork(), we'd raise
OSError instea
Change by Alexey Izbyshev :
--
keywords: +patch
stage: -> patch review
Change by Alexey Izbyshev :
--
type: behavior -> resource usage
Alexey Izbyshev added the comment:
Thanks for merging! I've rebased PR 22970.
Change by Alexey Izbyshev :
Added file: https://bugs.python.org/file49531/test.py
Alexey Izbyshev added the comment:
(Restored test.py attachment)
The issue happens due to an incorrect usage of `multiprocessing.Pool`.
```
# Set up multiprocessing pool, initialising logging in each subprocess
with multiprocessing.Pool(initializer=process_setup,
                          initargs=(get_queue
Alexey Izbyshev added the comment:
By the way, I don't see a direct relation between `test.py` (which doesn't use
`subprocess` directly) and your comment describing `subprocess` usage with
threads. So if you think that the bug in `test.py` is unrelated to the problem
you face, fe
Alexey Izbyshev added the comment:
It seems that allowing `input=None` to mean "redirect stdin to a pipe and send
an empty string there" in `subprocess.check_output` was an accident(?), and
this behavior is inconsistent with `subprocess.run` and `communicate`, where
`input=None` ha
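A rough sketch of the inconsistency as I understand it (exact behavior may differ between versions):
```
import subprocess

# check_output() special-cases an explicit input=None: stdin is redirected to
# a pipe and an empty byte string is sent, so `cat` sees EOF immediately.
out = subprocess.check_output(["cat"], input=None)
print(repr(out))   # b''

# run() and Popen.communicate() treat input=None as "don't redirect stdin";
# uncommenting the call below would make `cat` read the parent's stdin instead.
# subprocess.run(["cat"])
```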
Alexey Izbyshev added the comment:
> (probably can't even limit that to the case when `text` is used, since it was
> added in 3.7)
Well, actually, we can, since we probably don't need to preserve compatibility
with the AttributeError currently caused by `text=True`
Change by Alexey Izbyshev :
--
nosy: +rhettinger
versions: -Python 3.6, Python 3.7
Alexey Izbyshev added the comment:
This is because of a leak of 'wstr' at
https://github.com/python/cpython/blob/1005c84535191a72ebb7587d8c5636a065b7ed79/Objects/unicodeobject.c#L3476.
There is another leak and usage of uninitialized 'str' because the following
Alexey Izbyshev added the comment:
The added test exposed a leak in unicode_encode_locale(). See msg330534.
Alexey Izbyshev added the comment:
Correction: the fall-through in the "else if (res == -3)" clause doesn't cause a
memory leak, but still results in usage of uninitialized 'str'.
Alexey Izbyshev added the comment:
See #28108 and https://sourceware.org/bugzilla/show_bug.cgi?id=23859 (for
msg276123).
--
nosy: +izbyshev
Alexey Izbyshev added the comment:
If I understood PR 10919 correctly, sysconfig.get_config_var('userbase') can
now return unexpanded paths containing '~'. Is it intended despite the previous
discussion starting with msg135047?
--
Alexey Izbyshev added the comment:
> I prefer to stick to the initial bug report which hasn't been fixed in 8 years
I'm interested in fixing this bug too since it bit me in SCons and I had to use
a local patch for it. I welcome the upstream fix and don't object to
Alexey Izbyshev added the comment:
> Would it make sense to backport this fix in 3.6 and 3.7?
I'd like to see it there, given that this bug surfaced in many use cases not
involving any modern features or systemd at all.
Alexey Izbyshev added the comment:
You might try to check the list of DLLs loaded into the stuck python process
and find third-party ones (e.g., antivirus). If there are any, disable the
third-party software and try again.
--
nosy: +izbyshev
Alexey Izbyshev added the comment:
How is it possible to use faulthandler if the interpreter hasn't even started
yet?
Alexey Izbyshev added the comment:
argparse.SUPPRESS is an opaque value to be used by argparse clients. It could
be anything, it just happens to be a string. So the code doesn't compare
strings but checks whether a supplied object *is* the opaque value. I do not
see any problem with
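A small sketch of that identity check (my own illustration, not code from argparse itself):
```
import argparse

# argparse.SUPPRESS is an opaque sentinel (it merely happens to be a string);
# code compares a value against it with `is`, not by string contents.
parser = argparse.ArgumentParser(argument_default=argparse.SUPPRESS)
parser.add_argument("--flag")
print(vars(parser.parse_args([])))     # {} -- suppressed defaults are omitted

if parser.get_default("flag") is argparse.SUPPRESS:   # identity check
    print("default is the SUPPRESS sentinel")
```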
Change by Alexey Izbyshev :
--
nosy: +gregory.p.smith, izbyshev
Alexey Izbyshev added the comment:
Serhiy, PyOS_* functions are called only if preexec_fn != None. But it will
never be possible to implement support for preexec_fn (and some other
subprocess features, e.g. close_fds) if processes are run via posix_spawn, so I
don't see why anything s
Alexey Izbyshev added the comment:
Victor and Joannah, thanks for working on adding vfork() support to subprocess.
Regarding speedups in the real world, I can share a personal anecdote. Back
when AOSP was built with make (I think it was AOSP 5), I observed a
~2x slowdow
Alexey Izbyshev added the comment:
> I'm open to experiment to use vfork() in _posixsubprocess
Are you going to do experiments? If not, I can try to do some in early January.
> Using vfork() can cause new issues: that's why there is a
> POSIX_SPAWN_USE_VFORK flag
Alexey Izbyshev added the comment:
> * cwd
posix_spawn_file_actions_addchdir_np() is scheduled for glibc 2.29 [1] and
exists in Solaris [2] (though its suffix indicates that it's "non-portable" --
not in POSIX). POSIX also has a bug for this [7].
>
Alexey Izbyshev added the comment:
The resolution of this [1] glibc bug report effectively says that the use of
global variables tzname, timezone and daylight is not supported by glibc unless
a POSIX-style TZ setting is used (which is probably never the case in the real world).
[1] https
Change by Alexey Izbyshev :
--
nosy: +izbyshev
Alexey Izbyshev added the comment:
> * On FreeBSD, if setting posix_spawn() "attributes" or executing posix_spawn()
> "file actions" fails, posix_spawn() succeeds but the child process exits
> immediately with exit code 127 without trying to call execv(). If
Alexey Izbyshev added the comment:
> Until musl decides to provide an "#ifdef __MUSL__"-like or any way that it's
> musl, I propose to not support musl: don't use os.posix_spawn() but
> _posixsubprocess.
FYI, I'm researching how to use vfork(), focusing
Alexey Izbyshev added the comment:
> Hi,
> As a disclaimer, I'm a FreeBSD developer interested in making sure we're
> doing the right thing here. =)
> May I ask what the above assessment is based on, and specifically what we
> need to address?
Hello, Kyle! That
Alexey Izbyshev added the comment:
> One of the issue that I have with using posix_spawn() is that the *exact*
> behavior of subprocess is not properly defined by test_subprocess. Should we
> add more tests, or document that the exact behavior is "an implementation
> de
Alexey Izbyshev added the comment:
Would it make sense to use os.confstr('CS_PATH') instead of a hardcoded path,
or is identical behavior on all POSIX platforms preferred to that?
--
nosy: +izbyshev
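A minimal sketch of what that could look like (my own illustration; the fallback value is arbitrary):
```
import os

# os.confstr("CS_PATH") returns the libc-provided default PATH on POSIX
# systems; fall back to a hardcoded value where confstr is unavailable.
try:
    default_path = os.confstr("CS_PATH") or "/bin:/usr/bin"
except (AttributeError, ValueError):
    default_path = "/bin:/usr/bin"   # arbitrary fallback, not from this issue
print(default_path)
```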
Alexey Izbyshev added the comment:
Thank you for the answers, Kyle!
> I'll be preparing a patch for our posix_spawn's signal handling.
Great!
> My mistake in my setuid assessment was pointed out to me- it doesn't seem
> like a highly likely attack vector, but it
Alexey Izbyshev added the comment:
> It should be compared to the current code. Currently, _posixsubprocess uses a
> loop calling execv(). I don't think that calling posix_spawn() in a loop
> until one doesn't fail is more inefficient.
> The worst case would be
Alexey Izbyshev added the comment:
Thanks for the info on CS_PATH, Victor. IMHO it'd make sense to use the
libc-provided default PATH at least in shutil.which() since its intent is to
emulate "which" from the default shell.
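For example (a hypothetical sketch, not a proposed patch), shutil.which() already accepts an explicit search path, so the libc default could simply be passed in:
```
import os
import shutil

# Emulate `which` from the default shell: search the libc-provided default
# PATH instead of the caller's $PATH.
default_path = os.confstr("CS_PATH")
print(shutil.which("sh", path=default_path))   # e.g. /bin/sh
```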
Alexey Izbyshev added the comment:
>> * pass_fds: there is no API to mark a fd as inheritable (clear O_CLOEXEC
>> flag)
> POSIX has a bug for this [5]. It's marked fixed, but the current POSIX docs
> don't reflect the changes. The idea is to make
> p
Alexey Izbyshev added the comment:
Another problem with posix_spawn() on glibc: it doesn't report errors to the
parent process when run under QEMU user-space emulation and Windows Subsystem
for Linux. This is because starting with commit [1] (glibc 2.25) posix_spawn()
relies on ad
New submission from Alexey Izbyshev :
This issue is to propose a (complementary) alternative to the usage of
posix_spawn() in subprocess (see bpo-35537).
As mentioned by Victor Stinner in msg332236, posix_spawn() has the potential of
being faster and safer than the fork()/exec() approach