[issue40857] tempfile.TemporaryDirectory() context manager can fail to propagate exceptions generated within its context

2020-06-03 Thread Tim Reid


New submission from Tim Reid :

When an exception occurs within a tempfile.TemporaryDirectory() context
and the directory cleanup fails, the _cleanup exception_ is propagated,
not the original one. This effectively 'masks' the original exception,
and makes it impossible to catch using a simple 'try'/'except' construct.


Code like this:

  import tempfile
  import os
  import sys

  try:
      with tempfile.TemporaryDirectory() as tempdir:
          print(tempdir)
          # some code happens here

  except ArithmeticError as exc:
      print('An arithmetic error occurred: {}'.format(exc))

  print('Continuing...')

is effective at catching any ArithmeticError which occurs in the
code fragment but is not otherwise handled. However, if an error also
occurs in cleaning up the temporary directory, the exception which
occurred in the code is replaced by the cleanup exception, and is
not propagated to be caught by the 'except' clause.

For example:

  import tempfile
  import os
  import sys

  try:
      with tempfile.TemporaryDirectory() as tempdir:
          print(tempdir)
          n = 1 / 0

  except ArithmeticError as exc:
      print('An arithmetic error occurred: {}'.format(exc))

  print('Continuing...')

produces this:

  /tmp/tmp_r2sxqgb
  An arithmetic error occurred: division by zero
  Continuing...

but this:

  import tempfile
  import os
  import sys

  try:
      with tempfile.TemporaryDirectory() as tempdir:
          print(tempdir)
          os.rmdir(tempdir)  # this new line is the only difference
          n = 1 / 0

  except ArithmeticError as exc:
      print('An arithmetic error occurred: {}'.format(exc))

  print('Continuing...')

produces this:

  /tmp/tmp_yz6zyfs
  Traceback (most recent call last):
File "tempfilebug.py", line 9, in <module>
  n = 1 / 0
  ZeroDivisionError: division by zero

  During handling of the above exception, another exception occurred:

  Traceback (most recent call last):
File "tempfilebug.py", line 9, in <module>
  n = 1 / 0
File "/usr/lib/python3.6/tempfile.py", line 948, in __exit__
  self.cleanup()
File "/usr/lib/python3.6/tempfile.py", line 952, in cleanup
  _rmtree(self.name)
File "/usr/lib/python3.6/shutil.py", line 477, in rmtree
  onerror(os.lstat, path, sys.exc_info())
File "/usr/lib/python3.6/shutil.py", line 475, in rmtree
  orig_st = os.lstat(path)
  FileNotFoundError: [Errno 2] No such file or directory: '/tmp/tmp_yz6zyfs'

and the program exits with the top-level code having no chance to catch
the ZeroDivisionError and continue execution. (To catch this exception,
the top-level code would need to know to catch FileNotFoundError.)

My view is that if an exception happens within a TemporaryDirectory
context, *and* there is an exception generated as a result of the cleanup
process, the original exception is likely to be more significant, and
should be the exception which is propagated, not the one generated by
the cleanup.
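One way to work around the masking today, sketched below, is to manage the directory manually and suppress OSError during cleanup, so the body's exception is the one that propagates (this is an illustrative workaround, not the proposed stdlib fix):

```python
import contextlib
import os
import tempfile

try:
    d = tempfile.TemporaryDirectory()
    try:
        os.rmdir(d.name)  # provoke the cleanup failure, as above
        n = 1 / 0         # the "real" error we want to catch
    finally:
        # Suppress any cleanup error so it cannot mask the body's
        # exception (FileNotFoundError is a subclass of OSError).
        with contextlib.suppress(OSError):
            d.cleanup()
except ArithmeticError as exc:
    print('An arithmetic error occurred: {}'.format(exc))

print('Continuing...')
```

For what it's worth, later Python versions added a TemporaryDirectory(ignore_cleanup_errors=True) parameter (3.10+) that addresses the same problem directly.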


System info:

$ python3 --version
Python 3.6.9

$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:Ubuntu 18.04.4 LTS
Release:18.04
Codename:   bionic

--
components: Extension Modules
messages: 370689
nosy: granchester
priority: normal
severity: normal
status: open
title: tempfile.TemporaryDirectory() context manager can fail to propagate 
exceptions generated within its context
type: behavior
versions: Python 3.6

___
Python tracker 
<https://bugs.python.org/issue40857>
___
___
Python-bugs-list mailing list
Unsubscribe: 
https://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue16899] Add support for C99 complex type (_Complex) as ctypes.c_complex

2018-07-04 Thread Reid


Reid  added the comment:

I concur with rutsky.  Complex numbers are essential in the physical sciences, 
and the complex type is part of the C99 standard.  Trying to shoehorn in complex 
support via a user-defined type makes use of the built-in methods for the 
standard complex type clunky.
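The clunkiness can be illustrated with a sketch of the usual workaround (a hypothetical structure type, not a proposed API): a two-double ctypes Structure that must be manually converted to and from Python's built-in complex.

```python
import ctypes

# A C99 double _Complex has the same layout as two adjacent doubles,
# so the common workaround is a hand-rolled Structure.
class c_double_complex(ctypes.Structure):
    _fields_ = [("real", ctypes.c_double), ("imag", ctypes.c_double)]

    def to_complex(self):
        # Manual conversion step that a native ctypes.c_complex
        # would make unnecessary.
        return complex(self.real, self.imag)

z = c_double_complex(1.0, 2.0)
assert z.to_complex() == (1 + 2j)
```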

--
nosy: +rkmountainguy
versions: +Python 3.7 -Python 3.4

___
Python tracker 
<https://bugs.python.org/issue16899>
___



[issue21726] Unnecessary line in documentation

2014-06-11 Thread Reid Price

New submission from Reid Price:

https://docs.python.org/2/distutils/examples.html#pure-python-distribution-by-package

Chrome on Linux

The last (parenthetical) sentence is not needed.

  (Again, the empty string in package_dir stands for the current directory.)

because there is no package_dir option in the example.

 Preceding Text 
  ...

If you have sub-packages, they must be explicitly listed in packages, but any 
entries in package_dir automatically extend to sub-packages. (In other words, 
the Distutils does not scan your source tree, trying to figure out which 
directories correspond to Python packages by looking for __init__.py files.) 
Thus, if the default layout grows a sub-package:

root/
setup.py
foobar/
 __init__.py
 foo.py
 bar.py
 subfoo/
   __init__.py
   blah.py
then the corresponding setup script would be

from distutils.core import setup
setup(name='foobar',
  version='1.0',
  packages=['foobar', 'foobar.subfoo'],
  )

(Again, the empty string in package_dir stands for the current directory.)

--
assignee: docs@python
components: Documentation
messages: 220295
nosy: Reid.Price, docs@python
priority: normal
severity: normal
status: open
title: Unnecessary line in documentation
type: enhancement

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue21726
___



[issue14591] Value returned by random.random() out of valid range

2012-04-16 Thread Dave Reid

New submission from Dave Reid seabass...@gmail.com:

A particular combination of seed and jumpahead calls seems to force the MT 
generator into a state where it produces a random variate that is outside the 
range 0-1. The problem looks like it might be in _randommodule.c:genrand_int32, 
which produces an out-of-range value for the given state, but I don't understand 
the generator well enough to debug any further.

The attached test case produces 1.58809998297 as the 2nd variate in Python 2.7 
and 1.35540900431 as the 23rd variate in Python 2.7.3. The problem occurs on 
both Linux (CentOS 6) and Mac OS X (10.6.8), both 64-bit.
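The triggering seed/jumpahead state lives in the attached badrand.py (not reproduced here); generically, the invariant it violates can be checked with a sketch like this:

```python
import random

def check_variates(n=1000):
    # random.random() must always return a float in [0.0, 1.0).
    # The attached badrand.py drives the generator into a state
    # where this assertion would fail.
    for _ in range(n):
        x = random.random()
        assert 0.0 <= x < 1.0, x

check_variates()
```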

--
components: Interpreter Core
files: badrand.py
messages: 158406
nosy: Dave.Reid
priority: normal
severity: normal
status: open
title: Value returned by random.random() out of valid range
type: behavior
versions: Python 2.7
Added file: http://bugs.python.org/file25235/badrand.py

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue14591
___



[issue5673] Add timeout option to subprocess.Popen

2011-04-10 Thread Reid Kleckner

Reid Kleckner r...@mit.edu added the comment:

Thanks for fixing the negative timeout issue.  I assumed incorrectly that a 
negative timeout would cause it to check and return immediately if it would 
otherwise block.

As for the docs, the 3.2/3.3 issue was fixed in [[72e49cb7fcf5]].

I just added a Misc/NEWS entry for 3.3's What's New in [[9140f2363623]].

--
status: open -> closed

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue5673
___



[issue11757] test_subprocess.test_communicate_timeout_large_ouput failure on select(): negative timeout?

2011-04-10 Thread Reid Kleckner

Reid Kleckner r...@mit.edu added the comment:

I think the best behavior would be to go ahead and check one last time before 
raising the exception, so _remaining_time should turn a negative value into 0 
(assuming that a timeout value of zero does the right thing for our use case).

If people don't feel that is best, refactoring _remaining_time to incorporate 
the check in _check_timeout would also be good.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11757
___



[issue11613] test_subprocess fails under Windows

2011-03-21 Thread Reid Kleckner

Reid Kleckner r...@mit.edu added the comment:

The bot is green again as of ab2363f89058.  Thanks for the heads up.

--
status: open -> closed

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11613
___



[issue11613] test_subprocess fails under Windows

2011-03-20 Thread Reid Kleckner

Reid Kleckner r...@mit.edu added the comment:

It is necessary; WaitForSingleObject takes its argument in
milliseconds.  It will make the exception message wrong, though, which
I can fix.
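The unit handling at issue can be sketched as follows (the function name is illustrative, not the actual subprocess source):

```python
def wait_timeout_args(timeout_seconds):
    # WaitForSingleObject takes milliseconds, so the seconds-based
    # timeout must be scaled -- but the TimeoutExpired message should
    # still report the caller's value in seconds.
    timeout_ms = int(timeout_seconds * 1000)
    message = "timed out after %s seconds" % timeout_seconds
    return timeout_ms, message
```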

Reid

On Sun, Mar 20, 2011 at 1:46 PM, Santoso Wijaya rep...@bugs.python.org wrote:

 Santoso Wijaya santoso.wij...@gmail.com added the comment:

 The timeout value given to wait() is multiplied by 1000 before being passed 
 to TimeoutExpired constructor. The multiplication is unnecessary since we 
 take the input unit as time unit second.

 --
 keywords: +patch
 nosy: +santa4nt
 Added file: http://bugs.python.org/file21310/timeoutsec.patch

 ___
 Python tracker rep...@bugs.python.org
 http://bugs.python.org/issue11613
 ___


--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11613
___



[issue11504] test_subprocess failure

2011-03-16 Thread Reid Kleckner

Reid Kleckner r...@mit.edu added the comment:

:(  Thanks for spotting these.  Is there an easier way for me to be notified if 
these particular tests fail?  Some of these are not in the stable builder set.

Sorry to leave the builders broken for so long.  I just upped the timeouts to 3 
seconds.  I guess the issue is that the builders are slow and also heavily 
loaded, so processes just don't get to start up as quickly as we'd like them to.

It might be worth adding some plumbing to have the child process signal the 
parent when it's initialized, but that seems like it's going to add a whole 
bunch more complexity to the test.

Will close in a few days if there are no more broken buildbots.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11504
___



[issue5673] Add timeout option to subprocess.Popen

2011-03-14 Thread Reid Kleckner

Reid Kleckner r...@mit.edu added the comment:

I updated and committed the patch to the cpython hg repo in revision 
[c4a0fa6e687c].

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue5673
___



[issue5673] Add timeout option to subprocess.Popen

2011-03-14 Thread Reid Kleckner

Reid Kleckner r...@mit.edu added the comment:

On Mon, Mar 14, 2011 at 12:31 PM, Sridhar Ratnakumar
rep...@bugs.python.org wrote:

 Sridhar Ratnakumar sridh...@activestate.com added the comment:

 On 2011-03-14, at 9:18 AM, Reid Kleckner wrote:

 I updated and committed the patch to the cpython hg repo in revision 
 [c4a0fa6e687c].

 Does this go to the main branch (py3.3) only? It is not clear from just 
 looking at http://hg.python.org/cpython/rev/c4a0fa6e687c/

Yes, it's a new feature, so I don't think it's appropriate to backport.

Actually, I just noticed I forgot to update the doc patches.  They
should all say added in 3.3, not 3.2.

Reid

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue5673
___



[issue5673] Add timeout option to subprocess.Popen

2011-03-14 Thread Reid Kleckner

Changes by Reid Kleckner r...@mit.edu:


--
status: open -> closed

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue5673
___



[issue11504] test_subprocess failure

2011-03-14 Thread Reid Kleckner

Reid Kleckner r...@mit.edu added the comment:

I can't reproduce this.  I've tested on:
64-bit Linux (Debian lenny)
OS X 10.6
Windows Vista 32-bit

It seems reasonable to me that the interpreter should be able to
initialize and write to stdout in less than half a second, but it
seems to be failing consistently on that builder.  I'd really rather
not make the timeout longer, since it will increase testing time for
everyone.  Is there something about that builder that makes
initialization take longer?

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11504
___



[issue11504] test_subprocess failure

2011-03-14 Thread Reid Kleckner

Reid Kleckner r...@mit.edu added the comment:

I increased the timeout in [fd2b3eac6756] and the buildbot is passing now:
http://python.org/dev/buildbot/all/builders/x86%20debian%20parallel%203.x

--
status: open -> closed

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11504
___



[issue9742] Python 2.7: math module fails to build on Solaris 9

2011-01-10 Thread Reid Madsen

Reid Madsen reid.mad...@tektronix.com added the comment:

Python support,

This issue with not being able to build on Solaris 9 is easily fixed.  I have 
attached a patch with the fix for Python 2.7.

When linking with libpython-2.7.a, the linker will only extract modules that 
satisfy dependencies emanating from python.o.  There may be objects in the 
archive that are not needed to satisfy any of these dependencies and those WILL 
NOT be included in the executable.

The GNU linker supports two options that can be used to force the linker to 
include ALL objects in the archive.  Thus if you change the python link line 
from:


 $(BUILDPYTHON):  Modules/python.o $(LIBRARY) $(LDLIBRARY)
 $(LINKCC) $(LDFLAGS) $(LINKFORSHARED) -o $@ \
 Modules/python.o \
 $(BLDLIBRARY) $(LIBS) $(MODLIBS) $(SYSLIBS) $(LDLAST)

to:

 $(BUILDPYTHON): Modules/python.o $(LIBRARY) $(LDLIBRARY)
 $(LINKCC) $(LDFLAGS) $(LINKFORSHARED) -o $@ \
 Modules/python.o \
 -Wl,--whole-archive $(BLDLIBRARY) -Wl,--no-whole-archive \
 $(LIBS) $(MODLIBS) $(SYSLIBS) $(LDLAST)

Then the problem is resolved.

For compiler toolchains that do not support the --whole-archive option, you can 
change the link to link with the individual .o files and not use the archive 
library at all.

Let me know if I can be of any more help.

Reid Madsen

--
keywords: +patch
nosy: +srmadsen
Added file: http://bugs.python.org/file20339/Python-2.7.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9742
___



[issue1054041] Python doesn't exit with proper resultcode on SIGINT

2011-01-06 Thread Reid Kleckner

Reid Kleckner r...@mit.edu added the comment:

Looks good to me.  Do you need the TODO(gps)'s in there after implementing the 
behavior described?

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue1054041
___



[issue5673] Add timeout option to subprocess.Popen

2011-01-06 Thread Reid Kleckner

Reid Kleckner r...@mit.edu added the comment:

Pablo, so if I understand the issue you've run into correctly, you are using 
shell redirection to redirect stdout to a file, and then attempting to read 
from it using stdout=subprocess.PIPE.

It seems to me like this behavior is expected, because the shell will close 
its current stdout file descriptor and open a new one pointing at the file.  
When python tries to read from its end of the pipe, it complains that the fd 
has been closed.

I can avoid the problem here either by not reading stdout or by not redirecting 
to a file.
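The situation described can be reproduced with a sketch like this (a POSIX shell is assumed; the file name is illustrative):

```python
import os
import subprocess
import tempfile

with tempfile.TemporaryDirectory() as d:
    path = os.path.join(d, "out.txt")
    # The shell redirects its own stdout to the file, so the pipe that
    # stdout=PIPE created sees immediate EOF rather than the data.
    p = subprocess.Popen("echo hello > %s" % path,
                         shell=True, stdout=subprocess.PIPE)
    out, _ = p.communicate()
    print(repr(out))  # the pipe got nothing; the data went to the file
```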

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue5673
___



[issue5673] Add timeout option to subprocess.Popen

2010-09-20 Thread Reid Kleckner

Reid Kleckner r...@mit.edu added the comment:

No, sorry, I just haven't gotten around to reproducing it on Linux.

And I've even needed this functionality in the meantime, and we worked around 
it with the standard alarm trick!  =/

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue5673
___



[issue8844] Condition.wait() doesn't raise KeyboardInterrupt

2010-08-14 Thread Reid Kleckner

Reid Kleckner r...@mit.edu added the comment:

Added a patch that adds support for recomputing the timeout, plus a test for it.

Can this still make it into 3.2, or is it too disruptive at this point in the 
release process?

--
Added file: http://bugs.python.org/file18536/lock-interrupt-v4.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8844
___



[issue5673] Add timeout option to subprocess.Popen

2010-07-23 Thread Reid Kleckner

Reid Kleckner r...@mit.edu added the comment:

On Thu, Jul 22, 2010 at 9:05 AM, Alexander Belopolsky
rep...@bugs.python.org wrote:

 Alexander Belopolsky belopol...@users.sourceforge.net added the comment:

 The documentation should mention somewhere that timeout can be a float.  For 
 example, as in time.sleep docstring:

 
    sleep(seconds)

    Delay execution for a given number of seconds.  The argument may be
    a floating point number for subsecond precision.
 

 I would also like to see some discussion of supported precision.  Is it the 
 same as for time.sleep()?  Does float precision ever affect timeout 
 precision? (On systems with nanosleep it may, but probably this has no 
 practical consequences.)

I added info to wait and communicate, but left the docs for call,
check_call, check_output all saying that their arguments are the same
as Popen(...) and wait(...).

 This can be done as a future enhancement, but I would like to see 
 datetime.timedelta as an acceptable type for timeout.  This can be done by 
 adding duck-typed code in the error branch which would attempt to call 
 timeout.total_seconds() to extract a float.

I'd prefer to leave it as a future enhancement.

 Looking further, it appears that timeout can be anything that can be added to 
 a float to produce a float.  Is this an accident of implementation or a design 
 decision?  Note that as a result Fraction can be used as timeout but Decimal 
 cannot.

Implementation detail.  I don't think it should matter.

 Zero and negative timeouts are accepted by subprocess.call(), but the result 
 is not documented.  It looks like this still starts the process, but kills it 
 immediately. An alternative would be to not start the process at all or 
 disallow negative or maybe even zero timeouts altogether.  I don't mind the 
 current choice, but it should be documented at least in 
 Popen.wait(timeout=None) section.

 +        def wait(self, timeout=None, endtime=None):
             Wait for child process to terminate.  Returns returncode
             attribute.

 Docstring should describe timeout and endtime arguments.  In fact I don't see 
 endtime documented anywhere.  It is not an obvious choice
 that endtime is ignored when timeout is given.  An alternative would be to 
 terminate at min(now + timeout, endtime).

I didn't intend for the endtime parameter to be documented, it is just
a convenience for implementing communicate, which gets woken up at
various times so it is easier to remember the final deadline rather
than recompute the timeout frequently.

 +                delay = 0.0005 # 500 us - initial delay of 1 ms

 I think this should be an argument to wait() and the use of busy loop should 
 be documented.

 +                    delay = min(delay * 2, remaining, .05)

 Why .05?  It would probably be an overkill to make this another argument, but 
 maybe make it an attribute of Popen, say self._max_busy_loop_delay or a 
 shorter descriptive name of your choice.  If you start it with '_', you don't 
 need to document it, but users may be able to mess with it if they suspect 
 that 0.05 is not the right choice.

*Points to whoever implemented it for Thread.wait(timeout=...)*.  If
it was good enough for that (until we got real lock acquisitions with
timeouts), then I think it's good enough for this.
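The busy-loop delay being discussed can be sketched as a simplified model (not the actual subprocess implementation):

```python
import time

def poll_with_backoff(poll, timeout, max_delay=0.05):
    # poll() returns True once the child has exited.  The sleep starts
    # at 500 us and doubles each round, capped by both the remaining
    # time and max_delay (the 0.05 constant questioned above).
    endtime = time.time() + timeout
    delay = 0.0005
    while not poll():
        remaining = endtime - time.time()
        if remaining <= 0:
            return False
        time.sleep(min(delay, remaining))
        delay = min(delay * 2, max_delay)
    return True
```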

 +                endtime = time.time() + timeout

 Did you consider using datetime module instead of time module here?  (I know, 
 you still need time.sleep() later, but you won't need to worry about variable 
 precision of time.time().)

How does the datetime module help here?  It seems like time.time uses
roughly the same time sources that datetime.datetime.now does.

One other thing I'm worried about here is that time.time can be
non-increasing if the system clock is adjusted.  :(

Maybe someone should file a feature request for a monotonic clock.

Reid

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue5673
___



[issue5673] Add timeout option to subprocess.Popen

2010-07-21 Thread Reid Kleckner

Reid Kleckner r...@mit.edu added the comment:

When I ported the patch I tested on trunk + Windows to py3k, I messed that 
stuff up.  I also had to fix a bunch of str vs. bytes issues this time around.  
On Windows, it uses TextIOWrapper to do the encoding, and on POSIX it uses 
os.write, so I have to do the encoding myself.  :p

This patch has been tested on Windows Vista and Mac OS X 10.5.

--
Added file: http://bugs.python.org/file18101/subprocess-timeout-py3k-v7.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue5673
___



[issue9079] Make gettimeofday available in time module

2010-07-21 Thread Reid Kleckner

Reid Kleckner r...@mit.edu added the comment:

pytime.h looks like it got pasted into the file twice.  Other than that, it 
looks good to me and the tests pass on OS X here.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9079
___



[issue9079] Make gettimeofday available in time module

2010-07-21 Thread Reid Kleckner

Reid Kleckner r...@mit.edu added the comment:

I think you used 'struct timeval *' in the function definition instead of 
'_PyTimeVal *'.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9079
___



[issue5673] Add timeout option to subprocess.Popen

2010-07-20 Thread Reid Kleckner

Reid Kleckner r...@mit.edu added the comment:

Uh oh, that was one of the fixes I made when I tested it on Windows.  I may 
have failed to pick up those changes when I ported to py3k.  I'll check it out 
tonight.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue5673
___



[issue6643] Throw away more radioactive locks that could be held across a fork in threading.py

2010-07-18 Thread Reid Kleckner

Changes by Reid Kleckner r...@mit.edu:


--
assignee:  -> rnk
keywords: +needs review -patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6643
___



[issue6642] returning after forking a child thread doesn't call Py_Finalize

2010-07-18 Thread Reid Kleckner

Changes by Reid Kleckner r...@mit.edu:


--
assignee:  -> rnk
dependencies: +Throw away more radioactive locks that could be held across a 
fork in threading.py

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6642
___



[issue8844] Condition.wait() doesn't raise KeyboardInterrupt

2010-07-18 Thread Reid Kleckner

Reid Kleckner r...@mit.edu added the comment:

Waiting until the portability hacks for gettimeofday make it into core Python.

--
dependencies: +Make gettimeofday available in time module

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8844
___



[issue2927] expose html.parser.unescape

2010-07-18 Thread Reid Kleckner

Reid Kleckner r...@mit.edu added the comment:

It's using the old Python 2 unicode string literal syntax.

It also doesn't keep to 80 cols.

I'd also rather continue using a lazily initialized dict instead of catching a 
KeyError for &apos;.

I also feel that with the changes to Unicode in py3k, the cp1252 stuff won't 
work as desired and should be cut.

===

Is anyone still interested in html.unescape or html.escape anyway?  Every web 
framework seems to have its own support routines already.  Otherwise I'd 
recommend close -> wontfix.
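The lazily initialized dict pattern favored above looks roughly like this (a sketch with illustrative names, not the html.parser internals):

```python
import html.entities

_entity_map = None

def lookup_entity(name):
    # Build the mapping once, on first use, instead of paying for it
    # at import time or catching KeyError on every lookup.
    global _entity_map
    if _entity_map is None:
        _entity_map = dict(html.entities.html5)
    return _entity_map.get(name)
```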

--
nosy: +rnk

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue2927
___



[issue5872] New C API for declaring Python types

2010-07-18 Thread Reid Kleckner

Changes by Reid Kleckner r...@mit.edu:


--
nosy: +rnk
versions: +Python 3.2 -Python 3.1

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue5872
___



[issue5673] Add timeout option to subprocess.Popen

2010-07-17 Thread Reid Kleckner

Reid Kleckner r...@mit.edu added the comment:

I don't imagine this is going into 2.7.0 at this point, so I ported the patch 
to py3k.  I also added support to check_output for the timeout parameter and 
added docs for all of the methods/functions that now take a timeout in the 
module.

The communicate docs include the pattern of:
    try:
        outs, errs = p.communicate(timeout=15)
    except subprocess.TimeoutExpired:
        p.kill()
        outs, errs = p.communicate()

And check_output uses it.

--
Added file: http://bugs.python.org/file18042/subprocess-timeout-py3k-v6.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue5673
___



[issue9079] Make gettimeofday available in time module

2010-07-17 Thread Reid Kleckner

Reid Kleckner r...@mit.edu added the comment:

I think you forgot to svn add pytime.c before making the diff.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9079
___



[issue5673] Add timeout option to subprocess.Popen

2010-07-16 Thread Reid Kleckner

Reid Kleckner r...@mit.edu added the comment:

I forgot that I had to tweak the test as well as subprocess.py.  I did a 
.replace('\r', ''), but universal newlines is better.

Looking at the open questions I had about the Windows threads, I think it'll be 
OK if the user follows the pattern of:
    proc = subprocess.Popen(...)
    try:
        stdout, stderr = proc.communicate(timeout=...)
    except subprocess.TimeoutExpired:
        proc.kill()
        stdout, stderr = proc.communicate()

If the child process is deadlocked and the user doesn't kill it, then the file 
descriptors will be leaked and the daemon threads will also live on forever.  I 
*think* that's the worst that could happen.  Or they could of course wakeup 
during interpreter shutdown and cause tracebacks, but that's highly unlikely, 
and already possible currently.

Anyway, I would say we can't avoid leaking the fds in that situation, because 
we can't know if the user will eventually ask us for the data or not.  If they 
want to avoid the leak, they can clean up after themselves.

What's the next step for getting this in?  Thanks to those who've taken time to 
look at this so far.

--
Added file: http://bugs.python.org/file18028/subprocess-timeout-v5.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue5673
___



[issue9079] Make gettimeofday available in time module

2010-07-16 Thread Reid Kleckner

Reid Kleckner r...@mit.edu added the comment:

Right, it's one of the peculiarities of archive files (I think).  When none of 
an object file's symbols are used from the main program, the object file is 
dropped on the floor, i.e. not included.  This has bizarre consequences in C++ 
with static initializers, which get dropped.

On Windows, the PyAPI_FUNC macros should prevent the linker from stripping the 
datetime stuff.

Jeff Yasskin says you should create a noop function in your object file and 
call it from PyMain for force linkage of the object file you're building.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9079
___



[issue9079] Make gettimeofday available in time module

2010-07-16 Thread Reid Kleckner

Reid Kleckner r...@mit.edu added the comment:

I'd really rather not try to rely on module loading from a threading primitive.  :)

I think if you follow Antoine's suggestion of adding _PyTime_Init (which does 
nothing in the body other than a comment) it should work fine.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9079
___



[issue5673] Add timeout option to subprocess.Popen

2010-07-14 Thread Reid Kleckner

Reid Kleckner r...@mit.edu added the comment:

I went through the trouble of building and testing Python on Windows Vista, and 
with some small modifications I got the tests I added to pass.

Here's an updated patch.  I'm still not really sure how those threads work on 
Windows, so I'd rather leave that TODO in until someone with Windows expertise 
checks it.

--
Added file: http://bugs.python.org/file17994/subprocess-timeout-v4.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue5673
___



[issue6033] LOOKUP_METHOD and CALL_METHOD optimization

2010-07-14 Thread Reid Kleckner

Reid Kleckner r...@mit.edu added the comment:

Sorry, I was just posting it so Benjamin could see what this bought us.  I'm 
not pushing to get this in CPython.

The results are for JITed code.  I forget what the interpreted results are.  I 
think they are good for the microbenchmarks, but not as good for the macro.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6033
___



[issue6033] LOOKUP_METHOD and CALL_METHOD optimization

2010-07-13 Thread Reid Kleckner

Reid Kleckner r...@mit.edu added the comment:

I have an patch for unladen-swallow out for review here:
http://codereview.appspot.com/160063/show

It resolves the correctness issues I mentioned previously by emitting guards if 
necessary.  If the type is predictable and uses slots, then we don't need to 
check the instance dict.

It gives a 5% speedup on the unpickle benchmark.  Presumably the other 
benchmarks do not do as many method calls.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6033
___



[issue6643] Throw away more radioactive locks that could be held across a fork in threading.py

2010-07-12 Thread Reid Kleckner

Reid Kleckner r...@mit.edu added the comment:

I completely agree, but the cat is out of the bag on this one.  I don't see how 
we could get rid of fork until Py4K, and even then I'm sure there will be 
people who don't want to see it go, and I'd rather not spend my time arguing 
this point.

The only application of fork that doesn't use exec that I've heard of is 
pre-forked Python servers.  But those don't seem like they would be very 
useful, since with refcounting the copy-on-write behavior doesn't get you very 
many wins.

The problem that this bandaid solves for me is that test_threading.py already 
tests thread+fork behaviors, and can fail non-deterministically.

This problem was exacerbated while I was working on making the compilation 
thread.

I don't think we can un-support fork and threads in the near future either, 
because subprocess.py uses fork, and libraries can use fork behind the user's 
back.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6643
___



[issue8844] Condition.wait() doesn't raise KeyboardInterrupt

2010-07-12 Thread Reid Kleckner

Reid Kleckner r...@mit.edu added the comment:

Alternatively, do you think it would be better to ignore interrupts when a 
timeout is passed?  If a timeout is passed, the lock acquire will eventually 
fail in a deadlock situation, and the signal will be handled in the eval loop.

However, if the timeout is sufficiently long, this is still a problem.

I'd prefer to do that or use gettimeofday from _time than leave this as is.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8844
___



[issue9079] Make gettimeofday available in time module

2010-07-12 Thread Reid Kleckner

Reid Kleckner r...@mit.edu added the comment:

The patch looks good to me FWIW.

I would be interested in using this perhaps in issue8844, which involves lock 
timeouts.  It may be true that the POSIX API uses nanoseconds, but pythreads 
only exposes microsecond precision.

In order to use it from the thread module, it needs to be moved into Python/.  
My best guess for where to put it would be sysmodule.c, since that 
already wraps other system calls.  Or, you know, we could live a little and 
make a new file for it.  :)

--
nosy: +rnk

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue9079
___



[issue7576] Avoid warnings in PyModuleDef_HEAD_INIT

2010-07-11 Thread Reid Kleckner

Reid Kleckner r...@mit.edu added the comment:

This patch looks good to me, after digging through the relevant module code.

I was confused though for a bit as to why PyModuleDef is a PyObject with a NULL 
type.  It turns out that import.c wants to keep them in a dictionary, so it 
needs to be able to cast to PyObject* and to access the refcount.  It never 
needs the type, though, so it's safe to leave it NULL.  I think it might be 
worth adding comments explaining that in this patch or another.

--
nosy: +rnk

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue7576
___



[issue6643] Throw away more radioactive locks that could be held across a fork in threading.py

2010-07-11 Thread Reid Kleckner

Changes by Reid Kleckner r...@mit.edu:


--
title: joining a child that forks can deadlock in the forked child process - 
Throw away more radioactive locks that could be held across a fork in 
threading.py

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6643
___



[issue8844] Condition.wait() doesn't raise KeyboardInterrupt

2010-07-10 Thread Reid Kleckner

Reid Kleckner r...@mit.edu added the comment:

Here's a patch that makes Python-level lock acquisitions interruptible for 
py3k.  There are many users of the C-level lock API, most of whom are not set 
up to deal with lock acquisition failure.  I decided to make a new API function 
and leave the others alone.

If possible, I think this should go out with 3.2.

In that case, I was wondering if I should merge PyThread_acquire_lock_timed 
with my new PyThread_acquire_lock_timed_intr, since PyThread_acquire_lock_timed 
wasn't available in 3.1.  Although it did go out in 2.7, we don't promise C API 
compatibility with the 2.x series, so I don't think it matters.

I've tested this patch on Mac OS X and Linux.  The whole test suite passes on 
both, along with the test that I added to test_threadsignals.py.

I added a noop compatibility wrapper to thread_nt.h, but I haven't tested it or 
built it.  When I get around to testing/fixing the subprocess patch on Windows, 
I'll make sure this works and the test is skipped.

--
keywords: +patch
Added file: http://bugs.python.org/file17929/lock-interrupt.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8844
___



[issue8844] Condition.wait() doesn't raise KeyboardInterrupt

2010-07-10 Thread Reid Kleckner

Reid Kleckner r...@mit.edu added the comment:

Here is a new version of a patch that updates recursive locks to have the same 
behavior.  The pure Python RLock implementation should be interruptible by 
virtue of the base lock acquire primitive being interruptible.

I've also updated the relevant documentation I could find.  I've surely missed 
some, though.

I also got rid of the _intr version of lock acquisition and simply added a new 
parameter to the _timed variant.  The only two callers of it are the ones I 
updated in the _thread module.

--
Added file: http://bugs.python.org/file17931/lock-interrupt.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8844
___



[issue6643] joining a child that forks can deadlock in the forked child process

2010-07-10 Thread Reid Kleckner

Reid Kleckner r...@mit.edu added the comment:

Here's an updated patch for py3k (3.2).  The test still fails without the fix, 
and passes with the fix.

Thinking more about this, I'll try summarizing the bug more coherently:

When the main thread joins the child threads, it acquires some locks.  If a 
fork in a child thread occurs while those locks are held, they remain locked in 
the child process.  My solution is to do here what we do elsewhere in CPython: 
abandon radioactive locks and allocate fresh ones.

--
Added file: http://bugs.python.org/file17932/thread-fork-join.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6643
___



[issue8844] Condition.wait() doesn't raise KeyboardInterrupt

2010-07-10 Thread Reid Kleckner

Reid Kleckner r...@mit.edu added the comment:

Oops, copy/paste oversight.  =/  I wrote a test to verify that it handles 
signals, and then retries the lock acquire.

--
Added file: http://bugs.python.org/file17935/lock-interrupt.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8844
___



[issue6643] joining a child that forks can deadlock in the forked child process

2010-07-10 Thread Reid Kleckner

Reid Kleckner r...@mit.edu added the comment:

I realized that in a later fix for unladen-swallow, we also cleared the 
condition variable waiters list, since it has radioactive synchronization 
primitives in it as well.

Here's an updated patch that simplifies the fix by just using __init__() to 
completely reinitialize the condition variables and adds a test.

This corresponds to unladen-swallow revisions r799 and r834.

--
Added file: http://bugs.python.org/file17936/thread-fork-join.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6643
___



[issue8844] Condition.wait() doesn't raise KeyboardInterrupt

2010-07-10 Thread Reid Kleckner

Reid Kleckner r...@mit.edu added the comment:

Also, thanks for the quick reviews!

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8844
___



[issue8844] Condition.wait() doesn't raise KeyboardInterrupt

2010-05-28 Thread Reid Kleckner

Reid Kleckner r...@mit.edu added the comment:

I'd like to fix it, but I don't know if I'll be able to in time.  It was 
something that bugged me while running the threading tests while working on 
Unladen.

I'm imagining (for POSIX platforms) adding some kind of check for signals when 
the system call returns EINTR.  If the signal handler raises an exception, like 
an interrupt should raise a KeyboardInterrupt, we can just give a different 
return code and propagate the exception.

It also seems like this behavior can be extended gradually to different 
platforms, since I don't have the resources to change and test every threading 
implementation.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8844
___



[issue5673] Add timeout option to subprocess.Popen

2010-02-01 Thread Reid Kleckner

Reid Kleckner r...@mit.edu added the comment:

 - why do you say Thread.join() uses a busy loop? is it because it uses
   Condition.wait()? If so, this will be solved in py3k by issue7316 (which you
   are welcome to review). Otherwise, I think there should be an upper bound on
   the sleeping granularity (say 2 seconds).

Yes, that's what I was referring to.  I'm glad to hear the situation will
improve in the future!

 - the if 'timeout' in kwargs dance is a bit complicated. Why not simply
   kwargs.pop('timeout', None)?

Good call, done.

 - if it times out, communicate() should raise a specific exception. Bonus 
 points
   if the exception holds the partial output as attributes (that's what we do 
 for
   non-blocking IO in py3k), but if it's too difficult we can leave that out. I
   don't think returning None would be very good.

I agree.  Does subprocess.TimeoutExpired sound good?

It won't be possible with the current implementation to put the partial output
in the exception, because read blocks.  For example, in the Windows threaded
implementation, there's a background thread that just calls self.stdout.read(),
which blocks until its finished.

 - for consistency, other methods should probably raise the same exception. I
   think we can leave out the more complex scenarios such as timing out but
   still processing the beginning of the output.

What do you mean still processing?  I agree, they should all throw the same
exception.  I think call and check_call should clean up after themselves by
killing the child processes they create, while communicate and wait should leave
that to the user.

I'm imagining something like this for communicate:

try:
(stdout, stderr) = p.communicate(timeout=123)
except subprocess.TimeoutExpired:
p.kill()
(stdout, stderr) = p.communicate()  # Should not block long

And nothing special for check_call(cmd=[...], timeout=123).

--
Added file: http://bugs.python.org/file16093/subprocess-timeout.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue5673
___



[issue6033] LOOKUP_METHOD and CALL_METHOD optimization

2009-11-24 Thread Reid Kleckner

Changes by Reid Kleckner r...@mit.edu:


--
nosy: +rnk

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6033
___



[issue6033] LOOKUP_METHOD and CALL_METHOD optimization

2009-11-24 Thread Reid Kleckner

Reid Kleckner r...@mit.edu added the comment:

One thing I was wondering about the current patch is what about objects
that have attributes that shadow methods?  For example:

class C(object):
def foo(self):
return 1
c = C()
print c.foo()
c.foo = lambda: 2
print c.foo()

Shouldn't the above print 1 and 2?  With the current patch, it seems
that you might still print 1.

There's also the possible performance drawback where you're loading
builtin C methods, so the optimization fails, but you end up calling
_PyType_Lookup twice.  :(

I'm doing the same optimization for unladen swallow, and these were some
of the things I ran into.  I think I'm going to write a
PyObject_GetMethod that tries to get a method without binding it, but if
it can't for any reason, it behaves just like PyObject_GetAttr and sets
a status code.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6033
___



[issue1068268] subprocess is not EINTR-safe

2009-10-12 Thread Reid Kleckner

Changes by Reid Kleckner r...@mit.edu:


--
nosy: +rnk

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue1068268
___



[issue6800] os.exec* raises OSError: [Errno 45] Operation not supported in a multithreaded application

2009-08-28 Thread Reid Kleckner

New submission from Reid Kleckner r...@mit.edu:

The test case is attached.  On Mac OS X (and presumably FreeBSD, which
has the same behavior) when you try to exec from a process that has any
other threads in it, you get an OSError, Operation not supported. 
Here's the output on my MacBook:

Traceback (most recent call last):
  File "daemon_exec.py", line 16, in <module>
    main()
  File "daemon_exec.py", line 13, in main
    os.execl('echo', 'hello world')
  File
"/opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/os.py",
line 312, in execl
    execv(file, args)
OSError: [Errno 45] Operation not supported

And on my Linux box:

hello world

Here's a similar bug that OCaml had to deal with:
http://caml.inria.fr/mantis/view.php?id=4666

I think it's reasonable for Python to declare this to be a limitation of
the OS, but considering that the other thread could be a daemon thread
that the user doesn't really care about, I think it would be reasonable
for Python to kill the other threads in the process before execing. 
That's what happens on Linux, anyway.

I ran into this problem while trying to add a persistent background
compilation thread to unladen swallow, and wondered if having any other
threads would trigger the same exception.

It's tempting to just write this off, but I figured it should be
documented or left open as a low priority defect.

--
assignee: ronaldoussoren
components: Macintosh
files: daemon_exec.py
messages: 92054
nosy: rnk, ronaldoussoren
severity: normal
status: open
title: os.exec* raises OSError: [Errno 45] Operation not supported in a 
multithreaded application
type: behavior
versions: Python 2.6
Added file: http://bugs.python.org/file14798/daemon_exec.py

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6800
___



[issue6800] os.exec* raises OSError: [Errno 45] Operation not supported in a multithreaded application

2009-08-28 Thread Reid Kleckner

Reid Kleckner r...@mit.edu added the comment:

Supposedly this bug also affects FreeBSD, but I can't verify it.  I'd
say the problem isn't going away, at least not for that platform, but I
don't feel like it's worth bending over backwards to deal with it either.

As far as it concerns unladen swallow, we'll bring down our background
thread by another means.  Unfortunately, there's no way to join
pythreads, so I have to add a hack that just retries the execv call if
errno == EOPNOTSUPP with an eventual timeout.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6800
___



[issue6642] returning after forking a child thread doesn't call Py_Finalize

2009-08-04 Thread Reid Kleckner

New submission from Reid Kleckner r...@mit.edu:

I attached a test case to reproduce.

Here is what it does:
- The main thread in the parent process starts a new thread and waits
for it.
- The child thread forks.
- The child process creates a daemon thread, and returns.
- The parent process (in the thread that forked) calls os.waitpid(childpid).

What should happen is that the forked child process should terminate
because it shouldn't wait for the daemon thread, and
os.waitpid(childpid) should return after that, and then the main thread
should return from thread.join().

What happens is that because it was a child thread that forked, the C
stack starts inside of the pthread wrapper (or equivalent) instead of
main.  So when child process returns, it doesn't know that it is now the
main thread, and it doesn't execute Py_Finalize.  Furthermore, pthreads
doesn't call sys_exit_group because it thinks that it is a lone thread
returning, and it doesn't want to terminate the entire process group. 
When you look at it with 'ps f', this is what it looks like:

24325 pts/3    Ss     0:01 bash
 4453 pts/3    Sl     0:00  \_ ./python thread_fork_hang.py
 4459 pts/3    Zl     0:07  |   \_ [python] <defunct>
 4467 pts/3    R+     0:00  \_ ps f

Here's the stack traces from the parent process:
(gdb) bt 
#0  0x77bd0991 in sem_wait () from /lib/libpthread.so.0
#1  0x00587abd in PyThread_acquire_lock (lock=0x12bb680,
waitflag=1) at ../../unladen2/Python/thread_pthread.h:349
#2  0x005b1660 in lock_PyThread_acquire_lock
(self=0x77f37150, args=<value optimized out>)
at ../../unladen2/Modules/threadmodule.c:46
#3  0x0055b89d in _PyEval_CallFunction (stack_pointer=0x128ff20,
na=<value optimized out>, nk=0) at ../../unladen2/Python/eval.cc:4046
#4  0x0055644c in PyEval_EvalFrame (f=0x128fd60) at
../../unladen2/Python/eval.cc:2518
#5  0x0055b225 in PyEval_EvalCodeEx (co=0x77ef9670,
globals=0x1, locals=0x2, args=0x123bbd8, argcount=1, kws=0x123bbe0,
kwcount=0, 
defs=0x77e540e8, defcount=1, closure=0x0) at
../../unladen2/Python/eval.cc:3093
#6  0x0055b7b0 in _PyEval_CallFunction (stack_pointer=0x123bbe0,
na=1, nk=0) at ../../unladen2/Python/eval.cc:4188
#7  0x0055644c in PyEval_EvalFrame (f=0x123ba40) at
../../unladen2/Python/eval.cc:2518
#8  0x0055b225 in PyEval_EvalCodeEx (co=0x77efea30,
globals=0x1, locals=0x2, args=0x12038c8, argcount=1, kws=0x12038d0,
kwcount=0, 
defs=0x77e54368, defcount=1, closure=0x0) at
../../unladen2/Python/eval.cc:3093
#9  0x0055b7b0 in _PyEval_CallFunction (stack_pointer=0x12038d0,
na=1, nk=0) at ../../unladen2/Python/eval.cc:4188
#10 0x0055644c in PyEval_EvalFrame (f=0x1203750) at
../../unladen2/Python/eval.cc:2518
#11 0x0055b225 in PyEval_EvalCodeEx (co=0x77f55d50,
globals=0x0, locals=0x0, args=0x0, argcount=0, kws=0x0, kwcount=0,
defs=0x0, 
defcount=0, closure=0x0) at ../../unladen2/Python/eval.cc:3093
#12 0x0055bc02 in PyEval_EvalCode (co=0x12bb680, globals=0x80,
locals=0x0) at ../../unladen2/Python/eval.cc:552
#13 0x0057deb1 in PyRun_FileExFlags (fp=0x1121260,
filename=0x7fffe6be "thread_fork_hang.py", start=<value optimized out>, 
globals=0x10fa010, locals=0x10fa010, closeit=1,
flags=0x7fffe290) at ../../unladen2/Python/pythonrun.c:1359
#14 0x0057e167 in PyRun_SimpleFileExFlags (fp=0x1121260,
filename=0x7fffe6be "thread_fork_hang.py", closeit=1,
flags=0x7fffe290)
at ../../unladen2/Python/pythonrun.c:955
#15 0x004d8954 in Py_Main (argc=-134459232, argv=<value optimized out>) at ../../unladen2/Modules/main.c:695
#16 0x76cdf1c4 in __libc_start_main () from /lib/libc.so.6
#17 0x004d7ae9 in _start ()
(gdb) thread 2
[Switching to thread 2 (Thread 0x40800950 (LWP 4458))]
#0  0x77bd234f in waitpid () from /lib/libpthread.so.0
(gdb) bt
#0  0x77bd234f in waitpid () from /lib/libpthread.so.0
#1  0x005b6adf in posix_waitpid (self=<value optimized out>,
args=<value optimized out>) at ../../unladen2/Modules/posixmodule.c:5797
#2  0x0055b89d in _PyEval_CallFunction (stack_pointer=0x129cff8,
na=<value optimized out>, nk=0) at ../../unladen2/Python/eval.cc:4046
#3  0x0055644c in PyEval_EvalFrame (f=0x129ce60) at
../../unladen2/Python/eval.cc:2518
#4  0x0055b225 in PyEval_EvalCodeEx (co=0x77f558f0,
globals=0x0, locals=0x0, args=0x77f98068, argcount=0, kws=0x1239f70, 
kwcount=0, defs=0x0, defcount=0, closure=0x0) at
../../unladen2/Python/eval.cc:3093
#5  0x005d98fc in function_call (func=0x77eefc80,
arg=0x77f98050, kw=0x1286d20) at ../../unladen2/Objects/funcobject.c:524
#6  0x004dc68d in PyObject_Call (func=0x77eefc80,
arg=0x77f98050, kw=0x1286d20) at ../../unladen2/Objects/abstract.c:2487
#7  0x005549d0 in _PyEval_CallFunctionVarKw
(stack_pointer=0x129ce08, num_posargs=<value optimized out>, num_kwargs=0, 
flags=value optimized

[issue6643] joining a child that forks can deadlock in the forked child process

2009-08-04 Thread Reid Kleckner

New submission from Reid Kleckner r...@mit.edu:

This bug is similar to the importlock deadlock, and it's really part of
a larger problem that you should release all locks before you fork. 
However, we can fix this in the threading module directly by freeing and
resetting the locks on the main thread after a fork.

I've attached a test case that inserts calls to sleep at the right
places to make the following occur:
- Main thread spawns a worker thread.
- Main thread joins worker thread.
- To join, the main thread acquires the lock on the condition variable
(worker.__block.acquire()).
== switch to worker ==
- Worker thread forks.
== switch to child process ==
- Worker thread, which is now the only thread in the process, returns.
- __bootstrap_inner calls self.__stop() to notify any other threads
waiting for it that it returned.
- __stop() tries to acquire self.__block, which has been left in an
acquired state, so the child process hangs here.
== switch to worker in parent process ==
- Worker thread calls os.waitpid(), which hangs, since the child never
returns.

So there's the deadlock.

I think I should be able to fix it just by resetting the condition
variable lock and any other locks hanging off the only thread left
standing after the fork.

--
components: Library (Lib)
files: forkjoindeadlock.py
messages: 91265
nosy: rnk
severity: normal
status: open
title: joining a child that forks can deadlock in the forked child process
versions: Python 2.6
Added file: http://bugs.python.org/file14647/forkjoindeadlock.py

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6643
___



[issue6642] returning after forking a child thread doesn't call Py_Finalize

2009-08-04 Thread Reid Kleckner

Changes by Reid Kleckner r...@mit.edu:


--
versions: +Python 2.6 -Python 3.2

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6642
___



[issue6643] joining a child that forks can deadlock in the forked child process

2009-08-04 Thread Reid Kleckner

Reid Kleckner r...@mit.edu added the comment:

Here's a patch for 3.2 which adds the fix and a test case.  I also
verified that the problem exists in 3.1, 2.7, and 2.6 and backported the
patch to those versions, but someone should review this one before I
upload those.

--
keywords: +patch
versions: +Python 2.7, Python 3.1, Python 3.2
Added file: http://bugs.python.org/file14648/forkdeadlock.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6643
___



[issue6642] returning after forking a child thread doesn't call Py_Finalize

2009-08-04 Thread Reid Kleckner

Reid Kleckner r...@mit.edu added the comment:

Here's a patch against 2.6 for one way to fix it.  I imagine it has
problems, but I wanted to throw it out there as a straw man.

This patch builds on the patch for http://bugs.python.org/issue6643
since some of the test cases will occasionally deadlock without it.

--
keywords: +patch
Added file: http://bugs.python.org/file14653/finalize-patch.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6642
___



[issue5673] Add timeout option to subprocess.Popen

2009-04-02 Thread Reid Kleckner

New submission from Reid Kleckner r...@mit.edu:

I was looking for a way to run a subprocess with a timeout.  While there
are a variety of solutions on Google, I feel like this functionality
should live in the standard library module.  Apparently Guido thought
this would be good in 2005 but no one did it:
http://mail.python.org/pipermail/python-dev/2005-December/058784.html

I'd be willing to implement it, but I'm not a core dev and I'd need
someone to review it.  I'll start working on a patch now, and if people
think this is a good idea I'll submit it for review.

My plan was to add a 'timeout' optional keyword argument to wait() and
propagate that backwards through communicate(), call(), and
check_call().  Does anyone object to this approach?
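For reference, the shape of the API that grew out of this proposal (and was eventually adopted in Python 3.3) can be sketched as follows, using the timeout argument and TimeoutExpired exception names that come up later in the thread:

```python
import subprocess
import sys

# Child that sleeps longer than we are willing to wait.
p = subprocess.Popen([sys.executable, "-c", "import time; time.sleep(30)"])
try:
    p.wait(timeout=0.5)          # raises instead of blocking forever
except subprocess.TimeoutExpired:
    p.kill()                     # the caller decides how to clean up
    p.wait()
print(p.returncode)
```

On POSIX the returncode is negative here, reflecting that the child was killed by a signal rather than exiting normally.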

--
components: Library (Lib)
messages: 85256
nosy: rnk
severity: normal
status: open
title: Add timeout option to subprocess.Popen
type: feature request
versions: Python 2.7, Python 3.1

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue5673
___



[issue5673] Add timeout option to subprocess.Popen

2009-04-02 Thread Reid Kleckner

Reid Kleckner r...@mit.edu added the comment:

Ugh.  I made the assumption that there must be some natural and easy way
to wait for a child process with a timeout in C, and it turns out it's
actually a hard problem, which is why this isn't already implemented.

So my initial hack for solving this problem in my own project was to run
the subprocess, spawn a thread to wait on it, and then use the thread's
wait method, which does accept a timeout.  On further inspection, it
turns out that Thread.wait() actually uses a busy loop to implement the
timeout, which is what I was trying to avoid.  If it's okay to have a
busy loop there, is it okay to have one in Popen.wait()?  Obviously, you
don't need to busy loop if there is no timeout.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue5673
___



[issue5673] Add timeout option to subprocess.Popen

2009-04-02 Thread Reid Kleckner

Reid Kleckner r...@mit.edu added the comment:

I'd like some feedback on this patch.  Is the API acceptable?

Would it be better to throw an exception in wait() instead of returning
None?  

What should communicate() return if it times out?  I can't decide if it
should try to return partial output, return None, or raise an exception.
 If it doesn't return partial output, that output is not recoverable. 
Maybe it should go into the exception object.  On the other hand, the
way that communicate() is implemented with threads on Windows makes it
hard to interrupt the file descriptor read and return partial output.
 For that matter, I'm not even sure how to stop those extra threads, as
you can see in the patch.  If anyone can think of a way to avoid using
threads entirely, that would be even better.

What should call() and check_call() return when they timeout?  If they
return None, which is the current returncode attribute, there is no way
of interacting with the process.  They could throw an exception with a
reference to the Popen object.

--
keywords: +patch
Added file: http://bugs.python.org/file13592/subprocess-timeout.patch

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue5673
___



[issue2695] Ignore case when checking algorithm in urllib2

2008-05-04 Thread david reid

david reid [EMAIL PROTECTED] added the comment:

Looks like a sensible, simple fix to me :-)

__
Tracker [EMAIL PROTECTED]
http://bugs.python.org/issue2695
__



[issue2756] urllib2 add_header fails with existing unredirected_header

2008-05-04 Thread david reid

New submission from david reid [EMAIL PROTECTED]:

In urllib2, when reusing a Request, calling add_header doesn't work
when an unredirected_header has been added. 

A good example (and the one that caught me out) is content-type. When
making a POST request with no content-type set, the current code sets the
content-type as an unredirected_header to
'application/x-www-form-urlencoded' (line 1034 of urllib2.py), but in a
subsequent request, setting the content type via add_header is
ignored, because unredirected_headers are appended after the headers.

A possible solution is to check whether the header being added already
exists in the request's unredirected headers and remove it if it does.
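That fix could be sketched like this, using the Python 3
urllib.request.Request class (the unredirected_hdrs attribute and the
str.capitalize() name normalization are urllib internals, and
DedupRequest is a hypothetical name for illustration):

```python
from urllib.request import Request

class DedupRequest(Request):
    """Sketch of the proposed fix: adding a header first removes any
    unredirected header of the same name, so the new value wins."""
    def add_header(self, key, val):
        # Request normalizes header names with str.capitalize() internally.
        self.unredirected_hdrs.pop(key.capitalize(), None)
        super().add_header(key, val)

req = DedupRequest("http://example.com/", data=b"payload")
# What urllib does on the first POST when no Content-Type is given:
req.add_unredirected_header("Content-Type",
                            "application/x-www-form-urlencoded")
# On reuse, the caller's add_header should now take effect:
req.add_header("Content-Type", "application/json")
```

After this, the stale unredirected Content-Type is gone and only the
caller's value remains in the request's headers.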

--
components: Library (Lib)
messages: 66213
nosy: zathras
severity: normal
status: open
title: urllib2 add_header fails with existing unredirected_header
type: behavior
versions: Python 2.5

__
Tracker [EMAIL PROTECTED]
http://bugs.python.org/issue2756
__



[issue1368312] fix for scheme identification in urllib2?

2008-05-04 Thread david reid

david reid [EMAIL PROTECTED] added the comment:

I've run into this as an issue with a server that replies with both
digest and basic auth.

When parsing the keys in the header it is possible to detect the start of
a different auth method, so I'd suggest parsing the WWW-Authenticate
line and returning a dict for each type of auth containing the
appropriate key/value pairs.

This approach should allow every auth type to be catered for.
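A rough sketch of that idea (parse_www_authenticate is a hypothetical
helper; real headers can also contain token68 values and quoted commas,
which this simple regex does not handle):

```python
import re

def parse_www_authenticate(header):
    """Split a WWW-Authenticate header offering several auth schemes
    into a dict mapping each scheme name to its parameters."""
    schemes = {}
    current = None
    # Each token is either key="value", key=value, or a bare scheme name.
    pattern = r'(\w+)\s*=\s*"([^"]*)"|(\w+)\s*=\s*([^",\s]+)|(\w+)'
    for match in re.finditer(pattern, header):
        if match.group(1):
            key, value = match.group(1), match.group(2)
        elif match.group(3):
            key, value = match.group(3), match.group(4)
        else:
            # A bare word starts a new auth scheme.
            current = match.group(5)
            schemes[current] = {}
            continue
        if current is not None:
            schemes[current][key.lower()] = value
    return schemes

# Example: a server offering both Digest and Basic auth on one line.
challenges = parse_www_authenticate(
    'Digest realm="api", nonce="abc123", Basic realm="api"')
```

Each handler can then pick its own scheme's parameters out of the
returned dict.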

--
nosy: +zathras

_
Tracker [EMAIL PROTECTED]
http://bugs.python.org/issue1368312
_



[issue2695] Ignore case when checking algorithm in urllib2

2008-05-02 Thread david reid

david reid [EMAIL PROTECTED] added the comment:

The patch is inline. There's not much to it :-)

Agree with your suggestion to avoid calling lower() twice.

__
Tracker [EMAIL PROTECTED]
http://bugs.python.org/issue2695
__



[issue2695] Ignore case when checking algorithm in urllib2

2008-04-26 Thread david reid

New submission from david reid [EMAIL PROTECTED]:

A small change to allow get_algorithm_impls to correctly handle
lower-case algorithm strings. I recently ran into a server that sent
'md5', so without this change the function failed.

  def get_algorithm_impls(self, algorithm):
      # lambdas assume digest modules are imported at the top level
      if algorithm.lower() == 'md5':
          H = lambda x: hashlib.md5(x).hexdigest()
      elif algorithm.lower() == 'sha':
          H = lambda x: hashlib.sha1(x).hexdigest()
      ...
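A variant that normalizes the string once, as suggested in the review
(a standalone Python 3 sketch, so the inputs are encoded before
hashing; KD mirrors the digest helper urllib2 builds alongside H):

```python
import hashlib

def get_algorithm_impls(algorithm):
    # Normalize once up front instead of calling lower() per comparison.
    algorithm = algorithm.upper()
    if algorithm == 'MD5':
        H = lambda x: hashlib.md5(x.encode('ascii')).hexdigest()
    elif algorithm == 'SHA':
        H = lambda x: hashlib.sha1(x.encode('ascii')).hexdigest()
    else:
        raise ValueError('unsupported digest algorithm %r' % algorithm)
    KD = lambda s, d: H('%s:%s' % (s, d))
    return H, KD
```

With this, 'md5', 'MD5', and 'Md5' all select the same implementation.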

--
components: Library (Lib)
messages: 65836
nosy: zathras
severity: normal
status: open
title: Ignore case when checking algorithm in urllib2
type: behavior
versions: Python 2.5

__
Tracker [EMAIL PROTECTED]
http://bugs.python.org/issue2695
__