[issue3526] Customized malloc implementation on SunOS and AIX

2011-04-29 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

Even worse than that, mixing two malloc implementations could lead to trouble.
For example, the trimming code checks that the top of the heap is where it last 
left it. So if an allocation has been made by another implementation in the 
meantime, the heap won't be trimmed, and your memory usage won't decrease. 
Also, it'll increase memory fragmentation.
Finally, if you've got two threads inside different malloc implementations at 
the same time, well, some really bad things could happen.
And there are probably many other reasons why it's a bad idea.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue3526
___



[issue11247] Error sending packets to multicast IPV4 address

2011-04-29 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

Closing as invalid, since it's definitely not a Python issue, but much more 
likely a network configuration problem.

--
resolution:  - invalid
status: open - closed

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11247
___



[issue3526] Customized malloc implementation on SunOS and AIX

2011-04-29 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

 I don't understand the point concerning trimming/fragmentation/threading by
 Charles-Francois: dlmalloc will allocate its own memory segment using mmap
 and handle memory inside that segment when you do a
 dlmalloc/dlfree/dlrealloc. Other malloc implementations will work in their
 own separate space and so won't impact or be impacted by what happens in
 dlmalloc segments.

Most of the allocations come from the heap - through sbrk - which is a
shared resource, and is a contiguous space. mmap is only used for big
allocations.


 dlmalloc is not that much different from pymalloc in that regard: it handles
 its own memory pool on top of the system memory implementations.
 Yet you can have an application that uses the ordinary malloc while calling
 some Python code which uses pymalloc without any
 trimming/fragmentation/threading issues.

It's completely different. Pymalloc is used *on top* of libc's malloc,
while dlmalloc would be used in parallel with it.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue3526
___



[issue11958] test.test_ftplib.TestIPv6Environment failure

2011-04-29 Thread Charles-Francois Natali

New submission from Charles-Francois Natali neolo...@free.fr:

test_ftplib fails in TestIPv6Environment:

======================================================================
ERROR: test_makepasv (test.test_ftplib.TestIPv6Environment)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/cf/cpython/Lib/test/test_ftplib.py", line 651, in setUp
    self.server = DummyFTPServer((HOST, 0), af=socket.AF_INET6)
  File "/home/cf/cpython/Lib/test/test_ftplib.py", line 220, in __init__
    self.bind(address)
  File "/home/cf/cpython/Lib/asyncore.py", line 339, in bind
    return self.socket.bind(addr)
socket.gaierror: [Errno -2] Name or service not known

======================================================================
ERROR: test_transfer (test.test_ftplib.TestIPv6Environment)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/home/cf/cpython/Lib/test/test_ftplib.py", line 651, in setUp
    self.server = DummyFTPServer((HOST, 0), af=socket.AF_INET6)
  File "/home/cf/cpython/Lib/test/test_ftplib.py", line 220, in __init__
    self.bind(address)
  File "/home/cf/cpython/Lib/asyncore.py", line 339, in bind
    return self.socket.bind(addr)
socket.gaierror: [Errno -2] Name or service not known

----------------------------------------------------------------------
Ran 74 tests in 6.595s

FAILED (errors=2)
test test_ftplib failed -- multiple errors occurred
1 test failed:
test_ftplib

The reason is that support.HOST is 'localhost', and on most machines localhost 
is an alias for 127.0.0.1, not the IPv6 loopback, so the address resolution 
fails.
One possible solution is simply to pass '::1' (the IPv6 loopback address) 
instead of support.HOST.
Patch attached.
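
To illustrate the idea (a sketch, not the actual patch): binding an IPv6 
socket to '::1' directly sidesteps the name resolution that fails for 
'localhost':

import socket

# bind an IPv6 server socket to the loopback address directly, instead
# of resolving the 'localhost' alias (which may map only to 127.0.0.1)
server = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
server.bind(("::1", 0))          # port 0: let the OS pick a free port
print("bound to", server.getsockname())
server.close()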

--
components: Tests
files: test_ftplib_ipv6.diff
keywords: patch
messages: 134811
nosy: giampaolo.rodola, neologix
priority: normal
severity: normal
status: open
title: test.test_ftplib.TestIPv6Environment failure
type: behavior
Added file: http://bugs.python.org/file21837/test_ftplib_ipv6.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11958
___



[issue11811] ssl.get_server_certificate() does not work for IPv6 addresses

2011-04-28 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

 As for ssl_ipv6.diff, it fails on certificate verification:

Of course.
The new version should fix this (tested on google.com).

 is_ipv6_enabled.diff is fine.

Since IPv6 capability is unlikely to change in the middle of a test, I 
replaced the function is_ipv6_enabled() with a boolean IPV6_ENABLED. That way, 
it's closer to socket.has_ipv6, and it spares a socket creation/bind/close at 
each call.
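
The helper boils down to something like this (a sketch of the approach 
described above, not the exact patch):

import socket

def _is_ipv6_enabled():
    # socket.has_ipv6 only says the interpreter was *built* with IPv6
    # support; binding to ::1 checks that the OS stack is actually usable
    if not socket.has_ipv6:
        return False
    try:
        sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
        try:
            sock.bind(("::1", 0))
        finally:
            sock.close()
        return True
    except socket.error:
        return False

IPV6_ENABLED = _is_ipv6_enabled()   # evaluated once, at import time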

--
Added file: http://bugs.python.org/file21818/ssl_ipv6.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11811
___



[issue11811] ssl.get_server_certificate() does not work for IPv6 addresses

2011-04-28 Thread Charles-Francois Natali

Changes by Charles-Francois Natali neolo...@free.fr:


Added file: http://bugs.python.org/file21819/is_ipv6_enabled.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11811
___



[issue10517] test_concurrent_futures crashes with --with-pydebug on RHEL5 with Fatal Python error: Invalid thread state for this thread

2011-04-27 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

 Ah, using the fallback implementation of tls?  Surely this isn't a 
 problem with the pthreads tls, I'd be surprised if it retains TLS values 
 after fork.

It surprised me too when I found that out, but it's really with the pthread 
TLS, on RHEL 4 and 5 (fixed in RHEL6).
See the attached test_specific.c test script.

 You could add a new _PyGILState_ReInit() function and call it from
 PyOS_AfterFork() or PyEval_ReInitThreads().

See the attached tls_reinit.diff patch.
But I really find this redundant with PyThread_ReInitTLS, because what we're 
really doing is reinitializing the TLS.
Also, this gets called for every thread implementation, while it's only 
necessary for pthreads (and for the other implementations it will redo the 
work done by PyThread_ReInitTLS).
So I've written another patch which does this in pthread's PyThread_ReInitTLS.

You've got much more experience than me, so it's really your call.
Actually, I kind of feel bad for adding such a hack for a pthread bug 
affecting only RHEL 4 and 5; I'm wondering whether it's really worth fixing.

--
Added file: http://bugs.python.org/file21801/tls_reinit.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue10517
___



[issue10517] test_concurrent_futures crashes with --with-pydebug on RHEL5 with Fatal Python error: Invalid thread state for this thread

2011-04-27 Thread Charles-Francois Natali

Changes by Charles-Francois Natali neolo...@free.fr:


Added file: http://bugs.python.org/file21802/tls_reinit_bis.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue10517
___



[issue11871] test_default_timeout() of test_threading.BarrierTests failure: BrokenBarrierError

2011-04-27 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

The most obvious explanation for that failure is that the barrier's timeout is 
too low.

    def test_default_timeout(self):
        """
        Test the barrier's default timeout
        """
        # create a barrier with a low default timeout
        barrier = self.barriertype(self.N, timeout=0.1)

If the last thread waits on the barrier more than 0.1s after the first thread, 
then you'll get a BrokenBarrierError.
A 0.1s delay is not that much: 100ms was the default quantum of the Linux O(1) 
scheduler...
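
The failure mode is easy to provoke deliberately (a sketch using only 
threading.Barrier; the sleep stands in for an unlucky preemption):

import threading
import time

barrier = threading.Barrier(2, timeout=0.1)    # low default timeout

def late_party():
    time.sleep(0.5)          # arrives well past the 0.1s default timeout
    try:
        barrier.wait()
    except threading.BrokenBarrierError:
        print("late thread: barrier already broken")

t = threading.Thread(target=late_party)
t.start()
try:
    barrier.wait()           # times out after 0.1s and breaks the barrier
except threading.BrokenBarrierError:
    print("first waiter: BrokenBarrierError, as in the test failure")
t.join()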

--
nosy: +neologix

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11871
___



[issue10517] test_concurrent_futures crashes with --with-pydebug on RHEL5 with Fatal Python error: Invalid thread state for this thread

2011-04-27 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

 Thank you. I like this patch, except that _PyGILState_ReInit() should be
 declared in the appropriate .h file, not in signalmodule.c.

I asked myself this question when writing the patch: what's the convention 
regarding functions? Should they always be declared in a header with 
PyAPI_FUNC, or should this be reserved for functions exported through the API?
I've seen a couple of external function declarations in several places, so I 
was wondering (and since this one isn't meant to be exported, I chose the 
latter option).

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue10517
___



[issue10517] test_concurrent_futures crashes with --with-pydebug on RHEL5 with Fatal Python error: Invalid thread state for this thread

2011-04-27 Thread Charles-Francois Natali

Changes by Charles-Francois Natali neolo...@free.fr:


Removed file: http://bugs.python.org/file21802/tls_reinit_bis.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue10517
___



[issue10517] test_concurrent_futures crashes with --with-pydebug on RHEL5 with Fatal Python error: Invalid thread state for this thread

2011-04-27 Thread Charles-Francois Natali

Changes by Charles-Francois Natali neolo...@free.fr:


Removed file: http://bugs.python.org/file21801/tls_reinit.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue10517
___



[issue10517] test_concurrent_futures crashes with --with-pydebug on RHEL5 with Fatal Python error: Invalid thread state for this thread

2011-04-27 Thread Charles-Francois Natali

Changes by Charles-Francois Natali neolo...@free.fr:


Removed file: http://bugs.python.org/file21678/thread_invalid_key.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue10517
___



[issue10517] test_concurrent_futures crashes with --with-pydebug on RHEL5 with Fatal Python error: Invalid thread state for this thread

2011-04-27 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

Here's an updated patch, tested on RHEL4U8.

--
Added file: http://bugs.python.org/file21804/tls_reinit.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue10517
___



[issue10632] multiprocessing generates a fatal error

2011-04-27 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

It's a duplicate of http://bugs.python.org/issue10517

--
nosy: +neologix, pitrou

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue10632
___



[issue11247] Error sending packets to multicast IPV4 address

2011-04-27 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

Suggesting to close.

--
nosy: +giampaolo.rodola

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11247
___



[issue11811] ssl.get_server_certificate() does not work for IPv6 addresses

2011-04-27 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

A patch is attached, along with a corresponding test.
Notes:
- since I don't have IPv6 internet connectivity, I could only test it locally
- I chose 'ipv6.google.com' as the SSL server for the test. If it's a problem, 
I can change it to svn.python.org (it'll just take a couple more lines to make 
sure that we're using IPv6 and not IPv4)
- while writing the test, I needed a way to find out whether IPv6 is supported 
on the current host (socket.has_ipv6 only tells you that the interpreter has 
been built with IPv6 support, not that the OS has an IPv6 stack enabled). So 
instead of rewriting what's already done in test_socket, I added a new 
is_ipv6_enabled function in Lib/test/support.py, and modified test_socket, 
test_ftplib and test_ssl to use it. This patch (is_ipv6_enabled.diff) must be 
applied before ssl_ipv6.diff.

--
keywords: +patch
nosy: +neologix
Added file: http://bugs.python.org/file21811/ssl_ipv6.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11811
___



[issue11811] ssl.get_server_certificate() does not work for IPv6 addresses

2011-04-27 Thread Charles-Francois Natali

Changes by Charles-Francois Natali neolo...@free.fr:


Added file: http://bugs.python.org/file21812/is_ipv6_enabled.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11811
___



[issue11811] ssl.get_server_certificate() does not work for IPv6 addresses

2011-04-27 Thread Charles-Francois Natali

Changes by Charles-Francois Natali neolo...@free.fr:


Removed file: http://bugs.python.org/file21811/ssl_ipv6.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11811
___



[issue11811] ssl.get_server_certificate() does not work for IPv6 addresses

2011-04-27 Thread Charles-Francois Natali

Changes by Charles-Francois Natali neolo...@free.fr:


Removed file: http://bugs.python.org/file21812/is_ipv6_enabled.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11811
___



[issue10517] test_concurrent_futures crashes with --with-pydebug on RHEL5 with Fatal Python error: Invalid thread state for this thread

2011-04-26 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

 Not necessarily. You can have several interpreters (and therefore several 
 thread states) in a single thread, using Py_NewInterpreter(). It's used by 
 mod_wsgi and probably other software. If you overwrite the old value with the 
 new one, it may break such software.


OK, I didn't know. Better not to change that in that case.

 Would it be possible to cleanup the autoTLS mappings in PyOS_AfterFork() 
 instead?


Well, after fork, all threads have exited, so you'll be running on
behalf of the child process's main - and only - thread, so by
definition you can't access other threads' thread-specific data, no?
As an alternate solution, I was thinking of calling
PyThread_delete_key_value(autoTLSkey) in the path of thread bootstrap,
i.e. starting in Modules/_threadmodule.c t_bootstrap. Obviously, this
should be done before calling _PyThreadState_Init, since it can also
be called from Py_NewInterpreter.
The problem is that it would require exporting autoTLSkey whose scope
is now limited to pystate.c (we could also create a small wrapper
function in pystate.c to delete the autoTLSkey, since it's already
done in PyThreadState_DeleteCurrent and PyThreadState_Delete).

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue10517
___



[issue10517] test_concurrent_futures crashes with --with-pydebug on RHEL5 with Fatal Python error: Invalid thread state for this thread

2011-04-26 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

 How about deleting the mapping (pthread_key_delete) and recreating it
 from scratch, then?

Sounds good.
So the idea would be to retrieve the current thread's tstate, destroy the 
current autoTLSkey, re-create it, and re-associate the current tstate with 
this new key. I just did a quick test on RHEL4 and it works.
PyThread_ReinitTLS looks like a good candidate for that, but it's the same 
problem: autoTLSkey's scope is limited to pystate.c (and I'm not sure that the 
tstate should be exposed to platform thread implementations).
There's also PyEval_ReinitThreads in ceval.c, where exposing the autoTLSkey 
would make more sense (and it already knows about tstate, of course).
Where would you put it?

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue10517
___



[issue3526] Customized malloc implementation on SunOS and AIX

2011-04-26 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

 it is possible to impact the memory allocation system on AIX using some 
 environment variables (MALLOCOPTIONS and others)

LD_PRELOAD won't impact AIX's malloc behaviour, but it allows you to
replace it transparently with any other implementation you like
(dlmalloc, ptmalloc, ...), without touching either CPython or your
application.

For example, let's says I want a Python version where getpid always returns 42.

$ cat /tmp/pid.c
/* replaces libc's getpid when loaded via LD_PRELOAD */
int getpid(void)
{
    return 42;
}

$ gcc -o /tmp/pid.so /tmp/pid.c -fpic -shared

Now,

$ LD_PRELOAD=/tmp/pid.so python -c 'import os; print(os.getpid())'
42

That's it. If you replace pid.so by dlmalloc.so, you'll be using
dlmalloc instead of AIX's malloc, without having modified a single
line of code.
If you're concerned with impacting other applications, then you could
do something like:

$ cat python.c
#include <stdlib.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    /* preload the replacement library, then exec the real interpreter */
    setenv("LD_PRELOAD", "/tmp/pid.so", 1);
    execv("path to real python", argv);

    return 1;
}

And then:
$ ./python -c 'import os; print(os.getpid())'
42

 Also note that dlmalloc (or a derivative - ptmalloc) is part of GNU glibc 
 which is used by most Linux systems, and is what you get when you call malloc.
 http://en.wikipedia.org/wiki/Malloc#dlmalloc_and_its_derivatives


Actually, glibc/eglibc versions have diverged quite a lot from the
original ptmalloc2, see for example http://bugs.python.org/issue11849
(that's one reason why embedding such a huge piece of code into Python
is probably not a good idea as highlighted by Antoine, it's updated
fairly frequently).

 So by using dlmalloc on SunOS and AIX you would get the same level of 
 performance for memory operations that you already probably can appreciate on 
 Linux systems.

Yes, but with the above trick, you can do that without patching
Python or your app.
I mean, if you start embedding malloc in Python, why stop there, and
not embed the whole glibc ;-)
Note that I realize this won't solve the problem for other AIX users
(if there are any left :-), but since this patch doesn't seem to be
gaining traction, I'm just proposing an alternative that I find
cleaner, simpler and easier to maintain.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue3526
___



[issue3526] Customized malloc implementation on SunOS and AIX

2011-04-26 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

I just noticed there's already a version of dlmalloc in 
Modules/_ctypes/libffi/src/dlmalloc.c

Compiling with gcc -shared -fpic -o /tmp/dlmalloc.so 
./Modules/_ctypes/libffi/src/dlmalloc.c

Then LD_PRELOAD=/tmp/dlmalloc.so ./python

works just fine (and by the way, it solves the problem with glibc's version in 
#11849, though it's somewhat slower).

Or am I missing something?

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue3526
___



[issue5115] Extend subprocess.kill to be able to kill process groups

2011-04-26 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

Note that the setpgid creation part is now somewhat redundant with Popen's 
start_new_session flag (which calls setsid). Also, this should probably be an 
option, since with that patch every subprocess is in its own process group.

 I was wondering... what if process A runs a subprocess B which runs a
 subprocess C. Is C still considered a children of A and gets killed as
 well?

No.
When setpgid/setsid is called, a new group is created, so process C will not 
be part of the same group as B.
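
For reference, a sketch of how the pieces fit together with the current API 
(the sleep command is just a stand-in child):

import os
import signal
import subprocess

# start the child in its own session (setsid), hence its own process group
proc = subprocess.Popen(["sleep", "60"], start_new_session=True)

# signal the whole group: the child and any grandchildren that did not
# call setsid()/setpgid() themselves
os.killpg(os.getpgid(proc.pid), signal.SIGTERM)
proc.wait()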

--
nosy: +neologix

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue5115
___



[issue11849] glibc allocator doesn't release all free()ed memory

2011-04-25 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

 The MALLOC_MMAP_THRESHOLD improvement is less visible here:


Are you running on 64-bit?
If yes, it could be that you're exhausting M_MMAP_MAX (malloc falls
back to brk when there are too many mmap mappings).
You could try with
MALLOC_MMAP_THRESHOLD_=1024 MALLOC_MMAP_MAX_=16777216 ../opt/python
issue11849_test.py

By the way, never do that in real life, it's a CPU and memory hog ;-)
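
The same knobs can also be set at runtime through glibc's mallopt (a ctypes 
sketch, Linux/glibc-specific; the M_* values are the constants from 
<malloc.h>):

import ctypes

libc = ctypes.CDLL("libc.so.6")

M_MMAP_THRESHOLD = -3    # glibc constants from <malloc.h>
M_MMAP_MAX = -4

# runtime equivalent of MALLOC_MMAP_THRESHOLD_=1024 MALLOC_MMAP_MAX_=16777216
libc.mallopt(M_MMAP_THRESHOLD, 1024)
libc.mallopt(M_MMAP_MAX, 16777216)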

I think the root cause is that glibc's malloc coalescing of free
chunks is called far less often than in the original ptmalloc version,
but I still have to dig some more.

 By the way, I noticed that dictionnaries are never allocated through
 pymalloc, since a new dictionnary takes more than 256B...

 On 64-bit builds indeed. pymalloc could be improved to handle allocations up
 to 512B. Want to try and write a patch?

Sure.
I'll open another issue.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11849
___



[issue11849] glibc allocator doesn't release all free()ed memory

2011-04-25 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

 It isn't better.

Requests above 256B are directly handled by malloc, so MALLOC_MMAP_THRESHOLD_ 
should in fact be set to 256 (with 1024 I guess that on 64-bit every mid-sized 
dictionary gets allocated with brk).

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11849
___



[issue10517] test_concurrent_futures crashes with --with-pydebug on RHEL5 with Fatal Python error: Invalid thread state for this thread

2011-04-25 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

 So, if it is possible to fix this and remove this weird special case and cast 
 it into the abyss, then by all means, you have my 10 thumbs up.  Not that it 
 counts for much :)

Me too.
We still have a couple hundred RHEL4/5 boxes at work, and I guess we're not 
alone in this case. It's a really specific case, but I think it would be nice 
to fix it, especially since it also makes the code more understandable and 
less error-prone. Unless of course this special treatment is really necessary, 
in which case I'll have to think of another solution or just drop it.
I'm adding Antoine to the nosy list, since he's noted as a threading expert in 
the Experts Index.

--
nosy: +pitrou

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue10517
___



[issue11912] Python shouldn't use the mprotect() system call

2011-04-24 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

PaX doesn't block mprotect in itself, but it prevents pages from being both 
writable and executable.
Andreas is right, it's probably due to a dlopen of an object requiring an 
executable stack via ctypes.
So you should report this to iotop's developers. In the meantime, you could 
use paxctl -m /usr/bin/python.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11912
___



[issue3526] Customized malloc implementation on SunOS and AIX

2011-04-24 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

Sébastien:
I'm chiming in late, but doesn't AIX have something like LD_PRELOAD?
Why not use it to transparently replace AIX's legacy malloc with another 
malloc implementation like dlmalloc or ptmalloc?
That would not require any patching of Python, and could also be used for 
other applications.

As a side note, while mmap has some advantages, it is way slower than brk 
(because pages must be zero-filled, and since mmap/munmap is called at every 
malloc/free call, this zero-filling is done every time, unlike with brk 
pools). See http://sources.redhat.com/ml/libc-alpha/2006-03/msg00033.html

--
nosy: +neologix

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue3526
___



[issue11849] ElementTree memory leak

2011-04-24 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

This is definitely a malloc bug.
Test with default malloc on a Debian box:

cf@neobox:~/cpython$ ./python ../issue11849_test.py 
*** Python 3.3.0 alpha
---   PID TTY  STAT   TIME  MAJFL   TRS   DRS   RSS %MEM COMMAND
  0  3778 pts/2S+ 0:00  1  1790  8245  7024  0.5 ./python 
../issue11849_test.py
  1  3778 pts/2S+ 0:17  1  1790 61937 60404  4.6 ./python 
../issue11849_test.py
  2  3778 pts/2S+ 0:35  1  1790 110841 108300  8.3 ./python 
../issue11849_test.py
  3  3778 pts/2S+ 0:53  1  1790 159885 158540 12.2 ./python 
../issue11849_test.py
  4  3778 pts/2S+ 1:10  1  1790 209369 206724 15.9 ./python 
../issue11849_test.py
  5  3778 pts/2S+ 1:28  1  1790 258505 255956 19.7 ./python 
../issue11849_test.py
  6  3778 pts/2S+ 1:46  1  1790 307669 304964 23.5 ./python 
../issue11849_test.py
  7  3778 pts/2S+ 2:02  1  1790 360705 356952 27.5 ./python 
../issue11849_test.py
  8  3778 pts/2S+ 2:21  1  1790 405529 404172 31.2 ./python 
../issue11849_test.py
  9  3778 pts/2S+ 2:37  1  1790 458789 456128 35.2 ./python 
../issue11849_test.py
END  3778 pts/2S+ 3:00  1  1790 504189 501624 38.7 ./python 
../issue11849_test.py
 GC  3778 pts/2S+ 3:01  1  1790 454689 453476 35.0 ./python 
../issue11849_test.py
***  3778 pts/2S+ 3:01  1  1790 454689 453480 35.0 ./python 
../issue11849_test.py
[56426 refs]


The heap is not trimmed, even after GC collection.
Now, using a smaller mmap threshold so that malloc uses mmap instead of brk:

cf@neobox:~/cpython$ MALLOC_MMAP_THRESHOLD_=1024 ./python ../issue11849_test.py 
*** Python 3.3.0 alpha
---   PID TTY  STAT   TIME  MAJFL   TRS   DRS   RSS %MEM COMMAND
  0  3843 pts/2S+ 0:00  1  1790  8353  7036  0.5 ./python 
../issue11849_test.py
  1  3843 pts/2S+ 0:17  1  1790 62593 59240  4.5 ./python 
../issue11849_test.py
  2  3843 pts/2S+ 0:35  1  1790 112321 108304  8.3 ./python 
../issue11849_test.py
  3  3843 pts/2S+ 0:53  1  1790 162313 157372 12.1 ./python 
../issue11849_test.py
  4  3843 pts/2S+ 1:11  1  1790 212057 206456 15.9 ./python 
../issue11849_test.py
  5  3843 pts/2S+ 1:29  1  1790 261749 255484 19.7 ./python 
../issue11849_test.py
  6  3843 pts/2S+ 1:47  1  1790 311669 304484 23.5 ./python 
../issue11849_test.py
  7  3843 pts/2S+ 2:03  1  1790 365485 356488 27.5 ./python 
../issue11849_test.py
  8  3843 pts/2S+ 2:22  1  1790 411341 402568 31.1 ./python 
../issue11849_test.py
  9  3843 pts/2S+ 2:38  1  1790 465141 454552 35.1 ./python 
../issue11849_test.py
END  3843 pts/2S+ 3:02  1  1790 67173 63892  4.9 ./python 
../issue11849_test.py
 GC  3843 pts/2S+ 3:03  1  1790  9925  8664  0.6 ./python 
../issue11849_test.py
***  3843 pts/2S+ 3:03  1  1790  9925  8668  0.6 ./python 
../issue11849_test.py
[56428 refs]

Just to be sure, with ptmalloc3 malloc implementation:

cf@neobox:~/cpython$ LD_PRELOAD=../ptmalloc3/libptmalloc3.so ./python 
../issue11849_test.py 
*** Python 3.3.0 alpha
---   PID TTY  STAT   TIME  MAJFL   TRS   DRS   RSS %MEM COMMAND
  0  3898 pts/2S+ 0:00  1  1790  8369  7136  0.5 ./python 
../issue11849_test.py
  1  3898 pts/2S+ 0:17  1  1790 62825 60264  4.6 ./python 
../issue11849_test.py
  2  3898 pts/2S+ 0:34  1  1790 112641 110176  8.5 ./python 
../issue11849_test.py
  3  3898 pts/2S+ 0:52  1  1790 162689 160048 12.3 ./python 
../issue11849_test.py
  4  3898 pts/2S+ 1:09  1  1790 212285 209732 16.2 ./python 
../issue11849_test.py
  5  3898 pts/2S+ 1:27  1  1790 261881 259460 20.0 ./python 
../issue11849_test.py
  6  3898 pts/2S+ 1:45  1  1790 311929 309332 23.9 ./python 
../issue11849_test.py
  7  3898 pts/2S+ 2:01  1  1790 365625 362004 27.9 ./python 
../issue11849_test.py
  8  3898 pts/2S+ 2:19  1  1790 411445 408812 31.5 ./python 
../issue11849_test.py
  9  3898 pts/2S+ 2:35  1  1790 465205 461536 35.6 ./python 
../issue11849_test.py
END  3898 pts/2S+ 2:58  1  1790 72141 69688  5.3 ./python 
../issue11849_test.py
 GC  3898 pts/2S+ 2:59  1  1790 15001 13748  1.0 ./python 
../issue11849_test.py
***  3898 pts/2S+ 2:59  1  1790 15001 13752  1.0 ./python 
../issue11849_test.py
[56428 refs]

So the problem is really that glibc/eglibc malloc implementations don't 
automatically trim memory upon free (this happens if you're only 
allocating/deallocating small chunks < 64B that come from fastbins, but that's 
not the case here).
By the way, I noticed that dictionaries are never allocated through pymalloc, 
since a new dictionary takes more than 256B...
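
A quick way to check (a sketch; the exact figure varies with the Python 
version and platform, but it exceeded 256 bytes on the 64-bit builds discussed 
here):

import sys

SMALL_REQUEST_THRESHOLD = 256    # pymalloc's per-allocation limit, in bytes

d = {}
print(sys.getsizeof(d), sys.getsizeof(d) > SMALL_REQUEST_THRESHOLD)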

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11849
___

[issue11382] some posix module functions unnecessarily release the GIL

2011-04-21 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

Is there anything I can do to help this move forward?

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11382
___



[issue11877] Change os.fsync() to support physical backing store syncs

2011-04-21 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

I'm -10 on sync_file_range on Linux:
- it doesn't update the file metadata, so there's a high chance of corruption 
after a crash
- last time I checked, it didn't flush the disk cache (well, it probably does 
if barriers are enabled, but that's also the case with fsync)

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11877
___



[issue11877] Change os.fsync() to support physical backing store syncs

2011-04-21 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

 and it seems - as far as i understand what i read - that you're
 still right; and, furthermore, that fsync() does everything
 anyway.  (But here an idiot is talking about *very* complicated
 stuff.)


I just double-checked, and indeed, fsync does flush the disk cache
when barriers are enabled on several FS, while sync_file_range does
not. So sync_file_range should definitely not be used on Linux.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11877
___



[issue11877] Mac OS X fsync() should really be fcntl(F_FULLFSYNC)

2011-04-20 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

 in particular: linux doesn't guarantee that data gets writting to the disk 
 when you call fsync, only that the data gets pushed to the storage device.

Barriers are now enabled by default in ext4, and Theodore Ts'o has been 
in favour of that for quite some time now:
http://lwn.net/Articles/283288/

As for OS X, this is definitely a bug (I mean, having to call fsync before 
mmap is a huge bug in itself).

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11877
___



[issue8426] multiprocessing.Queue fails to get() very large objects

2011-04-19 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

 IMO, it would be nice if I could ask my queue, Just what is your capacity
(in bytes, not entries) anyways?  I want to know how much I can put in here
without worrying about whether the remote side is dequeueing.  I guess I'd
settle for explicit documentation that the bound exists.

It is documented.
See the comment about the underlying pipe.

  But how should I
expect my code to be portable?  Are there platforms which provide less than
64k?  Less than 1k?  Less than 256 bytes?

It depends :-)
If the implementation is using pipes, under Linux before 2.6.9 (I think), a 
pipe was limited to the size of a page, i.e. 4K on x86.
Now, it's 64K.
If it's a Unix socket (via socketpair), the maximum size can be set through 
sysctl, etc.
So you basically can't state a limit, and IMHO you shouldn't be concerned with 
that if you want your code to be portable.
I find the warning explicit enough, but that's maybe because I'm familiar with 
these low-level details.
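
The limit on a given box can be measured empirically (a sketch: fill a 
non-blocking pipe until the write would block):

import fcntl
import os

r, w = os.pipe()
flags = fcntl.fcntl(w, fcntl.F_GETFL)
fcntl.fcntl(w, fcntl.F_SETFL, flags | os.O_NONBLOCK)

total = 0
try:
    while True:
        total += os.write(w, b"\0" * 1024)   # os.write returns bytes written
except OSError:                              # EAGAIN once the pipe is full
    pass
print("pipe capacity: %d bytes" % total)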

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8426
___



[issue11849] ElementTree memory leak

2011-04-19 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

 BTW, after utilize lxml instead of ElementTree, such phenomenon of increasing 
 memory usage disappeared.

If you look at the link I posted, you'll see that lxml had some similar 
issues and solved them by calling malloc_trim systematically when freeing 
memory. It could also be heap fragmentation, though.

To go further, it'd be nice if you could provide the output of
valgrind --tool=memcheck --leak-check=full 
--suppressions=Misc/valgrind-python.supp python <test script>
after uncommenting the relevant lines in Misc/valgrind-python.supp (see 
http://svn.python.org/projects/python/trunk/Misc/README.valgrind ).
It will either confirm a memory leak or a malloc issue (I still favour the 
latter).

By the way, does

while True:
    XML(gen_xml())

lead to a constant memory usage increase?

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11849
___



[issue11877] Mac OS X fsync() should really be fcntl(F_FULLFSYNC)

2011-04-19 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

I know that POSIX makes no guarantee regarding durable writes, but IMHO that's 
definitely wrong, in the sense that when one calls fsync, he expects the data 
to be committed to disk and be durable.
Fixing this deficiency through Python's exposed fsync might thus be a good 
idea (for example, the Windows version probably doesn't call fsync, so it's 
already not a direct mapping).
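
What the proposal would amount to, from the caller's side, looks roughly like 
this (a sketch; fcntl.F_FULLFSYNC only exists in the fcntl module on Mac OS X):

import fcntl
import os

def durable_fsync(fd):
    # on Mac OS X, fsync() only pushes data to the drive; F_FULLFSYNC
    # additionally asks the drive to flush its write cache
    if hasattr(fcntl, "F_FULLFSYNC"):
        fcntl.fcntl(fd, fcntl.F_FULLFSYNC)
    else:
        os.fsync(fd)

fd = os.open("data.bin", os.O_WRONLY | os.O_CREAT, 0o644)
try:
    os.write(fd, b"important bytes")
    durable_fsync(fd)
finally:
    os.close(fd)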

--
nosy: +neologix

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11877
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11849] ElementTree memory leak

2011-04-18 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

 kaifeng cafe...@gmail.com added the comment:

 I added 'malloc_trim' to the test code and rerun the test with Python 2.5 / 
 3.2 on CentOS 5.3.  The problem still exists.


Well, malloc_trim can fail, but how did you add it? Did you use
patch to apply the diff?
Also, could you post the output of
ltrace -e malloc_trim python <test script>

Anyway, I'm 99% sure this isn't a leak but a malloc issue (valgrind
--tool=memcheck could confirm this if you want to try, I could be
wrong, it wouldn't be the first time ;-) ).
By the way, look at what I just found:
http://mail.gnome.org/archives/xml/2008-February/msg3.html

 Antoine Pitrou pit...@free.fr added the comment:
 That's an interesting thing, perhaps you want to open a feature request as a 
 separate issue?

Dunno.
Memory management is a domain which belongs to the operating
system/libc, and I don't think applications should mess with it (apart
from specific cases).
I don't have time to look at this precise problem in greater detail
right now, but AFAICT, this looks either like a glibc bug, or at least
a corner case with default malloc parameters (M_TRIM_THRESHOLD and
friends), affecting only RHEL and derived distributions.
malloc_trim should be called automatically by free if the amount of
memory that could be released is above M_TRIM_THRESHOLD.
Calling it systematically can have a non-negligible performance impact.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11849
___



[issue11849] ElementTree memory leak

2011-04-17 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

The problem is not with Python, but with your libc.
When a program - such as Python - returns memory, it uses the free(3) library 
call.
But the libc is free to either return the memory immediately to the kernel 
using the relevant syscall (brk, munmap), or keep it around just in case (to 
simplify).
It seems that RHEL5 and onwards tend to keep a lot of memory around, at least 
in this case (probably because of the allocation pattern).

To sum up, Python is returning memory, but your libc is not.
You can force it using malloc_trim, see the attached patch (I'm not at all 
suggesting its inclusion, it's just an illustration).
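
For experimentation, the same call can be issued without patching the 
interpreter at all (a ctypes sketch, glibc-specific):

import ctypes

libc = ctypes.CDLL("libc.so.6")
# ask glibc to return free()d arena memory to the kernel;
# malloc_trim returns 1 if memory was actually released, 0 otherwise
released = libc.malloc_trim(0)
print("trimmed:", released)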

Results with current code:

*** Python 3.3.0 alpha
---   PID TTY  STAT   TIME  MAJFL   TRS   DRS   RSS %MEM COMMAND
  0 29823 pts/0S+ 0:00  1  1607 168176 8596  0.2 ./python 
/tmp/issue11849_test.py
  1 29823 pts/0S+ 0:01  1  1607 249400 87088  2.2 ./python 
/tmp/issue11849_test.py
  2 29823 pts/0S+ 0:03  1  1607 324080 161704  4.1 ./python 
/tmp/issue11849_test.py
  3 29823 pts/0S+ 0:04  1  1607 398960 235036  5.9 ./python 
/tmp/issue11849_test.py
  4 29823 pts/0S+ 0:06  1  1607 473356 309464  7.8 ./python 
/tmp/issue11849_test.py
  5 29823 pts/0S+ 0:07  1  1607 548120 384624  9.8 ./python 
/tmp/issue11849_test.py
  6 29823 pts/0S+ 0:09  1  1607 622884 458332 11.6 ./python 
/tmp/issue11849_test.py
  7 29823 pts/0S+ 0:10  1  1607 701864 535736 13.6 ./python 
/tmp/issue11849_test.py
  8 29823 pts/0S+ 0:12  1  1607 772440 607988 15.5 ./python 
/tmp/issue11849_test.py
  9 29823 pts/0S+ 0:13  1  1607 851156 685384 17.4 ./python 
/tmp/issue11849_test.py
END 29823 pts/0S+ 0:16  1  1607 761712 599400 15.2 ./python 
/tmp/issue11849_test.py
 GC 29823 pts/0S+ 0:16  1  1607 680900 519280 13.2 ./python 
/tmp/issue11849_test.py
*** 29823 pts/0S+ 0:16  1  1607 680900 519288 13.2 ./python 
/tmp/issue11849_test.py


Results with the malloc_trim:

*** Python 3.3.0 alpha
---   PID TTY  STAT   TIME  MAJFL   TRS   DRS   RSS %MEM COMMAND
  0 30020 pts/0S+ 0:00  1  1607 168180 8596  0.2 ./python 
/tmp/issue11849_test.py
  1 30020 pts/0S+ 0:01  1  1607 249404 86160  2.1 ./python 
/tmp/issue11849_test.py
  2 30020 pts/0S+ 0:03  1  1607 324084 160596  4.0 ./python 
/tmp/issue11849_test.py
  3 30020 pts/0S+ 0:04  1  1607 398964 235036  5.9 ./python 
/tmp/issue11849_test.py
  4 30020 pts/0S+ 0:06  1  1607 473360 309808  7.9 ./python 
/tmp/issue11849_test.py
  5 30020 pts/0S+ 0:07  1  1607 548124 383896  9.7 ./python 
/tmp/issue11849_test.py
  6 30020 pts/0S+ 0:09  1  1607 622888 458716 11.7 ./python 
/tmp/issue11849_test.py
  7 30020 pts/0S+ 0:10  1  1607 701868 536124 13.6 ./python 
/tmp/issue11849_test.py
  8 30020 pts/0S+ 0:12  1  1607 772444 607212 15.4 ./python 
/tmp/issue11849_test.py
  9 30020 pts/0S+ 0:14  1  1607 851160 684608 17.4 ./python 
/tmp/issue11849_test.py
END 30020 pts/0S+ 0:16  1  1607 761716 599524 15.3 ./python 
/tmp/issue11849_test.py
 GC 30020 pts/0S+ 0:16  1  1607 680776 10744  0.2 ./python 
/tmp/issue11849_test.py
*** 30020 pts/0S+ 0:16  1  1607 680776 10752  0.2 ./python 
/tmp/issue11849_test.py

--
nosy: +neologix

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11849
___



[issue11849] ElementTree memory leak

2011-04-17 Thread Charles-Francois Natali

Changes by Charles-Francois Natali neolo...@free.fr:


--
keywords: +patch
Added file: http://bugs.python.org/file21696/gc_trim.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11849
___



[issue10517] test_concurrent_futures crashes with --with-pydebug on RHEL5 with Fatal Python error: Invalid thread state for this thread

2011-04-15 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

This is due to a bug in the TLS key management when mixed with fork.
Here's what happens:
When a thread is created, a tstate is allocated and stored in the thread's TLS:
thread_PyThread_start_new_thread - t_bootstrap - _PyThreadState_Init - 
_PyGILState_NoteThreadState:

if (PyThread_set_key_value(autoTLSkey, (void *)tstate) < 0)
    Py_FatalError("Couldn't create autoTLSkey mapping");

where

int
PyThread_set_key_value(int key, void *value)
{
    int fail;
    void *oldValue = pthread_getspecific(key);
    if (oldValue != NULL)
        return 0;
    fail = pthread_setspecific(key, value);
    return fail;
}

A pthread_getspecific(key) is performed to see if there was already a value 
associated with this key.
The problem is that thread-specific values are inherited across fork. So if a 
thread in the parent process stored a tstate in its TLS, and the process then 
forks (from another thread), a new thread created in the child process can be 
assigned the same thread ID, and pthread_getspecific(key) will then return the 
value stored by the parent's thread. In short, if you're unlucky and create a 
thread with a thread ID that already existed in the parent process, you're 
screwed.
To conclude, PyGILState_GetThisThreadState, which calls 
PyThread_get_key_value(autoTLSkey), will return the other thread's tstate, 
which triggers this fatal error in PyThreadState_Swap.

The patch attached fixes this issue by removing the call to 
pthread_getspecific(key) from PyThread_set_key_value. This solves the problem 
and doesn't seem to cause any regression in test_threading and 
test_multiprocessing, and I think that if we were to call 
PyThread_set_key_value twice on the same key it's either an error, or we want 
the last version to be stored, not the old one.
test_threading and test_multiprocessing now run fine without any fatal error.

Note that this is probably a bug in RHEL's pthread implementation, but given 
how widespread RHEL and derived distros are, I think this should be fixed.
I've attached a patch and a small test program to check if thread-specific data 
is inherited across a fork.
Here's a sample run on a RHEL4U8 box:

$ /tmp/test
PID: 17922, TID: 3086187424, init value: (nil)
PID: 17924, TID: 3086187424, init value: 0xdeadbeef

The second thread has been created in the child process and inherited the first 
thread's (created by the parent) key's value (one condition for this to happen 
is of course that the second thread is allocated the same thread ID as the 
first one).

--
nosy: +neologix

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue10517
___



[issue10517] test_concurrent_futures crashes with --with-pydebug on RHEL5 with Fatal Python error: Invalid thread state for this thread

2011-04-15 Thread Charles-Francois Natali

Changes by Charles-Francois Natali neolo...@free.fr:


Added file: http://bugs.python.org/file21677/test_specific.c

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue10517
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue10517] test_concurrent_futures crashes with --with-pydebug on RHEL5 with Fatal Python error: Invalid thread state for this thread

2011-04-15 Thread Charles-Francois Natali

Changes by Charles-Francois Natali neolo...@free.fr:


--
keywords: +patch
Added file: http://bugs.python.org/file21678/thread_invalid_key.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue10517
___



[issue10517] test_concurrent_futures crashes with --with-pydebug on RHEL5 with Fatal Python error: Invalid thread state for this thread

2011-04-15 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

Note: this seems to be fixed in RHEL6.
(Sorry for the noise).

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue10517
___



[issue10496] import site failed when Python can't find home directory (sysconfig._getuserbase)

2011-04-14 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

I'm not sure whether POSIX guarantees anything about this behavior, but 
nothing prevents a process from running with a UID not listed in /etc/passwd 
(or NIS, whatever). For example, sudo allows running a command with a UID not 
listed in the password database, see http://linux.die.net/man/5/sudoers :

targetpw

If set, sudo will prompt for the password of the user specified by the -u flag 
(defaults to root) instead of the password of the invoking user. Note that this 
precludes the use of a uid not listed in the passwd database as an argument to 
the -u flag. This flag is off by default.


UIDs not backed by users are useful for example if you're working with a 
sandbox, or virtual users such as in some FTP servers 
http://www.proftpd.org/docs/howto/VirtualUsers.html :

Question: What makes a user virtual, then?
Answer: A virtual user is, quite simply, a user that is not defined in the 
system /etc/passwd file. This file associates a user name, given by the system 
administrator, to a user ID (commonly shortened to UID) and a group ID (GID), 
among other details. The Unix kernel does not deal with users in terms of their 
user names; it only knows about UIDs and GIDs. This means that an application 
like proftpd can look up the IDs to use for a given user name however it sees 
fit. Using /etc/passwd is not strictly required.
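
Code that derives a home directory therefore has to be prepared for the lookup 
to fail (a sketch of the defensive pattern, not the actual sysconfig fix):

import os
import pwd

uid = os.getuid()
try:
    home = pwd.getpwuid(uid).pw_dir
except KeyError:
    # UID absent from the passwd database (sandbox, virtual user,
    # sudo with a raw UID): fall back instead of blowing up
    home = os.environ.get("HOME", "/")
print(uid, home)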


--
nosy: +neologix

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue10496
___



[issue10332] Multiprocessing maxtasksperchild results in hang

2011-04-13 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

This problem arises because the pool's close method is called before all the 
tasks have completed. Putting a sleep(1) before pool.close() avoids the lockup.
The root cause is that close makes the worker handler thread exit: since the 
maxtasksperchild argument is used, workers exit when they've processed their 
max number of tasks. But since the worker handler thread has exited, it no 
longer maintains the pool of workers, and thus the remaining tasks are never 
processed, and the task handler thread waits indefinitely (since it waits 
until the cache is empty).
The solution is to prevent the worker handler thread from exiting until the 
cache has been drained (unless the pool is terminated, in which case it must 
exit right away).
Attached is a patch and a relevant test.
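
A hypothetical repro of the hang (a sketch assuming the pre-patch behavior 
described above; names are illustrative):

import multiprocessing

def square(x):
    return x * x

if __name__ == "__main__":
    # with maxtasksperchild, workers are recycled constantly, so the pool
    # must keep spawning replacements until the task cache is drained
    pool = multiprocessing.Pool(2, maxtasksperchild=1)
    result = pool.map_async(square, range(50))
    pool.close()    # may stop the worker handler before the tasks finish
    pool.join()     # pre-patch: can hang here
    print(result.get())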

Note: I noticed that there are some thread-unsafe operations (the cache that 
can be modified from different threads, and thread states are modified also 
from different threads). While this isn't an issue with the current cPython 
implementation (GIL), I wonder if this should be fixed.

--
keywords: +patch
nosy: +neologix
Added file: http://bugs.python.org/file21644/pool_lifetime_close.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue10332
___



[issue8426] multiprocessing.Queue fails to get() very large objects

2011-04-13 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

It's documented in 
http://docs.python.org/library/multiprocessing.html#multiprocessing-programming 
:

Joining processes that use queues

Bear in mind that a process that has put items in a queue will wait before 
terminating until all the buffered items are fed by the “feeder” thread to the 
underlying pipe. (The child process can call the Queue.cancel_join_thread() 
method of the queue to avoid this behaviour.)

This means that whenever you use a queue you need to make sure that all items 
which have been put on the queue will eventually be removed before the process 
is joined. Otherwise you cannot be sure that processes which have put items on 
the queue will terminate. Remember also that non-daemonic processes will be 
automatically be joined.


Suggesting to close.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8426
___



[issue10121] test_multiprocessing stuck in test_make_pool if run in a loop

2011-04-13 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

It's probably a duplicate of http://bugs.python.org/issue8428
It would be nice if you could try to reproduce it with a py3k snapshot though, 
just to be sure.

--
nosy: +neologix

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue10121
___



[issue11790] transient failure in test_multiprocessing.WithProcessesTestCondition

2011-04-11 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

One possible cause for those intermittent failures is the preemption of a 
thread while waiting on the condition:

 def wait(self, timeout=None):
   233 assert self._lock._semlock._is_mine(), \
   234'must acquire() condition before using wait()'
   235 
   236 # indicate that this thread is going to sleep
   237 self._sleeping_count.release()
   238 
   239 # release lock
   240 count = self._lock._semlock._count()
   241 for i in range(count):
   242 self._lock.release()
   243 
-- here
   244 try:
   245 # wait for notification or timeout
   246 ret = self._wait_semaphore.acquire(True, timeout)


For example, suppose the last thread/process is preempted after having 
released the condition's lock (and hence after having performed an "up" on the 
"sleeping" semaphore earlier in the f function), but before waiting on the 
condition's semaphore. Since the main thread only waits 0.1s before locking 
the condition and performing a notify_all on it (it will proceed since all the 
threads performed an "up" on "sleeping"), only the threads already waiting on 
the condition will be woken up; this last thread won't be woken up, triggering 
a failure in this assertion:
   764 self.assertReturnsIfImplemented(0, get_value, woken)
with woken.get_value() == 5

It's just a guess, but I'd suggest increasing the sleep before trying to signal 
the condition a bit:

   762 # check no process/thread has woken up
   763 time.sleep(10 * DELTA)

--
nosy: +neologix

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11790
___
___
Python-bugs-list mailing list
Unsubscribe: 
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11790] transient failure in test_multiprocessing.WithProcessesTestCondition

2011-04-11 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

Sorry, wrong copy-paste, the failing assertion will of course be this one:
   773 self.assertReturnsIfImplemented(6, get_value, woken)

since woken.get_value() == 5

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11790
___



[issue8428] buildbot: test_multiprocessing timeout (test_notify_all? test_pool_worker_lifetime?)

2011-04-10 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

I think those lockups are due to a race in the Pool shutdown code.
In Lib/multiprocessing/pool.py:

def close(self):
    debug('closing pool')
    if self._state == RUN:
        self._state = CLOSE
        self._worker_handler._state = CLOSE
        self._taskqueue.put(None)

We set the current state to CLOSE and send None to the taskqueue, so that 
task_handler detects that we want to shut down the queue and sends None (a 
sentinel) to the inqueue for each worker process.
When a worker process receives this sentinel, it exits, and when Pool's join 
method is called, each process is joined successfully.
Now, there's a problem, because of the worker_handler thread.
This thread constantly starts new workers if existing ones exited after 
having completed their work:

def _handle_workers(pool):
    while pool._worker_handler._state == RUN and pool._state == RUN:
        pool._maintain_pool()
        time.sleep(0.1)
    debug('worker handler exiting')

where 

def _maintain_pool(self):
    """Clean up any exited workers and start replacements for them."""
    if self._join_exited_workers():
        self._repopulate_pool()

Imagine the following happens:

worker_handler checks that the pool is still running (state == RUN), but before 
calling maintain_pool, it's preempted (release of the GIL), and Pool's close() 
method is called:
state is set to CLOSE, None is put to the taskqueue, and worker threads exit.
Then, Pool's join is called:

def join(self):
    debug('joining pool')
    assert self._state in (CLOSE, TERMINATE)
    self._worker_handler.join()
    self._task_handler.join()
    self._result_handler.join()
    for p in self._pool:
        p.join()


this blocks until worker_handler exits. This thread sooner or later resumes and 
calls maintain_pool.
maintain_pool calls repopulate_pool, which creates new worker 
threads/processes.
Then, worker_handler checks the current state, sees CLOSE, and exits.
Then, Pool's join blocks there:

for p in self._pool:
    p.join()

since the newly created processes never receive the sentinels (already consumed 
by the previous worker processes)...

This race can be reproduced almost every time by just adding:


def _handle_workers(pool):
    while pool._worker_handler._state == RUN and pool._state == RUN:
+       time.sleep(1)
        pool._maintain_pool()
        time.sleep(0.1)
    debug('worker handler exiting')

Then something as simple as this will block:

p = multiprocessing.Pool(3)
p.close()
p.join()

I still have to think of a clean way to solve this.

--
nosy: +neologix

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8428
___



[issue8428] buildbot: test_multiprocessing timeout (test_notify_all? test_pool_worker_lifetime?)

2011-04-10 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

Attached is a patch fixing this race, and a similar one in Pool's terminate.

--
keywords: +patch
Added file: http://bugs.python.org/file21608/pool_shutdown_race.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8428
___
diff -r bbfc65d05588 Lib/multiprocessing/pool.py
--- a/Lib/multiprocessing/pool.py   Thu Apr 07 10:48:29 2011 -0400
+++ b/Lib/multiprocessing/pool.py   Sun Apr 10 23:52:22 2011 +0200
@@ -322,6 +322,8 @@
         while pool._worker_handler._state == RUN and pool._state == RUN:
             pool._maintain_pool()
             time.sleep(0.1)
+        # send sentinel to stop workers
+        pool._taskqueue.put(None)
         debug('worker handler exiting')
 
     @staticmethod
@@ -440,7 +442,6 @@
         if self._state == RUN:
             self._state = CLOSE
             self._worker_handler._state = CLOSE
-            self._taskqueue.put(None)
 
     def terminate(self):
         debug('terminating pool')
@@ -474,7 +475,6 @@
 
         worker_handler._state = TERMINATE
         task_handler._state = TERMINATE
-        taskqueue.put(None)                 # sentinel
 
         debug('helping task handler/workers to finish')
         cls._help_stuff_finish(inqueue, task_handler, len(pool))
@@ -484,6 +484,11 @@
         result_handler._state = TERMINATE
         outqueue.put(None)                  # sentinel
 
+        # we must wait for the worker handler to exit before terminating
+        # workers because we don't want workers to be restarted behind our back
+        debug('joining worker handler')
+        worker_handler.join()
+
         # Terminate workers which haven't already finished.
         if pool and hasattr(pool[0], 'terminate'):
             debug('terminating workers')



[issue11757] test_subprocess.test_communicate_timeout_large_ouput failure on select(): negative timeout?

2011-04-09 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

 Oh, I didn't know. In this case, is my commit 3664fc29e867 correct? I
 think that it is, because without the patch, subprocess may call poll()
 with a negative timeout, and so it is no more a timeout at all.


Yes, it looks correct.
But I think there are a couple places left where functions can be
called with a negative timeout, for example here :

    stdout, stderr = self._communicate_with_select(input, endtime,
                                                   orig_timeout)

    self.wait(timeout=self._remaining_time(endtime))

or here:

    if self.stdout is not None:
        self.stdout_thread.join(self._remaining_time(endtime))
        if self.stdout_thread.isAlive():

Also, it might be simpler and cleaner to factorize the raising of the
TimeoutExpired exception inside _remaining_time, instead of scattering
this kind of check around the file:

    remaining = self._remaining_time(endtime)
    if remaining <= 0:
        raise TimeoutExpired(self.args, timeout)

merging what's done in _check_timeout
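
A rough sketch of what that factorization could look like (the helper and
argument names are assumptions for illustration, not the actual subprocess
internals):

import time
from subprocess import TimeoutExpired

def remaining_time(args, endtime, orig_timeout):
    # compute the time left before the deadline, raising TimeoutExpired
    # in one place instead of checking at every call site
    remaining = endtime - time.time()
    if remaining <= 0:
        raise TimeoutExpired(args, orig_timeout)
    return remaining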

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11757
___



[issue6931] dreadful performance in difflib: ndiff and HtmlDiff

2011-04-08 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

 Check also this:

 http://bugs.python.org/issue11740

You should indicate it as duplicate.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6931
___



[issue11757] test_subprocess.test_communicate_timeout_large_ouput failure on select(): negative timeout?

2011-04-07 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

It seems to have fixed the failure, no ?
I don't know what the policy is regarding syscall parameter checks, but
I think it'd be better to check that the timeout passed to select is
not negative, and raise an exception otherwise, instead of silently
storing it into struct timeval (with an overflow) before passing it to
select.
Attached is a patch + test that does just that.

--
keywords: +patch
Added file: http://bugs.python.org/file21566/select_negative_timeout.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11757
___
diff -r bbfc65d05588 Lib/test/test_select.py
--- a/Lib/test/test_select.py   Thu Apr 07 10:48:29 2011 -0400
+++ b/Lib/test/test_select.py   Thu Apr 07 21:06:59 2011 +0200
@@ -20,6 +20,7 @@
         self.assertRaises(TypeError, select.select, [self.Nope()], [], [])
         self.assertRaises(TypeError, select.select, [self.Almost()], [], [])
         self.assertRaises(TypeError, select.select, [], [], [], "not a number")
+        self.assertRaises(ValueError, select.select, [], [], [], -1)
 
     def test_returned_list_identity(self):
         # See issue #8329
diff -r bbfc65d05588 Modules/selectmodule.c
--- a/Modules/selectmodule.c    Thu Apr 07 10:48:29 2011 -0400
+++ b/Modules/selectmodule.c    Thu Apr 07 21:06:59 2011 +0200
@@ -234,6 +234,11 @@
                             "timeout period too long");
             return NULL;
         }
+        if (timeout < 0) {
+            PyErr_SetString(PyExc_ValueError,
+                            "timeout must be non-negative");
+            return NULL;
+        }
         seconds = (long)timeout;
         timeout = timeout - (double)seconds;
         tv.tv_sec = seconds;



[issue11757] test_subprocess.test_communicate_timeout_large_ouput failure on select(): negative timeout?

2011-04-07 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

 You may also patch poll_poll().


Poll accepts negative timeout values, since it's the only way to
specify an infinite wait (contrary to select, which can be passed
NULL).

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11757
___



[issue11766] test_multiprocessing failure (test_pool_worker_lifetime)

2011-04-06 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

Does this only happen on Cygwin buildbots ?
If yes, then it might simply be an issue with Cygwin's fork implementation, 
which is much slower than a native fork.
Right now, the test waits 0.5s before checking that the processes are started, 
after repopulating the pool. While 0.5s is normally way enough for forking a 
couple processes, it seems that under Cygwin this can take a surprising amount 
of time, see http://old.nabble.com/Slow-fork-issue---Win-x64-td19538601.html 
and also http://superuser.com/questions/133313/can-i-speed-up-cygwins-fork
Unless I misunderstand their benchmarks, the fork (+exec) rate of 'date' from a 
shell can be as low as 5/sec, so I can only guess what forking cpython would 
take.
Maybe we could try to increase the timeout before checking the PIDs:

+    countdown = 10
-    countdown = 10
     while countdown and not all(w.is_alive() for w in p._pool):
         countdown -= 1
         time.sleep(DELTA)

--
nosy: +neologix

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11766
___



[issue11757] test_subprocess failure

2011-04-04 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

_remaining_time doesn't check that endtime > current time and can return a 
negative number, which would trigger an EINVAL when passed to select 
(select_select doesn't seem to check for negative doubles).
Note that a check is performed through _check_timeout but after having called 
select, so there are at least two possible ways to get this error:
The process blocks a little before calling select for the first time. This can 
at least happen here:
if self.stdin and not self._communication_started:
    # Flush stdio buffer.  This might block, if the user has
    # been writing to .stdin in an uncontrolled fashion.
    self.stdin.flush()
    if not input:
        self.stdin.close()

There's also a short race window if the endtime deadline expires between the 
call to _check_timeout and remaining_time.

--
nosy: +neologix

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11757
___



[issue11277] test_zlib crashes under Snow Leopard buildbot

2011-04-04 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

Is the SIGBUS generated on the first page access ?
How much memory does this buildbot have ?

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11277
___



[issue11753] test_sendall_interrupted() of test_socket hangs on FreeBSD

2011-04-03 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

This test assumes that send will necessarily return if interrupted by a signal, 
but the kernel can automatically restart the syscall when no data has been 
committed (instead of returning -1 with errno set to EINTR).
And, AFAIK, that's exactly what FreeBSD does, see 
http://www.freebsd.org/cgi/man.cgi?query=siginterrupt&apropos=0&sektion=0&manpath=FreeBSD+8.2-RELEASE&format=html :

The siginterrupt() function is used to change the system call restart
 behavior when a system call is interrupted by the specified signal.  If
 the flag is false (0), then system calls will be restarted if they are
 interrupted by the specified signal and no data has been transferred yet.
 System call restart has been the default behavior since 4.2BSD, and is
 the default behaviour for signal(3) on FreeBSD.


And http://www.gsp.com/cgi-bin/man.cgi?section=2&topic=sigaction :


If a signal is caught during the system calls listed below, the call may be 
forced to terminate with the error EINTR, the call may return with a data 
transfer shorter than requested, or the call may be restarted. Restart of 
pending calls is requested by setting the SA_RESTART bit in sa_flags. The 
affected system calls include open(2), read(2), write(2), sendto(2), 
recvfrom(2), sendmsg(2) and recvmsg(2) on a communications channel or a slow 
device (such as a terminal, but not a regular file) and during a wait(2) or 
ioctl(2). However, calls that have already committed are not restarted, but 
instead return a partial success (for example, a short read count).


So if the signal arrives while some data has been transferred, send will return 
with a partial write, but if the signal arrives before any data has been 
written, then you'll never see EINTR and remain stuck forever (unless 
SA_RESTART is unset).
Note that POSIX seems to require write to return with EINTR if interrupted 
before any data is written, see 
http://pubs.opengroup.org/onlinepubs/009695399/functions/write.html :


If write() is interrupted by a signal before it writes any data, it shall 
return -1 with errno set to [EINTR].


But send and sendto man pages don't require this behaviour.
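
As a side note, a test that really wants the EINTR behaviour can ask for it
explicitly; a minimal sketch, assuming the platform honours siginterrupt(3):

import signal

def handler(signum, frame):
    pass

signal.signal(signal.SIGALRM, handler)
# clear SA_RESTART for SIGALRM, so that a blocked send() really
# returns -1 with errno set to EINTR instead of being restarted
signal.siginterrupt(signal.SIGALRM, True)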

--
nosy: +neologix

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11753
___



[issue11727] Add a --timeout option to regrtest.py using the faulthandler module

2011-03-31 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

 There is something interesting in this output: the test uses a subprocess and 
 we only have the traceback of the parent. It may be nice to have the trace of 
 the child process. It might be possible by sending a signal to the child 
 process (but how can we get the list of the child processes in a C signal 
 handler?).

I don't think you can find that, but you could send a signal to the whole 
process group:
if (getpgrp() == getpid()) {
    kill(-getpgrp(), signal);
}

The getpgrp() == getpid() check makes sure that you'll only do that if the current 
process is the group leader (and it's async-safe). You'll probably want to 
block the signal in the parent's handler first.
Note that it won't work if your child process calls setsid(), of course 
(start_new_session argument to Popen).
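
For completeness, the same check can be done from Python when async-signal-safety
is not a concern (a sketch; SIGCONT is just a harmless example signal):

import os
import signal

def signal_group(sig=signal.SIGCONT):
    # only signal the whole group if we are its leader
    if os.getpgrp() == os.getpid():
        os.killpg(os.getpgrp(), sig)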

--
nosy: +neologix

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11727
___



[issue8052] subprocess close_fds behavior should only close open fds

2011-03-30 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

 I wonder whether the Java people are simply unaware of the potential problem?
 Or perhaps they have checked the Linux and Solaris implementations of 
 readdir()
 and confirmed that it is in fact safe on those platforms. Even if this is the
 case, I would be wary of doing things the same way - there's no guarantee that
 the implementation won't change out from underneath us.

The problem is not so much readdir: if you look at the source code
(http://fxr.googlebit.com/source/lib/libc/gen/readdir.c), it doesn't
do much apart from locking a mutex private to the related DIR *, so as
long as you pass it a DIR * not referenced elsewhere (which would be
the case since it would call opendir between the fork and exec), it
should be ok. The man page
(http://pubs.opengroup.org/onlinepubs/007908799/xsh/readdir.html) also
makes it clear:
"After a call to fork(), either the parent or child (but not both)
may continue processing the directory stream using readdir(),
rewinddir() or seekdir().  If both the parent and child processes use
these functions, the result is undefined."

The problem is more with opendir, which needs to allocate memory for
the struct dirent before calling getdents syscall.

I agree with you, we should definitely favor correctness over efficiency.

As for the other approach, I'm not aware of any portable way to
determine if a program is multi-threaded. Also, as noted by Victor,
there might be room for some subtle races (Python-registered signal
handlers are called synchronously from the main eval loop with the GIL
held, so I don't think there should be a problem there, but you might
have a problem with C-extension registered signal handlers).

Finally, looking at this thread
http://lists.freebsd.org/pipermail/freebsd-hackers/2007-July/021132.html,
it seems that some closefrom implementations are definitely not
async-safe, which is a pity...

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8052
___



[issue6085] Logging in BaseHTTPServer.BaseHTTPRequestHandler causes lag

2011-03-28 Thread Charles-Francois Natali

Changes by Charles-Francois Natali neolo...@free.fr:


Removed file: http://bugs.python.org/file17081/base_http_server_fqdn_lag.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue6085
___



[issue8052] subprocess close_fds behavior should only close open fds

2011-03-27 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

If you're suggesting to set FDs CLOEXEC by default, I think it's neither 
possible nor reasonable:
- you have to take into account not only files, but also pipes, sockets, etc
- there's no portable way to e.g. open a file and set it CLOEXEC atomically
- first and foremost, it's going to break a lot of existing code, for example, 
pipe + fork, accept + fork, etc
As for the dedicated syscalls, there's already been some discussion about 
closefrom and friends, but Gregory did some research and it looked like those 
are not async-safe - which, if it's really the case, renders those calls mostly 
useless.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8052
___



[issue8052] subprocess close_fds behavior should only close open fds

2011-03-27 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

Ooops, it's of course not going to break code containing accept + fork or pipe 
+ fork, you obviously also need an execve ;-)
But the point is that you can't change the semantics of FDs being inheritable 
across an execve (think about inetd for example).

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8052
___



[issue11650] CTRL-Z causes interpreter exit

2011-03-23 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

What's the problem here ?
CTRL-Z causes the controlling terminal to send a SIGTSTP to the process, and 
the default handler stops the process, pretty much like a SIGSTOP.
If you don't want that to happen:
import signal
signal.signal(signal.SIGTSTP, signal.SIG_IGN)

--
nosy: +neologix

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11650
___



[issue11650] CTRL-Z causes interpreter exit

2011-03-23 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

I'm still not sure I understand the problem.
- when you hit CTRL-Z, the process is put in background, since it receives a 
SIGTSTP : normal
- when you put it in foreground with 'fg', it doesn't resume ? Did you try to 
hit ENTER to have sys.ps1 '>>> ' printed to stdout ? Or did the process exit ?

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11650
___



[issue11650] CTRL-Z causes interpreter exit

2011-03-23 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

In that case, it's likely due to the way OS-X handles interrupted syscalls.
Under Linux, getchar and friends (actually read with default SA_RESTART) won't 
return EINTR on (SIGSTOP|SIGTSTP)/SIGCONT.
Under OS-X, it seems that e.g. getchar (read) does return EOF with errno set to 
EINTR, in which case the interactive interpreter will exit, if errno is not 
checked.
Out of curiosity, could you try the C snippet:

#include <stdio.h>

int main(int argc, char *argv[])
{
    int c;

    if ((c = getchar()) == EOF) {
        perror("getchar");
    }

    return 0;
}

And interrupt it with CTRL-Z ?

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11650
___



[issue11650] CTRL-Z causes interpreter exit

2011-03-23 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

my_fgets in Parser/myreadline.c is broken:
There's a comment saying that fgets is retried on EINTR, but the code doesn't 
retry. It used to in older cPython versions, but there was also a bug, so my 
guess is that this bug has been here for a long time.
Could you try with the attached patch ?
It's just a quick'n dirty patch, but it should fix it.
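
The idea behind the patch, expressed as a Python-level sketch rather than the
actual C change to my_fgets:

import errno

def readline_retrying(f):
    # retry the read when a signal interrupts it, instead of
    # treating EINTR as end-of-file
    while True:
        try:
            return f.readline()
        except IOError as e:
            if e.errno != errno.EINTR:
                raise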

--
keywords: +patch
Added file: http://bugs.python.org/file21358/fgets_eintr.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11650
___



[issue11459] Python select.select does not correctly report read readyness

2011-03-10 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

Could you try with the attached patch ?
The problem is that subprocess silently replaces bufsize=0, so child.stdout is 
actually buffered, and when you read just one byte, everything that's available 
for reading is read into the Python object's internal buffer. Then, select/poll doesn't 
see the pipe as ready for reading, since everything has already been read.
Mixing buffered I/O and select leads to trouble, you're right to pass 
bufsize=0, but I don't know why subprocess goes out of its way and buffers it 
anyway:
if bufsize == 0:
    bufsize = 1  # Nearly unbuffered (XXX for now)
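
In the meantime, one way to sidestep the problem entirely is to poll and read
on the raw file descriptor, so that select() and a Python-level buffer can
never disagree about what is pending. A sketch (an illustration, not the
attached patch):

import os
import select
import subprocess

p = subprocess.Popen(['echo', 'ready'], stdout=subprocess.PIPE, bufsize=0)
while True:
    rlist, _, _ = select.select([p.stdout], [], [], 5.0)
    if not rlist:
        break                              # timeout
    chunk = os.read(p.stdout.fileno(), 1)  # bypasses any Python-level buffer
    if not chunk:
        break                              # EOF
p.wait()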

--
keywords: +patch
nosy: +neologix
Added file: http://bugs.python.org/file21068/subprocess_buffer.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11459
___



[issue8995] Performance issue with multiprocessing queue (3.1 VS 2.6)

2011-03-10 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

Could you try with Python 3.2 ?
In 3.1, the only available pickle implementation was in pure python: with 
cPickle (2.7) or _pickle (3.2), it should be much faster.

--
nosy: +neologix

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue8995
___



[issue11432] webbrowser.open on unix fails.

2011-03-08 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

The problem lies here:

/* Close pipe fds. Make sure we don't close the same fd more than */
/* once, or standard fds. */
if (p2cread > 2) {
    POSIX_CALL(close(p2cread));
}
if (c2pwrite > 2) {
    POSIX_CALL(close(c2pwrite));
}
if (errwrite != c2pwrite && errwrite > 2) {
    POSIX_CALL(close(errwrite));
}

If p2cread == c2pwrite (which is the case here since /dev/null is passed as 
stdin and stderr), we end up closing the same FD twice, hence the EBADF.
Just passing 
-if (c2pwrite > 2) {
+if (c2pwrite > 2 && c2pwrite != p2cread) {
     POSIX_CALL(close(c2pwrite));
 }

solves this (but you probably also want to check for (errwrite != p2cread) when 
closing errwrite).

Note that the Python implementation uses a set to avoid closing the same FD 
twice:

# Close pipe fds. Make sure we don't close the
# same fd more than once, or standard fds.
closed = set()
for fd in [p2cread, c2pwrite, errwrite]:
    if fd > 2 and fd not in closed:
        os.close(fd)
        closed.add(fd)

It might be cleaner to use an fd_set, i.e.:
fd_set set;
FD_ZERO(&set);
FD_SET(0, &set);
FD_SET(1, &set);
FD_SET(2, &set);
if (!FD_ISSET(p2cread, &set)) {
    POSIX_CALL(close(p2cread));
    FD_SET(p2cread, &set);
}
if (!FD_ISSET(c2pwrite, &set)) {
    POSIX_CALL(close(c2pwrite));
    FD_SET(c2pwrite, &set);
}
if (!FD_ISSET(errwrite, &set)) {
    POSIX_CALL(close(errwrite));
    FD_SET(errwrite, &set);
}

But maybe it's just too much (and also, fd_set can be defined in different 
header files, and while I'm sure it's async-safe on Linux, I don't know if it's 
required as part of a standard...).

--
nosy: +neologix

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11432
___



[issue11432] webbrowser.open on unix fails.

2011-03-08 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

Attached is a patch checking that no FD is closed more than once when
closing pipe FDs, along with an update for test_subprocess.

--
keywords: +patch
Added file: http://bugs.python.org/file21053/subprocess_same_fd.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11432
___
Index: Lib/test/test_subprocess.py
===
--- Lib/test/test_subprocess.py (revision 88766)
+++ Lib/test/test_subprocess.py (working copy)
@@ -292,6 +292,32 @@
         tf.seek(0)
         self.assertStderrEqual(tf.read(), b"appleorange")
 
+    def test_stdin_stdout_filedes(self):
+        # capture stdin and stdout to the same open file
+        tf = tempfile.TemporaryFile()
+        self.addCleanup(tf.close)
+        p = subprocess.Popen([sys.executable, "-c",
+                              'import sys;'
+                              'sys.stdout.write("apple");'],
+                             stdin=tf,
+                             stdout=tf)
+        p.wait()
+        tf.seek(0)
+        self.assertEqual(tf.read(), b"apple")
+
+    def test_stdin_stderr_filedes(self):
+        # capture stdin and stderr to the same open file
+        tf = tempfile.TemporaryFile()
+        self.addCleanup(tf.close)
+        p = subprocess.Popen([sys.executable, "-c",
+                              'import sys;'
+                              'sys.stderr.write("apple");'],
+                             stdin=tf,
+                             stderr=tf)
+        p.wait()
+        tf.seek(0)
+        self.assertEqual(tf.read(), b"apple")
+
     def test_stdout_filedes_of_stdout(self):
         # stdout is set to 1 (#1531862).
         cmd = r"import sys, os; sys.exit(os.write(sys.stdout.fileno(), b'.\n'))"
Index: Modules/_posixsubprocess.c
===================================================================
--- Modules/_posixsubprocess.c  (revision 88766)
+++ Modules/_posixsubprocess.c  (working copy)
@@ -99,10 +99,10 @@
     if (p2cread > 2) {
         POSIX_CALL(close(p2cread));
     }
-    if (c2pwrite > 2) {
+    if (c2pwrite > 2 && c2pwrite != p2cread) {
         POSIX_CALL(close(c2pwrite));
     }
-    if (errwrite != c2pwrite && errwrite > 2) {
+    if (errwrite > 2 && errwrite != c2pwrite && errwrite != p2cread) {
         POSIX_CALL(close(errwrite));
     }



[issue11443] Zip password issue

2011-03-08 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

The check is done in py3k:

Traceback (most recent call last):
  File "/home/cf/test_zip.py", line 7, in <module>
    print(z.read("secretfile.txt"))
  File "/home/cf/py3k/Lib/zipfile.py", line 889, in read
    with self.open(name, "r", pwd) as fp:
  File "/home/cf/py3k/Lib/zipfile.py", line 975, in open
    raise RuntimeError("Bad password for file", name)
RuntimeError: ('Bad password for file', 'secretfile.txt')

Try with Python 3.2.

--
nosy: +neologix

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11443
___



[issue11185] test_wait4 error on AIX

2011-03-07 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

 wait4 without WNOHANG works fine. waitpid works fine even with WNOHANG.
 I don't know which workaround is the better.

As far as the test is concerned, it's of course better to use wait4
without WNOHANG in a test named test_wait4 (especially since waitpid
is tested elsewhere)...

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11185
___



[issue11185] test_wait4 error on AIX

2011-03-06 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

If test_wait3 and test_fork1 pass, then yes, it's probably an issue with AIX's 
wait4.
See http://fixunix.com/aix/84872-sigchld-recursion.html:


Replace the wait4() call with a waitpid() call...
like this:
for (n = 0; waitpid(-1, &status, WNOHANG) > 0; n++) ;

Or, compile the existing code with the BSD library:
cc -o demo demo.c -D_BSD -lbsd

Both will work...

The current problem is that the child process is not seen by the wait4() call,
so that when the signal is rearmed, it immediately goes (recursively) into the
child_handler() function.


So it seems that under AIX, posix_wait4 should be compiled with -D_BSD -lbsd.
Could you try this ?

If this doesn't do the trick, then avoiding passing WNOHANG could be the second 
option.

--
nosy: +neologix

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11185
___



[issue5091] Segfault in PyObject_Malloc(), address out of bounds

2011-03-05 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

Do you have a coredump ?
It'd be interesting to see this faulting address.
I didn't notice the first time, but in the OP case the address is definitely 
wrong: 0xecc778b7 is above PAGE_OFFSET (0xc0000000 on x86), so unless he's 
using a kernel with a 4G/4G split (and it's not the case since it seems to be a 
PAE kernel), it's definitely an invalid/corrupt address...

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue5091
___



[issue5091] Segfault in PyObject_Malloc(), address out of bounds

2011-03-05 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

 The code that is segfaulting is using pycrypto and sqlite3, so it may be that 
 a bug in one of these is trampling on something.  No idea how to investigate 
 any further. 

You could try valgrind:
$ valgrind --tool=memcheck --log-file=/tmp/output.log <prog> <arguments>

This slows down the execution, but can reveal certain types of memory 
corruption.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue5091
___



[issue11408] python locks: blocking acquire calls useless gettimeofday

2011-03-05 Thread Charles-Francois Natali

New submission from Charles-Francois Natali neolo...@free.fr:

While tracing a program using multiprocessing queues, I noticed that there were 
many calls to gettimeofday.
It turns out that acquire_timed, used by lock_PyThread_acquire_lock and 
rlock_acquire, always calls gettimeofday, even if no timeout argument is given.
Here's an example of the performance impact (I know it's a contrived example 
:-):

$ cat /tmp/test_lock.py 
import threading

lock = threading.Lock()

i = 0

def do_loop():
    global i
    for j in range(50):
        lock.acquire()
        i += 1
        lock.release()


t1 = threading.Thread(target=do_loop)
t2 = threading.Thread(target=do_loop)
t1.start()
t2.start()
t1.join()
t2.join()

With current code:
$ time ./python /tmp/test_lock.py 

real0m5.200s
user0m3.288s
sys 0m1.896s

Without useless calls to gettimeofday:
$ time ./python /tmp/test_lock.py 

real0m3.091s
user0m3.056s
sys 0m0.020s

Note that the actual gain depends on the kernel, hardware and clocksource in 
use (the above measurements are on a Linux 2.6.32 kernel, using acpi_pm as 
clocksource).

Attached is a patch removing useless calls to gettimeofday.
Note that I also removed the check for expired timeout following trylock in 
case of PY_LOCK_INTR, since according to 
http://pubs.opengroup.org/onlinepubs/009695399/functions/sem_wait.html,  it 
seems that only sem_wait is interruptible, not sem_trywait (e.g. on Linux, 
sem_trywait is implemented using a futex, which handles the non-contended case in 
user space). Windows locking primitives can't return PY_LOCK_INTR. Anyway, even 
if it happened once in a blue moon, we would just retry a trylock, which kind of 
makes sense.
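
The shape of the change, sketched in Python rather than the actual C of
acquire_timed (an illustration, not the attached patch):

import time

def acquire_timed(lock, timeout=None):
    # touch the clock only when a real timeout was requested
    if timeout is None:
        return lock.acquire()          # no gettimeofday at all
    deadline = time.time() + timeout
    while True:
        if lock.acquire(False):
            return True
        if time.time() >= deadline:    # clock read only on this path
            return False
        time.sleep(0.0005)             # crude back-off for the sketch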

--
files: lock_gettimeofday_py3k.diff
keywords: patch
messages: 130121
nosy: neologix, pitrou
priority: normal
severity: normal
status: open
title: python locks: blocking acquire calls useless gettimeofday
type: performance
Added file: http://bugs.python.org/file21007/lock_gettimeofday_py3k.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11408
___



[issue11406] There is no os.listdir() equivalent returning generator instead of list

2011-03-05 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

 Big dirs are really slow to read at once. If user wants to read items one by 
 one like here

The problem is that readdir doesn't read directory entries one at a time.
When you call readdir on an open DIR * for the first time, the libc calls the 
getdents syscall, requesting a whole bunch of dentry at a time (32768 on my 
box).
Then, the subsequent readdir calls are virtually free, and don't involve any 
syscall/IO at all (that is, until you hit the last cached dent, and then 
another getdents is performed until end of directory).

 Also, dir_cache in kernel used more effectively.

You mean the dcache ? Could you elaborate ?

 also, forgot... memory usage on big directories using list is a pain.

This would indeed be a good reason. Do you have numbers ?

 A generator listdir() geared towards performance should probably be able to 
 work in batches, e.g. read 100 entries at once and buffer them in some 
 internal storage (that might mean use readdir_r()).

That's exactly what readdir is doing :-)

 Bonus points if it doesn't release the GIL around each individual entry, but 
 also batches that.

Yes, since only one in 2**15 readdir calls actually blocks, that could be a nice 
optimization (I've no idea of the potential gain though).

 Big dirs are really slow to read at once.

Are you using EXT3 ?
There are records of performance issues with getdents on EXT2/3 filesystems, 
see:
http://lwn.net/Articles/216948/
and this nice post by Linus:
https://lkml.org/lkml/2007/1/7/149

Could you provide the output of an strace -ttT python <test script> (and 
also the time spent in os.listdir) ?

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11406
___



[issue11395] print(s) fails on Windows with long strings

2011-03-04 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

It's probably a Windows limitation regarding the number of bytes that can be 
written to stdout in one write.
As for the difference between Python versions, what does
python -c "import sys; print(sys.getsizeof('a'))" return ?
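
If it really is a limit on single writes, a workaround is to write in bounded
chunks; a sketch (the 8192-byte chunk size is an arbitrary assumption, not a
documented Windows limit):

import sys

def write_chunked(s, chunk=8192):
    # split one huge write into several smaller ones
    for i in range(0, len(s), chunk):
        sys.stdout.write(s[i:i + chunk])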

--
nosy: +neologix

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11395
___



[issue11382] some posix module functions

2011-03-03 Thread Charles-Francois Natali

Changes by Charles-Francois Natali neolo...@free.fr:


--
nosy: neologix
priority: normal
severity: normal
status: open
title: some posix module functions

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11382
___



[issue11382] some posix module functions unnecessarily release the GIL

2011-03-03 Thread Charles-Francois Natali

New submission from Charles-Francois Natali neolo...@free.fr:

Some posix module functions unnecessarily release the GIL.
For example, posix_dup, posix_dup2 and posix_pipe all release the GIL, but 
those are non-blocking syscalls (the don't imply any I/O, only modifying the 
process file descriptors table).
This leads to the famous convoy effect (see http://bugs.python.org/issue7946).

For example:

$ cat /tmp/test_dup2.py 
import os
import threading
import sys
import time


def do_loop():
    while True:
        pass

t = threading.Thread(target=do_loop)
t.setDaemon(True)
t.start()

f = os.open(sys.argv[1], os.O_RDONLY)

for i in range(4, 1000):
    os.dup2(f, i)

With GIL release/acquire:

$ time ./python /tmp/test_dup2.py /etc/fstab

real0m5.238s
user0m5.223s
sys 0m0.009s

$ time ./python /tmp/test_pipe.py 

real0m3.083s
user0m3.074s
sys 0m0.007s

Without GIL release/acquire:

$ time ./python /tmp/test_dup2.py /etc/fstab

real0m0.094s
user0m0.077s
sys 0m0.010s

$ time ./python /tmp/test_pipe.py 

real0m0.088s
user0m0.074s
sys 0m0.008s

--
title: some posix module functions - some posix module functions unnecessarily 
release the GIL

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11382
___



[issue11382] some posix module functions unnecessarily release the GIL

2011-03-03 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

I didn't even know that Windows had such calls.
But anyway, if we start releasing the GIL around each malloc call, then it's 
going to get really complicated:

static PyObject *
posix_geteuid(PyObject *self, PyObject *noargs)
{
    return PyLong_FromLong((long)geteuid());
}

PyLong_FromLong -> _PyLong_New -> PyObject_MALLOC, which can call malloc.

As for DuplicateHandle, I assume it's as fast as Unix's dup(2).

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11382
___



[issue11382] some posix module functions unnecessarily release the GIL

2011-03-03 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

Well, those are contrived examples showing the effect of the convoy effect 
induced by those unneeded GIL release/acquire: releasing and re-acquiring the 
GIL comes with a cost (e.g. under Linux, futexes are really fast in the 
uncontended case since they are handled in user space, but much slower when 
there's contention), and subverts the OS scheduling policy (forcing the thread 
to drop/re-acquire the GIL makes the thread block after having consumed a small 
amount of its time slice, and increases the context switching rate). I think 
that releasing and re-acquiring the GIL should only be done around potentially 
blocking calls.

 Do you have loops which contain no other syscall than os.dup2()?

No, but it's not a reason for penalizing threads that use dup, dup2 or pipe.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11382
___



[issue11382] some posix module functions unnecessarily release the GIL

2011-03-03 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

 Do you want to propose a patch?

Sure, if removing those calls to Py_BEGIN_ALLOW_THREADS/Py_END_ALLOW_THREADS 
seems reasonable (I might haved missed something obvious).
Just to be clear, I'm not at all criticizing the current GIL implementation, 
there's been a great work done on it.
I'm just saying that releasing and re-acquiring the GIL around fast syscalls is 
probably not a good idea.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11382
___



[issue11382] some posix module functions unnecessarily release the GIL

2011-03-03 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

2011/3/3 Antoine Pitrou rep...@bugs.python.org:

 Antoine Pitrou pit...@free.fr added the comment:

 Just to be clear, I'm not at all criticizing the current GIL
 implementation, there's been a great work done on it.
 I'm just saying that releasing and re-acquiring the GIL around fast
 syscalls is probaly not a good idea.

 If these syscalls aren't likely to yield control to another thread, then
 I agree there's no point in releasing the GIL around them.
 (but is it the case that they are always fast? for example, how about
 dup() on a network file system? or is it indifferent?)

The initial open can take a while, but once it's open, calling dup just
implies copying a reference to the open file (a pointer) into the file
descriptor table. No I/O is done (I tested it on an NFS mount).
Now, I don't know Windows at all, but I'm pretty sure that every
operating system does more or less the same thing, and that those
three calls (there might be others) don't block.


 --

 ___
 Python tracker rep...@bugs.python.org
 http://bugs.python.org/issue11382
 ___


--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11382
___



[issue11382] some posix module functions unnecessarily release the GIL

2011-03-03 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

Attached is a patch removing useless calls to
Py_BEGIN_ALLOW_THREADS/Py_END_ALLOW_THREADS for several posix
functions.
It's straightforward, but since I only have Linux boxes, I couldn't
test it under Windows.

--
keywords: +patch
Added file: http://bugs.python.org/file20988/gil_release_py3k.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11382
___
Index: Modules/posixmodule.c
===================================================================
--- Modules/posixmodule.c   (revision 88734)
+++ Modules/posixmodule.c   (working copy)
@@ -3041,9 +3041,7 @@
     if (!PyArg_ParseTuple(args, "ii", &which, &who))
         return NULL;
     errno = 0;
-    Py_BEGIN_ALLOW_THREADS
     retval = getpriority(which, who);
-    Py_END_ALLOW_THREADS
     if (errno != 0)
         return posix_error();
     return PyLong_FromLong((long)retval);
@@ -3063,9 +3061,7 @@
 
     if (!PyArg_ParseTuple(args, "iii", &which, &who, &prio))
         return NULL;
-    Py_BEGIN_ALLOW_THREADS
     retval = setpriority(which, who, prio);
-    Py_END_ALLOW_THREADS
     if (retval == -1)
         return posix_error();
     Py_RETURN_NONE;
@@ -5712,9 +5708,7 @@
         return NULL;
     if (!_PyVerify_fd(fd))
         return posix_error();
-    Py_BEGIN_ALLOW_THREADS
     fd = dup(fd);
-    Py_END_ALLOW_THREADS
     if (fd < 0)
         return posix_error();
     return PyLong_FromLong((long)fd);
@@ -5733,9 +5727,7 @@
         return NULL;
     if (!_PyVerify_fd_dup2(fd, fd2))
         return posix_error();
-    Py_BEGIN_ALLOW_THREADS
     res = dup2(fd, fd2);
-    Py_END_ALLOW_THREADS
     if (res < 0)
         return posix_error();
     Py_INCREF(Py_None);
@@ -6116,9 +6108,7 @@
     HFILE read, write;
     APIRET rc;
 
-    Py_BEGIN_ALLOW_THREADS
     rc = DosCreatePipe(&read, &write, 4096);
-    Py_END_ALLOW_THREADS
     if (rc != NO_ERROR)
         return os2_error(rc);
 
@@ -6127,9 +6117,7 @@
 #if !defined(MS_WINDOWS)
     int fds[2];
     int res;
-    Py_BEGIN_ALLOW_THREADS
     res = pipe(fds);
-    Py_END_ALLOW_THREADS
     if (res != 0)
         return posix_error();
     return Py_BuildValue("(ii)", fds[0], fds[1]);
@@ -6137,9 +6125,7 @@
     HANDLE read, write;
     int read_fd, write_fd;
     BOOL ok;
-    Py_BEGIN_ALLOW_THREADS
     ok = CreatePipe(&read, &write, NULL, 0);
-    Py_END_ALLOW_THREADS
     if (!ok)
         return win32_error("CreatePipe", NULL);
     read_fd = _open_osfhandle((Py_intptr_t)read, 0);



[issue11391] mmap write segfaults if PROT_WRITE bit is not set in prot

2011-03-03 Thread Charles-Francois Natali

New submission from Charles-Francois Natali neolo...@free.fr:

$ cat /tmp/test_mmap.py 
import mmap

m = mmap.mmap(-1, 1024, prot=mmap.PROT_READ|mmap.PROT_EXEC)
m[0] = 0
$ ./python /tmp/test_mmap.py 
Segmentation fault

When trying to perform a write, is_writable is called to check that we can 
indeed write to the mmaped area. is_writable just checks the access mode, and 
if it's not ACCESS_READ, we go ahead and proceed to the write.
The problem is that under Unix, it's possible to pass ACCESS_DEFAULT, and in 
that case no check is done on the prot value.
In that case, is_writable will return true (since ACCESS_DEFAULT != 
ACCESS_READ), but if prot doesn't include the PROT_WRITE bit, we'll segfault.
Attached is a patch including fix and specific test.

--
files: mmap_check_prot_py3k.diff
keywords: patch
messages: 130008
nosy: neologix
priority: normal
severity: normal
status: open
title: mmap write segfaults if PROT_WRITE bit is not set in prot
Added file: http://bugs.python.org/file20991/mmap_check_prot_py3k.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11391
___



[issue11391] mmap write segfaults if PROT_WRITE bit is not set in prot

2011-03-03 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

 Patch looks mostly good. Why do you use ~PROT_WRITE instead of 
 PROT_READ|PROT_EXEC as in your example?

Because I'm not sure that PROT_EXEC is supported by all platforms. See
http://pubs.opengroup.org/onlinepubs/007908799/xsh/mmap.html :
"The implementation will support at least the following values of
prot: PROT_NONE, PROT_READ, PROT_WRITE, and the inclusive OR of
PROT_READ and PROT_WRITE."
If PROT_EXEC is defined but unsupported, it's likely to be defined as
0, so passing PROT_READ|PROT_EXEC will just pass PROT_READ (which is
caught by the current version), whereas with ~PROT_WRITE we're sure
that the PROT_WRITE bit won't be set.

 (I'm unsure whether a POSIX implementation could refuse such a value)

Me neither. I'd guess that the syscall just performs a bitwise AND to
check the bits that are set, but you never know :-)
Maybe we could try this and see if a buildbot complains ?

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11391
___



[issue11314] Subprocess suffers 40% process creation overhead penalty

2011-03-02 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

 pitrou I think your analysis is wrong. These mmap() calls are for
 pitrou anonymous memory, most likely they are emitted by the libc's
 pitrou malloc() to get some memory from the kernel. In other words
 pitrou they will be blazingly fast.

 Are you sure? :-)

Well, it is fast. It's true that mmap is slower than brk, since the
kernel zero-fills the pages, but the overhead probably doesn't come
from this, but more likely from pymalloc or malloc, and also from the
call to _PyBytes_resize in posix_read when the number of bytes read is
smaller than what has been requested.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11314
___



[issue11314] Subprocess suffers 40% process creation overhead penalty

2011-03-02 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

 So, even though implemented in C, the file descriptor closing logic is still 
 quite costly!

Yes, see this recent issue: http://bugs.python.org/issue11284

In the reporter's case, it's much worse, because FreeBSD (at least the version 
he's using) has a SC_OPEN_MAX of 655000, so that passing close_fds=True bumps 
the Popen runtime to 3 seconds!
Some Unix offer a closefrom() or similar function to close all open file 
descriptors (and only open ones), which should be much faster.
I'm not aware of anything equivalent under Linux, but it might be worth looking 
into.

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11314
___



[issue11284] slow close file descriptors in subprocess, popen2, os.popen*

2011-03-02 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

Attached is a patch adding os.closefrom.
If closefrom(2) is available, it's used.
Otherwise, two options:
- if sysconf and _SC_OPEN_MAX are defined, we close each file descriptor up to 
_SC_OPEN_MAX
- if not, we choose a default value (256), and close every FD up to this value

subprocess has been converted to use it, and a test has been added in test_os.
Unfortunately, I only have Linux boxes, so I can't really test it.
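
A sketch of the fallback described above, in Python terms (the helper name and
the default of 256 are illustrative assumptions):

import os

def closefrom(lowfd, default_maxfd=256):
    # close every descriptor >= lowfd, bounded by SC_OPEN_MAX when
    # sysconf can tell us, by a conservative default otherwise
    try:
        maxfd = os.sysconf('SC_OPEN_MAX')
    except (AttributeError, ValueError, OSError):
        maxfd = default_maxfd
    for fd in range(lowfd, maxfd):
        try:
            os.close(fd)
        except OSError:
            pass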

Remarks:
- is it OK to provide posix_closefrom even though the underlying platform 
doesn't support it ?
- no error code is returned (since when closing every FD manually this wouldn't 
make much sense), even though closefrom(2) does return one
- for the test, I only close FDs > 7 to avoid closing stdin/stdout/stderr, but 
you might have a better idea
- this won't fix the problem for Linux, which doesn't have closefrom(2). Is it 
worth using /proc/self/fd interface ?

--
Added file: http://bugs.python.org/file20979/py3k_closefrom.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11284
___



[issue11284] slow close file descriptors in subprocess, popen2, os.popen*

2011-03-02 Thread Charles-Francois Natali

Changes by Charles-Francois Natali neolo...@free.fr:


Removed file: http://bugs.python.org/file20979/py3k_closefrom.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11284
___



[issue11284] slow close file descriptors in subprocess, popen2, os.popen*

2011-03-02 Thread Charles-Francois Natali

Changes by Charles-Francois Natali neolo...@free.fr:


Added file: http://bugs.python.org/file20980/py3k_closefrom.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11284
___



[issue11284] slow close file descriptors in subprocess, popen2, os.popen*

2011-03-02 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

Attached is a new version falling back to /proc/self/fd when closefrom(2) is 
not available (on Unix), working on Linux.
It's indeed much faster than the current approach.
Note that it's only used if _posixsubprocess is not available; otherwise the 
FDs are closed from _posixsubprocess.c:child_exec.
To make it available to _posixsubprocess, I was thinking of putting the 
closefrom code in a helper function, which would then be called from 
posix_closefrom and _posixsubprocess.
Is this the proper way to do it ?
If yes, what file would be a good recipient for that ?
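
For reference, the /proc/self/fd fallback looks roughly like this (a sketch
assuming Linux semantics, not the attached patch):

import os

def closefrom_procfs(lowfd):
    # note: the descriptor opened to list /proc/self/fd appears in the
    # listing itself, hence the EBADF-tolerant close
    for name in os.listdir('/proc/self/fd'):
        fd = int(name)
        if fd >= lowfd:
            try:
                os.close(fd)
            except OSError:
                pass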

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11284
___



[issue11284] slow close file descriptors in subprocess, popen2, os.popen*

2011-03-02 Thread Charles-Francois Natali

Changes by Charles-Francois Natali neolo...@free.fr:


Removed file: http://bugs.python.org/file20980/py3k_closefrom.diff

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11284
http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com



[issue11284] slow close file descriptors in subprocess, popen2, os.popen*

2011-03-02 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

 Your posix_closefrom() implementation as written today is not safe to call 
 between fork() and exec() due to the opendir/readdir implementation.  It can 
 and will hang processes at unexpected times.

Yeah, I removed the patch when I realized that.

 According to http://www.unix.com/man-page/All/3c/closefrom/ closefrom() is 
 not async-signal-safe. :(

Strange. I was sure closefrom was implemented with fcntl.

 I still want to find a way to do this nicely on Linux (even if it means me 
 going and implementing a closefrom syscall to be added to 2.6.39).

Well, arguably, CLOEXEC is designed to cope with this kind of situation.
closefrom is more like a hack (and mostly useless if it's really not 
async-safe).
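
That's also easy to do from Python; a minimal sketch of marking a descriptor
close-on-exec with fcntl:

import fcntl

def set_cloexec(fd):
    # set FD_CLOEXEC so the descriptor never leaks across execve()
    flags = fcntl.fcntl(fd, fcntl.F_GETFD)
    fcntl.fcntl(fd, fcntl.F_SETFD, flags | fcntl.FD_CLOEXEC)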

--

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue11284
___



[issue10900] bz2 module fails to uncompress large files

2011-03-01 Thread Charles-Francois Natali

Charles-Francois Natali neolo...@free.fr added the comment:

@Eric.Wolf

Could you try with this:

# Read in anohter chunk of the file
# NOTE: It's possible that an XML tag will be greater than buffsize
#   This will break in that situation
-newb = self.fp.read(self.bufpos)
+newb = self.fp.read(self.buffsize)

Also, could you provide the output of
strace -e mmap2,sbrk,brk python <script>

I could be completely wrong, but both in your case and in wrobell's case, 
there's a lot of _PyBytes_Resize going on, and given how PyObject_Realloc is 
implemented, this could lead to heavy heap fragmentation.

--
nosy: +neologix

___
Python tracker rep...@bugs.python.org
http://bugs.python.org/issue10900
___


