[issue8366] OS X universal builds fail on 2.7b1 and py3k with Don't know machine value for archs
New submission from Ned Deily n...@acm.org:

Prior to r79392 (trunk) and r79401 (py3k), the changes introduced for Issue8211, the gcc lines produced for an OS X universal build, say:

    ./configure --enable-universalsdk --with-universal-archs=32-bit ; make

might look like this:

    gcc-4.0 -c -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -fno-common -dynamic -DNDEBUG -g -O3 -I/tmp/_py/libraries/usr/local/include -I. -IInclude -I/private/tmp/_t/Include -isysroot /Developer/SDKs/MacOSX10.4u.sdk -DPy_BUILD_CORE -o Modules/python.o /private/tmp/_t/Modules/python.c

With the changes introduced by r79392 and r79401 to save and restore the original value of CFLAGS across the AC_PROG_CC macro call (see trunk configure.in at about line 496), the same gcc call now looks like this:

    gcc-4.0 -c -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -fno-common -dynamic -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.4u.sdk -DNDEBUG -g -O3 -I/tmp/_py/libraries/usr/local/include -I. -IInclude -I/private/tmp/_t/Include -isysroot /Developer/SDKs/MacOSX10.4u.sdk -DPy_BUILD_CORE -o Modules/python.o /private/tmp/_t/Modules/python.c

Note that there are now two sets of -arch and -isysroot flags, because the CFLAGS ones were being lost before but are now preserved.
While the duplicate flags do not seem to bother gcc and friends, the code in sysconfig.py that determines the machine name for OS X is not prepared to handle them, and the build fails when the interpreter starts up:

    Traceback (most recent call last):
      File "/private/tmp/_t/Lib/site.py", line 542, in <module>
        main()
      File "/private/tmp/_t/Lib/site.py", line 521, in main
        addbuilddir()
      File "/private/tmp/_t/Lib/site.py", line 118, in addbuilddir
        s = "build/lib.%s-%.3s" % (get_platform(), sys.version)
      File "/private/tmp/_t/Lib/sysconfig.py", line 646, in get_platform
        "Don't know machine value for archs=%r" % (archs,))
    ValueError: Don't know machine value for archs=('i386', 'i386', 'ppc', 'ppc')
    make: *** [sharedmods] Error 1

-- assignee: ronaldoussoren components: Macintosh messages: 102810 nosy: ned.deily, ronaldoussoren severity: normal status: open title: OS X universal builds fail on 2.7b1 and py3k with "Don't know machine value for archs" versions: Python 2.7, Python 3.2

___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8366 ___
___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
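The failing lookup in sysconfig.get_platform() can be made robust against the duplicated flags. Here is a minimal, hypothetical sketch: the machine-name table mirrors the real one in sysconfig, but machine_from_cflags and its regex are illustrative, not the committed fix.

```python
import re

def machine_from_cflags(cflags):
    """Map the -arch flags in CFLAGS to a fat-build machine name.

    Duplicate -arch entries (as produced when configure preserves the
    user CFLAGS across AC_PROG_CC) are collapsed before the lookup,
    which is what the sysconfig code of the time failed to do.
    """
    # Deduplicate and sort, so order and repetition of flags don't matter.
    archs = tuple(sorted(set(re.findall(r'-arch\s+(\S+)', cflags))))
    machine_map = {
        ('i386', 'ppc'): 'fat',
        ('i386', 'x86_64'): 'intel',
        ('i386', 'ppc', 'x86_64'): 'fat3',
        ('i386', 'ppc', 'ppc64', 'x86_64'): 'universal',
    }
    try:
        return machine_map[archs]
    except KeyError:
        raise ValueError("Don't know machine value for archs=%r" % (archs,))

# The duplicated flags from the report now resolve cleanly:
cflags = "-arch ppc -arch i386 -fno-common -arch ppc -arch i386 -DNDEBUG"
print(machine_from_cflags(cflags))  # 'fat'
```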
[issue8211] configure: ignore AC_PROG_CC hardcoded CFLAGS
Ned Deily n...@acm.org added the comment: Note these changes to restore CFLAGS have the side effect of breaking OS X universal builds; see Issue8366. -- nosy: +ned.deily ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8211 ___
[issue8365] 'readline' module fails to build on OS X - some recent change broke it
Ned Deily n...@acm.org added the comment:

Issue6877 (and subsequent fixes in Issue8066) allows the Python readline module to be built and linked with the OS X editline (libedit) library rather than with the GNU readline library (which is not included with OS X). However, the libedit included in versions of OS X prior to 10.5 is considered too broken to use here. If you do not pass configure a --with-universal-archs value other than 32-bit, and you do not explicitly set MACOSX_DEPLOYMENT_TARGET to another value, configure defaults to targeting 10.4 (or earlier), so the building of the readline module is skipped. You can check this:

    >>> from distutils.sysconfig import get_config_var
    >>> get_config_var('MACOSX_DEPLOYMENT_TARGET')
    '10.4'

(Whether this is the best default is another question.) As it stands, to be able to build the readline module, either: (1) supply the GNU readline library as a local library, or (2) ensure you are building with a deployment target of at least 10.5. For example:

    ./configure MACOSX_DEPLOYMENT_TARGET=10.6 ; make

Also note that option (2) is not available for 3.1.x, since the changes to support editline/libedit were not ported to it; they are, however, in 2.6.5, 2.7 (trunk), and 3.2 (py3k).

-- assignee: -> ronaldoussoren components: +Macintosh nosy: +ned.deily ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8365 ___
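The 10.5 cut-off described above boils down to a version comparison. A hypothetical helper (libedit_ok is not the name setup.py uses) sketches the integer-tuple idiom; comparing the raw strings would break once '10.10' needs to sort after '10.9'.

```python
def libedit_ok(deployment_target):
    """Return True if the OS X libedit is new enough for readline.

    Sketch of the deployment-target gate described in the message:
    targets below 10.5 mean the system libedit is too broken, so the
    readline module build is skipped.  Versions are compared as tuples
    of ints, not as strings.
    """
    return tuple(int(p) for p in deployment_target.split('.')) >= (10, 5)

print(libedit_ok('10.4'))  # False: readline build is skipped
print(libedit_ok('10.6'))  # True
```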
[issue8366] OS X universal builds fail on 2.7b1 and py3k with Don't know machine value for archs
Shashwat Anand anand.shash...@gmail.com added the comment:

I tried to reproduce it on trunk with:

    ./configure --enable-universalsdk --with-universal-archs=32-bit ; make

However, it did manage to build successfully. The relevant bits during installation:

    gcc -c -arch ppc -arch i386 -isysroot / -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -I. -IInclude -I./Include -DPy_BUILD_CORE -o Modules/python.o ./Modules/python.c
    [snip]
    Python build finished, but the necessary bits to build these modules were not found:
    bsddb185  gdbm  linuxaudiodev  ossaudiodev  readline  spwd  sunaudiodev
    To find the necessary bits, look in setup.py in detect_modules() for the module's name.
    Failed to build these modules:
    _bsddb  _locale
    running build_scripts
    [snip]
    /usr/bin/install -c -m 644 ./Tools/gdb/libpython.py python.exe-gdb.py

    12:52:55 l0nwlf-MBP:python-svn $ ./python.exe
    Python 2.7b1+ (trunk:79945M, Apr 11 2010, 12:46:28)
    [GCC 4.2.1 (Apple Inc. build 5646) (dot 1)] on darwin
    Type "help", "copyright", "credits" or "license" for more information.

-- nosy: +l0nwlf ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8366 ___
[issue8366] OS X universal builds fail on 2.7b1 and py3k with Don't know machine value for archs
Ned Deily n...@acm.org added the comment: Fails for me with py3k, trunk, and with the 2.7b1 tarball on 10.6.3 (Intel) and 10.5.8 (ppc). Your test output looks suspect; with the given configure values, the use of gcc-4.0 should be forced. Perhaps you used an existing build directory but did not do a "make clobber" and/or did not rm the previously cached configure values? -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8366 ___
[issue8241] py2_test_grammar.py contains invalid syntax for 2.6
Martin v. Löwis mar...@v.loewis.de added the comment: Benjamin, ISTM that the tests in lib2to3/tests/data/py2_test_grammar aren't run at all, as part of regrtest. If so, the entire file could be removed. -- components: +Installation -2to3 (2.x to 3.0 conversion tool) ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8241 ___
[issue8366] OS X universal builds fail on 2.7b1 and py3k with Don't know machine value for archs
Shashwat Anand anand.shash...@gmail.com added the comment:

I reinstalled with:

    make distclean; ./configure --enable-universalsdk --with-universal-archs=32-bit; make

on 10.6.2 (Intel), which I guess is correct. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8366 ___
[issue8365] 'readline' module fails to build on OS X - some recent change broke it
Shashwat Anand anand.shash...@gmail.com added the comment:

    ./configure MACOSX_DEPLOYMENT_TARGET=10.6 ; make

does the trick. However, this should happen by default rather than having to be set explicitly. Closing the issue. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8365 ___
[issue8365] 'readline' module fails to build on OS X - some recent change broke it
Changes by Shashwat Anand anand.shash...@gmail.com: -- status: open -> closed ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8365 ___
[issue8093] IDLE processes don't close
Stefan Krah stefan-use...@bytereef.org added the comment: For the record: In 2.7-alpha you do not even get to press [Restart Shell], since IDLE is not responding during the calculation. -- nosy: +skrah ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8093 ___
[issue8366] OS X universal builds fail on 2.7b1 and py3k with Don't know machine value for archs
Ned Deily n...@acm.org added the comment:

Chances are you do not have the 10.4u SDK installed; it is not installed by default by the Snow Leopard Xcode mpkg installer. If it is not installed, configure falls back to using / as the sysroot (see configure.in at around line 95). If you are going to build and test Python on OS X 10.5 or 10.6, you really need to have it installed:

    $ ls /Developer/SDKs/
    MacOSX10.4u.sdk/  MacOSX10.5.sdk/  MacOSX10.6.sdk/

Even without 10.4u installed, this should fail:

    make distclean; ./configure --enable-universalsdk=/Developer/SDKs/MacOSX10.6.sdk --with-universal-archs=32-bit MACOSX_DEPLOYMENT_TARGET=10.6

-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8366 ___
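The fallback behaviour described here can be pictured with a small hypothetical helper (pick_sysroot is illustrative only; the real logic lives in configure.in, not in Python):

```python
import os

def pick_sysroot(sdk='/Developer/SDKs/MacOSX10.4u.sdk'):
    # Mirror the described configure behaviour: prefer the requested
    # SDK, but quietly fall back to '/' when it is not installed --
    # which is why the earlier, accidentally-successful build showed
    # "-isysroot /" in its gcc line.
    return sdk if os.path.isdir(sdk) else '/'

print(pick_sysroot('/no/such/sdk'))  # '/'
```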
[issue8366] OS X universal builds fail on 2.7b1 and py3k with Don't know machine value for archs
Shashwat Anand anand.shash...@gmail.com added the comment:

The error was finally reproduced with:

    $ make distclean; ./configure --enable-universalsdk=/Developer/SDKs/MacOSX10.6.sdk --with-universal-archs=32-bit MACOSX_DEPLOYMENT_TARGET=10.6; make

    gcc -arch ppc -arch i386 -isysroot /Developer/SDKs/MacOSX10.6.sdk -u _PyMac_Error -o python.exe \
            Modules/python.o \
            libpython2.7.a -ldl -framework CoreFoundation
    Traceback (most recent call last):
      File "/Volumes/CoreHD/python-svn/Lib/site.py", line 544, in <module>
        main()
      File "/Volumes/CoreHD/python-svn/Lib/site.py", line 523, in main
        addbuilddir()
      File "/Volumes/CoreHD/python-svn/Lib/site.py", line 118, in addbuilddir
        s = "build/lib.%s-%.3s" % (get_platform(), sys.version)
      File "/Volumes/CoreHD/python-svn/Lib/sysconfig.py", line 646, in get_platform
        "Don't know machine value for archs=%r" % (archs,))
    ValueError: Don't know machine value for archs=('i386', 'i386', 'ppc', 'ppc')
    make: *** [sharedmods] Error 1

Also:

    $ ls /Developer/SDKs/
    MacOSX10.5.sdk  MacOSX10.6.sdk

-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8366 ___
[issue8367] test_winsound: failure on systems without soundcard
New submission from Stefan Krah stefan-use...@bytereef.org:

Got this test failure on Windows/qemu:

    ======================================================================
    FAIL: test_stopasync (__main__.PlaySoundTest)
    ----------------------------------------------------------------------
    Traceback (most recent call last):
      File "Lib\test\test_winsound.py", line 199, in test_stopasync
        None, winsound.SND_PURGE
    AssertionError: RuntimeError not raised

The problem is that PlaySound(None, winsound.SND_PURGE) does not raise on systems without a soundcard. The wrapped C function returns success in this case, so I think it's ok to go along with it and disable this assertion.

-- messages: 102821 nosy: skrah severity: normal status: open title: test_winsound: failure on systems without soundcard ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8367 ___
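In spirit, the proposed change turns a strict assertRaises into an "either outcome is acceptable" check. A hypothetical helper (this is a sketch of the idea, not the attached patch) makes that explicit:

```python
def assert_raises_or_succeeds(exc_type, func, *args):
    """Pass if func raises exc_type OR returns cleanly.

    Mirrors the spirit of the proposed test_winsound change: on boxes
    without a soundcard, PlaySound(None, SND_PURGE) returns success,
    so the test must tolerate both behaviours.  Any *other* exception
    still propagates as a failure.
    """
    try:
        func(*args)
    except exc_type:
        pass  # the soundcard-present case: the expected error was raised

def purge_with_sound_device():
    # Stands in for PlaySound on a machine WITH a soundcard.
    raise RuntimeError("Failed to play sound")

assert_raises_or_succeeds(RuntimeError, purge_with_sound_device)  # ok: raised
assert_raises_or_succeeds(RuntimeError, lambda: None)             # ok: succeeded
```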
[issue8367] test_winsound: failure on systems without soundcard
Stefan Krah stefan-use...@bytereef.org added the comment: Searching for the failure reveals that it has appeared sporadically on the buildbots, so I plan to apply the patch soon if there aren't any protests. -- assignee: -> skrah components: +Tests keywords: +patch priority: -> normal stage: -> patch review type: -> behavior versions: +Python 2.6, Python 2.7, Python 3.1, Python 3.2 Added file: http://bugs.python.org/file16866/test_winsound.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8367 ___
[issue8299] Improve GIL in 2.7
Kristján Valur Jónsson krist...@ccpgames.com added the comment:

I looked at ccbench. It's a great tool. I've added two features to it (see the attached patch): a -y option to turn off the do_yield option in throughput, and so measure thread scheduling without assistance; and the throughput option now also computes "balance", which is the standard deviation of the throughput of each thread normalized by the average.

I give you three results for throughput, to demonstrate the ROUNDROBIN_GIL implementation:

1) LEGACY_GIL, no forced switching:

    C:\pydev\python\trunk\PCbuild>python.exe ..\Tools\ccbench\ccbench.py -y -t
    == CPython 2.7a4+.0 (trunk) ==
    == AMD64 Windows on 'Intel64 Family 6 Model 23 Stepping 6, GenuineIntel' ==

    --- Throughput ---

    Pi calculation (Python)
    threads= 1:   672 iterations/s.      balance
    threads= 2:   597 ( 88%)             0.4243
    threads= 3:   603 ( 89%)             0.2475
    threads= 4:   596 ( 88%)             0.4776
    regular expression (C)
    threads= 1:   571 iterations/s.      balance
    threads= 2:   565 ( 98%)             0.6203
    threads= 3:   567 ( 99%)             1.6867
    threads= 4:   570 ( 99%)             1.1670
    SHA1 hashing (C)
    threads= 1:  1269 iterations/s.      balance
    threads= 2:  1268 ( 99%)             1.1470
    threads= 3:  1270 (100%)             0.6024
    threads= 4:  1263 ( 99%)             0.7419

2) LEGACY_GIL, with forced switching:

    C:\pydev\python\trunk\PCbuild>python.exe ..\Tools\ccbench\ccbench.py -t
    == CPython 2.7a4+.0 (trunk) ==
    == AMD64 Windows on 'Intel64 Family 6 Model 23 Stepping 6, GenuineIntel' ==

    --- Throughput ---

    Pi calculation (Python)
    threads= 1:   663 iterations/s.      balance
    threads= 2:   605 ( 91%)             0.0232
    threads= 3:   599 ( 90%)             0.1988
    threads= 4:   601 ( 90%)             0.4648
    regular expression (C)
    threads= 1:   568 iterations/s.      balance
    threads= 2:   562 ( 99%)             0.1737
    threads= 3:   571 (100%)             0.3950
    threads= 4:   566 ( 99%)             0.3158
    SHA1 hashing (C)
    threads= 1:  1275 iterations/s.      balance
    threads= 2:  1267 ( 99%)             0.7238
    threads= 3:  1271 ( 99%)             0.2405
    threads= 4:  1270 ( 99%)             0.1508

Using the forced do_yield helps balance things, but not much. We still have a 0.7 balance in SHA1 hashing for two threads.
3) Now, for ROUNDROBIN_GIL, no forced switching:

    C:\pydev\python\trunk\PCbuild>python.exe ..\Tools\ccbench\ccbench.py -t -y
    == CPython 2.7a4+.0 (trunk) ==
    == AMD64 Windows on 'Intel64 Family 6 Model 23 Stepping 6, GenuineIntel' ==

    --- Throughput ---

    Pi calculation (Python)
    threads= 1:   672 iterations/s.      balance
    threads= 2:   485 ( 72%)             0.0289
    threads= 3:   448 ( 66%)             0.0737
    threads= 4:   476 ( 70%)             0.0408
    regular expression (C)
    threads= 1:   569 iterations/s.      balance
    threads= 2:   551 ( 96%)             0.0505
    threads= 3:   551 ( 96%)             0.1637
    threads= 4:   551 ( 96%)             0.2020
    SHA1 hashing (C)
    threads= 1:  1271 iterations/s.      balance
    threads= 2:  1262 ( 99%)             0.0111
    threads= 3:  1207 ( 94%)             0.0143
    threads= 4:  1202 ( 94%)             0.0317

Notice the much better balance values, and this is without the forced sleep. Also note a lower throughput when computing pi with threads. This is because yielding every 100 opcodes now actually works, and the aforementioned instruction cache problem kicks in. Increasing the checkinterval to 1000 solves this:

    C:\pydev\python\trunk\PCbuild>python.exe ..\Tools\ccbench\ccbench.py -t -y -i1000
    == CPython 2.7a4+.0 (trunk) ==
    == AMD64 Windows on 'Intel64 Family 6 Model 23 Stepping 6, GenuineIntel' ==

    --- Throughput ---

    Pi calculation (Python)
    threads= 1:   673 iterations/s.      balance
    threads= 2:   628 ( 93%)             0.
    threads= 3:   603 ( 89%)             0.0284
    threads= 4:   606 ( 90%)             0.0328
    regular expression (C)
    threads= 1:   570 iterations/s.      balance
    threads= 2:   569 ( 99%)             0.2729
    threads= 3:   562 ( 98%)             0.6595
    threads= 4:   560 ( 98%)             1.2440
    SHA1 hashing (C)
    threads= 1:  1265 iterations/s.      balance
    threads= 2:  1256 ( 99%)             0.
    threads= 3:  1264 ( 99%)             0.0759
    threads= 4:  1255 ( 99%)             0.1309

If no one objects, I'd like to submit this changed ccbench.py to the trunk.

-- Added file: http://bugs.python.org/file16867/ccbench.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8299 ___
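For reference, the "balance" column can be computed exactly as described: the standard deviation of per-thread throughput, normalized by the average. This is an assumed reconstruction from the message's description, not the attached patch itself:

```python
import statistics

def balance(throughputs):
    """Per-thread throughput spread: stdev normalized by the mean.

    0.0 means every thread got an identical share of the GIL; larger
    values mean some threads were starved.  (Population stdev is an
    assumption; the patch may use the sample stdev instead.)
    """
    return statistics.pstdev(throughputs) / statistics.mean(throughputs)

print(balance([100, 100, 100, 100]))  # 0.0 -- perfectly fair
print(balance([10, 190]))             # 0.9 -- first thread heavily starved
```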
[issue6860] Inconsistent naming of custom command in setup.py help output
Éric Araujo mer...@netwok.org added the comment: Hello. Distutils being frozen, I’m reassigning to Distutils2. Not sure if I should make versions blank, 3.3 or third-party, so leaving it alone. Regards -- components: +Distutils2 -Distutils ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue6860 ___
[issue6650] sre_parse contains a confusing generic error message
Éric Araujo mer...@netwok.org added the comment: In the absence of better propositions, the message in the patch seems more helpful to me than the previous, especially because “lookbehind” is a search term that matches 0.1 wink on docs.python.org. So I’d apply this patch. -- nosy: +merwok ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue6650 ___
[issue6555] distutils config file should have the same name on both platforms and all scopes
Éric Araujo mer...@netwok.org added the comment: Let me add that os.path.expanduser is the Right Way™ to get a user’s home directory on POSIX too, since not every setup has a $HOME envvar or a /etc/passwd file. The only interface one should use is the pwd module (or getent in shell scripts), and so does os.path.expanduser. Reassigning to Distutils2. Regards -- components: +Distutils2 -Distutils ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue6555 ___
[issue8299] Improve GIL in 2.7
Kristján Valur Jónsson krist...@ccpgames.com added the comment:

FYI, here is the output using the unmodified Windows GIL, i.e. without my patch being active:

    C:\pydev\python\trunk\PCbuild>python.exe ..\Tools\ccbench\ccbench.py -t -y
    == CPython 2.7a4+.0 (trunk) ==
    == AMD64 Windows on 'Intel64 Family 6 Model 23 Stepping 6, GenuineIntel' ==

    --- Throughput ---

    Pi calculation (Python)
    threads= 1:   623 iterations/s.      balance
    threads= 2:   489 ( 78%)             0.0289
    threads= 3:   461 ( 74%)             0.0369
    threads= 4:   460 ( 73%)             0.0426
    regular expression (C)
    threads= 1:   515 iterations/s.      balance
    threads= 2:   548 (106%)             0.0771
    threads= 3:   532 (103%)             0.0556
    threads= 4:   523 (101%)             0.1132
    SHA1 hashing (C)
    threads= 1:  1188 iterations/s.      balance
    threads= 2:  1212 (102%)             0.0232
    threads= 3:  1198 (100%)             0.0250
    threads= 4:  1215 (102%)             0.0163

You see results virtually identical to the ROUNDROBIN_GIL implementation. This is just to demonstrate that Windows has had the ROUNDROBIN_GIL behaviour all along.

-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8299 ___
[issue8299] Improve GIL in 2.7
David Beazley d...@dabeaz.com added the comment: I must be missing something, but why, exactly, would you want multiple CPU-bound threads to yield every 100 ticks? Frankly, that sounds like a horrible idea that is going to hammer your system with excessive context-switching overhead and cache performance problems, an effect that you yourself have actually observed. The results of ccbench also show worse performance for the round-robin GIL because of this. Although the legacy GIL signals every 100 ticks, threads do not context-switch that rapidly. In fact, on single-CPU systems, they context-switch at about the same rate as the system time slice (5-10 milliseconds on most systems). The new GIL implemented by Antoine also does not rapidly switch CPU-bound threads. Again, I must be missing something, but I don't see how this round-robin GIL and all of this forced thread switching is anything that you would ever want, especially for CPU-bound threads. It seems to go against just about every design goal that people usually have for schedulers (especially the goal of minimizing context-switching overhead). Again, maybe I'm just being dense and missing something. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8299 ___
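For context, the cadence being argued about here is the 2.x check interval: sys.setcheckinterval(1000) is what ccbench's -i1000 run above effectively tunes. The time-sliced GIL that later shipped in 3.2 replaced the tick count with a wall-clock knob; a sketch of the 3.x equivalent:

```python
import sys

# 2.x tick-based knob (what the -i1000 ccbench run above tunes):
#   sys.setcheckinterval(1000)   # consider a switch every 1000 ticks
# 3.2+ time-based knob: coarser slices mean fewer forced switches and
# less of the context-switch/cache cost discussed in this message.
if hasattr(sys, 'setswitchinterval'):
    old = sys.getswitchinterval()   # default is 0.005 (5 ms)
    sys.setswitchinterval(0.01)     # request ~10 ms slices
    assert abs(sys.getswitchinterval() - 0.01) < 1e-6
    sys.setswitchinterval(old)      # restore the default
```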
[issue5315] signal handler never gets called
Charles-Francois Natali neolo...@free.fr added the comment:

I think two things can trigger this problem, both having to do with how signals are handled by the interpreter. Contrary to what you may think, when a signal is received, its handler is _not_ called immediately. Instead, it's Modules/signalmodule.c's signal_handler() that's called. This handler stores the reception of the signal inside a table, and schedules the execution of the associated handler for later:

    signal_handler(int sig_num)
    {
        [...]
        Handlers[sig_num].tripped = 1;
        /* Set is_tripped after setting .tripped, as it gets
           cleared in PyErr_CheckSignals() before .tripped. */
        is_tripped = 1;
        Py_AddPendingCall(checksignals_witharg, NULL);
        [...]
    }

checksignals_witharg() calls PyErr_CheckSignals(), which in turn calls the handler. The pending calls are checked periodically from the interpreter main loop, in Python/ceval.c: when _Py_Ticker reaches 0, we check for pending calls, and if there are any, we run them, hence checksignals_witharg, and hence the handler. This is actually a documented behaviour; quoting the signal documentation:

"Although Python signal handlers are called asynchronously as far as the Python user is concerned, they can only occur between the 'atomic' instructions of the Python interpreter. This means that signals arriving during long calculations implemented purely in C (such as regular expression matches on large bodies of text) may be delayed for an arbitrary amount of time."

But there's a race. Imagine this happens:
- a thread (or a process, for that matter) receives a signal
- signal_handler schedules the associated handler
- before _Py_Ticker reaches 0 and is checked from the interpreter main loop, a blocking call is made
- since the process is blocked in the call, the main eval loop doesn't run, and the handler doesn't get called until the process leaves the call and enters the main eval loop again; if the call doesn't return (e.g. select without timeout), then the process remains stuck forever

This problem can also happen even if the signal is sent after select is called:
- the main thread calls select
- the second thread runs, and sends a signal to the process
- the signal is not received by the main thread, but by the second thread
- the second thread schedules execution of the handler
- since the main thread is blocked in select, the handler never gets called

But this case is quite flaky, because the documentation warns you:

"Some care must be taken if both signals and threads are used in the same program. The fundamental thing to remember in using signals and threads simultaneously is: always perform signal() operations in the main thread of execution. Any thread can perform an alarm(), getsignal(), pause(), setitimer() or getitimer(); only the main thread can set a new signal handler, and the main thread will be the only one to receive signals (this is enforced by the Python signal module, even if the underlying thread implementation supports sending signals to individual threads). This means that signals can't be used as a means of inter-thread communication. Use locks instead."

Sending signals to a process with multiple threads is risky; you should use locks.

Finally, I think that the documentation should be rephrased. "and the main thread will be the only one to receive signals (this is enforced by the Python signal module, even if the underlying thread implementation supports sending signals to individual threads)" is false. What's guaranteed is that the signal handler will only be executed on behalf of the main thread, but any thread can _receive_ a signal.

And comments in Modules/signalmodule.c are misleading:

"We still have the problem that in some implementations signals generated by the keyboard (e.g. SIGINT) are delivered to all threads (e.g. SGI), while in others (e.g. Solaris) such signals are delivered to one random thread (an intermediate possibility would be to deliver it to the main thread -- POSIX?). For now, we have a working implementation that works in all three cases -- the handler ignores signals if getpid() isn't the same as in the main thread. XXX This is a hack."

Sounds strange. If only a thread other than the main thread receives the signal and you ignore it, then it's lost, isn't it? Furthermore, under Linux 2.6 and NPTL, getpid() returns the main thread PID even from another thread.

Peers?

-- nosy: +neologix ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue5315 ___
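The deferred-delivery scheme described above can be observed from Python directly: the C-level handler only records the signal, and the Python-level handler runs later, between bytecodes, in the main thread. A POSIX-only sketch (SIGUSR1 does not exist on Windows, and signal.raise_signal needs Python 3.8+; os.kill(os.getpid(), ...) is the older spelling):

```python
import signal

fired = []

def handler(signum, frame):
    # This Python-level handler is NOT run inside the C signal handler;
    # it runs later, between bytecodes, on behalf of the main thread.
    fired.append(signum)

signal.signal(signal.SIGUSR1, handler)
signal.raise_signal(signal.SIGUSR1)
# By the next bytecode check, the pending call has been executed:
assert fired == [signal.SIGUSR1]
```

If the main thread were parked in a blocking C call instead of executing bytecodes, that final check would be exactly where the handler fails to run, which is the race described above.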
[issue8299] Improve GIL in 2.7
David Beazley d...@dabeaz.com added the comment: Sorry, but I don't see how you can say that the round-robin GIL and the legacy GIL have the same behavior based solely on the result of a performance benchmark. Do you have any kind of thread scheduling trace that proves they are scheduling threads in exactly the same manner? Maybe they both have lousy performance, but for different reasons. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8299 ___
[issue8299] Improve GIL in 2.7
Antoine Pitrou pit...@free.fr added the comment:

> SHA1 hashing (C)
> threads= 1:  1275 iterations/s.      balance
> threads= 2:  1267 ( 99%)             0.7238
> threads= 3:  1271 ( 99%)             0.2405
> threads= 4:  1270 ( 99%)             0.1508
>
> Using the forced do_yield helps balance things, but not much.
> We still have a .7 balance in SHA1 hashing for two threads.

Which is not unreasonable, since SHA1 releases the GIL. The unbalance would be produced by the Windows scheduler, not by Python. Note: do_yield is not meant to balance things as much as to make measurements meaningful at all. Without switching at all during, say, 2 seconds, the numbers become totally worthless.

> If no one objects, I'd like to submit this changed ccbench.py to the trunk.

Please let me take a look.

-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8299 ___
[issue8368] Memory leak on multi-threaded PyObject_CallObject
New submission from Krauzi krauzi_g...@yahoo.de: Hi guys, I think this is a bug, and Matt from h...@python.org suggested I report it here. I attached a sample project where the memory leak occurs. It's a Visual Studio 2008 project and uses Windows threads, so you have to recompile when using Linux (makefile not included). 500 thread calls result in a memory leak of about 1 MB. -- files: Python Bug.zip messages: 102832 nosy: Krauzi severity: normal status: open title: Memory leak on multi-threaded PyObject_CallObject versions: Python 2.6 Added file: http://bugs.python.org/file16868/Python Bug.zip ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8368 ___
[issue7443] test.support.unlink issue on Windows platform
Tim Golden m...@timgolden.me.uk added the comment:

I put together a trivial patch against the 2.7 trunk (basically: I added an os.rename before the os.remove in test_support.unlink) and reran my test harness with test_zipfile... and it still failed because, of course, test_zipfile calls shutil.rmtree, which bypasses test_support.unlink altogether, etc.

At this point several things occur to me:

1) There's little point in targeting the 2.x tree, since 2.7 is due out any time now and is effectively end-of-line for 2.x, and this isn't a release blocker. Therefore whatever effort is brought to bear should be against the 3.x latest.

2) This is a repeatable but relatively minority case: it only applies to Windows boxes, and only to those where some behind-the-scenes process is holding this kind of lock on files for long enough to affect the tests. I'd certainly like to nail it, but...

3) The amount of work -- and intrusion in the tests -- is quite substantial. Just looking [*] for straight os.unlink instances, without even considering shutil use, gives 71 instances; os.remove gives another 90. That's even without the issues of the tests for shutil and os in any case. I'm willing to do the legwork of moving all the tests to use, e.g., support.unlink, support.rmtree and so on, but quis custodiet? Who'll test the tests?

    [*] grep os\.unlink *.py | wc -l
        grep os\.remove *.py | wc -l

4) All that said, the result should be a cleaner, more controllable test environment, at least for temp files. Another solution would be to rework the use of TESTFN on Windows to use a new temporary file every time. But that would be as much work as, and more than, the unlink/rmtree work above.

I'd like to hear opinions before I move forward with a non-trivial patch for this. For the sake of completeness, I attach a tiny test case which shows that the rename/remove approach should in fact work for the symptom we're seeing.

-- Added file: http://bugs.python.org/file16869/test-case.py ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue7443 ___

[attached test-case.py]

    import os, sys
    import traceback
    import win32file

    FILENAME = "test"

    def rename_and_remove(filename):
        os.rename(filename, filename + ".deleted")
        os.remove(filename + ".deleted")

    def remove_only(filename):
        os.remove(filename)

    def test(remove):
        open(FILENAME, "w").close()
        hFile = win32file.CreateFile(
            FILENAME,
            win32file.GENERIC_READ,
            win32file.FILE_SHARE_DELETE,
            None,
            win32file.OPEN_EXISTING,
            0,
            0
        )
        try:
            remove(FILENAME)
            try:
                open(FILENAME, "w").close()
            except IOError:
                print "FAIL"
            else:
                print "PASS"
        finally:
            hFile.Close()
        try:
            open(FILENAME, "w").close()
        except IOError:
            print "FAIL"
        else:
            print "PASS"

    if __name__ == '__main__':
        print
        print "Should see FAIL-PASS"
        test(remove_only)
        print
        print "Should see PASS-PASS"
        test(rename_and_remove)

___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue8369] Add a lint command to distutils
New submission from Éric Araujo mer...@netwok.org: Add a command to run lint tools such as Python -3, pep8, pyflakes, pychecker, pylint. I think this should not be a subcommand of test. The idea comes from buildutils (http://pypi.python.org/pypi/buildutils/). -- assignee: tarek components: Distutils2 messages: 102834 nosy: merwok, tarek severity: normal status: open title: Add a lint command to distutils type: feature request versions: Python 3.3 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8369 ___
[issue8369] Add a lint command to distutils
Éric Araujo mer...@netwok.org added the comment: Also, think about running reindent.py and lib2to3 fixers such as idioms or ws_comma. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8369 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue8241] py2_test_grammar.py contains invalid syntax for 2.6
Benjamin Peterson benja...@python.org added the comment: 2010/4/11 Martin v. Löwis rep...@bugs.python.org: Martin v. Löwis mar...@v.loewis.de added the comment: Benjamin, ISTM that the tests in lib2to3/tests/data/py2_test_grammar aren't run at all, as part of regrtest. If so, the entire file could be removed. True, but then it would become out of sync with the other branches. The tests aren't run in the trunk either, but we keep the file there. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8241 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue7443] test.support.unlink issue on Windows platform
Jason R. Coombs jar...@jaraco.com added the comment: When I was working on a routine to checkout/patch/build/test/cleanup Python (see https://svn.jaraco.com/jaraco/python/jaraco.develop, and particularly scripts/test-windows-symlink-patch.py), I ran into a similar problem during the cleanup step. I tried using shutil.rmtree to clean up the folder that was checked out, but I repeatedly got 'access denied' exceptions. I ended up working around the problem by using subprocess and cmd.exe's rmdir /s /q. I think this demonstrates three facets to this problem: 1) It doesn't just affect the test suite. It happens to other Python programs that are using shutil.rmtree (and possibly remove/unlink) to remove files that are in use. 2) It doesn't have to be that way. At the very least, there could (should?) be a function in Python that replicates 'rmdir /s /q', which is not subject to the 'access denied' error. 3) We could use subprocess and cmd.exe to perform the unlink and rmtree operations in the test suite to bypass the transient failures until Python supports the behavior natively. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue7443 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue8299] Improve GIL in 2.7
Kristján Valur Jónsson krist...@ccpgames.com added the comment: David, I don't necessarily think it is reasonable to yield every 100 opcodes, but that is the _intent_ of the current code base. Checkinterval is set to 100. If you don't want that, then set it higher. Your statement is like saying: "Why would you want to have your windows fit tightly, it sounds like a horrible thing for the air quality indoors" (I actually got this when living in Germany). The answer is, of course, that a snugly fitting window can still be opened if you want, but more importantly, you _can_ close it properly. And because the condition variable isn't strictly FIFO, it actually doesn't switch every time (an observation. The scheduler may decide to do its own things inside the condition variable / semaphore). What the ROUNDROBIN_GIL ensures, however, is that the condition variable is _entered_ every checkinterval. What I'm trying to demonstrate to you is the brokenness of the legacy GIL (as observed by Antoine long ago) and how it is not broken on windows. It is broken because the currently running thread is biased to reacquire the GIL immediately in an unpredictable fashion that is not being managed by the (OS) thread scheduler. Because it doesn't enter the condition variable wait when others are competing for it, the scheduler has no means of providing fairness to the application. So, to summarise this: I'm not proposing that we context switch every 100 opcodes, but I am proposing that we context switch consistently according to whatever checkinterval is put in place. Antoine, in case you misunderstood: I'm saying that the ROUNDROBIN_GIL and the Windows GIL are the same. If you don't believe me, take a look at the NonRecursiveLock implementation for windows. I'm also starting to think that you didn't actually bother to look at the patch. Please compare PyLock_gil_acquire() for LEGACY_GIL and ROUNDROBIN_GIL and see if you can spot the difference. Really, it's just two lines of code. 
Maybe it needs restating. The bug is this (python pseudocode):

with gil.cond:
    while gil.locked:  # this line is the bug
        gil.cond.wait()
    gil.locked = True

vs.

with gil.cond:
    if gil.n_waiting or gil.locked:
        gil.n_waiting += 1
        while True:
            gil.cond.wait()  # always wait at least once
            if not gil.locked:
                break
        gil.n_waiting -= 1
    gil.locked = True

The cond.wait() is where fairness ensues, where the OS can decide to serve threads roughly on a first come, first served basis. If you are biased towards not entering it at all (when yielding the GIL), then you have taken away the OS' chance of scheduling. Antoine (2): The need to have do_yield is a symptom of the brokenness of the GIL. You have a checkinterval of 100, which elapses some 1000 times per second, and yet you have to put in place special fudge code to ensure that we do get switches every few seconds? The whole point of the checkinterval is for you _not_ to have to dot the code with sleep() calls. Surely you don't expect the average application developer to do that if he wants his two cpu bound threads to compete fairly for the GIL? This is why I added the -y switch: to emulate normal application code. Also, the 0.7 imbalance observed in the SHA1 disappears on windows (and using ROUNDROBIN_GIL). It is not due to the windows scheduler, it is due to the broken legacy_gil. This last slew of comments has been about the ROUNDROBIN_GIL only. I haven't dazzled you yet with PRIORITY_GIL, but that solves both problems because it is _fair_, and it allows us to increase the checkinterval to 1, thus eliminating the rapid switching overhead, and yet gives fast response to IO. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8299 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue8368] Memory leak on multi-threaded PyObject_CallObject
Antoine Pitrou pit...@free.fr added the comment: Why do you think PyObject_CallObject() is the culprit? There could be any number of reasons for a memory leak: - cyclic references needing to be cleared (have you tried calling PyGC_Collect()?) - reference leak(s) in your own internal logic - inefficiencies in the Windows memory allocator which mean freed memory is not necessarily reclaimed PyObject_CallObject() itself is called with the GIL held, so the multithreaded nature of the program shouldn't be a factor. I would also suggest running more iterations, to see whether memory consumption reaches a stable state or grows endlessly. -- nosy: +pitrou ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8368 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
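Antoine's first bullet is easy to reproduce from pure Python: objects caught in a reference cycle stay alive until the cyclic collector runs, and gc.collect() is the Python-level counterpart of the PyGC_Collect() call he suggests. A minimal sketch (illustrative only, unrelated to the reporter's code):

```python
import gc

class Node:
    """A simple object that can take part in a reference cycle."""
    def __init__(self):
        self.other = None

# Build a two-object cycle, then drop every external reference.
a, b = Node(), Node()
a.other, b.other = b, a
del a, b

# The pair is now unreachable, but ordinary reference counting cannot
# free it; the cyclic collector can.  collect() returns the number of
# unreachable objects it found.
print(gc.collect() >= 2)  # True
```

If such cycles exist in an embedding application, memory will appear to grow until PyGC_Collect() (or an automatic collection) runs, which is why it is worth ruling out before blaming PyObject_CallObject().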
[issue8366] OS X universal builds fail on 2.7b1 and py3k with Don't know machine value for archs
Changes by R. David Murray rdmur...@bitdance.com: -- nosy: +haypo ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8366 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue8299] Improve GIL in 2.7
Antoine Pitrou pit...@free.fr added the comment: Antoine (2): The need to have do_yield is a symptom of the brokenness of the GIL. Of course it is. But the point of the benchmark is to give valid results even with the old broken GIL. I could remove do_yield and still have it give valid results, but that would mean running each step for 30 seconds instead of 2. I don't like having to wait several minutes for benchmark numbers :-) -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8299 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue8370] change module builtins to __builtin__ in __import__ documentation
New submission from Chris Jerdonek chris.jerdo...@gmail.com: The builtins module referenced in the Python 2.6 __import__ documentation does not seem to exist in Python 2.6: http://docs.python.org/library/functions.html#__import__ These should probably be changed to __builtin__: http://docs.python.org/library/__builtin__.html -- messages: 102841 nosy: cjerdonek severity: normal status: open title: change module builtins to __builtin__ in __import__ documentation type: behavior versions: Python 2.6 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8370 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue8370] change module builtins to __builtin__ in __import__ documentation
Changes by Chris Jerdonek chris.jerdo...@gmail.com: -- assignee: - georg.brandl components: +Documentation nosy: +georg.brandl ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8370 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue8370] change module builtins to __builtin__ in __import__ documentation
Éric Araujo mer...@netwok.org added the comment: You’re right. This module name has been fixed in the 3.x branch (it used a magic name without reason); when importlib was backported from 3.1 to 2.6, this change must have been overlooked. Are you willing to produce a patch? Regards -- nosy: +merwok ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8370 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue8370] change module builtins to __builtin__ in __import__ documentation
Chris Jerdonek chris.jerdo...@gmail.com added the comment: Replaced builtins with __builtin__. Also inserted a missing the. -- keywords: +patch Added file: http://bugs.python.org/file16870/_issue-8370-1.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8370 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue8370] change module builtins to __builtin__ in __import__ documentation
Chris Jerdonek chris.jerdo...@gmail.com added the comment: Thanks for the info and quick response. Then this should probably also be applied to trunk (Python 2.7). -- versions: +Python 2.7 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8370 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue8299] Improve GIL in 2.7
David Beazley d...@dabeaz.com added the comment: I'm sorry, I still don't get the supposed benefits of this round-robin patch over the legacy GIL. Given that using interpreter ticks as a basis for thread scheduling is problematic to begin with (mostly due to the fact that ticks have totally unpredictable execution times), I'd much rather see further GIL work continue to build upon the time-based scheduler that's been implemented in Python 3.2. For instance, I think being able to specify a thread-switching interval in seconds (sys.setswitchinterval) makes much more sense than continuing to fool around with check intervals and all of this tick business. The new GIL implementation is by no means perfect, but people are working on it. I'd much rather know if anything that you've worked out with this patch can be applied to that version of the GIL. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8299 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue8368] Memory leak on multi-threaded PyObject_CallObject
Krauzi krauzi_g...@yahoo.de added the comment: I think the PyObject_Call* is the problem, because when I comment it out, I no longer get leaks. The arguments are also correctly decremented, because I can also use NULL as the argument and I get the same memory leaks as before. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8368 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue8370] change module builtins to __builtin__ in __import__ documentation
Chris Jerdonek chris.jerdo...@gmail.com added the comment: Would it make sense to put a "New in version 3.1" at the top of this page: http://docs.python.org/py3k/library/builtins.html (perhaps also with a note explaining that the module replaces __builtin__)? I actually wasn't able to confirm when builtins was introduced by searching Google and What's New, etc. That's why I appreciated Éric's note. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8370 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue8370] change module builtins to __builtin__ in __import__ documentation
Éric Araujo mer...@netwok.org added the comment: (perhaps also with a note explaining that the module replaces __builtin__) People used to 2.x will know about the name change; people new to Python with 3.x (the happy ones!) will not need this information, except perhaps to understand outdated docs or snippets. Hm. A short note would be helpful, I agree. I actually wasn't able to confirm when builtins was introduced by searching Google and What's New: http://docs.python.org/py3k/whatsnew/3.0.html#library-changes Regards -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8370 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
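For anyone following along, the rename Éric describes is easy to verify in a 3.x interpreter; this small aside (not part of any proposed patch) shows that the built-in namespace now lives in builtins:

```python
# Python 3 exposes the built-in namespace as the "builtins" module,
# the renamed successor of 2.x's "__builtin__".
import builtins

# Built-in names are attributes of this module.
print(builtins.len is len)        # True
print(hasattr(builtins, "open"))  # True
```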
[issue8370] change module builtins to __builtin__ in __import__ documentation
Éric Araujo mer...@netwok.org added the comment: Your first patch seems good to me; wait for a core developer’s answer before taking time to add notes about renamed modules everywhere. Cheers -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8370 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue5315] signal handler never gets called
Antoine Pitrou pit...@free.fr added the comment: Thanks for the detailed analysis, Charles-François. Finally, I think that the documentation should be rephrased: Yes, I think so. Furthermore, under Linux 2.6 and NPTL, getpid() returns the main thread PID even from another thread. Yes, those threads belong to the same process. But as mentioned, signals are a rather fragile inter-process communication device; just use a specific file descriptor. And if you still wanna use signals, there's set_wakeup_fd(): http://docs.python.org/library/signal.html#signal.set_wakeup_fd -- assignee: - georg.brandl components: +Documentation nosy: +georg.brandl, pitrou, tim_one priority: - normal versions: +Python 2.6, Python 2.7, Python 3.1, Python 3.2 -Python 2.4, Python 2.5 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue5315 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
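Antoine's set_wakeup_fd() suggestion can be sketched in a few lines. This sketch assumes a POSIX platform and, for signal.raise_signal(), Python 3.8+; the socketpair is an illustrative choice of file descriptor, not the only possible one:

```python
import signal
import socket

# The write end must be non-blocking; the C-level signal handler will
# write one byte (the signal number) into it when a signal arrives.
r, w = socket.socketpair()
w.setblocking(False)
signal.set_wakeup_fd(w.fileno())

# A Python-level handler still has to be installed so the signal is not
# fatal; it can be a no-op, since the notification arrives via the fd.
signal.signal(signal.SIGUSR1, lambda signum, frame: None)

signal.raise_signal(signal.SIGUSR1)

# The read end can now be watched with select/poll like any other fd,
# which sidesteps the cross-thread signal delivery problems above.
print(r.recv(1)[0] == signal.SIGUSR1)  # True
```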
[issue6507] Enhance dis.dis to autocompile codestrings
Daniel Urban urban.dani...@gmail.com added the comment: I've made a patch, which adds a disassemble_str function to the dis module. The dis.dis function calls this function if x is a string. Added the following sentence to the documentation: "Strings are first compiled to code objects with the :func:`compile` built-in function." Added two simple unittests. -- keywords: +patch nosy: +durban Added file: http://bugs.python.org/file16871/issue6507.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue6507 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
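This behaviour did eventually land: in modern Python 3, dis.dis() accepts a source string and compiles it before disassembling, so the proposed interface can be exercised like this (shown with current dis, not the attached patch):

```python
import dis

# Passing a string makes dis compile it first, then disassemble the
# resulting code object -- no explicit compile() call needed.
dis.dis("x = a + b")
```

The output lists the bytecode for the expression, e.g. LOAD_NAME instructions for a and b (the exact opcode names vary between Python versions).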
[issue8108] test_ftplib fails with OpenSSL 0.9.8m
Darryl Miles darryl.mi...@darrylmiles.org added the comment: I am unable to get make test to run from an unpatched version in SVN (details below of make output). Please find attached an updated patch for your consideration (and testing, as I can't test it due to 'make test' SIGSEGV on CentOS 5.4 i386). Patch Notes: 1) Something that concerns me: the unwrap() philosophy looks to be used to remove SSL from the Python high-level socket handle, so you can go back to plaintext mode. You can ONLY perform an unwrap() AFTER an SSL_shutdown()==1 has been observed (you need to wait for the other end to do something voluntarily). So you must retry the SSL_shutdown() over and over while you sleep-wait for IO; this is akin to calling ssl.shutdown(ssl.SSL_SHUTDOWN_MODE_BOTH) and getting back success. Also, if it is your intention to properly implement an unwrap() like this, you should disable IO read-ahead mode before calling shutdown for the second time: SSL_set_read_ahead(ssl, 0). This stops OpenSSL from accidentally eating too many bytes (probably from the kernel into its own buffers) from the inbound IO stream, which may not be SSL protocol data; it may be plain text data (behind the last byte of SSL protocol data). 2) Due to the IO waiting it looks also necessary to copy the setup of SSL_set_nbio() from the read/write paths so that check_socket_and_wait_for_timeout() works in sympathy with the caller's IO timeout reconfiguration. 3) My patch presumes the allocation of the struct PySSLObject type uses calloc() or some other memory zeroing strategy. There is a new member in that struct to track if SSL_shutdown() has previously returned a zero value. 4) The SSL_peek() error path needs checking to see if the error return is consistent with the Python paradigm. 5) Please check I have understood the VARARGS method correctly. 
I have made the default SSL_SHUTDOWN_MODE_SENT (despite backward compatibility being SSL_SHUTDOWN_MODE_ONCE); this is because I would guess that most high-level applications did not intend to use it in raw mode, nor be bothered with the issues surrounding correct usage. I would guess high-level applications wanted Python to take the strain here. 6) I suspect you need to address your unwrap() policy a little better; the shutdown operation and the unwrap() are two different matters. The shutdown() should indicate success or not (in respect of the mode being requested; raw mode is a tricky one, as the caller would want the exact error return so it can do the correct thing), while unwrap() should itself call ssl.shutdown(ssl.SSL_SHUTDOWN_MODE_BOTH) until it sees success and then remove the socket (and deallocate SSL objects). As things stand, SSL_SHUTDOWN_MODE_ONCE does not work in a useful way since the error returns are not propagated to the caller, because unwrap is mixed into this. So that would still need fixing. Building works OK; testing fails with SIGSEGV. Is this something to do with not having _bsddb built? I have db-4.3 working. Maybe someone can reply by email on the matter.

# make
running build
running build_ext
building dbm using gdbm
Python build finished, but the necessary bits to build these modules were not found: bsddb185 sunaudiodev
To find the necessary bits, look in setup.py in detect_modules() for the module's name.
running build_scripts

# make test
running build
running build_ext
building dbm using gdbm
Python build finished, but the necessary bits to build these modules were not found: bsddb185 sunaudiodev
To find the necessary bits, look in setup.py in detect_modules() for the module's name.
running build_scripts find ./Lib -name '*.py[co]' -print | xargs rm -f ./python -Wd -3 -E -tt ./Lib/test/regrtest.py -l == CPython 2.7a4+ (trunk:79902M, Apr 11 2010, 16:38:55) [GCC 4.1.2 20080704 (Red Hat 4.1.2-46)] == Linux-2.6.18-164.15.1.el5-i686-with-redhat-5.4-Final == /root/python-svn/build/test_python_29248 test_grammar test_opcodes test_dict test_builtin test_exceptions test_types test_unittest test_doctest test_doctest2 test_MimeWriter test_SimpleHTTPServer test_StringIO test___all__ /root/python-svn/Lib/test/test___all__.py:10: DeprecationWarning: in 3.x, the bsddb module has been removed; please use the pybsddb project instead import bsddb /root/python-svn/Lib/bsddb/__init__.py:67: PendingDeprecationWarning: The CObject type is marked Pending Deprecation in Python 2.7. Please use capsule objects instead. import _bsddb make: *** [test] Segmentation fault -- Added file: http://bugs.python.org/file16872/Modules__ssl.c.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8108 ___ ___ Python-bugs-list mailing list Unsubscribe:
[issue8371] Add a command to download distributions
New submission from Éric Araujo mer...@netwok.org: Distutils2 should have a command responsible for downloading distributions. This would factor it out of other code in one clear location and allow users to download for later installation. If setup.cfg files grow options for extras, test-requires, build-requires and such specific kinds of dependencies, matching options would appear on the download (or get) command. Side note: Is it okay to post this as a bug or should I rather mail distutils-sig first? Or mail them now? Regards -- assignee: tarek components: Distutils2 messages: 102853 nosy: merwok, tarek severity: normal status: open title: Add a command to download distributions type: feature request versions: Python 3.3 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8371 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue8366] OS X universal builds fail on 2.7b1 and py3k with Don't know machine value for archs
Roumen Petrov bugtr...@roumenpetrov.info added the comment: It seems to me that the improvement for CFLAGS shows a bug in the python build system - quote from configure:

BASECFLAGS="${UNIVERSAL_ARCH_FLAGS} -isysroot ${UNIVERSALSDK} ${BASECFLAGS}"
tgt=`sw_vers -productVersion | sed 's/\(10\.[[0-9]]*\).*/\1/'`
if test "${UNIVERSALSDK}" != "/" -a "${tgt}" '' '10.4' ; then
    CFLAGS="${UNIVERSAL_ARCH_FLAGS} -isysroot ${UNIVERSALSDK} ${CFLAGS}"
    CPPFLAGS="-isysroot ${UNIVERSALSDK}"
fi

No idea why the script sets CFLAGS and CPPFLAGS under a specific condition - maybe some test cases from configure will fail without those flags set. The assignment to CPPFLAGS ignores user settings - this is not acceptable. Let's see the report from Ned Deily ("might look like this"): the -isysroot is added twice. -- nosy: +rpetrov ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8366 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue8368] Memory leak on multi-threaded PyObject_CallObject
Krauzi krauzi_g...@yahoo.de added the comment: Okay, updated the code. Please use this: http://paste.pocoo.org/show/200484/ Smaller code, but the problem is still the same. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8368 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue8368] Memory leak on multi-threaded PyObject_CallObject
Changes by Krauzi krauzi_g...@yahoo.de: Removed file: http://bugs.python.org/file16868/Python Bug.zip ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8368 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue8368] Memory leak on multi-threaded PyObject_CallObject
Martin v. Löwis mar...@v.loewis.de added the comment: I've ported the program to Linux (see attached tar file). I cannot observe any memory leak here - even if I let the program run for a long time (linking with Python 2.6). Memory usage in top goes up and down, but never over some upper limit. The only changes to the source that I made are these: - remove the pause calls - run the thread creation in an infinite loop - join the threads I notice that the Win32 version also doesn't join the threads. Notice that this is a memory leak in itself. -- nosy: +loewis Added file: http://bugs.python.org/file16873/issue8368.tgz ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8368 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue8108] test_ftplib fails with OpenSSL 0.9.8m
Antoine Pitrou pit...@free.fr added the comment: To just run the test_ftplib tests, you can use: $ ./python -m test.regrtest -v -uall test_ftplib (Other tests involving SSL sockets are test_ssl, test_smtpnet, test_imaplib and test_poplib) 1) Some thing that concern me, the unwrap() philosophy looks to be used to remove SSL from the Python high-level socket handle, so you can go back to plaintext mode. You can ONLY perform an unwrap() AFTER an SSL_shutdown()==1 has been observed (you need to wait for the other end to do something voluntarily). When the SSL shutdown fails, an exception is raised, which means the rest of the unwrapping (at the Python high-level socket level) doesn't occur. Therefore, it is safe to call unwrap() again from user code because the SSL object is still there. Also if it is your intention to properly implement an unwrap() like this you should disable IO read-ahead mode before calling shutdown for the second time, SSL_set_read_ahead(ssl, 0). This stops OpenSSL from eating too many bytes accidentally (probably from the kernel into its own buffers), from the inbound IO stream, which may not be SSL protocol data, it maybe plain text data (behind the last byte of SSL protocol data). Do you know how to cook a simple test to exercise this? 2) Due to the IO waiting it looks also necessary to copy the setup of SSL_set_nbio() from the read/write paths so the check_socket_and_wait_for_timeout() works in sympathy to the callers IO timeout reconfiguration. Thanks for spotting this. 5) Please check I have understand the VARARGS method correctly. I have made the default to SSL_SHUTDOWN_MODE_SENT (despite backward compatibly being SSL_SHUTDOWN_MODE_ONCE), this is because I would guess that most high-level applications did not intend to use it in raw mode; nor be bothered with the issues surrounding correct usage. I would guess high-level applications wanted Python to take the strain here. Yes, sounds right indeed. 
I'm not sure we need a choice of shutdown modes at all. building works ok, testing fails with SIGSEGV. Is this something to do with no having _bsddb built ? I have db-4.3 working. Maybe someone can reply by email on the matter. _bsddb seems to be built, it's the old bsddb185 which isn't. The module apparently breaks when importing it, can you open a separate issue for it? I'd like Bill Janssen's opinion on these proposed changes. Bill, can you take a look? -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8108 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue8366] OS X universal builds fail on 2.7b1 and py3k with Don't know machine value for archs
Ned Deily n...@acm.org added the comment: Setting CPPFLAGS to include the SDK is needed to make sure some of the autoconf tests work correctly by using the SDK's header files rather than those from /. But, you're right, it shouldn't throw away other CPPFLAGS settings. Plus that whole test there is suspect: it probably shouldn't be testing the build system version there. There are various ways to handle this and opportunities to simplify the OS X SDK processing; plus all of the trees (trunk, py3k, 26, 31) should work the same way. Let's see what Ronald prefers. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8366 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue6507] Enhance dis.dis to autocompile codestrings
Raymond Hettinger rhettin...@users.sourceforge.net added the comment: +1 -- nosy: +rhettinger ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue6507 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue7443] test.support.unlink issue on Windows platform
Martin v. Löwis mar...@v.loewis.de added the comment: 1. I agree that we should fix the unlinking problem on Windows. I also think that such a fix should be independent of the test suite - many people run into failed unlink problems. 2. Tim already said it, but I repeat: the common theory is that the culprit for this kind of problem is software like virus checkers, desktop search spiders, Tortoise, ... 3. I'm not convinced that rmdir /s /q *really* solves the problem reliably. Because it's a timing issue, it may be that the additional startup cost to invoke rmdir was enough to let the virus scanner win the race, so that rmdir actually had no problems with removing the file. 4. I believe the official party line for removing files on Windows is this: if DeleteFile fails, move the file to the trash bin (of the disk), and use NtSetInformationFile to set the delete disposition for the file. See cygwin's unlink_nt for an elaborate implementation of unlinking: http://tinyurl.com/y7w6rrj -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue7443 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
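The rename-before-delete idea that both Tim's patch and point 4 gesture at can be sketched in a few lines; robust_unlink is a hypothetical helper name, and the ".deleted" suffix is an arbitrary choice:

```python
import os

def robust_unlink(path):
    # Rename first: on Windows, a scanner holding the file open with
    # FILE_SHARE_DELETE blocks re-creation of the *name* until the
    # delete completes, but it does not block a rename -- so moving
    # the file aside frees the original name immediately.
    doomed = path + ".deleted"
    os.rename(path, doomed)
    # The remove may be delayed by an open handle, but only the
    # renamed file lingers; the original name is already reusable.
    os.remove(doomed)

open("victim.tmp", "w").close()
robust_unlink("victim.tmp")
print(os.path.exists("victim.tmp"))  # False
```

On POSIX this is a no-op complication, which is why a real helper would likely special-case Windows.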
[issue8372] socket: Buffer overrun while reading unterminated AF_UNIX addresses
New submission from David Watson bai...@users.sourceforge.net: The makesockaddr() function in the socket module assumes that AF_UNIX addresses have a null-terminated sun_path, but Linux actually allows unterminated addresses using all 108 bytes of sun_path (for normal filesystem sockets, that is, not just abstract addresses). When receiving such an address (e.g. in accept() from a connecting peer), makesockaddr() will run past the end and return extraneous bytes from the stack, or fail because they can't be decoded, or perhaps segfault in extreme cases. This can't currently be tested from within Python as Python also refuses to accept address arguments which would fill the whole of sun_path, but the attached linux-pass-unterminated.diff (for 2.x and 3.x) enables them for Linux. With the patch applied:

Python 2.7a4+ (trunk, Apr 8 2010, 18:20:28)
[GCC 4.2.4 (Ubuntu 4.2.4-1ubuntu4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import socket
>>> s = socket.socket(socket.AF_UNIX)
>>> s.bind('a' * 108)
>>> s.getsockname()
'\xfa\xbf\xa8)\xfa\xbf\xec\x15\n\x08l\xaaY\xb7\xb8CZ\xb7'
>>> len(_)
126

Also attached are some unit tests for use with the above patch, a couple of C programs for checking OS behaviour (you can also see the bug by doing accept() in Python and using the bindconn program), and patches aimed at fixing the problem. Firstly, the return-unterminated-* patches make makesockaddr() scan sun_path for the first null byte as before (if it's not a Linux abstract address), but now stop at the end of the structure as indicated by the addrlen argument. However, there's one more catch before this will work on Linux, which is that Linux system calls return the length of the address they *would* have stored in the structure had there been room for it, which in this case is one byte longer than the official size of a sockaddr_un structure, due to the missing null terminator. 
The addrlen-* patches handle this by always calling makesockaddr() with the actual buffer size if it is less than the returned length. This silently ignores any truncation, but I'm not sure how to do anything sensible about that, and some operating systems (e.g. FreeBSD) just silently truncate the address anyway and don't return the original length (POSIX doesn't make clear which, if either, behaviour is required). Once these patches are applied, the tests pass.

There is one other issue: the patches for 3.x retain the assumption that socket paths are in UTF-8, but they should actually be handled according to PEP 383. I've got a patch for that, but I'll open a separate issue for it since the handling of the Linux abstract namespace isn't documented and there's some slightly unobvious behaviour that people might be depending on.

-- components: Extension Modules files: linux-pass-unterminated.diff keywords: patch messages: 102861 nosy: baikie severity: normal status: open title: socket: Buffer overrun while reading unterminated AF_UNIX addresses type: behavior versions: Python 2.5, Python 2.6, Python 2.7, Python 3.1, Python 3.2, Python 3.3 Added file: http://bugs.python.org/file16874/linux-pass-unterminated.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8372 ___
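The bounded scan described above (stop at the first null byte, but never past addrlen) can be sketched as a small C helper. This is an illustration, not the actual patch; the helper name `sun_path_len` and the clamping of over-long kernel-reported lengths are my own choices.

```c
#include <stddef.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>

/* Hypothetical helper: length of sun_path, stopping at the first null
 * byte or at the end of the address as reported by addrlen, whichever
 * comes first. */
static size_t
sun_path_len(const struct sockaddr_un *addr, socklen_t addrlen)
{
    size_t maxlen;
    const char *end;

    if ((size_t)addrlen <= offsetof(struct sockaddr_un, sun_path))
        return 0;                         /* no path bytes at all */
    maxlen = (size_t)addrlen - offsetof(struct sockaddr_un, sun_path);
    if (maxlen > sizeof(addr->sun_path))
        maxlen = sizeof(addr->sun_path);  /* kernel may report one byte more */
    end = memchr(addr->sun_path, '\0', maxlen);
    return end ? (size_t)(end - addr->sun_path) : maxlen;
}
```

With this, an unterminated 108-byte path yields exactly 108, and the Linux quirk of reporting sizeof(struct sockaddr_un) + 1 is absorbed by the clamp rather than causing a read past the structure.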
[issue8211] configure: ignore AC_PROG_CC hardcoded CFLAGS
Marc-Andre Lemburg m...@egenix.com added the comment: Victor, could you please fix the patch or revert it? Thanks. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8211 ___
[issue8372] socket: Buffer overrun while reading unterminated AF_UNIX addresses
Changes by David Watson bai...@users.sourceforge.net: Added file: http://bugs.python.org/file16875/return-unterminated-2.x.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8372 ___
[issue8372] socket: Buffer overrun while reading unterminated AF_UNIX addresses
Changes by David Watson bai...@users.sourceforge.net: Added file: http://bugs.python.org/file16876/return-unterminated-3.x.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8372 ___
[issue7443] test.support.unlink issue on Windows platform
Tim Golden m...@timgolden.me.uk added the comment: I'm afraid that the problem doesn't lie in the unlink: DeleteFile succeeds. The problem is that the file is only marked for delete until such time as the last SHARE_DELETE handle on it is closed. Until that time, an attempt to (re)create the file for anything other than SHARE_DELETE will fail. As you say, it's a timing issue. Making os.unlink on Windows more robust may be a good idea, but it's not going to help this issue. See my test-case.py in an earlier message for reproduction: http://bugs.python.org/file16869 TJG -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue7443 ___
[issue7433] MemoryView memory_getbuf causes segfaults, double call to tp_releasebuffer
Changes by Meador Inge mead...@gmail.com: -- nosy: +minge ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue7433 ___
[issue8372] socket: Buffer overrun while reading unterminated AF_UNIX addresses
Changes by David Watson bai...@users.sourceforge.net: Added file: http://bugs.python.org/file16877/addrlen-2.x.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8372 ___
[issue8372] socket: Buffer overrun while reading unterminated AF_UNIX addresses
Changes by David Watson bai...@users.sourceforge.net: Added file: http://bugs.python.org/file16878/addrlen-3.x.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8372 ___
[issue8211] configure: ignore AC_PROG_CC hardcoded CFLAGS
Marc-Andre Lemburg m...@egenix.com added the comment: Reopening the ticket: it shouldn't have been closed. I'm also making this a release blocker, since this needs to be fixed before the next release - the CC variable has to be initialized by the build system, and breaking this in general for all default builds just to get a debug build without optimizations is not warranted. -- priority: -> release blocker status: closed -> open ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8211 ___
[issue8211] configure: ignore AC_PROG_CC hardcoded CFLAGS
Changes by Marc-Andre Lemburg m...@egenix.com: -- resolution: fixed - ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8211 ___
[issue8372] socket: Buffer overrun while reading unterminated AF_UNIX addresses
Changes by David Watson bai...@users.sourceforge.net: Added file: http://bugs.python.org/file16879/test-2.x.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8372 ___
[issue8372] socket: Buffer overrun while reading unterminated AF_UNIX addresses
Changes by David Watson bai...@users.sourceforge.net: Added file: http://bugs.python.org/file16880/test-3.x.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8372 ___
[issue8373] socket: AF_UNIX socket paths not handled according to PEP 383
New submission from David Watson bai...@users.sourceforge.net: In 3.x, the socket module assumes that AF_UNIX addresses use UTF-8 encoding - this means, for example, that accept() will raise UnicodeDecodeError if the peer socket path is not valid UTF-8, which could crash an unwary server.

Python 3.1.2 (r312:79147, Mar 23 2010, 19:02:21)
[GCC 4.2.4 (Ubuntu 4.2.4-1ubuntu4)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> from socket import *
>>> s = socket(AF_UNIX, SOCK_STREAM)
>>> s.bind(b"\xff")
>>> s.getsockname()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
UnicodeDecodeError: 'utf8' codec can't decode byte 0xff in position 0: unexpected code byte

I'm attaching a patch to handle socket paths according to PEP 383. Normally this would use PyUnicode_FSConverter, but there are a couple of ways in which the address handling currently differs from normal filename handling. One is that embedded null bytes are passed through to the system instead of being rejected, which is needed for the Linux abstract namespace. These abstract addresses are returned as bytes objects, but they can currently be specified as strings with embedded null characters as well. The patch preserves this behaviour. The current code also accepts read-only buffer objects (it uses the s# format), so in order to accept these as well as bytearray filenames (which the posix module accepts), the patch simply accepts any single-segment buffer, read-only or not. This patch applies on top of the patches I submitted for issue #8372 (rather than knowingly running past the end of sun_path).
-- components: Extension Modules files: af_unix-pep383.diff keywords: patch messages: 102865 nosy: baikie severity: normal status: open title: socket: AF_UNIX socket paths not handled according to PEP 383 type: behavior versions: Python 3.1, Python 3.2, Python 3.3 Added file: http://bugs.python.org/file16881/af_unix-pep383.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8373 ___
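The PEP 383 behaviour the patch aims for can be illustrated with the codec-level "surrogateescape" error handler (a general sketch, not the socket-module code itself): arbitrary bytes round-trip through str without raising UnicodeDecodeError, unlike the strict decode that crashed above.

```python
# Assumed illustration of PEP 383: undecodable bytes become lone
# surrogates on decode, and encode back to the original bytes exactly.
raw = b"\xff\xfeabc"          # like the b"\xff" socket path bound above

text = raw.decode("utf-8", "surrogateescape")   # no UnicodeDecodeError
assert text.startswith("\udcff")                # byte 0xff escaped as U+DCFF

# Round trip: the same handler restores the original bytes.
assert text.encode("utf-8", "surrogateescape") == raw
print(repr(text))
```

This is why a PEP-383-aware getsockname() can hand a non-UTF-8 peer path to Python code without losing information.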
[issue8373] socket: AF_UNIX socket paths not handled according to PEP 383
David Watson bai...@users.sourceforge.net added the comment: This patch does the same thing without fixing issue #8372 (not that I'd recommend that, but it may be easier to review). -- Added file: http://bugs.python.org/file16882/af_unix-pep383-no-8372-fix.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8373 ___
[issue8373] socket: AF_UNIX socket paths not handled according to PEP 383
Changes by David Watson bai...@users.sourceforge.net: Added file: http://bugs.python.org/file16883/test-existing.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8373 ___
[issue8373] socket: AF_UNIX socket paths not handled according to PEP 383
Changes by David Watson bai...@users.sourceforge.net: Added file: http://bugs.python.org/file16884/test-new.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8373 ___
[issue8373] socket: AF_UNIX socket paths not handled according to PEP 383
Changes by David Watson bai...@users.sourceforge.net: Added file: http://bugs.python.org/file16885/af_unix-pep383-doc.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8373 ___
[issue8108] test_ftplib fails with OpenSSL 0.9.8m
Darryl Miles darryl.mi...@darrylmiles.org added the comment: To explain why you need 2 modes: a client and a server would expect to do the following pseudo actions for maximum efficiency. A client would:

set_socket_timeout(600_SECONDS)  # or useful default
send_data_over_ssl("QUIT\r\n")
shutdown(SSL_SHUTDOWN_MODE_SENT)
flush_data_down_to_socket()  # maybe automatic/implied (OpenSSL users with custom BIO layers should be aware of this step)
shutdown(socket, SHUT_WR)  # this is optional, TCP socket level shutdown
recv_data_over_ssl() => "250 Bye bye!\r\n"  # this will take time to arrive
set_socket_io_timeout(5_SECONDS)
shutdown(SSL_SHUTDOWN_MODE_BOTH)  # this is optional! some clients may choose to skip it entirely
close()/unwrap()

A server would:

recv_data_over_ssl() => "QUIT\r\n"  # would be sitting idle waiting for this command
send_data_over_ssl("250 Bye bye!\r\n")
shutdown(SSL_SHUTDOWN_MODE_SENT)
flush_data_down_to_socket()  # maybe automatic/implied (OpenSSL users with custom BIO layers should be aware of this step)
shutdown(socket, SHUT_WR)  # this is optional, TCP socket level shutdown
set_socket_io_timeout(30_SECONDS)
shutdown(SSL_SHUTDOWN_MODE_BOTH)  # a good server would implement this step
close()/unwrap()

Now if your outbound data is CORKed and flushed, the flush points would cause all the SSL data from both the 'last sent data' and the 'send shutdown notify' to go out in the same TCP segment and arrive at the other end more or less together. Doing any of the above in a different order introduces some kind of inefficiency. shutdown(fd, SHUT_WR) is often used at the socket level to help manage TIME_WAIT. The client has to wait for the QUIT response message anyway. With the above sequence there is no additional time delay or cost with both parties performing an SSL protocol shutdown at the same time, despite the IO timeouts existing (to provide a safety net).
If the client is talking to a buggy server, the worst case scenario is that it receives the quit response but the server never does an SSL shutdown and never closes the socket connection. In this situation the client will have to wait for the IO timeout; some clients in other software use blocking sockets without a timeout, so they end up hung (forever). -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8108 ___
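The TCP-level shutdown(socket, SHUT_WR) step in the sequences above (independent of the SSL layer) can be demonstrated with a plain socketpair: after the half-close the peer reads EOF, yet can still send its final reply back. A minimal sketch:

```python
import socket

# Plain-socket demonstration of a half-close (no SSL involved).
client, server = socket.socketpair()

client.sendall(b"QUIT\r\n")
client.shutdown(socket.SHUT_WR)       # no more data will flow client -> server

assert server.recv(1024) == b"QUIT\r\n"
assert server.recv(1024) == b""       # EOF: the client's write side is closed
server.sendall(b"250 Bye bye!\r\n")   # the reverse direction still works

assert client.recv(1024) == b"250 Bye bye!\r\n"
client.close()
server.close()
```

This is exactly the property the pseudo-sequence relies on: the client can half-close and still wait for "250 Bye bye!" to arrive.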
[issue8211] configure: ignore AC_PROG_CC hardcoded CFLAGS
Ned Deily n...@acm.org added the comment: To be totally fair, it is likely that part of the OS X breakage was caused by the original code inadvertently working around the original CFLAGS misbehavior. From an OS X perspective, it may be best to just fix the new issue and move on. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8211 ___
[issue8372] socket: Buffer overrun while reading unterminated AF_UNIX addresses
Changes by Antoine Pitrou pit...@free.fr: -- nosy: +haypo, loewis priority: -> high stage: -> patch review versions: -Python 2.5, Python 3.3 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8372 ___
[issue8373] socket: AF_UNIX socket paths not handled according to PEP 383
Changes by Antoine Pitrou pit...@free.fr: -- nosy: +haypo, loewis priority: -> normal stage: -> patch review versions: -Python 3.3 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8373 ___
[issue7443] test.support.unlink issue on Windows platform
Martin v. Löwis mar...@v.loewis.de added the comment:

> I'm afraid that the problem doesn't lie in the unlink: DeleteFile succeeds. The problem is that the file is only marked for delete until such time as the last SHARE_DELETE handle on it is closed.

Then we shouldn't use DeleteFile in the first place to delete the file, but instead CreateFile, with DELETE access (and FILE_SHARE_DELETE sharing). If that fails, we need to move the file to the bin (see unlink_nt for details).

> Making os.unlink on Windows more robust may be a good idea, but it's not going to help this issue. See my test-case.py on an earlier message for reproduction:

It certainly will help this case also. It would detect that the file is still open, and move it into the trash bin. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue7443 ___
[issue8368] Memory leak on multi-threaded PyObject_CallObject
Antoine Pitrou pit...@free.fr added the comment: Thanks Martin. I see no leak here either (Linux with Python 2.6 and 2.7). -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8368 ___
[issue8368] Memory leak on multi-threaded PyObject_CallObject
Krauzi krauzi_g...@yahoo.de added the comment: Oh no, then it's a Windows bug. Now I understand why many devs use Linux instead of Windows... -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8368 ___
[issue8253] add a resource+files section in setup.cfg
Éric Araujo mer...@netwok.org added the comment: I’ve read some distutils-sig threads about this. Do you still want to write a PEP for it before implementation? -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8253 ___
[issue7443] test.support.unlink issue on Windows platform
Tim Golden m...@timgolden.me.uk added the comment:

> Then we shouldn't use DeleteFile in the first place to delete the file, but instead CreateFile, with DELETE access (and FILE_SHARE_DELETE sharing). If that fails, we need to move the file to the bin (see unlink_nt for details).

I see what you're getting at. I'm getting to the end of my day here, but I'll try to put a patch together for posixmodule.c when I can, if no-one else gets there first. Would you agree that py3k is the only target branch worth aiming for? -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue7443 ___
[issue2987] RFC2732 support for urlparse (e.g. http://[::1]:80/)
Tony Locke tlo...@tlocke.org.uk added the comment: I've created a patch for parse.py against the py3k branch, and I've also included ndim's test cases in that patch file. When returning the host name of an IPv6 literal, I don't include the surrounding '[' and ']'. For example, parsing http://[::1]:5432/foo/ gives the host name '::1'. -- nosy: +tlocke versions: +Python 3.2 Added file: http://bugs.python.org/file16886/parse.py.patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2987 ___
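For reference, RFC 2732 bracket handling did eventually land in the standard library; in current Python the behaviour matches what the patch describes, and can be checked directly (using the example URL from above):

```python
from urllib.parse import urlsplit

# hostname drops the RFC 2732 brackets; netloc keeps the literal form.
parts = urlsplit("http://[::1]:5432/foo/")
assert parts.hostname == "::1"
assert parts.port == 5432
assert parts.netloc == "[::1]:5432"
print(parts.hostname, parts.port)
```

This matches the "delimiters are not part of components" reading discussed below in the thread.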
[issue2054] add ftp-tls support to ftplib - RFC 4217
Giampaolo Rodola' billiej...@users.sourceforge.net added the comment: Thinking back about this, I wonder whether FTPS would be a better name to use instead of FTP_TLS. It's shorter, easier to remember, and also makes more sense since SSL can be used as well, not only TLS. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2054 ___
[issue2987] RFC2732 support for urlparse (e.g. http://[::1]:80/)
Éric Araujo mer...@netwok.org added the comment: Seems sensible: delimiters are not part of components. -- nosy: +merwok ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2987 ___
[issue2054] add ftp-tls support to ftplib - RFC 4217
Éric Araujo mer...@netwok.org added the comment: It doesn’t look like a constant, either. httplib.Client, ftplib.Client and ftplib.SecureClient would be much more descriptive than httplib.HTTP and ftplib.FTP. Any interest in adding aliases? Regards -- nosy: +merwok ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2054 ___
[issue2054] add ftp-tls support to ftplib - RFC 4217
Antoine Pitrou pit...@free.fr added the comment:

> Thinking back about this, I wonder whether FTPS could be a better name to use instead of FTP_TLS. It's shorter, easier to remember, and also makes more sense since also SSL can be used, not only TLS.

What do you mean by "also SSL can be used"? Wikipedia has an interesting article about the subject: http://en.wikipedia.org/wiki/FTPS Secured FTP with explicit negotiation (what we are doing) is sometimes called FTPES (that's how it's named in FileZilla, indeed). FTPS is more often used to describe secured FTP with implicit negotiation, i.e. the SSL session is established before the FTP protocol even kicks in. I think FTP_TLS is a fine name. Perhaps we can simply make the above distinction clearer in the docs. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2054 ___
[issue8368] Memory leak on multi-threaded PyObject_CallObject
Martin v. Löwis mar...@v.loewis.de added the comment: I can't reproduce the problem on Windows, either. The attached project runs the thread creation in a loop (leaving the 3s sleep in the code). I see (in process viewer) that the Commit Size varies between 13MB and 14MB; there is no indication of a leak. -- resolution: -> invalid status: open -> closed Added file: http://bugs.python.org/file16887/Python Bug.zip ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8368 ___
[issue7443] test.support.unlink issue on Windows platform
Martin v. Löwis mar...@v.loewis.de added the comment:

> Would you agree that py3k is the only target branch worth aiming for?

Most certainly, yes. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue7443 ___
[issue2987] RFC2732 support for urlparse (e.g. http://[::1]:80/)
Antoine Pitrou pit...@free.fr added the comment: I think parsing should be a bit more careful. For example, what happens when you give 'http://dead:beef::]/foo/' as input (note the missing opening bracket)? -- nosy: +pitrou ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2987 ___
[issue2987] RFC2732 support for urlparse (e.g. http://[::1]:80/)
Antoine Pitrou pit...@free.fr added the comment: By the way, updating the RFC list as done in python-urlparse-rfc2732-rfc-list.patch is also a good idea. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2987 ___
[issue8368] Memory leak on multi-threaded PyObject_CallObject
Krauzi krauzi_g...@yahoo.de added the comment: Ah no, you misunderstood me. Please add a system("pause") at the beginning of the while(1) loop and one at the end. Then compare the memory usage of the program at the beginning (let's say it's 2.6 MB) with the usage at the second pause. In my case it's 3.9 MB at this point. THIS is what I mean by the leak. On my computer, after about 3-35000 calls I pass the 4 MB border. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8368 ___
[issue2987] RFC2732 support for urlparse (e.g. http://
Éric Araujo mer...@netwok.org added the comment: Isn’t “http://dead:beef::]/foo/” an invalid URI? Regarding docs, see also #5650. -- title: RFC2732 support for urlparse (e.g. http://[::1]:80/) -> RFC2732 support for urlparse (e.g. http:// ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2987 ___
[issue5650] Obsolete RFCs should be removed from doc of urllib.urlparse
Éric Araujo mer...@netwok.org added the comment: See also #2987 -- title: Obsolete RFC's should be removed from doc of urllib.urlparse -> Obsolete RFCs should be removed from doc of urllib.urlparse ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue5650 ___
[issue2987] RFC2732 support for urlparse (IPv6 addresses)
Antoine Pitrou pit...@free.fr added the comment:

> Isn’t “http://dead:beef::]/foo/” an invalid URI?

That's the point: it shouldn't parse as a valid one, IMO. -- title: RFC2732 support for urlparse (e.g. http:// -> RFC2732 support for urlparse (IPv6 addresses) ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue2987 ___
[issue7301] Add environment variable $PYTHONWARNINGS
Stefan Krah stefan-use...@bytereef.org added the comment: The changes in main.c in r79881 don't look right: strtok() is used on the string returned by getenv(), which must not be modified. Also (and this is admittedly cosmetic), perhaps use a static buffer like wchar_t warning[128], or use a single allocation before the for loop. What is the maximum length of a single warning? -- nosy: +skrah ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue7301 ___
[issue7301] Add environment variable $PYTHONWARNINGS
Philip Jenvey pjen...@underboss.org added the comment: The pending patch for py3k fixes the modification of the env value (trunk already has a fix for that). That patch is also doing the conversion to wchar_t via the char2wchar function now; with that, reusing a single buffer seems out of the question. -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue7301 ___
[issue8070] Infinite loop in PyRun_InteractiveLoopFlags() if PyRun_InteractiveOneFlags() raises an error
STINNER Victor victor.stin...@haypocalc.com added the comment: I don't have time to write a better patch, please improve mine :-) -- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8070 ___
[issue7946] Convoy effect with I/O bound threads and New GIL
Charles-Francois Natali neolo...@free.fr added the comment: A couple of remarks on the BFS-based patch:
- nothing guarantees that you'll get a msec resolution
- gettimeofday() returns wall clock time: if a process that modifies time is running, e.g. ntpd, you're likely to run into trouble. The value returned is _not_ monotonic, but clock_gettime(CLOCK_MONOTONIC) is
- inline functions are used, but that's not ANSI
- in

static inline long double get_timestamp(void) {
    struct timeval tv;
    GETTIMEOFDAY(&tv);
    return (long double) tv.tv_sec + tv.tv_usec * 0.01;
}

the product is computed as a double, and then promoted to (long double)
- the code uses a lot of floating point calculation, which is slower than integer arithmetic

Otherwise:

> You know, I almost wonder whether this whole issue could be fixed by just adding a user-callable function to optionally set a thread priority number. For example: sys.setpriority(n) Modify the new GIL code so that it checks the priority of the currently running thread against the priority of the thread that wants the GIL. If the running thread has lower priority, it immediately drops the GIL.

The problem with this type of fixed priority is starvation. And it shouldn't be up to the user to set the priorities. And some threads can mix I/O and CPU intensive tasks.

> It's a dual-core Linux x86-64 system. But, looking at the patch again, the reason is obvious:
> #define CHECK_SLICE_DEPLETION(tstate) (bfs_check_depleted || (tstate->tick_counter % 1000 == 0))
> `tstate->tick_counter % 1000` is replicating the behaviour of the old GIL, which based its speculative operation on the number of elapsed opcodes (and which also gave bad latency numbers on the regex workload).

I find this suspicious too. I haven't looked at the patch in detail, but what does the number of elapsed opcodes offer you over the timeslice expiration approach?
-- nosy: +neologix ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue7946 ___
[issue8374] Some locales are unsupported
New submission from Luke Jennings ubuntujenk...@googlemail.com: In the locale module there are some locales that are not supported; the ones I am aware of are nl_AW, sr_RS and sr_ME. This came up in a project that captures screenshots in different languages, where we have to retrieve the language code. Related to the origin of the bug: https://bugs.edge.launchpad.net/quickshot/+bug/554861 . If any more information is required, please let me know. -- components: Extension Modules messages: 102891 nosy: ubuntujenkins severity: normal status: open title: Some locales are unsupported type: behavior versions: Python 2.6 ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8374 ___
[issue8374] Some locales are unsupported
Éric Araujo mer...@netwok.org added the comment: Hello. Not a locale expert here, but this module relies on the underlying libc locale support: do other programs work correctly with these locales? Apart from that, your program needs to catch and handle exceptions anyway. Martin, I’m making you nosy, since you’re listed as the locale area expert in the maintainers file. Hope it’s okay to do so. Regards -- components: +Library (Lib) -Extension Modules nosy: +loewis, merwok ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue8374 ___
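Since locale.setlocale() just forwards the name to the C library, whether nl_AW or sr_RS works depends on which locales are generated on the host. The defensive pattern suggested above might look like this (try_locale is a hypothetical helper, not part of the stdlib):

```python
import locale

def try_locale(name):
    """Return True if the C library accepts this locale name."""
    try:
        locale.setlocale(locale.LC_ALL, name)
        return True
    except locale.Error:                        # the OS has no such locale
        return False
    finally:
        locale.setlocale(locale.LC_ALL, "C")    # restore a known state

print(try_locale("C"))           # the "C" locale exists everywhere
print(try_locale("xx_XX.bogus")) # an ungenerated name simply returns False
```

A program like the screenshot tool described above could probe each locale this way and skip the unsupported ones instead of crashing.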