Re: [Python-Dev] Replacing PyWin32's PeekNamedPipe, ReadFile, and WriteFile
On Thu, 23 Jul 2009 14:21:38 +0200, Christian Heimes li...@cheimes.de wrote:
>Michael Foord wrote:
>>A big advantage of using ctypes is that it works cross-implementation -
>>on IronPython and PyPy already, and on Jython soon. I'd like to see more
>>standard library modules use it. Distributions that choose not to
>>include it are crippling their Python distribution.
>
>Interesting, I didn't know that IronPython supports ctypes, too. I still
>find ctypes a bit problematic because it doesn't use header files for its
>types, structs and function definitions.

This is indeed a big problem with ctypes. Fortunately, a project exists to
correct it:

  http://pypi.python.org/pypi/ctypes_configure/0.1

Anyone writing code with ctypes should be looking at ctypes_configure as
well.

Jean-Paul
___
Python-Dev mailing list
Python-Dev@python.org
http://mail.python.org/mailman/listinfo/python-dev
Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
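The style of call being discussed can be sketched with a trivial, portable ctypes binding (the function and library here are just an illustration, not the pipe APIs from the subject line). Note that the argument and result types must be declared by hand, since ctypes cannot read header files - exactly the weakness ctypes_configure addresses:

```python
import ctypes
import ctypes.util

# Bind a C library function with no compiled extension module involved.
libc = ctypes.CDLL(ctypes.util.find_library("c"))

# ctypes cannot read string.h, so the prototype is declared manually --
# this is the "no header files" problem mentioned above.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

print(libc.strlen(b"hello"))  # 5
```

The same few lines work unchanged on any implementation that ships ctypes, which is the cross-implementation advantage Michael Foord describes.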
Re: [Python-Dev] Replacing PyWin32's PeekNamedPipe, ReadFile, and WriteFile
On Thu, 23 Jul 2009 14:23:56 +0200, Christian Heimes li...@cheimes.de wrote:
>Nick Coghlan wrote:
>>I see ctypes as largely useful when you want to call a native DLL but
>>don't have any existing infrastructure for accessing native code from
>>your project. A few lines of ctypes code is then a much better solution
>>than adding a C or C++ compilation dependency just to access a couple of
>>functions. Of course, that definitely isn't the case for CPython - we
>>not only have plenty of existing C infrastructure, but in the specific
>>case of subprocess on Windows we already have a dedicated extension
>>module (PC/_subprocess.c).
>
>You've hit the nail on the head! That's it.

True, CPython has C infrastructure. What about the other Python runtimes,
though? At the language summit, there was a lot of discussion (and, I
thought, agreement) about moving the standard library to be a
collaborative project between several of the major runtime projects. A
ctypes-based solution seems more aligned with this goal than more C code.

Jean-Paul
Re: [Python-Dev] pthread sem PyThread_acquire_lock
On Thu, 2 Jul 2009 15:47:48 -0700, Gregory P. Smith g...@krypto.org wrote:
>On Mon, Jun 29, 2009 at 2:28 PM, Martin v. Löwis mar...@v.loewis.de wrote:
>>>AFAIK, ignoring EINTR doesn't preclude the calling of signal handlers.
>>
>>This is my understanding as well - so I don't think Python actually
>>swallows the signal. A great example is reading from a socket. Whether
>>or not it can be interrupted depends on the platform, so catching
>>Ctrl+C often requires a timeout loop. Also, remember that signals are
>>asynchronous in the sense that they are handled outside the normal
>>execution flow of a program. Checking for EINTR probably isn't the best
>>way to determine if a signal has been sent to the program. I think it
>>would be reasonable to support asynchronous exceptions, and Python
>>supports SIGINT fairly well most of the time. It might be possible to
>>support keyboard interrupts throughout the system, but changing Python
>>to do so could also cause incompatibilities. So any change must be done
>>with greatest care, but simultaneously, should also try to arrange to
>>cover all cases.
>>
>>Regards,
>>Martin
>
>If you want signals to actually be handled in a timely manner, it's best
>to leave the main thread of the program alone as a signal handling
>thread that just spends its time in a loop of time.sleep(123) calls
>rather than blocking on any sort of lock. Spawn other threads to do the
>actual work in your program. Signals are delivered indirectly in the
>existing CPython implementation by setting an internal flag that the
>main interpreter thread polls on occasion, so blocking calls made from
>the main thread that do not interrupt and return early will effectively
>block signals.

Yes, this is all true now. The question is why the implementation works
that way, and whether it is desirable to keep it working that way.

Considering that *some* of the lock implementations make themselves not
interruptible by threads while others don't bother, it seems like *some*
change to the status quo is desirable.

Jean-Paul
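The EINTR behavior Martin describes follows the classic retry pattern: a blocking call returns early with EINTR when a signal arrives, the signal handler runs, and the caller restarts the call. A minimal sketch of the pattern (an illustration, not the interpreter's internals - and note that since PEP 475, Python 3.5+ performs this retry automatically):

```python
import errno
import os

def read_retrying(fd, size):
    # Retry os.read() whenever a signal interrupts it with EINTR.
    while True:
        try:
            return os.read(fd, size)
        except OSError as e:
            if e.errno != errno.EINTR:
                raise
            # A signal interrupted the call; its Python-level handler has
            # already run by this point, so just restart the read.
```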
Re: [Python-Dev] python sendmsg()/recvmsg() implementation
On Tue, 09 Jun 2009 16:46:54 +0200, Kálmán Gergely kalman.gerg...@duodecad.hu wrote:
>Hello, my name is Greg. I've just started using python after many years
>of C programming, and I'm also new to the list. I wanted to clarify this
>first, so that maybe I will get a little less beating for my stupidity :)

Welcome!

>[snip]
>
>Browsing the net I've found a patch to the python core
>(http://bugs.python.org/issue1194378), dated 2005. First of all, I would
>like to ask you guys, whether you know of any way of doing this FD
>passing magic, or that you know of any 3rd party module / patch /
>anything that can do this for me.

Aside from the patch in the tracker, there are several implementations of
these APIs as third-party extension modules.

>Since I'm fairly familiar with C and (not that much, but I feel the
>power) python, I would take the challenge of writing it, given that the
>above code is still somewhat usable. If all else fails I would like to
>have your help to guide me through this process.

What would be great is if you could take the patch in the tracker and get
it into shape so that it is suitable for inclusion. This would involve
three things, I think:

  1. Write unit tests for the functionality (since the patch itself
     provides none)
  2. Update the patch so that it again applies cleanly against trunk
  3. Add documentation for the new APIs

Once this is done, you can get a committer to look at it and either
provide more specific feedback or apply it.

Thanks,
Jean-Paul
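For readers wondering what the "FD passing magic" looks like: the sendmsg()/recvmsg() APIs discussed in that tracker issue eventually landed in the stdlib in Python 3.3. A sketch of passing a file descriptor over a Unix-domain socket with an SCM_RIGHTS ancillary message:

```python
import array
import os
import socket

def send_fd(sock, fd):
    # One byte of real payload is required to carry the ancillary data;
    # SCM_RIGHTS makes the kernel duplicate fd into the receiving process.
    sock.sendmsg([b"F"], [(socket.SOL_SOCKET, socket.SCM_RIGHTS,
                           array.array("i", [fd]))])

def recv_fd(sock):
    fds = array.array("i")
    # Receive the one payload byte plus room for one 'int' of ancillary data.
    msg, ancdata, flags, addr = sock.recvmsg(1, socket.CMSG_LEN(fds.itemsize))
    for level, ctype, data in ancdata:
        if level == socket.SOL_SOCKET and ctype == socket.SCM_RIGHTS:
            fds.frombytes(data[:fds.itemsize])
    return fds[0]
```

The received descriptor refers to the same open file description as the sender's, so reads and writes are shared between the two processes.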
Re: [Python-Dev] Issues with Py3.1's new ipaddr
On Tue, 02 Jun 2009 19:34:11 +0200, "Martin v. Löwis" mar...@v.loewis.de wrote:
>[snip]
>
>>You seem comfortable with these quirks, but then you're not planning to
>>write software with this library. Developers who do intend to write
>>meaningful network applications do seem concerned, yet we're ignored.
>
>I don't hear a public outcry - only a single complainer.

Clay repeatedly pointed out that other people have objected to ipaddr and
been ignored. It's really, really disappointing to see you continue to
ignore not only them, but the repeated attempts Clay has made to point
them out. I don't have time to argue this issue, but I agree with
essentially everything Clay has said in this thread, and I commented
about these problems on the ticket months ago, before ipaddr was added.

Jean-Paul
Re: [Python-Dev] setuptools has divided the Python community
On Wed, 25 Mar 2009 11:34:43 -0400, Tres Seaver tsea...@palladion.com wrote:
>Antoine Pitrou wrote:
>>Tarek Ziadé ziade.tarek at gmail.com writes:
>>>But I agree that the sizes of the packages are too small now, and it
>>>has gone too far. Installing a web app like Plone is scary (+100
>>>packages)
>>
>>I am working on a TurboGears2-based app and I just did a count of the
>>.egg packages in the virtualenv. There are 45 of them. People should
>>really stop splitting their work into micro-libraries (with such
>>ludicrous names as AddOns or Extremes, I might add (*)), and myriads of
>>separately-packaged plugins (the repoze stuff). The Twisted approach
>>is much saner, where you have a cohesive whole in a single package.
>
>Monoliths have downsides: consider the fact that the WSGI-compliant
>HTTP server for Twisted languished for *years* outside the released
>versions of Twisted: IIRC, the server was released as a separate
>distribution, but it was not compatible with the released versions of
>the main Twisted distribution: you had to install a snapshot / alpha of
>Twisted to get the 'web2' server to work.

Maybe monoliths have downsides, but please pick a different example to
support this. The manner in which the WSGI server in Twisted Web2 was
made available has very little to do with large vs small packages, and
much to do with our (the Twisted developers') decision /not/ to release
Twisted Web2 at all. I could go into lots more detail about that
decision, but I don't think any of it would be relevant to the topic at
hand. If anything, Twisted's example shows how monolithic packages are
easier all-around than micro-packages.

We basically have the release infrastructure to release Twisted in many
smaller pieces, and we even do - but we only make releases of all the
smaller pieces simultaneously, we encourage people to use the Twisted
package which includes all the pieces, and we don't do any testing of
mixed versions of the smaller pieces because it would be very difficult.
Further, we *have* done point releases of /all/ of Twisted to supply
some critical bug fix (these generally take the form of an X.Y.1
release; we rarely go to .2 or higher these days). And we've done these
quite rapidly when the need arises - the monolithic nature of Twisted
isn't really a hindrance here (updating our website, a manual process,
is far more time consuming than actually doing the release - and a big
part of that cost comes from the fact that we have web pages for each
smaller piece, even though we don't encourage people to use these!).

So, as long as we're just talking about the vagaries of monolithic vs
micro, I'll weigh in in favor of monolithic (but I'm open to discussion
about specific issues, which are much more interesting than
abstractions).

Jean-Paul
Re: [Python-Dev] Test failures under Windows?
On Tue, 24 Mar 2009 13:49:28 +0000 (UTC), Antoine Pitrou solip...@pitrou.net wrote:
>Hello,
>
>[snip]
>
>By the way, what happened to the Windows buildbots?

It looks like some of them are suffering from problems which I think are
common with buildbot on Windows - primarily difficulty dealing with
runaway processes or timeouts. Perhaps BuildBot/Windows improvements
would make a good GSoC project? :)

Jean-Paul
Re: [Python-Dev] asyncore fixes in Python 2.6 broke Zope's version of medusa
On Wed, 4 Mar 2009 10:21:26 -0800, Guido van Rossum gu...@python.org wrote:
>On Wed, Mar 4, 2009 at 10:14 AM, Sidnei da Silva
>sidnei.da.si...@gmail.com wrote:
>>On Wed, Mar 4, 2009 at 3:04 PM, Guido van Rossum gu...@python.org wrote:
>>>Sounds like it's not so much the code that's future proof but the
>>>process used for evolving it. That seems to be missing for asyncore. :-(
>>
>>Turning the issue around a bit, has anyone considered polishing up the
>>current fix to restore its backwards compatibility, instead of starting
>>a discussion about a full-blown replacement? I think that would be
>>enough for most asyncore users (or even the couple affected) for the
>>moment, and then we can think about a possible future replacement.
>
>If it can be done while maintaining backwards compatibility with both
>the 2.6 version and the pre-2.6 version, that would be great of course!
>But can it?

Is it really necessary to retain compatibility with the Python 2.6
version? Python 2.6.0 and Python 2.6.1 contain a regression (as compared
to basically all previous versions of Python) which prevents
asyncore-based programs which are years old from working on them.
Restoring the pre-2.6 behavior will fix these old, presumably stable,
widely used programs for users who install 2.6.2 and newer.

The downside (which you were imagining, I'm sure) is that any new
software developed against the Python 2.6.0 or 2.6.1 behavior will then
break in 2.6.2 and later. While this is unfortunate, it is clearly the
far lesser of two evils.

The choice must be made, though. Either leave old software broken or
break new software. Just because the "leave old software broken" choice
is made through inaction doesn't make it the better choice (though
obviously since it requires action, someone will have to do it, and I'm
not volunteering - if inaction is the choice because no one wants to do
the work, fine, but that's a different motivation than avoiding breaking
newly written software).

So, as a disinterested party in this specific case, I'd say revert to
the pre-2.6 behavior. It does less harm than leaving the current
behavior.

Jean-Paul
Re: [Python-Dev] asyncore fixes in Python 2.6 broke Zope's version of medusa
On Wed, 4 Mar 2009 10:46:28 -0800, Guido van Rossum gu...@python.org wrote:
>On Wed, Mar 4, 2009 at 10:27 AM, Jean-Paul Calderone
>exar...@divmod.com wrote:
>>[snip]
>>
>>So, as a disinterested party in this specific case, I'd say revert to
>>the pre-2.6 behavior. It does less harm than leaving the current
>>behavior.
>
>Sorry, but I really do think that we should maintain backward
>compatibility *within* the 2.6 series as well. If that makes it
>impossible to also maintain the 2.5 behavior, perhaps some flag could
>be added to restore 2.5 compatibility, e.g.
>
>    import asyncore
>    asyncore.python_25_compat = True
>
>Note that this API is designed to work in 2.5 as well. :-)

But why? The argument I made had the objective of minimizing developer
effort. What's the objective of maintaining backward compatibility
within the 2.6 series in this case (sorry if it appeared earlier in this
thread and I missed it)?

Jean-Paul
Re: [Python-Dev] asyncore fixes in Python 2.6 broke Zope's version of medusa
On Wed, 4 Mar 2009 10:54:19 -0800, Guido van Rossum gu...@python.org wrote:
>On Wed, Mar 4, 2009 at 10:51 AM, Jean-Paul Calderone
>exar...@divmod.com wrote:
>>[snip]
>>
>>But why? The argument I made had the objective of minimizing developer
>>effort. What's the objective of maintaining backward compatibility
>>within the 2.6 series in this case (sorry if it appeared earlier in
>>this thread and I missed it)?
>
>The same as always. We don't change APIs in bugfix releases.

Okay. Thanks for explaining.

Jean-Paul
Re: [Python-Dev] Choosing a best practice solution for Python/extension modules
On Fri, 20 Feb 2009 13:45:26 -0800, Brett Cannon br...@python.org wrote:
>On Fri, Feb 20, 2009 at 12:53, Aahz a...@pythoncraft.com wrote:
>>On Fri, Feb 20, 2009, Brett Cannon wrote:
>>>On Fri, Feb 20, 2009 at 12:37, Brett Cannon br...@python.org wrote:
>>>>On Fri, Feb 20, 2009 at 12:31, Daniel Stutzbach
>>>>dan...@stutzbachenterprises.com wrote:
>>>>>A slight change would make it work for modules where only key
>>>>>functions have been rewritten. For example, pickle.py could read:
>>>>>
>>>>>    from _pypickle import *
>>>>>    try:
>>>>>        from _pickle import *
>>>>>    except ImportError:
>>>>>        pass
>>>>
>>>>True, although that still suffers from the problem of overwriting
>>>>things like __name__, __file__, etc.
>>>
>>>Actually, I take that back; the IMPORT_STAR opcode doesn't pull in
>>>anything starting with an underscore. So while this alleviates the
>>>worry above, it does mean that anything that gets rewritten needs to
>>>have a name that does not lead with an underscore for this to work.
>>>Is that really an acceptable compromise for a simple solution like
>>>this?
>>
>>Doesn't __all__ control this?
>
>If you define it, yes. But there is another issue with this: the pure
>Python code will never call the extension code, because the globals
>will be bound to _pypickle and not _pickle. So if you have something
>like::
>
>    # _pypickle
>    def A():
>        return _B()
>
>    def _B():
>        return -13
>
>    # _pickle
>    def _B():
>        return 42
>
>    # pickle
>    from _pypickle import *
>    try:
>        from _pickle import *
>    except ImportError:
>        pass
>
>If you import pickle and call pickle.A() you will get -13, which is not
>what you are after.

If pickle and _pypickle are both Python modules, and _pypickle.A is
intended to be used all the time, regardless of whether _pickle is
available, then there's not really any reason to implement A in
_pypickle. Just implement it in pickle. Then import whatever optionally
fast thing it depends on from _pickle, if possible, and fall back to the
less fast thing in _pypickle otherwise. This is really the same as any
other high-level/low-level library split.

It doesn't matter that in this case, one low-level implementation is
provided as an extension module. Importing the low-level APIs from
another module and then using them to implement high-level APIs is a
pretty common, simple, well-understood technique which is quite
applicable here.

Jean-Paul
Re: [Python-Dev] Choosing a best practice solution for Python/extension modules
On Sat, 21 Feb 2009 11:07:07 -0800, Brett Cannon br...@python.org wrote:
>On Sat, Feb 21, 2009 at 09:17, Jean-Paul Calderone exar...@divmod.com wrote:
>>[snip]
>>
>>If pickle and _pypickle are both Python modules, and _pypickle.A is
>>intended to be used all the time, regardless of whether _pickle is
>>available, then there's not really any reason to implement A in
>>_pypickle. Just implement it in pickle. Then import whatever optionally
>>fast thing it depends on from _pickle, if possible, and fall back to
>>the less fast thing in _pypickle otherwise. This is really the same as
>>any other high-level/low-level library split. It doesn't matter that in
>>this case, one low-level implementation is provided as an extension
>>module. Importing the low-level APIs from another module and then using
>>them to implement high-level APIs is a pretty common, simple,
>>well-understood technique which is quite applicable here.
>
>But that doesn't provide a clear way, short of screwing with
>sys.modules, to get at just the pure Python implementation for testing
>when the extensions are also present. The key point in trying to figure
>this out is to facilitate testing, since the standard library already
>uses the import * trick in a couple of places.

"Screwing with sys.modules" isn't a goal. It's a means of achieving a
goal, and not a particularly good one.

I guess I overedited my message, sorry about that. Originally I included
an example of how to parameterize the high-level API to make it easier
to test (or use) with any implementation one wants. It went something
like this:

    try:
        import _pickle as _lowlevel
    except ImportError:
        import _pypickle as _lowlevel

    class Pickler:
        def __init__(self, implementation=None):
            if implementation is None:
                implementation = _lowlevel
            self.dump = implementation.dump
            self.load = implementation.load
            ...

Perhaps this isn't /exactly/ how pickle wants to work - I haven't looked
at how the C extension and the Python code fit together - but the
general idea should apply regardless of those details.

Jean-Paul
Re: [Python-Dev] Warnings
On Thu, 5 Feb 2009 08:35:30 -0800, Raymond Hettinger pyt...@rcn.com wrote:
>    >>> import os
>    >>> os.tmpnam()
>    RuntimeWarning: tmpnam is a potential security risk to your program

This warning is a reflection of the fact that (at least) the glibc
authors think you shouldn't be using tmpnam(3). If you compile a C
program that uses it, you'll see a warning about it. Since you can write
a Python program that uses tmpnam(3) without ever compiling such a C
program, you get a RuntimeWarning instead. It's not quite analogous,
since you don't get the warning from the C program every time you run
it, but it's about as close as you can do in Python without resorting to
crazy tricks.

>Are these runtime warnings necessary? Suppressing these warnings is a
>pita for one-off uses of os.tmpnam() or os.tempnam().

Why are you using them? Why not just use one of the many, many, many
other APIs for generating temporary files that Python exposes? One of
the ones that doesn't emit any warnings?

>I would hate for this sort of thing to propagate throughout the
>standard library. Some folks think eval() should never be used, and the
>same for input(). Some folks think md5 should be removed. Some folks
>think pickles are the ultimate security threat. IMO, it is enough to
>note potential vulnerabilities in the docs. Even then, I'm not too keen
>on the docs being filled with lots of red-outlined pink-boxed warning
>signs, effectively communicating that Python itself is dangerous and
>unreliable.

I agree. The best thing to do would be to deprecate the Python wrappers
around insecure C functions and then remove them after a couple of
releases. It's not as though these functions fill a critical niche - the
tempfile module and even os.tmpfile are more than enough. Why does
Python offer this attractive nuisance?

Jean-Paul
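The safe alternative alluded to above is the tempfile module: it creates and opens the file in one atomic step, so there is no window in which another process can claim the predicted name - the race that makes tmpnam() unsafe. A minimal example:

```python
import os
import tempfile

# NamedTemporaryFile creates the file and hands back an open file object
# atomically; no other process can race on the name the way it can with
# tmpnam()'s predict-then-open pattern.
with tempfile.NamedTemporaryFile(prefix="scratch-") as f:
    f.write(b"temporary data")
    f.flush()
    print(os.path.exists(f.name))  # True while the file is open
# The file is removed automatically when the with-block exits.
```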
Re: [Python-Dev] Missing operator.call
On Wed, 4 Feb 2009 10:50:47 -0800, Brett Cannon br...@python.org wrote:
>On Wed, Feb 4, 2009 at 10:43, Steven Bethard steven.beth...@gmail.com wrote:
>>[snip]
>>
>>Not sure I follow you here. It's not the __init__ that allows you to do
>>``x()``, it's the fact that the class declares a __call__, right?
>>
>>    >>> class C(object):
>>    ...     pass
>>    ...
>>    >>> C.__call__()
>>    <__main__.C object at 0x01A3C370>
>>    >>> C()
>>    <__main__.C object at 0x02622EB0>
>>    >>> str.__call__()
>>    ''
>>    >>> str()
>>    ''
>
>I don't think so::
>
>    >>> Foo.__call__
>    <method-wrapper '__call__' of type object at 0x81cee0c>
>    >>> Foo.__call__ = lambda: None
>    >>> Foo.__call__
>    <unbound method Foo.<lambda>>
>    >>> Foo()
>    <__main__.Foo object at 0xf7f90e8c>

That's because the __call__ special on an instance is ignored, as many
specials on new-style instances are ignored. If you change the method
where it counts - on type(Foo) in this case - then you see something
different:

    >>> class X(type):
    ...     def __call__(self, *a, **kw):
    ...         print 'X.__call__', a, kw
    ...         return super(X, self).__call__(*a, **kw)
    ...
    >>> class Y(object):
    ...     __metaclass__ = X
    ...
    >>> Y.__call__
    <bound method X.__call__ of <class '__main__.Y'>>
    >>> Y()
    X.__call__ () {}
    <__main__.Y object at 0xb7d0706c>
    >>> Y.__call__ = lambda: None
    >>> Y.__call__
    <unbound method Y.<lambda>>
    >>> Y()
    X.__call__ () {}
    <__main__.Y object at 0xb7d0706c>
    >>> X.__call__ = lambda: None
    >>> Y()
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: <lambda>() takes no arguments (1 given)

As far as I know, Steven Bethard's point is correct.

Jean-Paul
Re: [Python-Dev] Summary of Python tracker Issues
On Fri, 30 Jan 2009 18:06:48 +0100 (CET), Python tracker sta...@bugs.python.org wrote:
>[snip]
>
>Average duration of open issues: 697 days.
>Median duration of open issues: 6 days.

It seems there's a bug in the summary tool. I thought it odd a few weeks
ago when I noticed the median duration of open issues was one day. I
just went back and checked, and the week before that it was 2759 days.
Perhaps there is some sort of overflow problem when computing this
value?

Jean-Paul
Re: [Python-Dev] Python 3.0.1 (io-in-c)
On Wed, 28 Jan 2009 18:52:41 +0000, Paul Moore p.f.mo...@gmail.com wrote:
>2009/1/28 Martin v. Löwis mar...@v.loewis.de:
>>Well, first try to understand what the error *is*:
>>
>>    py> unicodedata.name('\u0153')
>>    'LATIN SMALL LIGATURE OE'
>>    py> unicodedata.name('£')
>>    'POUND SIGN'
>>    py> ascii('£')
>>    '\\xa3'
>>    py> ascii('£'.encode('cp850').decode('cp1252'))
>>    '\\u0153'
>>
>>So when Python reads the file, it uses cp1252. This is sensible - just
>>that the console uses cp850 doesn't change the fact that the common
>>encoding of files on your system is cp1252. It is an unfortunate fact
>>of Windows that the console window uses a different encoding from the
>>rest of the system (namely, the console uses the OEM code page, and
>>everything else uses the ANSI code page).
>
>Ah, I see. That is not entirely obvious. The key bit of information is
>that the default io encoding is cp1252, not cp850. I know that in
>theory, I see the consequences often enough (:-)), but it isn't
>instinctive for me. And the simple "default encoding is system
>dependent" comment is not very helpful in terms of warning me that
>there could be an issue.

It probably didn't help that the exception raised told you that the
error was in the "charmap" codec. This should have said "cp850" instead.
The fact that cp850 is implemented in terms of charmap isn't very
interesting. The fact that the failure happened while encoding some text
using cp850 is.

Jean-Paul
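Martin's diagnostic can be reproduced directly on any platform: bytes written as cp850 but decoded as cp1252 turn '£' (byte 0x9C in cp850) into 'œ' (U+0153 in cp1252), which is exactly the OE-ligature in the session above:

```python
# Re-encode '£' through the two Windows code pages in question to show
# how the mojibake arises.
mangled = "£".encode("cp850").decode("cp1252")
print(mangled)         # œ
print(ascii(mangled))  # '\u0153'
```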
Re: [Python-Dev] [Python-checkins] r68547 - in python/trunk/Lib/test: test_datetime.py test_os.py
On Mon, 12 Jan 2009 19:09:28 +0100 (CET), kristjan.jonsson python-check...@python.org wrote:
>Author: kristjan.jonsson
>Date: Mon Jan 12 19:09:27 2009
>New Revision: 68547
>
>Log:
>Add tests for invalid format specifiers in strftime, and for handling
>of invalid file descriptors in the os module.
>
>Modified:
>   python/trunk/Lib/test/test_datetime.py
>   python/trunk/Lib/test/test_os.py

Several of the tests added to test_os.py are invalid and fail.

Jean-Paul
Re: [Python-Dev] RELEASED Python 3.0 final
On Thu, 4 Dec 2008 22:05:05 -0800, Guido van Rossum [EMAIL PROTECTED] wrote:
>On Thu, Dec 4, 2008 at 9:40 PM, [EMAIL PROTECTED] wrote:
>>The default case, the case of the user without the wherewithal to
>>understand the nuances of the distinction between 2.x and 3.x, is a
>>user who should use 2.x.
>
>Not at all clear. If they're not sensitive to those nuances it's just
>as likely that they're a casual developer (e.g. a student just learning
>to program). Such users are unlikely to start using major 3rd party
>packages like Twisted or Django, which would be completely overwhelming
>to someone just learning.

That seems like it would be right to me, but two or three times a month
someone shows up in the Twisted IRC channel who is learning both Python
and Twisted at the same time. So apparently there are a lot of people
for whom this isn't overwhelming.

Jean-Paul
Re: [Python-Dev] RELEASED Python 3.0 final
On Thu, 4 Dec 2008 20:20:34 +0000, Paul Moore [EMAIL PROTECTED] wrote:
>2008/12/4 Barry Warsaw [EMAIL PROTECTED]:
>>[snip]
>
>One thing I'd like to see more clearly stated is that there's no reason
>NOT to use Python 3.0 for new code. I don't think that message has
>really come across yet - in spite of the warnings being all about
>compatibility issues, no-one has stressed the simple point that if your
>code is new, it doesn't have compatibility concerns!

New code that wouldn't be more easily written with a dependency on a
library that hasn't been ported, you mean. Although beyond that, there
may be reasons (for example, the significant performance degradation in
the I/O library currently being discussed on python-list).

Jean-Paul
Re: [Python-Dev] My patches
On Thu, 30 Oct 2008 17:17:02 -0400, A.M. Kuchling [EMAIL PROTECTED] wrote:
>[snip]
>
>On some of my issues (esp. ones relating to curses and mailbox.py), I
>feel paralyzed because problems are occurring on platforms I don't have
>access to (e.g. FreeBSD). The buildbots will report problems, but then
>you have to debug them by committing changes, triggering a build, and
>observing the results. And all of these actions will send e-mail to
>python-checkins. (Imagine if every 'print reached here!' you added
>while debugging was e-mailed to everyone...)

I do that when I need to. People whose lives would be ruined by the
receipt of such an email always have the option of leaving the checkins
list.

However, there is a buildbot feature named "try" which lets you submit a
patch (subject to authentication) and performs a build with the patch
applied. This lets you try lots of little changes without getting your
VCS involved. It needs to be enabled in the buildmaster configuration,
and credentials must be created for any user who will be given access.

Jean-Paul
Re: [Python-Dev] Filename as byte string in python 2.6 or 3.0?
On Mon, 29 Sep 2008 14:34:07 +0200, Ulrich Eckhardt [EMAIL PROTECTED] wrote: On Monday 29 September 2008, [EMAIL PROTECTED] wrote: Also, what about MacOS X? AFAIK, OS X guarantees UTF-8 for filesystem encodings. So the OS also provides Unicode filenames and how it deals with broken or legacy media is left up to the OS. Read Jack Jansen's recent email about NFC vs NFD. Jean-Paul
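The NFC vs NFD point can be illustrated in a few lines (a sketch of the general normalization issue, not Jack Jansen's email): OS X stores filenames in a decomposed form, so a name created with a precomposed accent may not compare equal to what the filesystem hands back.

```python
import unicodedata

# "café" in NFC uses one code point for the é; NFD splits it into 'e'
# plus a combining accent.  Both render identically on screen, but they
# compare unequal as plain strings.
nfc = "caf\u00e9"
nfd = unicodedata.normalize("NFD", nfc)
print(nfc == nfd)                                  # the two forms differ
print(unicodedata.normalize("NFC", nfd) == nfc)    # normalizing reconciles them
```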
Re: [Python-Dev] What this code should do?
On Fri, 19 Sep 2008 18:26:05 +0200, Amaury Forgeot d'Arc [EMAIL PROTECTED] wrote: Hello Maciej, Maciej Fijalkowski wrote: Hello, I'm a little clueless about the exact semantics of the following snippets: http://paste.pocoo.org/show/85698/ is this fine? or shall I file a bug? (the reason to ask is because a) django is relying on this b) pypy implements it differently) Note that python 3.0 has a different behaviour; in the first sample, it prints: A <class 'NameError'>, ... B <class 'ZeroDivisionError'>, ... See the subtle differences between http://docs.python.org/dev/library/sys.html#sys.exc_info http://docs.python.org/dev/3.0/library/sys.html#sys.exc_info The second example changes its behavior, too. It gives back the NameError from the exc_info call. I'm having a hard time reconciling this with the Python 3.0 documentation. Can you shed some light? Jean-Paul
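The pastebin link above has long since expired, but the behaviour in question can be reconstructed. A minimal sketch (my own, not the original snippet) of how Python 3 restores the outer exception state when a nested handler exits:

```python
import sys

def demo():
    try:
        raise NameError("A")
    except NameError:
        try:
            1 / 0
        except ZeroDivisionError:
            pass
        # In Python 3, leaving the inner handler restores the outer
        # exception state, so exc_info() reports the NameError again
        # here rather than the ZeroDivisionError.
        return sys.exc_info()[0]

print(demo().__name__)  # NameError
```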
Re: [Python-Dev] ssl module, non-blocking sockets and asyncore integration
On Wed, 17 Sep 2008 10:40:01 PDT, Bill Janssen [EMAIL PROTECTED] wrote: Ah, now I remember. It seems that sometimes when SSL_ERROR_WANT_READ was returned, things would block; that is, the handle_read method on asyncore.dispatcher was never called again, so the SSLSocket.recv() method was never re-called. There are several levels of buffering going on, and I never figured out just why that was. This (very rare) re-call of read is to handle that. You certainly do need to call read again if OpenSSL fails an SSL_read with a want-read error, but in asyncore, you don't want to do it right away, you want to wait until the socket becomes readable again, otherwise you *do* block waiting for bytes from the network. See the SSL support in Twisted for an example of the correct way to handle this. Jean-Paul
Re: [Python-Dev] 2.6 rc1 performance results
On Sat, 13 Sep 2008 08:03:50 -0400, A.M. Kuchling [EMAIL PROTECTED] wrote: Three weeks ago, Antoine Pitrou posted the pybench results for 2.6 trunk: http://mail.python.org/pipermail/python-dev/2008-August/081951.html The big discovery in those results was TryExcept being 48% slower, but there was a patch in the bug tracker to improve things. I've re-run the tests to check the results. Disclaimer: these results are probably not directly comparable. Antoine was using a 32-bit Linux installation on an Athlon 3600+ X2; I'm on a Macbook. Good news: TryExcept is now only 10% slower than 2.5, not 48%. Bad news: the big slowdowns are:

CompareFloats:            117ms  98ms +19.2%   118ms  99ms +19.0%
CompareIntegers:          110ms 104ms  +5.6%   110ms 105ms  +4.9%
DictWithStringKeys:       118ms 105ms +12.8%   133ms 108ms +22.7%
NestedForLoops:           125ms 116ms  +7.7%   127ms 118ms  +8.0%
Recursion:                193ms 159ms +21.5%   197ms 163ms +20.8%
SecondImport:             139ms 129ms  +8.4%   143ms 130ms  +9.9%
SecondPackageImport:      150ms 139ms  +8.6%   152ms 140ms  +8.1%
SecondSubmoduleImport:    211ms 191ms +10.5%   214ms 195ms  +9.4%
SimpleComplexArithmetic:  130ms 119ms  +9.4%   131ms 120ms  +9.2%

I see similar results for some of these. The complete results from a run on an AMD Athlon(tm) 64 Processor 3200+ are attached.
Jean-Paul

--- PYBENCH 2.0 ---
* using CPython 2.6rc1 (trunk:66421M, Sep 12 2008, 21:05:52) [GCC 4.2.3 (Ubuntu 4.2.3-2ubuntu7)]
* disabled garbage collection
* system check interval set to maximum: 2147483647
* using timer: time.time

--- Benchmark: p26.pybench ---
Rounds: 10  Warp: 10  Timer: time.time
Platform ID: Linux-2.6.24-19-generic-i686-with-debian-lenny-sid
Executable: /home/exarkun/Projects/python/trunk//python
Version: 2.6.0 (CPython), Compiler: GCC 4.2.3 (Ubuntu 4.2.3-2ubuntu7), 32bit, UCS2
Build: Sep 12 2008 21:05:52 (#trunk:66421M)

--- Comparing with: p25.pybench ---
Rounds: 10  Warp: 10  Timer: time.time
Executable: /home/exarkun/Projects/python/branches/release25-maint/python
Version: 2.5.3a0, Compiler: GCC 4.2.3 (Ubuntu 4.2.3-2ubuntu7), 32bit, UCS2
Build: Sep 13 2008 09:32:41 (#release25-maint:66444)

Test                          minimum run-time        average run-time
                              this   other   diff     this   other   diff
-------------------------------------------------------------------------
BuiltinFunctionCalls:         178ms  187ms  -4.5%     184ms  193ms  -4.6%
BuiltinMethodLookup:          151ms  165ms  -8.5%     155ms  167ms  -7.2%
CompareFloats:                150ms  146ms  +2.9%     153ms  150ms  +1.9%
CompareFloatsIntegers:        143ms  147ms  -2.8%     150ms  150ms  +0.4%
CompareIntegers:              180ms  182ms  -1.0%     182ms  190ms  -4.3%
CompareInternedStrings:       159ms  160ms  -1.1%     163ms  166ms  -2.0%
CompareLongs:                 135ms  136ms  -0.7%     136ms  139ms  -1.5%
CompareStrings:               142ms  150ms  -5.4%     146ms  153ms  -4.5%
CompareUnicode:               148ms  135ms  +9.6%     151ms  137ms +10.6%
ComplexPythonFunctionCalls:   155ms  226ms -31.4%     158ms  229ms -30.9%
ConcatStrings:                197ms  203ms  -2.8%     202ms  215ms  -6.4%
ConcatUnicode:                179ms  168ms  +6.6%     182ms  184ms  -0.8%
CreateInstances:              159ms  157ms  +1.4%     162ms  161ms  +0.7%
CreateNewInstances:           119ms  141ms -15.4%     121ms  144ms -16.2%
CreateStringsWithConcat:      189ms  173ms  +9.3%     195ms  177ms +10.2%
CreateUnicodeWithConcat:      116ms  113ms  +2.3%     118ms  115ms  +2.6%
DictCreation:                 109ms  140ms -22.2%     112ms  143ms -21.8%
DictWithFloatKeys:            202ms  199ms  +1.6%     208ms  204ms  +1.6%
DictWithIntegerKeys:          158ms  156ms  +1.0%     161ms
Re: [Python-Dev] Further PEP 8 compliance issues in threading and multiprocessing
On Mon, 1 Sep 2008 09:42:06 -0500, Benjamin Peterson [EMAIL PROTECTED] wrote: On Mon, Sep 1, 2008 at 9:36 AM, Antoine Pitrou [EMAIL PROTECTED] wrote: Nick Coghlan ncoghlan at gmail.com writes: Is this just intended to discourage subclassing? If so, why give the misleading impression that these things can be subclassed by naming them as if they were classes? How should this be handled when it comes to the addition of PEP 8 compliant aliases? I don't see a problem for trivial functional wrappers to classes to be capitalized like classes. So I'd suggest option 3: leave it as-is. Otherwise option 2 (replace the wrappers with the actual classes) has my preference. Yes, I believe that pretending that functions are classes is a fairly common idiom in the stdlib and out, so I see no problem leaving them alone. We haven't had any complaints about the threading Event function yet either. :) Here's a complaint. It's surprising that you can't use Event et al with isinstance. This is something I'm sure a lot of people run into (I did, many years ago) when they start to use these APIs. Once you do figure out why it doesn't work, it's not clear how to do what you want, since _Event is private. Jean-Paul
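The complaint is easy to reproduce with a self-contained sketch of the factory-function idiom (mirroring how Python 2's threading.Event was a function returning a private _Event instance):

```python
class _Event(object):
    """Private implementation class, as in Python 2's threading module."""
    def __init__(self):
        self._flag = False

def Event():
    # A factory function named like a class -- the idiom under discussion.
    return _Event()

e = Event()
try:
    isinstance(e, Event)
except TypeError:
    # Event is a function, not a class, so isinstance() rejects it --
    # and the real class, _Event, is private.
    print("isinstance() fails: Event is a function, not a class")
```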
Re: [Python-Dev] Think a dead import finding script would be handy?
On Sun, 17 Aug 2008 15:04:58 -0700, Brett Cannon [EMAIL PROTECTED] wrote: On Sun, Aug 17, 2008 at 1:40 PM, Georg Brandl [EMAIL PROTECTED] wrote: Brett Cannon schrieb: After Christian mentioned how we could speed up interpreter start-up by removing some dead imports he found, I decided to write up a quick script that generates the AST for a source file and (very roughly) tries to find imports that are never used. People think it's worth tossing into Tools, even if it is kind of rough? Otherwise I might toss it into the sandbox or make a quick Google code project out of it. Regardless, one interesting side-effect of the script is that beyond finding some extraneous imports in various places, it also found some holes in __all__. I have the script look for the definition of __all__ and consider an import used if it is listed there. pylint already finds unused imports. It finds tons of other, relatively useless, stuff in the default configuration, but I'm sure it can be coaxed into only showing unused imports too. Does anyone ever run pylint over the stdlib on occasion? Buildbot includes a pyflakes step. Jean-Paul
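A rough sketch (the function name is hypothetical; this is not Brett's actual script) of the AST approach described: record the names each import binds, collect the names actually referenced, and treat anything listed in __all__ as used.

```python
import ast

def unused_imports(source):
    """Return imported names that are never referenced or exported."""
    tree = ast.parse(source)
    imported = {}  # bound name -> line number of the import
    for node in ast.walk(tree):
        if isinstance(node, ast.Import):
            for alias in node.names:
                name = alias.asname or alias.name.split(".")[0]
                imported[name] = node.lineno
        elif isinstance(node, ast.ImportFrom):
            for alias in node.names:
                imported[alias.asname or alias.name] = node.lineno
    # Every bare name reference counts as a use.
    used = {n.id for n in ast.walk(tree) if isinstance(n, ast.Name)}
    # As in the script described above, a name listed in __all__ is "used".
    for node in ast.walk(tree):
        if (isinstance(node, ast.Assign)
                and any(getattr(t, "id", None) == "__all__"
                        for t in node.targets)
                and isinstance(node.value, (ast.List, ast.Tuple))):
            for elt in node.value.elts:
                if isinstance(elt, ast.Constant) and isinstance(elt.value, str):
                    used.add(elt.value)
    return sorted(set(imported) - used)

print(unused_imports("import os\nimport sys\nprint(sys.path)\n"))  # ['os']
```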
Re: [Python-Dev] Memory tests in Unicode
On Sat, 16 Aug 2008 13:01:33 -0300, Facundo Batista [EMAIL PROTECTED] wrote: 2008/8/16 Antoine Pitrou [EMAIL PROTECTED]: If the test does allocate the very large string, it means MemoryError isn't raised, which defeats the purpose of the test. I do *not* want to remove the test. Antoine wasn't suggesting removing it. He's suggesting that the test is not accomplishing its goal if the except suite isn't executed, and so the test should be changed to make this failure noticeable. Jean-Paul
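A sketch of the kind of change Antoine was after: using assertRaises, an unexpectedly successful allocation becomes a visible test failure instead of a silently skipped except suite. The size here is my own illustration (assuming a 64-bit build, where a 2**62-character string cannot be allocated), not the actual test's.

```python
import unittest

class MemoryTests(unittest.TestCase):
    def test_huge_string(self):
        # If the allocation unexpectedly succeeds, assertRaises fails
        # the test loudly rather than letting it quietly pass.
        with self.assertRaises(MemoryError):
            "x" * (2 ** 62)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(MemoryTests))
print("ok" if result.wasSuccessful() else "failed")
```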
Re: [Python-Dev] unittest Suggestions
On Wed, 13 Aug 2008 15:35:15 + (UTC), Antoine Pitrou [EMAIL PROTECTED] wrote: Barry Warsaw barry at python.org writes: The goal should be to produce something like a unittest-ng, distribute it via the Cheeseshop, and gather consensus around it for possible inclusion in Python 2.7/3.1. There is already unittest, nose, py.test, trial... perhaps others I don't know of. I fear writing yet another testing framework from the ground-up will lead to more bikeshedding and less focussed discussion (see some testing-in-python threads for an example :-)). nose itself is not a completely independent piece of work but a discovery-based unittest extension (although a very big extension!). For that reason, Michael Foord's suggestion to gradually modernize and improve the stdlib unittest sounds reasonable to me: it allows to be more focussed, keep backwards compatibility, and also to decide and implement changes piecewise - avoiding the blank sheet effect where people all push for wild ideas and radically new concepts (tm). (however, nose is LGPL-licensed so it would not be suitable for direct reuse of large chunks of code in the stdlib, unless the authors agree for a relicensing) trial is also an extension of the stdlib unittest module (less and less over time as more and more stdlib unittest changes break it). Incremental improvements with backwards compatibility are a great thing. I very strongly encourage that course of action. It has already happened a number of times in this thread that some proposed functionality already exists in some third-party unittest extension and could easily be moved into the stdlib unittest module. That's a good thing: it shows that the functionality is actually valuable and it makes it easy to include, since it's already implemented. For what it's worth, trial is MIT licensed; anyone should feel free to grab any part of it they like for any purpose. 
Jean-Paul
Re: [Python-Dev] unittest Suggestions
On Tue, 12 Aug 2008 11:05:57 -0400, Barry Warsaw [EMAIL PROTECTED] wrote: On Aug 12, 2008, at 10:30 AM, Sebastian Rittau wrote: [I just saw the other post about unit testing, while I was writing this. A strange coincidence.] Indeed. I've played around (again) recently with both nose and py.test, so I'd like to make a meta comment. I would really like to see some of the people who are interested in Python unit testing to get together and work on an updated testing framework that incorporates the best ideas from all the existing frameworks. I'd like to see good integration with setuptools, both for running the tests and for packaging. I'd like to see good doctest support, with the ability to hook in setups and teardowns. I'd like to see some of useful things like layers taken from zope.testing. This doesn't belong on python-dev, and probably not on python-ideas either, but I'd be willing to start a testing SIG on python.org if others are interested in getting together for this purpose. The goal should be to produce something like a unittest-ng, distribute it via the Cheeseshop, and gather consensus around it for possible inclusion in Python 2.7/3.1. A SIG might be a good idea. There's also already the testing in python list, too: http://lists.idyll.org/listinfo/testing-in-python A lot of this discussion would be appropriate there. Jean-Paul
Re: [Python-Dev] Proposed unittest changes
On Sun, 13 Jul 2008 23:51:44 +0100, Michael Foord [EMAIL PROTECTED] wrote: Ben Finney wrote: Howdy Michael, I'm interested in the changes you're proposing for Python's 'unittest' module. I am (like, I suspect, many Python coders) maintaining my own set of extensions to the module across many projects, so I'd really like to see many of the improvements you discuss actually in the standard library. What assistance can I offer to help on this issue? I intend to start working on them in August, after I have finished my current writing commitments. The full list of changes proposed (feel free to start - but ping me or the list) and not shot down was something like: Documenting that the assert method names are to be preferred over the 'FailUnless' names (this stirred up some controversy this weekend so should probably not happen). Adding the following new asserts: assertIn(member, container, msg=None) assertNotIn(member, container, msg=None) assertIs(first, second, msg=None) assertNotIs(first, second, msg=None) assertRaisesWithMessage(exc_class, message, callable, *args, **keywargs) Several of these are implemented in other libraries (Twisted, at least). You might save some time by grabbing them and their unit tests, rather than re-implementing them. Twisted calls `assertIs` `assertIdentical`, by the way. [snip] Other suggestions that weren't controversial but I might not get to: assertRaisesWithMessage taking a regex to match the error message Actually, I remember that someone raised an objection to this as being not as flexible as some might want - an objection I agree with. Perhaps that was overruled, but I didn't want this to slip by as not controversial. expect methods that collect failures and report at the end of the test (allowing an individual test method to raise several errors without stopping) assertIsInstance and assertIsSubclass The former of these is also in Twisted already, if you want to copy it.
Jean-Paul
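A sketch of what a couple of the proposed methods might look like, written as a mixin roughly in the Twisted style (the spelling assertIdentical follows Twisted, as noted above; the rest of the code here is illustrative, not any library's actual implementation):

```python
import unittest

class ExtraAssertsMixin(object):
    def assertIn(self, member, container, msg=None):
        if member not in container:
            self.fail(msg or "%r not found in %r" % (member, container))

    def assertIdentical(self, first, second, msg=None):
        # Twisted's name for the proposed assertIs.
        if first is not second:
            self.fail(msg or "%r is not %r" % (first, second))

class DemoTests(ExtraAssertsMixin, unittest.TestCase):
    def test_in(self):
        self.assertIn(2, [1, 2, 3])

    def test_identical(self):
        x = object()
        self.assertIdentical(x, x)

result = unittest.TextTestRunner(verbosity=0).run(
    unittest.defaultTestLoader.loadTestsFromTestCase(DemoTests))
print("ok" if result.wasSuccessful() else "failed")
```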
Re: [Python-Dev] Py3k DeprecationWarning in stdlib
On Thu, 26 Jun 2008 23:56:23 +1000, Nick Coghlan [EMAIL PROTECTED] wrote: [snip] Ok, then we're back to there being no supported way to write tests that need to intercept warnings. Twisted has already suffered from this (JP reports that Twisted's assertWarns is broken in 2.6), and I doubt it's alone. So I guess I am filing a bug after all... :) Yeah - Brett's correct that everything under test.test_support should really be formally undocumented. It's mostly a place for code that reflects things we do a lot in our unit tests and are tired of repeating, rather than "this is a good API that we want to support forever and encourage other people to use". However, if other folks turn out to have similar needs, then it may be possible to add something to unittest to support it. However, given that the beta deadline has already passed, you may need to use similar hackery to that used by catch_warning and replace warnings.showwarning with a test function that saves the raised exception (it also wouldn't be hard to enhance it a bit to handle more than a single warning). We don't use showwarning because in order to reliably catch warnings that way, it's necessary to rely on even more private implementation details of the warning system. Consider this:

from warnings import warn
from test.test_support import catch_warning

def f():
    warn("foo")

def test():
    with catch_warning() as w:
        f()
        assert str(w.message) == "foo", "%r != %r" % (w.message, "foo")

test()
test()

The first call's assertion passes (by the way, I don't understand why w.message isn't the message passed to warn, but is instead an instance of UserWarning) but the second call's assertion fails. A more subtle example might include two functions, the first of which is deprecated and called by the second, and one test for each of them.
Now the test for the warning will only pass if it runs before the other test; if they accidentally run in the other order, you won't see the warning, so as far as I can tell, you can't reliably write a unit test for warnings using catch_warning. The real problem with testing many uses of the warning system is that it doesn't expose enough public APIs for this to be possible. You *have* to use APIs which are, apparently, private (such as warn_explicit). Jean-Paul
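For reference, the scenario in the example above can be made reliable with the API that later stabilized as warnings.catch_warnings(record=True) combined with an "always" filter, which sidesteps the per-module __warningregistry__ entirely (a sketch of the eventual solution, not the test_support helper under discussion):

```python
import warnings

def f():
    warnings.warn("foo")

def test():
    with warnings.catch_warnings(record=True) as caught:
        # "always" keeps the registry from swallowing repeat warnings,
        # so the second call to test() observes the warning too.
        warnings.simplefilter("always")
        f()
        assert len(caught) == 1
        assert str(caught[0].message) == "foo"

test()
test()  # passes on the second run as well
print("both runs observed the warning")
```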
Re: [Python-Dev] Py3k DeprecationWarning in stdlib
On Fri, 27 Jun 2008 01:14:36 +1000, Nick Coghlan [EMAIL PROTECTED] wrote: Jean-Paul Calderone wrote: [snip] The real problem with testing many uses of the warning system is that it doesn't expose enough public APIs for this to be possible. You *have* to use APIs which are, apparently, private (such as warn_explicit). Hmm, I think the bigger problem is that there is no documented way to save the warnings filter and restore it to a previous state - the 'filters' attribute (which holds the actual list of filters) isn't documented and isn't included in __all__. This makes it hard to write an officially supported test that fiddles with the warning settings then puts them back the way they were. It sounds like you're agreeing that there aren't enough public APIs. Making warn_explicit public addresses this particular problem in one way - by letting applications hook into the warning system before filters are applied. Making the filter list public is another way, since it would let applications clear and then restore the filters. I don't particularly care about the details, I just want some public API for this. Making warn_explicit public seems better to me, since it was already there in previous versions of Python, and it lets you completely ignore both the filters list and the global registry, but if others would rather make the filters and global registry a part of the public API, that's fine by me as well. Jean-Paul
Re: [Python-Dev] Py3k DeprecationWarning in stdlib
On Fri, 27 Jun 2008 01:52:18 +1000, Nick Coghlan [EMAIL PROTECTED] wrote: Jean-Paul Calderone wrote: I don't particularly care about the details, I just want some public API for this. Making warn_explicit public seems better to me, since it was already there in previous versions of Python, and it lets you completely ignore both the filters list and the global registry, but if others would rather make the filters and global registry a part of the public API, that's fine by me as well. Why do you say warn_explicit isn't public? It's in both the 2.5 and 2.6 API docs for the warnings module. Brett told me it was private (on this list several weeks or a month or so ago). It's also no longer called in 2.6 by the C implementation of the warning system. Jean-Paul
Re: [Python-Dev] Community buildbots and Python release quality metrics
On Thu, 26 Jun 2008 21:46:48 +0200, Georg Brandl [EMAIL PROTECTED] wrote: [snip] As for reverting changes that break, I'd support this only for changes that break *all* of them. For example, I only use one platform to develop on (and I guess it's the same for many others), having the buildbots go red on another platform means I can try to fix the issue. BuildBot has two ways to let you run your code on all builders before you commit it to trunk. You can force a build on a branch or you can try a build with a patch. I don't know if these options are enabled on Python's buildmaster. If they are, then if you want, you can use them to make sure your code works on all platforms before you put it into trunk, where it may cause problems for someone else. Jean-Paul
Re: [Python-Dev] Py3k DeprecationWarning in stdlib
On Tue, 24 Jun 2008 23:03:33 -, [EMAIL PROTECTED] wrote: On 10:05 pm, [EMAIL PROTECTED] wrote: We need to be especially careful with the unit test suite itself - changing the test code to avoid the warning will normally be the right answer, but when the code is actually setting out to test the deprecated feature we need to suppress the warning in the test suite instead. This is a dangerous road to go down. If you suppress warnings in the test suite, you might suppress additional warnings which should actually be reported. Or, if the API gets modified in some way that the warning is supposed to be emitted but isn't any longer, it will be silent. It's easy to accidentally suppress too much or not enough. The way we've dealt with this in Twisted is adding an 'assertWarns' method so that we can invoke an API that is supposed to generate a warning, and (A) that warning and only that *specific* warning will not be emitted, and (B) if the API stops emitting the warning in the future, the test will fail and we will notice. It's also nice to have this facility in the test harness itself, so that you don't run the additional risk of accidentally (and silently) leaving warning suppression in place for subsequent tests. It would be *extra* nice to have this facility added to the standard library, since assertWarns in Twisted is broken by changes in Python 2.6 (ie, our tests for warnings all fail with [EMAIL PROTECTED]). For now, we will probably address this by switching to a different warning API. In the long term, it'd be better for us, other Python developers, and the standard library if there were an API in the standard library which facilitated testing of warnings. Jean-Paul
Re: [Python-Dev] test_multiprocessing: test_listener_client flakiness
On Wed, 18 Jun 2008 18:35:26 +0200, Amaury Forgeot d'Arc [EMAIL PROTECTED] wrote: [snip] I just found the cause of the problem ten minutes ago: It seems that when a socket listens on the address 127.0.0.1 or localhost, another process cannot connect to it using the machine's name (even from the same machine). That's because when you try to connect to A:B you won't connect to a server listening on X:B - somewhat by design. Changing the test to listen on both A:B and X:B might fix it, but so would changing it to connect to the same address it listens on. The best seems to listen with the empty address "". This will cause it to listen on all available interfaces, including public ones. It's rather unlikely that someone from the internet will connect to the port while the test suite is running and use it to do terrible things to you, but it's not /impossible/. I'd suggest changing the client to connect to 127.0.0.1 or localhost, instead. Jean-Paul
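The suggested fix can be sketched with plain sockets: bind and connect using the same loopback address, rather than binding to one name and connecting to another (the payload and port-0 trick here are illustrative, not the multiprocessing test's actual code):

```python
import socket
import threading

def serve(listener):
    conn, _ = listener.accept()
    conn.sendall(b"hello")
    conn.close()

listener = socket.socket()
listener.bind(("127.0.0.1", 0))   # loopback only, not the empty address ""
listener.listen(1)
port = listener.getsockname()[1]

t = threading.Thread(target=serve, args=(listener,))
t.start()

# Connect to the same address the server bound, not the machine's name.
client = socket.create_connection(("127.0.0.1", port))
data = client.recv(16)
client.close()
t.join()
listener.close()
print(data.decode())  # hello
```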
Re: [Python-Dev] Interesting blog post by Ben Sussman-Collins
On Fri, 13 Jun 2008 18:22:42 -0400, Barry Warsaw [EMAIL PROTECTED] wrote: [snip] * small branches - we have a strict limit on diffs no greater than 800 lines. Yes we have exceptions, but they are rare and pre-arranged. Having such a strict limit really forces you to be disciplined, organized and very effectively diffuses code bombs. * everyone can see (lots of) everyone else's code - this is great because everyone needs some advice or guidance along the way. If you get stuck, you can push a branch and I can pull it and look at it, run it, test it, even modify it and push my own branch for you to see. This is /much/ more effective than trading patches, and I don't see how this could even work without a dvcs. * nothing lands without being reviewed - this is a hard and fast rule, no exceptions. Someone else has to review your code, and most developers are also reviewers (we have a mentoring program to train new reviewers). You get over the fear pretty quickly, and learn /a lot/ both by reviewing and getting reviewed. Coding standards emerge, best practices are established, and overall team productivity goes way up. Small branches are critical to this process, as is our goal of reviewing every branch within 24 hours of its submission. * nothing lands without passing all tests - speaking from experience, this is the one thing I wish Python would adopt! This means the trunk is /always/ releasable and stable. The trade-off is that it can take quite a while for your branch to land once it's been approved, since this process is serialized and is dependent on full test suite execution time. Python's challenge here is that what passes on one platform does not necessarily pass on another. Still, if this week is any indication, passing on /any/ platform would be nice. ;) I'm not saying Python can or should adopt these guidelines. An open source volunteer project is different than a corporate environment, even if the latter is very open-source-y. 
But it is worthwhile to continually evaluate and improve the process because over time, you definitely improve efficiency in ways that are happily adopted by the majority of the community. A big +1 on all these points. I can also add that Twisted is developed following many of these rules, so it *can* work for open source volunteer projects, if the developers want it to. Jean-Paul
[Python-Dev] segfault in struct module
[EMAIL PROTECTED]:~$ ~/Projects/python/trunk/python
Python 2.6a3+ (trunk:63964, Jun 5 2008, 16:49:12)
[GCC 4.0.3 (Ubuntu 4.0.3-1ubuntu5)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import struct
>>> struct.pack('357913941c', 'a')
Segmentation fault
[EMAIL PROTECTED]:~$

The unit test for exactly this case was deleted in r60892. I would like to suggest that just deleting unit tests isn't a very good idea. Jean-Paul
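A hedged check, not the deleted test itself: on a current CPython the same call is rejected with an ordinary exception (struct.error for the item-count mismatch, or MemoryError for the huge repeat count) instead of crashing the interpreter.

```python
import struct

try:
    struct.pack('357913941c', b'a')
    outcome = "packed"  # should be unreachable: only one item was supplied
except (struct.error, MemoryError) as e:
    outcome = type(e).__name__

print("no segfault, got %s" % outcome)
```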
Re: [Python-Dev] GIL cpu usage problem, confirm me
On Sun, 8 Jun 2008 08:37:20 -0500, Benjamin Peterson [EMAIL PROTECTED] wrote: Certainly not in core Python. Have a look http://code.google.com/p/python-threadsafe/, though. http://code.google.com/p/python-safethread/ Jean-Paul
Re: [Python-Dev] Addition of pyprocessing module to standard lib.
On Wed, 21 May 2008 20:57:33 +0200, "Martin v. Löwis" [EMAIL PROTECTED] wrote: As said before, PyOpenGL is an example of an extension that moved from C code to Python/ctypes, luckily we don't use it, but what if the maintainers of MySQL-Python or cx_Oracle decide to move to ctypes. Having the ctypes extension in the stdlib doesn't imply it runs on any platform where python runs. Extension writers should keep this in mind when they decide to use ctypes. They should document that their extension depends on ctypes and therefore doesn't run on platforms where ctypes doesn't work. Plus, even if ctypes works, the code might be incorrect, because they had been assuming structure layouts and symbolic constants that have just a different definition on some other platform, causing the extension module to crash. Writing portable ctypes modules is really hard - significantly harder than writing portable C code (although writing non-portable ctypes code is apparently easier than writing non-portable C code). True. There's some room for improvement in ctypes here, fortunately. For example, PyPy has some tools which resolve the particular problem you're talking about; the library is even available separately and can (and probably should) be used by anyone writing a ctypes module. Sample usage and installation instructions available here: http://codespeak.net/~fijal/configure.html Jean-Paul
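The header-file problem is visible even in a trivial ctypes call: every signature must be restated by hand, and nothing checks the declaration against the real C prototype (a minimal sketch, assuming a Unix-like system where find_library can locate libc):

```python
import ctypes
import ctypes.util

libc = ctypes.CDLL(ctypes.util.find_library("c"))

# These declarations are copied from the C prototype by hand; if they
# were wrong, ctypes would happily call the function anyway -- this is
# exactly the portability hazard being discussed.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

print(libc.strlen(b"hello"))  # 5
```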
Re: [Python-Dev] Symbolic errno values in error messages
On Sat, 17 May 2008 00:15:23 +1000, Nick Coghlan [EMAIL PROTECTED] wrote: Alexander Belopolsky wrote: Yannick Gingras ygingras at ygingras.net writes: 2) Where can I find the symbolic name in C? Use the standard C library char* strerror(int errnum) function. You can see an example usage in Modules/posixmodule.c (posix_strerror). I don't believe that would provide adequate Windows support. It's not C, but maybe it's interesting to look at anyway: http://twistedmatrix.com/trac/browser/trunk/twisted/python/win32.py?rev=21682#L94 However, neither strerror nor the linked code gives out symbolic names for errnos. They both produce messages like "Interrupted system call", whereas the symbolic name would be EINTR. Modules/errnomodule.c might be worth looking at, although its solution is somewhat disappointing. Jean-Paul
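For what it's worth, on the Python side the symbolic-name direction is already covered by the errno module's errorcode mapping, while strerror gives the human-readable message:

```python
import errno
import os

def symbolic_name(err):
    # errno.errorcode maps numeric errno values to names like "EINTR".
    return errno.errorcode.get(err, "unknown errno %d" % err)

print(symbolic_name(errno.EINTR))  # EINTR
# os.strerror gives the message form; its exact text is platform-dependent.
print(os.strerror(errno.EINTR))
```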
[Python-Dev] Community buildbots
Hi, I just wanted to point out a few things: Community 2.5 bots, 6 out of 8 offline, of the remaining two (which are both red), one is actually using Python 2.6, not Python 2.5: http://python.org/dev/buildbot/community/2.5/ Community 2.6 bots, 6 out of 8 offline, but at least the remaining two (both of which are red) seem to be using the correct Python version: http://python.org/dev/buildbot/community/trunk/ Jean-Paul
[Python-Dev] warnings.showwarning (was Re: [Python-3000] Reminder: last alphas next Wednesday 07-May-2008)
On Thu, 1 May 2008 19:31:20 -0700, Brett Cannon [EMAIL PROTECTED] wrote: [snip] I just closed the release blocker I created (the backwards-compatibility issue with warnings.showwarning()). I would like to add a PendingDeprecationWarning (or stronger) to 2.6 for showwarning() implementations that don't support the optional 'line' argument. I guess the best way to do it in C code would be to see if PyFunction_GetDefaults() returns a tuple of length two (since showwarning() already has a single optional argument as it is). Hi Brett, I'm still seeing some strange behavior from the warnings module. This can be observed on the community buildbot for Twisted, for example: http://python.org/dev/buildbot/community/trunk/x86%20Ubuntu%20Hardy%20trunk/builds/171/step-Twisted.zope.stable/0 The log ends with basically all of the warning-related tests in Twisted failing, reporting that no warnings happened. There is also some strange behavior that can be easily observed in the REPL:

[EMAIL PROTECTED]:~/Projects/python/trunk$ ./python
/home/exarkun/Projects/Divmod/trunk/Combinator/combinator/xsite.py:7: DeprecationWarning: the sets module is deprecated
  from sets import Set
Python 2.6a2+ (trunk:62636M, May 2 2008, 09:19:41)
[GCC 4.1.3 20070929 (prerelease) (Ubuntu 4.1.2-16ubuntu2)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import warnings
>>> warnings.warn("foo")
:1: UserWarning: foo          # Where'd the module name go?
>>> def f(*a):
...     print a
...
>>> warnings.showwarning = f
>>> warnings.warn("foo")      # Where'd the warning go?

Any ideas on this? Jean-Paul
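For reference, a showwarning replacement compatible with the 2.6 signature has to accept the trailing optional line argument. A minimal sketch (the capture list is just for illustration):

```python
import warnings

captured = []

def show(message, category, filename, lineno, file=None, line=None):
    # Accepting `line` (which may be None) keeps this compatible with
    # the 2.6+ calling convention discussed above.
    captured.append((str(message), category.__name__))

warnings.showwarning = show
warnings.warn("demo warning")
```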
Re: [Python-Dev] cStringIO buffer interface
On Wed, 30 Apr 2008 09:51:25 -0700, Guido van Rossum [EMAIL PROTECTED] wrote: On Wed, Apr 30, 2008 at 9:36 AM, Farshid Lashkari [EMAIL PROTECTED] wrote: I was just curious as to why cStringIO objects don't implement the buffer interface. cStringIO objects seem similar to string and array objects, and those support the buffer protocol. Is there a reason against allowing cStringIO to support at least the read buffer interface, or is it just that nobody has considered it until now? Well, for one, it would mean you could no longer exchange a StringIO instance for a cStringIO instance. It would probably only mean that there is one further incompatibility between cStringIO and StringIO - you already can't exchange them in a number of cases. They handle unicode differently, they have different methods, etc. Maybe making them diverge even further is a step in the wrong direction, though. Also, what's the compelling use case you're thinking of? I'm not sure what use-case Farshid Lashkari had. For Twisted, it has been considered as a way to reduce peak memory usage (by reducing the need for memory copying, which also speeds things up). I'm not sure if anyone has benchmarked this yet, so I don't know if it's a real win or not. I think Thomas Hervé has a patch to cStringIO which implements the feature, though. For reference, http://twistedmatrix.com/trac/ticket/3188. This isn't high on my priority list, but I thought I'd point out the potential use-case. Jean-Paul
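As a historical footnote, this feature did eventually appear in the io module: BytesIO.getbuffer() returns a memoryview of the internal buffer without copying it, which is the zero-copy behavior Twisted was after. A sketch:

```python
import io

buf = io.BytesIO()
buf.write(b"hello world")

# getbuffer() exposes the internal buffer without copying it; the view
# can be sliced or handed to a socket's send() directly.
view = buf.getbuffer()
data = bytes(view)  # copy out only when actual bytes are needed

# Note: while any view is outstanding, the BytesIO cannot be resized.
view.release()
```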
Re: [Python-Dev] Encoding detection in the standard library?
On Mon, 21 Apr 2008 17:50:43 +0100, Michael Foord [EMAIL PROTECTED] wrote: [EMAIL PROTECTED] wrote: David> Is there some sort of text encoding detection module in the David> standard library? And, if not, is there any reason not to add David> one? No, there's not. I suspect the fact that you can't correctly determine the encoding of a chunk of text 100% of the time militates against it. The only approach I know of is a heuristic based approach. e.g. http://www.voidspace.org.uk/python/articles/guessing_encoding.shtml (Which was 'borrowed' from docutils in the first place.) This isn't the only approach, although you're right that in general you have to rely on heuristics. See the charset detection features of ICU: http://www.icu-project.org/userguide/charsetDetection.html I think OSAF's PyICU exposes these APIs: http://pyicu.osafoundation.org/ Jean-Paul
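The docutils-style heuristic linked above amounts to trying a list of codecs in order. A minimal sketch of that idea (the function name is my own):

```python
def guess_decode(data, encodings=("ascii", "utf-8", "latin-1")):
    """Return (text, encoding) for the first codec that decodes cleanly.

    A crude trial-and-error heuristic; real detectors such as ICU's
    also weigh byte-frequency statistics, which is why they do better.
    """
    for enc in encodings:
        try:
            return data.decode(enc), enc
        except UnicodeDecodeError:
            continue
    # latin-1 maps every byte value, so this fallback cannot fail.
    return data.decode("latin-1"), "latin-1"
```

For example, `guess_decode(b"caf\xc3\xa9")` skips ascii (which fails on the high bytes) and settles on utf-8.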
Re: [Python-Dev] socket.SOL_REUSEADDR: different semantics between Windows vs Unix (or why test_asynchat is sometimes dying on Windows)
On Fri, 4 Apr 2008 13:24:49 -0700, Trent Nelson [EMAIL PROTECTED] wrote: Interesting results! I committed the patch to test_socket.py in r62152. I was expecting all other platforms except for Windows to behave consistently (i.e. pass). That is, given the following:

import socket
host = '127.0.0.1'
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.bind((host, 0))
port = sock.getsockname()[1]
sock.close()
del sock
sock1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock1.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock1.bind((host, port))
sock2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock2.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock2.bind((host, port))

the second bind should fail with EADDRINUSE, at least according to the 'SO_REUSEADDR and SO_REUSEPORT Socket Options' section in chapter 7.5 of Stevens' UNIX Network Programming Volume 1 (2nd Ed): "With TCP, we are never able to start multiple servers that bind the same IP address and same port: a completely duplicate binding. That is, we cannot start one server that binds 198.69.10.2 port 80 and start another that also binds 198.69.10.2 port 80, even if we set the SO_REUSEADDR socket option for the second server." The results: both Windows *and* Linux fail the patched test; none of the buildbots for either platform encountered an EADDRINUSE socket.error after the second bind(). FreeBSD, OS X, Solaris and Tru64 pass the test -- EADDRINUSE is raised on the second bind. (Interesting that all the ones that passed have a BSD lineage.) Notice that the quoted text explains that you cannot start multiple servers that etc. Since you didn't call listen on either socket, it's arguable that you didn't start any servers, so there should be no surprise regarding the behavior. Try adding listen calls at various places in the example and you'll see something different happen. FWIW, AIUI, SO_REUSEADDR behaves just as described in the above quote on Linux/BSD/UNIX/etc.
On Windows, however, that option actually means something quite different. It means that the address should be stolen from any process which happens to be using it at the moment. There is another option, SO_EXCLUSIVEADDRUSE, only on Windows I think, which, AIUI, makes it impossible for another process to steal the port using SO_REUSEADDR. Hope this helps, Jean-Paul
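The "try adding listen calls" suggestion above is easy to demonstrate. On Linux (the platform this sketch assumes), once the first socket is actually listening, a second bind to the same address fails with EADDRINUSE even though both sockets set SO_REUSEADDR:

```python
import errno
import socket

host = "127.0.0.1"

sock1 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock1.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock1.bind((host, 0))
port = sock1.getsockname()[1]
sock1.listen(1)  # now there really is a "server" on the port

sock2 = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock2.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
try:
    sock2.bind((host, port))
    second_bind_failed = False
except OSError as e:
    # SO_REUSEADDR only helps with TIME_WAIT leftovers, not with a
    # socket that is actively listening.
    second_bind_failed = (e.errno == errno.EADDRINUSE)

sock2.close()
sock1.close()
```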
Re: [Python-Dev] __eq__ vs hash
On Fri, 4 Apr 2008 07:38:04 -0700, Guido van Rossum [EMAIL PROTECTED] wrote: On Fri, Apr 4, 2008 at 2:46 AM, Ralf Schmitt [EMAIL PROTECTED] wrote: the news file for python 2.6 does not mention that you need to define __hash__ in case you define __eq__ for a class. This breaks some code (for me: mercurial and pyparsing). Shouldn't this be documented somewhere (I also cannot find it in the whatsnew file). Well, technically this has always been the requirement. What specific code breaks? Maybe we need to turn this into a warning in order to be more backwards compatible? There was some code in Twisted (one class, specifically) which was broken (or revealed to be broken) by this Python 2.6 change. The code assumed identity hashing if no __hash__ method was implemented. This ended up working only if there was a single instance of the class, but the class did go out of its way to make sure that was the case. We have since changed the code to work on Python 2.6. If you're curious about the details, here's the code after the fix: http://twistedmatrix.com/trac/browser/trunk/twisted/web2/dav/element/base.py?rev=22305#L345 Here's the changeset that fixed it: http://twistedmatrix.com/trac/changeset/22305 And here's the same class before the fix: http://twistedmatrix.com/trac/browser/trunk/twisted/web2/dav/element/base.py?rev=22304#L344 Jean-Paul
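The requirement under discussion is simply that equal objects must hash equal, so a class defining __eq__ should define a matching __hash__ (Python 3 later made this mandatory: defining __eq__ alone sets __hash__ to None). A minimal sketch with an illustrative class:

```python
class Version(object):
    def __init__(self, major, minor):
        self.major = major
        self.minor = minor

    def __eq__(self, other):
        if not isinstance(other, Version):
            return NotImplemented
        return (self.major, self.minor) == (other.major, other.minor)

    def __hash__(self):
        # Hash exactly what __eq__ compares, so equal objects always
        # hash the same and can share a dict/set bucket.
        return hash((self.major, self.minor))
```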
Re: [Python-Dev] Proposal: from __future__ import unicode_string_literals
On Mon, 24 Mar 2008 00:14:13 +0100, "Martin v. Löwis" [EMAIL PROTECTED] wrote: You are still only seeing this as a case of libraries with a small number of people developing them and making regular well defined releases. That is not how the world I am talking about looks. Can you give me examples of such software? Are you perhaps talking about closed source software? I'm not sure what software he was talking about. I can say that for the work I do on both Twisted and Divmod software, I'd be quite happy to see this feature. As either part of a migration path towards 3k _or_ as a feature entirely on its own merits, this would be very useful to me. I'm a bit curious about why Thomas said this sort of thing results in fragile code. Twisted has been using __future__ imports for years and they've never been a problem. Twisted currently supports Python 2.3 through Python 2.5, and the only thing that's really difficult about that is subtle changes in library behavior, not syntax. I'm also curious about why Lennart thinks that this would only be relevant for large projects with lots of developers making regular releases. Sure, I'd use it in that scenario, but that's because it's a subset of all development. ;) Jean-Paul
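For the record, the feature being proposed did land in Python 2.6, spelled `unicode_literals` rather than `unicode_string_literals`; on Python 3 the import is accepted as a no-op:

```python
from __future__ import unicode_literals

# With the import active (Python 2.6+), a bare string literal is a
# unicode string, exactly as it is by default on Python 3.
text = "héllo"
```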
Re: [Python-Dev] Buildbot failures
On Thu, 7 Feb 2008 09:08:26 -0500, A.M. Kuchling [EMAIL PROTECTED] wrote: On Wed, Feb 06, 2008 at 08:34:21PM -0500, Raymond Hettinger wrote: Also, test_docxmlrpc hasn't been happy. One of the tests isn't getting the exact response string it expected. Any ideas what is causing this? My fault; it should be fixed now. There is also a recurring failure in SocketServer.py returning ValueError: list.remove(x): x not in list during attempts to remove a PID from the list of active_children. Any ideas about what is causing this? I couldn't find a current build that was showing this error, but searching python.org turned up one that had been indexed: http://www.python.org/dev/buildbot/trunk/ppc%20Debian%20unstable%20trunk/builds/726/step-test/0 I don't see what could be causing this failure, though; the test isn't starting any subprocesses outside of what the ForkingServer class does. I don't see how this could be an artifact of the buildbot environment, either. It would be easy to add an 'if pid in self.active_children' to the code, but I don't want to do that without understanding the problem. You could instrument fork() so that it logs the call stack and the child PID and instrument ForkingServer so that it reports which PID it is about to try to remove from active_children. Perhaps this will point to the problem. Jean-Paul
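The instrumentation suggested above can be done without touching SocketServer at all, by wrapping os.fork (the names here are my own, for illustration):

```python
import os
import traceback

fork_log = []          # (child_pid, stack of the call that forked)
_real_fork = os.fork

def logging_fork():
    pid = _real_fork()
    if pid != 0:
        # In the parent: record which code path created which child.
        fork_log.append((pid, "".join(traceback.format_stack(limit=5))))
    return pid

os.fork = logging_fork
```

Comparing fork_log against the PIDs ForkingServer later tries to remove from active_children would show whether a child is being collected twice or never recorded at all.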
Re: [Python-Dev] PEP 370, open questions
On Thu, 17 Jan 2008 08:55:51 +0100, Christian Heimes [EMAIL PROTECTED] wrote: * Should the site package directory also be ignored if process gid != effective gid? If it should, I think the PEP should explain the attack this defends against in more detail. The current brief mention of security issues is a bit hand-wavey. For example, what is the relationship between security, this feature, and the PYTHONPATH environment variable? Isn't the attack of putting malicious code into a user site-packages directory the same as the attack of putting it into a directory in PYTHONPATH? Jean-Paul
Re: [Python-Dev] PEP 370, open questions
On Thu, 17 Jan 2008 13:09:34 +0100, Christian Heimes [EMAIL PROTECTED] wrote: Jean-Paul Calderone wrote: If it should, I think the PEP should explain the attack this defends against in more detail. The current brief mention of security issues is a bit hand-wavey. For example, what is the relationship between security, this feature, and the PYTHONPATH environment variable? Isn't the attack of putting malicious code into a user site-packages directory the same as the attack of putting it into a directory in PYTHONPATH? The PYTHONPATH env var has the same security implications. However a user has multiple ways to avoid problems. For example the user can use the -E flag or set up sudo to ignore the environment. I'm not sure how sudo gets involved. sudo doesn't set the euid, it sets the uid. This is about programs with the setuid bit set. (I assume this doesn't also apply to Python programs that explicitly make use of the seteuid() call, since this will probably only be checked at interpreter startup before any Python application code has run.) The uid and gid tests aren't really required. They just provide an extra safety net if a user forgets to add the -s flag to a suid app. It's not much of a safety net if PYTHONPATH still allows injection of arbitrary code. It's just needless additional complexity for no benefit. On the other hand, if all of the other mechanisms for modifying how imports work are also made to behave this way, then maybe there's a point. Jean-Paul
Re: [Python-Dev] Bug day tasks
On Fri, 04 Jan 2008 16:53:46 +0100, Christian Heimes [EMAIL PROTECTED] wrote: A.M. Kuchling wrote: Another task is to get logging set up for the #python-dev IRC channel. Searching didn't find any existing archive; we could run it on python.org somewhere, but does anyone here already run an IRC logging bot? Maybe someone could just add #python-dev to their existing setup. It'd be nice if we can also get a bot into #python-dev to broadcast svn commits and bug tracker changes. The Twisted guys have a good bot with decent msg coloring but IIRC it's tied into Trac. For svn we could probably use CIA bot and tie it into a svn post commit hook. The Trac integration is entirely optional, so don't let that discourage you. If anyone wants to investigate setting this up: svn://divmod.org/svn/Divmod/sandbox/exarkun/commit-bot The code has no unit tests and there is no documentation. Also notice "sandbox" in the SVN URL. The only real advantage that it has over CIA that I can point out is that you don't have to write any XML or have a SQL server running in order to use it. Jean-Paul
Re: [Python-Dev] Does Python need a file locking module (slightly higher level)?
On Tue, 23 Oct 2007 01:11:39 +0100, Jon Ribbens [EMAIL PROTECTED] wrote: On Tue, Oct 23, 2007 at 12:29:35PM +1300, Greg Ewing wrote: [EMAIL PROTECTED] wrote: Does fcntl.flock work over NFS and SMB and on Windows? I don't think file locking will ever work over NFS, since it's a stateless protocol by design, and locking would require maintaining state on the server. You can do file locking over NFS, that's one of the reasons people use fcntl. It uses an RPC side channel separate to the main NFS protocol. You can do it. It just doesn't work. (You could say the same about regular read and write operations for many NFS implementations, though) Jean-Paul
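On a local filesystem, flock-style advisory locking is straightforward; whether it means anything over NFS depends entirely on the client and server implementations, which is the caveat being made above. A local sketch:

```python
import fcntl
import tempfile

with tempfile.NamedTemporaryFile() as f:
    # LOCK_NB makes the attempt non-blocking: if another process held
    # the lock, this would raise OSError instead of hanging.
    fcntl.flock(f.fileno(), fcntl.LOCK_EX | fcntl.LOCK_NB)
    locked = True
    # ... exclusive critical section ...
    fcntl.flock(f.fileno(), fcntl.LOCK_UN)
```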
Re: [Python-Dev] incompatible unittest changes
On Fri, 19 Oct 2007 15:51:51 -0700, Collin Winter [EMAIL PROTECTED] wrote: On 10/19/07, Jean-Paul Calderone [EMAIL PROTECTED] wrote: In trunk after 2.5, equality and hashing for TestCase were added, changing the behavior so that two instances of TestCase for the same test method hash the same and compare equal. This means two instances of TestCase for the same test method cannot be added to a single set. Here's the change: http://svn.python.org/view/python/trunk/Lib/unittest.py?rev=54199&r1=42115&r2=54199 The implementations aren't even very good, since they prevent another type from deciding that it wants to customize comparison against TestCase (or TestSuite, or FunctionTestCase) instances. The implementations have been changed in a more recent revision. Not in http://svn.python.org/projects/python/trunk/Lib/[EMAIL PROTECTED] Is there a real use case for this functionality? If not, I'd like it to be removed to restore the old behavior. The use-case was problems I encountered when writing the test suite for unittest. If you can find a way to implement the functionality you want *and* keep the test suite reasonably straightforward, I'll be happy to review your patch. The test suite can implement the comparison which is currently on the unittest classes and invoke that functionality instead of using == and !=. Jean-Paul
[Python-Dev] incompatible unittest changes
In trunk after 2.5, equality and hashing for TestCase were added, changing the behavior so that two instances of TestCase for the same test method hash the same and compare equal. This means two instances of TestCase for the same test method cannot be added to a single set. Here's the change: http://svn.python.org/view/python/trunk/Lib/unittest.py?rev=54199&r1=42115&r2=54199 The implementations aren't even very good, since they prevent another type from deciding that it wants to customize comparison against TestCase (or TestSuite, or FunctionTestCase) instances. Is there a real use case for this functionality? If not, I'd like it to be removed to restore the old behavior. Jean-Paul
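The behavior being objected to is easy to observe, and later versions of unittest kept it: two instances wrapping the same test method compare equal and collapse in a set:

```python
import unittest

class Demo(unittest.TestCase):
    def test_something(self):
        pass

a = Demo("test_something")
b = Demo("test_something")

same = (a == b)        # True: same class, same test method
unique = len({a, b})   # 1: the set keeps only one of the two instances
```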
Re: [Python-Dev] Removing the GIL (Me, not you!)
On Fri, 14 Sep 2007 14:13:47 -0500, Justin Tulloss [EMAIL PROTECTED] wrote: Your idea can be combined with the maxint/2 initial refcount for non-disposable objects, which should about eliminate thread-count updates for them. -- I don't really like the maxint/2 idea because it requires us to differentiate between globals and everything else. Plus, it's a hack. I'd like a more elegant solution if possible. It's not really a solution either. If your program runs for a couple minutes and then exits, maybe it won't trigger some catastrophic behavior from this hack, but if you have a long running process then you're almost certain to be screwed over by this (it wouldn't even have to be *very* long running - a month or two could do it on a 32bit platform). Jean-Paul
Re: [Python-Dev] Removing the GIL (Me, not you!)
On Fri, 14 Sep 2007 17:43:39 -0400, James Y Knight [EMAIL PROTECTED] wrote: On Sep 14, 2007, at 3:30 PM, Jean-Paul Calderone wrote: On Fri, 14 Sep 2007 14:13:47 -0500, Justin Tulloss [EMAIL PROTECTED] wrote: Your idea can be combined with the maxint/2 initial refcount for non-disposable objects, which should about eliminate thread-count updates for them. -- I don't really like the maxint/2 idea because it requires us to differentiate between globals and everything else. Plus, it's a hack. I'd like a more elegant solution if possible. It's not really a solution either. If your program runs for a couple minutes and then exits, maybe it won't trigger some catastrophic behavior from this hack, but if you have a long running process then you're almost certain to be screwed over by this (it wouldn't even have to be *very* long running - a month or two could do it on a 32bit platform). Not true: the refcount becoming 0 only calls a dealloc function. For objects which are not deletable, the dealloc function should simply set the refcount back to maxint/2. Done. So, e.g., replace the Py_FatalError in none_dealloc with an assignment to ob_refcnt? Good point, sounds like it could work (I'm pretty sure you know more about deallocation in CPython than I :). Jean-Paul
Re: [Python-Dev] [Python-3000] test_asyncore fails intermittently on Darwin
On Sun, 29 Jul 2007 23:40:29 -0700, Hasan Diwan [EMAIL PROTECTED] wrote: The issue seems to be in the socket.py close method. It needs to sleep socket.SO_REUSEADDR seconds before returning. Yes, it is a simple fix in python, but the socket code is C. I found some code in socket.py and made the changes. Patch is available at http://sourceforge.net/tracker/index.php?func=detail&aid=1763387&group_id=5470&atid=305470 -- enjoy your week. Cheers, Hasan Diwan [EMAIL PROTECTED] Uh, no, that's basically totally wrong. Details on the ticket. Jean-Paul
Re: [Python-Dev] Fwd: [ python-Patches-1744382 ] Read Write lock
On Fri, 6 Jul 2007 10:47:16 -0700, Mike Klaas [EMAIL PROTECTED] wrote: On 6-Jul-07, at 6:45 AM, Yaakov Nemoy wrote: I can do the other three parts, but I am wondering, how do I write a deterministic test unit for my patch? How is it done with the threading model in python in general? I don't know how it is done in general, but for reference, here are some of the unittests for my read/write lock class:

[snip]
read.release()
self.assertEqual(wrlock.readerCount, 0)
time.sleep(.1)
self.assertTrue(writer.gotit)

Not exactly deterministic. Instead of a flag attribute, try using an Event or a Condition. Either of these will let you know exactly when the necessary operation has completed. Jean-Paul
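Concretely, replacing the flag-plus-sleep pattern quoted above with a threading.Event removes the race. A sketch (the real test would acquire the write lock inside writer):

```python
import threading

gotit = threading.Event()

def writer():
    # ... acquire the write lock here ...
    gotit.set()  # announce completion deterministically

t = threading.Thread(target=writer)
t.start()

# Instead of time.sleep(.1) followed by checking a flag: wait() blocks
# until the event is set, or returns False after the timeout, so the
# test outcome no longer depends on scheduling luck.
assert gotit.wait(timeout=5.0)
t.join()
```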
Re: [Python-Dev] Adding NetworkIOError for bug 1706815
On Tue, 3 Jul 2007 23:58:44 -0700, Gregory P. Smith [EMAIL PROTECTED] wrote: In response to bug 1706815 and seeing messy code to catch errors in network apps I've implemented most of the ideas in the bug and added a NetworkIOError exception (child of IOError). With this, socket.error would now inherit from NetworkIOError instead of being its own thing (the old one didn't even declare a parent!). Complete patch attached to the bug. All unit tests pass. Documentation updates included. http://sourceforge.net/tracker/index.php?func=detail&aid=1706816&group_id=5470&atid=105470 FWIW, that page does not seem to be generally accessible. It's difficult to know what you're talking about without being able to see it. "Artifact: Invalid ArtifactID; this Tracker item may have moved to a different Tracker since this URL was generated" -- [Find the new location of this Tracker item] Following [Find the new location ...]: "Artifact: This Artifact Has Been Made Private. Only Group Members Can View Private ArtifactTypes." Any thoughts? I'm happy with it and would like to commit it if folks agree. Jean-Paul
Re: [Python-Dev] updated for gdbinit
On Tue, 15 May 2007 09:26:55 -0500, [EMAIL PROTECTED] wrote: Christian> I tried to use gdbinit today and found that the fragile Christian> pystacks macro didn't work anymore. I don't know gdb very Christian> well, but this turned out to work a bit more reliably: ... Thanks. I'll give it a try and check it in if it checks out. It would also be nice if it handled non-main threads. This is accomplished by additionally checking if the pc is in t_bootstrap (ie, between that and thread_PyThread_start_new_thread). Jean-Paul
Re: [Python-Dev] Python 2.5.1
On Sat, 28 Apr 2007 09:32:57 -0400, Raghuram Devarakonda [EMAIL PROTECTED] wrote: On 4/28/07, Calvin Spealman [EMAIL PROTECTED] wrote:

Index: test_os.py
===================================================================
--- test_os.py  (revision 54982)
+++ test_os.py  (working copy)
@@ -6,6 +6,7 @@
 import unittest
 import warnings
 import sys
+import tempfile
 from test import test_support

 warnings.filterwarnings("ignore", "tempnam", RuntimeWarning, __name__)
@@ -241,13 +242,18 @@
         self.assertEquals(os.stat(self.fname).st_mtime, t1)

     def test_1686475(self):
+        fn = tempfile.mktemp()
+        openfile = open(fn, 'w')
         # Verify that an open file can be stat'ed
         try:
-            os.stat(r"c:\pagefile.sys")
+            os.stat(fn)
         except WindowsError, e:
             if e == 2: # file does not exist; cannot run test
                 return
             self.fail("Could not stat pagefile.sys")
+        finally:
+            openfile.close()
+            os.remove(fn)

 from test import mapping_tests

mktemp() is deprecated. You may want to use mkstemp(). There will be no need for an explicit open as well, since mkstemp() also returns an open descriptor. You still need fdopen() though, since os.stat() won't take a file descriptor. The patch is incomplete though, since it should remove the ENOENT handling and the remaining reference to pagefile.sys. As for mktemp() being deprecated - the docstring warns users away, but actually calling it emits no warning. Sure, using it can lead to insecurities, but there's hardly any worry of that here. If the function were actually deprecated (that is, if calling it emitted a DeprecationWarning), that would be a good reason to avoid calling it, though. Jean-Paul
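The mkstemp-plus-fdopen version of the test body would look something like this (a sketch of the suggestion, not the committed patch):

```python
import os
import tempfile

# mkstemp() both creates the file securely and returns an open
# descriptor, avoiding mktemp()'s create-after-naming race.
fd, fn = tempfile.mkstemp()
openfile = os.fdopen(fd, "w")  # wrap the descriptor in a file object
try:
    st = os.stat(fn)           # stat the path while the file is open
finally:
    openfile.close()
    os.remove(fn)
```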
[Python-Dev] object.__init__
As a data point, I thought I'd point out that the recent object.__init__ change broke a handful of Twisted unit tests. The fix for this was simple, and I've already implemented it, but it would have been nice if the old behavior had been deprecated first and then removed, instead of just disappearing. Jean-Paul
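The change in question makes object.__init__ reject extra arguments. A minimal reproduction of the kind of code that broke (the class is illustrative, not the Twisted code):

```python
class Named(object):
    def __init__(self, name):
        # Forwarding extra arguments to object.__init__ used to be
        # silently ignored; after the change it raises TypeError.
        super(Named, self).__init__(name)

try:
    Named("example")
    init_rejected = False
except TypeError:
    init_rejected = True
```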
Re: [Python-Dev] 2.5 branch unfrozen
On Wed, 25 Apr 2007 21:28:10 +0200, Georg Brandl [EMAIL PROTECTED] wrote: Lars Gustäbel schrieb: On Sat, Apr 21, 2007 at 04:45:37PM +1000, Anthony Baxter wrote: Ok, things seem to be OK. So the release25-maint branch is unfrozen. Go crazy. Well, a little bit crazy. I'm afraid that I went crazy a little too early. Sorry for that. Won't happen again. BTW, svn provides a lock mechanism by which the branch freezing could be enforced more strictly... It doesn't work on directories, though. Jean-Paul
Re: [Python-Dev] functools additions
On Sun, 15 Apr 2007 18:18:16 -0400, SevenInchBread [EMAIL PROTECTED] wrote: Do you have commit access? What's your real name? I prefer to remain pseudonymous, and I don't have commit access. Yeah... they're not terribly useful - more or less there for the sake of being there. Batteries included and all that Please discuss this on the python-ideas list before bringing it up on python-dev. Jean-Paul
Re: [Python-Dev] test_pty.py hangs in verbose mode on Mac OS X?
On Fri, 13 Apr 2007 10:32:28 -0400, Barry Warsaw [EMAIL PROTECTED] wrote: I've been getting some test failures in Python 2.5 svn head on Mac OS X 10.4.9 which I'm not getting on Linux (Ubuntu feisty beta). test_sqlite and test_zipimport both fail, however, when run in verbose mode (e.g. ./python.exe Lib/test/test_sqlite.py) both pass. But that's not exactly why I'm writing this email wink. In the course of trying to debug this, I ran the following on my Mac: make TESTOPTS=-v test This runs the entire test suite in verbose mode, and you do get a lot of output. However the test suite hangs on test_pty.py. In fact, if you run that test alone: ./python.exe Lib/test/test_pty.py it too hangs for me. The reason is that in verbose mode, debug() actually prints stuff to stdout and on the Mac, when the child of the pty.fork() writes to its stdout, it blocks and so the parent's waitpid() never returns. This doesn't happen on Linux though; the child's stdout prints don't block, it exits, and the parent continues after the waitpid(). Here's a very simple program that reproduces the problem:

-snip snip-
import os, pty, sys
pid, fd = pty.fork()
print >> sys.stderr, pid, fd
if pid:
    os.waitpid(pid, 0)
else:
    os._exit(0)
-snip snip-

stderr, stdout, doesn't matter. This hangs on the Mac but completes successfully on Linux. Of course, in neither case do you see the child's output. I don't know if this is caused by a bug in the Mac's pty implementation or something we're doing wrong on that platform. I played around with several modifications to pty.fork() on the Mac, including letting it drop down to the openpty()/os.fork() code, even adding an explicit ioctl(slave_fd, TIOCSCTTY) call which Stevens chapter 19 recommends for 4.3+BSD. I can't get it to not block. What about reading from the child in the parent before calling waitpid?
Barring a fix to pty.fork() (or possibly os.forkpty()) for the Mac, then I would like to at least make test_pty.py not block when run in verbose mode. A very simple hack would add something like "def debug(msg): pass" to the "if pid == pty.CHILD" stanza, possibly protected by an "if verbose:". A less icky hack would be to read the output from the master_fd in the parent, though you have to be careful with that on Linux else the read can throw an input/output error. Disabling debug output is a band-aid, yes, and any application on the Mac like the above snippet will still fail. If anybody has any suggestions, I'm all ears, but I've reached the limit of my pty-fu. I don't think this is an OS X PTY bug. Writing to a blocking file descriptor can block. Programs that do this need to account for the possibility. Jean-Paul
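The suggestion at the end, reading from the child before waiting, looks something like this sketch: by draining the master fd before waitpid(), the parent guarantees the child can never wedge on a full pty buffer, whatever the platform's buffer size.

```python
import os
import pty

pid, fd = pty.fork()
if pid == 0:
    # Child: write to its stdout, which is the slave side of the pty.
    os.write(1, b"hello from child\n")
    os._exit(0)

# Parent: drain the master side *before* waitpid(), so the child can
# never block on a full pty buffer.
chunks = []
while True:
    try:
        data = os.read(fd, 1024)
    except OSError:
        break  # Linux raises EIO on the master once the child side closes
    if not data:
        break
    chunks.append(data)
output = b"".join(chunks)
os.waitpid(pid, 0)
os.close(fd)
```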
Re: [Python-Dev] test_pty.py hangs in verbose mode on Mac OS X?
On Fri, 13 Apr 2007 11:02:01 -0400, Barry Warsaw [EMAIL PROTECTED] wrote: On Apr 13, 2007, at 10:57 AM, Jean-Paul Calderone wrote:

I don't know if this is caused by a bug in the Mac's pty implementation or something we're doing wrong on that platform. I played around with several modifications to pty.fork() on the Mac, including letting it drop down to the openpty()/os.fork() code, even adding an explicit ioctl(slave_fd, TIOCSCTTY) call which Stevens chapter 19 recommends for 4.3+BSD. I can't get it to not block.

What about reading from the child in the parent before calling waitpid?

Yep, this is what I suggested below. Porting the same change over to Linux produced an OSError, but that's probably just because I wasn't as careful as I should have been late last night.

Barring a fix to pty.fork() (or possibly os.forkpty()) for the Mac, I would like to at least make test_pty.py not block when run in verbose mode. A very simple hack would add something like this to the if pid == pty.CHILD stanza: def debug(msg): pass, possibly protected by an if verbose:. A less icky hack would be to read the output from the master_fd in the parent, though you have to be careful with that on Linux, else the read can throw an input/output error. Disabling debug output is a band-aid, yes, and any application on the Mac like the above snippet will still fail. If anybody has any suggestions, I'm all ears, but I've reached the limit of my pty-fu.

I don't think this is an OS X pty bug. Writing to a blocking file descriptor can block. Programs that do this need to account for the possibility.

Why doesn't it block on Linux then?

Likely differing buffering behavior. Prior to Linux 2.6, the pipe implementation allowed only a single buffer (that is, the bytes from a single write call) in a pipe at a time, and blocked subsequent writes until that buffer was read.
Recently this has changed to allow multiple buffers up to 4k total length, so multiple short writes won't block anymore. OS X may have some other buffering behavior which is causing writes to block where they don't on Linux. All these details are left to the platform, and there are a variety of behaviors which can be considered valid. Of course, I don't actually /know/ the cause of the problem here, but this explanation seems plausible to me, and I'd investigate it before looking for platform-specific pty bugs (although OS X is a good platform on which to go looking for those ;). Jean-Paul ___ Python-Dev mailing list [EMAIL PROTECTED] http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] About SSL tests
On Wed, 28 Mar 2007 16:38:45 -0700, Brett Cannon [EMAIL PROTECTED] wrote: On 3/28/07, Facundo Batista [EMAIL PROTECTED] wrote: There's this bug (#451607) about the need for tests for socket SSL... The last interesting update in the tracker is five years ago, and since then a lot of work has been done on test_socket_ssl.py (Brett, Neal, Tim, Georg Brandl). Do you think it is useful to leave this bug open?

Having a bug left open because a module needs more tests is not really needed. It's rather obvious when a module needs more tests. =) I say close it. I just wish we had a more reliable web site to connect to for SSL tests.

How about something even better? Take a look at openssl s_server. This is still a pretty terrible way to test the SSL functionality, but it's loads better than connecting to a site on the public internet.

Jean-Paul
Re: [Python-Dev] About SSL tests
On Thu, 29 Mar 2007 00:22:23 +0000 (UTC), Facundo Batista [EMAIL PROTECTED] wrote: Jean-Paul Calderone wrote: Take a look at openssl s_server. This is still a pretty terrible way to test the SSL functionality, but it's loads better than connecting to a site on the public internet.

How would you deal with the deployment and maintenance of this server on all the buildbot machines? Or can we just check whether the server is available, and run the tests only if it is?

If the openssl binary is available, when the test starts, launch it in a child process, talk to it for the test, then kill it when the test is done.

Jean-Paul
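The spawn-talk-kill pattern Jean-Paul describes might look like this. This is a sketch under stated assumptions: the certificate and key file names are placeholders that a real test fixture would have to generate, and the fixed port and sleep-based startup wait are simplifications.

```python
import shutil
import subprocess
import time

def with_ssl_server(test_func, certfile="test_cert.pem",
                    keyfile="test_key.pem", port=4433):
    # Launch `openssl s_server` as a child process for the duration of a
    # test, then kill it.  Returns None (i.e. skips) when the openssl
    # binary is not available, as suggested on the list.
    if shutil.which("openssl") is None:
        return None  # no binary: skip the SSL tests
    server = subprocess.Popen(
        ["openssl", "s_server", "-quiet", "-accept", str(port),
         "-cert", certfile, "-key", keyfile],
        stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL)
    try:
        time.sleep(0.5)  # crude wait for the server to start listening
        return test_func(port)  # talk to the server here
    finally:
        server.terminate()
        server.wait()
```

A real harness would poll the port instead of sleeping and would generate a throwaway self-signed certificate in setUp, but the shape of the fixture is the same.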
Re: [Python-Dev] Breaking calls to object.__init__/__new__
On Wed, 21 Mar 2007 15:45:16 -0700, Guido van Rossum [EMAIL PROTECTED] wrote: See python.org/sf/1683368. I'd like to invite opinions on whether it's worth breaking an unknown amount of user code in 2.6 for the sake of stricter argument checking for object.__init__ and object.__new__. I think it probably isn't; but the strict version could be added to 3.0 and a warning issued in 2.6 in -Wpy3k mode. Alternatively, we could introduce the stricter code in 2.6, fix the stdlib modules that it breaks, and hope for the best. Opinions?

Perhaps I misunderstand the patch, but it would appear to break not just some inadvisable uses of super(), but an actual core feature of super(). Maybe someone can set me right. Is this correct?

    class Base(object):
        def __init__(self, important):
            # Don't upcall with `important` because object is the base
            # class and its __init__ doesn't care about (or won't accept) it
            super(Base, self).__init__()
            self.a = important

If so, what are the implications for this?

    class Other(object):
        def __init__(self, important):
            # Don't upcall with `important` because object is the base
            # class and its __init__ doesn't care about (or won't accept) it
            super(Other, self).__init__()
            self.b = important

    class Derived(Base, Other):
        pass

(A similar example could be given where Base and Other take differently named arguments with nothing to do with each other. The end result is the same either way, I think.)

I think I understand the desire to pull keyword arguments out at each step of the upcalling process, but I don't see how it can work, since upcalling isn't always what's going on - given a diamond, there's arbitrary side-calling, so for cooperation to work every method has to pass on every argument, so object.__init__ has to take arbitrary args, since no one knows when their upcall will actually hit object. Since without diamonds, naive by-name upcalling works, I assume that super() is actually intended to be used with diamonds, so this seems relevant.
I hope I've just overlooked something. Writing this email feels very strange.

Jean-Paul
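The cooperative pattern Jean-Paul alludes to, in which every __init__ consumes its own keyword arguments and passes the rest along so the chain can end at object no matter where the diamond routes it, is usually written like this. A sketch in modern (Python 3) syntax, where object.__init__ is strict:

```python
class Base:
    def __init__(self, important, **kwargs):
        # Consume this class's own argument; pass everything else along.
        # Depending on the MRO, this upcall may land on Other or on object.
        super().__init__(**kwargs)
        self.a = important

class Other:
    def __init__(self, other_arg, **kwargs):
        # Same discipline: take what's ours, forward the rest.
        super().__init__(**kwargs)
        self.b = other_arg

class Derived(Base, Other):
    # MRO: Derived -> Base -> Other -> object.  Base's "upcall" is really
    # a side-call to Other; by the time object.__init__ runs, every
    # keyword argument has been consumed, so the strict check passes.
    pass
```

This only works because each class strips its own arguments before forwarding; the naive "don't pass `important` because object won't accept it" style in the quoted example breaks as soon as a diamond reroutes the call chain.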
Re: [Python-Dev] Status of thread cancellation
On Thu, 15 Mar 2007 14:34:15 +0100, "Martin v. Löwis" [EMAIL PROTECTED] wrote: I just proposed to implement thread cancellation for the SoC. Is there any prior work where one could start?

The outcome of some prior work, at least: http://java.sun.com/j2se/1.4.2/docs/guide/misc/threadPrimitiveDeprecation.html

Jean-Paul
Re: [Python-Dev] Status of thread cancellation
On Thu, 15 Mar 2007 09:41:31 -0500, [EMAIL PROTECTED] wrote: I just proposed to implement thread cancellation for the SoC. Is there any prior work where one could start?

Jean-Paul> The outcome of some prior work, at least:
Jean-Paul> http://java.sun.com/j2se/1.4.2/docs/guide/misc/threadPrimitiveDeprecation.html

I responded to that. I got the impression reading that page that the killed thread doesn't regain control, so it can't clean up its potentially inconsistent data structures.

The second question on the page, "Couldn't I just catch the ThreadDeath exception and fix the damaged object?", addresses this. I inferred from Martin's proposal that he expected the thread to be able to catch the exception.

Perhaps he can elaborate on what cleanup actions the dying thread will be allowed to perform.

Perhaps he can. Hopefully, he can specifically address these points:

1. A thread can throw a ThreadDeath exception almost anywhere. All synchronized methods and blocks would have to be studied in great detail, with this in mind.

2. A thread can throw a second ThreadDeath exception while cleaning up from the first (in the catch or finally clause). Cleanup would have to be repeated till it succeeded. The code to ensure this would be quite complex.

Jean-Paul
Re: [Python-Dev] unittest enhancement for TestCase classes hierarchies
On Sat, 10 Mar 2007 09:13:28 -0600, Collin Winter [EMAIL PROTECTED] wrote: In my continuing trawl through the SF patch tracker, I came across #1244929 (http://python.org/sf/1244929), which causes TestLoader.loadTestsFromModule() to skip classes whose name starts with an underscore. This addresses the warning in that method's docs: While using a hierarchy of TestCase-derived classes can be convenient in sharing fixtures and helper functions, defining test methods on base classes that are not intended to be instantiated directly does not play well with this method. Doing so, however, can be useful when the fixtures are different and defined in subclasses. Does not play well, in this case, means that your base classes will be picked up against your will if they subclass TestCase. I like the patch and have worked up tests and doc changes for it. Any objections to including this in 2.6? This use case is what mixins are for. You don't have to include TestCase in your ancestry until you get to a class which you actually want to run tests. The current rule of loading anything that subclasses TestCase is simple and straightforward. Complicating it to provide a feature which is already available through a widely used standard Python idiom doesn't seem worth while. Jean-Paul ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
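The mixin idiom Jean-Paul describes can be sketched like this: the shared test methods live on a plain class, and TestCase only enters the ancestry of the classes that should actually be collected and run. The class names here are illustrative, not from the patch under discussion.

```python
import unittest

class CommonChecksMixin:
    # Shared test methods and helpers live here.  Because this is NOT a
    # TestCase subclass, loadTestsFromModule() will never try to run it
    # directly -- no underscore-prefix convention needed.
    def test_value_is_nonnegative(self):
        self.assertGreaterEqual(self.make_value(), 0)

class ConcreteValueTests(CommonChecksMixin, unittest.TestCase):
    # Only this class mixes in TestCase, so only it is collected.
    # Subclass-specific fixtures go here.
    def make_value(self):
        return 42
```

Each concrete subclass supplies its own fixtures (make_value here) while inheriting the shared test methods from the mixin.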
Re: [Python-Dev] [PATCH] Handling of scripts / substitution of python executable path
On Fri, 23 Feb 2007 15:36:50 +0100, Hans Meine [EMAIL PROTECTED] wrote: Hi! [snip - distutils should leave #!/usr/bin/env python alone] Comments? (I first posted this to distutils-sig but was told that distutils is a bit neglected there, so I decided to try to push these simple patches in via python-dev.) How about a distutils installation command line option to tell it to use this behavior? People with unusual environments can select it. Jean-Paul
Re: [Python-Dev] Twisted Isn't Specific (was Re: Trial balloon: microthreads library in stdlib)
On Thu, 15 Feb 2007 10:46:05 -0500, A.M. Kuchling [EMAIL PROTECTED] wrote: On Thu, Feb 15, 2007 at 09:19:30AM -0500, Jean-Paul Calderone wrote: That feels like 6 layers too many, given that

    _logrun(selectable, _drdw, selectable, method, dict)
    return context.call({ILogContext: newCtx}, func, *args, **kw)
    return self.currentContext().callWithContext(ctx, func, *args, **kw)
    return func(*args, **kw)
    getattr(selectable, method())
    klass(number, string)

are all generic calls. I know function calls are expensive in Python, and method calls even more so... but I still don't understand this issue. Twisted's call stack is too deep? It is fair to say it is deep, I guess, but I don't see how that is a problem. If it is, I don't see how it is specific to this discussion.

It's hard to debug the resulting problem. Which level of the *12* levels in the stack trace is responsible for a bug? Which of the *6* generic calls is calling the wrong thing because a handler was set up incorrectly or the wrong object was provided? The code is so 'meta' that it becomes effectively undebuggable.

I've debugged plenty of Twisted applications. So it's not undebuggable. :) Application code tends to reside at the bottom of the call stack, so Python's traceback order puts it right where you're looking, which makes it easy to find. For any bug which causes something to be set up incorrectly and only later manifests as a traceback, I would posit that whether there is 1 frame or 12, you aren't going to get anything useful out of the traceback. Standard practice here is just to make exception text informative, I think, but this is another general problem with Python programs and event loops, not one specific to either Twisted itself or the particular APIs Twisted exposes. As a personal anecdote, I've never once had to chase a bug through any of the 6 generic calls singled out. I can't think of a case where I've helped anyone else who had to do this, either.
That part of Twisted is very old, it is _very_ close to bug-free, and application code doesn't have very much control over it at all. Perhaps in order to avoid scaring people, there should be a way to elide frames from a traceback (I don't much like this myself, I worry about it going wrong and chopping out too much information, but I have heard other people ask for it)? Jean-Paul ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] microthreading vs. async io
On Thu, 15 Feb 2007 10:36:21 -0600, [EMAIL PROTECTED] wrote: [snip]

    def fetchSequence(...):
        fetcher = Fetcher()
        yield fetcher.fetchHomepage()
        firstData = yield fetcher.fetchPage('http://...')
        if someCondition(firstData):
            while True:
                secondData = yield fetcher.fetchPage('http://...')
                # ...
                if someOtherCondition(secondData):
                    break
        else:
            # ...

Ahem:

    from twisted.internet import reactor
    from twisted.internet.defer import inlineCallbacks
    from twisted.web.client import getPage

    @inlineCallbacks
    def fetchSequence(...):
        homepage = yield getPage(homepage)
        firstData = yield getPage(anotherPage)
        if someCondition(firstData):
            while True:
                secondData = yield getPage(wherever)
                if someOtherCondition(secondData):
                    break
        else:
            ...

So as I pointed out in another message in this thread, for several years it has been possible to do this with Twisted. Since Python 2.5, you can do it exactly as I have written above, which looks exactly the same as your example code. Is the only problem here that this style of development hasn't been made visible enough?

Jean-Paul
Re: [Python-Dev] Twisted Isn't Specific (was Re: Trial balloon: microthreads library in stdlib)
On Thu, 15 Feb 2007 13:55:31 -0800, Josiah Carlson [EMAIL PROTECTED] wrote: Jean-Paul Calderone [EMAIL PROTECTED] wrote: [snip] Now if we can only figure out a way for everyone to benefit from this without tying too many brains up in knots. :) Whenever I need to deal with these kinds of things (in wxPython specifically), I usually set up a wxTimer to signal asyncore.poll(timeout=0), but I'm lazy, and rarely need significant throughput in my GUI applications. And I guess you also don't mind that on OS X this is often noticably broken? :) [snip] Protocol support is hit and miss. NNTP in Python could be better, but that's not an asyncore issue (being that nntplib isn't implemented using asyncore), that's an NNTP in Python could be done better issue. Is it worth someone's time to patch it, or should they just use Twisted? Well, if we start abandoning stdlib modules, because they can always use Twisted, then we may as well just ship Twisted with Python. We could always replace the stdlib modules with thin compatibility layers based on the Twisted protocol implementations. It's trivial to turn an asynchronous API into a synchronous one. I think you are correct in marking this an unrelated issue, though. Jean-Paul ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
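Jean-Paul's claim that "it's trivial to turn an asynchronous API into a synchronous one" can be illustrated with a generic sketch. The blocking_call helper and its `callback=` convention are mine, not Twisted's API (Twisted's Deferred has its own machinery for this); the point is only the shape of the compatibility layer.

```python
import threading

def blocking_call(async_fn, *args, timeout=5.0):
    # Turn a callback-style asynchronous function into a synchronous one.
    # `async_fn` is assumed to accept a `callback` keyword argument that
    # it invokes (possibly from another thread) with the result.
    done = threading.Event()
    box = {}

    def callback(result):
        box["result"] = result
        done.set()

    async_fn(*args, callback=callback)
    if not done.wait(timeout):
        raise TimeoutError("asynchronous operation did not complete")
    return box["result"]
```

A thin wrapper like this is roughly what a compatibility layer over an async protocol implementation would amount to: the async code runs as it pleases, and the synchronous caller just waits for the callback to fire.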
Re: [Python-Dev] Twisted Isn't Specific (was Re: Trial balloon: microthreads library in stdlib)
On Thu, 15 Feb 2007 15:47:39 +1300, Greg Ewing [EMAIL PROTECTED] wrote: Steve Holden wrote: If the borrowed code takes a reactor parameter then presumably the top-level code can pass the appropriate reactor type in. Since there should only be one reactor at a time in any given application, it shouldn't have to be passed in -- it could be held in a global variable deep inside the library. Only the code which creates the reactor initially needs to know about that variable, or even that there is such a thing as a reactor. Whether or not the premise here is accurate may be out of scope for this thread. Or it may not be. I dunno. However, I do want to point out that it is not necessarily correct that there should be only one reactor at a time in a given application. PJE has already explained that peak.events can have multiple reactors. Twisted is tied to one, but this may not always be the case. Whether there is a default reactor for applications that don't care about the ability to have more than one at a time is yet another question which may be worth examining. These are the kinds of things which should be spelled out in a PEP, including the rationale for any particular policy decisions (which should be kept to an absolute minimum) are made. Incorporating some piece of event-driven code written by someone else implies specific assumptions about event types and delivery, surely. It requires agreement on how to specify the event types and what to do in response, but that's all it should require. The way I envisage it, setting up an event callback should be like opening a file -- there's only one way to do it, and you don't have to worry about what the rest of the application is doing. You don't have to get passed an object that knows how to open files -- it's a fundamental service provided by the system. You just use it. 
If we suppose that files and sockets are supported in roughly the same way, and we suppose that sockets are supported in the way that Twisted supports them, then there is no difficulty supporting files in this way. :) That's why it's difficult to port code between GUI toolkits, for example, and even more so to write code that runs on several toolkits without change. Just in case it's not clear, the events I'm talking about are things like file and socket I/O, not GUI events. Trying to use two different GUIs at once is not something I'm addressing. Alright, good. Getting two different GUI libraries to play together is a pretty hairy task indeed, and well worth keeping separate from this one. :) Rather, you should be able to write code that does e.g. some async socket I/O, and embed it in a GUI app using e.g. gtk, without having to modify it to take account of the fact that it's working in a gtk environment, or having to parameterise it to allow for such things. Excellent. To be clear, this is how the Twisted model works, with respect to integration with GUI toolkits. I would not enjoy working with a system in which this was not the case. You seem to be arguing for libraries that contain platform dependencies to handle multiple platforms. I'm arguing that as much of the platform dependency as possible should be in the asyncore library (or whatever replaces it). Certainly. Library code doesn't care if the event loop is driven by select or poll or epoll or /dev/poll or kqueue or aio or iocp or win32 events or realtime signals or kaio or whatever gnarly thing is hidden in gtk or whatever gnarly thing is hidden inside qt or whatever gnarly thing is hidden inside COM or whatever gnarly thing is hidden inside wxWidgets. It cares about what features are available. It requests them somehow, and uses them. If they are unavailable, then it can decide whether the lack is catastrophic and give up or if it can be worked around somehow. 
The way a Twisted application does this is based on interfaces. Assuming interfaces continue to not be present in the stdlib, a stdlib event loop would have to find some other API for presenting this information, but it is not a very hard problem to solve. The main application code *might* have to give it a hint such as this app uses gtk, but no more than that. And ideally, I'd prefer it not to even have to do that -- pygtk should do whatever is necessary to hook itself into asyncore if at all possible, not the other way around. There is some advantage to declaring things up front, lest you get into the situation where you are partway through using code which will suddenly begin to demand Gtk at the same time as you are partway through using code which will suddenly begin to demand Qt, at which point you are in trouble. But this is another minor point. Since Glyph has already stated his opinion that Twisted isn't yet ready for adoption as-is this doesn't add to the discussion. Okay, but one of the suggestions made seemed to be why not just use the Twisted API. I'm putting forward a possible reason. So far, it sounds like the
Re: [Python-Dev] Twisted Isn't Specific (was Re: Trial balloon: microthreads library in stdlib)
On Thu, 15 Feb 2007 16:18:40 +1300, Greg Ewing [EMAIL PROTECTED] wrote: [snip] This is where my vision is fundamentally different: you shouldn't have to *make* a decision in the first place. All event-driven libraries should be made to use the same substrate on any given platform. Then they can coexist without the need for any top-level choices. I know that will be hard to do, but it's the only way out of this mess that I can see. Thomas already pointed this out, but I'm repeating it anyway. This vision represents an impossible reality at present. You will not get Gtk or Qt or wxWidgets to use Python's event notification API. If you are really very interested in solving this problem, go to the developers of each platform those toolkits run on and sell them on a unified event notification API. Once they have adopted, implemented, and deployed it, you can go to the Gtk, Qt, and wxWidgets teams and tell them to port all of their code to that new API. Then, you can have a unified model in Python. Until then, the practical compromise with almost zero negative consequences (sometimes, one extra piece of configuration will be required - compare this to how the logging module works ;) is to optionally allow explicit reactor selection. Jean-Paul ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Summary of dynamic attribute access discussion
On Tue, 13 Feb 2007 17:20:02 +0100, "Martin v. Löwis" [EMAIL PROTECTED] wrote: Anthony Baxter schrieb: and the wrapper class idea of Nick Coghlan: attrview(obj)[foo] This also appeals - partly because it's not magic syntax wink

I also like this. I would like to spell it attrs, and I think its specification is

    class attrs:
        def __init__(self, obj):
            self.obj = obj
        def __getitem__(self, name):
            return getattr(self.obj, name)
        def __setitem__(self, name, value):
            return setattr(self.obj, name, value)
        def __delitem__(self, name):
            return delattr(self.obj, name)
        def __contains__(self, name):
            return hasattr(self.obj, name)

It's so easy people can include it in their code for backwards compatibility; in Python 2.6, it could be a highly-efficient builtin (you still pay for the lookup of the name 'attrs', of course).

This looks nice. The simplicity of the implementation is great too.

Jean-Paul
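Written out runnable, Martin's sketch behaves as below. Note that the version as posted applied delattr and hasattr to the view itself rather than to the wrapped object; this copy applies them to self.obj, which is clearly the intent. The Config class is just an illustrative attribute holder.

```python
class attrs:
    # Dict-style view of an object's attributes: attrs(obj)[name]
    # instead of getattr(obj, name), etc.
    def __init__(self, obj):
        self.obj = obj
    def __getitem__(self, name):
        return getattr(self.obj, name)
    def __setitem__(self, name, value):
        setattr(self.obj, name, value)
    def __delitem__(self, name):
        # Applied to the wrapped object, not the view.
        delattr(self.obj, name)
    def __contains__(self, name):
        return hasattr(self.obj, name)

class Config:
    pass

view = attrs(Config())
view["answer"] = 42  # equivalent to setattr(config, "answer", 42)
```

The __contains__ method also gives `name in attrs(obj)` as a readable spelling of hasattr.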
Re: [Python-Dev] Summary of dynamic attribute access discussion
On Tue, 13 Feb 2007 11:27:48 -0800, Mike Klaas [EMAIL PROTECTED] wrote: On 2/13/07, Josiah Carlson [EMAIL PROTECTED] wrote: As for people who say, but getattr, setattr, and delattr aren't used; please do some searches of the Python standard library. In a recent source checkout of the trunk Lib, there are 100+ uses of setattr, 400+ uses of getattr (perhaps 10-20% of which being the 3 argument form), and a trivial number of delattr calls. In terms of applications where dynamic attribute access tends to happen; see httplib, urllib, smtpd, the SocketServer variants, etc. Another data point: on our six-figure loc code base, we have 123 instances of getattr, 30 instances of setattr, and 0 instances of delattr. There are 5 instances of setattr( ... getattr( ... ) ) on one line (and probably a few more that grep didn't pick up that span multiple lines). Another data point: in our six-figure loc code base, we have 469 instances of getattr, 91 instances of setattr, and 0 instances of delattr. There is one instances of setattr(..., getattr(...)), and one instance of setattr(getattr(...), ...). +1 on .[] notation and the idea in general. -1 on a syntax change for this. Somewhere between -0 and +0 for a builtin or library function like attrview(). Jean-Paul ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Trial balloon: microthreads library in stdlib
On Wed, 14 Feb 2007 15:20:13 +1300, Greg Ewing [EMAIL PROTECTED] wrote: Greg, productive discussion is not furthered by the unsupported statement of one position or another. Instead of only stating what you believe to be a problem, explain why you believe it is a problem. A sentence like: The need for different event-driven mechanisms to compete with each other is the very problem that needs to be addressed. Invites a response which merely contradicts it (for example, you are wrong), an exchange which hasn't helped anyone to understand anything better. If you present supporting evidence for the position, then the validity and the weight of that evidence can be discussed, and one position or another might be shown to have greater validity. Also, show that you have fully understood the position you are arguing against. For example, if you respond to a message in which someone claims to welcome something, don't respond by saying that requiring that thing is bad. As you know, welcoming something is not the same as requiring that thing, so by making this statement alone, you give the impression of talking past the person to whom you are responding and it may seem to readers that you haven't understood the other person's position. If Twisted is designed so that it absolutely *has* to use its own special event mechanism, and everything else needs to be modified to suit its requirements, then it's part of the problem, not part of the solution. Here, you've built on your unsupported premise to arrive at a conclusion which may be controversial. Again, instead of couching the debate in terms of what you might see as self evidence problems, explain why you hold the position you do. That way, the possibility is created for other people to come to understand why you believe the conclusion to be valid. You have presented what could be the beginning of supporting evidence here, in saying that requiring everything else to be modified is undesirable. 
This is only a place to start, not to end, though. You may want to discuss the real scope of modifications required (because everything is obviously hyperbole, focusing on what changes are actually necessary would be beneficial) and why you think that modifications are necessary (it may not be clear to others why they are, or it may be the case that others can correct misconceptions you have). For example, you might give a case in which you have needed to integrate Twisted (or a different event framework) with another event loop and describe difficulties you discovered. This will help advance the discussion around practical, specific concerns. Without this focus, it is hard for a discussion to be productive, since it will involve only vague handwaving. Finally, it is often beneficial to avoid bringing up phrases such as the problem. Particularly in a context such as this, where the existing discussion is focusing on a specific issue, such as the necessity or utility of adding a new set of functionality to the Python standard library, the relevance of the problem may not be apparent to readers. In this case, some may not find it obvious how a third party library can be the problem with such new functionality. If you explicitly spell out the detrimental consequences of an action, instead of waving around the problem, the resulting discussion can be that much more productive and focused. Thanks Jean-Paul ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Importing .pyc in -O mode and vice versa
On Tue, 07 Nov 2006 12:20:00 +1300, Greg Ewing [EMAIL PROTECTED] wrote: I think I'd be happy with having to do that explicitly. I expect the vast majority of Python programs don't need to track changes to the set of importable modules during execution. The exceptions would be things like IDEs, and they could do a cache flush before reloading a module, etc. Another questionable optimization which changes application-level semantics. No, please? Jean-Paul
Re: [Python-Dev] Importing .pyc in -O mode and vice versa
On Sun, 05 Nov 2006 14:21:34 +1300, Greg Ewing [EMAIL PROTECTED] wrote: Fredrik Lundh wrote: well, from a performance perspective, it would be nice if Python looked for *fewer* things, not more things. Instead of searching for things by doing a stat call for each possible file name, would it perhaps be faster to read the contents of all the directories along sys.path into memory and then go searching through that? Bad for large directories. There's a cross-over at some number of entries. Maybe Python should have a runtime-tuned heuristic for selecting a filesystem traversal mechanism. Jean-Paul
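The trade-off being discussed, one listdir per directory versus one stat per candidate file name, can be sketched as a toy lookup. find_module_file is an illustrative helper, not a real piece of the import machinery:

```python
import os

def find_module_file(name, path_dirs, suffixes=(".py", ".pyc")):
    # Read each directory once and search the in-memory listing, instead
    # of issuing a stat() for every candidate file name in every
    # directory.  For large directories the single listdir can lose to a
    # handful of targeted stats -- that is the cross-over Jean-Paul
    # mentions.
    for d in path_dirs:
        try:
            entries = set(os.listdir(d))
        except OSError:
            continue  # unreadable or missing directory: skip it
        for suffix in suffixes:
            candidate = name + suffix
            if candidate in entries:
                return os.path.join(d, candidate)
    return None
```

A runtime-tuned heuristic would pick between this strategy and per-file stats based on directory size, which is exactly why it is hard to declare either one faster in general.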
Re: [Python-Dev] Path object design
On Wed, 01 Nov 2006 11:06:14 +0100, Georg Brandl [EMAIL PROTECTED] wrote: [EMAIL PROTECTED] wrote: On 03:14 am, [EMAIL PROTECTED] wrote: One thing is sure -- we urgently need something better than os.path. It functions well but it makes hard-to-read and unpythonic code.

I'm not so sure. The need is not any more urgent today than it was 5 years ago, when os.path was equally unpythonic and unreadable. The problem is real, but there is absolutely no reason to hurry to a premature solution. I've already recommended Twisted's twisted.python.filepath module as a possible basis for the implementation of this feature. I'm sorry I don't have the time to pursue that. I'm also sad that nobody else seems to have noticed. Twisted's implementation has an advantage that these new proposals don't seem to have, an advantage I would really like to see in whatever gets seriously considered for adoption:

Looking at http://twistedmatrix.com/documents/current/api/twisted.python.filepath.FilePath.html, it seems as if FilePath was made to serve a different purpose than what we're trying to discuss here: I am a path on the filesystem that only permits 'downwards' access. Instantiate me with a pathname (for example, FilePath('/home/myuser/public_html')) and I will attempt to only provide access to files which reside inside that path. [...] The correct way to use me is to instantiate me, and then do ALL filesystem access through me. What a successor to os.path needs is not security, it's a better (more pythonic, if you like) interface to the old functionality.

No. You've misunderstood the code you looked at. FilePath serves exactly the purpose being discussed here. Take a closer look.

Jean-Paul
Re: [Python-Dev] The lazy strings patch
On Mon, 23 Oct 2006 07:58:25 -0700, Larry Hastings [EMAIL PROTECTED] wrote: [snip] If external Python extension modules are as well-behaved as the shipping Python source tree, there simply wouldn't be a problem. Python source is delightfully consistent about using the macro PyString_AS_STRING() to get at the creamy char *center of a PyStringObject *. When code religiously uses that macro (or calls PyString_AsString() directly), all it needs is a recompile with the current stringobject.h and it will Just Work. I genuinely don't know how many external Python extension modules are well- behaved in this regard. But in case it helps: I just checked PIL, NumPy, PyWin32, and SWIG, and all of them were well-behaved. FWIW, http://www.google.com/codesearch?q=+ob_sval Jean-Paul ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] The lazy strings patch
On Mon, 23 Oct 2006 09:07:51 -0700, Josiah Carlson [EMAIL PROTECTED] wrote: Paul Moore [EMAIL PROTECTED] wrote: I had picked up on this comment, and I have to say that I had been a little surprised by the resistance to the change based on the code would break argument, when you had made such a thorough attempt to address this. Perhaps others had missed this point, though. I'm also concerned about future usability. Me too (perhaps in a different way though). Word in the Py3k list is that Python 2.6 will be just about the last Python in the 2.x series, and by directing his implementation at only Python 2.x strings, he's just about guaranteeing obsolescence. People will be using 2.x for a long time to come. And in the long run, isn't all software obsolete? :) By building with unicode and/or objects with a buffer interface in mind, Larry could build with both 2.x and 3.x in mind, and his code wouldn't be obsolete the moment it was released. (I'm not sure what the antecedent of it is in the above, I'm going to assume it's Python 3.x.) Supporting unicode strings and objects providing the buffer interface seems like a good idea in general, even disregarding Py3k. Starting with str is reasonable though, since there's still plenty of code that will benefit from this change, if it is indeed a beneficial change. Larry, I'm going to try to do some benchmarks against Twisted using this patch, but given my current time constraints, you may be able to beat me to this :) If you're interested, Twisted [EMAIL PROTECTED] plus this trial plugin: http://twistedmatrix.com/trac/browser/sandbox/exarkun/merit/trunk will let you do some gross measurements using the Twisted test suite. I can give some more specific pointers if this sounds like something you'd want to mess with. Jean-Paul ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Massive test_sqlite failure on Mac OSX ... sometimes
On Sun, 22 Oct 2006 07:51:27 -0500, [EMAIL PROTECTED] wrote: Ronald According to a comment in (IIRC) the pyOpenGL sources GLUT on Ronald OSX does a chdir() during initialization, that could be the Ronald problem here. How would that explain that it fails on my g5 but not on my powerbook? They are at the same revision of the operating system and compiler. The checksums on the libraries are different though the file sizes are the same. The dates on the files are different as well. I suspect the checksum difference is caused by the different upgrade dates of the two machines and the resulting different times the two systems were optimized. Is there anyone else with a g5 who can do a vanilla Unix (not framework) build on an up-to-date g5 from an up-to-date Subversion repository? It would be nice if someone else could at least confirm or not confirm this problem. Robert Gravina has seen a problem which bears some resemblance to this one while using PySQLite in a real application on OS X. I've pointed him to this thread; hopefully it's the same issue and a second way of producing the issue will shed some more light on the matter. The top of that thread is available here: http://divmod.org/users/mailman.twistd/pipermail/divmod-dev/2006-October/000707.html Jean-Paul ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Caching float(0.0)
On Sun, 1 Oct 2006 13:54:31 -0400, Terry Reedy [EMAIL PROTECTED] wrote:

> Nick Craig-Wood [EMAIL PROTECTED] wrote in message
> news:[EMAIL PROTECTED]
>> On Fri, Sep 29, 2006 at 12:03:03PM -0700, Guido van Rossum wrote:
>>> I see some confusion in this thread. If a *LITERAL* 0.0 (or any
>>> other float literal) is used, you only get one object, no matter how
>>> many times it is used.
>>
>> For some reason that doesn't happen in the interpreter, which has
>> been confusing the issue slightly...
>>
>>   $ python2.5
>>   >>> a=0.0
>>   >>> b=0.0
>>   >>> id(a), id(b)
>>   (134737756, 134737772)
>
> Guido said *a* literal (emphasis shifted), reused as in a loop or
> function recalled, while you used *a* literal, then *another* literal,
> without reuse. Try a=b=0.0 instead.

Actually this just has to do with, um, compilation units, for lack of a
better term:

  [EMAIL PROTECTED]:~$ python
  Python 2.4.3 (#2, Apr 27 2006, 14:43:58)
  [GCC 4.0.3 (Ubuntu 4.0.3-1ubuntu5)] on linux2
  Type "help", "copyright", "credits" or "license" for more information.
  >>> a = 0.0
  >>> b = 0.0
  >>> print a is b
  False
  >>> ^D
  [EMAIL PROTECTED]:~$ cat > test.py
  a = 0.0
  b = 0.0
  print a is b
  ^D
  [EMAIL PROTECTED]:~$ python test.py
  True
  [EMAIL PROTECTED]:~$ cat > test_a.py
  a = 0.0
  ^D
  [EMAIL PROTECTED]:~$ cat > test_b.py
  b = 0.0
  ^D
  [EMAIL PROTECTED]:~$ cat > test.py
  from test_a import a
  from test_b import b
  print a is b
  ^D
  [EMAIL PROTECTED]:~$ python test.py
  False
  [EMAIL PROTECTED]:~$ python
  Python 2.4.3 (#2, Apr 27 2006, 14:43:58)
  [GCC 4.0.3 (Ubuntu 4.0.3-1ubuntu5)] on linux2
  Type "help", "copyright", "credits" or "license" for more information.
  >>> a = 0.0; b = 0.0
  >>> print a is b
  True
  >>> ^D
  [EMAIL PROTECTED]:~$

Each line in an interactive session is compiled separately, like modules
are compiled separately. With the current implementation, literals in a
single compilation unit have a chance to be cached like this. Literals
in different compilation units, even for the same value, don't.
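The compilation-unit effect described above can be demonstrated directly with compile(). Note this is an observation about current CPython's constant handling, not a language guarantee:

```python
# Each compile() call is a separate compilation unit. Within one unit,
# CPython's compiler merges equal constants into a single object;
# across units, equal values get distinct objects.
one_unit = {}
exec(compile("a = 0.0\nb = 0.0\n", "<one-unit>", "exec"), one_unit)

two_units = {}
exec(compile("a = 0.0", "<unit-a>", "exec"), two_units)
exec(compile("b = 0.0", "<unit-b>", "exec"), two_units)

print(one_unit["a"] is one_unit["b"])    # shared constant within a unit
print(two_units["a"] is two_units["b"])  # distinct objects across units
```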
Jean-Paul
Re: [Python-Dev] deja-vu .. python locking
On Mon, 18 Sep 2006 17:06:47 +0200, Martin Devera [EMAIL PROTECTED] wrote:

> Martin v. Löwis wrote:
>> Martin Devera schrieb:
>>> RCU like locking
>>> The solution I have in mind is similar to RCU. In Python we have a
>>> quiescent state - when a thread returns to the main loop of the
>>> interpreter.
>>
>> There might be a terminology problem here. RCU is read-copy-update,
>> right? I fail to see the copy (copy data structure to be modified)
>> and update (replace original pointer with pointer to copy) part. Does
>> this play a role in that scheme? If so, what specific structure is
>> copied for, say, a list or a dict? This confusion makes it very
>> difficult for me to understand your proposal, so I can't comment much
>> on it. If you think it could work, just go ahead and create an
>> implementation.
>
> That is why I used the word "similar". I see the similarity in the way
> it achieves the safe delete phase of RCU. Probably I selected a bad
> title for the text. It is because I was reading about the RCU
> implementation in the Linux kernel and I discovered that the idea of
> postponing critical code to some safe point in the future might work
> in the Python interpreter. So you are right: it is not RCU. It only
> uses a technique similar to the one RCU uses for freeing the old copy
> of data. It is based on the assumption that an object is typically
> used by a single thread.

Which thread owns builtins? Or module dictionaries? If two threads are
running the same function and share no state except their globals, won't
they constantly be thrashing on the module dictionary? Likewise, if the
same method is running in two different threads, won't they thrash on
the class dictionary?

Jean-Paul
Re: [Python-Dev] Testsuite fails on Windows if a space is in the path
On Sat, 16 Sep 2006 19:22:34 +0200, "Martin v. Löwis" [EMAIL PROTECTED] wrote:

> The test suite currently (2.5) has two failures on Windows if Python
> is installed into a directory with a space in it (such as "Program
> Files"). The failing tests are test_popen and test_cmd_line.
>
> The test_cmd_line failure is shallow: the test fails to properly quote
> sys.executable when passing it to os.popen. I propose to fix this in
> Python 2.5.1; see #1559413.
>
> test_popen is more tricky. This code has always failed AFAICT, except
> that the test itself is a recent addition. The test tries to pass the
> following command to os.popen:
>
>   "c:\Program Files\python25\python.exe" -c "import sys;print sys.version"
>
> For some reason, os.popen doesn't directly start Python as a new
> process, but instead invokes
>
>   cmd.exe /c "c:\Program Files\python25\python.exe" -c "import sys;print sys.version"
>
> Can somebody remember what the reason is to invoke cmd.exe (or
> COMSPEC) in os.popen?

I would guess it was done to force cmd.exe-style argument parsing in the
subprocess, which is optional on Win32.

> In any case, cmd.exe fails to execute this, claiming that c:\Program
> is not a valid executable. It would run
>
>   cmd.exe /c "c:\Program Files\python25\python.exe"
>
> just fine, so apparently, the problem is with arguments that have
> multiple pairs of quotes. I found, through experimentation, that it
> *will* accept
>
>   cmd.exe /c ""c:\Program Files\python25\python.exe" -c "import sys;print sys.version""
>
> (i.e. doubling the quotes at the beginning and the end). I'm not quite
> sure what algorithm cmd.exe uses for parsing, but it appears that
> adding a pair of quotes works in all cases (at least those I could
> think of). See #1559298.

You can find the quoting/dequoting rules used by cmd.exe documented on
msdn:

http://msdn.microsoft.com/library/default.asp?url=/library/en-us/vclang/html/_pluslang_Parsing_C.2b2b_.Command.2d.Line_Arguments.asp

Interpreting them is something of a challenge (my favorite part is how
the examples imply that the final argument is automatically uppercased
;)

Here is an attempted implementation of the quoting rules:

http://twistedmatrix.com/trac/browser/trunk/twisted/python/win32.py#L41

Whether or not it is correct is probably a matter of discussion. If you
find a more generally correct solution, I would certainly like to know
about it.

Jean-Paul
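As an aside, modern Python's subprocess module carries its own implementation of one layer of these quoting rules: the MS C runtime's argument parser (the one the MSDN page documents), as opposed to cmd.exe's, whose rules differ again. `subprocess.list2cmdline` is an internal helper rather than a documented API, so treat this only as an illustration:

```python
import subprocess

# list2cmdline applies the MS C runtime rules: quote arguments that
# contain whitespace, double backslashes that precede a quote, and
# backslash-escape embedded quotes. It is pure Python, so it runs
# (and can be inspected) on any platform.
args = [r"c:\Program Files\python25\python.exe", "-c",
        "import sys;print sys.version"]
cmdline = subprocess.list2cmdline(args)
print(cmdline)
```

The output reproduces exactly the command line from the quoted message, which shows why cmd.exe, applying a second and different dequoting pass on top of this, mangles it.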
Re: [Python-Dev] [Twisted-Python] Newbie question
On Thu, 7 Sep 2006 11:41:48 -0400, Timothy Fitz [EMAIL PROTECTED] wrote: On 9/5/06, Jean-Paul Calderone [EMAIL PROTECTED] wrote: You cannot stop the reactor and then start it again. Why don't the reactors throw if this happens? This question comes up almost once a month. One could just as easily ask why no one bothers to read mailing list archives to see if their question has been answered before. No one will ever know, it is just one of the mysteries of the universe. Jean-Paul ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] [Twisted-Python] Newbie question
Sorry, brainfart. Jean-Paul ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Signals, threads, blocking C functions
On Mon, 04 Sep 2006 15:05:56 +0100, Nick Maclaren [EMAIL PROTECTED] wrote: Gustavo Carneiro [EMAIL PROTECTED] wrote: That's a very good point; I wasn't aware that child processes inherited the signals mask from their parent processes. That's one of the few places where POSIX does describe what happens. Well, usually. You really don't want to know what happens when you call something revolting, like csh or a setuid program. This particular mess is why I had to write my own nohup - the new POSIX interfaces broke the existing one, and it remains broken today on almost all systems. I am now thinking of something along these lines: typedef void (*PyPendingCallNotify)(void *user_data); PyAPI_FUNC(void) Py_AddPendingCallNotify(PyPendingCallNotify callback, void *user_data); PyAPI_FUNC(void) Py_RemovePendingCallNotify(PyPendingCallNotify callback, void *user_data); Why would that help? The problems are semantic, not syntactic. Anthony Baxter isn't exaggerating the problem, despite what you may think from his posting. Dealing with threads and signals is certainly hairy. However, that barely has anything to do with what Gustavo is talking about. By the time Gustavo's proposed API springs into action, the threads already exist and the signal is already being handled by one. So, let's forget about threads and signals for a moment. The problem to be solved is that one piece of code wants to communicate a piece of information to another piece of code. The first piece of code is in Python itself. The second piece of code could be from any third-party library, and Python has no way of knowing about it - now. Gustavo is suggesting adding a registration API so that these third-party libraries can tell Python that they exist and are interested in this piece of information. Simple, no? PyGTK would presumably implement its pending call callback by writing a byte to a pipe which it is also passing to poll(). 
This lets them handle signals in a very timely manner without constantly waking up from poll() to see if Python wants to do any work. This is far from a new idea - it's basically the bog standard way of handling this situation. It strikes me as a very useful API to add to Python (although at this point in the 2.5 release process, not to 2.5, sorry Gustavo). Jean-Paul ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
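The pipe-writing scheme described above is commonly known as the self-pipe trick. Here is a minimal POSIX-only sketch in modern Python; SIGALRM and the timings are arbitrary choices for the demonstration, and on Python 3 the stdlib's `signal.set_wakeup_fd` packages up the same idea:

```python
import os
import select
import signal

# Self-pipe trick: the signal handler only writes one byte; the main
# loop notices it via select()/poll() without waking up on a timer.
read_fd, write_fd = os.pipe()
os.set_blocking(write_fd, False)  # the handler must never block

def handler(signum, frame):
    os.write(write_fd, b"\0")  # write() is async-signal-safe on POSIX

signal.signal(signal.SIGALRM, handler)
signal.setitimer(signal.ITIMER_REAL, 0.05)  # deliver SIGALRM shortly

# The event loop sleeps in select(); the handler's byte wakes it up.
readable, _, _ = select.select([read_fd], [], [], 5.0)
os.read(read_fd, 1)  # drain the wakeup byte
print(readable)
```

Note that on Python 3.5+ (PEP 475) the interrupted select() is retried automatically after the handler runs, so the wakeup byte, not the EINTR, is what ends the wait.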
Re: [Python-Dev] Signals, threads, blocking C functions
On Mon, 04 Sep 2006 17:24:56 +0100, David Hopwood [EMAIL PROTECTED] wrote: Jean-Paul Calderone wrote: PyGTK would presumably implement its pending call callback by writing a byte to a pipe which it is also passing to poll(). But doing that in a signal handler context invokes undefined behaviour according to POSIX. write(2) is explicitly listed as async-signal safe in IEEE Std 1003.1, 2004. Was this changed in a later edition? Otherwise, I don't understand what you mean by this. Jean-Paul ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Signals, threads, blocking C functions
On Mon, 04 Sep 2006 18:18:41 +0100, Nick Maclaren [EMAIL PROTECTED] wrote: Jean-Paul Calderone [EMAIL PROTECTED] wrote: On Mon, 04 Sep 2006 17:24:56 +0100, David Hopwood [EMAIL PROTECTED] der.co.uk wrote: Jean-Paul Calderone wrote: PyGTK would presumably implement its pending call callback by writing a byte to a pipe which it is also passing to poll(). But doing that in a signal handler context invokes undefined behaviour according to POSIX. write(2) is explicitly listed as async-signal safe in IEEE Std 1003.1, 2004. Was this changed in a later edition? Otherwise, I don't understand what you mean by this. Try looking at the C90 or C99 standard, for a start :-( NOTHING may safely be done in a real signal handler, except possibly setting a value of type static volatile sig_atomic_t. And even that can be problematic. And note that POSIX defers to C on what the C languages defines. So, even if the function is async-signal-safe, the code that calls it can't be! POSIX's lists are complete fantasy, anyway. Look at the one that defines thread-safety, and then try to get your mind around what exit being thread-safe actually implies (especially with regard to atexit functions). Thanks for expounding. Given that it is basically impossible to do anything useful in a signal handler according to the relevant standards (does Python's current signal handler even avoid relying on undefined behavior?), how would you suggest addressing this issue? It seems to me that it is actually possible to do useful things in a signal handler, so long as one accepts that doing so is relying on platform specific behavior. How hard would it be to implement this for the platforms Python supports, rather than for a hypothetical standards-exact platform? Jean-Paul ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Adding an rslice() builtin?
On Tue, 29 Aug 2006 17:44:40 +0100, David Hopwood [EMAIL PROTECTED] wrote: Nick Coghlan wrote: A discussion on the py3k list reminded me that translating a forward slice into a reversed slice is significantly less than obvious to many people. Not only do you have to negate the step value and swap the start and stop values, but you also need to subtract one from each of the step values, and ensure the new start value was actually in the original slice: reversed(seq[start:stop:step]) becomes seq[(stop-1)%abs(step):start-1:-step] An rslice builtin would make the latter version significantly easier to read: seq[rslice(start, stop, step)] Or slice.reversed(). Better, slice.reversed(length). Jean-Paul ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
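A prototype of such a helper is straightforward in pure Python. The signature, taking an explicit length per the `slice.reversed(length)` suggestion, is an assumption about how the proposal would look, not an accepted API:

```python
def rslice(start, stop, step, length):
    """Return a slice selecting the same elements as
    seq[start:stop:step] for a sequence of the given length,
    but in reverse order."""
    # Normalize to concrete non-None indices for this length.
    start, stop, step = slice(start, stop, step).indices(length)
    n = len(range(start, stop, step))   # number of selected elements
    if n == 0:
        return slice(0, 0, 1)           # empty stays empty
    last = start + (n - 1) * step       # final element actually selected
    new_stop = start - step             # one step "before" the first one
    if new_stop < 0:
        new_stop = None                 # avoid negative-index wraparound
    return slice(last, new_stop, -step)

seq = list(range(10))
print(seq[rslice(2, 9, 3, len(seq))])
```

The awkward `new_stop < 0` case is exactly the pitfall the thread complains about: a naive `start-1` stop of `-1` silently wraps around to the end of the sequence, which is why spelling this out as a builtin (or classmethod) is attractive.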
Re: [Python-Dev] [Python-3000] What should the focus for 2.6 be?
On Mon, 21 Aug 2006 14:21:30 -0700, Josiah Carlson [EMAIL PROTECTED] wrote:

> Talin [EMAIL PROTECTED] wrote:
> [snip]
>> I've been thinking about the transition to unicode strings, and I
>> want to put forward a notion that might allow the transition to be
>> done gradually instead of all at once. The idea would be to
>> temporarily introduce a new name for 8-bit strings - let's call it
>> "ascii". An ascii object would be exactly the same as today's 8-bit
>> strings.
>
> There are two parts to the unicode conversion; all literals are
> unicode, and we don't have strings anymore, we have bytes. Without
> offering the bytes object, then people can't really convert their
> code. String literals can be handled with the -U command line option
> (and perhaps having the interpreter do the str=unicode assignment
> during startup).

A third step would ease this transition significantly: a
unicode_literals __future__ import.

> Here's my suggestion: every feature, syntax, etc., that is slated for
> Py3k, let us release bit by bit in the 2.x series. That lets the 2.x
> series evolve into the 3.x series in a somewhat more natural way than
> the currently proposed *everything breaks*. If it takes 1, 2, 3, or 10
> more releases in the 2.x series to get to all of the 3.x features,
> great. At least people will have a chance to convert, or at least
> write correct code for the future.

This really seems like the right idea. "Shoot the moon" upgrades are
almost always worse than incremental upgrades. The incremental path is
better for everyone involved. For developers of Python, it gets more
people using and providing feedback on the new features being developed.
For developers with Python, it keeps the scope of a particular upgrade
more manageable, letting the developer focus on a much smaller set of
changes to be made to their application.
Jean-Paul ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] dict containment annoyance
On Sat, 12 Aug 2006 18:57:02 +0200, tomer filiba [EMAIL PROTECTED] wrote: the logic is simple: every `x` is either contained in `y` or not. if `x` *cannot* be contained in `y`, then the answer is a strong no, but that's still a no. def blacklisted(o): try: # Is the object contained in the blacklist set? return o in _blacklistset except TypeError: # If it *cannot* be contained in the blacklist set, # then it probably isn't. return False Jean-Paul ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] Release manager pronouncement needed: PEP 302 Fix
On Fri, 28 Jul 2006 18:00:36 -0400, Phillip J. Eby [EMAIL PROTECTED] wrote: At 10:55 PM 7/28/2006 +0200, Martin v. Löwis wrote: Phillip J. Eby wrote: The issue is that a proper fix that caches existence requires adding new types to import.c and thus might appear to be more of a feature. I was therefore reluctant to embark upon the work without some assurance that it wouldn't be rejected as adding a last-minute feature. So do you have a patch, or are going to write one? Yes, it's checked in as r50916. It ultimately turned out to be simpler than I thought; only one new type (imp.NullImporter) was required. Is this going to be the final state of PEP 302 support in Python 2.5? I don't particularly care how this ends up, but I'd like to know what has been decided on (PEP 302 doesn't seem to have been updated yet) so I can fix Twisted's test suite (which cannot even be run with Python 2.5b3 right now). Jean-Paul ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com