[Python-Dev] unittest's redundant assertions: asserts vs. failIf/Unlesses
Hi all,

This gem from unittest.py is pretty much the opposite of "one obvious way":

    # Synonyms for assertion methods
    assertEqual = assertEquals = failUnlessEqual
    assertNotEqual = assertNotEquals = failIfEqual
    assertAlmostEqual = assertAlmostEquals = failUnlessAlmostEqual
    assertNotAlmostEqual = assertNotAlmostEquals = failIfAlmostEqual
    assertRaises = failUnlessRaises
    assert_ = assertTrue = failUnless
    assertFalse = failIf

Could these be removed for 3k?

There was a short discussion about this among some of those present in the Python Core sprint room at PyCon today, and most preferred the assertEqual form for [Not][Almost]Equal and Raises.

With assertFalse vs. failIf (and assertTrue vs. failUnless) there was far less agreement. JUnit uses assertTrue exclusively, and most people said they feel that using assertTrue would be more consistent, but many (myself included) still think failUnless and failIf are much more natural. Another issue with assertTrue is that it doesn't actually test for 'True', strictly speaking, since it is based on truth, not identity.

It's also interesting to note the original commit message:

    r34209 | purcell | 2003-09-22 06:08:12 -0500 (Mon, 22 Sep 2003)
    [...]
    - New assertTrue and assertFalse aliases for comfort of JUnit users
    [...]

assertEqual (and its cousins) were already present at that point.

In any case, if the decision is made to not use failUnless, something still needs to be done with assert_ vs. assertTrue. assert_ seems somewhat better to me, in that it has fewer characters, but I think that a case could certainly be made to keep both of these.

I certainly don't have the authority to make a call on any of this, but if someone else decides what colour to paint this bike shed, I can try to get it done (hopefully with 2.6 warnings) tomorrow.

Cheers,

-Gabriel

P.S. If you were in the sprint room and feel terribly misrepresented, please feel free to give me a swift kick both on-list and in person tomorrow morning.
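Gabriel's point that assertTrue checks truth rather than identity with True can be seen in a small sketch using only the stock unittest API (class and method names here are invented for illustration):

```python
import unittest

class TruthVsIdentity(unittest.TestCase):
    def test_truthy_but_not_True(self):
        value = [1]                       # truthy, but certainly not the object True
        self.assertTrue(value)            # passes: only truth is checked
        self.assertFalse(value is True)   # an identity check tells a different story

# Run the case programmatically so the outcome is visible.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(TruthVsIdentity)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

Both assertions pass: assertTrue is satisfied by any truthy object, which is exactly why it is not a strict test for the singleton True.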
___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
Re: [Python-Dev] unittest's redundant assertions: asserts vs. failIf/Unlesses
+1 to assert* from me. The fail* variants always feel like double-negatives. I also always use assertTrue instead of assert_. But I don't care enough to argue about it. :)

On Wed, Mar 19, 2008 at 2:24 AM, Gabriel Grant [EMAIL PROTECTED] wrote:
> Hi all, This gem from unittest.py is pretty much the opposite of one obvious way: [...]

--
Namasté,
Jeffrey Yasskin
http://jeffrey.yasskin.info/
Re: [Python-Dev] 3.0 buildbots all red
Trent Nelson wrote:
> Sounds like a challenge if ever I've heard one -- care to wager a beer on it? (Only applies to buildbots that are connected/online.)
> Make sure you get a screen shot for OnYourDesktop if/when they *do* go green!

Screenshot? I'm going to buy a pack of iron-on transfers and sell t-shirts of it online. "All the buildbots were green momentarily after PyCon 2008... and all I got was this lousy t-shirt."

<British humour>
Momentarily? You mean they were only up for a few seconds?
</British humour>
[Python-Dev] The Breaking of distutils and PyPI for Python 3000?
As I'm digging into packaging issues here at PyCon, a couple of Python 3000 related matters occur to me. As I'm new to Python 3000 development, if these have already been addressed in prior discussions, I apologize for taking your time.

1. What is the plan for PyPI when Python 3.0 comes out and dependencies start getting satisfied from distributions across the great divide, e.g. a 3.0-specific package pulls from PyPI a 2.x-specific package to meet some need? Are there plans to fork PyPI, apply special tags to uploads, or what? While binary distributions are tagged with the Python version, source distributions are not. And of course a dependency expression as it stands today for SomePackage 2.4 may pull 3.0 to satisfy it.

2. There have been attempts over the years to fix distutils, with the last one being in 2006 by Anthony Baxter. He stated that a major hurdle was the strong demand to respect backward compatibility, and he finally gave up. One of the purposes of Python 3.0 was the freedom to break backward compatibility for the sake of doing the right thing. So is it now permissible to give distutils a good reworking and stop letting compatibility issues hold us back?

-Jeff
Re: [Python-Dev] Capsule Summary of Some Packaging/Deployment Technology Concerns
Phillip J. Eby wrote:
> I'm actually happy to hear that there's this much energy available -- hopefully some of it can be harnessed towards positive solutions. When I began developing setuptools, I often asked for the input of packagers, developers, etc., through the distutils-sig... and was met with overwhelming silence. So the fact that there is now a group of people who are ready to work for some solutions seems like a positive change, to me.

I can appreciate how frustrating silence is when you call for input. Let's see if we can keep the volunteer energy going this time around.

It's hard to make design decisions regarding itches you don't personally have, and which other people won't help scratch. Unfortunately, a lot of the proposals from packaging system people have been of the form of "fix this for us by breaking things for other people." Not all of them, though. Many have been very helpful, contributing troubleshooting help and good patches. That some of those good patches took nearly a year to get into setuptools (some from Fedora just got into 0.6c8 that were sent to me almost a year ago) is because I'm the only person reviewing setuptools patches, and I've spent only a few days in the last year doing focused development work on setuptools (as opposed to answering questions about it on the SIG).

It's never a good thing when people's patches sit around, regardless of where they come from. But that's not the same thing as *rejecting* the patches.

I and others appreciate your call for more patches on various topics. However a long delay in applying them will discourage contribution. Are you open to giving certain others patch view/commit privileges to setuptools? I'd be willing to help out, and keep a carefully balanced hand in what is accepted.

-Jeff
Re: [Python-Dev] [Distutils] The Breaking of distutils and PyPI for Python 3000?
On 19/03/2008, Jeff Rush [EMAIL PROTECTED] wrote:
> 1. What is the plan for PyPI when Python 3.0 comes out and dependencies start getting satisfied from distribution across the great divide, e.g. a 3.0-specific package pulls from PyPI a 2.x-specific package to meet some need?

As distutils (and core Python) doesn't do any automatic dependency management, this is a setuptools issue. As such, it's up to setuptools to deal with it. There may be infrastructure changes that would be generally useful, but there's nothing *needed* for the core.

> 2. There have been attempts over the years to fix distutils [...] So is it now permissible to give distutils a good reworking and stop letting compatibility issues hold us back?

Sounds reasonable. I'm sure patches would be considered, but past discussions around including setuptools have been controversial and generally not reached consensus (for reasons other than pure backward compatibility). Also, while compatibility isn't as important for 3.0, smooth migration *is* -- so any incompatible proposal must include some consideration of how to assist people with huge, complex setup.py files which use distutils internals in complex ways. So be prepared to do some work :-)

(But I'd be happy to see distutils improved. I just don't have any need for such improvement, personally.)

Paul.
Re: [Python-Dev] logging shutdown (was: Re: [Python-checkins] r61431 - python/trunk/Doc/library/logging.rst)
> I think (repeatedly) testing an app through IDLE is a reasonable use case.

I don't disagree, but cleanup of logging may not be all that trivial in some scenarios: for example, if multiple threads have references to handlers, then after shutdown() was called, logging by those threads would fail due to I/O errors - a reasonable outcome. However, logging cannot know if threads contain references to loggers or handlers, and so I do not e.g. remove loggers on shutdown().

> Would it be reasonable for shutdown to remove logging from sys.modules, so that a rerun has some chance of succeeding via its own import?

I'm not sure that would be enough in the scenario I mentioned above - would removing a module from sys.modules be a guarantee of removing it from memory? If so, what if still-running threads contained references to loggers etc. - wouldn't they potentially segfault?

It's safer, in my view, for the developer of an application to do cleanup of their app if they want to test repeatedly in IDLE. After all, this could involve cleanup of other resources which are nothing to do with logging; and a developer can certainly remove handlers from loggers using the existing API, after ensuring all their threads are done and there are no unaccounted-for references to loggers and handlers.

Regards,

Vinay Sajip
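The application-side cleanup Vinay describes — detaching and closing handlers via the existing public API before a rerun — might look like this (the logger name "myapp" is invented for illustration):

```python
import logging

logger = logging.getLogger("myapp")      # hypothetical application logger
logger.addHandler(logging.StreamHandler())

def cleanup_logging(logger):
    # Detach and close every handler so a rerun (e.g. inside IDLE)
    # can set logging up again from scratch, assuming all application
    # threads are already done with the logger.
    for handler in list(logger.handlers):
        logger.removeHandler(handler)
        handler.close()

cleanup_logging(logger)
```

The iteration copies the handler list first, since removeHandler() mutates it.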
[Python-Dev] Checking for the -3 flag from Python code
This flag is exposed to Python code as sys.flags.py3k_warning, so the hack added to some of the test code that I saw go by on python-checkins isn't needed :)

Cheers,
Nick

--
Nick Coghlan | [EMAIL PROTECTED] | Brisbane, Australia
http://www.boredomandlaziness.org
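The check Nick describes is a one-liner; guarded with getattr it is also harmless on interpreters whose sys.flags has no such attribute (the flag only exists on the 2.x line):

```python
import sys

# 1 when the interpreter was started with -3 (Python 2.6+); absent elsewhere,
# in which case the getattr default of 0 makes this simply False.
py3k_warnings_on = bool(getattr(sys.flags, "py3k_warning", 0))
```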
[Python-Dev] Improved thread switching
The company I work for has over the last couple of years created an application server for use in most of our customer projects. It embeds Python and most project code is written in Python by now. It is quite resource-hungry (several GB of RAM, MySQL databases of 50-100GB). And of course it is multi-threaded and, at least originally, we hoped to make it utilize multiple processor cores. Which, as we all know, doesn't sit very well with Python.

Our application runs heavy background calculations most of the time (in Python) and has to service multiple (few) GUI clients at the same time, also using Python. The problem was that a single background thread would increase the response time of the client threads by a factor of 10 or (usually) more. This led me to add a dirty hack to the Python core to make it switch threads more frequently. While this hack greatly improved response time for the GUI clients, it also slowed down the background threads quite a bit: top would often show significantly less CPU usage -- 80% instead of the more usual 100%.

The problem with thread switching in Python is that the global semaphore used for the GIL is regularly released and immediately reacquired. Unfortunately, most of the time this leads to the very same thread winning the race on the semaphore again and thus more wait time for the other threads. This is where my dirty patch intervened and just did a nanosleep() for a short amount of time (I used 1000 nsecs).

I have since created a better scheduling scheme and written a small test program that nicely mimics what Python does, for some statistics. I call the scheduling algorithm the "round-robin semaphore" because threads can now run in a more or less round-robin fashion. Actually, it's just a semaphore with FIFO semantics. The implementation problem with the round-robin semaphore is the __thread variable I had to use because I did not want to change the signature of the Enter() and Leave() methods. For CPython, I have replaced this thread-local allocation with an additional field in the PyThreadState. Because of that, the patch for CPython I have already created is a bit more involved than the simple nanosleep() hack. Consequently, it's not very polished yet and not at all as portable as the rest of the Python core.

Below are the results from the test program, which compares all three scheduling mechanisms -- standard Python, my dirty hack, and the new round-robin semaphore -- followed by the test program itself, containing the three implementations nicely encapsulated. The program was run on a quad-core Xeon 1.86 GHz on Fedora 5 x86_64. The first lines of each output block (including the name of the algorithm) should be self-explanatory. The last two show a distribution of wait times for the individual threads. The ideal distribution would be everything in the "number of threads" bucket (2 in this case) and zero everywhere else. As you can see, the round-robin semaphore is pretty close to that.

Also, because of the high thread switching frequency, we could lower Python's checkinterval -- the jury is still out on the actual value, likely something between 1000 and 1. I can post my Python patch if there is enough interest.

Thanks for your attention.
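Stefan's round-robin semaphore lives in C inside the interpreter core. Purely to illustrate the FIFO hand-off semantics he describes — a releasing thread cannot immediately win the lock back while others are queued — a pure-Python sketch (class name and structure invented here, not his implementation) could look like:

```python
import threading
from collections import deque

class FIFOLock:
    """Grant the lock in strict arrival order, so a thread that releases
    and immediately re-acquires cannot starve the queued waiters -- the
    behaviour the round-robin semaphore gives the GIL."""

    def __init__(self):
        self._cond = threading.Condition()
        self._queue = deque()      # one ticket per waiter, in arrival order
        self._held = False

    def acquire(self):
        ticket = object()          # unique token identifying this waiter
        with self._cond:
            self._queue.append(ticket)
            # Wait until the lock is free AND we are at the head of the queue.
            while self._held or self._queue[0] is not ticket:
                self._cond.wait()
            self._queue.popleft()
            self._held = True

    def release(self):
        with self._cond:
            self._held = False
            self._cond.notify_all()
```

notify_all() is the simple (thundering-herd) spelling; a C implementation can wake exactly the head waiter instead, which is part of what makes the real patch faster than this sketch.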
Synch: Python lock
iteration count: 24443   thread switches: 10
     1     2  3  4  5  6  7  8  9  10  -10  -50  -100  -1k  more
 24433     0  0  0  0  0  0  0  0   0    0    1     1    6     0

Synch: Dirty lock
iteration count: 25390   thread switches: 991
     1     2  3  4  5  6  7  8  9  10  -10  -50  -100  -1k  more
 24399    10  0  0  0  0  1  0  1   0  975    1     1    0     0

Synch: round-robin semaphore
iteration count: 23023   thread switches: 22987
     1     2  3  4  5  6  7  8  9  10  -10  -50  -100  -1k  more
    36 22984  0  0  0  0  0  0  0   0    1    0     0    0     0

// compile with g++ -g -O0 -pthread -Wall p.cpp
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <errno.h>
#include <assert.h>

// posix stuff
class TMutex {
    pthread_mutex_t mutex;
    static pthread_mutex_t initializer_normal;
    static pthread_mutex_t initializer_recursive;
    TMutex(const TMutex &);
    TMutex &operator=(const TMutex &);
public:
    TMutex(bool recursive = true);
    ~TMutex() { pthread_mutex_destroy(&mutex); }
    void Lock() { pthread_mutex_lock(&mutex); }
    bool TryLock() { return pthread_mutex_trylock(&mutex) == 0; }
    void Unlock() { pthread_mutex_unlock(&mutex); }
    friend class TCondVar;
};

class TCondVar {
    pthread_cond_t cond;
    static pthread_cond_t initializer;
    TCondVar(const TCondVar &);
    TCondVar &operator=(const TCondVar &);
public:
    TCondVar();
    ~TCondVar() { pthread_cond_destroy(&cond); }
    void Wait(TMutex *mutex) { pthread_cond_wait(&cond,
Re: [Python-Dev] [Python-3000-checkins] r61522 - python/branches/py3k/Lib/test/test_print.py
eric.smith wrote:
> +def stdout_redirected(new_stdout):
> +    save_stdout = sys.stdout
> +    sys.stdout = new_stdout
> +    try:
> +        yield None
> +    finally:
> +        sys.stdout = save_stdout

I think this test could easily be tweaked to use test.test_support.captured_stdout rather than reinventing the wheel :)

(cc'ing python-dev for visibility since the sprints are generating a lot of python-checkins traffic)

Cheers,
Nick.

--
Nick Coghlan | [EMAIL PROTECTED] | Brisbane, Australia
http://www.boredomandlaziness.org
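For context, the generator in the diff only works in a with-statement once it is decorated; a self-contained version of the pattern looks like this (yielding the stream rather than None for convenience, and using io.StringIO to stand in for a file — both departures from the checked-in snippet):

```python
import sys
from contextlib import contextmanager
from io import StringIO

@contextmanager
def stdout_redirected(new_stdout):
    # Swap sys.stdout for the duration of the with-block, restoring it
    # even if the body raises.
    save_stdout = sys.stdout
    sys.stdout = new_stdout
    try:
        yield new_stdout
    finally:
        sys.stdout = save_stdout

with stdout_redirected(StringIO()) as out:
    print("captured")
```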
Re: [Python-Dev] [Python-3000-checkins] r61522 - python/branches/py3k/Lib/test/test_print.py
Nick Coghlan wrote:
> I think this test could easily be tweaked to use test.test_support.captured_stdout rather than reinventing the wheel :)

Who reinvented the wheel? I stole this code from the PEP! :)

I'll modify it. Thanks for noticing.

Eric.
Re: [Python-Dev] The Breaking of distutils and PyPI for Python 3000?
> 1. What is the plan for PyPI when Python 3.0 comes out and dependencies start getting satisfied from distribution across the great divide, e.g. a 3.0-specific package pulls from PyPI a 2.x-specific package to meet some need? Are there plans to fork PyPI, apply special tags to uploads or what?

I don't see the need to fork PyPI. For packages (or "distributions", to avoid confusion with Python packages), I see two options:

a) provide a single release that supports both 2.x and 3.x. The precise strategy to do so might vary. If one is going for a single source version, have setup.py run 2to3 (or perhaps 3to2). For dual-source packages, have setup.py just install the one for the right version; setup.py itself needs to be written so it runs on both versions (which is easy to do).

b) switch to Python 3 at some point (i.e. burn your bridges).

You seem to be implying that some projects may release separate source distributions. I cannot imagine why somebody would want to do that.

> 2. There have been attempts over the years to fix distutils, with the last one being in 2006 by Anthony Baxter. He stated that a major hurdle was the strong demand to respect backward compatibility and he finally gave up.

Can you kindly refer to some archived discussion for that?

> One of the purposes of Python 3.0 was the freedom to break backward compatibility for the sake of doing the right thing. So is it now permissible to give distutils a good reworking and stop letting compatibility issues hold us back?

I don't know what the proposed changes are, but in general, I feel that the need for backwards compatibility is exaggerated.

Regards,
Martin
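Martin's dual-source option boils down to a version switch inside setup.py, written in the common subset of 2.x and 3.x. A minimal sketch (the src2/src3 directory names are invented for illustration):

```python
import sys

# Pick the source tree matching the running interpreter; setup.py itself
# stays in the 2.x/3.x common subset so it runs unchanged on both.
if sys.version_info[0] >= 3:
    package_dir = {"": "src3"}   # hypothetical Python 3 sources
else:
    package_dir = {"": "src2"}   # hypothetical Python 2 sources

# setup(..., package_dir=package_dir) would then be called as usual.
```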
Re: [Python-Dev] unittest's redundant assertions: asserts vs. failIf/Unlesses
On Mar 19, 2008, at 3:20 AM, Jeffrey Yasskin wrote:
> +1 to assert* from me. The fail* variants always feel like double-negatives. I also always use assertTrue instead of assert_. But I don't care enough to argue about it. :)

I'm in the camp that Gabriel describes. I prefer assertEqual/assertRaises and failIf/failUnless. I like the latter because it reads nicely: fail unless [this thing is true], fail if [this thing is true].

OTOH, I'd rather there be OOWTDI, so whatever the consensus is is fine with me.

-Barry
Re: [Python-Dev] Capsule Summary of Some Packaging/Deployment Technology Concerns
On Mar 19, 2008, at 3:57 AM, Jeff Rush wrote:
> I and others appreciate your call for more patches on various topics. However a long delay in applying them will discourage contribution. Are you open to giving certain others patch view/commit privileges to setuptools? I'd be willing to help out, and keep a carefully balanced hand in what is accepted.

The Python sandbox has a setuptools directory. Is this the canonical location for the code? If so, then anybody who has Python commit privileges can commit to it and help further develop setuptools. If not, why not, and what is the sandbox setuptools used for?

-Barry
Re: [Python-Dev] unittest's redundant assertions: asserts vs. failIf/Unlesses
> OTOH, I'd rather there be OOWTDI so whatever the consensus is is fine with me.

This strikes me as a gratuitous API change of the kind Guido was warning about in his recent post: don't change your APIs incompatibly when porting to Py3k. Yes, it removes redundancy, but it really doesn't change the cognitive load (at least for native speakers).

If the blessed set were restricted to assert*, what would users of fail* do when trying to test their packages on py3k? Search and replace, or monkey patch unittest? I'm guessing monkey patch unittest, which means the change saves nothing, and costs plenty.

Note the acronym is OOWTDI, not OONTDI - using a different name does not necessarily make it a different way.

--
Michael Urman
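The monkey patch Michael predicts would only be a few lines — which is exactly why he expects people to prefer it over a mass rename. A sketch of what such a patch could look like (guarded with hasattr so it is a no-op on interpreters where the fail* names still exist):

```python
import unittest

# Restore the fail* spellings where the aliases have been removed,
# by pointing them at the surviving assert* methods.
for alias, name in [("failUnless", "assertTrue"),
                    ("failIf", "assertFalse"),
                    ("failUnlessEqual", "assertEqual"),
                    ("failIfEqual", "assertNotEqual"),
                    ("failUnlessRaises", "assertRaises")]:
    if not hasattr(unittest.TestCase, alias):
        setattr(unittest.TestCase, alias, getattr(unittest.TestCase, name))
```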
Re: [Python-Dev] Installing Python 2.6 alpha1 on Windows XP
Martin v. Löwis wrote:
> That's odd. In theory, having msvcr90.dll in C:\Python26 should be sufficient, as once python.exe is loaded, its directory is added to the DLL search path.

Maybe it's something to do with the side-by-side manifest installation stuff (or whatever it's called). Yes, with VS 2008, the DLL search path becomes irrelevant (or so it seems). Martin, can you comment? It looks like the 3.0 installer uses 2 copies of msvcr90.dll, where the 2.6 one doesn't. I would have thought that only one is necessary, but Gregor's experiments seem to demonstrate otherwise.

I haven't figured it out myself; it's a complete mess, and Microsoft is heavily wasting our time. It seems that you absolutely *must* have the manifest file in each directory that has a DLL which links with the CRT. Whether or not separate copies of the DLL are then also necessary, and whether or not that causes two copies to be loaded into the address space, I don't know. HELP

To reproduce the problem, you probably have to test on a machine which doesn't have the CRT redistributable installed centrally (neither through VS 2008 installation, nor by running the standalone CRT installer, nor by having installed any other software that provides an SxS copy of the CRT).

FYI, here is a related bug report on this: http://bugs.python.org/issue2256

If you send me private email, I can test installs on that machine.
Re: [Python-Dev] Capsule Summary of Some Packaging/Deployment Technology Concerns
At 03:57 AM 3/19/2008 -0500, Jeff Rush wrote:
> Are you open to giving certain others patch view/commit privileges to setuptools?

Jim Fulton has such access already. I'm open to extending that to others who have a good grasp of the subtleties involved. Truthfully, if we can just get 0.6 put to bed, I could probably open up the trunk a lot wider. One of the things that slows me down is that patches usually don't come with tests, so I usually have to manually smoke-test them for the scenarios I think they'll affect. There isn't really any automated procedure.

Probably the most frustrating thing (or chief amongst the most frustrating things) about setuptools development is that it's a black hole. By which I mean that backward compatibility and cruft accretion make it difficult to get out of. In the beginning, there was the distutils. Distutils begat setuptools, and setuptools begat virtualenv and zc.buildout and source control plugins. Etc., etc.

What I think is really needed in the long run is to keep eggs, but get rid of setuptools and the distutils in their current form. There's a lot of brokenness there, and also a lot of accumulated cruft. We really need a "distutils 3000", and it needs to be built on a better approach. In truth, my *real* motivation for PEP 365's bootstrap tool isn't so much to support the package management tools we have today, as it is to support a new one tomorrow. I have a few ideas for ways to shift the paradigm of how individual projects get built, to incorporate many scenarios that don't work well now. But to implement those things in such a next-generation tool, I will not want to be restricted to just what's in the stdlib or what can be bundled in the tool.

(Btw, by "real motivation", I don't mean I've been deceptive about my intentions; I mean that my strong intuition that such a bootstrap facility is needed is probably being fueled by the long-term desire to replace the entire distutils-based infrastructure with something better.)

> I'd be willing to help out, and keep a carefully balanced hand in what is accepted.

And I think it's probably getting close to time I stepped down from day-to-day management of the codebase (which is more like month-to-month or quarter-to-quarter for me lately). It will probably be a lot easier for me to step back and critique stuff that goes in, after the fact, than to go over the stuff beforehand. :)

I'm not sure exactly how to go about such a handoff, though. My guess is that we need a bug/patch tracker, and a few people to review, test, and apply. Maybe a transitional period during which I just say yea or nay and let others do the test and apply, before opening it up entirely. That way, we can perhaps solidify a few principles that I'd like to have stay in place. (Like no arbitrary post-install code hooks.)

btw, offtopic question: are you by any chance the same Jeff Rush who invented EchoMail?
Re: [Python-Dev] unittest's redundant assertions: asserts vs. failIf/Unlesses
Michael Urman writes:
> Yes it removes redundancy, but it really doesn't change the cognitive load (at least for native speakers).

Actually, OONTDI is important to me, cognitively. Multiple names implies the possibility of multiple semantics, often unintentionally. Here that can't be the case by construction, but I wouldn't have known that if I weren't reading this thread. And I still won't be sure for any given one of them, since there are so many that remembering the whole list is hard.

> If the blessed set were restricted to assert*, what would users of fail* do when trying to test their packages on py3k? Search and replace, or monkey patch unittest?

So we should add this to 2to3, no? They're going to run that anyway.

> I'm guessing monkey patch unittest, which means

I can flag them as people in too much of a hurry to do things right. <wink>

> the change saves nothing, and costs plenty.

That's a bit short-sighted, no? It saves nothing for old working code. But 90% of the Python code in use 5 years from now will be written between now and then.
Re: [Python-Dev] Improved thread switching
On Tue, Mar 18, 2008 at 1:29 AM, Stefan Ring [EMAIL PROTECTED] wrote: The company I work for has over the last couple of years created an application server for use in most of our customer projects. It embeds Python and most project code is written in Python by now. It is quite resource-hungry (several GB of RAM, MySQL databases of 50-100GB). And of course it is multi-threaded and, at least originally, we hoped to make it utilize multiple processor cores. Which, as we all know, doesn't sit very well with Python. Our application runs heavy background calculations most of the time (in Python) and has to service multiple (few) GUI clients at the same time, also using Python. The problem was that a single background thread would increase the response time of the client threads by a factor of 10 or (usually) more. This led me to add a dirty hack to the Python core to make it switch threads more frequently. While this hack greatly improved response time for the GUI clients, it also slowed down the background threads quite a bit. top would often show significantly less CPU usage -- 80% instead of the more usual 100%. The problem with thread switching in Python is that the global semaphore used for the GIL is regularly released and immediately reacquired. Unfortunately, most of the time this leads to the very same thread winning the race on the semaphore again and thus more wait time for the other threads. This is where my dirty patch intervened and just did a nanosleep() for a short amount of time (I used 1000 nsecs). Can you try with a call to sched_yield(), rather than nanosleep()? It should have the same benefit but without as much performance hit. If it works, but is still too much hit, try tuning the checkinterval to see if you can find an acceptable throughput/responsiveness balance. 
-- Adam Olsen, aka Rhamphoryncus
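The checkinterval tuning Adam suggests is adjustable from plain Python code, without patching the core. A minimal sketch, modernized so it runs today: 2.x's sys.setcheckinterval(n) counted bytecodes between GIL release points, while CPython 3.2+ replaced it with the time-based sys.setswitchinterval(seconds), which is the knob used here:

```python
import sys

# The 2.x knob was sys.setcheckinterval(n): bytecodes executed between
# GIL release points. CPython 3.2+ replaced it with a time-based
# sys.setswitchinterval(seconds), used here so the sketch runs today.
default = sys.getswitchinterval()      # 0.005 s on a stock build

# A smaller interval means more frequent switch opportunities, which
# favours responsive GUI threads at some cost to raw throughput.
sys.setswitchinterval(0.001)
assert abs(sys.getswitchinterval() - 0.001) < 1e-9

sys.setswitchinterval(default)         # restore the default
```

This is exactly the throughput/responsiveness trade-off under discussion: shrinking the interval helps the client threads, at the price of more switching overhead for the background calculation.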
Re: [Python-Dev] Improved thread switching
Adam Olsen rhamph at gmail.com writes: Can you try with a call to sched_yield(), rather than nanosleep()? It should have the same benefit but without as much performance hit. If it works, but is still too much hit, try tuning the checkinterval to see if you can find an acceptable throughput/responsiveness balance. I tried that, and it had no effect whatsoever. I suppose it would have an effect on a single CPU or an otherwise heavily loaded SMP system, but that's not the scenario we care about.
Re: [Python-Dev] unittest's redundant assertions: asserts vs. failIf/Unlesses
2008/3/19, Barry Warsaw [EMAIL PROTECTED]: +1 to assert* from me. The fail* variants always feel like double-negatives. I also always use assertTrue instead of assert_. But I don't care enough to argue about it. :) +1 to the plain affirmative propositions (assert*) instead of the kind-of-double-negative stuff. It helps a lot, especially if you're not a native English speaker. +1 to remove them in Py3k. Questions: - Should 2to3 fix them? - Should we add a warning in 2.6 and remove them in 2.7? Or 2.7/2.8? Regards, -- .Facundo Blog: http://www.taniquetil.com.ar/plog/ PyAr: http://www.python.org/ar/
Re: [Python-Dev] Improved thread switching
On Wed, Mar 19, 2008 at 10:09 AM, Stefan Ring [EMAIL PROTECTED] wrote: Adam Olsen rhamph at gmail.com writes: Can you try with a call to sched_yield(), rather than nanosleep()? It should have the same benefit but without as much performance hit. If it works, but is still too much hit, try tuning the checkinterval to see if you can find an acceptable throughput/responsiveness balance. I tried that, and it had no effect whatsoever. I suppose it would have an effect on a single CPU or an otherwise heavily loaded SMP system, but that's not the scenario we care about. So you've got a lightly loaded SMP system? Multiple threads all blocked on the GIL, multiple CPUs to run them, but only one CPU is active? In that case I can imagine how sched_yield() might finish before the other CPUs wake up a thread. A FIFO scheduler would be the right thing here, but it's only a short term solution. Care for a long term solution? ;) http://code.google.com/p/python-safethread/ -- Adam Olsen, aka Rhamphoryncus
Re: [Python-Dev] unittest's redundant assertions: asserts vs. failIf/Unlesses
Gabriel Grant wrote: Hi all, This gem from unittest.py is pretty much the opposite of one obvious way:

# Synonyms for assertion methods
assertEqual = assertEquals = failUnlessEqual
assertNotEqual = assertNotEquals = failIfEqual
assertAlmostEqual = assertAlmostEquals = failUnlessAlmostEqual
assertNotAlmostEqual = assertNotAlmostEquals = failIfAlmostEqual
assertRaises = failUnlessRaises
assert_ = assertTrue = failUnless
assertFalse = failIf

Could these be removed for 3k? There was a short discussion about this among some of those present in the Python Core sprint room at PyCon today, and most preferred the assertEqual form for [Not][Almost]Equal and Raises. With assertFalse vs. failIf (and assertTrue vs. failUnless) there was far less agreement. JUnit uses assertTrue exclusively, and most people said they feel that using assertTrue would be more consistent, but many (myself included) still think failUnless and failIf are much more natural. Another issue with assertTrue is that it doesn't actually test for 'True', strictly speaking, since it is based on equality, not identity. +1 on standardising on 'assert*' and removing 'fail*'. +1 on making 'assertTrue' test for True rather than any non-false object (and vice versa for assertFalse). For migration, a simple subclass of TestCase that provides the old methods/semantics is trivial to write. No need for monkey-patching. Michael Foord It's also interesting to note the original commit message: r34209 | purcell | 2003-09-22 06:08:12 -0500 (Mon, 22 Sep 2003) [...] - New assertTrue and assertFalse aliases for comfort of JUnit users [...] assertEqual (and its cousins) were already present at that point. In any case, if the decision is made to not use failUnless, something still needs to be done with assert_ vs. assertTrue. assert_ seems somewhat better to me, in that it has fewer characters, but I think that a case could certainly be made to keep both of these.
I certainly don't have the authority to make a call on any of this, but if someone else decides what colour to paint this bike shed, I can try to get it done (hopefully with 2.6 warnings) tomorrow. Cheers, -Gabriel P.S. If you were in the sprint room and feel terribly misrepresented, please feel free to give me a swift kick both on-list and in person tomorrow morning.
Re: [Python-Dev] Improved thread switching
Adam Olsen rhamph at gmail.com writes: On Wed, Mar 19, 2008 at 10:09 AM, Stefan Ring s.r at visotech.at wrote: Adam Olsen rhamph at gmail.com writes: Can you try with a call to sched_yield(), rather than nanosleep()? It should have the same benefit but without as much performance hit. If it works, but is still too much hit, try tuning the checkinterval to see if you can find an acceptable throughput/responsiveness balance. I tried that, and it had no effect whatsoever. I suppose it would have an effect on a single CPU or an otherwise heavily loaded SMP system, but that's not the scenario we care about. So you've got a lightly loaded SMP system? Multiple threads all blocked on the GIL, multiple CPUs to run them, but only one CPU is active? In that case I can imagine how sched_yield() might finish before the other CPUs wake up a thread. A FIFO scheduler would be the right thing here, but it's only a short term solution. Care for a long term solution? ;) http://code.google.com/p/python-safethread/ I've already seen that, but it would not help us in our current situation. The performance penalty really is too heavy. Our system is slow enough already ;). And it would be very difficult, bordering on impossible, to parallelize. Plus, I can imagine that all extension modules (and our own code) would have to be adapted. The FIFO scheduler is perfect for us because the load is typically quite low. It's mostly at those times when someone runs a lengthy calculation that all other users suffer greatly increased response times.
Re: [Python-Dev] Improved thread switching
On Wed, Mar 19, 2008 at 10:42 AM, Stefan Ring [EMAIL PROTECTED] wrote: On Mar 19, 2008 05:24 PM, Adam Olsen [EMAIL PROTECTED] wrote: On Wed, Mar 19, 2008 at 10:09 AM, Stefan Ring [EMAIL PROTECTED] wrote: Adam Olsen rhamph at gmail.com writes: Can you try with a call to sched_yield(), rather than nanosleep()? It should have the same benefit but without as much performance hit. If it works, but is still too much hit, try tuning the checkinterval to see if you can find an acceptable throughput/responsiveness balance. I tried that, and it had no effect whatsoever. I suppose it would have an effect on a single CPU or an otherwise heavily loaded SMP system, but that's not the scenario we care about. So you've got a lightly loaded SMP system? Multiple threads all blocked on the GIL, multiple CPUs to run them, but only one CPU is active? In that case I can imagine how sched_yield() might finish before the other CPUs wake up a thread. A FIFO scheduler would be the right thing here, but it's only a short term solution. Care for a long term solution? ;) http://code.google.com/p/python-safethread/ I've already seen that, but it would not help us in our current situation. The performance penalty really is too heavy. Our system is slow enough already ;). And it would be very difficult, bordering on impossible, to parallelize. Plus, I can imagine that all extension modules (and our own code) would have to be adapted. The FIFO scheduler is perfect for us because the load is typically quite low. It's mostly at those times when someone runs a lengthy calculation that all other users suffer greatly increased response times. So you want responsiveness when idle but throughput when busy? Are those calculations primarily Python code, or does a C library do the grunt work? If it's a C library you shouldn't be affected by safethread's increased overhead.
-- Adam Olsen, aka Rhamphoryncus
Re: [Python-Dev] unittest's redundant assertions: asserts vs. failIf/Unlesses
On Wed, Mar 19, 2008 at 10:44 AM, Stephen J. Turnbull [EMAIL PROTECTED] wrote: So we should add this to 2to3, no? They're going to run that anyway. If 2to3 can handle this, that removes the larger half of my objection. I was under the impression that this kind of semantic inferencing was beyond its capabilities. But even if so, maybe it's safe to assume that those names aren't used in other contexts. The remaining, smaller half of my objection is that these aliases appear to have been added to reduce the friction when moving from another unit test system. Since the exact names are as much a matter of muscle memory as anything else being changed by py3k, that's not very important in this context. I still don't see the benefit paying for the cost. Are people genuinely confused by the plethora of names for the operations (instead of by their occasional misuse)? But I'm not the one offering a patch here, so I'll pipe down now. :) -- Michael Urman
Re: [Python-Dev] Capsule Summary of Some Packaging/Deployment Technology Concerns
Jeff Rush writes: I was in a Packaging BoF yesterday and, although not very relevant to the packager bootstrap thread, Guido has asked me to post some of the concerns. We did address many topics on both days. I added the following topics, which were addressed at the Friday BoF only; see http://wiki.python.org/moin/PackagingBOF

- Linux distributions try to ship only one version of a package/egg/module in one release, only shipping more than one version if necessary. Eggs (at least as shipped with Debian, Fedora, Ubuntu) are all built using --single-version-externally-managed.
- import foo should work whether installed as an egg or installed with distutils, and without using pkg_resources.require.
- pkg_resources should handle the situation of one egg version installed as --single-version-externally-managed (default version) and one or more eggs installed not using --single-version-externally-managed. Currently these additional versions cannot be imported.
- It would be useful if setuptools could handle separate build and install steps like most configure/make/make install systems do. Access to external resources should optionally be disabled during a build.
- The idea was brought up to use a to-be-defined api-version to describe dependencies between eggs. Version numbers are generally used for more than API changes; the idea follows existing practice for shared object names, only changing when the API is changed.
Re: [Python-Dev] [Distutils] PEP 365 (Adding the pkg_resources module)
On Mon, Mar 17, 2008 at 2:19 PM, zooko [EMAIL PROTECTED] wrote: 4. The standard Python library includes a tool to find and read resources (other than Python modules) that came bundled in a Python package. Consider, for example, these snippets of code in Nevow: http://divmod.org/trac/browser/trunk/Nevow/setup.py?rev=13786#L10 http://divmod.org/trac/browser/trunk/Nevow/setup.py?rev=13786 http://divmod.org/trac/browser/trunk/Nevow/setup_egg.py?rev=2406 When Nevow uses pkg_resources to import its files such as default.css, it is able to find them at runtime, even if it is being imported from a py2exe or py2app zip, or on other platforms where its homegrown setup script and homegrown "find my file" function fail. So using pkg_resources (and setuptools to install it) makes test_nevow pass on all of the allmydata.org buildslaves: http://allmydata.org/buildbot/waterfall?show_events=false I think we're pretty close to this already. PEP 302 defines a get_data() method. Hopefully most PEP 302 implementations support it. The only thing missing IMO is a little function that does what get_data() does when there is no __loader__ object (i.e. when the default import-from-filesystem import method is used). -- --Guido van Rossum (home page: http://www.python.org/~guido/)
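The little function Guido describes fits in a few lines. A sketch (the helper's name and signature are hypothetical; it is written against today's importlib so it runs, and the stdlib eventually grew pkgutil.get_data() for exactly this job):

```python
import importlib
import os

def package_data(package, resource):
    """Return the bytes of a resource bundled with a package: use the
    PEP 302 get_data() hook when the loader provides one, and fall
    back to plain filesystem access otherwise.

    (Hypothetical helper, sketching the idea from the thread.)"""
    mod = importlib.import_module(package)
    path = os.path.join(os.path.dirname(mod.__file__), resource)
    loader = getattr(mod, "__loader__", None)
    if loader is not None and hasattr(loader, "get_data"):
        return loader.get_data(path)
    with open(path, "rb") as f:
        return f.read()
```

The point is that callers no longer care whether the package lives in a directory, a zipfile, or a py2exe bundle; any PEP 302 loader that implements get_data() is handled uniformly.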
Re: [Python-Dev] Improved thread switching
Adam Olsen rhamph at gmail.com writes: So you want responsiveness when idle but throughput when busy? Exactly ;) Are those calculations primarily Python code, or does a C library do the grunt work? If it's a C library you shouldn't be affected by safethread's increased overhead. It's Python code all the way. Frankly, it's a huge mess, but it would be very very hard to come up with a scalable solution that would allow us to optimize certain hotspots and redo them in C or C++. There isn't even anything left to optimize in particular, because all the low-hanging fruit has already been taken care of. So it's just ~30kloc of Python code over which the total time spent is quite uniformly distributed :(
Re: [Python-Dev] Capsule Summary of Some Packaging/Deployment Technology Concerns
Phillip J. Eby writes: 7. Many wanted the ability to install files anywhere in the install tree and not just under the Python package. Under distutils this was possible but it was removed in setuptools for security reasons. It wasn't security, it was manageability. Egg-based installation means containment (analogous to GNU stow), and therefore portability and disposability of plugins. (Which again is what eggs were really developed for in the first place.) Defining containment this way doesn't help when preparing eggs for inclusion in a Linux distribution. E.g. users on these distributions are used to finding log files in /var/log (maybe in a subdir) and documentation in /usr/share/doc/<package name>. You probably will get different views about manageability depending on your background (used to Linux distribution standards or used to standards set by setuptools/cheeseshop). Packagers currently move these files manually to the standard locations and often have to keep symlinks in the egg dirs to these locations. Installation on Linux distributions is handled by existing package tools, which is unlikely to change. So it would be nice to find a common layer which can be used for both distribution methods, optionally enabling this with some kind of option like --install-files-in-places-not-handled-by-setuptools ;) Matthias
Re: [Python-Dev] PEP 365 (Adding the pkg_resources module)
On Tue, Mar 18, 2008 at 3:36 PM, Phillip J. Eby [EMAIL PROTECTED] wrote: At 03:43 PM 3/18/2008 -0500, Guido van Rossum wrote: Only very few people would care about writing a setup script that works with this bootstrap module; basically only package manager implementers. That's true today, sure, but as soon as it is widely available, others are sure to want to use it too. I just want a bright-line distinction between what is and isn't bootstrappable, rather than a murky region of maybe, if you're not doing anything too complicated. How about anything that uses only distutils in its setup.py and doesn't have external dependencies? See a (horribly incomplete) prototype I added as sandbox/bootstrap/bootstrap.py. I wrote this on the plane last night and have only tested it with file:/// URLs; it needs to add the ability to consult PyPI to find the download URL, and probably more. (PS: just now I also managed to successfully install setuptools from source by giving it the URL to the tar.gz file.) There seems to be a misunderstanding about what I am proposing we do instead. The bootstrap installer should only be powerful enough to allow it to be used to install a real package manager like setuptools. Which is why PEP 365 proposed only downloading an archive to a cache directory, and optionally running something from it. It explicitly disavows installation of anything, since the downloaded archive wouldn't have been added to sys.path except for the duration of the bootstrap process, and no scripts were to be installed. (Indeed, apart from the methods it would have used to locate the archive on PyPI, and to determine what to run from inside it, there was nothing particularly egg-specific about the proposed bootstrapping process.) My bootstrap.py does exactly that: it downloads and unzips/untars a file and runs its setup.py with install as the only command line argument. (It currently looks for setup.py at the top level and one level deep in the unpacked archive.)
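The flow Guido describes (download, unpack, find setup.py at the top level or one level deep, run it with install as the only argument) can be sketched briefly. This is not the sandbox prototype itself, just an illustration: the function name is hypothetical, it is modernized to urllib.request, and it handles only tarballs for brevity:

```python
import os
import subprocess
import sys
import tarfile
import tempfile
import urllib.request

def bootstrap_install(url):
    """Hypothetical sketch of the bootstrap flow: fetch an sdist,
    unpack it, and run its setup.py with 'install' as the only
    command-line argument. Returns the setup.py exit status."""
    with tempfile.TemporaryDirectory() as tmp:
        # Fetch the archive into a scratch directory.
        archive = os.path.join(tmp, os.path.basename(url))
        urllib.request.urlretrieve(url, archive)
        # Unpack it (tarballs only, for brevity).
        with tarfile.open(archive) as tf:
            tf.extractall(tmp)
        # Look for setup.py at the top level, then one level deep.
        candidates = [tmp] + [os.path.join(tmp, n) for n in os.listdir(tmp)]
        for root in candidates:
            setup = os.path.join(root, "setup.py")
            if os.path.isfile(setup):
                return subprocess.call([sys.executable, setup, "install"],
                                       cwd=root)
        raise RuntimeError("no setup.py found in archive")
```

The one-level-deep search is what copes with the extra directory name embedded in distutils source distributions that Phillip complains about further down.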
Of course you will likely have to be root or administrator to run it effectively. So, to fully egg-neutralize the bootstrapping approach, we need only know how to locate an appropriate archive, and how to determine what to run from it. Right. For the latter, we could use the already-in-2.6 convention of running __main__ from a zipfile or directory. (Too bad distutils source distributions have an extra directory name embedded in them, so one can't just execute them directly. Otherwise, we could've just let people drop in a __main__.py next to setup.py. OTOH, maybe it would be enough to use setuptools' algorithm for finding setup.py to locate __main__.py, and I'm fairly sure *that* can be briefly expressed in the PEP.) What's wrong with just running setup.py install? I'd rather continue existing standards / conventions. Of course, it won't work when setup.py requires setuptools; but old style setup.py files that use only distutils work great (I managed to install Django from a file:/// URL). The other open question is a naming convention and version detection, so that the bootstrap tool can identify which of the files listed on PyPI is suitable for its use. (Both with regard to the version selection, and file type.) However, if PyPI were to grow support for designating the appropriate files and/or versions in some other way, we wouldn't need a naming convention as such. I don't understand PyPI all that well; it seems poor design that the browsing via keywords is emphasized but there is no easy way to *search* for a keyword (the list of all packages is not emphasized enough on the main page -- it occurs in the side bar but not in the main text). I assume there's a programmatic API (XML-RPC?) but I haven't found it yet. Without one or the other, the bootstrap tool would have to grow a version parsing scheme of some type, and play guessing games with file extensions. 
(Which is one reason I limited PEP 365's scope to downloading eggs actually *uploaded* to PyPI, rather than arbitrary packages *linked* from PyPI.) There are two version parsers in distutils, referenced by PEP 345, the PyPI 1.2 metadata standard. So, if I had to propose something right now, I would be inclined to propose: * using setuptools' version parsing semantics for interpretation of alpha/beta/dev/etc. releases Can you point me to the code for this? What is its advantage over distutils.version? * having a bdist_bootstrap format that's essentially a bdist_dumb .zip file with the internal path prefixes stripped off, making it an importable .zip with a different file extension. (Or maybe just .pyboot.zip?) The filename convention would use setuptools' canonicalization and escaping of names and version numbers, to allow unambiguous machine parsing of the filename. A __main__ module would have to be present for the archive to be run, as opposed to just being downloaded
Re: [Python-Dev] Improved thread switching
On Wed, Mar 19, 2008 at 11:25 AM, Stefan Ring [EMAIL PROTECTED] wrote: Adam Olsen rhamph at gmail.com writes: So you want responsiveness when idle but throughput when busy? Exactly ;) Are those calculations primarily Python code, or does a C library do the grunt work? If it's a C library you shouldn't be affected by safethread's increased overhead. It's Python code all the way. Frankly, it's a huge mess, but it would be very very hard to come up with a scalable solution that would allow us to optimize certain hotspots and redo them in C or C++. There isn't even anything left to optimize in particular, because all the low-hanging fruit has already been taken care of. So it's just ~30kloc of Python code over which the total time spent is quite uniformly distributed :(. I see. Well, at this point I think the most you can do is file a bug so the problem doesn't get forgotten. If nothing else, if my safethread stuff goes in it'll very likely include a --with-gil option, so I may put together a FIFO scheduler. -- Adam Olsen, aka Rhamphoryncus
[Python-Dev] Order of Fixers
At the moment, fixers are run in alphabetical order -- but this poses a problem, because some depend on others (for example, fix_print needs to be run _before_ fix_future, because fix_print looks for the 'from __future__ import ...' statement). I'm tempted to simply change fix_future to fix_zz_future... But that has some obvious drawbacks. Alternately, if future is the only dependent module, it might be marginally cleaner to simply special-case it in refactor.get_all_fix_names. So, any better suggestions?
Re: [Python-Dev] Order of Fixers
On Wed, Mar 19, 2008 at 1:01 PM, David Wolever [EMAIL PROTECTED] wrote: At the moment, fixers are run in alphabetical order -- but this poses a problem, because some depend on others (for example, fix_print needs to be run _before_ fix_future, because fix_print looks for the 'from __future__ import ...' statement). I'm tempted to simply change fix_future to fix_zz_future... But that has some obvious drawbacks. Alternately, if future is the only dependent module, it might be marginally cleaner to simply special-case it in refactor.get_all_fix_names. So, any better suggestions? I would create a list of fixers that need to go first in refactor.py and run those in order. If you wanted to get complex, you could add a requires member to fixers, but that is probably overkill.
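Benjamin's first suggestion, an explicit list of fixers that must go first, with everything else staying alphabetical, could look like this (RUN_FIRST and order_fixers are hypothetical names for illustration, not part of refactor.py):

```python
# Fixers that must precede the alphabetical run; e.g. fix_print has to
# see the __future__ import before fix_future touches it.
RUN_FIRST = ["fix_print"]

def order_fixers(fix_names):
    """Return fixer names with RUN_FIRST entries up front (in their
    listed order) and the remainder sorted alphabetically."""
    head = [n for n in RUN_FIRST if n in fix_names]
    tail = sorted(n for n in fix_names if n not in RUN_FIRST)
    return head + tail

order_fixers(["fix_future", "fix_print", "fix_apply"])
# -> ['fix_print', 'fix_apply', 'fix_future']
```

A "requires" member on each fixer would generalize this to a topological sort, which is the overkill Benjamin alludes to.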
Re: [Python-Dev] [Distutils] PEP 365 (Adding the pkg_resources module)
On 19/03/2008, Guido van Rossum [EMAIL PROTECTED] wrote: On Mon, Mar 17, 2008 at 2:19 PM, zooko [EMAIL PROTECTED] wrote: 4. The standard Python library includes a tool to find and read resources (other than Python modules) that came bundled in a Python package. I think we're pretty close to this already. PEP 302 defines a get_data() method. Hopefully most PEP 302 implementations support it. The only thing missing IMO is a little function that does what get_data() does when there is no __loader__ object (i.e. when the default import-from-filesystem import method is used). I'm currently working on an addition to pkgutil to provide this type of function. I'm considering going a little further (adding functions to get a file-like object, test for existence, and list available resources, modelled on the pkg_resources functions - but these extra ones are not supported by the loader protocol, so I'm undecided as to whether it's worth it; I'll see how complex the code gets). Once I have a patch, I'll post it to the tracker. What's the best approach? Code a patch for 3.0 and backport, or code for 2.6 and let the merging process do its stuff? Paul.
Re: [Python-Dev] Order of Fixers
On 19-Mar-08, at 2:18 PM, Benjamin Peterson wrote: So, any better suggestions? I would create a list of fixers that need to go first in refactor.py and run those in order. If you wanted to get complex, you could add a requires member to fixes, but that is probably overkill. Ok, so I was digging around a bit and there is the option to traverse the tree in preorder or postorder. While the actual order does not matter in this case, what does matter is that the preorder fixers are run first -- so I've just dropped the print fixer in there (and commented everything). This isn't a great general solution... But, at the moment, there is no need for it to be. If the need for a general solution arises, that can be added :)
Re: [Python-Dev] [Distutils] Capsule Summary of Some Packaging/Deployment Technology Concerns
Phillip J. Eby wrote: At 03:57 AM 3/19/2008 -0500, Jeff Rush wrote: I'd be willing to help out, and keep a carefully balanced hand in what is accepted. I'm not sure exactly how to go about such a handoff though. My guess is that we need a bug/patch tracker, and a few people to review, test, and apply. Maybe a transitional period during which I just say yea or nay and let others do the test and apply, before opening it up entirely. That way, we can perhaps solidify a few principles that I'd like to have stay in place. (Like no arbitrary post-install code hooks.) +1 to blessing more people to commit. +1 to the transition period idea. These two ought to enable things to move a bit quicker than taking a year to accept a patch. :-) In addition to a bug tracker and patch manager, seems like perhaps a wiki to help document some of these solidified principles and other notes would be a good thing. (Like a patch should almost always include at least one test, possibly more.) Given that the source for setuptools is in the python.org svn, couldn't we just use the python.org roundup and wiki for these facilities? Though looking at the list of components, it seems that things in the sandbox generally aren't tracked in this infrastructure. In which case, I'm sure we could use sf, launchpad, or some such external provider. Enthought could even host this stuff. Like Jeff Rush, I'm also willing to help out as both a writer and reviewer of patches. As you can see from my earlier posts there are a number of things (besides running an arbitrary post-install script) that we'd like to be able to get into the codebase. -- Dave
[Python-Dev] 2to3 and print function
At the moment, fix_print.py does the Right Thing when it finds ``from __future__ import print_function``... But the 2to3 parser gets upset when print() is passed kwargs:

$ cat x.py
from __future__ import print_function
print("Hello, world!", end=' ')
$ 2to3 x.py
...
RefactoringTool: Can't parse x.py: ParseError: bad input: type=22, value='=', context=('', (2, 26))

What would be the best way to start fixing this? #2412 is the related bug.
Re: [Python-Dev] [Distutils] PEP 365 (Adding the pkg_resources module)
On Wed, Mar 19, 2008 at 11:34 AM, Paul Moore [EMAIL PROTECTED] wrote: On 19/03/2008, Guido van Rossum [EMAIL PROTECTED] wrote: On Mon, Mar 17, 2008 at 2:19 PM, zooko [EMAIL PROTECTED] wrote: 4. The standard Python library includes a tool to find and read resources (other than Python modules) that came bundled in a Python package. I think we're pretty close to this already. PEP 302 defines a get_data() method. Hopefully most PEP 302 implementations support it. The only thing missing IMO is a little function that does what get_data() does when there is no __loader__ object (i.e. when the default import-from-filesystem import method is used). I'm currently working on an addition to pkgutil to provide this type of function. I'm considering going a little further (adding functions to get a file-like object, test for existence, and list available resources, modelled on the pkg_resources functions - but these extra ones are not supported by the loader protocol, so I'm undecided as to whether it's worth it; I'll see how complex the code gets). I'd only do what __loader__ offers. People can always wrap a StringIO around it. Once I have a patch, I'll post it to the tracker. What's the best approach? Code a patch for 3.0 and backport, or code for 2.6 and let the merging process do its stuff? Code for 2.6, let the merge do its thing. -- --Guido van Rossum (home page: http://www.python.org/~guido/)
Re: [Python-Dev] Python 3000: Special type for object attributes map keys
Greg Ewing wrote: Neal Norwitz wrote: Part of this is done, but very differently in that all strings used in code objects are interned (stored in a dictionary And since two interned strings can be compared by pointer identity, I don't see how this differs significantly from the unique integer idea. If the integers were used to directly index an array instead of being used as dict keys, it might make a difference. The cost would be that every namespace would need to be as big as the number of names in existence, with most of them being extremely sparse. And we already have a data structure for sparse arrays... it's called a dict. :) If every attribute name were guaranteed to be an interned string (not currently the case - attribute names can be any kind of object), it might be possible to add another dict specialization for interned string lookup. The wins would be that lookup could assume the string's hash is valid, and equality comparison could be done via pointer comparison. HOWEVER... I think that the attribute cache patches that went into 2.6 and 3.0 mostly take care of lookup speed issues. They both assume strings are interned already. A cache hit consists of calculating a cache index from the string hash, ensuring that the attribute name at that index is identical (via pointer comparison) to the attribute name to be looked up, and returning the associated value. Lookup with an attribute that's not a string or not interned is automatically a cache miss, and it happens very rarely. Specializing dicts for interned strings would optimize the cache miss. (When I was making the 3.0 patch, I found that happened rarely on the benchmarks and regression tests. It was somewhere around 5%.) The cache miss isn't expensive just because of the dict lookup. The attribute has to be searched for in every type and super-type of the object. The occasional string equality test probably doesn't register. I'd be happy to be shown to be wrong, though. 
Neil ___ Python-Dev mailing list Python-Dev@python.org http://mail.python.org/mailman/listinfo/python-dev Unsubscribe: http://mail.python.org/mailman/options/python-dev/archive%40mail-archive.com
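The interning property discussed above can be seen from pure Python. A minimal sketch (illustrative only, not the proposed dict specialization itself): two equal strings built at runtime are ordinarily distinct objects, so comparing them requires a character-by-character check; after `sys.intern()` (`intern()` on 2.x), equal strings share one object, so a pointer-identity check suffices.

```python
import sys

# Two equal strings constructed differently; they compare equal by
# value, but are not guaranteed to be the same object.
a = "attr_" + "name"
b = "".join(["attr", "_", "name"])
assert a == b

# After interning, equal strings are the *same* object, so the
# 'is' check (pointer comparison at the C level) is sufficient --
# the property the interned-string dict lookup would exploit.
ia = sys.intern(a)
ib = sys.intern(b)
assert ia is ib
```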
Re: [Python-Dev] [Python-checkins] logging shutdown (was: Re: r61431 - python/trunk/Doc/library/logging.rst)
On 3/19/08, Vinay Sajip [EMAIL PROTECTED] wrote: I think (repeatedly) testing an app through IDLE is a reasonable use case. [other threads may still have references to loggers or handlers] Would it be reasonable for shutdown to remove logging from sys.modules, so that a rerun has some chance of succeeding via its own import? I'm not sure that would be enough in the scenario I mentioned above - would removing a module from sys.modules be a guarantee of removing it from memory? No. It will explicitly not be removed from memory while anything holds a live reference. Removing it from sys.modules just means that the next time a module does import logging, the logging initialization code will run again. It is true that this could cause contention if the old version is still holding an exclusive lock on some output file. It's safer, in my view, for the developer of an application to do cleanup of their app if they want to test repeatedly in IDLE. Depending on the issue just fixed, the app may not have a clean shutdown. -jJ
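The sys.modules behavior described above is easy to demonstrate. A small sketch: removing the entry only makes Python forget the module for the *next* import; the old module object lives on as long as anything references it, which is exactly why its handlers could still hold file locks.

```python
import sys
import logging

first = logging  # keep a live reference to the old module object

# Forget the module; this does NOT free it -- `first` still refers
# to the old module, with whatever handlers/locks it accumulated.
del sys.modules["logging"]

# The next import re-runs logging's initialization code and
# produces a brand-new module object.
import logging

assert logging is not first
```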
Re: [Python-Dev] PEP 365 (Adding the pkg_resources module)
At 10:48 AM 3/19/2008 -0700, Guido van Rossum wrote: I don't understand PyPI all that well; it seems poor design that the browsing via keywords is emphasized but there is no easy way to *search* for a keyword (the list of all packages is not emphasized enough on the main page -- it occurs in the side bar but not in the main text). I assume there's a programmatic API (XML-RPC?) but I haven't found it yet. http://wiki.python.org/moin/CheeseShopXmlRpc There's also a REST API that setuptools uses: http://peak.telecommunity.com/DevCenter/EasyInstall#package-index-api The API was originally designed for screen-scraping an older version of PyPI, but that has been replaced with a lite version served from: http://pypi.python.org/simple/ The lite version is intended for tools such as easy_install to process, as it consists strictly of links and can be statically cached. Zope Corp., for example, maintains a static mirror of this API, to guard themselves against PyPI outages and slowdowns, since their buildouts can involve huge numbers of eggs, both their own and external dependencies. I'd love it if you could write or point me to code that takes a package name and optional version and returns the URL for the source archive, and the type (in case it can't be guessed from the filename or the Content-type header). You can probably do that with the XML-RPC API. There's a function to get the versions of a package, given a (case-sensitive) name, and there's a function to get information for uploaded archives, given a name and a version. I originally intended to use it for the PEP 365 approach, but you can get the necessary information in just one static roundtrip using the REST (/simple) HTML API, if you're willing to parse the URLs for version information. (The catch of course being that distutils source distributions don't have unambiguously parseable filenames.) Hm. Why not just use the existing convention for running setup.py after unpacking? 
This works great in my experience, and has the advantage of having an easy fallback if you end up having to do this manually for whatever reason. Because I want bootstrap-ees to be able to use the bootstrap mechanism. For example, I expect at some point that setuptools will use other, non-self-contained packages, and other package managers such as zc.buildout et al also want to depend on setuptools without bundling it. * calling the bootstrap module 'bootstrap', as in 'python -m bootstrap projectname optionalversion'. The module would expose an API to allow it to be used programmatically as well as the command line, so that bootstrapped packages can use the bootstrap process to locate dependencies if they so desire. (Today's package management tools, at least, are all based on setuptools, so if it's not present they'll need to download that before beginning their own bootstrapping process.) This sounds like going beyond bootstrapping. My vision is that you use the bootstrap module (with the command line you suggest above) once to install setuptools or the alternate package manager of your choice, and then you can use easy_install (or whatever alternative) to install the rest. Well, I noticed that the other package managers were writing bootstrap scripts that then download setuptools' bootstrap script and run it as part of *their* bootstrap process... and then I got to thinking that it sure would be nice for setuptools to not have to be a giant monolithic download if I wanted to start using other packages in it... and that it sure would be nice to get rid of all these bootstrap scripts downloading other bootstrap scripts... and then I wrote PEP 365. :) One other thing that PEP 365 does for these use cases that your approach doesn't, is that pkg_resources could detect whether a desired package of a usable version was *already* installed, and skip it if so. 
So, we've already scaled back the intended use cases quite a bit, as people will have to write their own "is it already there?" and "is it the right version?" checks. Without one or the other, the bootstrap tool would have to grow a version parsing scheme of some type, and play guessing games with file extensions. (Which is one reason I limited PEP 365's scope to downloading eggs actually *uploaded* to PyPI, rather than arbitrary packages *linked* from PyPI.) There are two version parsers in distutils, referenced by PEP 345, the PyPI 1.2 metadata standard. Yes, and StrictVersion doesn't parse release candidates. And neither LooseVersion nor StrictVersion supports handling multiple pre/post-release tags correctly. (E.g. 1.1a1dev-r2753) So, if I had to propose something right now, I would be inclined to propose: * using setuptools' version parsing semantics for interpretation of alpha/beta/dev/etc. releases Can you point me to the code for this? What is its advantage over distutils.version? It implements
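To make the parsing problem concrete, here is a toy sketch (explicitly *not* setuptools' actual algorithm, nor distutils.version) of the core difficulty: pre-release tags must sort *before* the release they precede, and stacked tags like "1.1a1dev-r2753" must nest correctly. The trick is a terminator that sorts after letter tags but before extra numeric components.

```python
import re

def key(version):
    """Toy comparable sort key for version strings.

    Letter runs (pre-release tags) get code 0, a terminator gets
    code 1, and numeric runs get code 2, so "1.1a1" < "1.1" but
    "1.1" < "1.1.1".
    """
    parts = [(2, int(c)) if c.isdigit() else (0, c)
             for c in re.findall(r"\d+|[a-z]+", version.lower())]
    parts.append((1, 0))  # terminator: the final release sorts after its tags
    return parts

# A pre-release precedes its final release:
assert key("1.1a1") < key("1.1")
# ...but extra numeric components still sort later:
assert key("1.1") < key("1.1.1")
# Stacked tags nest: a dev snapshot of a1 precedes a1 itself.
assert key("1.1a1dev-r2753") < key("1.1a1")
# StrictVersion can't even parse "rc"; here it orders correctly:
assert key("1.0rc1") < key("1.0")
```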
Re: [Python-Dev] The Breaking of distutils and PyPI for Python 3000?
Martin v. Löwis wrote: 1. What is the plan for PyPI when Python 3.0 comes out and dependencies start getting satisfied from distribution across the great divide, e.g. a 3.0-specific package pulls from PyPI a 2.x-specific package to meet some need? Are there plans to fork PyPI, apply special tags to uploads or what? I don't see the need to fork PyPI. For packages (or distributions, to avoid confusion with Python packages), I see two options: a) provide a single release that supports both 2.x and 3.x. The precise strategy to do so might vary. If one is going for a single source version, have setup.py run 2to3 (or perhaps 3to2). For dual-source packages, have setup.py just install the one for the right version; setup.py itself needs to be written so it runs on both versions (which is easy to do). b) switch to Python 3 at some point (i.e. burn your bridges). You seem to be implying that some projects may release separate source distributions. I cannot imagine why somebody would want to do that. While not quite to the same scale as the 2 to 3 transition, this problem seems like one that would generally already exist. If one writes code that uses newer 2.4/2.5 features (say decorators, for example), it won't build/run on 2.3 or earlier installs. How have people been handling that sort of situation? Is it only by not using the newer features or is there some situation I'm just not seeing in a brief review of a few projects' pages on PyPI where there is only one source tarball? -- Dave
Re: [Python-Dev] The Breaking of distutils and PyPI for Python 3000?
Martin v. Löwis wrote: I don't see the need to fork PyPI. For packages (or distributions, to avoid confusion with Python packages), I see two options: a) provide a single release that supports both 2.x and 3.x. The precise strategy to do so might vary. If one is going for a single source version, have setup.py run 2to3 (or perhaps 3to2). For dual-source packages, have setup.py just install the one for the right version; setup.py itself needs to be written so it runs on both versions (which is easy to do). b) switch to Python 3 at some point (i.e. burn your bridges). You seem to be implying that some projects may release separate source distributions. I cannot imagine why somebody would want to do that. That's odd. I can't imagine why anybody would *not* want to do that. Given the number of issues 2to3 can't fix (because it would be too dangerous to guess), I certainly can't imagine a just-in-time porting solution that would work reliably. Making two releases means I can migrate once and only once and be done with it. Making a single release work on 2.x and 3.x means I have to keep all of the details of both Python 2 and 3 in my head all the time as I code, not to mention litter my codebase with "# the following ugly hack lets us work with Python 2 and 3" comments so someone else doesn't undo all my hard work when they run the tests on Python 3 but not 2. No thanks. My brain is too small. Robert Brewer [EMAIL PROTECTED]
Re: [Python-Dev] Order of Fixers
On Wed, Mar 19, 2008 at 11:01 AM, David Wolever [EMAIL PROTECTED] wrote: At the moment, fixers are run in alphabetical order -- but this poses a problem, because some depend on others (for example, fix_print will need to be run _before_ fix_future, because fix_print looks for the 'from __future__ import ...' statement). I'm tempted to simply change fix_future to fix_zz_future... But that has some obvious drawbacks. Alternately, if future is the only dependent module, it might be marginally cleaner to simply special-case it in refactor.get_all_fix_names. So, any better suggestions? I would fix the from-future fixer to not remove futures that are specific to 3.0, and let the fixers specific to those features remove them. -- Thomas Wouters [EMAIL PROTECTED] Hi! I'm a .signature virus! copy me into your .signature file to help me spread!
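One alternative to alphabetical renaming tricks like fix_zz_future is an explicit ordering attribute on each fixer. A hedged sketch (the class and attribute names here are hypothetical, chosen for illustration, though lib2to3 later grew a similar `run_order` mechanism):

```python
# Replace alphabetical fixer ordering with an explicit run_order
# attribute, so fix_print is guaranteed to run before fix_future
# regardless of the fixers' names.
class FixPrint:
    run_order = 3     # early: still needs the __future__ line present

class FixApply:
    run_order = 5     # the default slot

class FixFuture:
    run_order = 10    # late: removes the __future__ line last

def order_fixers(fixers):
    # Stable sort; fixers with equal run_order fall back to
    # alphabetical order, preserving today's behavior for the rest.
    return sorted(fixers, key=lambda f: (getattr(f, "run_order", 5),
                                         f.__name__))

names = [f.__name__ for f in order_fixers([FixFuture, FixApply, FixPrint])]
assert names == ["FixPrint", "FixApply", "FixFuture"]
```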
Re: [Python-Dev] unittest's redundant assertions: asserts vs. failIf/Unlesses
On 02:21 pm, [EMAIL PROTECTED] wrote: OTOH, I'd rather there be OOWTDI so whatever the consensus is is fine with me. This strikes me as a gratuitous API change of the kind Guido was warning about in his recent post: Don't change your APIs incompatibly when porting to Py3k I agree emphatically. Actually I think this is the most extreme case. The unit test stuff should be as stable as humanly possible between 2 and 3, moreso than any other library. It's one thing to have a test that fails because the functionality is broken. It's much worse to have a test that fails because the meaning of the test has changed - and this is going to happen anyway with e.g. assertEquals(foo.keys(), ['some', 'list']) which is a common enough assertion. Mixing in changes to the test framework creates even more confusion and pain. Granted, this change is apparently trivial and easy to understand, but each new traceback requires more thinking and there is going to be quite enough thinking going on in 2-3. I say this from experience. Twisted's trial has been burned repeatedly both by introducing apparently trivial changes of our own to our testing tool and by apparently trivial changes to the Python stdlib unittest module, upon which we depend. Nothing we haven't been able to handle, but the 2-3 transition exacerbates this as it does all potential compatibility problems. Whatever the stdlib does in this regard we'll definitely continue to provide an insulating layer in trial and keep both types of assertions for compatibility. So maybe the answer is to use Twisted for your unit tests rather than worry about your own compatibility lib ;-).
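The assertEquals(foo.keys(), ...) example above is worth spelling out, since it is exactly the kind of silent meaning change being described: on Python 2, dict.keys() returned a list, so the assertion could pass; on Python 3 it returns a view, which never compares equal to a list.

```python
foo = {'some': 1, 'list': 2}

# Python 2: foo.keys() == ['some', 'list'] could pass (a real list).
# Python 3: a dict view never compares equal to a list, so the same
# test now fails for a reason unrelated to the code under test.
assert foo.keys() != ['some', 'list']

# An order-insensitive form behaves the same on both versions:
assert sorted(foo.keys()) == ['list', 'some']
```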
Re: [Python-Dev] PEP 365 (Adding the pkg_resources module)
On Wed, Mar 19, 2008 at 12:54 PM, Phillip J. Eby [EMAIL PROTECTED] wrote: [a long message] I'm back at Google and *really* busy for another week or so, so I'll have to postpone the rest of this discussion for a while. If other people want to chime in please do so; if this is just a dialog between Phillip and me I might incorrectly assume that nobody besides Phillip really cares. -- --Guido van Rossum (home page: http://www.python.org/~guido/)
Re: [Python-Dev] The Breaking of distutils and PyPI for Python 3000?
On 08:34 pm, [EMAIL PROTECTED] wrote: Martin v. Löwis wrote: You seem to be implying that some projects may release separate source distributions. I cannot imagine why somebody would want to do that. That's odd. I can't imagine why anybody would *not* want to do that. Python 2 is going to be around for a long time. No user is going to want to pay the migration cost all at once. Users of library packages will loudly demand this continued support. Long-term maintenance of a complete fork of your software in 2 very very subtly different languages, and backporting every single change effectively doubles the amount of work, in the best case. I certainly can't afford to do that with Twisted. Inserting a few hacks here and there (and annotating your code with some extra metadata, in the py3 case for 2to3) is something we _already_ have to do to maintain compatibility for multiple Python versions in one piece of software. That is why Guido has personally explained, in at least 2 keynote speeches, several blog posts, and several mailing list messages, that maintaining a single source release and translating it is *the* way that you manage the Python 3 transition for anything but a small project, or an application that's a true leaf in the dependency graph. (The "burn your bridges" solution is not available to anyone who has more than one other person using their code as code and not simply a UI.) I am happy to have found a reason to emphatically agree with you, Martin :-).
Re: [Python-Dev] unittest's redundant assertions: asserts vs. failIf/Unlesses
On Wed, Mar 19, 2008 at 6:24 PM, Gabriel Grant [EMAIL PROTECTED] wrote: Hi all, This gem from unittest.py is pretty much the opposite of one obvious way: # Synonyms for assertion methods [snip] Could these be removed for 3k? I agree with others who say that we shouldn't do this for Python 3k. If we want to get rid of them, we should deprecate them, wait a release or so, *then* remove them. That said, can we axe one of (assertEqual, assertEquals) too? jml
Re: [Python-Dev] Capsule Summary of Some Packaging/Deployment Technology Concerns
Phillip J. Eby wrote: At 03:57 AM 3/19/2008 -0500, Jeff Rush wrote: Are you open to giving certain others patch view/commit privileges to setuptools? Jim Fulton has such already. I'm open to extending that to others who have a good grasp of the subtleties involved. Truthfully, if we can just get 0.6 put to bed, I could probably open up the trunk a lot wider. What is needed to put 0.6 to bed? How can we help accelerate this? Probably the most frustrating thing (or chief amongst the most frustrating things) about setuptools development is that it's a black hole. By which I mean that backward compatibility and cruft accretion make it difficult to get out of. In the beginning, there was the distutils. Distutils begat setuptools, and setuptools begat virtualenv and zc.buildout and source control plugins. Etc., etc. I've found in the past a revisiting of basic principles and objectives, communicated in enhanced documentation, can help to clear out such black holes. ;-) I'm pulling something together, from the recent emails and some archived threads -- it definitely is tangled though, I'll agree. What I think is really needed in the long run is to keep eggs, but get rid of setuptools and the distutils in their current form. There's a lot of brokenness there, and also a lot of accumulated cruft. We really need a distutils 3000, and it needs to be built on a better approach. That will require a lot of consensus building as well as collection of use cases so that the architecture team can encompass aspects they are not personally aware of. As you've said, it's hard to address itches that are not your own. It certainly is possible for someone to create a parallel packaging moduleset that uses the existing eggs format and PyPI but without the current codebase, and then, once proven to work, lobby for it as distutils 3000.
Frankly I'd like to see setuptools exploded, with those parts of general use folded back into the standard library, the creation of a set of non-implementation-specific documents of the distribution formats and behavior, leaving a small core of one implementation of how to do it and the door open for others to compete with their own implementation. In truth, my *real* motivation for PEP 365's bootstrap tool isn't so much to support the package management tools we have today, as it is to support a new one tomorrow. I have a few ideas for ways to shift the paradigm of how individual projects get built, to incorporate many scenarios that don't work well now. But to implement those things in such a next-generation tool, I will not want to be restricted to just what's in the stdlib or what can be bundled in the tool. You should document those ideas someplace and start getting community input. There are a lot of diverse opinions on the right way to do this and the way ahead is quite unclear. And I think it's probably getting close to time I stepped down from day-to-day management of the codebase (which is more like month-to-month or quarter-to-quarter for me lately). It will probably be a lot easier for me to step back and critique stuff that goes in, after the fact, than to go over the stuff beforehand. :) I'm not sure exactly how to go about such a handoff though. My guess is that we need a bug/patch tracker, and a few people to review, test, and apply. Maybe a transitional period during which I just say yea or nay and let others do the test and apply, before opening it up entirely. That way, we can perhaps solidify a few principles that I'd like to have stay in place. (Like no arbitrary post-install code hooks.) I'll see about a tracker and identify some people to help out. btw, offtopic question: are you by any chance the same Jeff Rush who invented EchoMail? Yep, that's me. Not many remember the Fidonet days. 
I designed EchoMail on a napkin during a DFW Sysop pizza party during a conversation on what to do with the unused capability of inter-BBS private file transfers. It too escaped its ecosystem and spread like wildfire, almost getting banned from Fidonet. ;-) -Jeff
Re: [Python-Dev] unittest's redundant assertions: asserts vs. failIf/Unlesses
Jonathan Lange wrote: On Wed, Mar 19, 2008 at 6:24 PM, Gabriel Grant [EMAIL PROTECTED] wrote: Hi all, This gem from unittest.py is pretty much the opposite of one obvious way: # Synonyms for assertion methods [snip] Could these be removed for 3k? I agree with others who say that we shouldn't do this for Python 3k. If we want to get rid of them, we should deprecate them, wait a release or so, *then* remove them. That said, can we axe one of (assertEqual, assertEquals) too? We should keep 'assertEquals'. Michael
Re: [Python-Dev] 2to3 and print function
On Wed, Mar 19, 2008 at 12:04 PM, David Wolever [EMAIL PROTECTED] wrote: At the moment, fix_print.py does the Right Thing when it finds ``from __future__ import print_function``... But the 2to3 parser gets upset when print() is passed kwargs: $ cat x.py from __future__ import print_function print("Hello, world!", end=' ') $ 2to3 x.py ... RefactoringTool: Can't parse x.py: ParseError: bad input: type=22, value='=', context=('', (2, 26)) What would be the best way to start fixing this? #2412 is the related bug. You can pass -p to refactor.py to fix this on a per-run basis. See r58002 (and the revisions it mentions) for a failed attempt to do this automatically.
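For reference, the construct that tripped the parser is perfectly ordinary once print is a function: keyword arguments like end= and file= (on Python 2 this required `from __future__ import print_function` at the top of the module). A small self-contained example:

```python
from io import StringIO

# print as a *function* taking keyword arguments -- the exact form
# from x.py above, redirected into a buffer so the output is checkable.
buf = StringIO()
print("Hello,", "world!", end=' ', file=buf)
print("done", file=buf)

assert buf.getvalue() == "Hello, world! done\n"
```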
Re: [Python-Dev] unittest's redundant assertions: asserts vs. failIf/Unlesses
On Wed, Mar 19, 2008 at 10:12 AM, Michael Urman [EMAIL PROTECTED] wrote: On Wed, Mar 19, 2008 at 10:44 AM, Stephen J. Turnbull [EMAIL PROTECTED] wrote: So we should add this to 2to3, no? They're going to run that anyway. If 2to3 can handle this, that removes the larger half of my objection. I was under the impression that this kind of semantic inferencing was beyond its capabilities. But even if so, maybe it's safe to assume that those names aren't used in other contexts. 2to3 can indeed handle this, but I'm not sure I would want it run automatically (rather have it be opt-in, the way several other fixers are). Solid test suites are critical to the transition process, and changing method names around may upset that. It's unlikely, sure, but it may add to general unease. The way I'd see such a fixer working is that people would run it over their 2.x codebase, commit that change, then transition the rest of their code at release-time, without having to worry about gratuitous code changes in their test suite. Collin Winter
Re: [Python-Dev] unittest's redundant assertions: asserts vs. failIf/Unlesses
On Wed, Mar 19, 2008 at 5:02 PM, Jonathan Lange [EMAIL PROTECTED] wrote: On Wed, Mar 19, 2008 at 6:24 PM, Gabriel Grant [EMAIL PROTECTED] wrote: Hi all, This gem from unittest.py is pretty much the opposite of one obvious way: # Synonyms for assertion methods [snip] Could these be removed for 3k? I agree with others who say that we shouldn't do this for Python 3k. If we want to get rid of them, we should deprecate them, wait a release or so, *then* remove them. It seems to me if we want to remove this at some point, the time is now. I'm not aware of anything starting off deprecated in 3k - my impression is the whole point of 3k is to clean house. Deprecation warnings can be put into 2.6 if people think that's necessary, but the more important thing will be including it in 2to3. I'm working on that right now, so if/when the actual wording is finalized, it should just be a matter of changing around the order of the function names in a dict. -Gabriel
Re: [Python-Dev] The Breaking of distutils and PyPI for Python 3000?
Martin v. Löwis wrote: I don't see the need to fork PyPI. For packages (or distributions, to avoid confusion with Python packages), I see two options: a) provide a single release that supports both 2.x and 3.x. b) switch to Python 3 at some point (i.e. burn your bridges). You seem to be implying that some projects may release separate source distributions. I cannot imagine why somebody would want to do that. Yes, I am assuming that existing projects would at some point introduce a 3.x version and maybe continue a 2.x version as separate distros, similar to Python itself. Then the large number of existing unqualified dependencies on, say SQLObject, would pull in the higher 3.x version and crash. It's the older projects that don't get updated often that are at risk of being destabilized by the arrival of 3.x specific code in PyPI. Are developers for Python 3.x encouraged in 3.x guidelines to release 'fat' distributions that combine 2.x and 3.x usable versions? There is also some hassle with 2.x programmers browsing PyPI for useful modules to incorporate in their programs, downloading them (w/easy_install so they don't see the project website) and getting streams of errors because they unknowingly hit a 3.x-specific version. Perhaps a convention of a keyword or more likely a new trove classifier that spells out 3.x stuff, with indicators on package info pages and query filters on PyPI against that? 2. There have been attempts over the years to fix distutils, with the last one being in 2006 by Anthony Baxter. He stated that a major hurdle was the strong demand to respect backward compatibility and he finally gave up. Can you kindly refer to some archived discussion for that? http://mail.python.org/pipermail/python-dev/2006-April/063943.html I started looking at this. The number of complaints I got when I started on this that it would break the existing distutils based installers totally discouraged me. In addition, the existing distutils codebase is ... not good.
It is flatly not possible to fix distutils and preserve backwards compatibility. -Anthony Baxter One of the purposes of Python 3.0 was the freedom to break backward compatibility for the sake of doing the right thing. So is it now permissible to give distutils a good reworking and stop letting compatibility issues hold us back? I don't know what the proposed changes are, but for some changes, in general, I feel that the need for backwards compatibility is exaggerated. A controversial point, I'm afraid. Perhaps it is time for a parallel rewrite, so that those setup.py scripts that import distutils get the old behavior, and those that import distutils2 get the new, and we let attrition and the community decide which is standard. -Jeff
Re: [Python-Dev] unittest's redundant assertions: asserts vs. failIf/Unlesses
On 19/03/2008, Michael Urman [EMAIL PROTECTED] wrote: OTOH, I'd rather there be OOWTDI so whatever the consensus is is fine with me. This strikes me as a gratuitous API change of the kind Guido was warning about in his recent post: Don't change your APIs incompatibly when porting to Py3k This seems compelling to me. And as Glyph mentioned, the testing APIs are the most critical ones to keep working when moving from 2 to 3. -1 on this change as part of 3.0. Either do it in 2.6 (which wouldn't satisfy backward compatibility requirements) or defer it to 3.1 or later. OK, starting 3.0 with deprecated methods is a nuisance, but the testing framework seems to me to be a valid special case. Or just don't bother at all. It's not *that* bad. Paul.
Re: [Python-Dev] Capsule Summary of Some Packaging/Deployment Technology Concerns
At 05:15 PM 3/19/2008 -0500, Jeff Rush wrote: Phillip J. Eby wrote: At 03:57 AM 3/19/2008 -0500, Jeff Rush wrote: Are you open to giving certain others patch view/commit privileges to setuptools? Jim Fulton has such already. I'm open to extending that to others who have a good grasp of the subtleties involved. Truthfully, if we can just get 0.6 put to bed, I could probably open up the trunk a lot wider. What is needed to put 0.6 to bed? How can we help accelerate this? Get a tracker set up. I'm already in the main Python one, might as well use that. It certainly is possible for someone to create a parallel packaging moduleset that uses the existing eggs format and PyPI but without the current codebase, and then, once proven to work, lobby for it as distutils 3000. Yep. And I believe that something will look rather more like zc.buildout than setuptools, actually. Specifically in being data-driven rather than script-driven, and in the flexibility of what sort of parts get built and by what methods. Setuptools is still too rooted in distutils' world, the world where you can't depend on any other components being around to build things with. Frankly I'd like to see setuptools exploded, with those parts of general use folded back into the standard library, the creation of a set of non-implementation-specific documents of the distribution formats and behavior, leaving a small core of one implementation of how to do it and the door open for others to compete with their own implementation. Apart from the exploding part, there are already documents. The only thing that makes them implementation-specific is that they haven't passed through any magic blessing process to make them standards. You should document those ideas someplace and start getting community input. There are a lot of diverse opinions on the right way to do this and the way ahead is quite unclear.
We might be talking about different things, as I'm more concerned with replacing setuptools and distutils on the build-and-distribute side. What's needed there is more the weeding out of too many ways to do simple things, and fixing the complete absence of ways to do complex things. :) For simple things the distutils are too hard, and for slightly-more-complex things, the entry barrier encourages people to abandon and replace them. On the package management side, I'm somewhat more inclined to agree with the need for a community approach, though. btw, offtopic question: are you by any chance the same Jeff Rush who invented EchoMail? Yep, that's me. Not many remember the Fidonet days. I designed EchoMail on a napkin during a DFW Sysop pizza party during a conversation on what to do with the unused capability of inter-BBS private file transfers. It too escaped its ecosystem and spread like wildfire, almost getting banned from Fidonet. ;-) Ah, so you *do* know what it's like to develop setuptools, then. I might even have met you at the one DFW sysop pizza party I ever attended. Back then, I ran the FreeZone, and before that, Ferris Bueller's Fine Arts Forum, back in the late 80's and early 90's. My wife met me through the D/FW BBS list in the back of Computer Shopper, with a modem she bought at Software, Etc., up in Allen or wherever that place was. Not the chain store, the little consignment shop. Those were the days. But now we're *really* getting off-topic. :)
Re: [Python-Dev] The Breaking of distutils and PyPI for Python 3000?
Jeff Rush wrote: Perhaps it is time for a parallel rewrite, so that those setup.py who import distutils get the old behavior, and those who import distutils2 get the new, That sounds good to me. If anyone wants to have a go at this, I have some ideas on how to structure it that I'd be happy to discuss. -- Greg
[Python-Dev] First green Windows x64 buildbots!
We've just experienced our first 2.6 green x64 Windows builds on the build slaves! Well, almost green. Thomas's 'amd64 XP trunk' ran out of disk: 304 tests OK. 1 test failed: test_largefile == ERROR: test_seek (test.test_largefile.TestCase) -- Traceback (most recent call last): File C:\buildbot\trunk.heller-windows-amd64\build\lib\test\test_largefile.py, line 42, in test_seek f.flush() IOError: [Errno 28] No space left on device Sorry about that Thomas ;-) Trent.
[Python-Dev] PyCon: please review my pending patches
Dear sprinters! I have a batch of pending patches I'd like to get into Python before the next alphas are sent out. The math, epoll/kqueue and shell folder patches just need a review. The memory management patches are more complex. Please refer to the thread int/float freelists vs pymalloc, too. epoll and kqueue patch: http://bugs.python.org/issue1657 Mark's and my math branch: trunk$ svnmerge.py merge -S svn+ssh://[EMAIL PROTECTED]/python/branches/trunk-math Windows shell folder patch for os.path: http://bugs.python.org/issue1763 Memory management of ints, floats and longs http://bugs.python.org/issue2039 http://bugs.python.org/issue2013 I also have two pending PEPs. I'd like to see at least PEP 370 in Python 2.6 and 3.0. It's not as intrusive as PEP 369 and IMHO very useful for deployment of Python applications. Post import hooks http://www.python.org/dev/peps/pep-0369/ Per user site-packages directory http://www.python.org/dev/peps/pep-0370/ Christian
Re: [Python-Dev] First green Windows x64 buildbots!
Great work Trent! You'll need to take a picture of Martin buying you the beer once you get the rest green. :-) n On Wed, Mar 19, 2008 at 5:57 PM, Trent Nelson [EMAIL PROTECTED] wrote: We've just experienced our first 2.6 green x64 Windows builds on the build slaves! Well, almost green. Thomas's 'amd64 XP trunk' ran out of disk: 304 tests OK. 1 test failed: test_largefile == ERROR: test_seek (test.test_largefile.TestCase) -- Traceback (most recent call last): File C:\buildbot\trunk.heller-windows-amd64\build\lib\test\test_largefile.py, line 42, in test_seek f.flush() IOError: [Errno 28] No space left on device Sorry about that Thomas ;-) Trent.
Re: [Python-Dev] Python 3000: Special type for object attributes map keys
* Neal Norwitz [EMAIL PROTECTED] [2008-03-18 18:54:47 -0500]: First, you should measure the current speed difference. Something like: $ ./python.exe -m timeit -s 'd = {1: None}' 'd[1]' 100 loops, best of 3: 0.793 usec per loop $ ./python.exe -m timeit -s 'd = {1: None}' 'd[1]' 100 loops, best of 3: 0.728 usec per loop My python is a debug version, so a release version might be faster for ints. If not, the first task would be to speed up int lookups. :-) [EMAIL PROTECTED]:~ python -V Python 2.4.5 [EMAIL PROTECTED]:~ python -m timeit -s 'd = {1: None}' 'd[1]' 1000 loops, best of 3: 0.142 usec per loop [EMAIL PROTECTED]:~ python -m timeit -s 'd = {1: None}' 'd[1]' 1000 loops, best of 3: 0.138 usec per loop [EMAIL PROTECTED]:~ python2.5 -V Python 2.5.2 [EMAIL PROTECTED]:~ python2.5 -m timeit -s 'd = {1: None}' 'd[1]' 1000 loops, best of 3: 0.136 usec per loop [EMAIL PROTECTED]:~ python2.5 -m timeit -s 'd = {1: None}' 'd[1]' 1000 loops, best of 3: 0.126 usec per loop -- mithrandi, i Ainil en-Balandor, a faer Ambar
[Python-Dev] how to build extensions for Windows?
I've set up a Parallels virtual machine on my Mac, and have succeeded in getting Windows XP running in it! And I've installed MinGW, as well. Now I'd like to learn how to build the SSL module from source on Windows for Python 2.5.2. Is there any documentation on the process of building an extension from scratch that's appropriate for someone who doesn't know much about Windows? I'm looking for step-by-step. What about this? http://www.mingw.org/MinGWiki/index.php/Python%20extensions Bill
Re: [Python-Dev] unittest's redundant assertions: asserts vs. failIf/Unlesses
On Wed, Mar 19, 2008 at 2:15 PM, [EMAIL PROTECTED] wrote: On 02:21 pm, [EMAIL PROTECTED] wrote: OTOH, I'd rather there be OOWTDI so whatever the consensus is is fine with me. This strikes me as a gratuitous API change of the kind Guido was warning about in his recent post: Don't change your APIs incompatibly when porting to Py3k I agree emphatically. Actually I think this is the most extreme case. The unit test stuff should be as stable as humanly possible between 2 and 3, moreso than any other library. This is convincing for me. Move my +1 back to 3.1. -- Namasté, Jeffrey Yasskin
Re: [Python-Dev] how to build extensions for Windows?
Having recently sunk a lot of time into the Windows build process, I'd recommend going with Visual C++ Express 2008 rather than MinGW, as this is the official compiler for 2.6/3.0. (You can download a free copy.) FWIW, I've probably been working on the Windows build side of things on and off for the past month or so, and we've only just reached a point where 32bit and 64bit Windows builds are compiling with all extension modules (bsddb, tcl/tk, ssl etc) and passing all tests (most work has gone into the x64 builds though, the 32-bit ones were already green on XP and below for 32bit). Using MinGW/gcc on Windows hasn't seen anywhere near so much attention, so, YMWV. In terms of my Windows-oriented priorities, they are as follows: - Get 3.0 32/64 Windows builds actually compiling successfully and then passing all tests (given that all build slaves for 3.0 are red that's not exactly a quick action). - Move on to the MSI installer improvements for 2.6/3.0, specifically with regards to the VCRT9 runtime and signing of the installer/binaries. - Maybe putting some cycles into Python builds on MinGW. To be honest though, the main motivation for doing that will be to demonstrate that a Python executable compiled with Visual Studio 2008 Professional with Profile Guided Optimisation will outperform a MinGW/gcc build ;-) Trent. From: [EMAIL PROTECTED] [EMAIL PROTECTED] On Behalf Of Bill Janssen [EMAIL PROTECTED] Sent: 19 March 2008 20:02 To: python-dev@python.org Subject: [Python-Dev] how to build extensions for Windows? I've set up a Parallels virtual machine on my Mac, and have succeeded in getting Windows XP running in it! And I've installed MinGW, as well. Now I'd like to learn how to build the SSL module from source on Windows for Python 2.5.2. Is there any documentation on the process of building an extension from scratch that's appropriate for someone who doesn't know much about Windows? I'm looking for step-by-step. What about this? 
http://www.mingw.org/MinGWiki/index.php/Python%20extensions Bill
[Python-Dev] trunk buildbot status
Quick update on the status of the trunk buildbots: Failing: [x86 gentoo trunk (Neal Norwitz)] This has been failing at the same point for the past couple of days now: test_sqlite command timed out: 1800 seconds without output, killing pid 15168 process killed by signal 9 program finished with exit code -1 None of the other buildbots seem to be encountering the same problem. Neal, got any idea what's going on with this one? [alpha Tru64 5.1 trunk (Neal Norwitz)] test_tarfile started failing recently (within the past few days) with CRC checks. See http://www.python.org/dev/buildbot/trunk/alpha%20Tru64%205.1%20trunk/builds/2712/step-test/0. Greg updated the test such that it prints out some more detail about the failure so we're waiting on that at the moment. [hppa Ubuntu trunk (Matthias Klose)] This has been consistently failing in test_socketserver for as long as I can remember: test_socketserver make: *** [buildbottest] Alarm clock program finished with exit code 2 I just updated that test such that it waits 20 seconds instead of 3 seconds at the end of the test if the server hasn't shut down -- waiting for the test results of this still. [x86 XP trunk (Joseph Armbruster)] This box didn't survive the recent build changes, but I can't figure out why, none of the other Windows boxes encounter this error: The following error has occurred during XML parsing: File: C:\python\buildarea\trunk.armbruster-windows\build\PCbuild\_bsddb.vcproj Line: 179 Column: 1 Error Message: Illegal qualified name character. The file 'C:\python\buildarea\trunk.armbruster-windows\build\PCbuild\_bsddb.vcproj' has failed to load. Can someone check a clean trunk build on a Windows system that *only* has Visual C++ Express 2008? The latest build system updates don't rely on any features of Visual Studio Professional, but the tools use a lot of common files, and perhaps a Service Pack needs to be applied or something. 
[amd64 XP trunk (Thomas Heller)] Builds fine, all tests pass except for test_largefile, which is failing as there's no more space left on the drive ;-) [x86 XP-4 trunk (David Bolen)] This is currently flagged as having failed tests, but I don't think it's finished building since the finalised build updates, so hopefully the BSDDB errors in the last run will be resolved when it finishes the latest build. [x86 FreeBSD 2 trunk (Jeroen Ruigrok van der Werven)] This is a FreeBSD 6.3-STABLE box (which switched to curses 5.6 from 5.2) -- there's been an ongoing thread with regards to why curses has started failing, Jeroen can probably provide more info on that. Either way I don't anticipate a quick fix for this particular slave, unfortunately. Neal/Martin, I'd like to promote the following slaves to the stable list: [g4 osx.4] [x86 W2k8] [AMD64 W2k8] [ppc Debian unstable] [sparc Ubuntu] [sparc Debian] [PPC64 Debian] [S-390 Debian] [x86 XP-3] [amd64 XP] [x86 FreeBSD] [x86 FreeBSD 3] The trunk builds of these slaves have been the most reliable since I've been tracking. Trent.
Re: [Python-Dev] how to build extensions for Windows?
Having recently sunk a lot of time into the Windows build process, I'd recommend going with Visual C++ Express 2008 rather than MinGW, as this is the official compiler for 2.6/3.0. (You can download a free copy.) Thanks for the advice, but it's sort of Greek to me. Is there a step-by-step guide somewhere? Bill
Re: [Python-Dev] Capsule Summary of Some Packaging/Deployment Technology Concerns
The Python sandbox has a setuptools directory. Is this the canonical location for the code? Yes, it is. If so, then anybody who has Python commit privileges can commit to it and help further develop setuptools. They can, but they shouldn't. Nothing should be committed there without pje's approval (in whatever form he chooses to give such approval). If not, why not and what is the sandbox setuptools used for? I think it shouldn't be in sandbox, but toplevel, but that's a minor detail. Maybe I misunderstand the English word sandbox. Regards, Martin
Re: [Python-Dev] trunk buildbot status
Trent Nelson wrote: Quick update on the status of the trunk buildbots: [x86 XP trunk (Joseph Armbruster)] This box didn't survive the recent build changes, but I can't figure out why, none of the other Windows boxes encounter this error: The following error has occurred during XML parsing: File: C:\python\buildarea\trunk.armbruster-windows\build\PCbuild\_bsddb.vcproj Line: 179 Column: 1 Error Message: Illegal qualified name character. The file 'C:\python\buildarea\trunk.armbruster-windows\build\PCbuild\_bsddb.vcproj' has failed to load. Can someone check a clean trunk build on a Windows system that *only* has Visual C++ Express 2008? The latest build system updates don't rely on any features of Visual Studio Professional, but the tools use a lot of common files, and perhaps a Service Pack needs to be applied or something. I just built the trunk on a Windows XP x86 box that only has Visual C++ Express 2008 installed. I got a bunch of errors with sqlite, tcl, db-4.4.20, and ssl, but the interpreter built and appears to run ok. But since I don't have bsddb installed, I don't think I'm executing the portion of the build process that you find failing. I don't have time to install bsddb tonight, but I can do that in about 24 hours if you still need me to. Eric.
Re: [Python-Dev] unittest's redundant assertions: asserts vs. failIf/Unlesses
On Wed, Mar 19, 2008 at 7:05 PM, Paul Moore [EMAIL PROTECTED] wrote: This strikes me as a gratuitous API change of the kind Guido was warning about in his recent post: Don't change your APIs incompatibly when porting to Py3k This seems compelling to me. And as Glyph mentioned, the testing APIs are the most critical ones to keep working when moving from 2 to 3. It seems as though this declaration, in this case, is in direct conflict with the overarching theme of one obvious way to do things. That statement, of course, is the reason so many things are being changed in the move to 3k already, and I don't see why this shouldn't be one of them (particularly, because it's so easy to account for this in 2to3). As one is a global statement, and the other is fairly local, I vote for the change. -- Cheers, Leif
Re: [Python-Dev] First green Windows x64 buildbots!
Then put the picture on your screen and SEND ME A SCREENSHOT! regards Steve Neal Norwitz wrote: Great work Trent! You'll need to take a picture of Martin buying you the beer once you get the rest green. :-) n On Wed, Mar 19, 2008 at 5:57 PM, Trent Nelson [EMAIL PROTECTED] wrote: We've just experienced our first 2.6 green x64 Windows builds on the build slaves! Well, almost green. Thomas's 'amd64 XP trunk' ran out of disk: 304 tests OK. 1 test failed: test_largefile == ERROR: test_seek (test.test_largefile.TestCase) -- Traceback (most recent call last): File C:\buildbot\trunk.heller-windows-amd64\build\lib\test\test_largefile.py, line 42, in test_seek f.flush() IOError: [Errno 28] No space left on device Sorry about that Thomas ;-) Trent.
Re: [Python-Dev] trunk buildbot status
I'd recommend cd'ing to your trunk root directory and running Tools\buildbot\build.bat from there -- it'll automatically check out all the dependencies and build via command line with vcbuild (building via Visual Studio almost always Does The Right Thing, command line builds often take a bit more coercing). From: Eric Smith [EMAIL PROTECTED] Sent: 19 March 2008 20:49 To: Trent Nelson Cc: python-dev@python.org; [EMAIL PROTECTED]; [EMAIL PROTECTED]; [EMAIL PROTECTED]; [EMAIL PROTECTED]; [EMAIL PROTECTED] Subject: Re: [Python-Dev] trunk buildbot status Trent Nelson wrote: Quick update on the status of the trunk buildbots: [x86 XP trunk (Joseph Armbruster)] This box didn't survive the recent build changes, but I can't figure out why, none of the other Windows boxes encounter this error: The following error has occurred during XML parsing: File: C:\python\buildarea\trunk.armbruster-windows\build\PCbuild\_bsddb.vcproj Line: 179 Column: 1 Error Message: Illegal qualified name character. The file 'C:\python\buildarea\trunk.armbruster-windows\build\PCbuild\_bsddb.vcproj' has failed to load. Can someone check a clean trunk build on a Windows system that *only* has Visual C++ Express 2008? The latest build system updates don't rely on any features of Visual Studio Professional, but the tools use a lot of common files, and perhaps a Service Pack needs to be applied or something. I just built the trunk on a Windows XP x86 box that only has Visual C++ Express 2008 installed. I got a bunch of errors with sqlite, tcl, db-4.4.20, and ssl, but the interpreter built and appears to run ok. But since I don't have bsddb installed, I don't think I'm executing the portion of the build process that you find failing. I don't have time to install bsddb tonight, but I can do that in about 24 hours if you still need me to. Eric. 
Re: [Python-Dev] PEP 365 (Adding the pkg_resources module)
I don't understand PyPI all that well; it seems poor design that the browsing via keywords is emphasized but there is no easy way to *search* for a keyword (the list of all packages is not emphasized enough on the main page -- it occurs in the side bar but not in the main text). I don't understand. What is browsing via keywords and how is that emphasized? (once I know that, I can look into ways of searching for keywords) I assume there's a programmatic API (XML-RPC?) but I haven't found it yet. The recommended programmatic API is http://pypi.python.org/simple/ Not sure what you were trying to achieve programmatically; typically people know what they want to install (e.g. threadedcomments), and then the tool goes directly to http://pypi.python.org/simple/threadedcomments/ Regards, Martin
Re: [Python-Dev] The Breaking of distutils and PyPI for Python 3000?
While not quite to the same scale as the 2 to 3 transition, this problem seems like one that would generally already exist. If one writes code that uses newer 2.4/2.5 features (say decorators for example,) it won't build/run on 2.3 or earlier installs. How have people been handling that sort of situation? Is it only by not using the newer features or is there some situation I'm just not seeing in a brief review of a few projects pages on PyPI where there is only one source tarball? I think packages have taken all sorts of responses to this issue. Some will list the minimum required Python version in their README, some might put a test in setup.py that aborts installation if the Python version is too old, some may just install and let the user find out at runtime. Typically, packages try to support all the Python versions that their users still use. If a user of an older Python version comes along, they'll just need to fetch the older release (which hopefully is still online, or can be extracted from the source repository). Regards, Martin
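The "abort installation if the Python version is too old" approach Martin mentions takes only a few lines at the top of setup.py. A minimal sketch; the (2, 4) minimum and the `version_ok` helper are invented for illustration, not taken from any particular package:

```python
# Top of a hypothetical setup.py: refuse to install on interpreters that
# are too old, rather than letting the user find out at runtime.
import sys

REQUIRED = (2, 4)  # example threshold; a real package picks its own


def version_ok(actual=None):
    """True if the given (or running) interpreter version meets REQUIRED."""
    if actual is None:
        actual = sys.version_info[:2]
    return tuple(actual) >= REQUIRED


if not version_ok():
    sys.stderr.write("This package requires Python %d.%d or later\n" % REQUIRED)
    sys.exit(1)

# ...the usual distutils setup() call would follow here...
```

Because the check runs before setup() is even imported, the failure mode is a clear one-line message instead of a SyntaxError from some module that happens to use newer syntax.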
Re: [Python-Dev] The Breaking of distutils and PyPI for Python 3000?
You seem to be implying that some projects may release separate source distributions. I cannot imagine why somebody would want to do that. That's odd. I can't imagine why anybody would *not* want to do that. Given the number of issues 2to3 can't fix (because it would be too dangerous to guess) Like which one specifically? , I certainly can't imagine a just-in-time porting solution that would work reliably. I can imagine that absolutely. Making two releases means I can migrate once and only once and be done with it. No, you won't be done. You have to maintain two releases in parallel now. Making a single release work on 2.x and 3.x means I have to keep all of the details of both Python 2 and 3 in my head all the time as I code, not to mention litter my codebase with # the following ugly hack lets us work with Python 2 and 3 comments so someone else doesn't undo all my hard work when they run the tests on Python 3 but not 2? No thanks. My brain is too small. So you'd rather put more work into maintenance sequentially. Fair enough. Regards, Martin
Re: [Python-Dev] The Breaking of distutils and PyPI for Python 3000?
Yes, I am assuming that existing projects would at some point introduce a 3.x version and maybe continue a 2.x version as separate distros, similar to Python itself. Then the large number of existing unqualified dependencies on, say SQLObject, would pull in the higher 3.x version and crash. It's the older projects that don't get updated often that are at risk of being destabilized by the arrival of 3.x specific code in PyPI. Are developers for Python 3.x encouraged in 3.x guidelines to release 'fat' distributions that combine 2.x and 3.x usable versions? Passive voice is misleading here: encouraged by whom? *I* encourage people to consider that option, rather than assuming it couldn't possibly work. Whether it actually works, I don't know. I hope it would work, and I hope it would not be fat at all. Perhaps a convention of a keyword or more likely a new trove classifier that spells out 3.x stuff, with indicators on package info pages and query filters on PyPI against that? I'm fine with adding more trove classifiers if that solves the problem (although I still assume the problem doesn't actually exist). As always, a classifier should not be added until there actually are two packages that want to use it. Can you kindly refer to some archived discussion for that? http://mail.python.org/pipermail/python-dev/2006-April/063943.html I started looking at this. The number of complaints I got when I started on this that it would break the existing distutils based installers totally discouraged me. In addition, the existing distutils codebase is ... not good. It is flatly not possible to fix distutils and preserve backwards compatibility. -Anthony Baxter Thanks. I still have the same position as I had then - if distutils is broken, it should be fixed, not ignored. A controversial point, I'm afraid. 
Perhaps it is time for a parallel rewrite, so that those setup.py who import distutils get the old behavior, and those who import distutils2 get the new, and we let attrition and the community decide which is standard. Is there a list of the problems with distutils somewhere? It always worked fine for me, so I see no reason to fix it in the first place. Regards, Martin
Re: [Python-Dev] unittest's redundant assertions: asserts vs. failIf/Unlesses
Leif Walsh wrote: On Wed, Mar 19, 2008 at 7:05 PM, Paul Moore [EMAIL PROTECTED] wrote: This strikes me as a gratuitous API change of the kind Guido was warning about in his recent post: Don't change your APIs incompatibly when porting to Py3k This seems compelling to me. And as Glyph mentioned, the testing APIs are the most critical ones to keep working when moving from 2 to 3. It seems as though this declaration, in this case, is in direct conflict with the overarching theme of one obvious way to do things. That statement, of course, is the reason so many things are being changed in the move to 3k already, and I don't see why this shouldn't be one of them (particularly, because it's so easy to account for this in 2to3). As one is a global statement, and the other is fairly local, I vote for the change. As Guido(?) pointed out, this would be acceptable because it's simply different spellings of the same way. regards Steve
Re: [Python-Dev] how to build extensions for Windows?
I've set up a Parallels virtual machine on my Mac, and have succeeded in getting Windows XP running in it! And I've installed MinGW, as well. Now I'd like to learn how to build the SSL module from source on Windows for Python 2.5.2. Is there any documentation on the process of building an extension from scratch that's appropriate for someone who doesn't know much about Windows? I'm looking for step-by-step. I'll make a custom Bill-Janssen-Step-By-Step-List: try: 1. Write a setup.py file. See distutils documentation for details. 2. Install Python 2.5.2 3. Run c:\python25\python.exe setup.py build --compiler=mingw32 4. Run c:\python25\python.exe setup.py build --compiler=mingw32 bdist_wininst except Exception, e: A. Post e to python-dev else: 5. Upload dist/* to PyPI (you can use setup.py upload for that also) HTH, Martin
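For step 1 of Martin's list, a hypothetical minimal setup.py for a C extension might look like the fragment below. The module name and source file are invented for illustration; a real SSL-module build would list its own sources, include directories and libraries.

```python
# Hypothetical minimal setup.py for a single-file C extension.
# Build with:  python setup.py build --compiler=mingw32
from distutils.core import setup, Extension

setup(
    name="example",
    version="0.1",
    ext_modules=[Extension("example", sources=["example.c"])],
)
```

With this file next to example.c, steps 3 and 4 are exactly the two build commands Martin gives.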
Re: [Python-Dev] Improved thread switching
Hmmm, sorry if I'm missing something obvious, but, if the occasional background computations are sufficiently heavy -- why not fork, do said computations in the child thread, and return the results via any of the various available IPC approaches? I've recently (at Pycon, mostly) been playing devil's advocate (i.e., being PRO-threads, for once) on the subject of utilizing multiple cores effectively -- but the classic approach (using multiple _processes_ instead) actually works quite well in many cases, and this application server would appear to be one. (the pyProcessing package appears to offer an easy way to migrate threaded code to multiple-processes approaches, although I've only played around with it, not [yet] used it for production code). Alex On Wed, Mar 19, 2008 at 10:49 AM, Adam Olsen [EMAIL PROTECTED] wrote: On Wed, Mar 19, 2008 at 11:25 AM, Stefan Ring [EMAIL PROTECTED] wrote: Adam Olsen rhamph at gmail.com writes: So you want responsiveness when idle but throughput when busy? Exactly ;) Are those calculations primarily python code, or does a C library do the grunt work? If it's a C library you shouldn't be affected by safethread's increased overhead. It's Python code all the way. Frankly, it's a huge mess, but it would be very very hard to come up with a scalable solution that would allow to optimize certain hotspots and redo them in C or C++. There isn't even anything left to optimize in particular because all those low hanging fruit have already been taken care of. So it's just ~30kloc Python code over which the total time spent is quite uniformly distributed :(. I see. Well, at this point I think the most you can do is file a bug so the problem doesn't get forgotten. If nothing else, if my safethread stuff goes in it'll very likely include a --with-gil option, so I may put together a FIFO scheduler. 
-- Adam Olsen, aka Rhamphoryncus
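The fork-and-return-results approach Alex describes can be sketched with the pyProcessing API he mentions (the third-party "processing" package of the 2.5 era, which later entered the stdlib as multiprocessing in 2.6; the stdlib import is used here). heavy_computation is an invented stand-in for the CPU-bound Python code from the thread:

```python
from multiprocessing import Pool


def heavy_computation(n):
    # placeholder for the occasional heavy, pure-Python background work
    return sum(i * i for i in range(n))


def run_in_child(n):
    # Do the work in a child process so the parent stays responsive; the
    # result comes back over the package's built-in IPC (a pipe).
    with Pool(processes=1) as pool:
        return pool.apply_async(heavy_computation, (n,)).get(timeout=60)


if __name__ == "__main__":
    print(run_in_child(1000))
```

Because each request runs in a separate process, the GIL contention discussed earlier in the thread disappears; the trade-off is that arguments and results must be picklable to cross the process boundary.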
Re: [Python-Dev] how to build extensions for Windows?
Nice and simple, thanks, Martin! Works OK after I upgraded to MinGW 5.1.3 (5.0.0 is what I had, and the gcc build didn't work with that). I think I've got it! Bill
Re: [Python-Dev] trunk buildbot status
Unfortunately, I don't have ssh access from my hotel room. This means I won't be able to help much until I get home. On Wed, Mar 19, 2008 at 7:26 PM, Trent Nelson [EMAIL PROTECTED] wrote: Quick update on the status of the trunk buildbots: Failing: [x86 gentoo trunk (Neal Norwitz)] This has been failing at the same point for the past couple of days now: test_sqlite command timed out: 1800 seconds without output, killing pid 15168 process killed by signal 9 program finished with exit code -1 None of the other buildbots seem to be encountering the same problem. Neal, got any idea what's going on with this one? Last status was here: http://mail.python.org/pipermail/python-checkins/2008-March/066824.html I haven't logged in to check what's going on. Gerhard had some ideas in the same thread: http://mail.python.org/pipermail/python-checkins/2008-March/066863.html I just need to have some time on the machine and look into the problem. If I determine the problem is with the underlying sqlite, I'll try to upgrade it. [alpha Tru64 5.1 trunk (Neal Norwitz)] [hppa Ubuntu trunk (Matthias Klose)] I can probably diagnose both of these too when I get back from Chicago. Neal/Martin, I'd like to promote the following slaves to the stable list: [g4 osx.4] [x86 W2k8] [AMD64 W2k8] [ppc Debian unstable] [sparc Ubuntu] [sparc Debian] [PPC64 Debian] [S-390 Debian] [x86 XP-3] [amd64 XP] [x86 FreeBSD] [x86 FreeBSD 3] The trunk builds of these slaves have been the most reliable since I've been tracking. Most of these have already been promoted to stable. I just didn't restart the buildbot master since making the change. It requires a restart, not just a reconfig. I was waiting for a quiet time when the bots weren't busy, but that hasn't happened yet. :-) I can make more changes and restart when I get home. 
n
Re: [Python-Dev] PEP 365 (Adding the pkg_resources module)
On Mar 19, 2008, at 3:23 PM, Guido van Rossum wrote: If other people want to chime in please do so; if this is just a dialog between Phillip and me I might incorrectly assume that nobody besides Phillip really cares. I really care. I've used setuptools, easy_install, eggs, and pkg_resources extensively for the past year or so (and contributed a few small patches). There have been plenty of problems, but I find them to be overall useful tools. It is a great boon to a programming community to lower the costs of re-using other people's code. The Python community will benefit greatly once a way to do that becomes widely enough accepted to reach a tipping point and become ubiquitous. Setuptools is already the de facto standard, but it hasn't become ubiquitous, possibly in part because of egg hatred, about which more below. I've interviewed several successful Python hackers who hate eggs in order to understand what they hate about them, and I've taken notes from some of these interviews. (The list includes MvL, whose name was invoked earlier in this thread.) After filtering out yer basic complaining about bugs (which complaints are of course legitimate, but which don't indict setuptools as worse than other software of comparable scope and maturity), their objections seem to fall into two categories: 1. The very notion of package dependency resolution and programmable or command-line installation of packages at the language level is a bad notion. This can't really be the case. If the existence of such functionality at the programming language level were an inherently bad notion, then we would be hearing some complaints from the Ruby folks, where the Gems system is standard and ubiquitous. We hear no complaints -- only murmurs of satisfaction. 
One person recently reported to me that while there are more packages in Python, he finds himself re-using other people's code more often when he works in Ruby, because almost all Ruby software is Gemified, but only a fraction of Python software is Eggified. Often this complaint comes with the idea that eggs conflict with their system-level package management tools. (These are usually Debian/Ubuntu users.) Note that Ruby software is not too hard to include in operating system packaging schemes -- my Ubuntu Hardy apt-cache shows plenty of Ruby software. A sufficiently mature and widely supported setuptools could actually make it easier to integrate Python software into Debian -- see stdeb [1]. 2. Setuptools/eggs give me grief. What can really be the case is that setuptools causes a host of small, unnecessary problems for people who prefer to do things differently than PJE does. Personally, I prefer to use GNU stow, and setuptools causes unnecessary, but avoidable, problems for me. Many people object (rightly enough) to a ./setup.py install automatically fetching new software over the Internet by default. The fact that easy_install creates a site.py that changes the semantics of PYTHONPATH is probably the most widely and deservedly hated example of this kind of thing [2]. I could go on with a few other common technical complaints of this kind. These type-2 problems can be fixed by changing setuptools or they can be grudgingly accepted by users, while retaining compatibility with the large and growing ecosystem of eggy software. Certainly fixing setuptools to play better with others is a more likely path to success than setting out to invent a non-egg-compatible alternative. Such a project might never be implemented well enough to serve, and if it were it would probably never overtake eggs's lead in the Python ecosystem, and if it did it would probably not turn out to be a better tool. 
So, since you asked for my chime, I advise you to publicly bless eggs, setuptools, and easy_install as plausible future standards and solicit patches which address the complaints. For that matter, soliciting specific complaints would be a good start. I've done so in private many times with only partial success as to the specific part. One promising approach is to request objections in the form of automated tests that setuptools fails, e.g. [3]. Regards, Zooko O'Whielacronx

[1] http://stdeb.python-hosting.com/
[2] http://www.rittau.org/blog/20070726-02 And no, PJE's suggested trivial fix does not satisfy the objectors, as it can't support the use case of cd somepkg ; python ./setup.py install ; cd .. ; python -c 'import somepkg'.
[3] http://twistedmatrix.com/trac/ticket/2308#comment:5
Re: [Python-Dev] The Breaking of distutils and PyPI for Python 3000?
Martin v. Löwis wrote: specific code in PyPI. Are developers for Python 3.x encouraged in 3.x guidelines to release 'fat' distributions that combine 2.x and 3.x usable versions? Passive voice is misleading here: encouraged by whom? ... encouraged in __3.x guidelines__ to ...: I presume, although I've not found them yet, that there is some kind of document for developers titled something like "How to migrate your Python code from 2.x to 3.x". That document would be a logical place for advice and consideration of the tradeoffs of jumping to 3.x, maintaining two synced versions using 2to3 or 3to2, and the risks of keeping two independent releases. Identifying best practices would help developers make good choices for the community. *I* encourage people to consider that option, rather than assuming it couldn't possibly work. Whether it actually works, I don't know. I hope it would work, and I hope it would not be fat at all. So we don't have an actual success story of a dual-version distribution, even as a prototype, using 2to3 within a distutils package? I would not encourage a practice without at least one such example. Can you kindly refer to some archived discussion for that? http://mail.python.org/pipermail/python-dev/2006-April/063943.html Thanks. I still have the same position as I had then - if distutils is broken, it should be fixed, not ignored. Since the precise API was not documented well and many people began to make use of ambiguous internal interfaces, such fixes would indeed break them. So your vote would be to do the right thing, even if it results in some breakage. I can respect that philosophy. A controversial point, I'm afraid. Perhaps it is time for a parallel rewrite, so that those setup.py files which import distutils get the old behavior, and those which import distutils2 get the new, and we let attrition and the community decide which is standard. Is there a list of the problems with distutils somewhere? Unfortunately no.
Much of it is anecdotal, much of it occurs on lists outside the Python community by those attempting to package things. And some of it consists of comments by developers who peeked into the distutils source and blanched. And some of the problems are not bugs, per se, but disagreement on scope of functionality and a lack of well-known alternatives. So "just fix it if broken" doesn't work when there is no agreement on how to expand that scope. I am working on pulling together such a list, however, and getting it into the tracker, so that debate with a record of decisions can occur. I agree that it is hard to fix problems if no one is _clearly_ reporting them to us. Too much smoke, not enough light. It always worked fine for me, so I see no reason to fix it in the first place. Pardon my lack of knowledge of your background; when you say it always worked fine for you, are you referring to personal experiences using it to _install_ software, or to experiences as a packager in actually distributing complex collections of modules on different platforms? -Jeff
Re: [Python-Dev] PEP 365 (Adding the pkg_resources module)
Guido van Rossum wrote: On Wed, Mar 19, 2008 at 12:54 PM, Phillip J. Eby [EMAIL PROTECTED] wrote: [a long message] I'm back at Google and *really* busy for another week or so, so I'll have to postpone the rest of this discussion for a while. If other people want to chime in please do so; if this is just a dialog between Phillip and me I might incorrectly assume that nobody besides Phillip really cares.

I care, a lot, enough to have volunteered to help with maintenance / development of setuptools back in September 2007. I think that, warts and all, setuptools is a *huge* improvement over bare distutils for nearly every use case I know about. A lot of setuptools warts are driven by related design problems in the distutils, such as the choice to use imperative / procedural code for everything: a declarative approach, with hooks for cases which actually need them (likely 5% of existing packages), would have made writing tools on top of the framework much simpler. It is ironic that Python is *too powerful* a tool for the tasks normally done by distutils / setuptools: a more restricted, and therefore introspectable, configuration-driven approach seems much cleaner.

Tres. -- === Tres Seaver +1 540-429-0999 [EMAIL PROTECTED] Palladion Software Excellence by Design http://palladion.com
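Tres's point about a declarative, introspectable approach can be illustrated with a small sketch: package metadata lives in plain text that any tool can read without executing a setup script. The file layout and section names here are hypothetical (not any real distutils or setuptools format), and the sketch uses today's configparser module for brevity.

```python
# Hedged sketch of declarative package metadata: tools introspect a plain
# config file instead of running arbitrary setup.py code. The section and
# key names are made up for illustration.
from configparser import ConfigParser

SETUP_CFG = """\
[metadata]
name = example-pkg
version = 0.1
requires = somelib >= 1.0
"""

def read_metadata(text):
    """Parse declarative metadata into a plain dict."""
    parser = ConfigParser()
    parser.read_string(text)
    return dict(parser["metadata"])

meta = read_metadata(SETUP_CFG)
print(meta["name"], meta["version"])  # example-pkg 0.1
```

Because no code runs, an index, an installer, or an OS packaging tool can all read the same metadata safely; a hook mechanism would cover the minority of packages that genuinely need imperative logic.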
[Python-Dev] Pretty-printing 2to3 Nodes
Would anyone be averse to changing pytree.Node's __repr__ so it includes the name of the symbol the node represents? The only downside is that it makes the __repr__ strings longer... But I think it's worth the length:

Node(313:simple_stmt, [Node(298:import_name, [Leaf(1, 'import'), Node(279:dotted_as_name, [Node(281:dotted_name, [Leaf(1, 'foo'), Leaf(23, '.'), Leaf(1, 'bar')]), Leaf(1, 'as'), Leaf(1, 'bang')])]), Leaf(4, '\n')])

OR just names:

Node(import_name, [Leaf(1, 'import'), Node(dotted_as_name, [Node(dotted_name, [Leaf(1, 'foo'), Leaf(23, '.'), Leaf(1, 'bar')]), Leaf(1, 'as'), Leaf(1, 'bang')])])

OR the original:

Node(313, [Node(298, [Leaf(1, 'import'), Node(279, [Node(281, [Leaf(1, 'foo'), Leaf(23, '.'), Leaf(1, 'bar')]), Leaf(1, 'as'), Leaf(1, 'bang')])]), Leaf(4, '\n')])
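A minimal sketch of the proposed change: look the type number up in a symbol table and include the name in the repr. The Node class here is simplified, and number2symbol is a tiny hand-made stand-in for lib2to3's real grammar tables.

```python
# Hedged sketch: show the grammar symbol's name alongside its numeric type.
# The table below is an illustrative subset, not the real grammar mapping.
number2symbol = {313: "simple_stmt", 298: "import_name", 281: "dotted_name"}

class Node:
    def __init__(self, type, children):
        self.type = type
        self.children = children

    def __repr__(self):
        # Fall back to the bare number for types we can't name.
        name = number2symbol.get(self.type, str(self.type))
        return "Node(%s:%s, %r)" % (self.type, name, self.children)

print(repr(Node(298, ["import"])))  # Node(298:import_name, ['import'])
```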
Re: [Python-Dev] PEP 365 (Adding the pkg_resources module)
I was using the human interface at python.org/pypi. There are two prominent links at the top of the page, Browse the tree of packages and Submit package information, followed by the 30 most recently changed packages. What I was looking for was the page for a specific package. The Browse the tree of packages link was no help. Finally I realized that in the side bar, in a small unobtrusive font, is a link to List packages, which links to a list of *all* packages, in alphabetical order. I found my package there. I think repeating that link right below browse the tree would have been sufficient. But it would have been cool if there had been a search box (also on the start page) where I could type (part of) the name of the package and it would have given me the nearest matches. On Wed, Mar 19, 2008 at 6:05 PM, Martin v. Löwis [EMAIL PROTECTED] wrote: I don't understand PyPI all that well; it seems poor design that the browsing via keywords is emphasized but there is no easy way to *search* for a keyword (the list of all packages is not emphasized enough on the main page -- it occurs in the side bar but not in the main text). I don't understand. What is browsing via keywords, and how is that emphasized? (Once I know that, I can look into ways of searching for keywords.) I assume there's a programmatic API (XML-RPC?) but I haven't found it yet. The recommended programmatic API is http://pypi.python.org/simple/ Not sure what you were trying to achieve programmatically; typically people know what they want to install (e.g. threadedcomments), and then the tool goes directly to http://pypi.python.org/simple/threadedcomments/ Regards, Martin -- --Guido van Rossum (home page: http://www.python.org/~guido/)
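For reference, a tool consuming the simple index Martin describes might do something like this. This is a sketch using the modern urllib.request API; the network fetch is shown but not exercised, and a real tool would parse the returned HTML for release-file links.

```python
# Hedged sketch of using PyPI's "simple" programmatic interface:
# each project has a page at <index>/<name>/ listing its release files.
from urllib.request import urlopen

SIMPLE_INDEX = "http://pypi.python.org/simple/"

def project_url(name):
    """URL of a project's page in the simple index."""
    return SIMPLE_INDEX + name + "/"

def fetch_links(name):
    # Returns raw HTML; a real tool would extract the <a href> links.
    with urlopen(project_url(name)) as page:
        return page.read()

print(project_url("threadedcomments"))
```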
Re: [Python-Dev] 2to3 and print function
On 19-Mar-08, at 6:44 PM, Collin Winter wrote: You can pass -p to refactor.py to fix this on a per-run basis. See r58002 (and the revisions it mentions) for a failed attempt to do this automatically. So, correct me if I'm wrong, but the relevant code is this:

-try:
-    tree = self.driver.parse_string(data)
-except parse.ParseError, e:
-    if e.type == token.EQUAL:
-        tree = self.printless_driver.parse_string(data)
-    else:
-        raise

Why not, instead of trying both parsers, scan for a __future__ import, then do the Right Thing based on that?
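The __future__-scan idea could look roughly like this. It's a regex sketch for illustration only; a robust version would use the tokenize module, since a __future__ import is only valid near the top of the module and the regex would also match inside strings.

```python
# Hedged sketch: detect "from __future__ import print_function" in source
# text, so the refactoring tool can pick the print-less grammar up front
# instead of parsing twice.
import re

FUTURE_PRINT = re.compile(
    r"^from\s+__future__\s+import\s+.*\bprint_function\b", re.MULTILINE)

def wants_print_function(source):
    """True if the module opts in to the print function."""
    return bool(FUTURE_PRINT.search(source))

src = "from __future__ import print_function\nprint('hi')\n"
print(wants_print_function(src))  # True
```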