VirtualEnvs (venv) and Powershell
Hello everyone, I've just started to investigate virtual environments as a means of preventing my 3rd-party code becoming chaotic. I've discovered that venvs can be managed quite effectively using PowerShell. When Activate.ps1 is run, the PowerShell prompt changes to indicate that the venv is active, which is nice. However, despite the official documentation, there doesn't seem to be a corresponding Deactivate.ps1. There is a deactivate.bat, but that doesn't appear to switch the paths back to their pre-venv state. What I would really like is a Git-Bash-based alternative to PowerShell for managing my virtual environments. Has anyone discovered tools or techniques to achieve this? Thanks for any response -- Carl -- https://mail.python.org/mailman/listinfo/python-list
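For what it's worth, a venv's POSIX-style activate script works from Git Bash too, and it defines a `deactivate` shell function rather than shipping a separate Deactivate script. A minimal sketch (the venv name "myenv" is just an example; on Windows the scripts live in `myenv/Scripts` rather than `myenv/bin`):

```shell
# Create a venv from any POSIX-style shell (including Git Bash):
python3 -m venv myenv

# Activate by sourcing the script; this also defines a `deactivate`
# shell *function* that restores PATH and the prompt -- which is why
# there is no separate Deactivate script.
. myenv/bin/activate        # Linux / macOS
# . myenv/Scripts/activate  # Git Bash on Windows

# ...pip install, run python, etc...

deactivate  # undoes everything the activate script changed
```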
Re: How well do you know Python?
On 07/05/2016 05:50 AM, Chris Angelico wrote:
> On Tue, Jul 5, 2016 at 9:33 PM, Peter Otten <__pete...@web.de> wrote:
>> Chris Angelico wrote:
>>
>>> On Tue, Jul 5, 2016 at 6:36 PM, Peter Otten <__pete...@web.de> wrote:
>>>> What will
>>>>
>>>> $ cat foo.py
>>>> import foo
>>>> class A: pass
>>>> print(isinstance(foo.A(), A))
>>>> $ python -c 'import foo'
>>>> ...
>>>> $ python foo.py
>>>> ...
>>>>
>>>> print?
[snip]
>> The intended lesson was that there may be two distinct classes
>>
>> __main__.A and foo.A
[snip]
> The two distinct classes problem is a very real one, and comes of
> circular (or not-technically-circular, as in the second case) imports.

It can also come of pathological setups where a path and its parent are both on sys.path, so all import paths have an "optional" prefix (but you actually get a different copy of the module depending on whether you use that prefix).

Carl

signature.asc
Description: OpenPGP digital signature
-- 
https://mail.python.org/mailman/listinfo/python-list
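The quoted foo.py puzzle can be reproduced end-to-end. This sketch writes the file to a temporary directory and runs it both ways (the file and class names are simply the ones from the quoted example):

```python
import os
import subprocess
import sys
import tempfile
import textwrap

# The quoted self-importing module: run as a script, `import foo` loads a
# *second* copy of the same file under the name "foo", so there end up
# being two distinct A classes: __main__.A and foo.A.
source = textwrap.dedent("""\
    import foo
    class A: pass
    print(isinstance(foo.A(), A))
""")

with tempfile.TemporaryDirectory() as d:
    with open(os.path.join(d, "foo.py"), "w") as f:
        f.write(source)

    # `python -c 'import foo'`: only one module object ever exists; the
    # self-import finds the in-progress module in sys.modules, so the
    # isinstance check prints True.
    as_import = subprocess.run(
        [sys.executable, "-c", "import foo"],
        cwd=d, capture_output=True, text=True)
    print(as_import.stdout.split())   # ['True']

    # `python foo.py`: the script runs as __main__, the inner import runs
    # the file again as module foo (printing True), and then the
    # __main__-level check compares classes from the two copies: False.
    as_script = subprocess.run(
        [sys.executable, "foo.py"],
        cwd=d, capture_output=True, text=True)
    print(as_script.stdout.split())   # ['True', 'False']
```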
Re: pytz and Python timezones
Hi Johannes,

On 06/11/2016 05:37 AM, Johannes Bauer wrote:
> I try to create a localized timestamp
> in the easiest possible way. So, intuitively, I did this:
>
> datetime.datetime(2016,1,1,0,0,0,tzinfo=pytz.timezone("Europe/Berlin"))

That is indeed intuitive, but unfortunately (due to a misunderstanding between the original authors of Python's datetime module and the author of pytz about how timezone-aware datetimes should work in Python) it is not correct. The correct way to create a localized datetime using pytz is this:

tz = pytz.timezone('Europe/Berlin')
dt = tz.localize(datetime.datetime(2016, 1, 1, 0, 0, 0))

This is documented prominently in the pytz documentation: http://pytz.sourceforge.net/

> Which gives me:
>
> datetime.datetime(2016, 1, 1, 0, 0, tzinfo=<DstTzInfo 'Europe/Berlin' LMT+0:53:00 STD>)
>
> Uh... what?

When you create a pytz timezone object, it encompasses all historical UTC offsets that have ever been in effect in that location. When you pass a datetime to the `localize()` method of that timezone object, it is able to figure out which actual UTC offset was in effect at that local time in that location, and apply the correct "version" of itself to that datetime.

However, no such logic is built into the datetime module itself. So when you just apply a pytz timezone directly to the tzinfo property of a datetime, pytz by default falls back to the first entry in its historical table of UTC offsets for that location. For most locations, that is something called "LMT" or Local Mean Time, which is the customary time in use at that location prior to the standardization of timezones. And in most locations, LMT is offset from UTC by a strange number of minutes. That's why you see "LMT" and the odd 53-minute offset above.
> This here:
>
> pytz.timezone("Europe/Berlin").localize(datetime.datetime(2016,1,1))
>
> Gives me the expected result of:
>
> datetime.datetime(2016, 1, 1, 0, 0, tzinfo=<DstTzInfo 'Europe/Berlin' CET+1:00:00 STD>)
>
> Can someone explain what's going on here and why I end up with the weird
> "00:53" timezone? Is this a bug or am I doing things wrong?

It is not a bug in pytz or in datetime, in that it is intended behavior, although that behavior is unfortunately obscure, bug-prone, and little-understood. If you are masochistic enough to want to understand how this bad situation came to be, and what might be done about it, you can read through PEPs 431 and 495.

Carl

signature.asc
Description: OpenPGP digital signature
-- 
https://mail.python.org/mailman/listinfo/python-list
Re: recursive methods require implementing a stack?
On 04/06/2016 03:08 PM, Random832 wrote: > On Wed, Apr 6, 2016, at 16:21, Charles T. Smith wrote: >> I just tried to write a recursive method in python - am I right that >> local >> variables are only lexically local scoped, so sub-instances have the same >> ones? Is there a way out of that? Do I have to push and pop my own >> simulated >> stack frame entry? > > No, and I'm not sure why you would think that. Sounds like a confusion that might arise due to using a mutable default arg? Or generally passing a mutable arg and not understanding Python's calling semantics? Carl signature.asc Description: OpenPGP digital signature -- https://mail.python.org/mailman/listinfo/python-list
Re: Missing something about timezones
Hi Skip,

On 03/14/2016 09:32 AM, Skip Montanaro wrote:
> On Mon, Mar 14, 2016 at 10:26 AM, Ian Kelly wrote:
>> Why should it? You only asked pytz for the Chicago timezone. You
>> didn't ask for it relative to any specific time.
>
> Thanks. I thought using America/Chicago was supposed to automagically
> take into account transitions into and out of Daylight Savings. Is
> there some way to get that?

Yes, pytz can handle DST correctly automatically when you give it 'America/Chicago', but until you apply that timezone to a particular datetime, there is no DST to handle. There is no implicit assumption of "today" when you do `pytz.timezone('America/Chicago')`.

If you apply the timezone to a particular datetime, you'll see that it does reflect DST correctly:

>>> import datetime, pytz
>>> tz = pytz.timezone('America/Chicago')
>>> tz
<DstTzInfo 'America/Chicago' LMT-1 day, 18:09:00 STD>
>>> dt = datetime.datetime.now()
>>> dtl = tz.localize(dt)
>>> dtl
datetime.datetime(2016, 3, 14, 10, 11, 13, 514375, tzinfo=<DstTzInfo 'America/Chicago' CDT-1 day, 19:00:00 DST>)

Carl

signature.asc
Description: OpenPGP digital signature
-- 
https://mail.python.org/mailman/listinfo/python-list
Re: Continuing indentation
On 03/02/2016 04:54 PM, Chris Angelico wrote: > On Thu, Mar 3, 2016 at 10:46 AM, wrote: >> On Wednesday, March 2, 2016 at 3:44:07 PM UTC-5, Skip Montanaro wrote: >>> >>> if (some_condition and >>> some_other_condition and >>> some_final_condition): >>> play_bingo() >> >> How about: >> >> continue_playing = ( >> some_condition and >> some_other_condition and >> some_final_condition >> ) >> >> if continue_playing: >> play_bingo() >> >> or: >> >> play_conditions = [ >> some_condition, >> some_other_condition, >> some_final_condition, >> ] >> >> if all(play_conditions): >> play_bingo() > > Those feel like warping your code around the letter of the law, > without really improving anything. Not at all! Taking a series of boolean-joined conditions and giving the combined condition a single name is often a major improvement in readability. Not primarily for code-layout reasons, but because it forces you to name the concept (e.g. "continue_playing" here.) I often find that the best answer to "how do I wrap this long line?" is "don't, instead extract a piece of it and give that its own name on its own line(s)." The extracted piece might be a new variable or even a new function. The pressure to do this type of refactor more frequently is one reason I continue to prefer relatively short (80 char) line length limits. This is closely related to the XP guideline "when you're tempted to add a comment, instead extract that bit of code into a function or variable and give it a name that clarifies the same thing the comment would have." Names are important! Carl signature.asc Description: OpenPGP digital signature -- https://mail.python.org/mailman/listinfo/python-list
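A minimal sketch of the refactor described above, with invented stand-in conditions (the originals aren't shown in the thread):

```python
# Hypothetical booleans standing in for the quoted example's conditions.
have_card = True
have_marker = True
room_is_open = False

# Name the combined condition instead of line-wrapping a long `if`:
ready_to_play = have_card and have_marker and room_is_open

if ready_to_play:
    print("play bingo")
else:
    print("sit out this round")
```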
Re: Importing two modules of same name
Hi Tim, On 02/09/2016 04:23 PM, Tim Johnson wrote: > Before proceding, let me state that this is to satisfy my > curiousity, not to solve any problem I am having. > > Scenario : > Web application developed at /some/dir/sites/flask/ > > If I have a package - let us call it app and in my > /some/dir/sites/flask/app/__init__.py is the following: > > from config import config > > imports the config dictionary from /some/dir/sites/flask/config.py > > (the real-case scenario is M. Grinberg's tutorial on Flask). > > What if I wanted to add a module in the app package and call it from > __init__.py > > That entails having two modules name config > one at /some/dir/sites/flask/config.py > and the other at /some/dir/sites/flask/app/config.py > > What would be the proper way to do this? (If proper at all :)) I > realize that it may not be best practices. And is a practice that I > avoided in the past. The proper way to do this in Python 2.7 is to place `from __future__ import absolute_import` at the top of flask/app/__init__.py (maybe best at the top of every Python file in your project, to keep the behavior consistent). Once you have that future-import, `import config` will always import the top-level config.py. To import the "local" config.py, you'd either `from . import config` or `import app.config`. Python 3 behaves this way without the need for a future-import. If you omit the future-import in Python 2.7, `import config` will import the neighboring app/config.py by default, and there is no way to import the top-level config.py. Carl signature.asc Description: OpenPGP digital signature -- https://mail.python.org/mailman/listinfo/python-list
Turtle
Dear sir/madam, I am currently using Python 3.5.0 and I have been trying to write a program using turtle, but it does not seem to be working. I have followed all the tutorials on the web, and when I compare them with my code I am doing everything the same way, but it still doesn't seem to work. I tried repairing Python, but still no difference. Please help me. Thanks in advance -- https://mail.python.org/mailman/listinfo/python-list
Re: does the order in which the modules are placed in a file matters ?
Hi Ganesh, On 12/16/2015 09:09 AM, Ganesh Pal wrote: > Iam on python 2.7 and linux .I need to know if we need to place the > modules in a particular or it doesn't matter at all > > order while writing the program As you've probably already noticed, it usually doesn't matter to Python (though it can become relevant in certain unusual circular-import cases). Most people will have some opinion about what constitutes good style, though. Opinions tend to look something like these (though details will vary): 1. All imports at the top of the module. 2. Standard library imports, then third-party imports, then local imports. 3. Sometimes the above types of imports are grouped with intervening blank lines. 4. Sometimes imports are alphabetized within those groups. > For Example > > import os > import shlex > import subprocess > import time > import sys > import logging > import plaftform.cluster > from util import run > > > def main(): > """ ---MAIN--- """ > > if __name__ == '__main__': > main() > > In the above example : > > 1. Iam guessing may be the python modules like os , shlex etc come > first and later the user defined modules like import > plaftform.cluster .etc come latter > > Sorry if my question sounds dump , I was running pep8 and don't see > its bothered much about it AFAIK the pep8 module doesn't care about import order. If you'd like to enforce an import order in your project, you can look at isort. [1] Carl [1] https://pypi.python.org/pypi/isort signature.asc Description: OpenPGP digital signature -- https://mail.python.org/mailman/listinfo/python-list
Re: Python 3 virtualenvs
On 11/30/2015 10:20 AM, Laura Creighton wrote: > In a message of Mon, 30 Nov 2015 09:32:27 -0700, Carl Meyer writes: >>> I think it is only meant to be used by people who want to install >>> packages but not site-wide, but I am not sure about that. >> >> I don't know what you mean by this either. Isn't the ability to "install >> packages but not site-wide" precisely what virtualenv (and venv) give you? > > I rarely use it for that. What I nearly always want is different > python interpreters. CPython, PyPy, Jython for anything from 2.6 to > 3.6. If you just want the variety of interpreters, virtualenv doesn't give you that -- you have to already have a given interpreter installed system-wide for virtualenv to be able to use it. What virtualenv gives you is isolated environments for package installations (which can use any interpreter you have installed). Venv does the same (and won't have any trouble with PyPy or Jython either, once they reach Python 3.3 compatibility). So I agree that for now you should be sticking with virtualenv (I use it too), but I hope you'll take another look at venv a few years down the road, if you find yourself in a situation where all the interpreters you need are 3.3+. (Or maybe virtualenv will make the transition sooner, and you'll start using venv under the hood for 3.3+ without even realizing it.) Carl signature.asc Description: OpenPGP digital signature -- https://mail.python.org/mailman/listinfo/python-list
Re: Python 3 virtualenvs
Hi Laura, On 11/29/2015 07:12 AM, Laura Creighton wrote: > pyenv is going away. python -m venv is the preferred way to get a venv > > https://bugs.python.org/issue25154 > > Of course if you try it, you may get: > > Error: Command '['/bin/python3.4', '-Im', 'ensurepip', > '--upgrade', '--default-pip']' returned non-zero exit status 1 > > which turns out to mean: > > Your Python isn't configured with ensure-pip! AFAIK "isn't configured with ensurepip" is a thing which is only done to Python by some downstream distributors (e.g. Linux packagers). If they remove ensurepip, it should be their responsibility to also fix the venv module accordingly (at least enough to provide a useful error message). So I believe that's a bug that should be filed against whoever is distributing the Python you're using. > Right now, I personally don't know why there is a venv at all. Because virtualenv is an ugly hack which is difficult to maintain (I should know, I used to maintain it). In order to work at all, virtualenv maintains its own patched copy of several stdlib modules (based originally on Python 2.4(?) and modified since then) and uses them instead of the version distributed with whatever version of Python you are using. Some other stdlib modules it monkeypatches. As you might expect, this regularly causes problems when new Python versions are released, requiring even more hacks piled atop the previous ones. It's a real testament to the dedication of the current virtualenv maintainers (thank you, PyPA!) that it even works at all. The built-in venv module in Python 3.3+ fixes that by building a minimal level of support for virtual environments directly into the Python interpreter and stdlib, removing the need for (monkey)patching. Of course that interpreter support isn't available in Pythons prior to 3.3, which is why virtualenv remains much more popular today. 
(Also, the first version of venv in Python 3.3 didn't automatically install pip into the envs, which made them less useful, because at that point pip was a purely third-party project. That's been fixed with ensurepip in Python 3.4+). I very much hope that some day in the future, when all new Python projects are in Python 3, almost everyone will use venv, and virtualenv will be relevant only to those maintaining legacy Python 2 projects. There has been some work towards writing a transitional version of virtualenv that looks the same to users, but (under the hood) uses the old code only for 2.x and uses venv for Python 3. > Despite > the similarity of names, it doesn't seem to be about doing what virtualenv > does. I don't know what you mean by this. Venv is intended to do _exactly_ what virtualenv does, only better. Unless by "what virtualenv does" you mean "also support Python 2." > I think it is only meant to be used by people who want to install > packages but not site-wide, but I am not sure about that. I don't know what you mean by this either. Isn't the ability to "install packages but not site-wide" precisely what virtualenv (and venv) give you? > I don't think > there are any plans to give venv the functionality of virtualenv, What functionality do you mean? > so > presumably there are people who like it just fine the way it is now. > They must have very different needs than I do. I don't know, since you never said what it is about venv that doesn't meet your needs :-) Carl signature.asc Description: OpenPGP digital signature -- https://mail.python.org/mailman/listinfo/python-list
Re: List comprehension with if-else
Hi Larry, On 10/28/2015 10:25 AM, Larry Martell wrote: > I'm trying to do a list comprehension with an if and that requires an > else, but in the else case I do not want anything added to the list. > > For example, if I do this: > > white_list = [l.control_hub.serial_number if l.wblist == > wblist_enum['WHITE'] else None for l in wblist] > > I end up with None in my list for the else cases. Is there a way I can > do this so for the else cases nothing is added to the list? You're not really using the if clause of the list comprehension here, you're just using a ternary if-else in the result expression. List comprehension if clauses go at the end, and don't require an else: [l.foo for l in wblist if l.bar == "baz"] Carl signature.asc Description: OpenPGP digital signature -- https://mail.python.org/mailman/listinfo/python-list
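For example, with made-up stand-in data (the real wblist objects aren't shown in the post):

```python
# Stand-in records: (serial_number, list_kind); invented for illustration.
records = [("SN1", "WHITE"), ("SN2", "BLACK"), ("SN3", "WHITE")]

# A ternary in the result expression maps non-matches to None --
# the problem described above:
with_nones = [sn if kind == "WHITE" else None for sn, kind in records]
print(with_nones)   # ['SN1', None, 'SN3']

# A trailing `if` clause filters instead: nothing at all is added
# to the list for non-matching elements, and no `else` is needed.
white_list = [sn for sn, kind in records if kind == "WHITE"]
print(white_list)   # ['SN1', 'SN3']
```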
Re: Understanding WSGI together with Apache
Hi Johannes, On 10/10/2015 08:24 AM, Johannes Bauer wrote: > I'm running an Apache 2.4 webserver using mod_wsgi 4.3.0. There are two > different applications running in there running on two completely > separate vhosts. > > I'm seeing some weird crosstalk between them which I do not understand. > In particular, crosstalk concerning the locales of the two. One > application needs to output, e.g., date information using a German > locale. It uses locale.setlocale to set its LC_ALL to de_DE.UTF-8. > > Now the second application doesn't need nor want to be German. It wants > to see the C locale everywhere, in particular because at some point it > uses datetime.datetime.strptime() to parse a datetime. > > Here's where things get weird: Sometimes, my "C" locale process throws > exceptions, because it's unable to parse a date. When looking why this > fails, the string looks like de_DE's "Sa, 10 Okt 2015" instead of C's > "Sat, 10 Oct 2015". This seems to happen depending on which worker > thread is currently serving the request, i.e. nondeterministically. > > So all in all, this is very weird and I must admit that I don't seem to > fully understand how WSGI applications are run and served within a > mod_wsgi framework altogether. In the past it all "just worked" and I > didn't need to understand it all in-depth. But I think to be able to > debug such a weird issue, in-depth knowledge of what happens under the > hood would be helpful. > > So if someone could shed some light on how it works in general or what > could cause the described issue in particular, I'd really be grateful. 
It's been a number of years since I used mod_wsgi (I prefer gunicorn or uwsgi, in part because I find their process model so much easier to understand), but as best I understand (hopefully if I get anything wrong, someone who knows better can correct me) mod_wsgi uses a little-known CPython feature called "sub-interpreters", meaning that even multiple mod_wsgi sites on the same server can run in sub-interpreters of the same Python process, and certain global state (e.g. os.environ, apparently also the localization context) is shared between those sub-interpreters, which can cause "crosstalk" issues like you're seeing.

I'm not sure, but I think _maybe_ using WSGIDaemonProcess [1] and putting your sites in different WSGIProcessGroup [2] groups might help?

Carl

[1] https://code.google.com/p/modwsgi/wiki/ConfigurationDirectives#WSGIDaemonProcess
[2] https://code.google.com/p/modwsgi/wiki/ConfigurationDirectives#WSGIProcessGroup

signature.asc
Description: OpenPGP digital signature
-- 
https://mail.python.org/mailman/listinfo/python-list
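The crosstalk mechanism can be seen in miniature with the stdlib locale module: locale state is per-process, not per-thread, so one application's setlocale() is visible to code serving a completely different request in the same process. A small sketch using only the always-available "C" locale (the worker thread stands in for another app's request handler):

```python
import locale
import threading

# setlocale() affects the entire process, not just the calling thread.
locale.setlocale(locale.LC_ALL, "C")

seen = []

def handler():
    # Query form of setlocale (no second argument): returns the current
    # locale without changing it -- the worker observes the main
    # thread's setting.
    seen.append(locale.setlocale(locale.LC_ALL))

t = threading.Thread(target=handler)
t.start()
t.join()
print(seen[0])   # C
```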
Re: True == 1 weirdness
On 09/16/2015 02:29 PM, Mark Lawrence wrote: > On 16/09/2015 18:53, Sven R. Kunze wrote: >> On 16.09.2015 19:39, Steven D'Aprano wrote: >>> node = left <= ptr => right >> >> Wow. I have absolutely no idea what this is supposed to mean. Do you >> care to elaborate? >> >> >> Best, >> Sven > > Simple, straight forward easy to read bit of Python, where is the > problem? node is bound to the boolean ptr is greater than or equal to > left and right. Except it's a SyntaxError because Python has no => operator. Carl signature.asc Description: OpenPGP digital signature -- https://mail.python.org/mailman/listinfo/python-list
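This is easy to check by compiling the quoted line:

```python
# There is no => operator in Python; compiling the quoted line fails.
try:
    compile("node = left <= ptr => right", "<quoted>", "exec")
    outcome = "compiled"
except SyntaxError:
    outcome = "SyntaxError"
print(outcome)   # SyntaxError
```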
Re: [Datetime-SIG] Are there any "correct" implementations of tzinfo?
Can we please stop cross-posting this thread to python-list and move it to datetime-sig only? I think anyone here on python-list who is sufficiently interested in it can subscribe to datetime-sig. Or the other way around, whatever. I'd just like to stop getting all the messages twice. Carl signature.asc Description: OpenPGP digital signature -- https://mail.python.org/mailman/listinfo/python-list
Re: Are there any "correct" implementations of tzinfo?
On 09/12/2015 12:23 PM, Random832 wrote: > I was trying to find out how arithmetic on aware datetimes is "supposed > to" work, and tested with pytz. When I posted asking why it behaves this > way I was told that pytz doesn't behave correctly according to the way > the API was designed. The tzlocal module, on the other hand, appears to > simply defer to pytz on Unix systems. > > My question is, _are_ there any correct reference implementations that > demonstrate the proper behavior in the presence of a timezone that has > daylight saving time transitions? Well, the problem is that because datetime doesn't include any way to disambiguate ambiguous times, it's not really possible to implement complex timezones in a way that is both correct (if your definition of correct includes "timezone conversions are lossless") and also matches the intended model of datetime. I believe that dateutil.tz has a tzinfo implementation (though I haven't used it myself) which is zoneinfo-based and matches the intended model of datetime (in that "Eastern" is always the same tzinfo object, and all operations within "Eastern" are always done on a local-clock-time basis). But in order to do this it has to sacrifice round-trippable conversions during a DST fold (because it has no way to disambiguate between the first and second 1:30am in local time during a DST fold). Pytz makes the other choice, making all operations consistent and loss-less by using only fixed-offset tzinfo instances. The cost of this choice is the need to "normalize" after arithmetic, because you may end up with e.g. an EDT datetime during a timeframe when DST is not in effect and it should be EST instead. PEP 495 is intended to solve the "no way to disambiguate ambiguous local times other than using fixed-offset tzinfos" problem, which would make it possible to implement tzinfo classes following the dateutil model while still having loss-less conversions. 
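The pytz "fixed-offset plus normalize" model described above can be sketched concretely (requires the third-party pytz package; the zone and dates are chosen arbitrarily around the 2015 US spring-forward transition):

```python
import datetime

import pytz  # third-party; the behavior shown is specific to pytz's model

tz = pytz.timezone("US/Eastern")
# 1:30 AM EST, just before the 2015-03-08 spring-forward transition:
dt = tz.localize(datetime.datetime(2015, 3, 8, 1, 30))

# datetime arithmetic operates on the local clock and keeps the
# fixed-offset EST tzinfo, producing a wall time that never existed:
later = dt + datetime.timedelta(hours=1)
print(later)                # 2015-03-08 02:30:00-05:00

# normalize() swaps in the correct fixed-offset "version" of the zone,
# bumping the result into EDT:
print(tz.normalize(later))  # 2015-03-08 03:30:00-04:00
```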
Carl signature.asc Description: OpenPGP digital signature -- https://mail.python.org/mailman/listinfo/python-list
Re: Searching for a usable X509 implementation
Hi Dennis, On 07/03/2015 06:11 PM, Dennis Jacobfeuerborn wrote: > Hi, I'm trying to implement certificate functionality in a python app > but after fighting with pyOpenSSL and M2Crypto I'm thinking about > writing wrapper functions for the OpenSSL command line tool instead > or switching the app to another language all together. My X.509 needs have never been more than basic, but PyOpenSSL has always had what I need. > Apparently PyOpenSSL has no way to save a public key to a file which > is baffling. M2Crypto has that ability but apparently no usable way > to verify a certificate? Is dump_certificate what you need? See https://pyopenssl.readthedocs.org/en/latest/api/crypto.html#OpenSSL.crypto.dump_certificate or this example for detailed usage: https://github.com/msabramo/pyOpenSSL/blob/master/examples/mk_simple_certs.py > Is there really no usable module out there to enable straightforward > certificate handling? I'm not aware of anything better than PyOpenSSL. Carl signature.asc Description: OpenPGP digital signature -- https://mail.python.org/mailman/listinfo/python-list
Re: Using a particular python binary with venv
On 06/01/2015 04:07 PM, greenbay.gra...@gmail.com wrote: > On Tuesday, 2 June 2015 09:43:37 UTC+12, Carl Meyer wrote: >> On 06/01/2015 03:33 PM, orotau wrote: >>> According to this >>> https://docs.python.org/3.4/library/venv.html#module-venv 'Each >>> virtual environment has its own Python binary (allowing creation of >>> environments with various Python versions)' >>> >>> So how would I create a virtual environment using the venv module >>> that has a Python 2.7 binary? >> >> You can't. The venv module only exists in and supports Python 3.3+. >> >> If you need to support earlier Django versions, you'll need to use the >> older "virtualenv" library instead, which looks very similar from the >> end-user side. See https://virtualenv.pypa.io/en/latest/ > > Thanks Carl > I am guessing that the documentation needs to be altered then. I will submit > a documentation bug... I think the documentation is accurate as written (you _can_ create environments with various Python versions, they just have to be versions that include and support the venv module), but a clarification on the supported Python versions in that sentence seems reasonable to me. Carl signature.asc Description: OpenPGP digital signature -- https://mail.python.org/mailman/listinfo/python-list
Re: Using a particular python binary with venv
On 06/01/2015 03:33 PM, greenbay.gra...@gmail.com wrote: > According to this > https://docs.python.org/3.4/library/venv.html#module-venv 'Each > virtual environment has its own Python binary (allowing creation of > environments with various Python versions)' > > So how would I create a virtual environment using the venv module > that has a Python 2.7 binary? You can't. The venv module only exists in and supports Python 3.3+. If you need to support earlier Django versions, you'll need to use the older "virtualenv" library instead, which looks very similar from the end-user side. See https://virtualenv.pypa.io/en/latest/ Carl signature.asc Description: OpenPGP digital signature -- https://mail.python.org/mailman/listinfo/python-list
Re: mixing set and list operations
Hi Tim,

On 04/30/2015 10:07 AM, Tim wrote:
> I noticed this today, using Python 2.7 or 3.4, and wondered if it is
> implementation dependent:
>
> You can use 'extend' to add set elements to a list and use 'update' to add
> list elements to a set.
>
>>>> m = ['one', 'two']
>>>> p = set(['three', 'four'])
>>>> m.extend(p)
>>>> m
> ['one', 'two', 'four', 'three']
>
>>>> m = ['one', 'two']
>>>> p = set(['three', 'four'])
>>>> p.update(m)
>>>> p
> set(['four', 'three', 'two', 'one'])
>
> Useful if you don't care about ordering. Not sure if it's dangerous.

I don't think this is surprising, nor implementation dependent, nor dangerous. Lists have an `extend()` method, sets have an `update()` method. Both of these methods take any iterable as input; they don't needlessly constrain the input to be of the same type as the base object. That's the Pythonic way to do it; I'd be surprised if it didn't work.

Carl

signature.asc
Description: OpenPGP digital signature
-- 
https://mail.python.org/mailman/listinfo/python-list
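Both methods accept any iterable, not just the other container type; for instance, a generator expression works just as well:

```python
m = ["one", "two"]
# list.extend() consumes any iterable -- here a generator expression:
m.extend(s.upper() for s in ("three", "four"))
print(m)   # ['one', 'two', 'THREE', 'FOUR']

p = {"three", "four"}
# set.update() also takes any iterable; duplicates are simply ignored:
p.update(["three", "five"])
print(sorted(p))   # ['five', 'four', 'three']
```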
Re: Python 3 lack of support for fcgi/wsgi.
On 03/29/2015 09:30 PM, Michael Torrie wrote: > What does this have to do with Python itself? I'm not completely sure, > but maybe it's about the Python community. What's the way forward? I > have no idea. At the very least John is frustrated by the community's > lack of apparent interest in fixing problems in the greater python > ecosystem when it comes to Python 3. I think one could easily draw far too broad a conclusion from John's report here. The title of the thread says "lack of support for fcgi/wsgi", but AFAICT the content of the report, and the thread, is entirely about FCGI. In my experience, WSGI under Python 3 works very well these days, and all of the popular WSGI servers (gunicorn, mod_wsgi, uwsgi, waitress, ...) run just fine under Python 3. I've deployed several Django applications into production on Python 3 (using WSGI) with no issues. FastCGI is a different story. I do some Django support on #django and on django-users, and I see very few people deploying with FastCGI anymore; almost everyone uses WSGI (and when we see someone using FastCGI, we encourage them to switch to WSGI). In fact, the FastCGI support in Django itself is deprecated and will be removed in Django 1.9. So I am not at all surprised to hear that the Python FastCGI libraries are relatively poorly maintained. And it is true and unsurprising that when a particular library is no longer maintained, it will probably be in better shape on Python 2 than on Python 3, because Python 2 is older. So when it comes to "the community's interest in fixing problems" or John's assertion that "nobody uses this stuff," in both cases I think it's far more about FastCGI vs WSGI than it's about Python 2 vs 3. Carl signature.asc Description: OpenPGP digital signature -- https://mail.python.org/mailman/listinfo/python-list
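For reference, the WSGI side really is small and works identically across Python versions; here is a minimal application plus a server-free smoke test using the stdlib's wsgiref helpers (all names invented for illustration):

```python
from wsgiref.util import setup_testing_defaults

def app(environ, start_response):
    """A minimal WSGI application."""
    body = b"Hello, WSGI"
    start_response("200 OK", [("Content-Type", "text/plain"),
                              ("Content-Length", str(len(body)))])
    return [body]

# Exercise the app without any server, using wsgiref's testing helper
# to build a plausible WSGI environ dict:
environ = {}
setup_testing_defaults(environ)
captured = {}

def start_response(status, headers):
    captured["status"] = status

body = b"".join(app(environ, start_response))
print(captured["status"], body.decode())   # 200 OK Hello, WSGI
```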
Re: Daylight savings time question
Hi Dan,

On 03/24/2015 04:24 PM, Dan Stromberg wrote:
> Is there a way of "adding" 4 hours and getting a jump of 5 hours on
> March 8th, 2015 (due to Daylight Savings Time), without hardcoding
> when to spring forward and when to fall back? I'd love it if there's
> some library that'll do this for me.
>
> #!/usr/bin/python
>
> import pytz
> import datetime
>
> def main():
>     # On 2015-03-08, 2:00 AM to 2:59AM Pacific time does not exist -
>     # the clock jumps forward an hour.
>     weird_naive_datetime = datetime.datetime(2015, 3, 8, 1, 0, 0).replace(tzinfo=pytz.timezone('US/Pacific'))
>     weird_tz_aware_datetime = weird_naive_datetime.replace(tzinfo=pytz.timezone('US/Pacific'))
>     print(weird_tz_aware_datetime)
>     four_hours = datetime.timedelta(hours=4)
>     print('Four hours later is:')
>     print(weird_tz_aware_datetime + four_hours)
>     print('...but I want numerically 5 hours later, because of Daylight Savings Time')
>
> main()

Much like the best advice for handling character encodings is "do all your internal work in unicode, and decode/encode at your input/output boundaries", the best advice for working with datetimes is "do all your internal work in UTC; convert to UTC from the input timezone and localize to the output timezone at your input/output boundaries." UTC has no daylight savings time, so these issues disappear when you do your calculations in UTC, and then the resulting dates are appropriately handled by pytz when converting to the local timezone at output.
So in this case, the following code gives the correct answer:

naive_dt = datetime.datetime(2015, 3, 8, 1)
tz = pytz.timezone('US/Pacific')
local_dt = tz.localize(naive_dt)
utc_dt = pytz.utc.normalize(local_dt)
four_hours = datetime.timedelta(hours=4)
new_utc_dt = utc_dt + four_hours
new_local_dt = tz.normalize(new_utc_dt)

Someone may point out that you can actually just use pytz.normalize() to solve this particular problem more directly, without the conversion to UTC and back:

naive_dt = datetime.datetime(2015, 3, 8, 1)
tz = pytz.timezone('US/Pacific')
local_dt = tz.localize(naive_dt)
four_hours = datetime.timedelta(hours=4)
new_local_dt = tz.normalize(local_dt + four_hours)

(On the last line here, tz.normalize() is able to see that it's been given a datetime which claims to be PST, but is after the spring transition so should actually be PDT, and fixes it for you, correctly bumping it by an hour in the process.)

While it's true that for this specific case the non-UTC method is more direct, if you're writing any kind of sizable system that needs to handle timezones correctly, you'll still be doing yourself a favor by handling all datetimes in UTC internally.

Also, unless you really know what you're doing, you should generally use a_pytz_timezone_obj.localize(naive_dt) to turn a naive datetime into a timezone-aware one, instead of naive_dt.replace(tzinfo=a_pytz_timezone_obj). The latter just blindly uses the exact timezone object you give it, without regard for when the datetime actually is (thus fails to respect whether the timezone should be in DST or not, or any other historical local-time transitions), whereas the `localize` method ensures that the resulting aware datetime actually has the correct "version" of the timezone applied to it, given when it is.
You can easily observe this difference, because the "default" version of a timezone in pytz is the first one listed in the timezone database, which for many timezones is LMT (local mean time), a historical timezone offset abandoned in the late 1800s in most places, which is often offset from modern timezones by odd amounts like seven or eight minutes. For example: >>> import pytz, datetime >>> dt = datetime.datetime(2015, 3, 8, 1) >>> tz = pytz.timezone('US/Pacific') >>> tz <DstTzInfo 'US/Pacific' LMT-1 day, 16:07:00 STD> >>> bad = dt.replace(tzinfo=tz) >>> bad datetime.datetime(2015, 3, 8, 1, 0, tzinfo=<DstTzInfo 'US/Pacific' LMT-1 day, 16:07:00 STD>) >>> in_utc = pytz.utc.normalize(bad) >>> in_utc datetime.datetime(2015, 3, 8, 8, 53, tzinfo=<UTC>) Note that the timezone assigned to the `bad` datetime is US/Pacific LMT, which hasn't been in use since Nov 1883 [1] and which is 7 minutes offset from modern timezones, resulting in a very surprising result when you then convert that to UTC. In contrast, localize() does the right thing, using PST instead of LMT because it knows that's the correct offset for US/Pacific at 1am on March 8, 2015: >>> good = tz.localize(dt) >>> good datetime.datetime(2015, 3, 8, 1, 0, tzinfo=<DstTzInfo 'US/Pacific' PST-1 day, 16:00:00 STD>) >>> in_utc = pytz.utc.normalize(good) >>> in_utc datetime.datetime(2015, 3, 8, 9, 0, tzinfo=<UTC>) Carl [1] https://github.com/eggert/tz/blob/master/northamerica#L409 signature.asc Description: OpenPGP digital signature -- https://mail.python.org/mailman/listinfo/python-list
Re: Daylight savings time question
On 03/24/2015 04:56 PM, Chris Angelico wrote: > On Wed, Mar 25, 2015 at 9:24 AM, Dan Stromberg wrote: >> Is there a way of "adding" 4 hours and getting a jump of 5 hours on >> March 8th, 2015 (due to Daylight Savings Time), without hardcoding >> when to spring forward and when to fall back? I'd love it if there's >> some library that'll do this for me. > > Fundamentally, this requires knowledge of timezone data. That means > you have to select a political time zone, which basically means you > want the Olsen database (tzdata) Yes, which is made available in Python via the pytz package. > which primarily works with city > names. I'm not sure whether "US/Pacific" is suitable; I usually use > "America/Los_Angeles" for Pacific US time. US/Pacific is an alias for America/Los_Angeles, and is also part of the Olson database (though I guess it's considered an "old" name for the timezone): https://github.com/eggert/tz/blob/master/backward Carl signature.asc Description: OpenPGP digital signature -- https://mail.python.org/mailman/listinfo/python-list
Re: Python 3.4.1 on W2K?
Tim G.: > Of course, if you're happy to work with a slightly older > version of Python, such as 3.2, then you should be fine. Well, I just installed 3.2.5 in W2K and all of my "stuff" seems to work. I'm a happy camper. Many thanks for the information and link! ChrisA: > Wow. I wonder, since you're already poking around with > extremely legacy stuff, would it be easier for you to use OS/2 > instead of Win2K? Paul Smedley still produces OS/2 builds of > Python, and OS/2 itself runs happily under VirtualBox (we have > an OS/2 VM still on our network here, and I use Python to > manage its backups). Might not end up any better than your > current system, but it might be! That's actually an interesting idea. OS/2 was our OS of choice in the '90s. XyWrite ran beautifully under it, and when we needed extensions to the XyWrite Programming Language (XPL) we used Rexx ("RexXPL"). Since last year I've been using Python instead ("XPyL"). The fact is, though, most XyWriters are running XyWrite under Windows, except for the few running it under Linux. Much of our script development since then has focused on integrating Xy with Windows, so to revert to OS/2 would be swimming against the tide. But I may just try it anyway! Thanks, guys. Pal A. -- https://mail.python.org/mailman/listinfo/python-list
Re: daemon thread cleanup approach
On Thursday, May 29, 2014 1:15:35 AM UTC-7, Chris Angelico wrote: > On Thu, May 29, 2014 at 11:20 AM, Carl Banks wrote: > > > Most threads have cleanup work to do (such as deleting temporary > > directories and killing spawned processes). > > > > For better or worse, one of the requirements is that the library can't > > cause the program to hang no matter what... > > This may be a fundamental problem. I don't know how Windows goes with > killing processes (can that ever hang?), but certainly you can get > unexpected delays deleting a temp dir, although it would probably > require some deliberate intervention, like putting your %temp% on a > remote drive and then bringing that server down. But believe you me, > if there is a stupid way to do something, someone WILL have done it. > (Have you ever thought what it'd be like to have your > swapfile/pagefile on a network drive? I mean, there's acres of room on > the server, why waste some of your precious local space?) > > So you may want to organize this as a separate spin-off process that > does the cleaning up. [snip rest] Thanks, that's good information. Even if the temp directories do fail to be removed before the join times out (which probably won't happen much), the situation is still no worse than when the daemon thread is just killed without any chance to clean up. And subprocesses would be a more reliable way to ensure cleanup, and might be the direction I take it in the future. Carl Banks -- https://mail.python.org/mailman/listinfo/python-list
daemon thread cleanup approach
Ok, so I have an issue with cleaning up threads upon an unexpected exit. I came up with a solution but I wanted to ask if anyone has any advice or warnings. Basically I am writing a Python library to run certain tasks. All of the calls in the library start worker threads to do the actual work, and some of the worker threads are persistent, others not. Most threads have cleanup work to do (such as deleting temporary directories and killing spawned processes). For better or worse, one of the requirements is that the library can't cause the program to hang no matter what, even if it means you have to forego cleanup in the event of an unexpected exit. Therefore all worker threads run as daemons. Nevertheless, I feel like the worker threads should at least be given a fair opportunity to clean up; all threads can be communicated with and asked to exit. One obvious solution is to ask users to put all library calls inside a with-statement that cleans up on exit, but I don't like it for various reasons. Using atexit doesn't work because it's called after the daemon threads are killed. Here's the solution I came up with: in the library's init function, it will start a non-daemon thread that simply joins the main thread, and then asks all existing worker threads to exit gracefully before timing out and leaving them to be killed. So if an exception ends the main thread, there is still a chance to clean up properly. Does anyone see a potential problem with this approach? Is it possible that this will cause the program to hang in any case? We can assume that all calls to the library will occur from the main thread, or at least from the same thread. (If that isn't the case, then the caller has taken responsibility to ensure the program doesn't hang.) This is Python 2.7, and it's only ever going to run on Windows. Thanks for any advice/warnings. Carl Banks -- https://mail.python.org/mailman/listinfo/python-list
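[A runnable sketch of the watchdog idea described above; the Worker class, its stop_event, and start_watchdog are hypothetical names, and threading.main_thread() is Python 3 (on 2.7 the library's init function would capture threading.current_thread() instead):]

```python
import threading

class Worker(threading.Thread):
    """Hypothetical worker: runs as a daemon, cleans up when asked to stop."""
    def __init__(self):
        super().__init__(daemon=True)
        self.stop_event = threading.Event()
        self.cleaned_up = False

    def run(self):
        self.stop_event.wait()   # stand-in for the real work loop
        self.cleaned_up = True   # stand-in for deleting temp dirs, killing procs

def start_watchdog(workers, main=None, timeout=5.0):
    """Non-daemon thread that joins the main thread, then asks each worker
    to exit gracefully, waiting at most `timeout` per worker."""
    main = main if main is not None else threading.main_thread()
    def watch():
        main.join()                  # returns when the main thread ends
        for w in workers:
            w.stop_event.set()       # request a graceful exit
        for w in workers:
            w.join(timeout)          # bounded wait: can never hang forever
    t = threading.Thread(target=watch)   # non-daemon, so it outlives main
    t.start()
    return t

# Demo: use a short-lived stand-in for the main thread so the effect is visible.
worker = Worker()
worker.start()
fake_main = threading.Thread(target=lambda: None)
fake_main.start()
start_watchdog([worker], main=fake_main).join()
```

Because the join uses a timeout, a worker that wedges during cleanup delays interpreter exit by at most `timeout` seconds rather than hanging the program.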
Re: [Chicago] Getting ASCII encoding where unicode wanted under Py3k
On Mon, May 13, 2013 at 10:59 AM, Jonathan Hayward wrote: That is way too much code for me to try and dig into. Remove everything not needed to demo it. Replace big strings with little strings. My guess is it should be 1-3 lines, like >>> print('123%(a)s' % {'a': u'\u0161' } ) 123š But that works, so your repro may need a few other lines, or something. It is also possible that there is a setting in your OS that has an effect. What OS? -- Carl K -- http://mail.python.org/mailman/listinfo/python-list
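[For what it's worth, one minimal repro that often explains "ASCII where unicode wanted" reports under Py3k is a bytes object leaking into str formatting; this is an assumption about the unseen code, not a diagnosis:]

```python
# str % str works fine in Python 3; the text comes through intact.
s = '123%(a)s' % {'a': '\u0161'}
assert s == '123\u0161'

# But a bytes object leaking into str formatting gives its repr, not the text:
leaked = '%s' % b'\xc5\xa1'          # b'\xc5\xa1' is UTF-8 for '\u0161'
assert leaked == "b'\\xc5\\xa1'"

# Decode at the input boundary to get the character back:
fixed = '%s' % b'\xc5\xa1'.decode('utf-8')
assert fixed == '\u0161'
```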
Re: Python education survey
On Dec 25, 5:44 pm, Rick Johnson wrote: > On Dec 19, 9:51 pm, Raymond Hettinger > wrote: > > > Do you use IDLE when teaching Python? > > If not, what is the tool of choice? > > I believe IDLE has the potential to be a very useful teaching tool and > even in its current abysmal state, I find it to be quite useful. > > > Students may not be experienced with the command-line and may be > > running Windows, Linux, or Macs. Ideally, the tool or IDE will be > > easy to install and configure (startup directory, path, associated > > with a particular version of Python etc). > > Why install an IDE when IDLE is already there? Oh, yes, IDLE SUCKS. I > know that already. But this revelation begs the question... Why has > this community allowed IDLE to rot? Why has Guido NOT started a public > discussion on the matter? > > > Though an Emacs user myself, I've been teaching with IDLE because it's > > free; it runs on multiple OSes, it has tooltips and code colorization > > and easy indent/dedent/comment/uncomment commands, it has tab > > completion; it allows easy editing at the interactive prompt; it has > > an easy run-script command (F5); it has direct access to source code > > (File OpenModule) and a class browser (Cntl+B). > > Yes, IDLE has all the basic tools anyone would need. Some people > complain about a debugger, but I never use a debugger anyway. I feel > debuggers just weaken your debugging skills. > > > On the downside, some python distros aren't built with the requisite > > Tcl/Tk support; > > And whose fault is that? > > > some distros like the Mac OS ship with a broken Tcl/Tk > > so users have to install a fix to that as well; and IDLE sometimes > > just freezes for no reason. > > And whose fault is that? > > > [IDLE] also doesn't have an easy way to > > specify the startup directory. > > Are you kidding me? That could be fixed so easily! > > > If your goal is to quickly get new users up and running in Python, > > what IDE or editor do you recommend? 
> > IDLE, of course. But NOT in its current state. > > Why would myself (or anyone) go to the trouble of downloading third > party IDEs when IDLE is just waiting there for us to use? I for one > like to use tools that have open source code. And what is a better > Python IDE than a Python IDE written in PYTHON? I ask ya? > > Also, what is the purpose of this thread Raymond? Are you (and others) > considering removing IDLE from the source distro? > > You know, many folks in this community have known for a long time how > much I love IDLE, but at the same time how much I loathe its atrocious > code base. I also know for a fact that many "movers and shakers" > within this community simultaneously use IDLE, and want to see IDLE > code improved. However, none of these fine folks have taken the time > to contact me privately so we can discuss such an evolution. Why is > that? It boggles the mind really. Do people seriously use IDLE? I thought it was just there for scratchers, like turtle. -- http://mail.python.org/mailman/listinfo/python-list
Re: Python education survey
On Dec 20, 10:58 am, Andrea Crotti wrote: > On 12/20/2011 03:51 AM, Raymond Hettinger wrote: > > Do you use IDLE when teaching Python? > > If not, what is the tool of choice? > > > Students may not be experienced with the command-line and may be > > running Windows, Linux, or Macs. Ideally, the tool or IDE will be > > easy to install and configure (startup directory, path, associated > > with a particular version of Python etc). > > > Though an Emacs user myself, I've been teaching with IDLE because it's > > free; it runs on multiple OSes, it has tooltips and code colorization > > and easy indent/dedent/comment/uncomment commands, it has tab > > completion; it allows easy editing at the interactive prompt; it has > > an easy run-script command (F5); it has direct access to source code > > (File OpenModule) and a class browser (Cntl+B). > > > On the downside, some python distros aren't built with the requisite > > Tcl/Tk support; some distros like the Mac OS ship with a broken Tcl/Tk > > so users have to install a fix to that as well; and IDLE sometimes > > just freezes for no reason. It also doesn't have an easy way to > > specify the startup directory. > > > If your goal is to quickly get new users up and running in Python, > > what IDE or editor do you recommend? > > > Raymond > > I think ipython and a good editor gives a much nicer experience > than IDLE, which I actually almost never used, and > for everything else there is python and python-mode. > > New users however can be pointed to something like PyCharm > or Eclipse+PyDev if they are more familiar with IDEs.. I agree; IPython is an excellent choice. You have a much more powerful interactive Python experience, with all the features you need from an IDE. You can use any editor (VIM) and you can also readily hack IPython to death. I think the fact that anyone with basic programming skills can substantially enhance their console is a big winner in CS education. 
It gives students something they personally value to work on; it's a place to store all their little bits of code and actually benefit from them in real life. I've never met a programmer who got familiar with IPython and then went on to stop using it. It should be included in the standard library and used as the default Python interactive environment. The last line of my .bashrc file: ipython3 -- http://mail.python.org/mailman/listinfo/python-list
Re: (don't bash me too hard) Python interpreter in JavaScript
On Tuesday, November 15, 2011 12:37:03 PM UTC-8, Passiday wrote: > Hello, > > I am looking for a way how to bring Python interpreter to JavaScript, in > order to provide a web-based application with python scripting capabilities. > The app would have basic IDE for writing and debugging the python code, but > the interpretation, of course, would be done in JavaScript. I'd like to avoid > any client-server transactions, so all the interpretation should take place > on the client side. The purpose of all this would be to create educational > platform for learning the programming in python. > > I hoped somebody already had done something like this, but I couldn't google > up anything. I've found some crazy project emulating PC in JavaScript (and > even running Linux on top of it), but not a python interpreter. > > Of course, I could take the python source and brutally recode it in > JavaScript, but that seems like awful lot of work to do. Any ideas how I > should proceed with this project? Some people have already made an LLVM-to-Javascript compiler, and have managed to build Python 2.7 with it. The LLVM-to-Javascript project is called emscripten. https://github.com/kripken/emscripten/wiki Demo of Python (and a bunch of other languages) here: http://repl.it/ Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
Re: revive a generator
On Thursday, October 20, 2011 6:23:50 AM UTC-7, Yingjie Lan wrote: > Hi, > > it seems a generator expression can be used only once: > > >>> g = (x*x for x in range(3)) > >>> for x in g: print x > 0 > 1 > 4 > >>> for x in g: print x #nothing printed > >>> > > Is there any way to revive g here? Revive is the wrong word for what you want. Once an iterator (be it a generator or some other kind of iterator) is done, it's done. What you are asking for is, given a generator, to create a new generator from the same expression/function that created the original generator. This is not reviving, but recreating. I have two objections to this: a major ideological one and a minor practical one. The practical drawback to allowing generators to be recreated is that it forces all generators to carry around a reference to the code object that created it. if random.random() > 0.5: g = (x*x for x in xrange(3)) else: g = (x+x for x in xrange(3)) for y in g: print y revive(g) # which generator expression was it? # need to carry around a reference to be able to tell for y in g: print y Carrying a reference to a code object in turn carries around any closures used in the generator expression or function, so it can potentially keep a large amount of data alive. Given that the vast majority of generators would never be recreated, this is quite wasteful. My ideological objection is that it forces the programmer to be wary of the effects of recreation. Right now, if someone writes a generator expression, they can rely on the fact that it can only be iterated through once (per time the generator expression is evaluated). But if you allow a downstream user to recreate the generator at will, then the writer will always have to be wary of adverse side-effects if the generator is iterated through twice. So, although I can see it being occasionally useful, I'm going to opine that it is more trouble than it's worth. Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
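[To make the recreate-not-revive point concrete, a small sketch: wrap the expression in a function (or lambda) and call it again whenever a fresh generator is needed.]

```python
def squares(n):
    # A generator function: each call produces a brand-new generator.
    for x in range(n):
        yield x * x

g = squares(3)
assert list(g) == [0, 1, 4]
assert list(g) == []     # exhausted; a generator can only run once

g = squares(3)           # recreate, rather than revive
assert list(g) == [0, 1, 4]

# For a bare generator expression, stash the expression in a lambda:
make = lambda: (x * x for x in range(3))
assert list(make()) == [0, 1, 4]
assert list(make()) == [0, 1, 4]
```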
Re: Language Enhancement Idea to help with multi-processing (your opinions please)
On Friday, October 14, 2011 6:23:15 PM UTC-7, alex23 wrote: > On Oct 14, 4:56 pm, Carl Banks > wrote: > > But you can see that, fully realized, syntax like that can do much more > > than can be done with library code. > > Well sure, but imaginary syntax can do _anything_. That doesn't mean > it's possible within CPython. Hey, thanks for backing me up on that sentiment. :) Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
Re: argparse zero-length switch
On Friday, October 14, 2011 12:41:26 AM UTC-7, Peter Otten wrote: > Carl Banks wrote: > > > Is it possible to specify a zero-length switch? Here's what I mean. > > > > I have a use case where some users would have to enter a section name on > > the command line almost every time, whereas other users (the ones using > > only one section) will never have to enter the section name. I don't want > > to burden users with only one "section" to always enter the section name > > as a required argument, but I also want to make it as convenient as > > possible to enter the section name for those who need to. > > > > My thought, on the thinking that practicality beats purity, was to create > > a zero-length switch using a different prefix character (say, @) to > > indicate the section name. So instead of typing this: > > > >sp subcommand -s abc foo bar > > > > they could type this: > > > >sp subcommand @abc foo bar > > > > Admittedly a small benefit. I tried the following but argparse doesn't > > seem to do what I'd hoped: > > > >p = argparse.ArgumentParser(prefix_chars='-@') > >p.add_argument('@',type=str,dest='section') > >ar = p.parse_args(['@abc']) > > > > This throws an exception claiming unrecognized arguments. > > > > Is there a way (that's not a hack) to do this? Since the current behavior > > of the above code seems to do nothing useful, it could be added to > > argparse with very low risk of backwards incompatibility. > > If the number of positional arguments is otherwise fixed you could make > section a positional argument with nargs="?" The positional arguments aren't fixed, otherwise I would have done it that way. I ended up deciding to prescan the command line for arguments starting with @, and that actually has some benefits over doing it with argparse. (One little surprise is if you pass it something like "-x @abc foo", where foo is the argument of -x.) I don't really care for or agree with Steven and Ben Finney's foolish consistency. 
I already weighed it against the benefits of consistency, and decided that this parameter was easily important enough to warrant special treatment. It's actually a good thing for this parameter to look different from other switches; it marks it as specially important. Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
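[A sketch of the prescan approach mentioned above; the helper name and the handling of the @ prefix are my own guesses at its shape, not Carl's actual code:]

```python
import argparse

def split_section(argv, prefix='@'):
    """Pull the first '@name' token out of argv before argparse sees it.
    Caveat from the post: given '-x @abc foo', the prescan removes '@abc',
    leaving 'foo' to become the argument of -x."""
    section, rest = None, []
    for arg in argv:
        if section is None and len(arg) > 1 and arg.startswith(prefix):
            section = arg[1:]
        else:
            rest.append(arg)
    return section, rest

section, rest = split_section(['subcommand', '@abc', 'foo', 'bar'])
parser = argparse.ArgumentParser()
parser.add_argument('args', nargs='*')
ns = parser.parse_args(rest)
assert section == 'abc'
assert ns.args == ['subcommand', 'foo', 'bar']
```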
Re: Language Enhancement Idea to help with multi-processing (your opinions please)
On Thursday, October 13, 2011 5:35:30 AM UTC-7, Martin P. Hellwig wrote: > What I would expect to happen that all statements within the ooo block > may be executed out > of order. The block itself waits till all statements are returned before > continuing. > > What do you think? The statement is kind of limiting as a unit of uncaring. What if you have two statements that you do want to be executed in order, but still don't care what order they are executed in relative to other sets of two statements? Better would be to have a set of blocks that the compiler is free to execute asynchronously relative to each other (I'll call it async). async: a += 1 f *= a async: b += 1 e *= b async: c += 1 d *= c There is utterly no chance of this syntax entering Python. Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
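[The async-block idea can be approximated today with library code; a sketch using concurrent.futures, where each function stands in for one proposed block (the blocks here touch disjoint variables, so the result is deterministic regardless of scheduling):]

```python
from concurrent.futures import ThreadPoolExecutor, wait

state = {'a': 1, 'b': 2, 'c': 3, 'd': 1, 'e': 1, 'f': 1}

def block1():               # statements inside one block stay ordered
    state['a'] += 1
    state['f'] *= state['a']

def block2():
    state['b'] += 1
    state['e'] *= state['b']

def block3():
    state['c'] += 1
    state['d'] *= state['c']

with ThreadPoolExecutor() as pool:
    futures = [pool.submit(blk) for blk in (block1, block2, block3)]
    wait(futures)           # the barrier: continue only after all blocks finish

assert (state['f'], state['e'], state['d']) == (2, 3, 4)
```

Of course this is only the threading interpretation of the syntax; the pipelining and cache-optimization interpretations have no library equivalent, which is the point of the post.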
Re: Language Enhancement Idea to help with multi-processing (your opinions please)
On Thursday, October 13, 2011 7:16:37 PM UTC-7, Steven D'Aprano wrote: > > What I would expect to happen is that all statements within the ooo block > > may be executed out of order. The block itself waits till all statements > > are returned before continuing. > > Why do you think this needs to be a language statement? > > You can have that functionality *right now*, without waiting for a syntax > update, by use of the multiprocessing module, or a third party module. > > http://docs.python.org/library/multiprocessing.html > http://wiki.python.org/moin/ParallelProcessing > > There's no need for forcing language changes on everyone, whether they need > it or not, for features that can easily be implemented as library code. This goes a little beyond a simple threading mechanism, though. It's more like guidance to the compiler that you don't care what order these are executed in; the compiler is then free to take advantage of this advice however it likes. That could be to spawn threads, but it could also compile instructions to optimize pipelining and caching. The compiler could also ignore it. But you can see that, fully realized, syntax like that can do much more than can be done with library code. Obviously that extra capability is a very long way off from being useful in CPython. Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
argparse zero-length switch
Is it possible to specify a zero-length switch? Here's what I mean. I have a use case where some users would have to enter a section name on the command line almost every time, whereas other users (the ones using only one section) will never have to enter the section name. I don't want to burden users with only one "section" to always enter the section name as a required argument, but I also want to make it as convenient as possible to enter the section name for those who need to. My thought, on the thinking that practicality beats purity, was to create a zero-length switch using a different prefix character (say, @) to indicate the section name. So instead of typing this: sp subcommand -s abc foo bar they could type this: sp subcommand @abc foo bar Admittedly a small benefit. I tried the following but argparse doesn't seem to do what I'd hoped: p = argparse.ArgumentParser(prefix_chars='-@') p.add_argument('@',type=str,dest='section') ar = p.parse_args(['@abc']) This throws an exception claiming unrecognized arguments. Is there a way (that's not a hack) to do this? Since the current behavior of the above code seems to do nothing useful, it could be added to argparse with very low risk of backwards incompatibility. Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
Re: Race condition deadlock in communicate when threading?
There's really not enough information for us to debug this, but one possibility is that your subprocess is using buffered I/O; you're expecting the external task to write a string, but it doesn't actually write that string because it's sitting in a buffer. First thing to try is to see if the program accepts some kind of command line argument to run in unbuffered mode (for instance, if you are calling a Python interpreter you can pass it the -u switch to force unbuffered I/O). If (like most programs) it doesn't have an option to disable buffering, you can try running it on a pty device (if you're on Unix). If worse comes to worst, see if there's a way to get the external task to print lots of extra output (a verbosity setting, for instance); that could work in a pinch until you can debug it more thoroughly. Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
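[A small sketch of the -u suggestion, using a child Python process; with -u the child's output arrives immediately instead of sitting in a stdio buffer while the parent blocks:]

```python
import subprocess
import sys

child_code = (
    "import time\n"
    "print('ready')\n"      # with -u, written to the pipe immediately
    "time.sleep(10)\n"      # stand-in for a long-running external task
)
# -u forces the child's stdout to be unbuffered; without it, when stdout is
# a pipe, 'ready' can sit in the child's block buffer while the parent
# blocks on readline() -- exactly the deadlock described above.
proc = subprocess.Popen([sys.executable, '-u', '-c', child_code],
                        stdout=subprocess.PIPE, universal_newlines=True)
line = proc.stdout.readline()   # returns promptly thanks to -u
proc.kill()
proc.wait()
assert line == 'ready\n'
```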
Re: Python Mixins
On Thursday, September 22, 2011 2:14:39 PM UTC-7, Matt wrote: > In terms of code, lets say we have the following classes: > > class Animal > class Yamlafiable > class Cat(Animal, Yamlafiable) > class Dog(Animal, Yamlafiable) > > I've got an Animal that does animal things, a Cat that does cat things > and a Dog that does dog things. I've also got a Yamlafiable class that > does something clever to generically convert an object into Yaml in > some way. Looking at these classes I can see that a Cat is an Animal, > a Dog is an Animal, a Dog is not a Cat, a Cat is not a Dog, a Dog is a > Yamlafiable? and a Cat is a Yamlafiable? Is that really true? Yes. I hope you are not confusing Cats with cats. > If my > objects are categorized correctly, in the correct inheritance > hierarchy shouldn't that make more sense? Cats and Dogs aren't > Yamlafiable, that doesn't define what they are, rather it defines > something that they can do because of things that they picked up from > their friend the Yamlafile. The whole point of OOP is that objects are defined by their behavior. A Cat is whatever it can do. A Dog is whatever it can do. If a Cat is yamlafiable, then it's correct to say that a Cat is a Yamlafiable (even if a cat isn't). Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
Re: Python Mixins
On Thursday, September 22, 2011 2:14:39 PM UTC-7, Matt wrote: [snip] > class MyMixin(object): > def one_thing(self): > return "something cool" > > @mixin(MyMixin) > class MyClass(object): > pass > > x = MyClass() > x.one_thing() == 'something cool' > x.__class__.__bases__ == (object,) > > To me, this is much more concise. By looking at this I can tell what > MyClass IS, who it's parents are and what else it can do. I'm very > interested to know if there are others who feel as dirty as I do when > using inheritance for mixins Not me. Inheritance perfectly encompasses the mixin relationship, and because inheritance is so thoroughly ingrained in Python, it makes sense not to create a completely different mechanism to share behavior just for the case of mixins. I know that, as someone who reads code, I would rather coders stick to well-known mechanisms than to create their own ad hoc mechanisms that don't actually add any new capability. Take your MyClass example above. If you had stuck to inheritance I could have seen what classes all the behavior was implemented by listing the __bases__. But since you used an ad hoc mechanism, now I have to track down where the hell that one_thing() method is coming from. No mechanism is ever perfect, and Python's MI is very far from perfect, but sticking to well-known and understood methods is usually more important than whatever little improvement you can make. (And it is little; best as I can tell, your main objection is that mixins make it harder to see what the "main" parent is. I'd say that's a dubious justification to spring a new behavior-sharing mechanism on a reader.) > or if there are other things that Python > developers are doing to mix in functionality without using inheritance > or if the general populous of the Python community disagrees with me > and thinks that this is a perfectly valid use of inheritance. I'd guess the majority just use inheritance, although I can't say I've seen enough code out there to gauge it. 
But there is something else a lot of Pythonistas will do in many cases: just define regular functions. In your example, instead of defining one_thing() as a method of a mixin, define it as a function. Personally, I find that I almost never use mixins, though I have absolutely nothing against them and I use MI and metaclasses all the time. It's just that for most things I'd use a mixin for, I find that one or two regular functions work perfectly well. Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
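[A sketch contrasting the two approaches from the post; the "YAML" conversion here is a toy stand-in, not real yaml serialization:]

```python
class YamlMixin:
    # Shared behavior via ordinary inheritance; provenance stays visible
    # in __bases__/__mro__ for anyone reading the code.
    def to_yaml(self):
        return '\n'.join('%s: %s' % kv for kv in sorted(vars(self).items()))

class Cat(YamlMixin):
    def __init__(self):
        self.name = 'Felix'

# The plain-function alternative: no class machinery at all.
def to_yaml(obj):
    return '\n'.join('%s: %s' % kv for kv in sorted(vars(obj).items()))

c = Cat()
assert c.to_yaml() == 'name: Felix'
assert to_yaml(c) == c.to_yaml()
assert YamlMixin in Cat.__mro__   # a reader can find where to_yaml lives
```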
Re: PC locks up with list operations
On Wednesday, August 31, 2011 5:49:24 AM UTC-7, Benjamin Kaplan wrote: > 32-bit or 64-bit Python? A 32-bit program will crash once memory hits > 2GB. A 64-bit program will just keep consuming RAM until your computer > starts thrashing. The problem isn't your program using more RAM than > you have, just more RAM than you have free. Last time I faced a > situation like this, I just decided it was better to stick to the > 32-bit program and let it crash if it got too big. On my 64-bit Linux system, I got a memory error in under a second, no thrashing. I have no swap. It's overrated. Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
Re: Why doesn't threading.join() return a value?
On Friday, September 2, 2011 11:53:43 AM UTC-7, Adam Skutt wrote: > On Sep 2, 2:23 pm, Alain Ketterlin > wrote: > > Sorry, you're wrong, at least for POSIX threads: > > > > void pthread_exit(void *value_ptr); > > int pthread_join(pthread_t thread, void **value_ptr); > > > > pthread_exit can pass anything, and that value will be retrieved with > > pthread_join. > > No, it can only pass a void*, which isn't much better than passing an > int. Passing a void* is not equivalent to passing anything, not even > in C. Moreover, specific values are still reserved, like > PTHREAD_CANCELED. Yes, it was strictly inappropriate for me to say > both return solely integers, but my error doesn't meaningfully alter my > description of the situation. The interface provided by the > underlying APIs is not especially usable for arbitrary data transfer. I'm sorry, but your claim is flat out wrong. It's very common in C programming to use a void* to give a programmer the ability to pass arbitrary data through some third-party code. The Python API itself uses void* in this way in several different places. For instance, take a look at the Capsule API (http://docs.python.org/c-api/capsule.html). You'll notice it uses a void* to let a user pass in opaque data. Another case is when declaring properties in C: it's common to define a single get or set function, and only vary some piece of data for the different properties. The API provides a void* so that the extension writer can pass arbitrary data to the get and set functions. Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
Re: Why doesn't threading.join() return a value?
On Friday, September 2, 2011 11:01:17 AM UTC-7, Adam Skutt wrote: > On Sep 2, 10:53 am, Roy Smith wrote: > > I have a function I want to run in a thread and return a value. It > > seems like the most obvious way to do this is to have my target > > function return the value, the Thread object stash that someplace, and > > return it as the return value for join(). > > > Yes, I know there are other ways for a thread to return values (pass the > > target a queue, for example), but making the return value of the > > target function available would have been the most convenient. I'm > > curious why threading wasn't implemented this way. > > I assume it is because the underlying operating system APIs do not > support it. Nope. This could easily be implemented by storing the return value in the Thread object. It's not done that way probably because no one thought of doing it. Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
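[A sketch of "storing the return value in the Thread object"; ValueThread is a hypothetical name, and the stdlib Thread does not do this:]

```python
import threading

class ValueThread(threading.Thread):
    """Thread whose join() hands back the target's return value."""
    def __init__(self, target, args=(), kwargs=None):
        super().__init__()
        self._call = (target, args, kwargs or {})
        self._result = None

    def run(self):
        target, args, kwargs = self._call
        self._result = target(*args, **kwargs)

    def join(self, timeout=None):
        super().join(timeout)
        return self._result   # still None if the thread hasn't finished yet

t = ValueThread(target=lambda x: x * 2, args=(21,))
t.start()
assert t.join() == 42
```

One wrinkle worth noting: join() with a timeout can return None either because the target returned None or because the thread is still running, so a real version would want a sentinel or an is_alive() check.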
Re: sqlite3 with context manager
On Friday, September 2, 2011 11:43:53 AM UTC-7, Tim Arnold wrote: > Hi, > I'm using the 'with' context manager for a sqlite3 connection: > > with sqlite3.connect(my.database,timeout=10) as conn: > conn.execute('update config_build set datetime=?,result=? > where id=?', >(datetime.datetime.now(), success, > self.b['id'])) > > my question is what happens if the update fails? Shouldn't it throw an > exception? If you look at the sqlite3 syntax documentation, you'll see it has a SQL extension that allows you to specify error semantics. It looks something like this: UPDATE OR IGNORE UPDATE OR FAIL UPDATE OR ROLLBACK I'm not sure exactly how this interacts with pysqlite3, but using one of these might help it throw exceptions when you want it to. Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
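[Also worth noting: an UPDATE whose WHERE clause matches no rows is not an error in SQL, so no exception is raised for that case; the with-block's job is transactional (commit on success, rollback on exception). A sketch demonstrating both points with an in-memory database:]

```python
import sqlite3

conn = sqlite3.connect(':memory:')
conn.execute('create table config_build (id integer primary key, result text)')
conn.execute("insert into config_build values (1, 'pending')")
conn.commit()

# An UPDATE that matches no rows succeeds silently; check rowcount yourself.
cur = conn.execute("update config_build set result = 'x' where id = 999")
assert cur.rowcount == 0
conn.rollback()

# The with-block commits on success and rolls back if the body raises:
try:
    with conn:
        conn.execute("update config_build set result = 'ok' where id = 1")
        raise RuntimeError('simulated failure after the update')
except RuntimeError:
    pass

row = conn.execute('select result from config_build where id = 1').fetchone()
assert row == ('pending',)   # the update inside the failed block was undone
```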
Re: Optparse buggy?
On Thursday, September 1, 2011 7:16:13 PM UTC-7, Roy Smith wrote: > In article , > Terry Reedy wrote: > > > Do note "The optparse module is deprecated and will not be developed > > further; development will continue with the argparse module." > > One of the unfortunate things about optparse and argparse is the names. > I can never remember which is the new one and which is the old one. It > would have been a lot simpler if the new one had been named optparse2 > (in the style of unittest2 and urllib2). It's easy: "opt"parse parses only "opt"ions (-d and the like), whereas "arg"parse parses all "arg"uments. argparse is the more recent version since it does more. optparse2 would have been a bad name for something that parses more than options. (In fact, although I have some minor philosophical disagreements with optparse's design decisions, the main reason I always recommended using argparse instead was that optparse didn't handle positional arguments. optparse has all these spiffy features with type checking and defaults, but it never occurred to the optparse developers that this stuff would be useful for positional arguments, too. They just dropped the ball there.) Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
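A quick illustration of the point: argparse applies the same type/default machinery to positional arguments that optparse reserved for options (the argument names here are invented):

```python
import argparse

parser = argparse.ArgumentParser(prog="demo")
parser.add_argument("infile")                                  # positional
parser.add_argument("count", type=int, default=1, nargs="?")   # optional positional
parser.add_argument("-v", "--verbose", action="store_true")    # a plain option

ns = parser.parse_args(["data.txt", "3", "-v"])
```

The positional `count` gets type checking and a default, exactly the "spiffy features" optparse only offered for `-x`-style options.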
Re: fun with nested loops
On Wednesday, August 31, 2011 8:51:45 AM UTC-7, Daniel wrote: > Dear All, > > I have some complicated loops of the following form > > for c in configurations: # loop 1 > while nothing_bad_happened: # loop 2 > while step1_did_not_work: # loop 3 > for substeps in step1 # loop 4a > # at this point, we may have to > -leave loop 1 > -restart loop 4 > -skip a step in loop 4 > -continue on to loop 4b > > while step2_did_not_work: # loop 4b > for substeps in step2: > # at this point, we may have to > -leave loop 1 > -restart loop 2 > -restart loop 4b > ... > ...many more loops... > > > I don't see any way to reduce these nested loops logically, they > describe pretty well what the software has to do. > This is a data acquisition application, so on ever line there is > a lot of IO that might fail or make subsequent steps useless or > require a > retry. > > Now every step could need to break out of any of the enclosing loops. I feel your pain. Every language, even Python, has cases where the trade-offs made in the language design make some legitimate task very difficult. In such cases I typically throw out the guidebook and make use of whatever shameless Perlesque thing it takes to keep things manageable. In your example it seems like you're trying to maintain some semblance of structure and good habit; I'd say it's probably no longer worth it. Just store the level to break to in a variable, and after every loop check the variable and break if you need to break further. Something like this, for example:

break_level = 99
while loop1:
    while loop2:
        while loop3:
            if some_condition:
                break_level = (1, 2, or 3)
                break
        if break_level < 3:
            break
        break_level = 99
    if break_level < 2:
        break
    break_level = 99

Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
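An alternative to the break-level variable is a custom exception carrying the target level; this is a sketch with an invented loop structure (the poster's IO steps are replaced by a toy condition):

```python
class BreakTo(Exception):
    """Signal that control should unwind out to a given loop level."""
    def __init__(self, level):
        self.level = level

def run(configurations):
    trace = []
    for c in configurations:                # loop level 1
        try:
            for step in ("a", "b"):         # loop level 2
                for sub in range(3):        # loop level 3
                    trace.append((c, step, sub))
                    if c == "bad" and sub == 1:
                        # unwind loops 2 and 3, continue with the next c
                        raise BreakTo(1)
        except BreakTo as e:
            if e.level < 1:                 # destined for an outer loop
                raise                       # keep unwinding
    return trace

trace = run(["good", "bad"])
```

Each loop catches the exception and re-raises it unless the exception names that loop's level; the "good" configuration runs all six substeps, while "bad" bails out after its second.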
Re: Why do closures do this?
On Saturday, August 27, 2011 8:45:05 PM UTC-7, John O'Hagan wrote: > Somewhat apropos of the recent "function principle" thread, I was recently > surprised by this: > > funcs=[] > for n in range(3): > def f(): > return n > funcs.append(f) > > [i() for i in funcs] > > The last expression, IMO surprisingly, is [2,2,2], not [0,1,2]. Google tells > me I'm not the only one surprised, but explains that it's because "n" in the > function "f" refers to whatever "n" is currently bound to, not what it was > bound to at definition time (if I've got that right), and that there are at > least two ways around it: > My question is, is this an inescapable consequence of using closures, or is > it by design, and if so, what are some examples of where this would be the > preferred behaviour? It is the preferred behavior for the following case:

def foo():
    def printlocals():
        print a,b,c,d
    a = 1; b = 4; c = 5; d = 0.1
    printlocals()
    a = 2
    printlocals()

When seeing a nested function, there are strong expectations by most people that it will behave this way (not to mention it's a lot more useful). It's only for the less common and much more advanced case of creating a closure in a loop that the other behavior would be preferred. Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
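For the loop case, the usual workaround is to capture the current value with a default argument, which is evaluated at definition time; a small sketch of both behaviors:

```python
funcs = []
for n in range(3):
    def f(n=n):             # bind the current value of n at definition time
        return n
    funcs.append(f)

captured = [g() for g in funcs]    # each closure kept its own n

# Without the default argument, every closure sees the final n:
late = []
for n in range(3):
    def h():
        return n
    late.append(h)
shared = [g() for g in late]       # all three see n's last value
```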
Re: Run time default arguments
On Thursday, August 25, 2011 1:54:35 PM UTC-7, ti...@thsu.org wrote: > On Aug 25, 10:35 am, Arnaud Delobelle wrote: > > You're close to the usual idiom: > > > > def doSomething(debug=None): > > if debug is None: > > debug = defaults['debug'] > > ... > > > > Note the use of 'is' rather than '==' > > HTH > > Hmm, from what you are saying, it seems like there's no elegant way to > handle run time defaults for function arguments, meaning that I should > probably write a SQL-esque coalesce function to keep my code cleaner. I > take it that most people who run into this situation do this? I don't; it seems kind of superfluous when "if arg is None: arg = whatever" is just as easy to type and more straightforward to read. I could see a function like coalesce being helpful if you have a list of several options to check, though. Also, SQL doesn't give you a lot of flexibility, so coalesce is a lot more needed there. But for simple arguments in Python, I'd recommend sticking with "if arg is None: arg = whatever". Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
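For the several-options case, a coalesce along the lines the original poster mentions is only a few lines (the `defaults` dict mirrors the quoted example; the names are illustrative):

```python
def coalesce(*values):
    """Return the first argument that is not None (None if all are)."""
    for v in values:
        if v is not None:
            return v
    return None

defaults = {"debug": False}

def do_something(debug=None):
    # falls back to the run-time default only when no value was passed
    debug = coalesce(debug, defaults["debug"])
    return debug
```

Note that `coalesce(False, True)` correctly returns False, which a naive `debug or default` would not.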
Re: Help on PyQt4 QProcess
On Friday, August 19, 2011 12:55:40 PM UTC-7, Edgar Fuentes wrote: > On Aug 19, 1:56 pm, Phil Thompson > wrote: > > On Fri, 19 Aug 2011 10:15:20 -0700 (PDT), Edgar Fuentes > > wrote: > > > Dear friends, > > > > > I need execute an external program from a gui using PyQt4, to avoid > > > that hang the main thread, i must connect the signal "finished(int)" > > > of a QProcess to work properly. > > > > > for example, why this program don't work? > > > > > from PyQt4.QtCore import QProcess > > > pro = QProcess() # create QProcess object > > > pro.connect(pro, SIGNAL('started()'), lambda > > > x="started":print(x)) # connect > > > pro.connect(pro, SIGNAL("finished(int)"), lambda > > > x="finished":print(x)) > > > pro.start('python',['hello.py']) # star hello.py program > > > (contain print("hello world!")) > > > timeout = -1 > > > pro.waitForFinished(timeout) > > > print(pro.readAllStandardOutput().data()) > > > > > output: > > > > > started > > > 0 > > > b'hello world!\n' > > > > > see that not emit the signal finished(int) > > > > Yes it is, and your lambda slot is printing "0" which is the return code > > of the process. > > > > Phil > > Ok, but the output should be: > > started > b'hello world!\n' > finished > > no?. > > thanks Phil Two issues. First of all, your slot for the finished function does not have the correct prototype, and it's accidentally not throwing an exception because of your unnecessary use of default arguments. Anyway, to fix that, try this: pro.connect(pro, SIGNAL("finished(int)"), lambda v, x="finished":print(x)) Notice that it adds an argument to the lambda (v) that accepts the int argument of the signal. If you don't have that argument there, the int argument goes into x, which is why Python prints 0 instead of "finished". Second, processes run asynchronously, and because of line-buffering, IO can output asynchronously, and so there's no guarantee in what order output occurs. 
You might try calling the python subprocess with the '-u' switch to force unbuffered IO, which might be enough to force synchronous output (depending on how signal/slot and subprocess semantics are implemented). Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
Re: Help with regular expression in python
On Friday, August 19, 2011 10:33:49 AM UTC-7, Matt Funk wrote: > number = r"\d\.\d+e\+\d+" > numbersequence = r"%s( %s){31}(.+)" % (number,number) > instance_linetype_pattern = re.compile(numbersequence) > > The results obtained are: > results: > [(' 2.199000e+01', ' : (instance: 0)\t:\tsome description')] > so this matches the last number plus the string at the end of the line, but > not > retaining the previous numbers. > > Anyway, i think at this point i will go another route. Not sure where the > issues lies at this point. I think the problem is that repeat counts don't actually repeat the groupings; they just repeat the matchings. Take this expression: r"(\w+\s*){2}" This will match exactly two words separated by whitespace. But the match result won't contain two groups; it'll only contain one group, and the value of that group will match only the very last thing repeated:

Python 2.7.1+ (r271:86832, Apr 11 2011, 18:13:53)
[GCC 4.5.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import re
>>> m = re.match(r"(\w+\s*){2}","abc def")
>>> m.group(1)
'def'

So you see, the regular expression is doing what you think it is, but the way it forms groups is not. Just a little advice (I know you've found a different method, and that's good, this is for the general reader). The functions re.findall and re.finditer could have helped here; they find all the matches in a string and let you iterate through them. (findall returns the strings matched, and finditer returns the sequence of match objects.) You could have done something like this:

row = [ float(x) for x in re.findall(r'\d+\.\d+e\+\d+',line) ]

And regexp matching is often overkill for a particular problem; this may be one of them. line.split() could have been sufficient:

row = [ float(x) for x in line.split() ]

Of course, these solutions don't account for the case where you have lines, some of which aren't 32 floating-point numbers. 
You need extra error handling for that, but you get the idea. Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
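Both points can be seen in runnable form (the sample line is abbreviated from the poster's 32-column data; `[-+]` in the pattern also admits negative exponents):

```python
import re

line = "2.199000e+01 3.141500e+00 1.000000e-02 : (instance: 0)"
# findall returns every non-overlapping match, so the repeated-group
# problem never comes up.
row = [float(x) for x in re.findall(r"\d+\.\d+e[-+]\d+", line)]

# A repeated group, by contrast, keeps only the final repetition:
m = re.match(r"(\w+\s*){2}", "abc def")
last = m.group(1)
```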
Re: thread and process
On Saturday, August 13, 2011 2:09:55 AM UTC-7, 守株待兔 wrote: > please see my code: > import os > import threading > print threading.currentThread() > print "i am parent ",os.getpid() > ret = os.fork() > print "i am here",os.getpid() > print threading.currentThread() > if ret == 0: > print threading.currentThread() > else: > os.wait() > print threading.currentThread() > > > print "i am runing,who am i? > ",os.getpid(),threading.currentThread() > > the output is: > <_MainThread(MainThread, started -1216477504)> > i am parent 13495 > i am here 13495 > <_MainThread(MainThread, started -1216477504)> > i am here 13496 > <_MainThread(MainThread, started -1216477504)> > <_MainThread(MainThread, started -1216477504)> > i am runing,who am i? 13496 <_MainThread(MainThread, started > -1216477504)> > <_MainThread(MainThread, started -1216477504)> > i am runing,who am i? 13495 <_MainThread(MainThread, started > -1216477504)> > it is so strange that two different processes use one mainthread!! They don't use one main thread; it's just that each process's main thread has the same name. Which makes sense: when you fork a process all the data in the process has to remain valid in both parent and child, so any pointers would have to have the same value (and the -1216477504 happens to be the value of that pointer cast to an int). Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
Re: list comprehension to do os.path.split_all ?
On Thursday, July 28, 2011 2:31:43 PM UTC-7, Ian wrote: > On Thu, Jul 28, 2011 at 3:15 PM, Emile van Sebille wrote: > > On 7/28/2011 1:18 PM gry said... > >> > >> [python 2.7] I have a (linux) pathname that I'd like to split > >> completely into a list of components, e.g.: > >> '/home/gyoung/hacks/pathhack/foo.py' --> ['home', 'gyoung', > >> 'hacks', 'pathhack', 'foo.py'] > >> > >> os.path.split gives me a tuple of dirname,basename, but there's no > >> os.path.split_all function. > >> > > > > Why not just split? > > > > '/home/gyoung/hacks/pathhack/foo.py'.split(os.sep) > > Using os.sep doesn't make it cross-platform. On Windows: > > >>> os.path.split(r'C:\windows') > ('C:\\', 'windows') > >>> os.path.split(r'C:/windows') > ('C:/', 'windows') > >>> r'C:\windows'.split(os.sep) > ['C:', 'windows'] > >>> r'C:/windows'.split(os.sep) > ['C:/windows'] It's not even foolproof on Unix:

>>> '/home//h1122/bin///ghi/'.split('/')
['', 'home', '', 'h1122', 'bin', '', '', 'ghi', '']

The whole point of the os.path functions is to take care of whatever oddities there are in the path system. When you use string manipulation to manipulate paths, you bypass all of that and leave yourself open to those oddities, and then you find your applications break when a user enters a doubled slash. So stick to os.path. Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
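A split_all built on os.path.split (posixpath here, so the behavior is identical on any platform) handles both the doubled-slash and trailing-slash oddities; note this sketch keeps the root as the first component:

```python
import posixpath

def split_all(path):
    """Split a POSIX path into all of its components, root included."""
    parts = []
    while True:
        head, tail = posixpath.split(path)
        if tail:
            parts.append(tail)
        if head == path:          # no further progress: root or empty string
            if head:
                parts.append(head)
            break
        path = head
    parts.reverse()
    return parts

full = split_all('/home/gyoung/hacks/pathhack/foo.py')
messy = split_all('/home//h1122/bin///ghi/')   # extra slashes collapse away
```

By repeatedly applying os.path.split instead of str.split, the empty components produced by doubled and trailing slashes never appear.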
Re: Aw: python.org back up ?(was Re: python.org is down?)
On Sunday, July 24, 2011 11:42:45 AM UTC-7, David Zerrenner wrote: > *pew* I can't live without the docs, that really made my day now. If you can't live without the docs, you should consider downloading them and accessing them locally. That'll let you work whenever python.org goes down, and will help keep the load off the server when it's up. Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
Re: list(), tuple() should not place at "Built-in functions" in documentation
On Thursday, July 14, 2011 8:00:16 PM UTC-7, Terry Reedy wrote: > I once proposed, I believe on the tracker, that 'built-in functions' be > expanded to 'built-in functions and classes'. That was rejected on the > basis that people would then expect the full class documentation that is > in the 'built-in types' section (which could now be called the > built-in classes section). Built-in functions and constructors? Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
Re: Functional style programming in python: what will you talk about if you have an hour on this topic?
On Wednesday, July 13, 2011 5:39:16 AM UTC-7, Anthony Kong wrote: [snip] > I think I will go through the following items: > > itertools module > functools module > concept of currying ('partial') > > > I would therefore want to ask your input e.g. > > Is there any good example to illustrate the concept? > What is the most important features you think I should cover? > What will happen if you overdo it? Java is easily the worst language I know of for support of functional programming (unless they added delegates or some other tacked-on type like that), so my advice would be to keep it light, for two reasons:

1. It won't take a lot to impress them
2. Too much will make them roll their eyes

Thinking about it, one of the problems with demonstrating functional features is that it's not obvious how those features can simplify things. To get the benefit, you have to take a step back and redo the approach somewhat. Therefore, I'd recommend introducing these features as part of a demo of how a task in Python can be solved much more concisely than in Java. It's kind of an art to find good examples, though. Off the top of my head, I can think of using the functools module to help with logging or to apply patches, whereas in Java they'd have to resort to a code weaver or lots of boilerplate. Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
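As a small taste of the currying point, functools.partial can stand in for a boilerplate-heavy logger hierarchy (the names here are invented for illustration):

```python
from functools import partial

def emit(sink, level, message):
    sink.append("%s: %s" % (level, message))

records = []
# Currying fixes the sink and level once; no Logger class, no
# boilerplate forwarding methods as you'd write in Java.
warning = partial(emit, records, "WARNING")
error = partial(emit, records, "ERROR")

warning("disk nearly full")
error("disk full")
```

Each partial is a plain callable, so it can be passed around and stored anywhere a function is expected.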
Re: "Python Wizard," with apologies to The Who
On Tuesday, July 12, 2011 9:40:23 AM UTC-7, John Keisling wrote: > After too much time coding Python scripts and reading Mark Lutz's > Python books, I was inspired to write the following lyrics. For those > too young to remember, the tune is that of "Pinball Wizard," by The > Who. May it bring you as much joy as it brought me! > > > I cut my teeth on BASIC > At scripting I'm no pawn > From C++ to Java > My code goes on and on > But I ain't seen nothing like this > In any place I've gone > That modeling and sim guy > Sure codes some mean Python! That's pretty funny. I knew what it would be even when I saw the cut-off subject line, and I am too young to remember it. Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
Re: Function docstring as a local variable
On Sunday, July 10, 2011 4:06:27 PM UTC-7, Corey Richardson wrote: > Excerpts from Carl Banks's message of Sun Jul 10 18:59:02 -0400 2011: > > print __doc__ > > > > Python 2.7.1 (r271:86832, Jul 8 2011, 22:48:46) > [GCC 4.4.5] on linux2 > Type "help", "copyright", "credits" or "license" for more information. > >>> def foo(): > ... "Docstring" > ... print __doc__ > ... > >>> foo() > None > >>> > > What does yours do? It prints the module docstring, same as your example does. You did realize that was the question I was answering, right? Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
Re: Function docstring as a local variable
On Sunday, July 10, 2011 3:50:18 PM UTC-7, Tim Johnson wrote: > Here's a related question: > I can get the docstring for an imported module: > >>> import tmpl as foo > >>> print(foo.__doc__) > Python templating features > >Author - tim at akwebsoft dot com > > ## Is it possible to get the module docstring > ## from the module itself? print __doc__ Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
Re: What makes functions special?
On Saturday, July 9, 2011 2:28:58 PM UTC-7, Eric Snow wrote: > A tracker issue [1] recently got me thinking about what makes > functions special. The discussion there was regarding the distinction > between compile time (generation of .pyc files for modules and > execution of code blocks), [function] definition time, and [function] > execution time. Definition time actually happens during compile time, Nope. Compile time and definition time are always distinct. > but it has its own label to mark the contrast with execution time. So > why do functions get this special treatment? They don't really. [snip] > Am I wrong about the optimization expectation? As best as I can tell, you are asking (in a very opaque way) why the Python compiler even bothers to create code objects, rather than just to create a function object outright, because it doesn't (you think) do that for any other kind of object. Two answers (one general, one specific): 1. You're looking for a pattern where it doesn't make any sense for there to be one. The simple truth of the matter is different syntaxes do different things, and there isn't anything more to it. A lambda expression or def statement does one thing; a different syntax, such as an integer constant, does another thing. Neither one is treated "specially"; they're just different. Consider another example: tuple syntax versus list syntax. Python will often build the tuple at compile time, but it never builds a list at compile time. Neither one is "special"; it's just that tuple syntax does one thing, list syntax does a different thing. 2. Now that we've dispensed with the idea that Python is treating functions specially, let's answer your specific question. It's not special, but still, why the code object? The reason, simply, is that code objects are used for more than just functions. Code objects are also used in modules, and in eval and exec statements, and there's one for each statement at the command line. 
Code objects are also used directly by the interpreter when executing byte code. A function object is only one of several "interfaces" to a code object. A minor reason is that code objects are constant (in fact, any object that is built at compile time must be a constant). However, function objects are mutable. I hope that helps clear things up. Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
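The distinction is easy to observe with compile() and exec (CPython behavior shown; in CPython the nested code object is literally a constant of the module's code object):

```python
# Compile time builds code objects; definition time (running the def
# statement) wraps one of them in a function object.
source = "def f():\n    return 42\n"
module_code = compile(source, "<demo>", "exec")   # compile time

namespace = {}
exec(module_code, namespace)                      # definition time happens here
f = namespace["f"]
result = f()

# The function reuses the code object built at compile time.
shared = f.__code__ in module_code.co_consts
```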
Re: Does hashlib support a file mode?
On Wednesday, July 6, 2011 12:07:56 PM UTC-7, Phlip wrote: > If I call m = md5() twice, I expect two objects. > > I am now aware that Python bends the definition of "call" based on > where the line occurs. Principle of least surprise. Phlip: We already know about this violation of the least surprise principle; most of us acknowledge it as small blip in an otherwise straightforward and clean language. (Incidentally, fixing it would create different surprises, but probably much less common ones.) We've helped you with your problem, but you risk alienating those who helped you when you badmouth the whole language on account of this one thing, and you might not get such prompt help next time. So try to be nice. You are wrong about Python bending the definition of "call", though. Surprising though it be, the Python language is very explicit that the default arguments are executed only once, when creating the function, *not* when calling it. Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
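The md5 case from this thread makes the once-only evaluation easy to see (and shows the usual per-call idiom):

```python
import hashlib

def digest(data, m=hashlib.md5()):      # ONE md5 object, created at def time
    m.update(data)
    return m.hexdigest()

first = digest(b"abc")
second = digest(b"abc")                 # same object again: state accumulates

def digest_fresh(data, m=None):         # per-call default, the usual idiom
    m = hashlib.md5() if m is None else m
    m.update(data)
    return m.hexdigest()

repeatable = digest_fresh(b"abc") == digest_fresh(b"abc")
```

The second call to digest() hashes "abcabc", not "abc", because the default object carried its state over from the first call.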
Re: Nested/Sub Extensions in Python
On Saturday, July 2, 2011 6:35:19 AM UTC-7, H Linux wrote: > On Jul 2, 2:28 am, Carl Banks > wrote: > > On Friday, July 1, 2011 1:02:15 PM UTC-7, H Linux wrote: > > > Once I try to nest this, I cannot get the module to load anymore: > > > >import smt.bar > > > Traceback (most recent call last): > > > File "", line 1, in > > > ImportError: No module named bar > > > > [snip] > > > > > PyMODINIT_FUNC > > > initbar(void) > > > { > > > Py_InitModule("smt.bar", bar_methods); > > > } > > > > This should be: Py_InitModule("bar", bar_methods); > > That's probably it; other than that, it looks like you did everything right. > Thanks for your help, but I actually tried both ways. This does not > seem to be the problem, as it fails both ways with identical error > message. Correct, I misspoke. The problem would be if the initbar function name was misspelled. > > What does the installed file layout look like after running distutils setup? > Tree output is: > /usr/local/lib/python2.6/dist-packages/ > ├── foo.so > ├── smt > │ ├── bar.so > │ ├── __init__.py > │ └── __init__.pyc > └── smt-0.1.egg-info > > Just in case anyone is willing to have a look, here is a link to the > complete module as built with: > python setup.py sdist: > https://docs.google.com/leaf?id=0Byt62fSE5VC5NTgxOTFkYzQtNzI3NC00OTUzLWI1NzMtNmJjN2E0ZTViZTJi&hl=en_US > > If anyone has any other ideas how to get it to work, thanks in > advance... I got and built the package, and it imported smt.bar just fine for me. So my advice would be to rename all the modules. My guess is that there is a conflict for smt and Python is importing some other module or package. Is there a file called smt.py in your working directory? Try doing this: import smt print smt.__file__ And see if it prints at the location where your smt module is installed. If not, you have a conflict. And if that is the problem, in the future be more careful to keep your module namespace clean. 
Choose good, distinct names for modules and packages to lessen the risk of conflict. Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
Re: Nested/Sub Extensions in Python
On Friday, July 1, 2011 1:02:15 PM UTC-7, H Linux wrote: > Once I try to nest this, I cannot get the module to load anymore: > >import smt.bar > Traceback (most recent call last): > File "", line 1, in > ImportError: No module named bar [snip] > PyMODINIT_FUNC > initbar(void) > { > Py_InitModule("smt.bar", bar_methods); > } This should be: Py_InitModule("bar", bar_methods); That's probably it; other than that, it looks like you did everything right. What does the installed file layout look like after running distutils setup? Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
Re: writable iterators?
On Wednesday, June 22, 2011 4:10:39 PM UTC-7, Neal Becker wrote: > AFAIK, the above is the only python idiom that allows iteration over a > sequence > such that you can write to the sequence. And THAT is the problem. In many > cases, indexing is much less efficient than iteration. Well, if your program is such that you can notice a difference between indexing and iteration, you probably have better things to worry about. But whatever. You can get the effect you're asking for like this:

class IteratorByProxy(object):
    def __init__(self,iterable):
        self.set(iterable)
    def __iter__(self):
        return self
    def next(self):
        return self.current_iter.next()
    def set(self,iterable):
        self.current_iter = iter(iterable)

s = IteratorByProxy(xrange(10))
for i in s:
    print i
    if i == 6:
        s.set(xrange(15,20))

Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
Re: how to inherit docstrings?
On Friday, June 10, 2011 7:30:06 PM UTC-7, Steven D'Aprano wrote: > Carl, I'm not exactly sure what your opposition is about here. Others > have already given real-world use cases for where inheriting docstrings > would be useful and valuable. Do you think that they are wrong? If so, > you should explain why their use-case is invalid and what solution they > should use. I don't have any issue with inheriting docstrings explicitly. Elsewhere in this thread I said I was +1 on the language helping to simplify this. What I am opposed to is automatically inheriting docstrings. I do think people are overstating the uses where inherited methods would share the same docstring, but that's beside the point. Overstated or not, one cannot deny that the base method's docstring is frequently unacceptable for the derived method, and my opposition to automatic inheritance is because in those cases it will lead to incorrect docstrings, and for no other reason. > If you fear that such docstring inheritance will become the default, > leading to a flood of inappropriate documentation, then I think we all > agree that this would be a bad thing. That is exactly what I fear, and you are wrong that "we all agree that this would be a bad thing". Several people in this thread are arguing that inheriting docstrings by default is the right thing, and that would lead to heaps of inappropriate documentation. Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
Re: how to inherit docstrings?
On Friday, June 10, 2011 2:51:20 AM UTC-7, Steven D'Aprano wrote: > On Thu, 09 Jun 2011 20:36:53 -0700, Carl Banks wrote: > > Put it this way: if Python doesn't automatically inherit docstrings, the > > worst that can happen is missing information. If Python does inherit > > docstrings, it can lead to incorrect information. > > This is no different from inheriting any other attribute. If your class > inherits "attribute", you might get an invalid value unless you take > steps to ensure it is a valid value. This failure mode doesn't cause us > to prohibit inheritance of attributes. Ridiculous. The docstring is an attribute of the function, not the class, which makes it very different from any other attribute. Consider this:

class A(object):
    foo = SomeClass()

class B(A):
    foo = SomeOtherUnrelatedClass()

Would you have B.foo "inherit" all the attributes of A.foo that it doesn't define itself? That's the analogous case to inheriting docstrings. Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
Re: how to inherit docstrings?
On Thursday, June 9, 2011 10:18:34 PM UTC-7, Ben Finney wrote: [snip example where programmer is expected to consult class docstring to infer what a method does] > There's nothing wrong with the docstring for a method referring to the > context within which the method is defined. > > > Whenever somebody overrides a method to do something different, the > > inherited docstring will be insufficient (as in your ABC example) or > > wrong. > > I hope the above demonstrates that your assertion is untrue. Every > single method on a class doesn't need to specify the full context; a > docstring that requires the reader to know what class the method belongs > to is fine. It does not. A docstring that requires the reader to figure that out is a poor docstring. There is nothing wrong, as you say, with incomplete documentation that doesn't say what the function actually does. There's nothing wrong with omitting the docstring entirely, for that matter. However, the question here is not whether a programmer is within their rights to use poor docstrings, but whether the language would go out of its way to support them. It should not. There is one thing that is very wrong to do with a docstring: provide incorrect or misleading information. So, despite having brought the point up myself, I am going to say the point is moot. Even if it is absolutely desirable for a language to go out of its way to support incomplete docstrings, part of that bargain is that the language will go out of its way to support flat-out wrong docstrings, and that trumps any ostensible benefit. Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
Re: how to inherit docstrings?
On Thursday, June 9, 2011 7:37:19 PM UTC-7, Eric Snow wrote: > When I write ABCs to capture an interface, I usually put the > documentation in the docstrings there. Then when I implement I want > to inherit the docstrings. Implicit docstring inheritance for > abstract base classes would meet my needs. Do all the subclasses do exactly the same thing? What's the use of a docstring if it doesn't document what the function does?

import random

class Shape(object):
    def draw(self):
        "Draw a shape"
        raise NotImplementedError

class Triangle(Shape):
    def draw(self):
        print "Triangle"

class Square(Shape):
    def draw(self):
        print "Square"

x = random.choice([Triangle(),Square()])
print x.draw.__doc__   # prints "Draw a shape"

Quick, what shape is x.draw() going to draw? Shouldn't your docstring say what the method is going to do? So, I'm sorry, but I don't see this being sufficient for your use case for ABCs. > I'm just not clear on the > impact this would have for the other use cases of docstrings. Whenever somebody overrides a method to do something different, the inherited docstring will be insufficient (as in your ABC example) or wrong. This, I would say, is the case most of the time when overriding a base class method. When this happens, the language is committing an error. Put it this way: if Python doesn't automatically inherit docstrings, the worst that can happen is missing information. If Python does inherit docstrings, it can lead to incorrect information. Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
Re: how to inherit docstrings?
On Thursday, June 9, 2011 6:42:44 PM UTC-7, Ben Finney wrote: > Carl Banks > writes: > > > Presumably, the reason you are overriding a method in a subclass is to > > change its behavior; I'd expect an inherited docstring to be > > inaccurate more often than not. > > In which case the onus is on the programmer implementing different > behaviour to also override the docstring. Totally disagree. The programmer should never be under the onus to correct mistakes made by the language. "In the face of ambiguity, refuse the temptation to guess." When the language tries to guess what the programmer wants, you get monstrosities like Perl. Don't want to go there. Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
Re: how to inherit docstrings?
On Thursday, June 9, 2011 3:27:36 PM UTC-7, Gregory Ewing wrote: > IMO, it shouldn't be necessary to explicitly copy docstrings > around like this in the first place. Either it should happen > automatically, or help() should be smart enough to look up > the inheritance hierarchy when given a method that doesn't > have a docstring of its own. Presumably, the reason you are overriding a method in a subclass is to change its behavior; I'd expect an inherited docstring to be inaccurate more often than not. So I'd be -1 on automatically inheriting them. However, I'd be +1 easily on a little help from the language to explicitly request to inherit the docstring. Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
Re: how to inherit docstrings?
On Thursday, June 9, 2011 12:13:06 AM UTC-7, Eric Snow wrote: > On Thu, Jun 9, 2011 at 12:37 AM, Ben Finney wrote: > > So, it's even possible to do what you ask without decorators at all: > > > > class Foo(object): > > def frob(self): > > """ Frobnicate thyself. """ > > > > class Bar(Foo): > > def frob(self): > > pass > > frob.__doc__ = Foo.frob.__doc__ > > > > Not very elegant, and involving rather too much repetition; but not > > difficult. > > > > Yeah, definitely you can do it directly for each case. However, the > inelegance, repetition, and immodularity are exactly why I am pursuing > a solution. :) (I included a link in the original message to > examples of how you can already do it with metaclasses and class > decorators too.) > > I'm just looking for a way to do it with decorators in the class body > without using metaclasses or class decorators. The tricky part is that, inside the class body (where decorators are being evaluated) the class object doesn't exist yet, so the method decorator has no way to infer what the base classes are at that point. A class decorator or metaclass can operate after the class object is made, but a method decorator can't. The best you could probably do with a method decorator is something like this:

def inherit_docstring(base):
    def set_docstring(f):
        f.__doc__ = getattr(base,f.func_name).__doc__
        return f
    return set_docstring

where you have to repeat the base class every time:

class Bar(Foo):
    @inherit_docstring(Foo)
    def somefunction(self):
        pass

Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
Re: GIL in alternative implementations
On Monday, June 6, 2011 9:03:55 PM UTC-7, Gabriel Genellina wrote: > En Sat, 28 May 2011 14:05:16 -0300, Steven D'Aprano > escribió: > > > On Sat, 28 May 2011 09:39:08 -0700, John Nagle wrote: > > > >> Python allows patching code while the code is executing. > > > > Can you give an example of what you mean by this? > > > > If I have a function: > > > > > > def f(a, b): > > c = a + b > > d = c*3 > > return "hello world"*d > > > > > > how would I patch this function while it is executing? > > I think John Nagle was thinking about rebinding names: > > > def f(self, a, b): >while b>0: > b = g(b) > c = a + b > d = self.h(c*3) >return "hello world"*d > > both g and self.h may change its meaning from one iteration to the next, > so a complete name lookup is required at each iteration. This is very > useful sometimes, but affects performance a lot. Its main effect on performance is that it prevents an optimizer from inlining a function call (which is a good chunk of the payoff you get in languages that can do that). I'm not sure where he gets the idea that this has any impact on concurrency, though. Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
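The rebinding Gabriel describes is easy to demonstrate: a global name is looked up afresh on every call, so reassigning it changes the behavior of an already-defined function mid-program. A small self-contained sketch (not from the original post):

```python
def g(b):
    return b - 1

def f(a, b):
    # 'g' is a global name, looked up anew on every iteration
    total = 0
    while b > 0:
        b = g(b)
        total += a + b
    return total

assert f(1, 3) == 6      # (1+2) + (1+1) + (1+0)

def g(b):                # rebind the name: f picks up the new g immediately
    return b - 2

assert f(1, 4) == 4      # (1+2) + (1+0)
```

This flexibility is exactly what prevents an optimizer from hard-wiring the call to g into f.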
Re: float("nan") in set or as key
On Wednesday, June 1, 2011 5:53:26 PM UTC-7, Steven D'Aprano wrote: > On Tue, 31 May 2011 19:45:01 -0700, Carl Banks wrote: > > > On Sunday, May 29, 2011 8:59:49 PM UTC-7, Steven D'Aprano wrote: > >> On Sun, 29 May 2011 17:55:22 -0700, Carl Banks wrote: > >> > >> > Floating point arithmetic evolved more or less on languages like > >> > Fortran where things like exceptions were unheard of, > >> > >> I'm afraid that you are completely mistaken. > >> > >> Fortran IV had support for floating point traps, which are "things like > >> exceptions". That's as far back as 1966. I'd be shocked if earlier > >> Fortrans didn't also have support for traps. > >> > >> http://www.bitsavers.org/pdf/ibm/7040/C28-6806-1_7040ftnMathSubrs.pdf > > > > Fine, it wasn't "unheard of". I'm pretty sure the existence of a few > > high end compiler/hardware combinations that supported traps doesn't > > invalidate my basic point. > > On the contrary, it blows it out of the water and stomps its corpse into > a stain on the ground. Really? I am claiming that, even if everyone and their mother thought exceptions were the best thing ever, NaN would have been added to IEEE anyway because most hardware didn't support exceptions. Therefore the fact that NaN is in IEEE is not any evidence that NaN is a good idea. You are saying that the existence of one early system that supported exceptions is not merely an argument against that claim, but blows it out of the water? Your logic sucks then. You want to go off arguing that there were good reasons aside from backwards compatibility that they added NaN, be my guest. Just don't go around saying, "Its in IEEE there 4 its a good idear LOL". Lots of standards have all kinds of bad ideas in them for the sake of backwards compatibility, and when someone goes around claiming that something is a good idea simply because some standard includes it, it is the first sign that they're clueless about what standardization actually is.
> NANs weren't invented as an alternative for > exceptions, but because exceptions are usually the WRONG THING in serious > numeric work. > > Note the "usually". For those times where you do want to interrupt a > calculation just because of an invalid operation, the standard allows you > to set a trap and raise an exception. I don't want to get into an argument over best practices in serious numerical programming, so let's just agree with this point for argument's sake. Here's the problem: Python is not for serious numerical programming. Yeah, it's a really good language for calling other languages to do numerical programming, but it's not good for doing serious numerical programming itself. Anyone with some theoretical problem where NaN is a good idea should already be using modules or separate programs written in C or Fortran. Casual and lightweight numerical work (which Python is good at) is not a wholly separate problem domain where the typical rules ("Errors should never pass silently") should be swept aside. [snip] > You'll note that, out of the box, numpy generates NANs: > > >>> import numpy > >>> x = numpy.array([float(x) for x in range(5)]) > >>> x/x > Warning: invalid value encountered in divide > array([ nan, 1., 1., 1., 1.]) Steven, seriously I don't know what's going through your head. I'm saying strict adherence to IEEE is not the best idea, and you cite the fact that a library tries to strictly adhere to IEEE as evidence that strictly adhering to IEEE is a good idea. Beg the question much? > The IEEE standard supports both use-cases: those who want exceptions to > bail out early, and those who want NANs so the calculation can continue. > This is a good thing. Failing to support the standard is a bad thing. > Despite your opinion, it is anything but obsolete. There are all kinds of good reasons to go against standards. "Failing to support the standard is a bad thing" are the words of a fool. 
A wise person considers the cost of breaking the standard versus the benefit gained. It's clear that IEEE's NaN handling is woefully out of place in the philosophy of Python, which tries to be newbie friendly and robust to errors; and Python has no real business trying to perform serious numerical work where (ostensibly) NaNs might find a use. Therefore, the cost of breaking the standard is small, but the benefit significant, so Python would be very wise to break with IEEE in the handling of NaNs. Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
Re: float("nan") in set or as key
On Wednesday, June 1, 2011 11:10:33 AM UTC-7, Ethan Furman wrote: > Carl Banks wrote: > > For instance, say you are using an implementation that uses > > floating point, and you define a function that uses Newton's > > method to find a square root: > > > > def square_root(N,x=None): > > if x is None: > > x = N/2 > > for i in range(100): > > x = (x + N/x)/2 > > return x > > > > It works pretty well on your floating-point implementation. > > Now try running it on an implementation that uses fractions > > by default > > > > (Seriously, try running this function with N as a Fraction.) > > Okay, will this thing ever stop? It's been running for 90 minutes now. > Is it just incredibly slow? > > Any enlightenment appreciated! Fraction needs to find the LCD of the denominators when adding; but LCD calculation becomes very expensive as the denominators get large (which they will since you're dividing by an intermediate result in a loop). I suspect the time needed grows exponentially (at least) with the value of the denominators. The LCD calculation should slow the calculation down to an astronomical crawl well before you encounter memory issues. This is why representation simply cannot be left as an implementation detail; rationals and floating-points behave too differently. Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
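The blow-up is easy to observe without waiting 90 minutes: cap the iteration count and look at the exact Fraction that comes back. A sketch of the effect (the reduced loop count is my change to the thread's function):

```python
from fractions import Fraction

def square_root(N, x=None, iterations=8):   # the thread's 100 iterations would never finish
    if x is None:
        x = N / 2
    for i in range(iterations):
        x = (x + N / x) / 2
    return x

r = square_root(Fraction(2))
# After only 8 iterations the exact denominator already has close to
# a hundred digits; its length roughly doubles each iteration, since
# each step multiplies the previous numerator and denominator together.
print(len(str(r.denominator)))
```

Each additional iteration roughly squares the denominator, so arithmetic on the exact values quickly becomes astronomically expensive, which is the behavior difference being pointed out.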
Re: float("nan") in set or as key
On Wednesday, June 1, 2011 10:17:54 AM UTC-7, OKB (not okblacke) wrote: > Carl Banks wrote: > > > On Tuesday, May 31, 2011 8:57:57 PM UTC-7, Chris Angelico wrote: > >> On Wed, Jun 1, 2011 at 1:30 PM, Carl Banks wrote: > > Python has several non-integer number types in the standard > > library. The one we are talking about is called float. If the > > type we were talking about had instead been called real, then your > > question might make some sense. But the fact that it's called > > float really does imply that that underlying representation is > > floating point. > > That's true, but that's sort of putting the cart before the horse. Not really. The (original) question Chris Angelico was asking was, "Is it an implementation detail that Python's non-integer type is represented as an IEEE floating-point?" Which the above is the appropriate answer to. > In response to that, one can just ask: why is this type called "float"? Which is a different question; not the question I was answering, and not one I care to discuss. Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
Re: float("nan") in set or as key
On Tuesday, May 31, 2011 8:57:57 PM UTC-7, Chris Angelico wrote: > On Wed, Jun 1, 2011 at 1:30 PM, Carl Banks > wrote: > > I think you misunderstood what I was saying. > > > > It's not *possible* to represent a real number abstractly in any digital > > computer. Python couldn't have an "abstract real number" type even it > > wanted to. > > True, but why should the "non-integer number" type be floating point > rather than (say) rational? Python has several non-integer number types in the standard library. The one we are talking about is called float. If the type we were talking about had instead been called real, then your question might make some sense. But the fact that it's called float really does imply that the underlying representation is floating point. > Actually, IEEE floating point could mostly > be implemented in a two-int rationals system (where the 'int' is > arbitrary precision, so it'd be Python 2's 'long' rather than its > 'int'); in a sense, the mantissa is the numerator, and the scale > defines the denominator (which will always be a power of 2). Yes, > there are very good reasons for going with the current system. But are > those reasons part of the details of implementation, or are they part > of the definition of the data type? Once again, Python float is an IEEE double-precision floating point number. This is part of the language; it is not an implementation detail. As I mentioned elsewhere, the Python library establishes this as part of the language because it includes several functions that operate on IEEE numbers. And, by the way, the types you're comparing it to aren't as abstract as you say they are. Python's int type is required to have a two's-complement binary representation and support bitwise operations.
> > A digital computer can only represent countable things > > exactly, for obvious reasons; therefore, to model > > non-countable things like real numbers, one must use a > > countable approximation like floating-point.) > > Right. Obviously a true 'real number' representation can't be done. > But there are multiple plausible approximations thereof (the best > being rationals). That's a different question. I don't care to discuss it, except to say that your default real-number type would have to be called something other than float, if it were not a floating point. > Not asking for Python to be changed, just wondering why it's defined > by what looks like an implementation detail. It's like defining that a > 'character' is an 8-bit number using the ASCII system, which then > becomes problematic with Unicode. It really isn't. Unlike with characters (which are trivially extensible to larger character sets, just add more bytes), different real number approximations differ in details too important to be left to the implementation. For instance, say you are using an implementation that uses floating point, and you define a function that uses Newton's method to find a square root:

def square_root(N, x=None):
    if x is None:
        x = N/2
    for i in range(100):
        x = (x + N/x)/2
    return x

It works pretty well on your floating-point implementation. Now try running it on an implementation that uses fractions by default (Seriously, try running this function with N as a Fraction.) So I'm going to opine that the representation does not seem like an implementation detail. Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
Re: float("nan") in set or as key
On Tuesday, May 31, 2011 8:05:43 PM UTC-7, Chris Angelico wrote: > On Wed, Jun 1, 2011 at 12:59 PM, Carl Banks > wrote: > > On Sunday, May 29, 2011 7:53:59 PM UTC-7, Chris Angelico wrote: > >> Okay, here's a question. The Python 'float' value - is it meant to be > >> "a Python representation of an IEEE double-precision floating point > >> value", or "a Python representation of a real number"? > > > > The former. Unlike the case with integers, there is no way that I know of > > to represent an abstract real number on a digital computer. > > This seems peculiar. Normally Python seeks to define its data types in > the abstract and then leave the concrete up to the various > implementations - note, for instance, how Python 3 has dispensed with > 'int' vs 'long' and just made a single 'int' type that can hold any > integer. Does this mean that an implementation of Python on hardware > that has some other type of floating point must simulate IEEE > double-precision in all its nuances? I think you misunderstood what I was saying. It's not *possible* to represent a real number abstractly in any digital computer. Python couldn't have an "abstract real number" type even it wanted to. (Math aside: Real numbers are not countable, meaning they cannot be put into one-to-one correspondence with integers. A digital computer can only represent countable things exactly, for obvious reasons; therefore, to model non-countable things like real numbers, one must use a countable approximation like floating-point.) You might be able to get away with saying float() merely represents an "abstract floating-point number with provisions for nan and inf", but pretty much everyone uses IEEE format, so what's the point? And no it doesn't mean Python has to support every nuance (and it doesn't). Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
Re: float("nan") in set or as key
On Sunday, May 29, 2011 7:53:59 PM UTC-7, Chris Angelico wrote: > Okay, here's a question. The Python 'float' value - is it meant to be > "a Python representation of an IEEE double-precision floating point > value", or "a Python representation of a real number"? The former. Unlike the case with integers, there is no way that I know of to represent an abstract real number on a digital computer. Python also includes several IEEE-defined operations in its library (math.isnan, math.frexp). Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
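The claim that CPython's float is an IEEE double can be checked from the standard library itself. A quick check, assuming a platform with IEEE 754 doubles (which is effectively all of them):

```python
import math
import sys

# 53 bits of mantissa and an 11-bit exponent: IEEE 754 double precision
assert sys.float_info.mant_dig == 53
assert sys.float_info.max_exp == 1024

# the IEEE-defined operations mentioned above
assert math.isnan(float("nan"))
assert math.frexp(0.75) == (0.75, 0)   # 0.75 == 0.75 * 2**0
```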
Re: float("nan") in set or as key
On Sunday, May 29, 2011 8:59:49 PM UTC-7, Steven D'Aprano wrote: > On Sun, 29 May 2011 17:55:22 -0700, Carl Banks wrote: > > > Floating point arithmetic evolved more or less on languages like Fortran > > where things like exceptions were unheard of, > > I'm afraid that you are completely mistaken. > > Fortran IV had support for floating point traps, which are "things like > exceptions". That's as far back as 1966. I'd be shocked if earlier > Fortrans didn't also have support for traps. > > http://www.bitsavers.org/pdf/ibm/7040/C28-6806-1_7040ftnMathSubrs.pdf Fine, it wasn't "unheard of". I'm pretty sure the existence of a few high end compiler/hardware combinations that supported traps doesn't invalidate my basic point. NaN was needed because few systems had a separate path to deal with exceptional situations like producing or operating on something that isn't a number. When they did exist few programmers used them. If floating-point were standardized today it might not even have NaN (and definitely wouldn't support the ridiculous NaN != NaN), because all modern systems can be expected to support exceptions, and modern programmers can be expected to use them. > The IEEE standard specifies that you should be able to control whether a > calculation traps or returns a NAN. That's how Decimal does it, that's > how Apple's (sadly long abandoned) SANE did it, and floats should do the > same thing. If your aim is to support every last clause of IEEE for better or worse, then yes that's what Python should do. If your aim is to make Python the best language it can be, then Python should reject IEEE's obsolete notions, and throw exceptions when operating on NaN. Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
Re: float("nan") in set or as key
On Sunday, May 29, 2011 6:14:58 PM UTC-7, Chris Angelico wrote: > On Mon, May 30, 2011 at 10:55 AM, Carl Banks > wrote: > > If exceptions had commonly existed in that environment there's no chance > > they would have chosen that behavior; comparison against NaN (or any > > operation with NaN) would have signaled a floating point exception. That > > is the correct way to handle exceptional conditions. > > > > The only reason to keep NaN's current behavior is to adhere to IEEE, > > but given that Python has trailblazed a path of correcting arcane > > mathematical behavior, I definitely see an argument that Python > > should do the same for NaN, and if it were done Python would be a > > better language. > > If you're going to change behaviour, why have a floating point value > called "nan" at all? If I were designing a new floating-point standard for hardware, I would consider getting rid of NaN. However, with the floating point standard that exists, that almost all floating point hardware mostly conforms to, there are certain bit pattern that mean NaN. Python could refuse to construct float() objects out of NaN (I doubt it would even be a major performance penalty), but there's reasons why you wouldn't, the main one being to interface with other code that does use NaN. It's better, then, to recognize the NaN bit patterns and do something reasonable when trying to operate on it. Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
Re: float("nan") in set or as key
On Sunday, May 29, 2011 7:41:13 AM UTC-7, Grant Edwards wrote: > It treats them as identical (not sure if that's the right word). The > implementation is checking for ( A is B or A == B ). Presumably, the > assumption being that all objects are equal to themselves. That > assumption is not true for NaN objects, so the buggy behavior is > observed. Python makes this assumption in lots of common situations (apparently in an implementation-defined manner):

>>> nan = float("nan")
>>> nan == nan
False
>>> [nan] == [nan]
True

Therefore, I'd recommend never relying on NaN != NaN except in casual throwaway code. It's too easy to forget that it will stop working when you throw an item into a list or tuple. There's a function, math.isnan(), that should be the One Obvious Way to test for NaN. NaN should also never be used as a dictionary key or in a set (of course). If it weren't for compatibility with IEEE, there would be no sane argument that defining an object that is not equal to itself isn't a bug. But because there's a lot of code out there that depends on NaN != NaN, Python has to tolerate it. Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
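The identity-before-equality shortcut extends to dictionaries and sets as well, which is what makes NaN keys so treacherous. A small demonstration along the lines of the post:

```python
import math

nan = float("nan")

assert nan != nan             # IEEE comparison semantics
assert nan in [nan]           # containment checks identity first, so this "works"
assert math.isnan(nan)        # the One Obvious Way to test

d = {nan: 1}
assert d[nan] == 1            # same object: found via the identity shortcut
assert float("nan") not in d  # a *different* NaN object can never be found
```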
Re: float("nan") in set or as key
On Sunday, May 29, 2011 4:31:19 PM UTC-7, Steven D'Aprano wrote: > On Sun, 29 May 2011 22:19:49 +0100, Nobody wrote: > > > On Sun, 29 May 2011 10:29:28 +, Steven D'Aprano wrote: > > > >>> The correct answer to "nan == nan" is to raise an exception, > >>> because > >>> you have asked a question for which the answer is nether True nor > >>> False. > >> > >> Wrong. > > > > That's overstating it. There's a good argument to be made for raising an > > exception. > > If so, I've never heard it, and I cannot imagine what such a good > argument would be. Please give it. Floating point arithmetic evolved more or less on languages like Fortran where things like exceptions were unheard of, and defining NaN != NaN was a bad trick they chose for testing against NaN for lack of a better way. If exceptions had commonly existed in that environment there's no chance they would have chosen that behavior; comparison against NaN (or any operation with NaN) would have signaled a floating point exception. That is the correct way to handle exceptional conditions. The only reason to keep NaN's current behavior is to adhere to IEEE, but given that Python has trailblazed a path of correcting arcane mathematical behavior, I definitely see an argument that Python should do the same for NaN, and if it were done Python would be a better language. Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
Re: Why did Quora choose Python for its development?
On Friday, May 27, 2011 6:47:21 AM UTC-7, Roy Smith wrote: > In article <948l8n...@mid.individual.net>, > Gregory Ewing wrote: > > > John Bokma wrote: > > > > > A Perl programmer will call this line noise: > > > > > > double_word_re = re.compile(r"\b(?P<word>\w+)\s+(?P=word)(?!\w)", > > > re.IGNORECASE) > > One of the truly awesome things about the Python re library is that it > lets you write complex regexes like this: > > pattern = r"""\b # beginning of line > (?P<word>\w+) # a word > \s+ # some whitespace > (?P=word)(?!\w) # the same word again >""" > double_word_re = re.compile(pattern, re.I | re.X) Perl has the X flag as well, in fact I'm pretty sure Perl originated it. Just saying. Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
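A runnable version of the verbose pattern (note the named group must be spelled `(?P<word>\w+)`; the archive tends to eat angle brackets), with a quick check:

```python
import re

pattern = r"""\b              # word boundary
    (?P<word>\w+)             # a word
    \s+                       # some whitespace
    (?P=word)(?!\w)           # the same word again
"""
double_word_re = re.compile(pattern, re.I | re.X)

m = double_word_re.search("Paris in the the spring")
assert m.group("word") == "the"
```

With re.X (re.VERBOSE), whitespace and `#` comments inside the pattern are ignored, so it compiles to the same regex as the one-liner.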
Re: bug in str.startswith() and str.endswith()
On Thursday, May 26, 2011 4:27:22 PM UTC-7, MRAB wrote: > On 27/05/2011 00:27, Ethan Furman wrote: > > I've tried this in 2.5 - 3.2: > > > > --> 'this is a test'.startswith('this') > > True > > --> 'this is a test'.startswith('this', None, None) > > Traceback (most recent call last): > > File "<stdin>", line 1, in <module> > > TypeError: slice indices must be integers or None or have an __index__ > > method > > > > The 3.2 docs say this: > > > > str.startswith(prefix[, start[, end]]) > > Return True if string starts with the prefix, otherwise return False. > > prefix can also be a tuple of prefixes to look for. With optional start, > > test string beginning at that position. With optional end, stop > > comparing string at that position > > > > str.endswith(suffix[, start[, end]]) > > Return True if the string ends with the specified suffix, otherwise > > return False. suffix can also be a tuple of suffixes to look for. With > > optional start, test beginning at that position. With optional end, stop > > comparing at that position. > > > > Any reason this is not a bug? > > > Let's see: 'start' and 'end' are optional, but aren't keyword > arguments, and can't be None... > > I'd say bug. I also say bug. The end parameter looks pretty useless for .startswith() and is probably only present for consistency with other string search methods like .index(). Yet on .index() using None as an argument works as intended:

>>> "cbcd".index("c",None,None)
0

So it's there for consistency, yet is not consistent. Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
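For reference, `.index()` has always accepted None for start/end, and the inconsistency in `.startswith()`/`.endswith()` was fixed in later CPython releases (from 3.3 on, if I recall correctly), so on a modern interpreter all three behave consistently:

```python
s = "this is a test"

assert s.index("this", None, None) == 0        # worked on the versions above too
assert s.startswith("this", None, None)        # raised TypeError in 2.5 - 3.2
assert s.endswith("test", None, None)          # likewise
```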
Re: super() in class defs?
On Wednesday, May 25, 2011 10:54:11 AM UTC-7, Jess Austin wrote: > I may be attempting something improper here, but maybe I'm just going > about it the wrong way. I'm subclassing > http.server.CGIHTTPRequestHandler, and I'm using a decorator to add > functionality to several overridden methods. > > def do_decorate(func): > . def wrapper(self): > . if appropriate(): > . return func() > . complain_about_error() > . return wrapper > > class myHandler(CGIHTTPRequestHandler): > . @do_decorate > . def do_GET(self): > . return super().do_GET() > . # also override do_HEAD and do_POST > > My first thought was that I could just replace that whole method > definition with one line: > > class myHandler(CGIHTTPRequestHandler): > . do_GET = do_decorate(super().do_GET) > > That generates the following error: > > SystemError: super(): __class__ cell not found > > So I guess that when super() is called in the context of a class def > rather than that of a method def, it doesn't have the information it > needs. Right. Actually the class object itself doesn't even exist yet when super() is invoked. (It won't be created until after the end of the class statement block.) > Now I'll probably just say: > > do_GET = do_decorate(CGIHTTPRequestHandler.do_GET) > > but I wonder if there is a "correct" way to do this instead? Thanks! Well, since the class object isn't created until after the end of the class statement block, it's impossible to invoke super() on the class from inside the block. So there are only two ways to invoke super(): 1. like you did above, by calling it inside a method, and 2. call it beyond the end of the class statement, like this:

class myHandler(CGIHTTPRequestHandler):
    pass

myHandler.do_GET = do_decorate(super(myHandler).do_GET)

I wouldn't call that correct, though. (I'm not even sure it'll work, since I don't have Python 3 handy to test it, but as far as I can tell it will.) It's just one of the quirks of Python's type system.
I don't agree with Ian's recommendation not to use super() in general, but I'd probably agree that one should stick to using it only in its intended way (to invoke base-class methods directly). Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
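The safe pattern here, naming the base class explicitly rather than calling super() in the class body, can be sketched like this (the handler and method bodies are illustrative, not the real CGIHTTPRequestHandler):

```python
def do_decorate(func):
    def wrapper(self):
        return "decorated: " + func(self)
    return wrapper

class BaseHandler:
    def do_GET(self):
        return "base GET"

class MyHandler(BaseHandler):
    # super() is unusable at this point: the class object doesn't
    # exist yet, but the base class can be named directly
    do_GET = do_decorate(BaseHandler.do_GET)

assert MyHandler().do_GET() == "decorated: base GET"
```

The cost is repeating the base class name, which is exactly the trade-off discussed above.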
Re: Why did Quora choose Python for its development?
On Sunday, May 22, 2011 12:44:18 AM UTC-7, Octavian Rasnita wrote: > I've noticed that on many Perl mailing lists the list members talk very > rarely about Python, but only on this Python mailing list I read many > discussions about Perl, in which most of the participants use to agree that > yes, Python is better, as it shouldn't be obvious that most of the list > members prefer Python. Evidently Perl users choose to bash other languages in those languages' own mailing lists. > If Python would be so great, you wouldn't talk so much about how bad are > other languages, Sure we would. Sometimes it's fun to sit on your lofty throne and scoff at the peasantry. > or if these discussions are not initiated by envy, you would > be also talking about how bad is Visual Basic, or Pascal, or Delphi, or who > knows other languages. I would suggest that envy isn't the reason; the reason is that Perl is just that much worse than Visual Basic, Pascal, and Delphi. We only make fun of the really, really bad languages. (Or, less cynically, it's because Perl and Python historically filled the same niche, whereas VB, Pascal, and Delphi were often used for different sorts of programming.) What I'm trying to say here is your logic is invalid. People have all kinds of reasons to badmouth other languages; that some mailing list has a culture that is a bit more or a bit less approving of it than some other list tells us nothing. In any case it's ridiculous to claim envy as a factor nowadays, as Python is clearly on the rise while Perl is on the decline. Few people are choosing Perl for new projects. Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
Re: in search of graceful co-routines
On Tuesday, May 17, 2011 10:04:25 AM UTC-7, Chris Withers wrote: > Now, since the sequence is long, and comes from a file, I wanted the > provider to be an iterator, so it occurred to me I could try and use the > new 2-way generator communication to solve the "communicate back with > the provider", with something like: > > for item in provider: >try: > consumer.handleItem(self) >except: > provider.send('fail') >else: > provider.send('succeed') > > ..but of course, this won't work, as 'send' causes the provider > iteration to continue and then returns a value itself. That feels weird > and wrong to me, but I guess my use case might not be what was intended > for the send method. You just have to call send() in a loop yourself. Note that you should usually catch StopIteration whenever calling send() or next() by hand. Untested:

result = None
while True:
    try:
        item = provider.send(result)
    except StopIteration:
        break
    try:
        consumer.handleItem(item)
    except:
        result = 'failure'
    else:
        result = 'success'

Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
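Filling in a toy provider and consumer makes the driving loop concrete (the generator and handler here are invented for illustration; the loop is the one sketched above):

```python
def provider(items):
    """Yield items; the caller send()s back 'success' or 'failure'."""
    retries = []
    for item in items:
        verdict = yield item
        if verdict == 'failure':
            retries.append(item)
    for item in retries:          # offer failed items one more time
        yield item

def handle(item):
    if item < 0:
        raise ValueError("negative item")

gen = provider([1, -2, 3])
seen = []
result = None                     # first send() to a fresh generator must be None
while True:
    try:
        item = gen.send(result)
    except StopIteration:
        break
    seen.append(item)
    try:
        handle(item)
    except ValueError:
        result = 'failure'
    else:
        result = 'success'

assert seen == [1, -2, 3, -2]     # -2 failed and was offered again
```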
Re: Composition instead of inheritance
On Friday, April 29, 2011 2:44:56 PM UTC-7, Ian wrote: > On Fri, Apr 29, 2011 at 3:09 PM, Carl Banks > wrote: > > Here is my advice on mixins: > > > > Mixins should almost always be listed first in the bases. (The only > > exception is to work around a technicality. Otherwise mixins go first.) > > > > If a mixin defines __init__, it should always accept self, *args and > > **kwargs (and no other arguments), and pass those on to super().__init__. > > Same deal with any other function that different sister classes might > > define in varied ways (such as __call__). > > Really, *any* class that uses super().__init__ should take its > arguments and pass them along in this manner. If you are programming defensively for any possible scenario, you might try this (and you'd still fail). In the real world, certain classes might have more or less probability to be used in a multiple inheritance situations, and programmer needs to weigh the probability of that versus the loss of readability. For me, except when I'm designing a class specifically to participate in MI (such as a mixin), readability wins. [snip] > > A mixin should not accept arguments in __init__. Instead, it should burden > > the derived class to accept arguments on its behalf, and set attributes > > before calling super().__init__, which the mixin can access. > > Ugh. This breaks encapsulation, since if I ever need to add an > optional argument, I have to add handling for that argument to every > derived class that uses that mixin. The mixin should be able to > accept new optional arguments without the derived classes needing to > know about them. Well, encapsulation means nothing to me; if it did I'd be using Java. If you merely mean DRY, then I'd say this doesn't necessarily add to it. The derived class has a responsibility one way or another to get the mixin whatever initializers it needs. Whether it does that with __init__ args or through attributes it still has to do it. 
Since attributes are more versatile than arguments, and since it's messy to use arguments in MI situations, using attributes is the superior method. Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
Re: Composition instead of inheritance
On Thursday, April 28, 2011 6:43:35 PM UTC-7, Ethan Furman wrote: > Carl Banks wrote: > > The sorts of class that this decorator will work for are probably not > > the ones that are going to have problems cooperating in the first place. > > So you might as well just use inheritance; that way people trying to read > > the code will have a common, well-known Python construct rather than a > > custom decorator to understand. > > From thread 'python and super' on Python-Dev: > Ricardo Kirkner wrote: > > I'll give you the example I came upon: > > > > I have a TestCase class, which inherits from both Django's TestCase > > and from some custom TestCases that act as mixin classes. So I have > > something like > > > > class MyTestCase(TestCase, Mixin1, Mixin2): > >... > > > > now django's TestCase class inherits from unittest2.TestCase, which we > > found was not calling super. > > This is the type of situation the decorator was written for (although > it's too simplistic to handle that exact case, as Ricardo goes on to say > he has a setUp in each mixin that needs to be called -- it works fine > though if you are not adding duplicate names). The problem is that he was doing mixins wrong. Way wrong. Here is my advice on mixins: Mixins should almost always be listed first in the bases. (The only exception is to work around a technicality. Otherwise mixins go first.) If a mixin defines __init__, it should always accept self, *args and **kwargs (and no other arguments), and pass those on to super().__init__. Same deal with any other function that different sister classes might define in varied ways (such as __call__). A mixin should not accept arguments in __init__. Instead, it should burden the derived class to accept arguments on its behalf, and set attributes before calling super().__init__, which the mixin can access. If you insist on a mixin that accepts arguments in __init__, then it should should pop them off kwargs. 
Avoid using positional arguments, and never use named arguments. Always go through args and kwargs. If mixins follow these rules, they'll be reasonably safe to use on a variety of classes. (Maybe even safe enough to use in Django classes.) Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
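Those rules can be sketched in a short example (Python 3 syntax; the mixin and attribute names are mine, not from the thread):

```python
class LoggingMixin:
    # Rule: accept only self, *args, **kwargs, and pass them on to super()
    def __init__(self, *args, **kwargs):
        self.messages = []
        super().__init__(*args, **kwargs)

    def log(self, msg):
        # Rule: rely on an attribute the derived class set *before*
        # calling super().__init__, not on an __init__ argument
        self.messages.append("%s: %s" % (self.log_name, msg))

class Base:
    def __init__(self, value):
        self.value = value

class Widget(LoggingMixin, Base):    # Rule: mixin listed first
    def __init__(self, value):
        self.log_name = "widget"     # set before super().__init__
        super().__init__(value)

w = Widget(42)
w.log("created")
assert w.value == 42
assert w.messages == ["widget: created"]
```

With the mixin first in the bases, the MRO runs Widget, LoggingMixin, Base, so the mixin's __init__ is reached and Base still receives its positional argument untouched.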
Re: Composition instead of inheritance
On Thursday, April 28, 2011 10:15:02 AM UTC-7, Ethan Furman wrote:
> For anybody interested in composition instead of multiple inheritance, I
> have posted this recipe on ActiveState (for python 2.6/7, not 3.x):
>
> http://code.activestate.com/recipes/577658-composition-of-classes-instead-of-multiple-inherit/
>
> Comments welcome!

That's not what we mean by composition. Composition is when one object calls upon another object that it owns to implement some of its behavior. Often used to model a part/whole relationship, hence the name.

The sorts of class that this decorator will work for are probably not the ones that are going to have problems cooperating in the first place. So you might as well just use inheritance; that way people trying to read the code will have a common, well-known Python construct rather than a custom decorator to understand.

If you want to enforce no duplication of attributes you can do that, such as with this untested metaclass:

import collections

class MakeSureNoBasesHaveTheSameClassAttributesMetaclass(type):
    def __new__(metatype, name, bases, dct):
        u = collections.Counter()
        for base in bases:
            for key in base.__dict__.keys():
                u[key] += 1
        for key in dct.keys():
            u[key] += 1
        if any(u[key] > 1 for key in u.keys()):
            raise TypeError("base classes and this class share some class attributes")
        return type.__new__(metatype, name, bases, dct)

Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
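For contrast with the recipe under discussion, here is a minimal sketch of composition in the part/whole sense described above; Engine and Car are made-up names for illustration:

```python
class Engine:
    def start(self):
        return "engine running"

class Car:
    # Car *owns* an Engine and delegates part of its behavior
    # to it -- that ownership-plus-delegation is composition.
    def __init__(self):
        self.engine = Engine()

    def start(self):
        return self.engine.start()
```

No inheritance relationship exists between Car and Engine; a Car simply has an Engine and calls upon it.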
Re: A question about Python Classes
On Thursday, April 21, 2011 11:00:08 AM UTC-7, MRAB wrote:
> On 21/04/2011 18:12, Pascal J. Bourguignon wrote:
> > chad writes:
> >
> >> Let's say I have the following
> >>
> >> class BaseHandler:
> >>     def foo(self):
> >>         print "Hello"
> >>
> >> class HomeHandler(BaseHandler):
> >>     pass
> >>
> >> Then I do the following...
> >>
> >> test = HomeHandler()
> >> test.foo()
> >>
> >> How can HomeHandler call foo() when I never created an instance of
> >> BaseHandler?
> >
> > But you created one!
>
> No, he didn't, he created an instance of HomeHandler.
>
> > test is an instance of HomeHandler, which is a subclass of BaseHandler,
> > so test is also an instance of BaseHandler.
>
> test isn't really an instance of BaseHandler, it's an instance of
> HomeHandler, which is a subclass of BaseHandler.

I'm going to vote that this is incorrect usage. An instance of HomeHandler is also an instance of BaseHandler, and it is incorrect to say it is not. The call to HomeHandler does create an instance of BaseHandler. The Python language itself validates this usage: isinstance(test, BaseHandler) returns True.

If you are looking for a term to indicate an object for which type(test) == BaseHandler, then I would suggest "proper instance". test is an instance of BaseHandler, but it is not a proper instance.

Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
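The distinction above can be checked directly; "proper instance" here corresponds to an exact type match (using a Python 3 print for the example classes):

```python
class BaseHandler:
    def foo(self):
        print("Hello")

class HomeHandler(BaseHandler):
    pass

test = HomeHandler()

# test is an instance of both classes...
assert isinstance(test, HomeHandler)
assert isinstance(test, BaseHandler)

# ...but a "proper instance" only of HomeHandler:
# its exact type is HomeHandler, not BaseHandler.
assert type(test) is HomeHandler
assert type(test) is not BaseHandler
```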
Re: Python CPU
It'd be kind of hard. Python bytecode operates on objects, not memory slots, registers, or other low-level entities like that. Therefore, in order to implement a "Python machine" one would have to implement the whole object system in the hardware, more or less. So it'd be possible but not too practical or likely. Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
Re: Why aren't copy and deepcopy in __builtins__?
On Mar 27, 8:29 pm, John Ladasky wrote:
> Simple question. I use these functions much more frequently than many
> others which are included in __builtins__. I don't know if my
> programming needs are atypical, but my experience has led me to wonder
> why I have to import these functions.

I rarely use them (for things like lists I use the list() constructor to copy, and for most class instances I usually don't want a straight copy of all members), but I wouldn't have a problem if they were builtin. They make more sense than a lot of builtins.

I'd guess the main reason they're not builtin is that they aren't really that simple. The functions make use of a lot of knowledge about Python types. Builtins tend to be for straightforward, simple, building-block type functions.

Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
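A quick sketch of the copying options mentioned above, showing where copy.deepcopy differs from the shallow list() approach:

```python
import copy

nums = [1, [2, 3]]

shallow = list(nums)        # shallow copy, like copy.copy(nums) for a list
deep = copy.deepcopy(nums)  # recursively copies nested objects too

# Mutating the nested list shows the difference:
nums[1].append(4)

# the shallow copy shares the inner list with nums...
print(shallow[1])  # [2, 3, 4]
# ...while the deep copy has its own independent inner list
print(deep[1])     # [2, 3]
```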
Re: Guido rethinking removal of cmp from sort method
On Mar 25, 3:06 pm, Steven D'Aprano wrote:
> The reason Guido is considering re-introducing cmp is that somebody at
> Google approached him with a use-case where a key-based sort did not
> work. The use-case was that the user had masses of data, too much data
> for the added overhead of Decorate-Sort-Undecorate (which is what key
> does), but didn't care if it took a day or two to sort.
>
> So there is at least one use-case for preferring slowly sorting with a
> comparison function over key-based sorting. I asked if there are any others.
> It seems not.

1. You asked for a specific kind of use case. Antoon gave you a use case, you told him that wasn't the kind of use case you were asking for, then you turn around and say "I guess there are no use cases" (without mentioning the qualification).

2. I posted two use cases in this thread that fit your criteria, and you followed up to that subthread so you most likely read them. Here they are again so you won't overlook them this time:

"You are given an obscure string collating function implemented in a C library you don't have the source to." (Fits your criterion "can't be done with key=".)

"I'm sitting at an interactive session and I have a convenient cmp function but no convenient key, and I care more about the four minutes it'd take to whip up a clever key function or an adapter class than the 0.2 seconds I'd save on sorting time." (Fits your criterion "performs really badly when done so".)

3. You evidently also overlooked the use-case example posted on Python-dev that you followed up to.

Call me crazy, but you seem to be overlooking a lot of things in your zeal to prove your point.

Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
Re: dynamic assigments
On Mar 25, 5:29 am, Seldon wrote:
> I thought to refactor the code in a more declarative way, like
>
> assignment_list = (
>     ('var1', value1),
>     ('var2', value2),
>     .. ,
> )
>
> for (variable, value) in assignment_list:
>     locals()[variable] = func(arg=value, *args)

Someday we'll get through a thread like this without anyone mistakenly suggesting the use of locals() for this.

> My question is: what's possibly wrong with respect to this approach ?

I'll answer this question assuming you meant, "hypothetically, if it actually worked". The thing that's wrong with your "declarative way" is that it adds nothing except obscurity. Just do this:

var1 = value1
var2 = value2

What you're trying to do is akin to writing poetry, or a sociological research paper. The emphasis in that kind of writing is not on clear communication of ideas, but on evoking some emotion with the form of the words (almost always at the expense of clear communication). Same thing with your "declarative way". It adds nothing to the code apart from a feeling of formalism. It doesn't save you any work: you still have to type out all the variables and values. It doesn't save you from repeating yourself. It doesn't minimize the possibility of typos or errors; quite the opposite. It DOES make your code a lot harder to read. So stick with regular assignments.

"But wait," you say, "what if I don't know the variable names?" Well, if you don't know the variable names, how can you write a function that uses those names as local variables? "Er, well I can access them with locals() still." You should be using a dictionary, then. I have found that whenever I thought I wanted to dynamically assign local variables, it turned out I also wanted to access them dynamically, too. Therefore, I would say that any urge to do this should always be treated as a red flag that you should be using a dictionary.
"Ok, but say I do know what the variables are, but for some reason I'm being passed a huge list of these key,value pairs, and my code consists of lots and lots of formulas and with lots of these variables, so it'd be unwieldy to access them through a dictionary or as object attributes, not to mention a lot slower." Ah, now we're getting somewhere. This is the main use case for dynamically binding local variables in Python, IMO. You're getting a big list of variables via some dynamic mechanism, you know what the variables are, and you want to operate on them as locals, but you also want to avoid boilerplate of binding all of them explicitly. Not a common use case, but it happens. (I've faced it several times, but the things I work on make it more common for me. I bit the bullet and wrote out the boilerplate.) Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
Re: Guido rethinking removal of cmp from sort method
On Mar 24, 5:37 pm, "Martin v. Loewis" wrote:
> > The cmp argument doesn't depend in any way on an object's __cmp__
> > method, so getting rid of __cmp__ wasn't any good reason to also get
> > rid of the cmp argument
>
> So what do you think about the cmp() builtin? Should have stayed,
> or was it ok to remove it?

Since it's trivial to implement by hand, there's no point for it to be a builtin. There wasn't any point before rich comparisons, either. I'd vote not merely ok to remove, but probably a slight improvement. It's probably the least justified builtin other than pow.

> If it should have stayed: how should its implementation have looked like?

Here is how cmp is documented: "The return value is negative if x < y, zero if x == y and strictly positive if x > y."

So if it were retained as a builtin, the above documentation suggests the following implementation:

def cmp(x, y):
    if x < y: return -1
    if x == y: return 0
    if x > y: return 1
    raise ValueError('arguments to cmp are not well-ordered')

(Another, maybe better, option would be to implement it so as to have the same expectations as list.sort, which I believe only requires __lt__.)

> If it was ok to remove it: how are people supposed to fill out the cmp=
> argument in cases where they use the cmp() builtin in 2.x?

Since it's trivial to implement, they can just write their own cmp function, and as an added bonus they can work around any peculiarities with an incomplete comparison set.

Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
Re: Guido rethinking removal of cmp from sort method
On Mar 23, 1:38 pm, Paul Rubin wrote:
> Carl Banks writes:
> > It's kind of ridiculous to claim that cmp adds much complexity (it's
> > maybe ten lines of extra C code), so the only reason not to include it
> > is that it's much slower than using key.
>
> Well, I thought it was also to get rid of 3-way cmp in general, in favor
> of rich comparison.

Supporting both __cmp__ and rich comparison methods of a class does add a lot of complexity. The cmp argument of sort doesn't.

The cmp argument doesn't depend in any way on an object's __cmp__ method, so getting rid of __cmp__ wasn't any good reason to also get rid of the cmp argument; their only relationship is that they're spelled the same. Nor is there any reason why cmp being a useful argument of sort should indicate that __cmp__ should be retained in classes.

Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
Re: Guido rethinking removal of cmp from sort method
On Mar 23, 10:51 am, Stefan Behnel wrote:
> Carl Banks, 23.03.2011 18:23:
> > On Mar 23, 6:59 am, Stefan Behnel wrote:
> >> Antoon Pardon, 23.03.2011 14:53:
> >>> On Sun, Mar 13, 2011 at 12:59:55PM +, Steven D'Aprano wrote:
> >>>> The removal of cmp from the sort method of lists is probably the most
> >>>> disliked change in Python 3. On the python-dev mailing list at the
> >>>> moment, Guido is considering whether or not it was a mistake.
> >>>>
> >>>> If anyone has any use-cases for sorting with a comparison function that
> >>>> either can't be written using a key function, or that perform really
> >>>> badly when done so, this would be a good time to speak up.
> >>>
> >>> How about a list of tuples where you want them sorted first item in
> >>> ascending order and second item in descending order.
> >>
> >> You can use a stable sort in two steps for that.
> >
> > How about this one: you are given an obscure string collating
> > function implemented in a C library you don't have the source to.
> >
> > Or how about this: I'm sitting at an interactive session and I have a
> > convenient cmp function but no convenient key, and I care more about
> > the four minutes it'd take to whip up a clever key function or an
> > adapter class than the 0.2 seconds I'd save on sorting time.
>
> As usual with Python, it's just an import away:
>
> http://docs.python.org/library/functools.html#functools.cmp_to_key
>
> I think this is a rare enough use case to merit an import rather than being
> a language feature.

The original question posted here was, "Is there a use case for cmp?" There is, and your excuse-making doesn't change the fact. It's the most natural way to sort sometimes; that's a use case. We already knew it could be worked around.

It's kind of ridiculous to claim that cmp adds much complexity (it's maybe ten lines of extra C code), so the only reason not to include it is that it's much slower than using key.
Not including it for that reason would be akin to the special-casing of sum to prevent strings from being concatenated, although omitting cmp would not be as drastic since it's not a special case. Do we omit something that's useful but potentially slow? I say no. Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
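For reference, the functools.cmp_to_key workaround being debated above, using a made-up stand-in for the opaque C-library collating function from the earlier example:

```python
from functools import cmp_to_key

def obscure_collate(a, b):
    # Stand-in for the opaque comparison function: order by
    # length first, then alphabetically. Returns <0, 0, or >0
    # in the classic three-way cmp style.
    if len(a) != len(b):
        return len(a) - len(b)
    return (a > b) - (a < b)

words = ['pear', 'fig', 'apple', 'date']

# cmp_to_key wraps the three-way comparator in an adapter
# class so it can be passed as a key function.
words.sort(key=cmp_to_key(obscure_collate))
print(words)  # ['fig', 'date', 'pear', 'apple']
```

This is essentially the "adapter class" Carl mentions whipping up by hand, packaged in the standard library.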
Re: Guido rethinking removal of cmp from sort method
On Mar 23, 6:59 am, Stefan Behnel wrote:
> Antoon Pardon, 23.03.2011 14:53:
> > On Sun, Mar 13, 2011 at 12:59:55PM +, Steven D'Aprano wrote:
> >> The removal of cmp from the sort method of lists is probably the most
> >> disliked change in Python 3. On the python-dev mailing list at the
> >> moment, Guido is considering whether or not it was a mistake.
> >>
> >> If anyone has any use-cases for sorting with a comparison function that
> >> either can't be written using a key function, or that perform really
> >> badly when done so, this would be a good time to speak up.
> >
> > How about a list of tuples where you want them sorted first item in
> > ascending order and second item in descending order.
>
> You can use a stable sort in two steps for that.

How about this one: you are given an obscure string collating function implemented in a C library you don't have the source to.

Or how about this: I'm sitting at an interactive session and I have a convenient cmp function but no convenient key, and I care more about the four minutes it'd take to whip up a clever key function or an adapter class than the 0.2 seconds I'd save on sorting time.

Removing cmp from sort was a mistake; it's the most straightforward and natural way to sort in many cases. Reason enough for me to keep it.

Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
Re: Abend with cls.__repr__ = cls.__str__ on Windows.
On Mar 18, 5:31 pm, J Peyret wrote: > If I ever specifically work on an OSS project's codeline, I'll post > bug reports, but frankly that FF example is a complete turn-off to > contributing by reporting bugs. You probably shouldn't take it so personally if they don't agree with you. But it's ok, it's not unreasonable to call attention to (actual) bugs here. I was surprised, though, when several people confirmed but no one reported it, especially since it was a crash, which is quite a rare thing to find. (You should feel proud.) Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
Re: Abend with cls.__repr__ = cls.__str__ on Windows.
On Mar 18, 2:18 am, Duncan Booth wrote:
> Terry Reedy wrote:
> > On 3/17/2011 10:00 PM, Terry Reedy wrote:
> >> On 3/17/2011 8:24 PM, J Peyret wrote:
> >>> This gives a particularly nasty abend in Windows - "Python.exe has
> >>> stopped working", rather than a regular exception stack error. I've
> >>> fixed it, after I figured out the cause, which took a while, but maybe
> >>> someone will benefit from this.
> >>>
> >>> Python 2.6.5 on Windows 7.
> >>>
> >>> class Foo(object):
> >>>     pass
> >>>
> >>> Foo.__repr__ = Foo.__str__ # this will cause an abend.
> >>
> >> 2.7.1 and 3.2.0 on winxp, no problem, interactive interpreter or IDLE
> >> shell. Upgrade?
> >
> > To be clear, the above, with added indent, but with extra fluff (fixes)
> > removed, is exactly what I ran. If you got error with anything else,
> > please say so. Described behavior for legal code is a bug. However,
> > unless a security issue, it would not be fixed for 2.6.
>
> On Windows, I can replicate this with Python 2.7, Python 3.1.2, and
> Python 3.2. Here's the exact script (I had to change the print to be
> compatible with Python 3.2):
>
> bug.py --
> class Foo(object):
>     pass
>     #def __str__(self):    # if you have this defined, no abend
>     #    return "a Foo"
>
> Foo.__repr__ = Foo.__str__ # this will cause an abend.
> #Foo.__str__ = Foo.__repr__ # do this instead, no abend
>
> foo = Foo()
> print(str(foo))
> --
>
> for Python 3.2 the command:
> C:\Temp>c:\python32\python bug.py
>
> generates a popup:
>
> python.exe - Application Error
> The exception unknown software exception (0xcfd) occurred in the
> application at location 0x1e08a325.
>
> Click on OK to terminate the program
> Click on CANCEL to debug the program
>
> So it looks to me to be a current bug.

Multiple people reproduce a Python hang/crash yet it looks like no one bothered to submit a bug report. I observed the same behavior (2.6 and 3.2 on Linux, hangs) and went ahead and submitted a bug report.
Carl Banks -- http://mail.python.org/mailman/listinfo/python-list
Re: having both dynamic and static variables
On Mar 5, 7:46 pm, Corey Richardson wrote:
> On 03/05/2011 10:23 PM, MRAB wrote:
> > Having a fixed binding could be useful elsewhere, for example, with
> > function definitions:
> > [..]
> > fixed PI = 3.1415926535897932384626433832795028841971693993751
> >
> > fixed def squared(x):
> >     return x * x
>
> This question spawns from my ignorance: When would a function's
> definition change? What is the difference between a dynamic function and
> a fixed function?

There's a bit of ambiguity here. We have to differentiate between "fixed binding" (which is what John Nagle and MRAB were talking about) and "immutable object" (which, apparently, is how you took it). I don't like speaking of "constants" in Python because it's not always clear which is meant, and IMO it's not a constant unless it's both.

An immutable object like a number or tuple can't be modified, but the name referring to it can be rebound to a different object.

a = (1,2,3)
a.append(4)   # illegal, can't modify a tuple
a = (1,2,3,4) # but this is legal, can set a to a new tuple

If a hypothetical fixed binding were added to Python, you wouldn't be able to rebind a after it was set:

fixed a = (1,2,3)
a = (1,2,3,4) # now illegal

If you could define functions with fixed bindings like this, then a compiler that's a lot smarter than CPython's would be able to inline functions for potentially big speed increases. It can't do that now because the name of the function can always be rebound to something else.

BTW, a function object is definitely mutable.

def squared(x):
    return x*x

squared.foo = 'bar'

Carl Banks -- http://mail.python.org/mailman/listinfo/python-list