[ANN] Python 2.5.5 Release Candidate 2.
On behalf of the Python development team and the Python community, I'm happy to announce release candidate 2 of Python 2.5.5.

This is a source-only release that only includes security fixes. The last full bug-fix release of Python 2.5 was Python 2.5.4. Users are encouraged to upgrade to the latest release of Python 2.6 (which is 2.6.4 at this point).

This release fixes issues with the logging and tarfile modules, and with thread-local variables. Since release candidate 1, additional bugs have been fixed in the expat module. See the detailed release notes at the website (also available as Misc/NEWS in the source distribution) for details of the bugs fixed.

For more information on Python 2.5.5, including download links for various platforms, release notes, and known issues, please see:

    http://www.python.org/2.5.5

Highlights of the previous major Python releases are available from the Python 2.5 page, at

    http://www.python.org/2.5/highlights.html

Enjoy this release,
Martin

Martin v. Loewis
mar...@v.loewis.de
Python Release Manager
(on behalf of the entire python-dev team)

--
http://mail.python.org/mailman/listinfo/python-announce-list
Support the Python Software Foundation: http://www.python.org/psf/donations/
[ANN] Pyspread 0.0.14 released
Pyspread 0.0.14 released

I am pleased to announce the new release 0.0.14 of pyspread.

About:
------
Pyspread is a cross-platform Python spreadsheet application. It is based on and written in the programming language Python. Instead of spreadsheet formulas, Python expressions are entered into the spreadsheet cells. Each expression returns a Python object that can be accessed from other cells. These objects can represent anything, including lists or matrices.

Pyspread runs on Linux and *nix platforms with GTK support, as well as on Windows (XP and Vista tested). On Mac OS X, some icons are too small but the application basically works.

Homepage: http://pyspread.sourceforge.net

New features:
* Sparse grid introduced. It supports up to 80 000 000 rows.
* Safe mode for inspecting foreign files
* GPG signatures for own files (requires PyMe)
* Help framework
* New About dialog
* Insertion of cell access code via Ctrl + Insert
* Simplified Macro editor dialog
* Improved cycle detection algorithm for the grid

Bug fixes:
* Globals now update immediately
* Colors now work when loading pys files
* wxPython v2.8 is now preferred
* Error handling bug removed
* Help files are now found after changing the path
* Font rendering on Windows fixed

Known issues:
* Grid updates cause flicker on Windows (BUG 2938160)

Enjoy
Martin

--
http://mail.python.org/mailman/listinfo/python-announce-list
Support the Python Software Foundation: http://www.python.org/psf/donations/
Cape Town Python Users Group meeting - 30/01/2010
The next Cape Town Python Users Group meeting will be on Saturday, 30th January, starting from around 14:00, in the sudo room at the Bandwidth Barn. See http://python.org.za/pugs/cape-town/MeetingTwentyFour for details.

--
Neil Muller

--
http://mail.python.org/mailman/listinfo/python-announce-list
Support the Python Software Foundation: http://www.python.org/psf/donations/
ANN: Urwid 0.9.9.1 - Console UI Library
Announcing Urwid 0.9.9.1

Urwid home page:
  http://excess.org/urwid/
Screen shots:
  http://excess.org/urwid/examples.html
Tarball:
  http://excess.org/urwid/urwid-0.9.9.1.tar.gz

About this release:
===================
This maintenance release fixes a number of bugs, including a backwards incompatibility introduced in the last release and a poor ListBox snapping behaviour.

New in this release:
* Fix for ListBox snapping to selectable widgets taller than the ListBox itself
* raw_display switching to alternate buffer now works properly with Terminal.app
* Fix for BoxAdapter backwards incompatibility introduced in 0.9.9
* Fix for a doctest failure under powerpc
* Fix for systems with gpm_mev installed but not running gpm

About Urwid:
============
Urwid is a console UI library for Python. It features fluid interface resizing, UTF-8 support, multiple text layouts, simple attribute markup, powerful scrolling list boxes and flexible interface design. Urwid is released under the GNU LGPL.

--
http://mail.python.org/mailman/listinfo/python-announce-list
Support the Python Software Foundation: http://www.python.org/psf/donations/
TypeError not caught by except statement
Hi,

The except clause is not catching the TypeError that occurs in the code below. log.info("refer", ret) in the try block throws a TypeError which is not caught. Also, sometimes the process hangs.

import logging

log = logging.getLogger()
fileName = strftime("%d-%b-%Y-", gmtime()) + str(int(time.time())) + "-Log.log"
log = logging.getLogger()
log.setLevel(logging.NOTSET)
fh = logging.FileHandler(logFile)
logFileLevel = logging.DEBUG
fh.setLevel(logFileLevel)
format_string = '%(process)d %(thread)d %(asctime)-15s %(levelname)-5s at %(filename)-15s in %(funcName)-10s at line %(lineno)-3d %(message)s'
fh.setFormatter(logging.Formatter(format_string))
log.addHandler(fh)

try:
    log.info("start")
    log.info("refer", ret)
    log.info("end")
except TypeError:
    log.exception("Exception raised")

OUTPUT message:

Traceback (most recent call last):
  File "C:\Python26\lib\logging\__init__.py", line 768, in emit
    msg = self.format(record)
  File "C:\Python26\lib\logging\__init__.py", line 648, in format
    return fmt.format(record)
  File "C:\Python26\lib\logging\__init__.py", line 436, in format
    record.message = record.getMessage()
  File "C:\Python26\lib\logging\__init__.py", line 306, in getMessage
    msg = msg % self.args
TypeError: not all arguments converted during string formatting

--
http://mail.python.org/mailman/listinfo/python-list
Is defaultdict thread safe?
Hi all,

Is defaultdict thread safe?

Assume I have:

    from collections import defaultdict
    my_dict = defaultdict(list)

If two threads call my_dict['abc'].append(...) simultaneously, is it guaranteed that my_dict['abc'] will end up containing two elements?

Thanks

Frank Millman

--
http://mail.python.org/mailman/listinfo/python-list
Re: TypeError not caught by except statement
* siddu:
> The except clause is not catching the TypeError that occurs in the code
> below. log.info("refer", ret) in the try block throws a TypeError which
> is not caught. Also, sometimes the process hangs.
>
> import logging
> log = logging.getLogger()
> fileName = strftime("%d-%b-%Y-", gmtime()) + str(int(time.time())) + "-Log.log"
> log = logging.getLogger()
> log.setLevel(logging.NOTSET)
> fh = logging.FileHandler(logFile)

Where does 'logFile' come from?

> logFileLevel = logging.DEBUG
> fh.setLevel(logFileLevel)
> format_string = '%(process)d %(thread)d %(asctime)-15s %(levelname)-5s at %(filename)-15s in %(funcName)-10s at line %(lineno)-3d %(message)s'
> fh.setFormatter(logging.Formatter(format_string))
> log.addHandler(fh)
>
> try:
>     log.info("start")
>     log.info("refer", ret)
>     log.info("end")
> except TypeError:
>     log.exception("Exception raised")

Try to reduce the example to a small, complete program that exhibits the problem.

> OUTPUT message:
>
> Traceback (most recent call last):
>   File "C:\Python26\lib\logging\__init__.py", line 768, in emit
>     msg = self.format(record)
>   File "C:\Python26\lib\logging\__init__.py", line 648, in format
>     return fmt.format(record)
>   File "C:\Python26\lib\logging\__init__.py", line 436, in format
>     record.message = record.getMessage()
>   File "C:\Python26\lib\logging\__init__.py", line 306, in getMessage
>     msg = msg % self.args
> TypeError: not all arguments converted during string formatting

Is this a complete listing?

Cheers & hth.,

- Alf

--
http://mail.python.org/mailman/listinfo/python-list
Re: Sikuli: the coolest Python project I have yet seen...
On Jan 24, 7:18 pm, Ron ursusmaxi...@gmail.com wrote:
> Sikuli is the coolest Python project I have ever seen in my ten year
> hobbyist career. An MIT open source project, Sikuli uses Python to
> automate GUI tasks (in any GUI or GUI based app that runs on the JVM)
> by simply dragging and dropping GUI elements into Python scripts as
> function arguments.
>
> Download at http://sikuli.csail.mit.edu/
> I also did this podcast about Sikuli:
> http://media.libsyn.com/media/awaretek/Python411_20100124_Sikuli.mp3

Nice, thanks for the link. Very happy to see people using Python for cool stuff like this, and also to see the alternate implementations in use. And it couldn't come at a better time for me, as I am trying to figure out how to automate some GUI-only program I am forced to use at work.

Carl Banks

--
http://mail.python.org/mailman/listinfo/python-list
Re: Is defaultdict thread safe?
On Jan 25, 12:59 am, Frank Millman fr...@chagford.com wrote:
> Is defaultdict thread safe?

Sometimes. It depends on whether an operation has callbacks to pure Python.

> Assume I have:
>
>     from collections import defaultdict
>     my_dict = defaultdict(list)
>
> If two threads call my_dict['abc'].append(...) simultaneously, is it
> guaranteed that my_dict['abc'] will end up containing two elements?

Yes. But if the constructor is a user-defined class, pure Python code runs for the instantiation and all bets are off:

    class A:
        def __init__(self):
            ...

    my_dict = defaultdict(A)   # not thread-safe

Raymond

--
http://mail.python.org/mailman/listinfo/python-list
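For readers skimming the thread, the safe case can be sketched as follows. This is a CPython-specific demonstration: it relies on the GIL making the C-level default factory call and list.append atomic, exactly as Raymond describes (the key name and counts are illustrative):

```python
import threading
from collections import defaultdict

my_dict = defaultdict(list)

def worker():
    # list is a C type, so the default-factory call and append()
    # each complete atomically under CPython's GIL
    for _ in range(1000):
        my_dict['abc'].append(1)

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(len(my_dict['abc']))  # 2000 -- no appends lost
```

With defaultdict(A) for a pure-Python class A, the same loop could lose or duplicate work, because thread switches can happen inside A.__init__.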
Re: Sikuli: the coolest Python project I have yet seen...
2010/1/25 Ron ursusmaxi...@gmail.com:
> Sikuli is the coolest Python project I have ever seen in my ten year
> hobbyist career. An MIT open source project, Sikuli uses Python to
> automate GUI tasks (in any GUI or GUI based app that runs on the JVM)
> by simply dragging and dropping GUI elements into Python scripts as
> function arguments.
>
> Download at http://sikuli.csail.mit.edu/
> I also did this podcast about Sikuli:
> http://media.libsyn.com/media/awaretek/Python411_20100124_Sikuli.mp3

It looks really nice, but the screenshot-taking did not work on my computer (Win7). An innovative yet simple idea, this mix of visuals and code.

--
http://olofb.wordpress.com

--
http://mail.python.org/mailman/listinfo/python-list
Re: Is defaultdict thread safe?
On Jan 25, 11:26 am, Raymond Hettinger pyt...@rcn.com wrote:
> On Jan 25, 12:59 am, Frank Millman fr...@chagford.com wrote:
> > Is defaultdict thread safe?
>
> Sometimes. It depends on whether an operation has callbacks to pure
> Python.
> [...]
> Yes. But if the constructor is a user-defined class, pure Python code
> runs for the instantiation and all bets are off.

Many thanks

Frank

--
http://mail.python.org/mailman/listinfo/python-list
Re: TypeError not caught by except statement
On 2010-1-25 16:35, siddu wrote:
> The except clause is not catching the TypeError that occurs in the code
> below. log.info("refer", ret) in the try block throws a TypeError which
> is not caught. Also, sometimes the process hangs.
>
> [setup code snipped]
>
> try:
>     log.info("start")
>     log.info("refer", ret)

This line seems to cause the exception, and it is handled inside log.info(), which prints that traceback info. Usually log.info(msg, args) raises the same exceptions as print(msg % args).

>     log.info("end")
> except TypeError:
>     log.exception("Exception raised")
>
> [traceback snipped]

--
http://mail.python.org/mailman/listinfo/python-list
Re: TypeError not caught by except statement
On Jan 25, 12:35 am, siddu siddhartha.veedal...@gmail.com wrote:
> The except clause is not catching the TypeError that occurs in the code
> below. log.info("refer", ret) in the try block throws a TypeError which
> is not caught. Also, sometimes the process hangs.
>
> [code and traceback snipped]

The logging module is swallowing the exception (and printing a traceback), I guess because logging is considered something that shouldn't bring your program down on error. You can apparently define a logging handler that overrides handleError to propagate the exception if you want.

Can't tell you why it's hanging, but the logging error you're getting is probably because your string formatter is trying to perform the following operation:

    "refer" % (ret,)

Carl Banks

--
http://mail.python.org/mailman/listinfo/python-list
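To make the failure mode concrete, here is a minimal sketch (logger name and values are illustrative, not from the original post) contrasting the broken call with the intended lazy-formatting call:

```python
import logging

logging.basicConfig()  # attach a default handler so records are emitted
log = logging.getLogger("demo")
log.setLevel(logging.INFO)

ret = 42

# Broken: the handler eventually computes "refer" % (42,), which raises
# "TypeError: not all arguments converted during string formatting";
# logging prints that traceback to stderr instead of propagating it,
# so the surrounding except TypeError never fires.
# log.info("refer", ret)

# Correct: one placeholder per extra argument; logging formats lazily,
# i.e. "refer %s" % (42,) runs only if the record is actually emitted.
log.info("refer %s", ret)
```

To have such errors propagate instead, subclass a handler and override its handleError method, as Carl suggests.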
reading from pipe
Hello,

Is there any solution to catch if a pipe has closed? Maybe the signal module?

For simulation:

    #!/usr/bin/env python
    # -*- coding: utf-8 -*-
    import sys

    while True:
        line = sys.stdin.readline()
        sys.stdout.write(line)
        sys.stdout.flush()

Run as:

    time cat /tmp/proxy.test | test.py

Solution:

    if line == "":
        break

Kind Regards,
Richi

--
http://mail.python.org/mailman/listinfo/python-list
Re: Is defaultdict thread safe?
Frank Millman, 25.01.2010 09:59:
> Is defaultdict thread safe?
>
> Assume I have:
>
>     from collections import defaultdict
>     my_dict = defaultdict(list)
>
> If two threads call my_dict['abc'].append(...) simultaneously, is it
> guaranteed that my_dict['abc'] will end up containing two elements?

Thread-safety is implementation specific. Runtime environments other than CPython (e.g. Jython, IronPython, PyPy) may or may not provide any guarantees on thread safety here.

Stefan

--
http://mail.python.org/mailman/listinfo/python-list
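Given that caveat, an explicit lock makes the code's correctness independent of the interpreter. A small sketch (names are illustrative, not from the thread):

```python
import threading
from collections import defaultdict

_lock = threading.Lock()
shared = defaultdict(list)

def safe_append(key, value):
    # Correct on any Python implementation, with or without a GIL,
    # at the cost of serializing access to the dict
    with _lock:
        shared[key].append(value)

safe_append('abc', 1)
safe_append('abc', 2)
```

The lock also covers the case of a pure-Python default factory, which Raymond's answer flags as unsafe even on CPython.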
Re: reading from pipe
Hi,

Richard Lamboj wrote:
> is there any solution to catch if a pipe has closed? Maybe the signal
> module?

Since sys.stdin is a file object, you can use sys.stdin.closed to check whether it has been closed.

Lutz

--
http://mail.python.org/mailman/listinfo/python-list
WxPython upgrade trouble on Ubuntu 8.04
Hello everyone,

You know the old saying: in for a penny, in for a pound. Several hours ago I posted this...

http://groups.google.com/group/comp.lang.python/msg/0a86e792c674adc8

...in which I described my desire to acquire Python 2.6 without upgrading my Ubuntu Linux installation from 8.04. Since Python 2.6 is not part of the Ubuntu 8.04 repository, I installed Python 2.6 manually. Things are a little messy right now, since invoking IDLE from the desktop still defaults to Python 2.5. But I'm getting there. At least I can access Python 2.6 from the command prompt, and SciTE also finds 2.6.

Of course, I also have to reinstall all of the Python modules that I use on 2.6. I have succeeded with numpy and biopython, both installed manually. WxPython is giving me trouble, though. I visited wxpython.org and noted that wx version 2.8.10.1 fixes some important bugs that can occur when using Python 2.6. The instructions here...

http://wiki.wxpython.org/InstallingOnUbuntuOrDebian

...allowed me to add the wxWidgets apt repository to the list of web sites that my Synaptic Package Manager checks for updates. It found wx 2.8.10.1 and auto-installed it. I got error messages from Python 2.5 until I also installed python-wxtools, wx2.8-i18n, libwxgtk2.8-dev, and libgtk2.0-dev as recommended on the web page mentioned above. Now "import wx" worked again, with no error messages, from Python 2.5.

Installing wx on Python 2.6 is my ultimate goal, and here's where I'm stuck. I actually tried "import wx" from Python 2.6 first, and I got this result:

    >>> import wx
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "wx/__init__.py", line 45, in <module>
        from wx._core import *
      File "wx/_core.py", line 4, in <module>
        import _core_
    ImportError: No module named _core_

I went back to Synaptic, said that I wanted to reinstall wx 2.8.10.1 -- but not really, as I then selected "download package files only". I received the tarball.

The top-level folder has the wx binary stuff, which I believe is already installed. I selected the wxPython directory at the second level. Tried "python setup.py build", and after several minutes and dozens of warning messages, I finally got this fatal error:

    In file included from /usr/include/wx-2.8/wx/glcanvas.h:54,
                     from contrib/glcanvas/gtk/glcanvas_wrap.cpp:2661:
    /usr/include/wx-2.8/wx/gtk/glcanvas.h: At global scope:
    /usr/include/wx-2.8/wx/gtk/glcanvas.h:47: error: ‘GLXContext’ does not name a type
    /usr/include/wx-2.8/wx/gtk/glcanvas.h:124: error: ISO C++ forbids declaration of ‘GLXFBConfig’ with no type
    /usr/include/wx-2.8/wx/gtk/glcanvas.h:124: error: expected ‘;’ before ‘*’ token
    error: command 'gcc' failed with exit status 1

Worse -- NOW if I start Python 2.5, I get the same error message I got from 2.6. Now I'm stuck again. Any advice?

Many thanks!

--
http://mail.python.org/mailman/listinfo/python-list
Re: Broken Python 2.6 installation on Ubuntu Linux 8.04
On Jan 24, 3:52 pm, Christian Heimes li...@cheimes.de wrote:
> By the way, you mustn't install your own Python with "make install" --
> use "make altinstall"! Your /usr/local/bin/python binary masks the
> original python command in /usr/bin. You should remove all
> /usr/local/bin/py* binaries that do not end with 2.6. Otherwise you may
> and will break existing programs on your system.
>
> Christian

Hello Christian,

In my earlier response to Benjamin, I thought I was going to solve this problem quickly. Maybe not! I know for a fact that my Linux printer management program, the HPLIP toolbox, uses wxPython. And now HPLIP won't start!

However, my /usr/local/bin ONLY contains references to Python 2.6. So I think this is a problem with me installing wx... see this other post...

http://groups.google.com/group/comp.soft-sys.wxwindows/msg/f33e245eb0956067

Sigh. All this, just so I could use some itertools functions.

--
http://mail.python.org/mailman/listinfo/python-list
Re: list.pop(0) vs. collections.dequeue
On Sun, 24 Jan 2010 11:28:53 -0800, Aahz wrote:
> Again, your responsibility is to provide a patch and a spectrum of
> benchmarking tests to prove it. Then you would still have to deal with
> the objection that extensions use the list internals -- that might be an
> okay sell given the effort otherwise required to port extensions to
> Python 3, but that's not the way to bet.

IMO, code accessing the list internals should be considered broken. The macros (PyList_GET_ITEM, etc.) are there for a reason. We can't just freeze every internal characteristic of the interpreter just because someone might be messing around with it in unrecommended ways.

--
http://mail.python.org/mailman/listinfo/python-list
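For anyone landing on this thread from the subject line, the user-level alternative under discussion is straightforward (an illustrative snippet, not from the post):

```python
from collections import deque

q = deque([1, 2, 3])
q.append(4)          # O(1) at the right end, same as list
first = q.popleft()  # O(1) at the left end; list.pop(0) is O(n)
                     # because it shifts every remaining element
```

The thread is about whether list itself could gain an O(1) pop(0) without breaking C extensions that poke at the list struct directly instead of going through the PyList_GET_ITEM-style macros.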
modbus
Hi all,

I started using pymodbus. I am trying to get pointers as to how communication between devices can be achieved through Modbus. Suggestions on simulators or master/slave example code would be of great help.

Regards

--
http://mail.python.org/mailman/listinfo/python-list
Re: Sikuli: the coolest Python project I have yet seen...
Ron wrote:
> Sikuli is the coolest Python project I have ever seen in my ten year
> hobbyist career. An MIT open source project, Sikuli uses Python to
> automate GUI tasks (in any GUI or GUI based app that runs on the JVM)
> by simply dragging and dropping GUI elements into Python scripts as
> function arguments.
>
> Download at http://sikuli.csail.mit.edu/
> I also did this podcast about Sikuli:
> http://media.libsyn.com/media/awaretek/Python411_20100124_Sikuli.mp3

It looks like your web site is down.

JM

--
http://mail.python.org/mailman/listinfo/python-list
ANN: Urwid 0.9.9.1 - Console UI Library
Announcing Urwid 0.9.9.1

Urwid home page:
  http://excess.org/urwid/
Screen shots:
  http://excess.org/urwid/examples.html
Tarball:
  http://excess.org/urwid/urwid-0.9.9.1.tar.gz

About this release:
===================
This maintenance release fixes a number of bugs, including a backwards incompatibility introduced in the last release and a poor ListBox snapping behaviour.

New in this release:
* Fix for ListBox snapping to selectable widgets taller than the ListBox itself
* raw_display switching to alternate buffer now works properly with Terminal.app
* Fix for BoxAdapter backwards incompatibility introduced in 0.9.9
* Fix for a doctest failure under powerpc
* Fix for systems with gpm_mev installed but not running gpm

About Urwid:
============
Urwid is a console UI library for Python. It features fluid interface resizing, UTF-8 support, multiple text layouts, simple attribute markup, powerful scrolling list boxes and flexible interface design. Urwid is released under the GNU LGPL.

--
http://mail.python.org/mailman/listinfo/python-list
Re: Sikuli: the coolest Python project I have yet seen...
On 25/01/2010 12:27, Jean-Michel Pichavant wrote:
> Ron wrote:
> > Sikuli is the coolest Python project I have ever seen in my ten year
> > hobbyist career. [...]
>
> It looks like your web site is down.
>
> JM

There's a YouTube video here: http://www.youtube.com/watch?v=FxDOlhysFcM

--
Robin Becker

--
http://mail.python.org/mailman/listinfo/python-list
Re: Is python not good enough?
In article hij24v$e7...@panix5.panix.com, Aahz a...@pythoncraft.com wrote:
> In article 1b42700d-139a-4653-8669-d4ee2fc48...@r5g2000yqb.googlegroups.com,
> ikuta liu ikut...@gmail.com wrote:
> > Is Python not good enough? For Google, is enhancing Python's
> > performance a better way than building the Go language?
>
> It is not at all clear that -- despite some comments to the contrary --
> the Go developers are intending to compete with Python. Go seems much
> more intended to compete with C++/Java. If they're successful, we may
> eventually see GoPython. ;-)

As far as I can tell, Go was not intended to compete with anything. It was their own itch they scratched. Then they opened it to the world, which I applaud.

If Go was meant to compete with anything, they would have given it a name that was Googleable. ;-)

> --
> Aahz (a...@pythoncraft.com) * http://www.pythoncraft.com/

Groetjes Albert

--
Albert van der Horst, UTRECHT, THE NETHERLANDS
Economic growth -- being exponential -- ultimately falters.
alb...@spearc.xs4all.nl http://home.hccnet.nl/a.w.m.van.der.horst

--
http://mail.python.org/mailman/listinfo/python-list
Re: Is python not good enough?
2010/1/25 Albert van der Horst alb...@spenarnc.xs4all.nl: If Go was to compete with anything, they would have give it a name that was Googleable. ;-) If they want it Googleable, it will be. ;-) -- Cheers, Simon B. -- http://mail.python.org/mailman/listinfo/python-list
Re: Sikuli: the coolest Python project I have yet seen...
On 25-Jan-2010 04:18, Ron wrote: Sikuli is the coolest Python project I have ever seen in my ten year hobbyist career. An MIT oepn source project, Sikuli uses Python to automate GUI tasks (in any GUI or GUI baed app that runs the JVM) by simply drag and dropping GUI elements into Python scripts as function arguments. Download at http://sikuli.csail.mit.edu/ I also did this This link is broken! --V -- http://mail.python.org/mailman/listinfo/python-list
Re: ANN: Urwid 0.9.9.1 - Console UI Library
Ian Ward wrote:
> Announcing Urwid 0.9.9.1
> [release announcement snipped]

I tried to use urwid to provide a friendly interface to some of our command line tools. It's pretty effective, but I had difficulties redirecting stdout/stdin to one of the widgets. In fact, I don't know how to do it properly, so it's currently broken. Most of our tools are written so they log to stdout and take input from stdin. Is there a widget you can use to act as stdin/stdout? Google didn't find any recipe for that; it would have been great. If it exists, any code sample would be much appreciated.

Jean-Michel

--
http://mail.python.org/mailman/listinfo/python-list
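I'm not aware of a stock urwid widget for this either (as of 0.9.9, at least), but the usual workaround is a small file-like adapter: anything with write() and flush() methods can stand in for sys.stdout, with write() forwarding text to whatever code updates the widget. A generic sketch of the pattern — the class name is mine, and the wiring from the callback into an actual urwid widget is left as an assumption:

```python
import sys

class CallbackWriter(object):
    """Minimal file-like object: forwards writes to a callback,
    e.g. a function that appends text to a UI widget."""
    def __init__(self, callback):
        self.callback = callback

    def write(self, text):
        if text:
            self.callback(text)

    def flush(self):
        pass  # nothing is buffered here

# Demonstration: capture print output instead of sending it to a terminal
captured = []
old_stdout = sys.stdout
sys.stdout = CallbackWriter(captured.append)
print("hello widget")
sys.stdout = old_stdout
```

In a real urwid application the callback would append to, say, a Text widget's content and ask the main loop to redraw; stdin is harder, since reads would have to block on widget input.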
Re: Sikuli: the coolest Python project I have yet seen...
Hello,

I think the site is under maintenance. I tried a couple of hours ago and it worked fine. As an alternative, I found that this link also worked: http://www.sikuli.org/ Unfortunately, it seems it's not working right now.

Best regards,
Javier

2010/1/25 Virgil Stokes v...@it.uu.se:
> On 25-Jan-2010 04:18, Ron wrote:
> > Sikuli is the coolest Python project I have ever seen in my ten year
> > hobbyist career. [...]
>
> This link is broken!
>
> --V

--
http://mail.python.org/mailman/listinfo/python-list
Re: WxPython upgrade trouble on Ubuntu 8.04
On Jan 25, 5:18 am, lada...@my-deja.com lada...@my-deja.com wrote: Hello everyone, You know the old saying, in for a penny, in for a pound. Several hours ago I posted this... http://groups.google.com/group/comp.lang.python/msg/0a86e792c674adc8 ...in which I described my desire to acquire Python 2.6 without upgrading my Ubuntu Linux installation from 8.04. Since Python 2.6 is not part of the Ubuntu 8.04 repository, I installed Python 2.6 manually. Things are a little messy right now, since invoking IDLE from the desktop still defaults to Python 2.5. But I'm getting there. At least I can access Python 2.6 from the command prompt, and SciTE also finds 2.6. Of course, I also have to reinstall all of the Python modules that I use on 2.6. I have succeeded with numpy and biopython, both installed manually. WxPython is giving me trouble, though. I visited wxpython.org and noted that wx version 2.8.10.1 fixes some important bugs that can occur when using Python 2.6. The instructions here... http://wiki.wxpython.org/InstallingOnUbuntuOrDebian ...allowed me to add the wxWidgets apt repository to the list of web sites that my Synaptic Package Manager checks for updates. It found wx 2.8.10.1 and auto-installed it. I got error messages from Python 2.5 until I also installed python-wxtools, wx2.8-i18n, libwxgtk2.8-dev, and libgtk2.0-dev as recommended on the web page mentioned above. Now import wx worked again, with no error messages, from Python 2.5. Installing wx on Python 2.6 is my ultimate goal, and here's where I'm stuck. I actually tried import wx from Python 2.6 first, and I got this result:

import wx
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "wx/__init__.py", line 45, in <module>
    from wx._core import *
  File "wx/_core.py", line 4, in <module>
    import _core_
ImportError: No module named _core_

I went back to Synaptic, said that I wanted to reinstall wx 2.8.10.1 -- but not really, as I then selected download package files only. I received the tarball.
The top-level folder has the wx binary stuff, which I believe is already installed. I selected the wxPython directory at the second level. Tried python setup.py build, and after several minutes and dozens of warning messages, I finally got this fatal error:

In file included from /usr/include/wx-2.8/wx/glcanvas.h:54,
                 from contrib/glcanvas/gtk/glcanvas_wrap.cpp:2661:
/usr/include/wx-2.8/wx/gtk/glcanvas.h: At global scope:
/usr/include/wx-2.8/wx/gtk/glcanvas.h:47: error: ‘GLXContext’ does not name a type
/usr/include/wx-2.8/wx/gtk/glcanvas.h:124: error: ISO C++ forbids declaration of ‘GLXFBConfig’ with no type
/usr/include/wx-2.8/wx/gtk/glcanvas.h:124: error: expected ‘;’ before ‘*’ token
error: command 'gcc' failed with exit status 1

Worse -- NOW if I start Python 2.5, I get the same error message I got from 2.6. Now I'm stuck again. Any advice? Many thanks! Try re-posting to the wxPython mailing list. Someone there will know what the next step is. --- Mike Driscoll Blog: http://blog.pythonlibrary.org PyCon 2010 Atlanta Feb 19-21 http://us.pycon.org/ -- http://mail.python.org/mailman/listinfo/python-list
Re: A simple-to-use sound file writer
In article hinfjn$8s...@speranza.aioe.org, Mel mwil...@the-wire.com wrote: Alf P. Steinbach wrote: * Steve Holden: It's not clear to me that you can approximate any waveform with a suitable combination of square waves, Oh. It's simple to prove. At least conceptually! :-) Consider first that you need an infinite number of sine waves to create a perfect square wave. The opposite also holds: an infinite number of square waves to create a perfect sine wave (in a way sines and squares are opposites, the most incompatible). No, it doesn't. The infinite set of sine waves that make a square wave leave out the sine waves of frequency 2f, 4f, 6f, 8f, ... (2*n*f) ... . Once you've left them out, you can never get them back. So sawtooth waves, for example, can't generally be built out of sets of square waves. Bullshit. My Boehm (Böhm) electronic organ does exactly that. They even have a chip for it. In the 70's it was a great hype, a sawtooth organ. Well, not exactly a hype: the sound, especially in the low registers, is dramatically better. If you're interested in frequencies above audible (organ builders aren't), you need an infinity of squares to build a perfect sawtooth. But then you need an infinity of sines to build a perfect square wave. SNIP Mel. Groetjes Albert -- -- Albert van der Horst, UTRECHT,THE NETHERLANDS Economic growth -- being exponential -- ultimately falters. alb...@spearc.xs4all.nl =n http://home.hccnet.nl/a.w.m.van.der.horst -- http://mail.python.org/mailman/listinfo/python-list
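The Fourier claim in the thread above ("you need an infinite number of sine waves to create a perfect square wave") is easy to check numerically. The sketch below (function name and term counts are my own, not from the thread) sums the odd sine harmonics of a unit square wave and shows the partial sums creeping toward +1 between the discontinuities as more harmonics are added:

```python
import math

def square_partial(t, n_terms):
    """Partial Fourier sum of a unit square wave:
    (4/pi) * sum_{k=1..n} sin((2k-1) t) / (2k-1)."""
    return (4 / math.pi) * sum(
        math.sin((2 * k - 1) * t) / (2 * k - 1)
        for k in range(1, n_terms + 1))

# More odd harmonics -> closer to +1 on the interval (0, pi):
for n in (1, 10, 100, 1000):
    print(n, square_partial(math.pi / 2, n))
```

No finite number of terms reproduces the jump exactly (the overshoot near the discontinuity is the Gibbs phenomenon), which is the sense in which "infinite" is meant above.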
Re: Sikuli: the coolest Python project I have yet seen...
The link at MIT does appear to be down right now, but I presume it will come back up. Well, those of you who find it underwhelming are in good company. See the blog post at Lambda the Ultimate http://lambda-the-ultimate.org/node/3783 I was impressed though by the application to notify you when your bus gets close to the pickup point, using Google maps, and by the app to automatically chart a course to Houston from LA on I-10, again using Google maps. And perhaps most of all, the app to notify you when your sleeping baby wakes up, from a picture on a digital camera. Hey, most of life is non-deterministic. I am in the analog engineering world, and simple, deterministic, black-and-white situations are all fine and useful, but I can see this very easy-to-use and simple technology being useful also ;-)) All of the above apps are but a few lines of code. Ron -- http://mail.python.org/mailman/listinfo/python-list
Re: WxPython upgrade trouble on Ubuntu 8.04
Hi Mike, Thanks, I forgot that wxPython-users is distinct from comp.soft-sys.wxwindows. I'll give it a try. -- http://mail.python.org/mailman/listinfo/python-list
Re: medians for degree measurements
On Jan 23, 1:09 am, Steve Howell showel...@yahoo.com wrote: [snip problem with angle data wrapping around at 360 degrees] Hi, This problem is trivial to solve if you can assume that your data points are measured consecutively and that your boat does not turn by more than 180 degrees between two samples, which seems a reasonable use case. If you cannot make this assumption, the answer seems pretty arbitrary to me anyhow. The standard trick in this situation is to 'unwrap' the data (fix >180 deg jumps by adding or subtracting 360 to subsequent points), do your thing and then 'rewrap' to your desired interval ([0-355] or [-180,179] degrees).

In [1]: from numpy import *

In [2]: def median_degree(degrees):
   ...:     return mod(rad2deg(median(unwrap(deg2rad(degrees)))), 360)
   ...:

In [3]: print(median_degree([1, 2, 3, 4, 5, 6, 359]))
3.0

In [4]: print(median_degree([-179, 174, 175, 176, 177, 178, 179]))
177.0

If the deg2rad and rad2deg bothers you, you should write your own unwrap function that handles data in degrees. Hope this helps, Bas P.S. Slightly off-topic rant against both numpy and matlab implementation of unwrap: They always assume data is in radians. There is some option to specify the maximum jump size in radians, but to me it would be more useful to specify the interval of a complete cycle, so that you can do

unwrapped_radians = unwrap(radians)
unwrapped_degrees = unwrap(degrees, 360)
unwrapped_32bit_counter = unwrap(overflowing_counter, 2**32)

-- http://mail.python.org/mailman/listinfo/python-list
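For readers without numpy, the unwrap-then-median trick Bas describes can be sketched in plain Python. The function names below (`unwrap_degrees`, `median_degrees`) are my own illustrative helpers, not part of any library:

```python
def unwrap_degrees(degrees, cycle=360.0):
    """Cancel jumps larger than half a cycle between consecutive
    samples, so the sequence becomes continuous."""
    out = [degrees[0]]
    for d in degrees[1:]:
        step = (d - out[-1] + cycle / 2) % cycle - cycle / 2
        out.append(out[-1] + step)
    return out

def median_degrees(degrees):
    """Unwrap, take the ordinary median, then rewrap to [0, 360)."""
    unwrapped = sorted(unwrap_degrees(degrees))
    return unwrapped[len(unwrapped) // 2] % 360

print(median_degrees([1, 2, 3, 4, 5, 6, 359]))               # 3.0
print(median_degrees([-179, 174, 175, 176, 177, 178, 179]))  # 177.0
```

This reproduces the two numpy results from the post, and carries the same assumption: samples are consecutive and never jump by 180 degrees or more between readings.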
Re: medians for degree measurements
On 2010-01-25 10:16 AM, Bas wrote: P.S. Slightly off-topic rant against both numpy and matlab implementation of unwrap: They always assume data is in radians. There is some option to specify the maximum jump size in radians, but to me it would be more useful to specify the interval of a complete cycle, so that you can do unwrapped_radians = unwrap(radians) unwrapped_degrees = unwrap(degrees, 360) unwrapped_32bit_counter = unwrap(overflowing_counter, 2**32) Rants accompanied with patches are more effective. :-) -- Robert Kern I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth. -- Umberto Eco -- http://mail.python.org/mailman/listinfo/python-list
Re: medians for degree measurements
On 2010-01-25 10:16 AM, Bas wrote: P.S. Slightly off-topic rant against both numpy and matlab implementation of unwrap: They always assume data is in radians. There is some option to specify the maximum jump size in radians, but to me it would be more useful to specify the interval of a complete cycle, so that you can do

unwrapped_radians = unwrap(radians)
unwrapped_degrees = unwrap(degrees, 360)
unwrapped_32bit_counter = unwrap(overflowing_counter, 2**32)

On Jan 25, 5:34 pm, Robert Kern robert.k...@gmail.com wrote: Rants accompanied with patches are more effective. :-) As you wish (untested):

def unwrap(p, cycle=2*pi, axis=-1):
    """docstring to be updated"""
    p = asarray(p)
    half_cycle = cycle / 2
    nd = len(p.shape)
    dd = diff(p, axis=axis)
    slice1 = [slice(None, None)]*nd  # full slices
    slice1[axis] = slice(1, None)
    ddmod = mod(dd + half_cycle, cycle) - half_cycle
    _nx.putmask(ddmod, (ddmod == -half_cycle) & (dd > 0), half_cycle)
    ph_correct = ddmod - dd
    _nx.putmask(ph_correct, abs(dd) < half_cycle, 0)
    up = array(p, copy=True, dtype='d')
    up[slice1] = p[slice1] + ph_correct.cumsum(axis)
    return up

I never saw a use case for the discontinuity argument, so in my preferred version it would be removed. Of course this breaks old code (but who uses this option anyhow?) and breaks compatibility between matlab and numpy. Cheers, Bas -- http://mail.python.org/mailman/listinfo/python-list
Re: Is python not good enough?
On Mon, Jan 25, 2010 at 8:57 AM, Simon Brunning si...@brunningonline.net wrote: 2010/1/25 Albert van der Horst alb...@spenarnc.xs4all.nl: If Go was to compete with anything, they would have given it a name that was Googleable. ;-) If they want it Googleable, it will be. ;-) http://www.google.com/search?q=go+language -- http://mail.python.org/mailman/listinfo/python-list
Re: list.pop(0) vs. collections.dequeue
On Jan 24, 11:24 pm, Paul Rubin no.em...@nospam.invalid wrote: Steve Howell showel...@yahoo.com writes: There is nothing wrong with deque, at least as far as I know, if the data structure actually applies to your use case. It does not apply to my use case. You haven't explained why deque doesn't apply to your use case. Until a convincing explanation emerges, the sentiment you're creating seems to be what's wrong with that guy and why doesn't he just use deque?. So, why aren't you using deque? If deque somehow isn't adequate for your use case, maybe it can be improved. These are the reasons I am not using deque:

1) I want to use native lists, so that downstream methods can use them as lists.
2) Lists are faster for accessing elements.
3) I want to be able to insert elements into the middle of the list.
4) I have no need for rotating elements.

I also have reasons for not using the other workarounds, such as reversing the list. And when discussing performance in this context, additive constants do matter. Wrong again. Operations that mutate lists are already expensive I'm talking about memory consumption, which is part of Python's concept of performance. You're proposing adding a word or two to every list, with insufficient justification presented so far. Any such justification would have to include a clear and detailed explanation of why using deque is insufficient, so that would be a good place to start. Adding a word or two to a list is an O(1) addition to a data structure that takes O(N) memory to begin with. That extra pointer should really be taken not just in context of the list itself taking O(N) memory, but also the fact that all the elements in the list are also consuming memory (until they get popped off). So adding the pointer has negligible cost. Another way of looking at it is that you would need to have 250 or so lists in memory at the same time before the extra pointer was even costing you kilobytes of memory. My consumer laptop has 3027908k of memory.
-- http://mail.python.org/mailman/listinfo/python-list
Re: medians for degree measurements
On 2010-01-25 11:06 AM, Bas wrote: On 2010-01-25 10:16 AM, Bas wrote: P.S. Slightly off-topic rant against both numpy and matlab implementation of unwrap: They always assume data is in radians. There is some option to specify the maximum jump size in radians, but to me it would be more useful to specify the interval of a complete cycle, so that you can do

unwrapped_radians = unwrap(radians)
unwrapped_degrees = unwrap(degrees, 360)
unwrapped_32bit_counter = unwrap(overflowing_counter, 2**32)

On Jan 25, 5:34 pm, Robert Kernrobert.k...@gmail.com wrote: Rants accompanied with patches are more effective. :-) As you wish (untested):

def unwrap(p, cycle=2*pi, axis=-1):
    """docstring to be updated"""
    p = asarray(p)
    half_cycle = cycle / 2
    nd = len(p.shape)
    dd = diff(p, axis=axis)
    slice1 = [slice(None, None)]*nd  # full slices
    slice1[axis] = slice(1, None)
    ddmod = mod(dd + half_cycle, cycle) - half_cycle
    _nx.putmask(ddmod, (ddmod == -half_cycle) & (dd > 0), half_cycle)
    ph_correct = ddmod - dd
    _nx.putmask(ph_correct, abs(dd) < half_cycle, 0)
    up = array(p, copy=True, dtype='d')
    up[slice1] = p[slice1] + ph_correct.cumsum(axis)
    return up

I never saw a use case for the discontinuity argument, so in my preferred version it would be removed. Of course this breaks old code (but who uses this option anyhow?) and breaks compatibility between matlab and numpy. Sometimes legitimate features have phase discontinuities greater than pi. If you want your feature to be accepted, please submit a patch that does not break backwards compatibility and which updates the docstring and tests appropriately. I look forward to seeing the complete patch! Thank you. http://projects.scipy.org/numpy -- Robert Kern I have come to believe that the whole world is an enigma, a harmless enigma that is made terrible by our own mad attempt to interpret it as though it had an underlying truth. -- Umberto Eco -- http://mail.python.org/mailman/listinfo/python-list
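The `cycle` argument being proposed above is easy to illustrate outside of numpy. Below is a hypothetical plain-Python unwrap with the suggested signature (the function body and example values are mine, not from Bas's patch), applied to an 8-bit counter that overflows between samples:

```python
def unwrap(seq, cycle=360):
    """Plain-Python sketch of the proposed cycle argument: any jump
    larger than half a cycle is treated as a wrap-around."""
    half = cycle / 2.0
    out = [seq[0]]
    for x in seq[1:]:
        jump = (x - out[-1] + half) % cycle - half
        out.append(out[-1] + jump)
    return out

# An 8-bit up-counter that overflows between samples:
print(unwrap([250, 253, 2, 5, 9], 256))   # [250, 253.0, 258.0, 261.0, 265.0]

# The same function, defaulting to degrees:
print(unwrap([170, -170]))                # [170, 190.0]
```

The point of the rant is exactly this: one parameter, the length of a complete cycle, covers radians, degrees, and overflowing counters alike, whereas a jump-size-in-radians parameter does not.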
Xah's Edu Corner: Unix Pipe As Functional Language
Dear comrades, Hot from the press: • Unix Pipe As Functional Language http://xahlee.org/comp/unix_pipes_and_functional_lang.html plain text version follows: -- Unix Pipe As Functional Language Xah Lee, 2010-01-25 Found the following juicy interview snippet today: Q: Is there a connection between the idea of composing programs together from the command line through pipes and the idea of writing little languages, each for a specific domain? Alfred Aho: I think there's a connection. Certainly in the early days of Unix, pipes facilitated function composition on the command line. You could take an input, perform some transformation on it, and then pipe the output into another program. ... Q: When you say “function composition”, that brings to mind the mathematical approach of function composition. Alfred Aho: That's exactly what I mean. Q: Was that mathematical formalism in mind at the invention of the pipe, or was that a metaphor added later when someone realized it worked the same way? Alfred Aho: I think it was right there from the start. Doug McIlroy, at least in my book, deserves the credit for pipes. He thought like a mathematician and I think he had this connection right from the start. I think of the Unix command line as a prototypical functional language. It is from an interview with Alfred Aho, one of the creators of AWK. The source is from this book: Masterminds of Programming: Conversations with the Creators of Major Programming Languages (2009), by Federico Biancuzzi et al. (amazon) Since about 1998, when i got into the unix programming industry, i have seen the pipe as a postfix notation, and sequencing pipes as a form of functional programming, while finding it overall extremely badly designed. I've written a few essays explaining the functional programming connection and exposing the lousy syntax. (mostly in years around 2000) However, i've never seen another person express the idea that unix pipes are a form of postfix notation and functional programming.
It is a great satisfaction to see one of the main unix authors state so. -- Unix Pipe As Functional Programming The following email content (slightly edited) was posted to the Mac OS X mailing list, 2002-05. Source From: xah / xahlee.org Subject: Re: mail handling/conversion between OSes/apps Date: May 12, 2002 8:41:58 PM PDT Cc: macosx-talk / omnigroup.com Yes, unix has this beautiful philosophy. The philosophy is functional programming. For example, define: power(x, y) := x^y so “power(3, 2)” returns “9”. Here “power” is a function that takes 2 arguments. The first parameter specifies the number to be raised to a power, the second the number of times to multiply it by itself. Functions can be nested, f(g(h(x))), or composed, compose(f,g,h)(x). Here “compose” itself is a function, which takes other functions as arguments, and the output of compose is a new function that is equivalent to nesting f g h. Nesting does not necessarily involve nested syntax. Here's a postfix notation in Mathematica for example: x // h // g // f or prefix notation: f @ g @ h @ x or in lisp (f (g (h x))) The principle is that everything is either a function definition or a function application, and a function's behavior is strictly determined by its arguments. Apple around 1997 or so had this OpenDoc technology, which applied a similar idea more broadly across the OS. That is, instead of one monolithic browser or big image editors or other software, you have lots of small tools or components that each does one specific thing, and all can call each other or be embedded in an application framework as services or the like. For example, in an email app, you can use BBEdit to write your email, use Microsoft's spell checker, use XYZ brand of recorder to record a message, without having to open many applications or use the Finder the way we would do today. This multiplies flexibility.
(OpenDoc was killed when Steve Jobs became the iCEO around 1998 and did some serious house-keeping, against the ghastly anger of Mac developers and fanatics; I'm sure many of you remember this piece of history.) The unix pipe syntax “|” is a postfix notation for nesting. e.g.

ps auwwx | awk '{print $2}' | sort -n | xargs echo

in conventional syntax it might look like this:

xargs( echo, sort(n, awk('print $2', ps(auwwx))) )

So when you use “pipe” to string many commands together in unix, you are doing supreme functional programming. That's why it is so flexible and useful, because each component or function does one thing, and you can combine them in a myriad of ways. However, this beautiful functional programming idea, when it is implemented by the unix heads, becomes a fucking mess. Nothing works and nothing works right. I don't feel like writing a comprehensive exposition on this at the moment. Here's a quick summary:

* Fantastically stupid syntax.
* Inconsistencies everywhere. Everywhere.
*
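The nesting-versus-pipeline correspondence described in the post can be sketched in a few lines of Python. `compose` and `pipe` here are illustrative helpers of my own, not standard library functions:

```python
from functools import reduce

def compose(*funcs):
    """compose(f, g, h)(x) == f(g(h(x))) -- conventional nesting."""
    return lambda x: reduce(lambda acc, f: f(acc), reversed(funcs), x)

def pipe(value, *funcs):
    """Postfix order, like `ps | awk | sort | xargs` on the shell."""
    return reduce(lambda acc, f: f(acc), funcs, value)

double = lambda n: 2 * n
inc = lambda n: n + 1

print(compose(double, inc)(10))   # double(inc(10)) -> 22
print(pipe(10, inc, double))      # same pipeline written postfix -> 22
```

`pipe` reads left to right like a shell pipeline, while `compose` reads inside out like nested function calls; both express the same composition.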
Re: list.pop(0) vs. collections.dequeue
On Jan 24, 10:07 pm, Steven D'Aprano ste...@remove.this.cybersource.com.au wrote: On Sun, 24 Jan 2010 20:12:11 -0800, Steve Howell wrote: The most ambitious proposal is to fix the memory manager itself to allow the release of memory from the start of the chunk. That's inappropriate given the memory fragmentation it would cause. Bullshit. Memory managers consolidate free memory chunks all the time. That's their job. So let me get this straight... You've complained that Python's list.pop(0) is lame because it moves memory around. And your solution to that is to have the memory manager move the memory around instead? Perhaps I'm missing something, but I don't see the advantage here. At best, you consolidate all those moves you wanted to avoid and do them all at once instead of a few at a time. At worst, you get a situation where the application periodically, and unpredictably, grinds to a halt while the memory manager tries to defrag all those lists. You are misunderstanding what I meant, because I did not explain it very well. When you release memory from the front of the list, if the memory before it was also free, the memory manager could consolidate the two chunks as one free chunk. There is no rational scenario where the memory manager grinds to a halt trying to defrag all those lists. Of course, once the list gets fully garbage collected, the entire chunk of memory is freed up. Your approach of snarling against list is not persuading anyone that list needs to be changed, because most everyone is satisfied with the existing solution. Please provide evidence of that. I am pretty sure that everybody who chooses alternatives to Python would disagree. Do you honestly believe that everybody who prefers another language over Python does so because they dislike the performance of list.pop(0)? No I don't believe any statement that makes gross generalizations, so I also don't believe most everyone is satisfied with the existing solution.
You might change approaches and discuss deque, what's wrong with it, and whether it can be fixed. Getting a change approved for deque is probably much easier than getting one approved for list, just because nowhere near as many things depend on deque's performance. Again...I am not looking to improve deque, which is a perfectly valid data structure for a limited set of problems. And when discussing performance in this context, additive constants do matter. Wrong again. Operations that mutate lists are already expensive, and a few checks to see if unreleased memory can be reclaimed are totally NEGLIGIBLE. Popping from the end of the list isn't expensive. Reversing lists is relatively cheap. In-place modifications are very cheap. I am talking in relative terms here. I am saying that checking a single flag in C code isn't gonna significantly slow down any operation that calls list_resize(). Delete operations would already be doing a memmove operation, and insert operations already have to decide whether to optimistically allocate memory and create the new list element. Regarding the extra use of memory, I addressed this in my prior posting. Here is code for list_resize:

static int
list_resize(PyListObject *self, Py_ssize_t newsize)
{
    PyObject **items;
    size_t new_allocated;
    Py_ssize_t allocated = self->allocated;

    /* Bypass realloc() when a previous overallocation is large enough
       to accommodate the newsize.  If the newsize falls lower than half
       the allocated size, then proceed with the realloc() to shrink the
       list. */
    if (allocated >= newsize && newsize >= (allocated >> 1)) {
        assert(self->ob_item != NULL || newsize == 0);
        Py_SIZE(self) = newsize;
        return 0;
    }

    /* This over-allocates proportional to the list size, making room
     * for additional growth.  The over-allocation is mild, but is
     * enough to give linear-time amortized behavior over a long
     * sequence of appends() in the presence of a poorly-performing
     * system realloc().
     * The growth pattern is:  0, 4, 8, 16, 25, 35, 46, 58, 72, 88, ...
     */
    new_allocated = (newsize >> 3) + (newsize < 9 ? 3 : 6);

    /* check for integer overflow */
    if (new_allocated > PY_SIZE_MAX - newsize) {
        PyErr_NoMemory();
        return -1;
    } else {
        new_allocated += newsize;
    }

    if (newsize == 0)
        new_allocated = 0;
    items = self->ob_item;
    if (new_allocated <= ((~(size_t)0) / sizeof(PyObject *)))
        PyMem_RESIZE(items, PyObject *, new_allocated);
    else
        items = NULL;
    if (items == NULL) {
        PyErr_NoMemory();
        return -1;
    }
    self->ob_item = items;
    Py_SIZE(self) = newsize;
    self->allocated = new_allocated;
    return 0;
}
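The growth pattern quoted in the comment (0, 4, 8, 16, 25, 35, 46, 58, 72, 88, ...) can be reproduced by simulating CPython's over-allocation arithmetic in Python itself. This is a sketch for checking the sequence, not part of the interpreter:

```python
def grow(newsize):
    # mirrors new_allocated = (newsize >> 3) + (newsize < 9 ? 3 : 6),
    # plus newsize itself
    return newsize + (newsize >> 3) + (3 if newsize < 9 else 6)

sizes = [0]
allocated = 0
for n in range(1, 74):       # simulate 73 successive appends
    if n > allocated:        # capacity exceeded: realloc to the new size
        allocated = grow(n)
        sizes.append(allocated)

print(sizes)   # [0, 4, 8, 16, 25, 35, 46, 58, 72, 88]
```

The roughly 12.5% over-allocation is what gives appends their amortized O(1) behavior; the debate in this thread is about whether a similar trick could apply at the front of the list.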
Re: list.pop(0) vs. collections.dequeue
On Jan 25, 9:31 am, Steve Howell showel...@yahoo.com wrote: On Jan 24, 11:24 pm, Paul Rubin no.em...@nospam.invalid wrote: Steve Howell showel...@yahoo.com writes: There is nothing wrong with deque, at least as far as I know, if the data structure actually applies to your use case. It does not apply to my use case. You haven't explained why deque doesn't apply to your use case. Until a convincing explanation emerges, the sentiment you're creating seems to be what's wrong with that guy and why doesn't he just use deque?. So, why aren't you using deque? If deque somehow isn't adequate for your use case, maybe it can be improved. These are the reasons I am not using deque:

1) I want to use native lists, so that downstream methods can use them as lists.
2) Lists are faster for accessing elements.
3) I want to be able to insert elements into the middle of the list.
4) I have no need for rotating elements.

I also have reasons for not using the other workarounds, such as reversing the list. And when discussing performance in this context, additive constants do matter. Wrong again. Operations that mutate lists are already expensive I'm talking about memory consumption, which is part of Python's concept of performance. You're proposing adding a word or two to every list, with insufficient justification presented so far. Any such justification would have to include a clear and detailed explanation of why using deque is insufficient, so that would be a good place to start. Adding a word or two to a list is an O(1) addition to a data structure that takes O(N) memory to begin with. That extra pointer should really be taken not just in context of the list itself taking O(N) memory, but also the fact that all the elements in the list are also consuming memory (until they get popped off). So adding the pointer has negligible cost.
Another way of looking at it is that you would need to have 250 or so lists in memory at the same time before the extra pointer was even costing you kilobytes of memory. My consumer laptop has 3027908k of memory. I should also point out that my telephone has gigabytes of memory. It's a fairly expensive device, but I regularly carry multiple gigabytes of memory around in my front pants pocket. There are some valid reasons to reject a proposal to make deleting elements off the top of the list be O(1). Memory consumption is not one of them. Even the most naive patch to make pop(0) and del lst[0] advance the pointer would eventually reclaim memory once the list is garbage collected. Also, by allowing users to pop elements off the list without a memmove, you encourage users to discard elements earlier in the process, which means you can amortize the garbage collection for the list elements themselves (i.e. less spiky), and do it earlier. -- http://mail.python.org/mailman/listinfo/python-list
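The pointer-advance idea being debated can be prototyped in pure Python. `FastQueueList` below is a hypothetical sketch of my own, not a proposed patch: it fakes the C-level change by tracking an offset into a backing list and compacting only occasionally, which makes front pops amortized O(1) instead of O(N):

```python
class FastQueueList:
    """Sketch: pop from the front by advancing an offset instead of
    shifting every element, reclaiming dead slots only occasionally."""

    def __init__(self, items=()):
        self._items = list(items)
        self._head = 0

    def pop0(self):
        value = self._items[self._head]
        self._items[self._head] = None      # release the reference early
        self._head += 1
        # Compact once half the backing list is dead space, so each
        # element is moved O(1) times on average.
        if self._head * 2 > len(self._items):
            del self._items[:self._head]
            self._head = 0
        return value

    def __len__(self):
        return len(self._items) - self._head

q = FastQueueList(range(5))
print([q.pop0() for _ in range(5)])   # [0, 1, 2, 3, 4]
```

Of course, a Python-level wrapper gives up the "downstream methods see a native list" requirement from the thread; the point is only to show the memory behavior the proposal is after.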
Re: list.pop(0) vs. collections.dequeue
On Jan 24, 1:51 pm, Daniel Stutzbach dan...@stutzbachenterprises.com wrote: On Sun, Jan 24, 2010 at 1:53 PM, Steve Howell showel...@yahoo.com wrote: I don't think anybody provided an actual link, but please correct me if I overlooked it. I have to wonder if my messages are all ending up in your spam folder for some reason. :-) PEP 3128 (which solves your problem, but not using the implementation you suggest): http://www.python.org/dev/peps/pep-3128/ Implementation as an extension module: http://pypi.python.org/pypi/blist/ Related discussion: http://mail.python.org/pipermail/python-3000/2007-April/006757.html http://mail.python.org/pipermail/python-3000/2007-May/007491.html Detailed performance comparison: http://stutzbachenterprises.com/performance-blist I maintain a private fork of Python 3 with the blist replacing the regular list, as a way of rigorously testing the blist implementation. Although I originally proposed a PEP, I am content to have the blist exist as a third-party module. Hi Daniel, I agree with what Raymond Hettinger says toward the top of the PEP. Blist, while extremely useful, does seem to trade off performance of common operations, notably get item, in order to get better performance for other operations (notably insert/delete). My algorithm does exactly N pops and roughly N list accesses, so I would be going from N*N + N to N + N log N if I switched to blist. That would be at least a theoretical gain over the current performance, but if pop() were O(1), I could get the whole thing down to N time. -- http://mail.python.org/mailman/listinfo/python-list
Re: medians for degree measurements
On 2010-01-25 10:16 AM, Bas wrote: P.S. Slightly off-topic rant against both numpy and matlab implementation of unwrap: They always assume data is in radians. There is some option to specify the maximum jump size in radians, but to me it would be more useful to specify the interval of a complete cycle, so that you can do [snip] I never saw a use case for the discontinuity argument, so in my preferred version it would be removed. Of course this breaks old code (by who uses this option anyhow??) and breaks compatibility between matlab and numpy. On Jan 25, 6:39 pm, Robert Kern robert.k...@gmail.com wrote: Sometimes legitimate features have phase discontinuities greater than pi. We are dwelling more and more off-topic here, but anyhow: According to me, the use of unwrap is inherently related to measurement instruments that wrap around, like rotation encoders, interferometers or up/down counters. Say you have a real phase step of +1.5 pi: how could you possibly discern it from a real phase step of -pi/2? This is like an aliasing problem, so the only real solution would be to increase the sampling speed of your system. To me, the discontinuity parameter serves some hard-to-explain corner case (see matlab manual), which is better left to be solved by hand in the cases where it appears. I regret matlab ever added the feature. If you want your feature to be accepted, please submit a patch that does not break backwards compatibility and which updates the docstring and tests appropriately. I look forward to seeing the complete patch! Thank you. I think my 'cycle' argument does have real uses, like the degrees in this thread and the digital-counter example (which comes from my own experience and required me to write my own unwrap). I'll try to submit a non-breaking patch if I ever have time. Bas -- http://mail.python.org/mailman/listinfo/python-list
Re: medians for degree measurements
On Jan 24, 5:26 pm, Robert Kern robert.k...@gmail.com wrote: On 2010-01-23 05:52 , Steven D'Aprano wrote: On Fri, 22 Jan 2010 22:09:54 -0800, Steve Howell wrote: On Jan 22, 5:12 pm, MRABpyt...@mrabarnett.plus.com wrote: Steve Howell wrote: I just saw the thread for medians, and it reminded me of a problem that I need to solve. We are writing some Python software for sailing, and we need to detect when we've departed from the median heading on the leg. Calculating arithmetic medians is straightforward, but compass bearings add a twist. [...] I like this implementation, and it would probably work 99.% of the time for my particular use case. The only (very contrived) edge case that I can think of is when you have 10 bearings to SSW, 10 bearings to SSE, and the two outliers are unfortunately in the NE and NW quadrants. It seems like the algorithm above would pick one of the outliers. The trouble is that median of angular measurements is not a meaningful concept. The median depends on the values being ordered, but angles can't be sensibly ordered. Which is larger, 1 degree north or 359 degrees? Is the midpoint between them 0 degree or 180 degree? Then don't define the median that way. Instead, define the median as the point that minimizes the sum of the absolute deviations of the data from that point (the L1 norm of the deviations, for those familiar with that terminology). For 1-D data on the real number line, that corresponds to sorting the data and taking the middle element (or the artithmetic mean of the middle two in the case of even-numbered data). My definition applies to other spaces, too, that don't have a total order attached to them including the space of angles. The circular median is a real, well-defined statistic that is used for exactly what the OP intends to use it for. 
I admitted pretty early in the thread that I did not define the statistic with much rigor, although most people got the gist of the problem, and as Robert points out, you can define the problem more clearly, although I think under any definition, some inputs will have multiple solutions, such as (0, 90, 180, 270) and (0, 120, 240). If you've ever done lake sailing, you probably have encountered days where the wind seems to be coming from those exact angles. This is the code that I'll be using (posted by Nobody). I'll report back if it has any issues.

def mean(bearings):
    x = sum(sin(radians(a)) for a in bearings)
    y = sum(cos(radians(a)) for a in bearings)
    return degrees(atan2(x, y))

def median(bearings):
    m = mean(bearings)
    bearings = [(a - m + 180) % 360 - 180 for a in bearings]
    bearings.sort()
    median = bearings[len(bearings) / 2]
    median += m
    median %= 360
    return median

-- http://mail.python.org/mailman/listinfo/python-list
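Robert's minimize-the-L1-deviations definition from earlier in the thread can be checked directly with a brute-force sketch. The helpers below (`circ_dist`, `circular_median`) are hypothetical names of my own, and the search is O(n**2), so this is for cross-checking small inputs only; for the degenerate inputs mentioned above (0, 90, 180, 270), it simply returns the first of the tied minimizers:

```python
def circ_dist(a, b):
    """Shortest angular distance in degrees between two bearings."""
    return min((a - b) % 360, (b - a) % 360)

def circular_median(bearings):
    """Pick the data point minimizing the summed circular deviations
    (the L1 definition restricted to the data points themselves)."""
    return min(bearings,
               key=lambda c: sum(circ_dist(c, b) for b in bearings))

print(circular_median([1, 2, 3, 4, 5, 6, 359]))               # 3
print(circular_median([-179, 174, 175, 176, 177, 178, 179]))  # 177
```

Both answers agree with the mean-recentering code above on these inputs, which is a useful sanity check before taking it sailing.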
Total maximal size of data
I have a simple question to which I could not find an answer. What is the total maximal size of a list, including the size of its elements? I do not like to look into the Python source. Here is a code example:

    import struct

    KB = 1024
    MB = KB * KB
    GB = MB * KB
    buf = []
    bs = 32 * KB
    n = 4 * GB / bs
    print "N", n
    i = 0
    size = 0L
    while i < n:
        data = struct.pack("%ss" % (bs,), "")
        buf.append(data)
        size = size + bs
        if size % (100 * MB) == 0:
            print "SIZE", size / MB, "MB"
        i = i + 1
    while 1:  # to keep the script running while I am looking at the machine status
        pass

Here is what I get on a 32-bit architecture:

    cat /proc/meminfo
    MemTotal:  8309860 kB
    MemFree:   5964888 kB
    Buffers:     84396 kB
    Cached:     865644 kB
    SwapCached:      0 kB
    ..

The program output:

    N 131072
    SIZE 100 MB
    SIZE 200 MB
    SIZE 300 MB
    SIZE 400 MB
    SIZE 500 MB
    SIZE 600 MB
    SIZE 700 MB
    SIZE 800 MB
    SIZE 900 MB
    SIZE 1000 MB
    SIZE 1100 MB
    SIZE 1200 MB
    SIZE 1300 MB
    SIZE 1400 MB
    SIZE 1500 MB
    SIZE 1600 MB
    SIZE 1700 MB
    SIZE 1800 MB
    SIZE 1900 MB
    SIZE 2000 MB
    SIZE 2100 MB
    SIZE 2200 MB
    SIZE 2300 MB
    SIZE 2400 MB
    SIZE 2500 MB
    SIZE 2600 MB
    SIZE 2700 MB
    SIZE 2800 MB
    SIZE 2900 MB
    SIZE 3000 MB
    Traceback (most recent call last):
      File "bs.py", line 14, in ?
        data = struct.pack("%ss" % (bs,), "")
    MemoryError

The number of list elements for a given block size is 131072. If I change the block size, the script traces back at the same total size, 3000 MB. Somewhere I read that a list could have 2,147,483,647 items on most platforms; somewhere else, that it is 536,870,912 (http://stackoverflow.com/questions/855191/how-big-can-a-python-array-get). But what is the maximal size of the whole list, including the size of its elements? Thanks. -- http://mail.python.org/mailman/listinfo/python-list
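One way to reason about the question is to separate the list's own cost (one pointer per element plus a header) from the cost of the elements it references. A minimal sketch using sys.getsizeof, with illustrative sizes that are not from the original post:

```python
import sys

# sys.getsizeof reports only the object passed in, not the objects it
# references, so the total footprint is the list's own size plus the
# size of each (distinct) element.
bs = 32 * 1024
block = b"\x00" * bs        # one 32 KB block
buf = [block] * 4           # four references, for illustration

list_size = sys.getsizeof(buf)      # header + 4 pointers (+ slack)
block_size = sys.getsizeof(block)   # bytes header + 32768 payload bytes
total_if_distinct = list_size + len(buf) * block_size
```

The list itself stays tiny; in the original script virtually all of the 3 GB is the 131072 packed strings, which is why the limit hit is the process address-space limit, not any limit of the list type.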
Re: Total maximal size of data
On 25.01.10 20:05, Alexander Moibenko wrote: I have a simple question to which I could not find an answer. What is the total maximal size of list including size of its elements? I do not like to look into python source. But it would answer that question pretty fast. Because then you'd see that all list-object methods are defined in terms of Py_ssize_t, which is an alias for ssize_t on your platform. On 64-bit, that should be a 64-bit long. Diez -- http://mail.python.org/mailman/listinfo/python-list
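The Py_ssize_t limit is visible from Python without reading the C source: sys.maxsize is the largest value a Py_ssize_t can hold, and hence the theoretical cap on the element count of a list (a minimal check, not from the thread):

```python
import sys

# sys.maxsize is PY_SSIZE_T_MAX: 2**31 - 1 on 32-bit builds,
# 2**63 - 1 on 64-bit builds. It caps the number of elements,
# not the memory the elements themselves occupy.
cap = sys.maxsize
bits = 64 if cap > 2**32 else 32
```

In practice the element *data* exhausts the address space long before the element *count* approaches this cap, as the original post demonstrates.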
Re: Sikuli: the coolest Python project I have yet seen...
On 1/25/2010 9:14 AM, Javier Collado wrote: I think the site is under maintenance. I tried a couple of hours ago and it worked fine. As an alternative, I found that this link also worked: http://www.sikuli.org/ (This just redirects to the link below.) http://sikuli.csail.mit.edu/ I also tried this. (This link is broken!) Worked for me both yesterday and now. -- http://mail.python.org/mailman/listinfo/python-list
Re: list.pop(0) vs. collections.dequeue
On Mon, Jan 25, 2010 at 12:24 PM, Steve Howell showel...@yahoo.com wrote: Hi Daniel, I agree with what Raymond Hettinger says toward the top of the PEP. Blist, while extremely useful, does seem to have to trade off performance of common operations, notably get item, in order to get better performance for other operations (notably insert/delete). Actually, the latest version of blist is competitive for get item and similar operations. See: http://stutzbachenterprises.com/performance-blist/item http://stutzbachenterprises.com/performance-blist/set-item http://stutzbachenterprises.com/performance-blist/lifo http://stutzbachenterprises.com/performance-blist/shuffle I added a flat cache of the leaf nodes, yielding O(1) get/set item operations whenever those operations dominate over insert/delete operations. The cache adds around 1.5% memory overhead. My algorithm does exactly N pops and roughly N list accesses, so I would be going from N*N + N to N + N log N if switched to blist. That would be at least a theoretical gain over the current performance, but if pop() were O(1), I could get the whole thing down to N time. If I understand correctly, you feel strongly that a list.pop(0) that runs in O(n) time is broken, but you're comfortable with a list.pop(1) that runs in O(n) time. Is that correct? How do you feel about a bisect.insort(list, item) that takes O(n) time? Different people are bound to have different opinions about which operations are most important and where lies the best tradeoff between different operations (as well as code complexity). I am not sure why you feel so strongly that particular spot is best. Obviously, I prefer a slightly different spot, but I also respect the core developers' choice. -- Daniel Stutzbach, Ph.D. President, Stutzbach Enterprises, LLC http://stutzbachenterprises.com -- http://mail.python.org/mailman/listinfo/python-list
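The tradeoff under discussion can be made concrete with a toy example (data here is illustrative, not from the thread): list.pop(0) must shift every remaining element, so it is O(n) per pop, while deque.popleft() is O(1); both drain the sequence in the same order.

```python
from collections import deque

xs = list(range(5))
d = deque(range(5))

popped_list = [xs.pop(0) for _ in range(5)]     # O(n) per pop
popped_deque = [d.popleft() for _ in range(5)]  # O(1) per pop
# identical output; only the per-operation cost differs
```

The flip side, as the thread notes, is that deque gives up O(1) arbitrary indexing and (at the time of this thread) mid-sequence insertion, which is exactly where blist positions itself.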
Re: Total maximal size of data
On Jan 25, 1:23 pm, Diez B. Roggisch de...@nospam.web.de wrote: On 25.01.10 20:05, Alexander Moibenko wrote: I have a simple question to which I could not find an answer. What is the total maximal size of list including size of its elements? I do not like to look into python source. But it would answer that question pretty fast. Because then you'd see that all list-object methods are defined in terms of Py_ssize_t, which is an alias for ssize_t on your platform. On 64-bit, that should be a 64-bit long. Diez Then how do you explain the program output? Alex. -- http://mail.python.org/mailman/listinfo/python-list
Re: start .pyo files with doubleclick on windows
On Sun, 24 Jan 2010 19:36:53 -0300, News123 news...@free.fr wrote: Hi Alf, Alf P. Steinbach wrote: * News123: Hi, I'd like to start .pyo files under windows with a double click.

    C:\> assoc .pyo
    .pyo=Python.CompiledFile
    C:\> ftype python.compiledfile
    python.compiledfile=C:\Program Files\cpython\python31\python.exe %1 %*
    C:\> _

Use ftype to change the association. Thanks a lot, I learned something new about Windows. What I did now is this:

    assoc .pyo=Python.CompiledOptimizedFile
    ftype Python.CompiledOptimizedFile=C:\Python26\python.exe -OO %1 %*

This looks like a bug (or two): python foo.pyo should fail; a .pyo has a different magic number than a .pyc file. Importing a .pyo file fails in this case. A .pyo file should be run with python -O foo.pyo (or -OO); on Windows, .pyo files should be associated with python -O by default, so you should not need to do that by yourself. -- Gabriel Genellina -- http://mail.python.org/mailman/listinfo/python-list
Re: Default path for files
On Sun, 24 Jan 2010 15:04:48 -0300, Günther Dietrich gd_use...@spamfence.net wrote: Rotwang sg...@hotmail.co.uk wrote: Check out http://docs.python.org/library/os.html and the function chdir; it is what you are looking for. Thank you. So would adding

    import os
    os.chdir(path)

to site.py (or any other module which is automatically imported during initialisation) change the default location to path every time I used Python? Don't change the library modules. It would catch you anytime when you expect it least. See the environment variable PYTHONSTARTUP and the associated startup file. sitecustomize.py would be a better place; PYTHONSTARTUP is only used when running in interactive mode. Anyway, I'd do that explicitly in each script that requires it; after upgrading the Python version, or moving to another PC, those scripts would start failing... -- Gabriel Genellina -- http://mail.python.org/mailman/listinfo/python-list
Re: Total maximal size of data
On 25.01.10 20:39, AlexM wrote: On Jan 25, 1:23 pm, Diez B. Roggisch de...@nospam.web.de wrote: On 25.01.10 20:05, Alexander Moibenko wrote: I have a simple question to which I could not find an answer. What is the total maximal size of list including size of its elements? I do not like to look into python source. But it would answer that question pretty fast. Because then you'd see that all list-object methods are defined in terms of Py_ssize_t, which is an alias for ssize_t on your platform. On 64-bit, that should be a 64-bit long. Diez Then how do you explain the program output? What exactly? That after 3GB it ran out of memory? Because you don't have 4GB of memory available for processes. Diez -- http://mail.python.org/mailman/listinfo/python-list
Re: Total maximal size of data
On 1/25/2010 2:05 PM, Alexander Moibenko wrote: I have a simple question to which I could not find an answer. Because it has no finite answer. What is the total maximal size of list including size of its elements? In theory, unbounded. In practice, limited by the memory of the interpreter. The maximum # of elements depends on the interpreter. Each element can itself be a list with that maximum # of elements, and recursively so on... Terry Jan Reedy -- http://mail.python.org/mailman/listinfo/python-list
Re: Total maximal size of data
On Jan 25, 2:03 pm, Diez B. Roggisch de...@nospam.web.de wrote: On 25.01.10 20:39, AlexM wrote: [...] Then how do you explain the program output? What exactly? That after 3GB it ran out of memory? Because you don't have 4GB of memory available for processes. Diez Did you see my posting? Here is what I get on a 32-bit architecture:

    cat /proc/meminfo
    MemTotal:  8309860 kB
    MemFree:   5964888 kB
    Buffers:     84396 kB
    Cached:     865644 kB
    SwapCached:      0 kB
    ...

I have more than 5G of memory, not speaking of swap space. -- http://mail.python.org/mailman/listinfo/python-list
Re: Total maximal size of data
On Jan 25, 2:07 pm, Terry Reedy tjre...@udel.edu wrote: On 1/25/2010 2:05 PM, Alexander Moibenko wrote: I have a simple question to which I could not find an answer. Because it has no finite answer. What is the total maximal size of list including size of its elements? In theory, unbounded. In practice, limited by the memory of the interpreter. The maximum # of elements depends on the interpreter. Each element can itself be a list with that maximum # of elements, and recursively so on... Terry Jan Reedy I am not asking about the maximum number of elements; I am asking about the total maximal size of the list, including the size of its elements. In other words: if the size of each list element is ELEMENT_SIZE and all elements have the same size, what would be the maximal number of these elements on a 32-bit architecture? I see 3 GB, and wonder why. Why not 2 GB or 4 GB? AlexM -- http://mail.python.org/mailman/listinfo/python-list
Re: Total maximal size of data
* AlexM: On Jan 25, 2:07 pm, Terry Reedy tjre...@udel.edu wrote: On 1/25/2010 2:05 PM, Alexander Moibenko wrote: I have a simple question to which I could not find an answer. Because it has no finite answer What is the total maximal size of list including size of its elements? In theory, unbounded. In practice, limited by the memory of the interpreter. The maximum # of elements depends on the interpreter. Each element can be a list whose maximum # of elements . and recursively so on... Terry Jan Reedy I am not asking about maximum numbers of elements I am asking about total maximal size of list including size of its elements. In other words: if size of each list element is ELEMENT_SIZE and all elements have the same size what would be the maximal number of these elements in 32 - bit architecture? I see 3 GB, and wonder why? Why not 2 GB or not 4 GB? At a guess you were running this in 32-bit Windows. By default it reserves the upper two gig of address space for mapping system DLLs. It can be configured to use just 1 gig for that, and it seems like your system is, or you're using some other system with that kind of behavior, or, it's just arbitrary... Cheers hth., - Alf (by what mechanism do socks disappear from the washer?) -- http://mail.python.org/mailman/listinfo/python-list
Re: Total maximal size of data
On 25.01.10 21:15, AlexM wrote: On Jan 25, 2:03 pm, Diez B. Roggisch de...@nospam.web.de wrote: [...] Did you see my posting? Here is what I get on a 32-bit architecture: cat /proc/meminfo MemTotal: 8309860 kB MemFree: 5964888 kB Buffers: 84396 kB Cached: 865644 kB SwapCached: 0 kB ... I have more than 5G of memory, not speaking of swap space. Yes, I saw your posting. 32Bit is 32Bit. Do you know about PAE? http://de.wikipedia.org/wiki/Physical_Address_Extension Just because the system can deal with more overall memory, one process can't get more than 4 GB (or even less, through re-mapped memory), unless it uses specific APIs like the old hi-mem stuff under DOS. Diez -- http://mail.python.org/mailman/listinfo/python-list
Re: Total maximal size of data
On Jan 25, 2:37 pm, Alf P. Steinbach al...@start.no wrote: * AlexM: On Jan 25, 2:07 pm, Terry Reedy tjre...@udel.edu wrote: On 1/25/2010 2:05 PM, Alexander Moibenko wrote: I have a simple question to which I could not find an answer. Because it has no finite answer What is the total maximal size of list including size of its elements? In theory, unbounded. In practice, limited by the memory of the interpreter. The maximum # of elements depends on the interpreter. Each element can be a list whose maximum # of elements . and recursively so on... Terry Jan Reedy I am not asking about maximum numbers of elements I am asking about total maximal size of list including size of its elements. In other words: if size of each list element is ELEMENT_SIZE and all elements have the same size what would be the maximal number of these elements in 32 - bit architecture? I see 3 GB, and wonder why? Why not 2 GB or not 4 GB? At a guess you were running this in 32-bit Windows. By default it reserves the upper two gig of address space for mapping system DLLs. It can be configured to use just 1 gig for that, and it seems like your system is, or you're using some other system with that kind of behavior, or, it's just arbitrary... Cheers hth., - Alf (by what mechanism do socks disappear from the washer?) No, it is 32-bit Linux. Alex -- http://mail.python.org/mailman/listinfo/python-list
Re: how can i know if a python object have a attribute such as 'attr1'?
On 24-01-2010, 00:38:29, Terry Reedy tjre...@udel.edu wrote: On 1/23/2010 10:56 AM, Arnaud Delobelle wrote: thinke365 thinke...@gmail.com writes: for example, i may define a python class:

    class A:
        def sayHello(self):
            print 'hello'

    a = A()
    a.attr1 = 'hello'
    a.attr2 = 'bb'
    b = A()
    a.attr2 = 'aa'

how can i know whether an object has an attribute named attr1?

    hasattr(a, 'attr1')

or

    try:
        a.attr1
    except AttributeError:
        pass

or -- if you are interested only in attributes contained in the attribute dict of this particular object (and not in attributes of its type, base types, nor attributes calculated on demand by __getattr__/__getattribute__ methods) -- you can check its __dict__:

* using vars(), e.g.: if 'attr1' in vars(a)...
* or directly (less elegant?), e.g.: if 'attr1' in a.__dict__...

But please remember that it doesn't work for instances of types with __slots__ defined (see: http://docs.python.org/reference/datamodel.html#slots). Regards, *j -- Jan Kaliszewski (zuo) z...@chopin.edu.pl -- http://mail.python.org/mailman/listinfo/python-list
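The vars()/__slots__ caveat mentioned above can be demonstrated in a few lines (class names here are illustrative): hasattr works on any object, but vars() only exposes an instance __dict__, which __slots__ classes omit.

```python
class WithDict(object):
    pass

class WithSlots(object):
    __slots__ = ('attr1',)

a = WithDict()
a.attr1 = 'hello'
b = WithSlots()
b.attr1 = 'hello'

# hasattr answers for both kinds of instances...
has_both = hasattr(a, 'attr1') and hasattr(b, 'attr1')
# ...but only the __dict__-backed instance works with vars().
attr_in_vars = 'attr1' in vars(a)
try:
    vars(b)                  # no instance __dict__ here...
    slots_have_dict = True
except TypeError:
    slots_have_dict = False  # ...so vars() raises TypeError
```

So the vars()/__dict__ check is a deliberate narrowing (instance attributes only), while hasattr answers the broader "is this attribute reachable at all" question.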
Re: Total maximal size of data
On 25.01.10 21:49, AlexM wrote: On Jan 25, 2:37 pm, Alf P. Steinbach al...@start.no wrote: * AlexM: [...] I see 3 GB, and wonder why. Why not 2 GB or 4 GB? At a guess you were running this in 32-bit Windows. By default it reserves the upper two gig of address space for mapping system DLLs. It can be configured to use just 1 gig for that, and it seems like your system is, or you're using some other system with that kind of behavior, or, it's just arbitrary... Cheers hth., - Alf (by what mechanism do socks disappear from the washer?) No, it is 32-bit Linux. Alex I already answered that (as did Alf; the principle applies to both OSs): kernel memory space is mapped into the address space, reducing it by 1 or 2 GB. Diez -- http://mail.python.org/mailman/listinfo/python-list
Re: Terminal application with non-standard print
Grant Edwards wrote: On 2010-01-24, Rémi babedo...@yahoo.fr wrote: I would like to do a Python application that prints data to stdout, but not the common way. I do not want the lines to be printed after each other, but the old lines to be replaced with the new ones, like wget does it for example (when downloading a file you can see the percentage increasing on a same line).

    sys.stdout.write("Here's the first line")
    time.sleep(1)
    sys.stdout.write("\rAnd this line replaces it.")

That does not work on my system, because sys.stdout is line buffered. This causes both strings to be written when sys.stdout is closed because Python is shutting down. This works better:

    import sys, time

    sys.stdout.write("Here's the first line")
    sys.stdout.flush()
    time.sleep(1)
    sys.stdout.write("\rAnd this line replaces it.")
    sys.stdout.flush()

Hope this helps, -- HansM -- http://mail.python.org/mailman/listinfo/python-list
Re: list.pop(0) vs. collections.dequeue
Steve Howell showel...@yahoo.com writes: These are the reasons I am not using deque: Thanks for these. Now we are getting somewhere. 1) I want to use native lists, so that downstream methods can use them as lists. It sounds like that could be fixed by making the deque API a proper superset of the list API. 2) Lists are faster for accessing elements. It sounds like that could be fixed by optimizing deque somewhat. Also, have you profiled your application to show that accessing list elements is actually using a significant fraction of its runtime and that it would be slowed down noticably by deque? If not, it's a red herring. 3) I want to be able to insert elements into the middle of the list. I just checked, and was surprised to find that deque doesn't support this. I'd say go ahead and file a feature request to add it to deque. 4) I have no need for rotating elements. That's unpersuasive since you're advocating adding a feature to list that many others have no need for. Adding a word or two to a list is an O(1) addition to a data structure that takes O(N) memory to begin with. Yes, as mentioned, additive constants matter. Another way of looking at it is that you would need to have 250 or so lists in memory at the same time before the extra pointer was even costing you kilobytes of memory. I've often run applications with millions of lists, maybe tens of millions. Of course it would be 100's of millions if the machines were big enough. My consumer laptop has 3027908k of memory. I thought the idea of buying bigger machines was to solve bigger problems, not to solve the same problems more wastefully. -- http://mail.python.org/mailman/listinfo/python-list
Re: Total maximal size of data
On Jan 25, 2:42 pm, Diez B. Roggisch de...@nospam.web.de wrote: [...] Yes, I saw your posting. 32Bit is 32Bit. Do you know about PAE? http://de.wikipedia.org/wiki/Physical_Address_Extension Just because the system can deal with more overall memory, one process can't get more than 4 GB (or even less, through re-mapped memory), unless it uses specific APIs like the old hi-mem stuff under DOS. Diez Yes, I do. Good catch! I have PAE enabled, but I guess I have compiled python without extended memory. So I was looking in the wrong place. Thanks! AlexM -- http://mail.python.org/mailman/listinfo/python-list
Re: Total maximal size of data
On 25.01.10 22:22, AlexM wrote: [...] Yes, I do. Good catch! I have PAE enabled, but I guess I have compiled python without extended memory. So I was looking in the wrong place. You can't compile it with PAE. It's an extension that doesn't make sense in a general-purpose language. It is used by databases or some such, which can hold large structures in memory that don't need random access but can cope with windowing. Diez -- http://mail.python.org/mailman/listinfo/python-list
Re: list.pop(0) vs. collections.dequeue
Steve Howell showel...@yahoo.com writes: [...] My algorithm does exactly N pops and roughly N list accesses, so I would be going from N*N + N to N + N log N if switched to blist. Can you post your algorithm? It would be interesting to have a concrete use case to base this discussion on. -- Arnaud -- http://mail.python.org/mailman/listinfo/python-list
Re: list.pop(0) vs. collections.dequeue
On Mon, Jan 25, 2010 at 9:31 AM, Steve Howell showel...@yahoo.com wrote: Another way of looking at it is that you would need to have 250 or so lists in memory at the same time before the extra pointer was even costing you kilobytes of memory. My consumer laptop has 3027908k of memory. Umm, I think the issue here is that some people have use-cases which are talking of number of lists whole orders of magnitude higher then you're talking about here. In your program, maybe you only count the number of lists in the hundreds, and so a few extra words wouldn't matter. I have applications that have hundreds of thousands to millions of lists in memory-- and which have to be managed somewhat carefully to avoid the 32-bit memory allocation limit not smacking them (64-bit python isn't an option for me presently). I've never had an algorithm which needed to pop off the top of a list that I couldn't with utter triviality simply operate in the reverse. If Python's gonna get more memory hungry, I'd like to see how it benefits me in some way. I mean, Unladen Swallow is talking about boosting Python's memory need for the JIT, but I'm getting distinct performance improvements out of that. That sounds like a fair trade. You want Python to eat up a few more of my megs that I'd rather put to use elsewhere, because... you don't want to just reverse your algorithm to treat the FIFO as a LILO? Sure, I can break my program up to run in separate processes and double how much data I can have at once, with some IPC overhead. And if I got something out of it, I'd be happy to! Or you can alter your algorithm. Why must I be the one to change? :) --S -- http://mail.python.org/mailman/listinfo/python-list
Re: Total maximal size of data
On Jan 25, 3:31 pm, Diez B. Roggisch de...@nospam.web.de wrote: [...] You can't compile it with PAE. It's an extension that doesn't make sense in a general-purpose language. It is used by databases or some such, which can hold large structures in memory that don't need random access but can cope with windowing. Diez
Well, there actually is a way of building programs that may use more than 4GB of memory on 32-bit machines for Linux with highmem kernels, but I guess this would not work for python. I'll just switch to a 64-bit architecture. Thanks again. AlexM -- http://mail.python.org/mailman/listinfo/python-list
Re: list.pop(0) vs. collections.dequeue
On Jan 25, 1:32 pm, Arnaud Delobelle arno...@googlemail.com wrote: Steve Howell showel...@yahoo.com writes: [...] My algorithm does exactly N pops and roughly N list accesses, so I would be going from N*N + N to N + N log N if switched to blist. Can you post your algorithm? It would be interesting to have a concrete use case to base this discussion on. It is essentially this, in list_ass_slice:

    if (d < 0) { /* Delete -d items */
        if (ilow == 0) {
            a->popped -= d;
            a->ob_item -= d * sizeof(PyObject *);
            list_resize(a, Py_SIZE(a));
        }
        else {
            memmove(&item[ihigh+d], &item[ihigh],
                    (Py_SIZE(a) - ihigh)*sizeof(PyObject *));
            list_resize(a, Py_SIZE(a) + d);
        }
        item = a->ob_item;
    }

I am still working through the memory management issues, but when I have a complete working patch, I will give more detail. -- http://mail.python.org/mailman/listinfo/python-list
Re: list.pop(0) vs. collections.dequeue
On Jan 25, 1:32 pm, Arnaud Delobelle arno...@googlemail.com wrote: Steve Howell showel...@yahoo.com writes: [...] My algorithm does exactly N pops and roughly N list accesses, so I would be going from N*N + N to N + N log N if switched to blist. Can you post your algorithm? It would be interesting to have a concrete use case to base this discussion on. I just realized you meant the Python code itself. It is here: https://bitbucket.org/showell/shpaml_website/src/tip/shpaml.py -- http://mail.python.org/mailman/listinfo/python-list
Re: Total maximal size of data
Well, there actually is a way of building programs that may use more than 4GB of memory on 32-bit machines for Linux with highmem kernels, but I guess this would not work for python. As I said, it's essentially paging: http://kerneltrap.org/node/2450 And it's not something you can just compile in; you need explicit code support for it, which Python doesn't have, and neither do most other programs. So there is no magic compile option. I'll just switch to a 64-bit architecture. That's the solution, yes :) Diez -- http://mail.python.org/mailman/listinfo/python-list
Re: Terminal application with non-standard print
On 2010-01-25, Hans Mulder han...@xs4all.nl wrote: Grant Edwards wrote: On 2010-01-24, Rémi babedo...@yahoo.fr wrote: I would like to do a Python application that prints data to stdout, but not the common way. I do not want the lines to be printed after each other, but the old lines to be replaced with the new ones, like wget does it for example (when downloading a file you can see the percentage increasing on a same line).

    sys.stdout.write("Here's the first line")
    time.sleep(1)
    sys.stdout.write("\rAnd this line replaces it.")

That does not work on my system, because sys.stdout is line buffered. That's correct of course. This causes both strings to be written when sys.stdout is closed because Python is shutting down. This works better:

    import sys, time
    sys.stdout.write("Here's the first line")
    sys.stdout.flush()
    time.sleep(1)
    sys.stdout.write("\rAnd this line replaces it.")
    sys.stdout.flush()

Or you can tell Python to do unbuffered output:

    #!/usr/bin/python -u

-- Grant Edwards (grante at visi.com) Yow! I'm using my X-RAY VISION to obtain a rare glimpse of the INNER WORKINGS of this POTATO!! -- http://mail.python.org/mailman/listinfo/python-list
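Putting the carriage-return and flush advice together, a wget-style in-place percentage display can be sketched like this (the function name and output format are illustrative, not from the thread):

```python
import sys
import time

def show_progress(pct, out=sys.stdout):
    # '\r' moves the cursor back to column 0, so the next write
    # overwrites the old percentage; flush() defeats line buffering,
    # which would otherwise hold the text until a newline appears.
    out.write("\rdownloading: %3d%%" % pct)
    out.flush()

for pct in (0, 25, 50, 75, 100):
    show_progress(pct)
    time.sleep(0.01)
sys.stdout.write("\n")
```

The explicit flush() makes this work regardless of whether the interpreter was started with -u.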
site.py confusion
Inspired by the 'Default path for files' thread I tried to use sitecustomize in my code. What puzzles me is that site.py's main() is not executed. My sitecustomize.py is

    def main():
        print 'In Main()'

    main()

and the test program is

    import site
    #site.main()
    print 'Hi'

The output is

    $ python try.py
    Hi

When I uncomment the site.main() line the output is

    $ python try.py
    In Main()
    Hi

If I change import site to import sitecustomize the output is as above. What gives? Adding to the confusion, I found http://code.activestate.com/recipes/552729/ which contradicts http://docs.python.org/library/site.html George -- http://mail.python.org/mailman/listinfo/python-list
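A plausible explanation (my reading, not stated in the thread): a module's top-level code runs only on its first import. site is already imported during interpreter startup, so a later "import site" in a script is just a cache lookup in sys.modules and executes nothing; calling site.main() explicitly re-runs the path setup, which is what imports sitecustomize. The caching behavior itself is easy to demonstrate:

```python
import sys

# First import executes the module body and caches the module object.
import os
first = sys.modules['os']

# Second import is a sys.modules lookup; no module code runs again.
import os
cached = sys.modules['os'] is first
```

So to see sitecustomize's print, it has to be importable (e.g. on sys.path or in site-packages) at startup time, not merely imported later by hand.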
Re: Terminal application with non-standard print
On Jan 24, 11:27 am, Rémi babedo...@yahoo.fr wrote: Hello everyone, I would like to do a Python application that prints data to stdout, but not the common way. I do not want the lines to be printed after each other, but the old lines to be replaced with the new ones, like wget does it for example (when downloading a file you can see the percentage increasing on a same line). I looked into the curses module, but this seems adapted only to do a whole application, and the terminal history is not visible anymore when the application starts. Any ideas? Thanks, Remi You might want to take a look at the readline module. ~Sean -- http://mail.python.org/mailman/listinfo/python-list
Python, PIL and 16 bit per channel images
Does anyone know whether PIL can handle 16 bit per channel RGB images? PyPNG site (http://packages.python.org/pypng/ca.html) states PIL uses 8 bits per channel internally. Thanks, Pete -- http://www.petezilla.co.uk -- http://mail.python.org/mailman/listinfo/python-list
Re: list.pop(0) vs. collections.dequeue
On Mon, Jan 25, 2010 at 5:09 PM, Steve Howell showel...@yahoo.com wrote: On Jan 25, 1:32 pm, Arnaud Delobelle arno...@googlemail.com wrote: Steve Howell showel...@yahoo.com writes: [...] My algorithm does exactly N pops and roughly N list accesses, so I would be going from N*N + N to N + N log N if switched to blist. Can you post your algorithm? It would be interesting to have a concrete use case to base this discussion on. I just realized you meant the Python code itself. It is here: https://bitbucket.org/showell/shpaml_website/src/tip/shpaml.py -- http://mail.python.org/mailman/listinfo/python-list Looking at that code, I think you could solve your whole problem with a single call to reversed() (which is NOT the same as list.reverse()) -- http://mail.python.org/mailman/listinfo/python-list
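For readers following along, the distinction Chris is drawing can be sketched in a few lines: reversed() hands back a fresh iterator and leaves the list alone, while list.reverse() flips the list in place and returns None:

```python
nums = [1, 2, 3]

# reversed() returns a new iterator; the original list is untouched.
backwards = list(reversed(nums))
assert backwards == [3, 2, 1]
assert nums == [1, 2, 3]

# list.reverse() mutates in place and returns None, so any other code
# holding a reference to the same list sees the change too.
result = nums.reverse()
assert result is None
assert nums == [3, 2, 1]
```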
Re: Python, PIL and 16 bit per channel images
On Mon, Jan 25, 2010 at 5:04 PM, Peter Chant pet...@petezilla.co.uk wrote: Does anyone know whether PIL can handle 16 bit per channel RGB images? PyPNG site (http://packages.python.org/pypng/ca.html) states PIL uses 8 bits per channel internally. Thanks, Pete -- http://www.petezilla.co.uk -- http://mail.python.org/mailman/listinfo/python-list

Mode: The mode of an image defines the type and depth of a pixel in the image. The current release supports the following standard modes:

- *1* (1-bit pixels, black and white, stored with one pixel per byte)
- *L* (8-bit pixels, black and white)
- *P* (8-bit pixels, mapped to any other mode using a colour palette)
- *RGB* (3x8-bit pixels, true colour)
- *RGBA* (4x8-bit pixels, true colour with transparency mask)
- *CMYK* (4x8-bit pixels, colour separation)
- *YCbCr* (3x8-bit pixels, colour video format)
- *I* (32-bit signed integer pixels)
- *F* (32-bit floating point pixels)

http://www.pythonware.com/library/pil/handbook/concepts.htm -- http://mail.python.org/mailman/listinfo/python-list
Re: list.pop(0) vs. collections.dequeue
On Jan 25, 1:00 pm, Paul Rubin no.em...@nospam.invalid wrote: Steve Howell showel...@yahoo.com writes: These are the reasons I am not using deque: Thanks for these. Now we are getting somewhere. 1) I want to use native lists, so that downstream methods can use them as lists. It sounds like that could be fixed by making the deque API a proper superset of the list API. That is probably a good idea. 2) Lists are faster for accessing elements. It sounds like that could be fixed by optimizing deque somewhat. Also, have you profiled your application to show that accessing list elements is actually using a significant fraction of its runtime and that it would be slowed down noticeably by deque? If not, it's a red herring. I haven't profiled deque vs. list, but I think you are correct about pop() possibly being a red herring. It appears that the main bottleneck might still be the processing I do on each line of text, which in my case is regexes. For really large lists, I suppose memmove() would eventually start to become a bottleneck, but it's brutally fast when it just moves a couple kilobytes of data around. 3) I want to be able to insert elements into the middle of the list. I just checked, and was surprised to find that deque doesn't support this. I'd say go ahead and file a feature request to add it to deque. It might be a good thing to add just for consistency's sake. If somebody first implements an algorithm with lists, then discovers it has overhead relating to inserting/appending at the end of the list, then the more deque behaves like a list, the more easily they could switch over their code to deque. Not knowing much about deque's internals, I assume its performance for insert() would be O(N) just like list, although maybe a tiny bit slower. 4) I have no need for rotating elements. That's unpersuasive since you're advocating adding a feature to list that many others have no need for.
To be precise, I wasn't really advocating for a new feature but an internal optimization of a feature that already exists. Adding a word or two to a list is an O(1) addition to a data structure that takes O(N) memory to begin with. Yes, as mentioned, additive constants matter. Another way of looking at it is that you would need to have 250 or so lists in memory at the same time before the extra pointer was even costing you kilobytes of memory. I've often run applications with millions of lists, maybe tens of millions. Of course it would be 100's of millions if the machines were big enough. I bet even in your application, the amount of memory consumed by the PyListObjects themselves is greatly dwarfed by other objects, notably the list elements themselves, not to mention any dictionaries that your app uses. My consumer laptop has 3027908k of memory. I thought the idea of buying bigger machines was to solve bigger problems, not to solve the same problems more wastefully. Well, I am not trying to solve problems wastefully here. CPU cycles are also scarce, so it seems wasteful to do an O(N) memmove that could be avoided by storing an extra pointer per list. I also think that encouraging the use of pop(0) would actually make many programs more memory efficient, in the sense that you can garbage collect list elements earlier. Thanks for your patience in responding to me, despite the needlessly abrasive tone of my earlier postings. I am coming around to this thinking: 1) Summarize all this discussion and my lessons learned in some kind of document. It does not have to be a PEP per se, but I could provide a useful service to the community by listing pros/cons/etc. 2) I would still advocate for removing the warning against list.pop(0) from the tutorial. I agree with Steven D'Aprano that docs really should avoid describing implementation details in many instances (although I do not know what he thinks about this particular case).
I also think that the performance penalty for pop(0) is negligible for most medium-sized programs. For large-sized programs where you really want to swap in deque, I think most authors are beyond reading the tutorial and are looking elsewhere for insight on Python data structures. 3) I am gonna try to implement the patch anyway for my own edification. 4) I do think that there are ways that deque could be improved, but it is not high on my priority list. I will try to mention it in the PEP, though. -- http://mail.python.org/mailman/listinfo/python-list
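To put rough numbers on the list-versus-deque trade-off being discussed, here is a small, machine-dependent benchmark sketch. The timings will vary; the point is only the shape of the difference between pop(0), which shifts every remaining element on each call, and deque.popleft(), which is O(1):

```python
from collections import deque
from timeit import timeit

N = 50_000

def drain_list():
    items = list(range(N))
    while items:
        items.pop(0)      # memmove shifts all remaining elements left

def drain_deque():
    items = deque(range(N))
    while items:
        items.popleft()   # O(1): just advances the head of the deque

t_list = timeit(drain_list, number=1)
t_deque = timeit(drain_deque, number=1)
print("list.pop(0): %.4fs  deque.popleft(): %.4fs" % (t_list, t_deque))
```

For the couple-hundred-line files mentioned above the difference is indeed negligible; it is only the quadratic growth at large N that makes the list version visibly slower.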
Re: list.pop(0) vs. collections.dequeue
On Jan 25, 1:32 pm, Arnaud Delobelle arno...@googlemail.com wrote: Steve Howell showel...@yahoo.com writes: [...] My algorithm does exactly N pops and roughly N list accesses, so I would be going from N*N + N to N + N log N if switched to blist. Can you post your algorithm? It would be interesting to have a concrete use case to base this discussion on. These are the profile results for an admittedly very large file (430,000 lines), which shows that pop() consumes more time than any other low-level method. So pop() is not a total red herring. But I have to be honest and admit that I grossly overestimated the penalty for smaller files. Typical files are a couple hundred lines, and for that use case, pop()'s expense gets totally drowned out by regex handling. In other words, it's a lot cheaper to move a couple hundred pointers per list element pop than it is to apply a series of regexes to them, which shouldn't be surprising.

ncalls             tottime  percall  cumtime  percall  filename:lineno(function)
230001/1           149.508    0.001  222.432  222.432  shpaml.py:192(recurse)
42                  17.667    0.000   17.667    0.000  {method 'pop' of 'list' objects}
53                   8.428    0.000   14.125    0.000  shpaml.py:143(get_indented_block)
378                  7.877    0.000    7.877    0.000  {built-in method match}
5410125/5410121      5.697    0.000    5.697    0.000  {len}
30                   3.938    0.000   22.286    0.000  shpaml.py:96(convert_line)
95                   3.847    0.000    6.759    0.000  shpaml.py:29(INDENT)
95                   3.717    0.000   12.547    0.000  shpaml.py:138(find_indentation)
37                   3.495    0.000   20.204    0.000  shpaml.py:109(apply_jquery)
37                   3.322    0.000    6.528    0.000  {built-in method sub}
146                  2.575    0.000    2.575    0.000  {built-in method groups}

(full path /home/showell/workspace/shpaml_website/shpaml.py abbreviated to shpaml.py)

As an aside, I am a little surprised by how often I call len() and that it also takes a large chunk of time, but that's my problem to fix.
-- http://mail.python.org/mailman/listinfo/python-list
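For anyone who wants to reproduce this kind of table, output like the one in the post above comes from the standard profiler. A minimal sketch (the work() function here is a made-up stand-in for the shpaml.py run, not the real code):

```python
import cProfile
import io
import pstats
import re

# Hypothetical workload: pop lines off the front of a list and run a
# regex on each one, mirroring the pattern being profiled in the thread.
def work():
    pattern = re.compile(r"\w+")
    lines = ["hello world"] * 1000
    while lines:
        lines.pop(0)                  # the pop(0) under discussion
        pattern.match("hello world")

profiler = cProfile.Profile()
profiler.enable()
work()
profiler.disable()

# Emit the same ncalls/tottime/percall/cumtime columns shown above,
# sorted by total time, top five entries only.
out = io.StringIO()
pstats.Stats(profiler, stream=out).sort_stats("tottime").print_stats(5)
print(out.getvalue())
```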
Re: list.pop(0) vs. collections.dequeue
--- On Mon, 1/25/10, Chris Colbert sccolb...@gmail.com wrote: looking at that code, i think you could solve your whole problem with a single call to reversed() (which is NOT the same as list.reverse()) I do not think that's actually true. It does no good to pop elements off a copy of the list if there is still code that refers to the original list. So I think you really do want list.reverse(). The problem with reversing the lists is that it gets sliced and diced and passed around to other methods, one of which, html_block_tag, recursively calls back to the main method. So you could say that everybody just has to work with a reversed list, but in my mind, that would be just backward and overly complicated. I am not completely ruling out the approach, though. The idea of modelling the program essentially as a stack has some validity, and it probably would run faster. https://bitbucket.org/showell/shpaml_website/src/tip/shpaml.py -- http://mail.python.org/mailman/listinfo/python-list
Re: list.pop(0) vs. collections.dequeue
On Sat, Jan 23, 2010 at 4:38 AM, Alf P. Steinbach al...@start.no wrote: snip Hm, it would be nice if the Python docs offered complexity (time) guarantees in general... Cheers, - Alf This would be a very welcome improvement IMHO, especially in collections. Geremy Condra -- http://mail.python.org/mailman/listinfo/python-list
RE: ctypes for AIX
Chris, Thanks for responding to my email. I apologize for the remark about Python only being developed for Windows. I got that impression when I was looking at the ActivePython web site and saw that the version of Python they had available was not supported on very many Unix systems. I should not make general statements based on only one web site. After reading your email I decided to see for myself what the issue was with compiling Python on AIX 5.3. This is the error I saw the first time I tried to use ctypes:

Python 2.4.3 (#1, Jul 17 2006, 20:00:23) [C] on aix5
Type "help", "copyright", "credits" or "license" for more information.
>>> import ctypes
Traceback (most recent call last):
  File "<stdin>", line 1, in ?
ImportError: No module named ctypes

This version of Python was downloaded and installed from ActivePython, and when I checked their web page it states that ctypes is not available on AIX. I then figured I would get a new copy of Python and install it on AIX. I downloaded Python 2.5.5c2 from http://www.python.org. I did the configure and make, which posted many errors in the ctypes function, which I guess is the reason that it does not get included in the final make. An example of the build error I get when doing the make is:

xlc_r -q64 -DNDEBUG -O -I. -I/s/users/cz030a/xferjunk/python/Python-2.5.5c2/./Include -Ibuild/temp.aix-5.3-2.5/libffi/include -Ibuild/temp.aix-5.3-2.5/libffi -I/s/users/cz030a/xferjunk/python/Python-2.5.5c2/Modules/_ctypes/libffi/src -I/s/users/cz030a/xferjunk/ots/python2.5/include -I. -IInclude -I./Include -I/s/users/cz030a/xferjunk/python/Python-2.5.5c2/Include -I/s/users/cz030a/xferjunk/python/Python-2.5.5c2 -c /s/users/cz030a/xferjunk/python/Python-2.5.5c2/Modules/_ctypes/_ctypes.c -o build/temp.aix-5.3-2.5/s/users/cz030a/xferjunk/python/Python-2.5.5c2/Modules/_ctypes/_ctypes.o

/s/users/cz030a/xferjunk/python/Python-2.5.5c2/Modules/_ctypes/_ctypes.c, line 2820.31: 1506-068 (W) Operation between types void* and int(*)(void) is not allowed.
/s/users/cz030a/xferjunk/python/Python-2.5.5c2/Modules/_ctypes/_ctypes.c, line 3363.28: 1506-280 (W) Function argument assignment between types int(*)(void) and void* is not allowed.
/s/users/cz030a/xferjunk/python/Python-2.5.5c2/Modules/_ctypes/_ctypes.c, line 4768.67: 1506-280 (W) Function argument assignment between types void* and void*(*)(void*,const void*,unsigned long) is not allowed.
/s/users/cz030a/xferjunk/python/Python-2.5.5c2/Modules/_ctypes/_ctypes.c, line 4769.66: 1506-280 (W) Function argument assignment between types void* and void*(*)(void*,int,unsigned long) is not allowed.

I do not have sufficient knowledge to know how to fix this. I would think that this error is somehow related to compiling on AIX. If you have any suggestions on how to correct this problem, I would appreciate it. Jim Waddle KIT-D 425-785-5194

-----Original Message----- From: ch...@rebertia.com [mailto:ch...@rebertia.com] On Behalf Of Chris Rebert Sent: Sunday, January 24, 2010 7:31 AM To: Waddle, Jim Cc: python-list@python.org Subject: Re: ctypes for AIX

On Sun, Jan 24, 2010 at 5:54 AM, Waddle, Jim jim.wad...@boeing.com wrote: I need to use ctypes with python running on AIX. According to the ctypes readme, ctypes is based on libffi, which according to its website, supports AIX for PowerPC64. So, perhaps you could state what the actual error or problem you're encountering is? It is theoretically possible the ctypes-bundled libffi is either outdated or had the AIX-specific bits removed; I don't know, I'm not a CPython dev. It appears that python is being developed mostly for windows. No, not really; your statement is especially ironic considering one of Python's primary areas of use is for web applications as part of a LAMP stack. Is there a policy concerning getting functions like ctypes working on AIX. No idea. Someone will probably chime in though. Cheers, Chris -- http://blog.rebertia.com -- http://mail.python.org/mailman/listinfo/python-list
Re: list.pop(0) vs. collections.dequeue
Steve Howell wrote: On Sat, 23 Jan 2010 09:57:04 -0500, Roy Smith wrote: So, we're right back to my statement earlier in this thread that the docs are deficient in that they describe behavior with no hint about cost. Given that, it should be no surprise that users make incorrect assumptions about cost. No hint? Looking at the below snippet of docs -- "not efficient" and "slow" sound like pretty good hints to me. Bringing this thread full circle, does it make sense to strike this passage from the tutorial?: ''' It is also possible to use a list as a queue, where the first element added is the first element retrieved (“first-in, first-out”); however, lists are not efficient for this purpose. While appends and pops from the end of list are fast, doing inserts or pops from the beginning of a list is slow (because all of the other elements have to be shifted by one). ''' I think points #3 and #6 possibly apply. Regarding points #2 and #4, the tutorial is at least not overly technical or specific; it just explains the requirement to shift other elements one by one in simple layman's terms. I think the paragraph is fine. Instead of waiting for the (hundreds of?) posts wondering why making a FIFO queue from a list is so slow, and what's wrong with Python, etc, etc, it points out up front that yes you can, and here's why you don't want to. This does not strike me as too much knowledge. ~Ethan~ -- http://mail.python.org/mailman/listinfo/python-list
Re: list.pop(0) vs. collections.dequeue
* Ethan Furman: Steve Howell wrote: On Sat, 23 Jan 2010 09:57:04 -0500, Roy Smith wrote: So, we're right back to my statement earlier in this thread that the docs are deficient in that they describe behavior with no hint about cost. Given that, it should be no surprise that users make incorrect assumptions about cost. No hint? Looking at the below snippet of docs -- "not efficient" and "slow" sound like pretty good hints to me. Bringing this thread full circle, does it make sense to strike this passage from the tutorial?: ''' It is also possible to use a list as a queue, where the first element added is the first element retrieved (“first-in, first-out”); however, lists are not efficient for this purpose. While appends and pops from the end of list are fast, doing inserts or pops from the beginning of a list is slow (because all of the other elements have to be shifted by one). ''' I think points #3 and #6 possibly apply. Regarding points #2 and #4, the tutorial is at least not overly technical or specific; it just explains the requirement to shift other elements one by one in simple layman's terms. I think the paragraph is fine. Instead of waiting for the (hundreds of?) posts wondering why making a FIFO queue from a list is so slow, and what's wrong with Python, etc, etc, it points out up front that yes you can, and here's why you don't want to. Is the tutorial regarded as part of the language specification? I understand that the standard library docs are part (e.g. 'object' is only described there), and that at least some PEPs are. Cheers, - Alf -- http://mail.python.org/mailman/listinfo/python-list
Re: list.pop(0) vs. collections.dequeue
On Sat, Jan 23, 2010 at 4:38 AM, Alf P. Steinbach al...@start.no wrote: Hm, it would be nice if the Python docs offered complexity (time) guarantees in general... Last time it came up, I don't think there was any core developer interest in putting complexity guarantees in the Python Language Reference. Some folks did document the behavior of most of the common CPython containers though: http://wiki.python.org/moin/TimeComplexity -- Jerry -- http://mail.python.org/mailman/listinfo/python-list
TypeError not caught by except statement
Hi, the except clause is not able to catch the TypeError exception that occurs in the code below. The call log.info("refer", ret) in the try block throws a TypeError which is not caught. Also, sometimes the process hangs.

import time
import logging
from time import strftime, gmtime

log = logging.getLogger()
logFile = strftime("%d-%b-%Y-", gmtime()) + str(int(time.time())) + "-Log.log"
log.setLevel(logging.NOTSET)
fh = logging.FileHandler(logFile)
logFileLevel = logging.DEBUG
fh.setLevel(logFileLevel)
format_string = ('%(process)d %(thread)d %(asctime)-15s %(levelname)-5s at '
                 '%(filename)-15s in %(funcName)-10s at line %(lineno)-3d %(message)s')
fh.setFormatter(logging.Formatter(format_string))
log.addHandler(fh)

try:
    log.info("start")
    log.info("refer", ret)
    log.info("end")
except TypeError:
    log.exception("Exception raised")

-- OUTPUT message:

Traceback (most recent call last):
  File "C:\Python26\lib\logging\__init__.py", line 768, in emit
    msg = self.format(record)
  File "C:\Python26\lib\logging\__init__.py", line 648, in format
    return fmt.format(record)
  File "C:\Python26\lib\logging\__init__.py", line 436, in format
    record.message = record.getMessage()
  File "C:\Python26\lib\logging\__init__.py", line 306, in getMessage
    msg = msg % self.args
TypeError: not all arguments converted during string formatting

-- http://mail.python.org/mailman/listinfo/python-list
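The reason the except clause never fires is that the TypeError is raised later, inside the handler's emit() while it lazily formats msg % args, and logging traps it via Handler.handleError(), printing the traceback to stderr instead of propagating it to the caller. The fix is to give the format string a placeholder for each extra argument. A small sketch:

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger(__name__)
ret = 42

# Broken: "refer" contains no % placeholder, so logging's deferred
# formatting (msg % args) raises TypeError inside Handler.emit(), where
# logging swallows it -- a surrounding try/except never sees it.
# log.info("refer", ret)

# Correct: one placeholder per extra argument.
log.info("refer %s", ret)

# The deferred formatting can be checked directly on a LogRecord:
rec = logging.LogRecord("demo", logging.INFO, __file__, 1,
                        "refer %s", (ret,), None)
assert rec.getMessage() == "refer 42"
```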
Re: Sikuli: the coolest Python project I have yet seen...
OK, here's an idea. I used to do screen scraping scripts and run them as CGI scripts with an HTML user interface. Why not run Sikuli on Jython on a JVM running on my server, so that I can do my screen scraping with Sikuli? I can take user inputs by using CGI forms from a web client, process the requests using a Sikuli script on the server, and send the results back to the web client. This sounds like fun to me, and it would be easier to highlight and capture the appropriate screen information on targeted web sites using Sikuli than to hand-code location information or even use Beautiful Soup. -- http://mail.python.org/mailman/listinfo/python-list
Re: easy_install error ...
FYI, I figured out what I was doing wrong. After reading the setuptools docs, I took out the quotes around the package name and it works; see details below:

python setup.py easy_install -m docutils==0.4
running easy_install
Searching for docutils==0.4
Best match: docutils 0.4
Processing docutils-0.4-py2.6.egg
Removing docutils 0.4 from easy-install.pth file
Installing rst2html.py script to C:\Python26\Scripts
Installing rst2latex.py script to C:\Python26\Scripts
Installing rst2newlatex.py script to C:\Python26\Scripts
Installing rst2pseudoxml.py script to C:\Python26\Scripts
Installing rst2s5.py script to C:\Python26\Scripts
Installing rst2xml.py script to C:\Python26\Scripts
Using c:\python26\lib\site-packages\docutils-0.4-py2.6.egg

Because this distribution was installed --multi-version, before you can import modules from this package in an application, you will need to 'import pkg_resources' and then use a 'require()' call similar to one of these examples, in order to select the desired version:

pkg_resources.require("docutils")        # latest installed version
pkg_resources.require("docutils==0.4")   # this exact version
pkg_resources.require("docutils>=0.4")   # this version or higher

Processing dependencies for docutils==0.4
Finished processing dependencies for docutils==0.4

I hope this helps others having a similar issue. -- http://mail.python.org/mailman/listinfo/python-list
Re: list.pop(0) vs. collections.dequeue
Steve Howell showel...@yahoo.com writes: I haven't profiled deque vs. list, but I think you are correct about pop() possibly being a red herring For really large lists, I suppose memmove() would eventually start to become a bottleneck, but it's brutally fast when it just moves a couple kilobytes of data around. One way to think of Python is as a scripting wrapper around a bunch of C functions, rather than as a full-fledged programming language. Viewed that way, list operations like pop(0) are essentially constant time unless the list is quite large. By that I mean you can implement classic structures like doubly-linked lists using Python tuples, but even though inserting into the middle of them is theoretically O(1), the memmove's of the native list operations will be much faster in practice. Programs dealing with large lists (more than a few thousand elements) are obviously different and if your program is using such large lists, you have to plan a little differently when writing the code. I've often run applications with millions of lists I bet even in your application, the amount of memory consumed by the PyListObjects themselves is greatly dwarfed by other objects, notably the list elements themselves Such lists often would have just one element or even be empty. For example, you might have a dictionary mapping names to addresses. Most people have just one address, but some might have no address, and a few might have more than one address, so you would have a list of addresses for each name. Of course the dictionary slots in that example would also use space. Well, I am not trying to solve problems wastefully here. CPU cycles are also scarce, so it seems wasteful to do an O(N) memmove that could be avoided by storing an extra pointer per list. Realistically the CPython interpreter is so slow that the memmove is unnoticeable, and Python (at least CPython) just isn't all that conducive to writing fast code.
It makes up for this in programmer productivity for the many sorts of problems in which moderate speed is acceptable. Thanks for your patience in responding to me, despite the needlessly abrasive tone of my earlier postings. I wondered whether you might have come over from the Lisp newsgroups, which are pretty brutal. We try to be friendlier here (not that we're always successful). Anyway, welcome. 1) Summarize all this discussion and my lessons learned in some kind of document. It does not have to be a PEP per se, but I could provide a useful service to the community by listing pros/cons/etc. I suppose that can't hurt, but there are probably other areas (multicore parallelism is a perennial one) of much higher community interest. http://wiki.python.org/moin/ is probably a good place to put such a document. 2) I would still advocate for removing the warning against list.pop(0) from the tutorial. I agree with Steven D'Aprano that docs really should avoid describing implementation details in many instances On general principles I agree with Alex Stepanov that the running time of a function should be part of its interface (nobody wants to use a stack where popping an element takes quadratic time) and therefore should be stated in the docs. Python just has a weird incongruence between the interpreter layer and the C layer, combined with a library well-evolved for everyday problem sizes, so the traditional asymptotic approach to algorithm selection often doesn't give the best practical choice. I don't feel like looking up what the tutorial says about pop(0), but if it just warns against it without qualification, it should probably be updated. -- http://mail.python.org/mailman/listinfo/python-list
Re: list.pop(0) vs. collections.dequeue
On Jan 24, 11:28 am, a...@pythoncraft.com (Aahz) wrote: In article b4440231-f33f-49e1-9d6f-5fbce0a63...@b2g2000yqi.googlegroups.com, Steve Howell showel...@yahoo.com wrote: Even with realloc()'s brokenness, you could improve pop(0) in a way that does not impact list access at all, and the patch would not change the time complexity of any operation; it would just add negligible extra bookkeeping within list_resize() and a few other places. Again, your responsibility is to provide a patch and a spectrum of benchmarking tests to prove it. Then you would still have to deal with the objection that extensions use the list internals -- that might be an okay sell given the effort otherwise required to port extensions to Python 3, but that's not the way to bet. Ok, I just submitted a patch to python-dev that illustrates a 100x speedup on an admittedly artificial program. It still has a long way to go, but it demonstrates proof of concept. I'm done for the day, but tomorrow I will try to polish it up and improve it, even if it's doomed for rejection. Apologies to all I have offended in this thread. I frankly found some of the pushback to be a bit hasty and disrespectful, but I certainly overreacted to some of the criticism. And now I'm in the awkward position of asking the people I offended to help me with the patch. If anybody can offer me a hand in understanding some of CPython's internals, particularly with regard to memory management, it would be greatly appreciated. (Sorry I don't have a link to the python-dev posting; it is not showing up in the archives yet for some reason.) -- http://mail.python.org/mailman/listinfo/python-list
Re: Splitting text at whitespace but keeping the whitespace in thereturned list
MRAB pyt...@mrabarnett.plus.com wrote in message news:mailman.1362.1264353878.28905.python-l...@python.org... pyt...@bdurham.com wrote: I need to parse some ASCII text into 'word' sized chunks of text AND collect the whitespace that separates the split items. By 'word' I mean any string of characters separated by whitespace (newlines, carriage returns, tabs, spaces, soft-spaces, etc). This means that my split text can contain punctuation and numbers -- just not whitespace. The split(None) method works fine for returning the word-sized chunks of text, but destroys the whitespace separators that I need. Is there a variation of split() that returns delimiters as well as tokens? I'd use the re module:

>>> import re
>>> re.split(r'(\s+)', "Hello world!")
['Hello', ' ', 'world!']

also, partition works, though it returns a tuple instead of a list:

>>> s = 'hello world'
>>> s.partition(' ')
('hello', ' ', 'world')

--Tim Arnold -- http://mail.python.org/mailman/listinfo/python-list
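To spell out MRAB's suggestion: wrapping the delimiter pattern in a capturing group makes re.split() keep the separators, so the original text can even be reassembled losslessly. A small sketch:

```python
import re

text = "Hello   world!\tfoo"

# A capturing group around the delimiter pattern makes re.split()
# return the separators interleaved with the tokens.
parts = re.split(r"(\s+)", text)
assert parts == ["Hello", "   ", "world!", "\t", "foo"]

# Because nothing is thrown away, joining reproduces the input exactly.
assert "".join(parts) == text
```

str.partition, by contrast, splits only at the first occurrence of the separator, which is why it returns a fixed three-element tuple rather than a list.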
Re: list.pop(0) vs. collections.dequeue
On Jan 25, 8:31 pm, Paul Rubin no.em...@nospam.invalid wrote: Steve Howell showel...@yahoo.com writes: I haven't profiled deque vs. list, but I think you are correct about pop() possibly being a red herring For really large lists, I suppose memmove() would eventually start to become a bottleneck, but it's brutally fast when it just moves a couple kilobytes of data around. One way to think of Python is as a scripting wrapper around a bunch of C functions, rather than as a full-fledged programming language. Viewed that way, list operations like pop(0) are essentially constant time unless the list is quite large. By that I mean you can implement classic structures like doubly-linked lists using Python tuples, but even though inserting into the middle of them is theoretically O(1), the memmove's of the native list operations will be much faster in practice. Programs dealing with large lists (more than a few thousand elements) are obviously different and if your program is using such large lists, you have to plan a little differently when writing the code. Thanks. That is a good way of looking at it. Realistically the CPython interpreter is so slow that the memmove is unnoticeable, and Python (at least CPython) just isn't all that conducive to writing fast code. It makes up for this in programmer productivity for the many sorts of problems in which moderate speed is acceptable. Definitely, and moderate speed is enough in a surprisingly large number of applications. Thanks for your patience in responding to me, despite the needlessly abrasive tone of my earlier postings. I wondered whether you might have come over from the Lisp newsgroups, which are pretty brutal. We try to be friendlier here (not that we're always successful). Anyway, welcome. :) 1) Summarize all this discussion and my lessons learned in some kind of document. It does not have to be a PEP per se, but I could provide a useful service to the community by listing pros/cons/etc.
I suppose that can't hurt, but there are probably other areas (multicore parallelism is a perennial one) of much higher community interest. http://wiki.python.org/moin/ is probably a good place to put such a document. Ok, that's where I'll start. 2) I would still advocate for removing the warning against list.pop(0) from the tutorial. I agree with Steven D'Aprano that docs really should avoid describing implementation details in many instances On general principles I agree with Alex Stepanov that the running time of a function should be part of its interface (nobody wants to use a stack where popping an element takes quadratic time) and therefore should be stated in the docs. Python just has a weird incongruence between the interpreter layer and the C layer, combined with a library well-evolved for everyday problem sizes, so the traditional asymptotic approach to algorithm selection often doesn't give the best practical choice. I don't feel like looking up what the tutorial says about pop(0), but if it just warns against it without qualification, it should probably be updated. Here it is: http://docs.python.org/tutorial/datastructures.html#using-lists-as-queues My opinion is that the warning should be either removed or qualified, but it is probably fine as written. ''' It is also possible to use a list as a queue, where the first element added is the first element retrieved (“first-in, first-out”); however, lists are not efficient for this purpose. While appends and pops from the end of list are fast, doing inserts or pops from the beginning of a list is slow (because all of the other elements have to be shifted by one). ''' The qualifications would be that deque lacks some features that list has, and that the shift-by-one operation is actually a call to memmove() and may not apply to all implementations. -- http://mail.python.org/mailman/listinfo/python-list
Authenticated encryption with PyCrypto
Just got done reading this thread: http://groups.google.com/group/comp.lang.python/browse_thread/thread/b31a5b5f58084f12/0e09f5f5542812c3 and I'd appreciate feedback on this recipe: http://code.activestate.com/recipes/576980/

Of course, it does not meet all of the requirements set forth by the OP in the referenced thread (the pycrypto dependency is a problem), but it is an attempt to provide a simple interface for performing strong, password-based encryption. Are there already modules out there that provide such a simple interface? If there are, they seem to be hiding somewhere out of Google's view.

I looked at ezPyCrypto, but it seemed to require public and private keys, which was not convenient in my situation... maybe password-based encryption is trivial to do with ezPyCrypto as well? In addition to ezPyCrypto, I looked at Google's keyczar, but despite the claims of the documentation, the API seemed overly complicated.

Is it possible to have a simple API for an industrial-strength encryption module? If not, is it possible to document that complicated API such that a non-cryptographer could use it and feel confident that he hadn't made a critical mistake?

Also, slightly related, is there an easy way to get the sha/md5 deprecation warnings emitted by PyCrypto in Python 2.6 to go away?

~ Daniel -- http://mail.python.org/mailman/listinfo/python-list
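To make the discussion concrete, here is a sketch (not Daniel's recipe, and not PyCrypto) of the general shape a password-based authenticated-encryption interface takes: derive separate encryption and MAC keys from the password, then encrypt-then-MAC. It uses only today's standard library; the cipher step itself is elided with a placeholder, since a real implementation would encrypt with AES via PyCrypto or similar. All function names here are made up for illustration.

```python
import hashlib
import hmac
import os

def derive_keys(password, salt, iterations=100_000):
    # Stretch the password into 64 bytes, then split into an
    # encryption key and an *independent* MAC key.
    material = hashlib.pbkdf2_hmac(
        "sha256", password.encode(), salt, iterations, dklen=64)
    return material[:32], material[32:]

def seal(password, message):
    salt = os.urandom(16)
    enc_key, mac_key = derive_keys(password, salt)
    ciphertext = message  # placeholder: real code encrypts with enc_key here
    # Encrypt-then-MAC: authenticate the salt and ciphertext together.
    tag = hmac.new(mac_key, salt + ciphertext, hashlib.sha256).digest()
    return salt + tag + ciphertext

def open_sealed(password, blob):
    salt, tag, ciphertext = blob[:16], blob[16:48], blob[48:]
    enc_key, mac_key = derive_keys(password, salt)
    expected = hmac.new(mac_key, salt + ciphertext, hashlib.sha256).digest()
    # Constant-time comparison to avoid timing leaks.
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed: wrong password or tampering")
    return ciphertext  # placeholder: real code decrypts with enc_key here
```

A wrong password or a flipped ciphertext bit changes the expected tag, so open_sealed() refuses to return anything rather than silently decrypting garbage, which is the property the recipe is after.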
Re: [Edu-sig] some turtle questions
Hi Brian --

If you wanna go to a lot of work, but not a huge amount, write a wrapper class for the Standard Library turtle that intercepts its commands and updates an on-board data structure representing the screen, pixel by pixel: track self position, and keep color info stashed per pixel. That's a lot of data, depending on screen resolution. Consider a thick line option if you have one, make your turtle wide body. Or stay with thin.

So then if you go turtle.forward(10) you will send it to your self-made forward method. Stop and smell the pixels, see what color was stashed there, either by another turtle (! -- shared data structure) or by this turtle, or maybe it's still the default untrammeled color. You can add new methods, like glide or explode, that translate to the underlying turtle somehow -- use your imagination.

Kirby

On Sun, Jan 24, 2010 at 7:29 AM, Brian Blais bbl...@bryant.edu wrote: Hello, I am trying to think of things to do with the turtle module with my students, and I have some ideas where I am not sure whether the turtle module can do it.

1) is there a way to determine the current screen pixel color? I am thinking about having the turtle go forward until it reaches an object, say a red circle. I can probably do this by making circle objects (drawn with turtles themselves) which know their own position, and check against this info. But I thought it might be useful also for the turtle to know.

2) is there a way to put a limit on the extent the turtle can travel? it seems I can keep moving off of the screen. Is there a way to make it so that a forward(50) command, at the edge, either raises an exception (at the wall) or simply doesn't move the turtle because of the limit?

thanks! bb -- Brian Blais bbl...@bryant.edu http://web.bryant.edu/~bblais http://bblais.blogspot.com/ ___ Edu-sig mailing list edu-...@python.org http://mail.python.org/mailman/listinfo/edu-sig -- http://mail.python.org/mailman/listinfo/python-list
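A sketch of the bookkeeping half of Kirby's idea: a class that records which color was stamped at each pixel in a data structure shared by all turtles, which is what lets one turtle "smell" what another drew (Brian's question 1). The real wrapper would also delegate each call to an underlying turtle.Turtle; that delegation is omitted here so the example runs without a display, and the class and method names are made up for illustration.

```python
import math

class PixelLedgerTurtle:
    # Shared by ALL turtles: (x, y) pixel -> color name.
    canvas = {}

    def __init__(self, color="black"):
        self.x, self.y = 0.0, 0.0
        self.heading = 0.0   # degrees, 0 = east, like the turtle module
        self.color = color

    def forward(self, distance):
        # Walk one pixel at a time, stamping our color as we go.
        # (A real wrapper would also call the wrapped turtle's forward.)
        dx = math.cos(math.radians(self.heading))
        dy = math.sin(math.radians(self.heading))
        for _ in range(int(distance)):
            self.x += dx
            self.y += dy
            self.canvas[(round(self.x), round(self.y))] = self.color

    def left(self, angle):
        self.heading += angle

    def color_under(self):
        # Answers question 1: what color is stashed where we stand?
        # None means the pixel is still the default, untrammeled color.
        return self.canvas.get((round(self.x), round(self.y)))
```

Question 2 (a travel limit) would fit the same pattern: the wrapper's forward() checks the would-be position against a boundary before delegating, and raises or clamps instead of moving.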
[issue7775] str.rpartition(sep) -> (tail, sep, head)
kai zhu kaizhu...@gmail.com added the comment:

documentation bug; the docstring should be changed to: S.rpartition(sep) -> (head, sep, tail)

help(str.rpartition)
Help on method_descriptor:
rpartition(...)
    S.rpartition(sep) -> (tail, sep, head)
    Search for the separator sep in S, starting at the end of S, and return the part before it, the separator itself, and the part after it. If the separator is not found, return two empty strings and S.

-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue7775 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
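A quick interactive check confirms the report: the method's behavior matches its prose description, so it is only the (tail, sep, head) summary line in the docstring that is backwards.

```python
# rpartition splits at the LAST occurrence of the separator and
# returns (head, sep, tail) -- the part before the separator first.
parts = "usr/local/bin".rpartition("/")
print(parts)                   # -> ('usr/local', '/', 'bin')

# When the separator is absent, the two empty strings come first and
# S comes last, exactly as the docstring's prose says.
print("abc".rpartition("/"))   # -> ('', '', 'abc')
```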
[issue7775] str.rpartition(sep) -> (tail, sep, head)
Changes by Florent Xicluna la...@yahoo.fr: -- stage: -> needs patch ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue7775 ___
[issue7775] str.rpartition(sep) -> (tail, sep, head)
July Tikhonov july.t...@gmail.com added the comment: Not only str, but also bytearray, unicode, and bytes. -- keywords: +patch nosy: +july Added file: http://bugs.python.org/file15998/rpartition-docstrings-trunk.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue7775 ___
[issue7775] str.rpartition(sep) -> (tail, sep, head)
Changes by July Tikhonov july.t...@gmail.com: Added file: http://bugs.python.org/file15999/rpartition-docstrings-py3k.diff ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue7775 ___