[issue7978] SocketServer doesn't handle syscall interruption
Jerzy Kozera jerzy.koz...@gmail.com added the comment:

I forgot to mention my patch is 3.3-only, sorry; it depends on changes from #12555 (http://hg.python.org/cpython/rev/41a1de81ef2b#l18.21, to be precise). To support 3.2 and 2.7: (1) select.error must be caught, as in the original patch; (2) e.args[0] must be used, since select.error doesn't have an 'errno' attribute. Should I prepare the patch for 3.2 and 2.7?

Regarding not updating the timeout, it was already mentioned above. As an afterthought, though, it may be worrying that if the process receives repeated signals with an interval between them shorter than the timeout, we could fall into an infinite loop of select() calls when it should time out; but that is probably a very obscure case.

-- ___ Python tracker rep...@bugs.python.org http://bugs.python.org/issue7978 ___ ___ Python-bugs-list mailing list Unsubscribe: http://mail.python.org/mailman/options/python-bugs-list/archive%40mail-archive.com
[issue7978] SocketServer doesn't handle syscall interruption
Jerzy Kozera jerzy.koz...@gmail.com added the comment:

I've updated the patch according to suggestions from Gregory P. Smith. Thanks to a change from #12555 (PEP 3151), just checking for OSError is now enough. (I've decided to use a mocked select() instead of calling alarm(), to avoid depending on timing.)

-- nosy: +Jerzy.Kozera Added file: http://bugs.python.org/file25144/socketserver_eintr_20120406.diff
[issue9742] Python 2.7: math module fails to build on Solaris 9
Jerzy Kozera jerzy.koz...@gmail.com added the comment:

Running

    gcc -Wl,-R/usr/local/lib,-R/usr/lib -o python Python/pymath.o Modules/python.o libpython2.7.a -lresolv -lsocket -lnsl -lrt -ldl -lpthread -lm
    mv build/lib.solaris-2.8-sun4u-2.7/math_failed.so build/lib.solaris-2.8-sun4u-2.7/math.so

seems to have made the math module import correctly and work:

    bash-2.03$ ./python
    Python 2.7 (r27:82500, Nov 23 2010, 14:49:30) [GCC 3.4.6] on sunos5
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import math
    >>> math.floor(2.4)
    2.0

I suppose it's more a workaround than a solution, but hopefully it makes using the math module possible, and it confirms the suggestion that there might be something wrong with ar/gcc linking the .a file.

-- nosy: +Jerzy.Kozera
[issue6832] Outputting unicode crashes when printing to file on Linux
New submission from Jerzy jer...@genesilico.pl:

Hi. When I am outputting unicode strings to the terminal my script works OK, but when I redirect the output to a file I get a crash:

    $ python mailing/message_sender.py -l Bia
    Białystok
    $ python mailing/message_sender.py -l Bia > ~/tmp/aaa.txt
    Traceback (most recent call last):
      File "mailing/message_sender.py", line 71, in <module>
        list_groups(unicode(args[0], 'utf-8'))
      File "mailing/message_sender.py", line 53, in list_groups
        print group[1].name
    UnicodeEncodeError: 'ascii' codec can't encode character u'\u0142' in position 3: ordinal not in range(128)

-- components: Unicode messages: 92196 nosy: Orlowski severity: normal status: open title: Outputting unicode crashes when printing to file on Linux type: crash versions: Python 2.6
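For what it's worth, the usual workaround is to encode explicitly (or wrap sys.stdout in a codecs writer) instead of relying on the stream's default codec. A minimal sketch, using an in-memory byte buffer to stand in for a redirected stdout:

```python
import io

def write_utf8(stream, text):
    # Encode explicitly, so the result does not depend on whether the
    # destination is a terminal or a redirected file.
    stream.write(text.encode("utf-8"))

out = io.BytesIO()  # stands in for a byte-oriented, redirected stdout
write_utf8(out, u"Bia\u0142ystok")  # u'\u0142' is the character from the traceback
```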
[issue6832] Outputting unicode crashes when printing to file on Linux
Jerzy jer...@genesilico.pl added the comment:

I know how to make it work. The question is why outputting to a file makes it crash when outputting to the terminal does not. I have never seen "$program > file" behave differently from "$program" in any other language.

Jerzy Orlowski

Benjamin Peterson benja...@python.org added the comment: You have to use an encoding that's not ascii then. -- nosy: +benjamin.peterson resolution: - works for me status: open - closed
[issue6832] Outputting unicode crashes when printing to file on Linux
Jerzy jer...@genesilico.pl added the comment:

Well, I would suggest using the terminal encoding as the default when redirecting. In my opinion sys.stdin and sys.stdout should always have the terminal encoding. Alternatively, you could make the function sys.setdefaultencoding() visible, to change it in a reasonable way.

Jerzy

Georg Brandl ge...@python.org added the comment: When output goes to a terminal, Python can determine its encoding. For a file, it cannot, therefore it refuses to guess. Also, many programs behave differently when used with redirection; namely, all those that use `isatty()` to determine if stdout is a terminal. -- nosy: +georg.brandl
[issue6832] Outputting unicode crashes when printing to file on Linux
Jerzy jer...@genesilico.pl added the comment:

OK, I give up. The problem is that one might test a program on the terminal, think that everything is running OK, and then spend a considerable amount of time trying to find the problem later. Another approach: couldn't utf-8 be set as the default encoding for all inputs and outputs? I know that some of my questions are caused by the fact that I do not understand how python works. But you have to bear in mind that most people don't. Such behaviour of Python (see also http://bugs.python.org/issue5092) is illogical, in the common-sense view of ordinary people. If the interpreter does something that seems illogical to me, I am more eager to switch to another language.

Jerzy

Martin v. Löwis wrote: Martin v. Löwis mar...@v.loewis.de added the comment: Using the terminal encoding for sys.stdout does not work in the general case, as a (background) process may not *have* a controlling terminal (such as a CGI script, a cron job, or a Windows service). That Python recognizes the terminal encoding is primarily a convenience feature for the interactive mode. Exposing sys.setdefaultencoding is not implementable in a reasonable way. -- nosy: +loewis
[issue6832] Outputting unicode crashes when printing to file on Linux
Jerzy jer...@genesilico.pl added the comment:

Good point! I will give it a try.

Jerzy

Martin v. Löwis wrote: Martin v. Löwis mar...@v.loewis.de added the comment: If you want to switch to a different language, consider switching to Python 3. There, all strings are Unicode strings, and files opened in text mode always use the locale encoding.
P3 weird sys.stdout.write()
I've stumbled upon the following in Python 3:

    Python 3.0.1+ (r301:69556, Apr 15 2009, 15:59:22)
    [GCC 4.3.3] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import sys
    >>> sys.stdout.write('')
    0
    >>> sys.stdout.write('something')
    something9

write() is appending the length of the string to its output. That's not how it worked in 2.6. What's the reason for this? Is this intended? I couldn't find a bug report for this. -- http://mail.python.org/mailman/listinfo/python-list
Re: P3 weird sys.stdout.write()
    >>> import sys
    >>> n = sys.stdout.write('something')
    something
    >>> n
    9

Yes, that works as expected now, similar to 2.6. Thank you both, Diez and André! -Jerzy
Re: P3 weird sys.stdout.write()
On Mon, Aug 24, 2009 at 10:52 AM, Dave Angel da...@ieee.org wrote: "The write() function changed in 3.0, but not in the way you're describing. It now (usually) has a return value, the count of the number of characters written. [...] But because you're running from the interpreter, you're seeing the return value (9), which is suppressed if it's None, which it was in 2.x. This has nothing to do with how the language behaves in normal use." This makes it much clearer! You are right, output in a shell script is normal, without the return value. Thank you, Dave. -Jerzy
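Dave's explanation can be checked without a terminal at all. In Python 3, write() on a text stream returns the number of characters written, and only the interactive interpreter echoes that value:

```python
import io

out = io.StringIO()
n = out.write("something")
# The stream contains only the text; the "9" seen at the prompt is the
# return value being echoed by the REPL, not part of the output.
```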
[issue6518] Enable 'with' statement in ossaudiodev module
New submission from Jerzy Jalocha N jjalo...@gmail.com:

Currently, it is not possible to use the 'with' statement with the ossaudiodev module:

    >>> import ossaudiodev
    >>> with ossaudiodev.open('/dev/dsp', 'r') as device:
    ...     pass
    ...
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    AttributeError: 'ossaudiodev.oss_audio_device' object has no attribute '__exit__'

In order to provide an interface similar to standard Python files, and to encourage safe coding practices, the 'with' statement should be supported in the ossaudiodev module. Thanks.

-- components: Extension Modules messages: 90697 nosy: jjalocha severity: normal status: open title: Enable 'with' statement in ossaudiodev module type: feature request versions: Python 2.6, Python 2.7, Python 3.0, Python 3.1, Python 3.2
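Until __enter__/__exit__ are added, contextlib.closing() provides the same guarantee for any object with a close() method. A sketch with a stand-in object, since ossaudiodev itself is platform-specific and may not be importable here:

```python
from contextlib import closing

class FakeAudioDevice:
    # Stand-in for the object returned by ossaudiodev.open(); all that
    # closing() requires is a close() method.
    def __init__(self):
        self.closed = False

    def close(self):
        self.closed = True

device = FakeAudioDevice()
with closing(device):
    pass  # device.close() is called on exit, even if the block raises
```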
[issue6519] Reorder 'with' statement for files in Python Tutorial
New submission from Jerzy Jalocha N jjalo...@gmail.com:

Currently, the Python Tutorial recommends the use of the 'with' statement in Section 7.2.1, "Methods of File Objects": "It is good practice to use the with keyword when dealing with file objects. [etc.]" But the example and description are at the very bottom of this very large section, and are easily missed by new Python users. If this suggestion is to be taken seriously, I suggest putting this information in a more prominent place, somewhere at the top of the short section 7.2, "Reading and Writing Files".

-- assignee: georg.brandl components: Documentation messages: 90698 nosy: georg.brandl, jjalocha severity: normal status: open title: Reorder 'with' statement for files in Python Tutorial type: feature request versions: Python 2.6, Python 2.7, Python 3.0, Python 3.1, Python 3.2
[issue5092] weird memory usage in multiprocessing module
Jerzy jer...@genesilico.pl added the comment:

OK, I see. And if I don't want l to exist in f(), I have to:

    def f():
        pass
    def a():
        l = []
        f()
    a()

Jurek

Martin v. Löwis wrote: Martin v. Löwis mar...@v.loewis.de added the comment: "I still do not understand what is going on when python executes this code. I have a local variable l in my parent process." No, you don't. It's a global variable, not a local one. "When I create a child process, the program first makes a copy of memory. Then what?" It doesn't have anything to do with multiprocessing at all. For comparison, just run the Python script

    def f():
        del l
    l = []
    f()

It produces the same error, with no multiprocessing involved.
[issue5092] weird memory usage in multiprocessing module
Jerzy jer...@genesilico.pl added the comment:

And anyway, for me it's not OK that a single statement in a function, like 'del', affects how variables behave in the whole function. It is really illogical. Code comes in lines, one below another. The logical way is that a line of code affects the program ONLY when it is executed, and ONLY from the time it is executed. A statement that is never executed (python never reaches that place) should not affect the program in ANY way. You may think what you think, but for me it is a big bug in the heart of python.

Jerzy

Martin v. Löwis wrote: Martin v. Löwis mar...@v.loewis.de added the comment: "I still do not understand what is going on when python executes this code. I have a local variable l in my parent process." No, you don't. It's a global variable, not a local one. "When I create a child process, the program first makes a copy of memory. Then what?" It doesn't have anything to do with multiprocessing at all. For comparison, just run the Python script

    def f():
        del l
    l = []
    f()

It produces the same error, with no multiprocessing involved.
[issue5092] weird memory usage in multiprocessing module
Jerzy jer...@genesilico.pl added the comment:

I am not an expert, but for me it would be much better if trying to delete a global variable from inside a function raised an exception like "Cannot delete a global variable" (del makes the variable local anyway). In a function, a variable should be global up to the place where you define a local one. Example:

    a = 'Something'
    def f():
        print a             # prints the global variable a
        del a               # raise an exception: a is global, so it cannot be deleted
        a = 'anotherthing'  # make a local a
        print a             # print local a
        del a               # delete local a
        print a             # print global a
    f()

Also, if there are two variables (global and local) with the same name, there should be a way to access either of them, like 'print loc(a)' and 'print glob(a)'. This is just a suggestion. Another way of resolving the problem would be making it impossible to create a local variable when there is another one with the same name.

David W. Lambert wrote: David W. Lambert lamber...@corning.com added the comment: The alternative is unreasonable. I doubt you'd be happy with this:

    a = 'Something'
    def variable_both_global_and_local():  # -> Exception('No good!')
        del a               # delete a from global name space
        a = 'anotherthing'  # define a in local name space
[issue5092] weird memory usage in multiprocessing module
Jerzy jer...@genesilico.pl added the comment:

I still do not understand what is going on when python executes this code. I have a local variable l in my parent process. When I create a child process, the program first makes a copy of memory. Then what? I am sure that l still exists in the child process, because (1) it can be printed, and (2) it still has a lot of memory allocated for it. You say that l does not exist as a local variable in the child process. Is it global? How can I deallocate it in the child process?

Jerzy

Martin v. Löwis wrote: Martin v. Löwis mar...@v.loewis.de added the comment: As David says, this is not a bug. "del l" indicates that there is a local variable to be deleted, but when the del statement is executed, there is no local variable. The error message is confusing in this case: there actually is no later assignment to l (in the function at all). Typically, when you have an unbound local, it is because of a later assignment, such as

    def foo():
        a = l + 1
        l = 2

In this specific example, there is no later assignment, yet it is still an unbound local. So that you get the exception is not a bug. I was going to suggest that the error message could be better, but I can't think of any other error message that is better and still correct, hence closing it as won't fix. -- nosy: +loewis resolution: - wont fix status: open - closed
[issue5092] weird memory usage in multiprocessing module
New submission from Jerzy jer...@genesilico.pl:

Hi. I am using the multiprocessing module and I found a very weird thing. It seems that the result of one fragment of the code depends on a fragment of the code that comes after it, which should not happen. My script looks like this:

    import time
    import multiprocessing
    import sys

    def f():
        sys.stderr.write(str(len(l)) + "\n")
        print len(l)
        #del l
        while(True):
            time.sleep(1)

    l = []
    for i in range(2*1000*1000):
        l.append(str(i))

    process = multiprocessing.Process(target=f)
    process.start()
    while(True):
        time.sleep(1)

And its output is as expected: 200 200. But when I uncomment the 'del l' line I get:

    File "/home/jerzyo/programs/python2.6/Python-2.6.1/Lib/multiprocessing/process.py", line 231, in _bootstrap
      self.run()
    File "/home/jerzyo/programs/python2.6/Python-2.6.1/Lib/multiprocessing/process.py", line 88, in run
      self._target(*self._args, **self._kwargs)
    File "bin/momory.py", line 6, in f
      sys.stderr.write(str(len(l)) + "\n")
    UnboundLocalError: local variable 'l' referenced before assignment

How is that? The line that deletes l comes after the printing line. How does the python interpreter know that l will be deleted? This is very anomalous behaviour and should never happen.

By the way: is there any way to free some parts of memory in a child process? Suppose I want to create 100 child processes that do not use the l list. How can I avoid making 100 copies of l in the child processes?

This is my first post and I won't come here very often, so please answer also to my email (if it is not automatic). I am running python 2.6.1 on ubuntu 8.04 32bit.

jerzy

-- messages: 80731 nosy: Orlowski severity: normal status: open title: weird memory usage in multiprocessing module type: resource usage versions: Python 2.6
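Martin's point, as a self-contained demonstration (illustrative code, not from the issue): local-ness is decided per function at compile time, so a never-executed 'del l' still makes every use of 'l' in that function refer to a local name.

```python
l = []  # a global, as in the report

def f():
    n = len(l)  # raises UnboundLocalError: 'l' is local here...
    del l       # ...because this del, although never reached, makes
                # 'l' local to the whole function at compile time
    return n

caught = False
try:
    f()
except UnboundLocalError:
    caught = True
```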
Re: Can't find Python Library packages in Ubuntu (Debian)
Scott David Daniels wrote: "So, the first question is: How do I install the complete Python test framework under Ubuntu (Debian)?" "You could use BZR or SVN to get a copy of the full Lib/test tree. Given your long-disconnected running, I'd consider getting a full source set for release25-maint. Note that as of Python 2.6 / 3.0, if Python is not restricted to system directories for security reasons (if sys.flags.no_user_site is non-0), you may make a directory in your home directory to be searched. See http://docs.python.org/library/site.html for details on USER_SITE and USER_BASE. That will allow you to place a test subdirectory under site.USER_SITE and get to test.test_list (for example) on your python search path. Since it is a user-specific location, you can make a tester user with the directory in his own space and not worry about affecting the standard environment." Since I already compiled Python 2.6, and am using it basically only for testing purposes, this won't be necessary right now. But someday Python 2.6 will get released for Debian, and if it doesn't ship with the tests, I will come back to your instructions. - Thanks, Scott!

Paul Boddie wrote: "I will try to contact whoever is responsible for the packaging of Python in Ubuntu (or Debian), and ask them if they are willing to support the _complete_ Python release." "They may already do so, but I'd argue that they could document the packages which provide the complete release a bit better if these are not already mentioned in /usr/share/doc/python/README.Debian or some similar file." You are right, /usr/share/doc/python2.5/README.Debian should contain that information, but it doesn't. I already filed a bug in Launchpad, and will move upstream if necessary. "By looking at the documentation for the Debian stable source package, I did manage to find a list of generated packages: http://packages.debian.org/source/etch/python-defaults Perhaps one of these contains the test files."
I checked the most obvious packages there manually, without success, before posting my question here. I also used apt-file, which searches for a specific file in _all_ available packages, including not-installed ones... no success, again. "Although such files are arguably only of use to people building Python, and such people would therefore obtain the source package in order to perform the build process, there could be some benefit in having a package which provides these files separately. For example, one may wish to verify the behaviour of an installed version or to test aspects of another implementation." My personal interest in these tests is the following: I was working with some sequential dictionaries from ActiveState, but experienced problems with all the recipes. I started writing my own test cases, but found somewhere a reference to test_dict, test_list et al. Using these, I've been able to fix a few problems. (Even if some test cases are quite difficult to interpret right now.) In general, I think that the Python test suite is extremely valuable, and should be made more accessible to all users. Especially in combination with Python's new Abstract Base Classes, they could prove extremely useful in creating robust classes with standard interfaces. But I digress. Thank you, Paul!
Re: Can't find Python Library packages in Ubuntu (Debian)
On Fri, Nov 21, 2008 at 9:37 AM, David Cournapeau [EMAIL PROTECTED] wrote: "I think most people using python as a development tool use the version available in their distribution. Personally, I mostly use the stock python of Ubuntu. Although building python itself is not difficult on the typical linux box, keep in mind that you will almost certainly need to re-build all the packages you need. It is not easy, if at all possible, to use extensions from one python interpreter with another, in particular for packages which contain C code (things like pygtk, pyqt come to mind). If you want to use python 2.6, you don't have a choice, though, since it is not available on Ubuntu yet, as you said." David, I agree with you that compiling all the additional packages could become quite difficult, especially for an inexperienced guy like me. Thus, I will keep using the stock install for everyday use, and use my custom installation (standard python only, without additional packages) for the missing unit tests, and for testing upcoming 2.6/3.0 compatibility. I will try to contact whoever is responsible for the packaging of Python in Ubuntu (or Debian), and ask them if they are willing to support the _complete_ Python release. Thanks, Jerzy
Re: Can't find Python Library packages in Ubuntu (Debian)
"So, the first question is: How do I install the complete Python test framework under Ubuntu (Debian)?" "So, my second question: What (meta-?)package(s) do I have to install under Ubuntu (Debian) in order to get a full (as in the official release) Python installation?" "I don't have the slightest idea where the tests are - but you can of course always install the source package :)" I think I will take the chance and install Python 2.6, which isn't available as an Ubuntu package yet. :) "And my third question could be: Do all Python developers that work with Debian (or derivatives) have to compile Python?" "The one thing you will definitely need is the python-dev package. It will contain things such as headers and distutils that are needed to build and install 3rd-party packages." Thanks for your comments, Diez! Jerzy
Can't find Python Library packages in Ubuntu (Debian)
I'm new to this list (and to Python), so I'd like to start by saying hello to everyone. I am really enjoying this new language! I am trying to use the standard tests (like test_list.py or test_dict.py) from the standard library (Python 2.5), but they aren't available on a standard Ubuntu Hardy or Ibex installation. Searching in the official download, I found a rich test structure under 'Lib/test/', but in my installation this directory doesn't contain much. The dpkg-file script didn't find any packages for these specific files either. I looked manually in every 'python-' package that seemed reasonable, with no success, and Google didn't help this time. So, the first question is: How do I install the complete Python test framework under Ubuntu (Debian)?

Since I spend long periods in a remote area without network connection, I usually try to set up everything I need (and might eventually need) on my computer in advance, in order to avoid unpleasant surprises. Thus, I would really like to make sure I have a complete Python installation on my notebook. But I couldn't find out for sure what parts are missing in a standard Ubuntu installation, and what needs to be added manually. So, my second question: What (meta-?)package(s) do I have to install under Ubuntu (Debian) in order to get a full (as in the official release) Python installation? Thank you in advance! Jerzy
Re: Addressing the last element of a list
Peter Otten wrote: [EMAIL PROTECTED] wrote: "Just to satisfy my curiosity: Is there a way to do something like the reference solution I suggest above?" "No. You cannot overload assignment." I have the impression that the issue is not overloading assignments, which btw. *can* be overloaded, but the absence of *aliasing* (indiscriminate handling of pointers) in Python. Am I wrong?

Jerzy Karczmarczuk
Re: Addressing the last element of a list
Peter Otten wrote, citing me: "I have the impression that the issue is not overloading assignments, which btw. *can* be overloaded, but the absence of *aliasing* (indiscriminate handling of pointers) in Python. Am I wrong?" "I think so. a = b will always make a a reference to (the same object as) b. What can be overloaded is attribute assignment: x.a = b can do anything from creating an attribute that references b to wiping your hard disk. I don't understand what you mean by absence of aliasing, but conceptually every python variable is a - well-behaved - pointer." Would you please concentrate on - what I underlined - the sense of C aliasing, where you can make a pointer point to anything, say, the 176th byte of a function's code? *This* is impossible in Python. What you wrote is OK, but I still don't know where I have been wrong, unless you over-interpret my words. Sure, I didn't want to claim that the assignment a = anything can be plainly overloaded. But getitem, setitem, getattr, setattr - yes. And they (set-) are also assignments.

Jerzy Karczmarczuk
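To make the distinction in this exchange concrete: plain name assignment (a = b) always just rebinds the name and cannot be hooked, while attribute assignment goes through a method a class may overload. A toy example (not from the thread):

```python
class Logged:
    # Records every attribute assignment via __setattr__; plain name
    # rebinding ('alias = obj') can never be intercepted this way.
    def __init__(self):
        object.__setattr__(self, "log", [])

    def __setattr__(self, name, value):
        self.log.append(name)
        object.__setattr__(self, name, value)

obj = Logged()
obj.a = 1    # goes through __setattr__
alias = obj  # plain assignment: no hook runs, both names reference one object
```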
Re: OT - Re: Microsoft Hatred FAQ
Steven D'Aprano wrote: "Jaywalking is a crime. So is littering. So is merely belonging to certain organisations, such as the German Nazi party or any number of allegedly terrorist groups. Walking around naked in public is a crime, and in many places in the world, including the USA, you then become a registered sex offender for the rest of your life. (So much for doing time and wiping the slate clean.) Possession of many substances is a crime in the USA, and not just drugs of addiction. There is no Fraud, Force or Threat involved in growing cannabis in your backyard, or selling pornography to teenagers, or driving without a licence. Possession of banned books is a crime in many countries," [enough ...] Now tell me: is polluting a newsgroup with off-topic postings a crime, and if yes, then what?

Jerzy Karczmarczuk
Re: Microsoft Hatred FAQ
Roedy Green wrote: On Wed, 19 Oct 2005 07:10:55 GMT, Alan Connor [EMAIL PROTECTED] wrote or quoted: "To all the shit-for-brains trolls that are polluting these groups with this crap, which I haven't even bothered to read:" "A single thread does not pollute a group. It is trivially easy to ignore a thread. If your newsreader does not support that feature, try a different newsreader." The pollution *is* there, despite the possibility of individual screening. The subject and the contents violate some basic newsgroup principles, such as topicality. One to ten irrelevant postings do no harm; more than a hundred become annoying. Cross-posting to 5 groups is bad. Please go away. Claiming that this is an interesting, great thread is utterly silly in this context. Shall the Python newsgroup discuss the trial of Saddam Hussein as well?

Jerzy Karczmarczuk
Technical question on complexity.
Does anybody know where I can find a concrete and guaranteed answer to the following extremely basic and simple question: What is the complexity of appending an element at the end of a list? Of concatenating with another? The point is that I don't know what the allocation policy is... If Python lists are standard pointer-chained small chunks, then obviously linear. But perhaps there is some optimisation; after all, a tuple uses a contiguous chunk, so it *might* be possible that a list is essentially equivalent to an array, with its length stored within, and adding something may be done in constant time (unless the whole stuff is copied, which again makes the complexity related to the size of the existing structure...). It is probably possible to retrieve this information from the sources, but I am trying an easier way first. Thank you.

Jerzy Karczmarczuk
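For the record: in CPython a list is a contiguous array of object pointers that over-allocates, so append() is amortized O(1) and concatenation is O(len(a) + len(b)). The over-allocation can be observed indirectly, since the size in bytes grows in occasional jumps rather than on every append (the exact growth pattern is an implementation detail):

```python
import sys

sizes = []
l = []
for _ in range(64):
    l.append(None)
    sizes.append(sys.getsizeof(l))

# Far fewer distinct sizes than appends: most appends reuse spare
# capacity allocated by an earlier resize.
distinct = sorted(set(sizes))
```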
What is executed when in a generator
I thought that the following sequence

    gl = 0

    def gen(x):
        global gl
        gl = x
        yield x

    s = gen(1)

suspends the generator just before the yield, so after the assignment of s, gl becomes 1. Well, no: it is still zero. If I put a print statement before the yield, that doesn't get executed either. *EVERYTHING* from the beginning until the yield gets executed only upon s.next(). Could you tell me please where I can read something in-depth about the semantics of generators? I feel a bit lost. Thank you.

Jerzy Karczmarczuk
Re: What is executed when in a generator
Thank you all for some precisions about yield and generators. But I would like to underline that I am *not* a complete newbie, and I understand this stuff in general. I read the yield.html material quite some time ago. But thank you for the PEP, I should have scanned it more carefully. My problem was the *context of suspension*. You see, when some text speaks about the preservation of local state, etc., *a priori* the following sequence

    def gen():
        a
        b
        c
        yield x
        d
        e
        f
        yield y

*COULD BE* understood as follows: upon the call s = gen(), a, b and c *get* executed, change the global state, install some locals, and then the system makes the snapshot and returns the generator with its context. The call to next returns x to the caller, but the generator works for some time and executes d, e and f before the next suspension. Now I know that this is not true. The CO- flag of the gen() procedure inhibits the execution of the code altogether, and a, b, c are executed upon the first s.next(); d, e and f upon the second next(). Etc. OK, that's it, I accept such behaviour, although the alternative could be interesting as well.

Steve Holden wrote: "The first thing you need to realise is that s is a generator, and it needs to be used in an iterative context to see the (sequence of) yielded results." This is not exact; this is a *particular* consumer view of generators. I don't want them for iterators, I use them as emulators of *lazy programming*; I implement some co-recursive algorithms with them. So I use next() when I wish, and never 'for'. Thank you once more.

Jerzy Karczmarczuk
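The suspension point can be demonstrated directly: calling the generator function executes none of the body, and each next() runs up to (and including) the following yield.

```python
trace = []

def gen():
    trace.append("a")  # runs only on the first next(), not at call time
    yield 1
    trace.append("d")  # runs on the second next()
    yield 2

s = gen()
before_next = list(trace)  # still empty: no body code has run yet
first = next(s)            # executes "a", then suspends at the first yield
```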
Twist and perversion. Was: Software bugs aren't inevitable
Terry Hancock wrote /a few statements which seem to be there - apparently - just for the sake of quarreling/:

    The FP camp (apparently) wants to advance the claim that FP will
    *always* reduce bugs. I find that very hard to believe.

Good. Now go and talk to some FP people before accusing them of being *so* sectarian. Your supposition that they claim that FP is always better is unjustified. Were I more aggressive, I would say: 'sheer nonsense'. I would not say - as you did - 'ludicrous sophistry', because it is not ludicrous. Quite sad, in fact... Your further posting, about twists and perversions of functional programming, makes me invite you to learn a bit more of FP. It won't harm you, and it might raise in your spirit the question of why in thousands of educational establishments this programming style is considered good for beginners. I might agree that thousands of teachers are more stupid than you, but that they are all perverts, I believe not. Anyway. In a further posting you comment on the psychological aspect of language choice in this way:

    I said this, because an earlier poster had *dismissed* mere
    psychological reasons as unimportant, claiming that functional
    programming was superior on technical grounds.

1. I never said that FP was technically superior.
2. I never dismissed psychological reasons as unimportant.

Read it again, please. Please stop putting fake arguments into other people's mouths just to have something to argue about, OK? FP appeals to many. Well, *why* do people who jump into Python from other languages very often like functional constructs, and dislike the fact that destructive methods return nothing?...

Jerzy Karczmarczuk
--
http://mail.python.org/mailman/listinfo/python-list
Re: Software bugs aren't inevitable
Terry Reedy cites Mike Meyer, who fights with:

    While that's true, one of the reasons Guido has historically rejected
    this optimization is because there are plenty of recursive algorithms
    not amenable to tail-call optimization. Since the BDFL is *not* known
    for doing even mildly silly things when it comes to Python's design
    and implementation, I suspect there's more to the story than that.

    Yes. The reason Guido rejected tail-call optimization the last time it
    was suggested is because it is semantically incorrect for Python.
    Python's name bindings are dynamic, not static, and the optimization
    (and the discussion here) assumes static bindings. In Python, runtime
    recursiveness (defined as a function *object* calling itself directly
    or indirectly) is dynamically determined. You cannot tell whether a
    function object will act recursively or not just by looking at its
    code body.

Hmm. The question is to optimize the TAIL CALLS, not just the recursion. Mind you, Scheme also has dynamic name binding, yet it does this optimization. This is for me more a question of policy than of semantics [with the *true* meaning of the word semantics]. The situation is a bit different in, say, Prolog, where the tail calls cannot - typically - be optimized, for *serious* reasons: the indeterminism. Prolog has to keep/stack the choice points in recursive generators. In Python, not so. Hm. Now I begin to scratch my head. I will have to translate some Prolog algorithms to Python generators...

Jerzy Karczmarczuk
--
http://mail.python.org/mailman/listinfo/python-list
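Since CPython does not eliminate tail calls, a tail call still consumes a stack frame. One standard workaround, sketched here purely as an illustration (the function names are hypothetical, not from the thread), is a trampoline that turns tail calls into a loop:

```python
def fact_step(n, acc=1):
    # Instead of performing the tail call, return a thunk describing it;
    # the trampoline below runs thunks iteratively, so the stack stays flat.
    if n <= 1:
        return acc
    return lambda: fact_step(n - 1, acc * n)

def trampoline(step):
    # Repeatedly invoke thunks until a non-callable result appears.
    result = step
    while callable(result):
        result = result()
    return result

# Succeeds far beyond the default recursion limit of ~1000 frames:
big = trampoline(fact_step(5000))
```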
Re: Software bugs aren't inevitable
Steven D'Aprano is still unhappy with the linear-complexity recursive Fibonacci I proposed as an alternative to the cascading recursion which for some people is standard or obvious or some similar attribution which is not valid anymore.

    RuntimeError: maximum recursion depth exceeded

    (e.g. calling Jerzy Karczmarczuk's efficiently recursive function
    with n=1000, while my iterative version works for at least values
    of n an order of magnitude larger.)

    Yes, the maximum recursion depth in Python is an artificial limit.
    But that artificial limit is built into Python specifically to
    protect you from running into a real recursion limit based on the
    hardware and architecture of your PC, with painful consequences.

Oh, I LOVE technical solutions like that: everybody knows that you should not exceed some speed in your car, so our new technology is such that if you reach 160 km per hour, your engine breaks. Surely it is an artificial limit, but it is there to protect you from worse problems with painful consequences.

I do not feel guilty for proposing a function which fails for n > 1000. This solution, in Haskell, works for ANY n and doesn't fill the stack at all (provided it is strictified, i.e. the laziness does not produce some thunk accumulation):

    fib n = fibo n 0 1 where
        fibo 0 a _ = a
        fibo 1 _ b = b
        fibo n a b = fibo (n-1) b (a+b)

But tail recursion is NOT iteration in Python. So, this version:

    def fib1(n, a=0, b=1):
        if n == 0:
            return a
        elif n == 1:
            return b
        return fib1(n-1, b, a+b)

which in a 'decent' language (no offense meant, just thinking about what will be considered scandalous in 40 years...) would run for any n, in Python breaks for n > 1000 again. [[Terry Reedy proposed another variant; mine is a bit shorter, perhaps a bit more elegant.]] I am sorry, but Python, like every other programming language, is full of ad hoc decisions, not always universally appreciated.
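Since Python performs no tail-call elimination, the mechanical rescue for an accumulator-passing function like fib1 is to rewrite it as a loop: the arguments of the would-be tail call become the new values of the loop variables. A sketch preserving the structure of fib1 (the name fib_iter is mine, not from the thread):

```python
def fib_iter(n, a=0, b=1):
    # Loop translation of the tail-recursive fib1: each "recursive call"
    # fib1(n-1, b, a+b) becomes one simultaneous reassignment.
    while n > 0:
        n, a, b = n - 1, b, a + b
    return a

# fib_iter(1000) succeeds where the recursive fib1 exceeds
# the default recursion limit.
```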
Recursion can be and IS often optimised, and tail recursion in functional languages is entirely removed (no stack filling). Stacks can be and sometimes are allocated on the heap, so the comment of somebody who discussed the stack/heap dichotomy, pointing out the difference in size, may be altogether irrelevant. Again, we, the users, are not responsible for the memory allocation policy in Python... So, this paragraph:

    Recursion is frequently extravagant in its use of resources: if
    nothing else, it takes resources to call a function, and recursion
    means you call the same function over and over again. There is a
    reason why functional programming never really took off.

is for me a biased view of the problem, justified only by the fact that at the beginning of functional programming (the sixties) nobody cared about efficiency. Now, such languages as Clean, or good implementations of Scheme, are very well optimized; OCaml as well, and Haskell is improving every 3 months. Functional languages did take off, but since pure functionalism requires particular, quite sophisticated techniques in GOOD algorithm design and implementation, they are less adapted to the brutal world than C or Python. The reasons for their relatively small popularity are much less technical than psychological, and this is KNOWN. (Terry Hancock formulated this plainly: he prefers dumb ways because he wants to solve problems, and he doesn't like to perform gymnastics with his brain. We have to accept those attitudes. But I believe that this is the effect of teaching standards; people don't learn high-level algorithm design when they are young enough...)

    If you google for Fibonacci sequences, you will find dozens, possibly
    hundreds, of implementations virtually identical to the one I gave.
    Also significant numbers of Java apps that run slow for values of n
    larger than 30 or 40 -- a good sign that they are using the naive
    algorithm.
    It is a rare under-graduate or secondary school textbook that
    suggests that the naive algorithm is anything but a poor idea.

If you Google for anything, you will find hundreds of stupidities, since nowadays the proliferation of amateurish tutorials etc. on the Web is simply terrible... I WILL NOT assume the responsibility for all the bad solutions. On the other hand, I suspect that there will be people who will not follow this thread, who will just remember your first posting on the subject, and they will remain convinced that recursion /per se/ is lousy, and that your cascading algorithm is *THE* standard recursive solution. Yes, YOU are in a state of sin! [Optional smiley here]. But I have used the cascading version myself. It was done on purpose, in order to get to the following solution. [[I preferred to use a global dict, but other ways of doing it are also possible.]]

    fibdic = {0: 0, 1: 1}

    def fibd(n):
        if not fibdic.has_key(n):
            r = fibd(n-1) + fibd(n-2)
            fibdic[n] = r
        return fibdic[n]

And here the recursion limit won't get you!! But the memoization
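In current Python the same dictionary memoization can be written with functools.lru_cache (a modern equivalent, not part of the original post; the has_key call above is Python 2). The recursion-limit caveat from the thread still applies for large n unless the cache is warmed bottom-up:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(n):
    # Each value is computed once and cached, as with the fibdic version.
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

# Warming the cache incrementally keeps every call shallow,
# sidestepping the recursion limit for large n:
for i in range(2000):
    fib(i)
```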
Re: Software bugs aren't inevitable
Steven D'Aprano wrote:

    On Wed, 14 Sep 2005 12:23:00 -0700, Paul Rubin wrote:

        Every serious FP language implementation optimizes tail calls and
        thus using recursion instead of iteration doesn't cost any stack
        space and it probably generates the exact same machine code.

    Are you saying that the recursion done by serious languages is a
    fake? That it is actually implemented behind the scenes by iteration?
    It seems to me that if recursion and iteration produce the exact same
    machine code, the argument for preferring recursion over iteration is
    gutted.

Well, in such a way we can discuss for eternity... Please distinguish between the high-level algorithm formulation and what the computer does under the carpet. Recursion - as put forward by the functionalists - is a conceptual way of thinking: passing new parameters without explicitly updating the variables to which the old ones are assigned, without loop control variables or looping constructs. You don't reassign your variables, you avoid the side-effects. The programs are often clearer and safer, less error-prone. Now, the ugly world of basic computing at the assembly level is imperative, not functional (within standard architectures). The registers ARE reassigned. The stack must be explicitly handled, etc. You *SEE* explicitly the iterative structures. The point of the functionalists is that one should avoid that, and leave those nasty things to the compiler. That's all. Your final conclusion is for me rather unacceptable. It is not the machine code which matters, but the human effort [provided you have spent sufficient time to become fluent in *good* recursive programming of complex tasks].

Jerzy Karczmarczuk
--
http://mail.python.org/mailman/listinfo/python-list
Re: Software bugs aren't inevitable
Steven D'Aprano recommends iteration over recursion:

    For instance, try these two simple functions for the nth number in
    the Fibonacci sequence:

        def fibr(n):
            "Recursive version of Fibonacci sequence."
            if n == 0: return 0
            elif n == 1: return 1
            else: return fibr(n-1) + fibr(n-2)

        def fibi(n):
            "Simple iterative version of Fibonacci sequence."
            if n == 0: return 0
            etc.

    Try timing how long it takes to generate the 30th Fibonacci number
    (832040) using both of those algorithms. Now try the 50th. (Warning:
    the amount of work done by the recursive version increases at the
    same rate as the Fibonacci sequence itself increases. That's not
    quite exponentially, but it is fast enough to be very painful.)

First of all, the recursive version of Fibonacci IS EXPONENTIAL in complexity; don't say such a not-quite-truth as not quite. But, what is more important: if you don't know much about the way functional programming is used nowadays, please refrain from giving nonsensical examples, since NOBODY serious programs anything in the style of your recursive version. Such anti-advertising of recursion says less about recursion than about yourself. Here you are: a recursive version linear in n; it returns the two last Fibonacci numbers of the sequence:

    def fibo(n):
        if n < 2:
            return (n-1, n)
        else:
            (a, b) = fibo(n-1)
            return (b, a+b)

The exponential-complexity, cascading version is a nice test case for how to use memoization, though, so it is not entirely senseless to learn it.

Jerzy Karczmarczuk
--
http://mail.python.org/mailman/listinfo/python-list
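The exponential-versus-linear contrast can be made concrete by counting calls; this instrumented sketch (hypothetical, not from the thread) wraps both versions in a small counting decorator:

```python
def counted(fn):
    # Decorator counting invocations, to expose the growth in call count.
    def wrapper(*args):
        wrapper.calls += 1
        return fn(*args)
    wrapper.calls = 0
    return wrapper

@counted
def fibr(n):
    # Naive cascading recursion: exponential number of calls.
    return n if n < 2 else fibr(n - 1) + fibr(n - 2)

@counted
def fibo(n):
    # Linear recursion: one call per decrement of n, returning the
    # (n-1)-th and n-th Fibonacci numbers as a pair.
    if n < 2:
        return (n - 1, n)
    a, b = fibo(n - 1)
    return (b, a + b)

naive = fibr(20)    # over twenty thousand calls for n = 20
linear = fibo(20)   # exactly 20 calls
```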
Inconsistent reaction to extend
Gurus, before I am tempted to report this as a bug, perhaps you might convince me that it should be so. If I type

    l = range(4)
    l.extend([1, 2])
    l

it gives [0, 1, 2, 3, 1, 2], what else... On the other hand, try

    p = range(4).extend([1, 2])

Then p HAS NO VALUE (it is None). With append the behaviour is similar. I didn't try other methods, but I suspect that it won't improve. WHY? It seems that there was already some discussion about consistency, and somebody produced the example

    h = {}.update(l)

which didn't work, but I wasn't subscribed to this newsgroup then, so I couldn't follow the affair.

Jerzy Karczmarczuk
--
http://mail.python.org/mailman/listinfo/python-list
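The behaviour is deliberate: mutating methods such as list.extend, list.append, list.sort, and dict.update modify their object in place and return None, precisely so that in-place mutation and expression-style construction are not confused. A sketch of the distinction (Python 3 spelling, where range(4) must be wrapped in list()):

```python
l = list(range(4))
result = l.extend([1, 2])      # mutates l in place...
assert result is None          # ...and returns None by design
assert l == [0, 1, 2, 3, 1, 2]

# Expression-friendly alternatives build and return a NEW object:
p = list(range(4)) + [1, 2]
assert p == [0, 1, 2, 3, 1, 2]

s = sorted([3, 1, 2])          # sorted() returns a new list;
assert s == [1, 2, 3]          # list.sort() mutates and returns None
```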